Post by k on Apr 24, 2016 1:00:45 GMT
Summary: The author presents four main problems that affect deliberating groups:
1. Groups Amplify the Errors of Individual Members
- Individuals use the availability heuristic to answer questions of probability, asking whether examples readily come to mind. This is influenced by the familiarity and salience of examples.
- Individuals use the representativeness heuristic: if A looks like B, we assume A caused B, or vice versa.
- Individuals are susceptible to conjunction errors: we believe that A and B together are more likely than either A or B alone.
- When a bias is widely shared, group deliberation will increase its effect
- Informational pressures and social influence have a stronger impact at the group level than for individual decisions
- But, groups are slightly less susceptible to egocentric bias (believing people think and act like we do) and hindsight bias (believing that, now that we have seen the result, we could have correctly predicted the outcome beforehand)
2. Group Members Do Not Elicit Information from Other Members
- Hidden Profiles: accurate understandings that the group could have obtained, but did not.
- Common Knowledge Effect: information known by ALL group members has a greater influence than information known by only a few members.
- Discussion among group members encourages recall of information known by all group members, and does not help with recall of unshared information.
- Cognitively central group members, whose knowledge is shared with other group members, tend to participate more in deliberation, while members who are cognitively peripheral, whose information is uniquely held, tend to participate less.
- Cognitively central members often have higher credibility within the group. Group status affects a group member’s confidence in sharing information, particularly when that information isn’t widely known to the group.
3. Cascade Effects
- Informational Cascades: individuals may become indifferent to their own judgments, attending to the opinions of others and allowing others' judgments to be weighed as heavily as, or more heavily than, their own
- Reputational Cascades: individuals withhold their opinions in order to maintain their social standing and the good opinion of others
4. Group Polarization
- Extreme Opinions: the outcomes of group deliberative processes are more extreme than their starting point before deliberation
- Authoritative Opinions: members of groups shift their opinion to conform with those of authorities
- Shared Identity: members shift their opinions to conform with the position they see as typical of the group
The author then briefly discusses how the internet might affect group deliberation, and how group effects might differ when talking about moral judgements rather than factual questions.
Thought Questions:
- Are the author's assumptions correct? Are the decision-making heuristics characterized by Sunstein actually errors (availability, familiarity/salience, framing effects, representativeness, conjunction)? Are these errors actually "bad"? Can you think of an example where a bias might be beneficial, from an evolutionary standpoint?
- If the author is correct, what can we do about it? How might the effects of these biases be mitigated? (For example, would an interdisciplinary team without significant shared knowledge not suffer from Problem 2?) Can individuals be better supported in guarding against the problems that are associated with these heuristics?
- What are the implications for tech, particularly for crowd cognition tech? Do you know of any interface designs that address or reduce the effects of the author's problems for individuals and/or groups? Do you agree with the author's reasoning on the impact of personalized search (or services) on bias? (p. 97) Should HCI be responsible for amplifying/mitigating bias?
- Does his characterization align with your own experiences? In your experience, do group discussions that take place over the internet have more or less bias than discussions had in person or through print media? Why might that be? In your experience, do group deliberations on moral judgements suffer more or less from bias than deliberations on factual questions? Were any of the problems the author identified a factor in that deliberation?
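The conjunction error in the summary above can be illustrated with a few lines of arithmetic. The sketch below uses invented probabilities (the numbers are not from the reading) to show why a conjunction of two events can never be more probable than either event alone:

```python
# Illustration of the conjunction error: for any two events A and B,
# P(A and B) can never exceed P(A) or P(B) alone, yet people often
# judge the conjunction as more likely (the classic "Linda problem").
# The probabilities below are invented purely for illustration.

p_bank_teller = 0.05  # P(A): Linda is a bank teller
p_feminist = 0.60     # P(B): Linda is active in the feminist movement

# Even if the events were independent (an assumption made here only to
# compute a concrete number), the conjunction is smaller than either part:
p_both = p_bank_teller * p_feminist

assert p_both <= p_bank_teller
assert p_both <= p_feminist
print(f"P(A and B) = {p_both:.2f}")  # prints: P(A and B) = 0.03
```

The inequality holds regardless of the actual numbers, which is what makes the intuitive judgment an error rather than a difference of opinion.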
Post by julian on Apr 24, 2016 1:37:08 GMT
This reading contrasts quite nicely with the wisdom of crowds reading. It is actually complementary in that it shows the perils of trusting a crowd that is not acting independently: a crowd that talks and discusses ends up amplifying its own biases. Something I found missing from this reading is a good example of a group of people who discuss without necessarily amplifying their own bias. I wonder whether this is absent simply because it does not happen, because it happens only under very specific conditions, or just because of the focus of the reading.
Diversity in opinion is necessary: “The most general finding is that deliberation can help when biases are held by one of a few group members, but when a bias is shared across group members it will increase its effect”
How can we help a group of people recognize a sinking ship? “Groups are more likely than individuals to escalate their commitment to a course of action that is failing- and all the more so if members identify strongly with the groups of which they are part”
Random thoughts:
The internet, I initially thought, would then be a perfect place for individualism and for getting the wisdom of the crowd, but not in all contexts. For example, some online communities are worse (more radical opinions and behaviors) than non-virtual communities.
About the common knowledge effect: What does this say about leadership? Is a popular leader then one who is simply listening to common knowledge and thereby telling people what they want to hear? How does someone with good but different opinions stand a chance?
In ARM, we had a lecture on leadership where we learned that leaders are those who speak the most and speak first in a group. Could it simply be that these people, by speaking first, are priming others, hence creating and owning common knowledge and in this way becoming central to the group?
Based on everything we read: groups of people should not be allowed to make decisions unless they are diverse. But is that really the case? Companies have boards of directors; countries have a Senate and Congress with a handful of parties representing all of the people. How does all this look now?
Post by xuwang on Apr 24, 2016 10:51:37 GMT
I recently read a survey paper on hidden-profile studies, and I think the results presented in these papers are in general consistent. But there are many factors that could affect group information processing and decision making that are not mentioned in this paper, and some studies have even presented opposite effects, so we can't say that any particular factor has a sure effect on group processes. For example, group size can affect how information is processed: the larger the group, the more likely it is that the group's decision making will end up biased. Access to information during the discussion process can increase the exchange of unshared information and reverse biases. Whether the group thinks the problem is solvable also makes a difference: if group members think there's an optimal solution to the problem, they'll be more willing to disclose their unique information.
For problem 2, jigsaw grouping is one type of hidden-profile group, in which each group member learns a different part of the learning materials and the members then collaborate in groups. Because group members know that each member has some expertise, and that they'll need each member's knowledge to solve the problem, jigsaw grouping increases the interdependence between group members, and it's considered to be among the best practices in collaborative learning. I think that to address the problem that group members don't elicit information from other members, we could probably label each member's expertise, or let each group member label themselves, so that people will know what to expect in their discussion. I also remember reading a study saying that presenting the knowledge map (structure) of the group to each member, and placing each member on that map, is helpful for information sharing in the group process. I think providing feedback/visualization after each discussion could help further discussion as well: maybe let each member be aware of how much he/she has contributed, how much unique information he/she has shared, etc. I think there are already group awareness tools doing this in HCI.
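The kind of group-awareness feedback described above could be sketched roughly as follows. This is a hypothetical illustration, not a description of any existing tool; the function name, data structures, and discussion data are all invented. It tracks which facts each member voiced and how many of those facts were unique to that member:

```python
# Hypothetical sketch of a group-awareness tool: given a log of who said
# what during a discussion, report each member's total contributions and
# how many of their facts were *unique* (voiced by nobody else).
from collections import defaultdict

def contribution_report(utterances):
    """utterances: list of (member, fact) pairs, in discussion order."""
    facts_by_member = defaultdict(set)
    for member, fact in utterances:
        facts_by_member[member].add(fact)

    report = {}
    for member, facts in facts_by_member.items():
        # Collect everything voiced by the *other* members.
        others = set()
        for other, their_facts in facts_by_member.items():
            if other != member:
                others |= their_facts
        unique = facts - others  # facts nobody else mentioned
        report[member] = {"total": len(facts), "unique": len(unique)}
    return report

discussion = [
    ("ana", "budget is $10k"), ("ben", "budget is $10k"),
    ("ana", "vendor A missed deadlines"), ("cal", "vendor B is cheaper"),
]
print(contribution_report(discussion))
```

A real tool would need to identify "facts" automatically rather than taking them as pre-segmented strings, and note the limitation: this only counts information actually voiced, whereas the hidden-profile problem concerns information members hold but never share.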
Infotopia
Post by mrivera on Apr 24, 2016 19:18:11 GMT
Based on everything we read: groups of people should not be allowed to make decisions unless they are diverse. But is that really the case? Companies have boards of directors; countries have a Senate and Congress with a handful of parties representing all of the people. How does all this look now?
I think the question about governments (e.g., the Senate and Congress) is a fair one to ask, but it seems a little malformed. Political parties usually organize around an overall mindset, but the members of a party still act with free will and hold diverse opinions. And in theory, those members should be acting to represent the views of their constituents, though in the American political climate this is not always the case. All that said, I'd argue that the fact that parties (or rather party members) represent different views means those views are diverse by nature.
Post by JoselynMcD on Apr 25, 2016 17:00:56 GMT
In my social psychology class this semester, we've dedicated a portion of the curriculum to understanding how 'group think' and group conformity works, and what exacerbates it (limited time for personal reflection, discomfort from deviation, requirements that dissent must be vocalized, etc). This paper is directly in line with many of the theories we've been discussing. I find the implications of Group Think to be quite unsettling; obviously, when considering the ramifications of group mentality and the associated fallacies, certain circumstances like jury deliberation immediately and appropriately jump to mind, but there are many more circumstances in everyday life that are affected by this phenomenon.
I worked on a sexual assault prevention project at my previous position, and we were repeatedly confronted with the reality that group think in fraternities would lead to behaviors (sexual assault, racism, and hazing) that individual members did not condone. Since then I've been concerned with systems that could be developed to mitigate these effects. Personally, I think a mechanism that interrupts group deliberations and requires members to take a moment to step back and reflect on how they really feel about the way the group is progressing would be a simple solution. An argument for the implementation of a system like this would be that research shows that groups make much riskier decisions than individual group members would make on their own, so potentially industry would be interested in the risk-modification applications.
Post by jseering on Apr 26, 2016 0:17:01 GMT
The internet, I initially thought, would be a perfect place for individualism and the wisdom of the crowd, but not in all contexts. For example, some online communities are worse (more radical opinions and behaviors) than non-virtual communities.
I wonder about this. There's a lot that's been written about echo chambers, etc., but I wonder if the problem with the internet as a whole isn't the anonymity but rather the fact that so many different types of people are mashed together. Diversity is important, but it isn't easy. I read a piece by Geoffrey Stone in the fall ("Privacy, the First Amendment, and the Internet") that had a quote that's stuck with me: "As a general proposition, as speech reaches a larger audience, its cumulative value will increase in the same proportion as its cumulative harm." I'm not sure I'm convinced that this is true, but I don't really know in which direction it's wrong.
Post by mkery on Apr 26, 2016 0:54:36 GMT
Group think seems critical on the internet, where the loudest voices can turn an online community into a hostile, prejudiced, or simply biased place. This reading talks about the "Daily Me", the idea that people will only surround themselves with communities they find comfortable and conforming to their individual values. Yet when we consider larger services, like YouTube or Twitter or online games, it seems like a scaled-up version of what JoselynMcD talks about with the fraternity effect: to use services you find otherwise valuable, you may easily be buying into a level of toxicity, bias, and terrible behavior that you would not otherwise condone. I'm curious how this relates to how internet groups self-regulate beliefs in digital spaces and communities. I hear in the news lots of stories about gamers or anonymous people in the tech industry lashing out with hostile comments and threats. How do online communities self-regulate in a positive way? Are there mechanisms other than direct censorship (removing/reporting a post)?
Post by judy on Apr 26, 2016 1:48:16 GMT
Two thoughts:
1. "Groups do not elicit information from each other:" I find that this is true, but also that this is a problem that can be remedied (not that I know how to remedy it in large, unstructured groups online, but...). Asking good questions is hard. Asking the right questions of each other is even harder. And harder still is listening to other people's answers. However, in the collaborations where I've felt most at ease and most productive, we took the time to find out where each of us was coming from and the specific skills and perspective (if not exactly on the detailed level of information) that each of us brought to the group. In that way, each person was an "expert" on their own set of experiences, and each of those sets of experiences was important to the project. But to go through that process, we had a set method and we had a moderator putting the method into place; it was a structured collaboration. Collaborations are hard. Every time a hashtag goes viral (a seemingly spontaneous collaboration among strangers), I wonder how many times the hashtag or another draft of it failed, and how many people were coordinating behind the scenes (in my interviews with activists on Twitter this is often the case).
2. On the "echo chamber:" As bad as the fraternity effect can be, groups that are otherwise marginalized or dispersed are forming identities and cultivating community on social media. Black Twitter is a salient example, but communities of queer and trans people of color, Native Americans, people with disabilities, etc. are also making space for themselves online when it might be challenging to create those spaces IRL. Someone I interviewed who is disabled talked about how having a place where he could joke about his pee bag spilling literally saved his life. And while we're here: are classrooms in high-profile research departments also echo chambers? Faith-based communities? Rec league sports? If there is something particularly bad about online echo chambers (and I agree that there probably is, I just don't quite understand it... the good/bad to everything and all that), how is it different from other communities of like-minded people who influence our decision-making?
Post by sciutoalex on Apr 26, 2016 2:32:11 GMT
A thought question I've had after reading these comments, especially Judy's, is: what is the difference between the high-profile research department and the reddit subthread in terms of membership? One may say it's anonymity, which means that no one is ever responsible for their actions. Joseph questions that and wonders if it's diversity rearing its ugly head. Some may play devil's advocate and argue that there is no difference!
I think there is a difference, and it's in the relationship between in- and out-groups and how people move between them. In the real world (IRL, as the kids call it) groups are sticky and pretty permanent. One must jump through hoops to be at a university or dedicate time to be accepted at a church. Because of permanence, goals and mores can align and community can form. In real life we also have temporary groups, like people in a checkout line or at a bus stop. Those are impermanent, but there is little risk of personal interaction.
On the internet, though, no one has created methods of membership as good as those of churches and colleges. I can pass between in- and out-groups with the sign-up of a forum or the subscribing to a list. This makes it difficult to establish group values and permanence. I think Judy's reference to minority communities online is a good example of this, because membership in these communities is actually based on real life: I cannot just sign in to a transgender identity.
Post by fannie on Apr 26, 2016 3:11:46 GMT
I'm working on a project dealing with the #ILookLikeAnEngineer hashtag on Twitter that relates to what Judy's talking about with communities forming around an identity (in this case, underrepresented groups in engineering). I agree with the points about having a space to talk with others like you, but when you have something like a hashtag as a social justice movement, the out-group might not be as aware of the in-group's feelings and concerns, or might not properly understand them. From the interviews I did for my project, I'm seeing lots of positivity about the hashtag as a whole, but at the same time it wasn't really clear that anyone outside the people who already cared about underrepresented groups in engineering actually knew about it. So I could still see a potential drawback to an online echo chamber there, but it's also a very specific case because the out-group is targeted already.
Post by Anna on Apr 26, 2016 3:33:20 GMT
On the internet, on the one hand: as mkery brought up with the Daily Me reference, people can often choose or curate their online communities, which can make such communities become insular. I see this a lot on FB, and it bothers me. As mkery points out, the current alternative (the YouTube trolling norm) doesn't seem desirable either. But thinking about the "how biases could be beneficial" question, in other words, how we could take what we know about biases in groups and, rather than trying to mitigate the tendency toward bias amplification, work to further amplify those biases: I feel like there's actually a lot more potential in the YouTube model. It's a place where biases are amplified, but people are still communicating. There's no option to unfriend. I don't know; no specific ideas yet.
On the other hand, as judy discussed, I think there are a lot of really positive and important things happening on the internet in terms of community, group identity, and group advocacy. And to speak to fannie's point about the potential prevailing ignorance of out-groups: I think this may often be true, but as movements grow in size, they will naturally grow in visibility. And this could also be an HCI problem: how can we design systems so that there is more intergroup dialogue and visibility online? This kind of negates what I just said in my first paragraph, but I am regularly exposed to concerns of groups that I am not a member of through Facebook, concerns I think I probably wouldn't be exposed to IRL; a lot of the discussions around race and gender and politics and sexuality and the media (etc.) that I see on FB simply don't happen often in real life.
Post by aato on Apr 26, 2016 3:48:16 GMT
In ARM, we had a lecture on leadership where we learned that leaders are those who speak the most and speak first in a group. Could it simply be that these people, by speaking first, are priming others, hence creating and owning common knowledge and in this way becoming central to the group?
Responding partly to Julian's comment above and the prompt "Can individuals be better supported in guarding against the problems that are associated with these heuristics? What are the implications for tech, particularly for crowd cognition tech?": I'm usually hesitant to suggest technological scaffolding to solve human cognition error, but I think for deliberation groups there is definitely an opportunity to create something useful using insights from both Infotopia and Wisdom of Crowds. I'm thinking of maybe more of a process than a technology:
1) Everyone in the room listens to whatever information they are given at baseline (e.g., in a courtroom, the trial).
2) Everyone anonymously and concisely writes down their thoughts, including any points of confusion or indecision as well as instincts and current preferences.
3) The system or a person aggregates these comments, tags them, buckets them, does a variety of information processing, and maybe even pulls out short excerpts from the submissions with the most popular tags, displaying them to the group in a report.
4) Deliberation begins.
I wonder if seeing the actual majorities and minorities in the group would help. This could of course just amplify some of the issues: maybe it makes speaking up against the majority even more painful. But if we think the crowd has wisdom, maybe they start out in a good place after all.
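Step 3 of the proposed process (aggregating and bucketing the anonymous submissions) could be sketched as follows. This is a hypothetical illustration with invented data and function names; in practice the tags might be assigned by a person or by text-processing software rather than supplied by hand:

```python
# Minimal sketch of the aggregation step: count tag popularity across
# anonymous submissions and surface one example excerpt per tag, so the
# group sees its actual majorities and minorities before deliberating.
from collections import Counter

def aggregate(submissions):
    """submissions: list of (text, tags) pairs from anonymous members."""
    tag_counts = Counter()
    excerpts = {}
    for text, tags in submissions:
        for tag in tags:
            tag_counts[tag] += 1
            excerpts.setdefault(tag, text)  # keep the first excerpt per tag
    # Report tags from most to least popular, each with an example excerpt.
    return [(tag, count, excerpts[tag]) for tag, count in tag_counts.most_common()]

submissions = [
    ("I'm unsure about the alibi timeline", ["doubt", "timeline"]),
    ("The timeline doesn't add up", ["timeline"]),
    ("Leaning guilty based on forensics", ["guilty", "forensics"]),
]
for tag, count, excerpt in aggregate(submissions):
    print(f'{tag} ({count}): e.g. "{excerpt}"')
```

Because submissions are aggregated before anyone speaks, the report preserves the independence that the wisdom-of-crowds reading emphasizes, while still feeding the deliberation that follows.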
Post by cgleason on Apr 26, 2016 3:58:14 GMT
Re: Are the decision-making heuristics characterized by Sunstein actually errors (availability, familiarity/salience, framing effects, representativeness, conjunction)?
Yes, they are errors. That doesn't mean they are not useful, of course. As with optical illusions, the heuristics are correct most of the time, but when they screw up it can be very confusing. Availability, familiarity, representativeness, and conjunction are obvious deviations from the underlying probabilities, but they often work out in our favor. Is it wrong to think that dying from terrorism is more likely than dying in a car crash? Yes. Is it bad to think this way? I'm not sure. It certainly caused more deaths after 9/11 due to traffic accidents though.
As for online group discussions, I think this is a major problem. Whenever I want to see arguments on a topic, someone has already written at length about it. Any position in any argument is only a Google search away. If I read their position before forming my own, am I really aggregating all of the available information? Or am I just accidentally weighting others' opinions too highly? It's hard to get a sense of how much I should trust what I read, and how I can integrate knowledge in a meaningful way.
Post by francesx on Apr 26, 2016 4:07:22 GMT
The more I read about this topic, and about my fellow PhD students' ideas, the more this looks like real life to me: countries creating "identities", or communities within a country or city creating an identity. And to me, it is sad to see that without resolving this issue in real life, it is harder to solve online.
On a note for those replies saying that on the internet a single sign-up can get you into any group: what about those Facebook groups where you need approval from the admin to be part of them (and, along the same line, many cafes across Asia)?
Post by bttaylor on Apr 26, 2016 4:10:25 GMT
I'm wondering if any technological interventions like what alexandra suggested have been tested. The classroom seems like an obvious place where anonymous feedback could be useful. People are reluctant to admit confusion, but maybe an anonymous (or perhaps semi-anonymous) feedback mechanism would give a teacher a better sense of the class's level of understanding. This seems like something that has probably been studied to some degree.