Post by aato on Apr 19, 2016 2:04:42 GMT
I'd like to branch off of MaryBeth and Mike's points and add a slightly dystopian frame. This paper had me thinking a lot about Dr. Manhattan from Watchmen, who undergoes a gradual transformation from man to demi-god as the result of a freak accident. During this transition he becomes certainly more rational, and arguably more distant and cold. He ultimately ends up rather nihilistic about humanity, and I've always wondered whether the "perfect decision machine" might develop a similar attitude. So it's possible that this machine would solve all our decision-making problems in an acceptable way, or that it would make ruthless, rational decisions without compassion, but I'd like to add the possibility that this kind of machine might struggle not to be indifferent. If the choice is between saving 100 people and killing 10, a virtually immortal machine might not see a significant difference between dying now in an accident and dying in 50 years some other way. Wow, I didn't think I'd end up that deep down the rabbit hole.

I will totally join you down this rabbit hole. I mean, this instinctive fear is the basis for a lot of popular science fiction, right? [spoilers ahead] You've got characters like Ash the android from Alien, who will follow a directive no matter the cost to human life in hyperrational pursuit of a concrete goal; VIKI from I, Robot, who decides that in order to follow Asimov's three laws and protect human life, humans must have their free will stripped away so robots can protect them from harming themselves; or HAL 9000, who menacingly asks, "are you sure you're making the right decision?" I think there's an interesting idea in there: we could try to encode more human decision-making fallacies into decision-making software, such as taking all stakeholders' desires into account and adding attribute weights to those different desires, or coding in some kind of conflict-avoidance behavior. A rough sketch of the weighting idea is below.
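To make that last idea slightly more concrete, here is a minimal sketch of what weighting stakeholders' desires might look like. Every stakeholder name, option, and number is invented for illustration; this is just the weighted-utility idea, not anything from the paper:

```python
# A minimal sketch of "attribute weights on stakeholders' desires."
# All names, options, and numbers are hypothetical.

def score_option(utilities, stakeholder_weights):
    """Weighted sum of each stakeholder's utility for one option."""
    return sum(stakeholder_weights[s] * u for s, u in utilities.items())

options = {
    "evacuate_now":     {"residents": 0.9, "responders": 0.4, "budget": 0.2},
    "shelter_in_place": {"residents": 0.5, "responders": 0.8, "budget": 0.9},
}
weights = {"residents": 0.6, "responders": 0.3, "budget": 0.1}

best = max(options, key=lambda name: score_option(options[name], weights))
print(best)  # picks the option with the highest weighted stakeholder utility
```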
Post by fannie on Apr 19, 2016 2:16:46 GMT
I think Joselyn brings up an interesting point about decision making in online dating, but there's something dehumanizing when 100 users become the jams on a shelf. I think there are certain kinds of decisions like these, or moral dilemmas like the ones Mike brought up, that have a fuzzier definition of what's "rational," which would make it difficult for, say, a computer to make the "perfect" decision. Perhaps it could provide recommendations, but for cases like these I don't think a final automatic decision could be made by the system unless individuals' values, identity, emotions, etc. (as Judy mentions) could be accurately modeled--a user still has to be there to make the choice. And then there's the question of whether to limit those recommendations to reduce cognitive load (having to choose between all those jams), or whether we should rely on visualizations like the ones we saw in the last class to help us make the choice (and, in designing those visualizations, whether we can reduce the designer's influence on the bias).
In regards to Franceska's question about why we want people to make better decisions, Toby talked about how we can benefit people's emotional well-being, but I agree that it's questionable what "better" means. Potentially, though, if we did move responsibility for a "better" or "perfect" decision onto the system, we could reduce the regret or emotional stress someone experiences later, since they could blame the system for a choice that did not happen to be "better."
Post by mmadaio on Apr 19, 2016 3:50:11 GMT
Fannie, it's definitely dehumanizing when people on Tinder become "jams on the shelf". One method Grindr (gay Tinder) used to reduce the cognitive load of decision making is to introduce filters that let people self-select the attributes of their ideal match (in contrast with other systems that provide a recommendation based on previous "choices"). It's, frankly, disgusting and depressing (but maybe not surprising?) both that the designers of the app decided to facilitate this superficial selection process and that people willingly engage in it.

To Franceska's point about students' problem selection in ITSs, the challenge isn't just deciding on the "optimal problem" for them to solve next, but figuring out how to teach and scaffold appropriate problem-selection strategies. As novices, they don't have the full view of the problem space, nor is it clear that, even if they did, they would be able to make a "rational" decision about which problem will be optimally beneficial for them to solve next. But, with this reading in mind, perhaps certain framings of the domain space and the inter-relationships of the problems (or topics, at a higher level) can help students make more "informed" decisions about their next problem to solve or topic to learn. In that way, even if the selection isn't the most optimal one, it might still be better than the decision they would have made using only their local knowledge of the decision.

On a poetic note, the section on the multiple "selves" we consider, and the identities that are made salient, got me thinking about this ee cummings poem: www.unizar.es/departamentos/filologia_inglesa/garciala/materiales/poemas/cummings,selves.htm

Or the studies where people make better investment decisions if they're shown pictures of themselves as senior citizens: hbr.org/2013/06/you-make-better-decisions-if-you-see-your-senior-self
Post by rushil on Apr 19, 2016 4:41:41 GMT
Mental accounting is behind one of the more commonly used tricks in marketing as well. I forget the exact term, but it is something like this: when you are presented with options such as "buy 1 for $x, 2 for $y, but 3 for only $z," the prices are set to make you lean towards a certain option (which is not the cheapest one). A toy worked example is below.
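Here is that pricing pattern worked out with invented numbers; the per-unit arithmetic is what makes the largest bundle feel like the obvious choice even though it costs the most in total:

```python
# Toy illustration of tiered pricing (all numbers invented).
# The per-unit math nudges you toward the big bundle, even though
# it is the largest total outlay.

tiers = {1: 4.00, 2: 7.50, 3: 9.00}  # quantity -> bundle price in dollars

for qty, price in tiers.items():
    print(f"{qty} for ${price:.2f} -> ${price / qty:.2f} per unit")
# 1 for $4.00 -> $4.00 per unit
# 2 for $7.50 -> $3.75 per unit
# 3 for $9.00 -> $3.00 per unit
```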
An interesting HCI tool would be one that could determine all the criteria involved in a person's decision and simply give them a reminder before they complete an action: "have you thought of this and this before you complete this action?" A rough sketch of that idea follows.
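One hypothetical shape such a tool could take (the action names and questions here are made up for illustration):

```python
# Rough sketch of the pre-action reminder idea. Before an action fires,
# the tool surfaces the criteria the user previously said matter for it.
# Action names and questions are hypothetical.

CRITERIA = {
    "send_email": ["Is the tone right?", "Did you attach the file?"],
}

def confirm(action):
    """Ask the user to confirm each stored criterion before proceeding."""
    for question in CRITERIA.get(action, []):
        if input(f"{question} (y/n) ").strip().lower() != "y":
            return False
    return True

if confirm("send_email"):
    print("sending...")
else:
    print("action cancelled -- something to think about first")
```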
Post by cgleason on Apr 19, 2016 5:14:51 GMT
Are any of these “irrational” biases rational in a certain context? Can you justify why participants might have acted in a way that seemed irrational but actually makes sense when reframed?
Yes, and I actually seek to learn about these precisely so I can use them to my advantage. For example, mental accounting can be useful for saving. If my money is treated as one large pool of assets, then I might be tempted to spend it frivolously. One of the reasons the money-in-separate-envelopes budget method (or, more modernly, YNAB) is so effective is that it relies on mental accounting to force people to save even when they technically have enough money for a movie or something (a minimal sketch of the mechanism is below). Does this make sense to an Econ? No, but we are not Econs.
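A minimal sketch of the envelope mechanism, with invented categories and amounts. The point is that each envelope is a hard sub-limit, so "I technically have enough money overall" stops being a reason to spend -- mental accounting, used on purpose:

```python
# Envelope budgeting: each category gets a hard sub-limit.
# Categories and amounts are hypothetical.

envelopes = {"rent": 900, "groceries": 300, "fun": 50}

def spend(category, amount):
    if envelopes[category] < amount:
        raise ValueError(f"'{category}' envelope only has ${envelopes[category]}")
    envelopes[category] -= amount

spend("fun", 15)  # fine: the fun envelope covers it
try:
    spend("fun", 40)  # refused, even though other envelopes still hold cash
except ValueError as err:
    print(err)
```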
It's mentioned in the paper that people have limited capacity for combining information across attributes. Do you think computers can make better decisions than humans based on certain models? What do you think are the advantages and disadvantages of computers in making decisions?
Computers can make better decisions, depending on your definition of "better." They can most definitely merge information across attributes more reliably, as pretty much any linear classifier will demonstrate (see the sketch below).
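A quick sketch of what "merging across attributes" looks like with a linear classifier; the data is made up, and the point is only that the learned weights get applied identically every time, which humans are notoriously bad at:

```python
# A linear classifier combines many attributes into one decision
# with fixed, consistent weights. Data here is hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Rows = past choices, columns = attributes; y = 1 means "good decision".
X = np.array([[0.9, 0.2, 0.5],
              [0.1, 0.8, 0.4],
              [0.8, 0.3, 0.6],
              [0.2, 0.9, 0.3]])
y = np.array([1, 0, 1, 0])

clf = LogisticRegression().fit(X, y)
print(clf.coef_)                       # one consistent weight per attribute
print(clf.predict([[0.7, 0.4, 0.5]]))  # attributes combined the same way, every time
```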
Which of these biases, if any, would you expect to be overrepresented among PhD students? Which might be underrepresented?
I bet PhD students are more susceptible to delaying action on the idea that more information might arrive, since they are more likely to encounter unknown areas of a field. PhD students (in HCI at least) are probably also more likely to notice priming effects, although I don't know that they can alter their behavior to avoid them.
Post by judithodili on Apr 19, 2016 5:22:47 GMT
These findings make sense from a monetary perspective... it seems like you are saving more or losing less.
When it comes to making decisions, I get really stressed out by having choices. I'd rather have 2 choices than 4... anything after 6, I freeze up and forget the whole thing! I don't think there is such a thing as better decisions, or the right decision... I think it depends on who is making the decision, and in most cases (for everyday commodity items) there is not really a bad decision. Just make one and stick with it. Fantastic summary of the paper, Joseph... I didn't pick out all those things when I read it, so it's great to see all the major points laid out so nicely here.
Post by Adam on Apr 19, 2016 10:53:53 GMT
- What are some examples of decision making in your own life that use the principles mentioned in the paper? How do you think these biases shape our career decisions as PhD students?
I feel like the irrationality of repeated decision making versus specific instances (e.g., participants offered a "game with equal chance to lose $500 or gain $2000 wouldn't play a single time but would play five times. They were happier to play six times than five, but when told to imagine that they had already played five and asked if they wanted to play a sixth they declined.") is a good example of something HCI research keeps demonstrating: users will often say they would do something, yet when it comes down to actually doing it, they act in a completely different way. In this example, people were happy to commit to six plays up front, yet when asked to imagine actually facing the sixth play, they declined it. As HCI researchers, we need to keep in mind that people are irrational and that what they say might not be exactly how they would actually use an artifact, user interface, etc. (The expected value of that gamble, worked out below, is what makes refusing even a single play look so striking.)
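For reference, here is the arithmetic behind that gamble, assuming the 50/50 odds the quote implies. A positive expected value per play is why declining a single play, while accepting five, looks "irrational" on paper:

```python
# Expected value of one play of the quoted gamble (50/50 odds assumed).

p = 0.5
ev_single = p * (-500) + p * 2000
print(ev_single)      # 750.0 per play
print(5 * ev_single)  # 3750.0 over five plays -- each play is the same bet
```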
- In terms of HCI research, what kinds of tools can be designed or developed to help people make better (or more rational) decisions? (You may have a different definition of what a "better" decision is.)
I feel like some of these findings about irrational behavior could be applied as behavior-change interventions. For example, a weight-loss app could frame the user's goal in a way that leverages one of these irrational tendencies so that the user ends up more motivated to do what's needed to lose weight. This is similar to the point about how the framing of a decision matters (e.g., "75% lean" sounds better than "25% fat"; a "3.7% crime rate" sounds much more serious than "96.3% crime free"). The framing of a task might lead to more motivated behavior.
Post by vish on Apr 19, 2016 23:16:52 GMT
Are any of these “irrational” biases rational in a certain context? Can you justify why participants might have acted in a way that seemed irrational but actually makes sense when reframed?
A lot has been discussed about decision making already, so here I'd like to focus on the endowment effect. We can observe it day to day: a large share of exchanges of goods and information are now mediated by artificial agents rather than human ones, and these external agents play a significant role in how the endowment effect shapes decisions. For example, suppose a person is buying a car at a fair that has both used and new cars for sale. The buyer may well lean on information about a car from an artificial agent over a human agent, because the buyer trusts the computer more. Now shift the scenario to a medical case -- an artificial agent as doctor vs. a human doctor -- and people readily trust the human doctor over the agent. All of this decision making is contextual. It is rational to rely on human doctors for medical advice because our mental model says "the doctor's experience is better," and agent doctors have yet to earn the same trust. In the car trade, however, people readily believe a computer stating direct facts over a human agent, because our mental models of human agents may suggest deceit and distrust.
Post by mrivera on Apr 21, 2016 15:11:32 GMT
francesx RE: "As a side note in learning sciences we have "options" within ITS that say allow the student to decide what problem to do next in the tutor, based on previous performance. Enhancing these tools with more data and explanations could help students make better decisions for their future learning." Why bother allowing students to decide which problems to do next in the tutor? If the goal is to have students perform better and learn, doesn't that model run the risk of having a student always choose problems that he or she feels comfortable doing?