Post by cgleason on Apr 17, 2016 3:51:09 GMT
Paper: Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453-458.
Author Background: Tversky and Kahneman are well-known psychologists who devoted much of their careers to studying human biases in judgment and decision making. They also explored how humans reason about risk and how framing affects systematic thinking, and together they developed prospect theory (which is discussed in this paper). Kahneman is also known for his role in founding behavioral economics, for which he was awarded the Nobel Prize in Economic Sciences.
Paper Summary: The authors discuss decision problems in which people seem to make irrational choices depending on the decision frame. They define:
- Decision problem: the acts one must choose among, the possible outcomes of those acts, and the contingencies relating acts to outcomes
- Decision frame: decision-maker’s conception of the acts, outcomes, and contingencies associated with a choice
A major theory in decision-making is expected utility theory, in which rational decision-makers who conform to a set of axioms make choices based on the utilities of outcomes, preferring the prospect with the highest expected utility. The authors instead propose a modification called prospect theory, where the overall value of a prospect (x, p; y, q) is given by

V(x, p; y, q) = π(p)v(x) + π(q)v(y)

where x is an outcome with probability p, y is an outcome with probability q, π is a function assigning decision weights to probabilities, and v is a function assigning subjective values to outcomes. They note these differences from expected utility theory (see the sketch after this list):
- The value function is S-shaped, concave above a neutral reference point (0) and convex below it, e.g. going from $10 to $20 feels like a greater gain than going from $110 to $120.
- The response to losses is more extreme than the response to gains.
- The value of an uncertain outcome is multiplied by a decision weight π(p), whereas expected utility theory weights outcomes by the probability alone.
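To make these pieces concrete, here is a minimal Python sketch of the value and weighting functions. The functional forms and parameter values (α = 0.88, β = 0.88, λ = 2.25, γ = 0.61) are assumptions, not from this 1981 paper: they are the commonly cited fits from Tversky and Kahneman's later cumulative prospect theory work, used here purely for illustration.

```python
# Minimal sketch of prospect theory's V(x, p; y, q) = pi(p)v(x) + pi(q)v(y).
# Functional forms and parameters are assumptions borrowed from Tversky &
# Kahneman's later (1992) work, not from the 1981 paper itself.

def v(x, alpha=0.88, beta=0.88, lam=2.25):
    """S-shaped value function: concave for gains, convex (and steeper) for losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

def pi(p, gamma=0.61):
    """Decision weight: overweights small probabilities, underweights moderate-to-large ones."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prospect_value(x, p, y=0, q=0.0):
    """Overall value of the two-outcome prospect (x, p; y, q)."""
    return pi(p) * v(x) + pi(q) * v(y)

# Diminishing sensitivity: the same $10 step feels smaller further from the
# reference point.
print(v(20) - v(10))    # ~6.4
print(v(120) - v(110))  # ~5.0
# Loss aversion: a $10 loss looms larger than a $10 gain.
print(v(10), v(-10))    # ~7.6 vs ~-17.1
```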
In addition, in prospect theory, variations in the following framings can cause a reversal of preference:
- Framing acts
  - The S-shape of the value function contributes to risk aversion for gains and risk taking for losses (a numeric illustration follows this list)
  - Probability under/overweighting makes gains and losses seem more or less attractive/aversive
  - Preferences change when options are combined rather than framed independently
- Framing contingencies
  - Certainty effect: reducing the probability of an outcome by a constant factor has more impact when the outcome was initially certain than when it was merely probable; certainty also exaggerates the aversiveness of losses
  - Pseudocertainty effect: conditional framing of decision problems can give an illusory sense of certainty and affect preferences
- Framing outcomes
  - Varying the reference point affects whether an outcome is perceived as a gain or a loss
  - People often evaluate acts in terms of a minimal account, focusing on the direct consequences of the act even when there are compound outcomes
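Continuing the toy sketch above (same assumed functional forms, not the authors' fitted model), the risk-aversion/risk-taking reversal falls out of the numbers. The sure-$30-versus-80%-chance-of-$45 choice is taken from the paper; the mirrored loss version below is my own addition to illustrate the reflection effect.

```python
# Gain frame: a sure $30 beats an 80% chance of $45 (risk aversion),
# even though the gamble has the higher expected value ($36).
sure_gain  = prospect_value(30, 1.0)    # ~19.9
risky_gain = prospect_value(45, 0.8)    # ~17.3
print(sure_gain > risky_gain)           # True

# Loss frame (mirrored problem, my own illustration): the gamble wins.
sure_loss  = prospect_value(-30, 1.0)   # ~-44.9
risky_loss = prospect_value(-45, 0.8)   # ~-39.0
print(risky_loss > sure_loss)           # True: risk seeking for losses
```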
Thought Questions:
- Based on what the authors have discussed about framing, how can we design systems that enable rational, unbiased decision-making? (if possible?)
- Now, how can we exploit these biases to our own ends? How do people do that already?
- Are computers susceptible to these same biases? Should they be?
- There are many decisions that deal with problems that have a low probability of occurring but have high risk or reward if they happen. How can we help people reason about these issues, even when most are "certain" they won't happen? Example: mass-extinction level asteroids.
- When is prospect theory “appropriate” for human behavior? When is it “irrational” and needs combating? Should we strive to be rational?
- Now that you are aware that you may be influenced by the chosen frame in decision making, how might your behavior change?
Post by sciutoalex on Apr 18, 2016 0:58:38 GMT
I find Thought Question 2 a bit quaint. Everything we see is exploiting these framing effects to get us to spend more money. Go to TJ Maxx and see a discount tag highlighting a big discount without ever clearly identifying why the price was so high in the first place (http://prod-cdn.thekrazycouponlady.com/wp-content/uploads/2015/10/PriceTagCodes.png). The NYTimes recently covered how Amazon sellers use a similar tactic to highlight huge savings (90% off!) from highly inflated base prices (http://www.nytimes.com/2016/03/06/technology/its-discounted-but-is-it-a-deal-how-list-prices-lost-their-meaning.html). These companies are reframing the neutral point of the purchase, shifting the consumer's focus from the perceived utility of the purchase to the perceived benefit of getting a discounted item.
With the advent of algorithmic pricing, are the studies Tversky and Kahneman ran outdated? If prices are personalized for us, how will we evaluate the benefit of a transaction? Will we be forced to consider the utility of our purchases, since there will be only the price itself? Or perhaps Amazon is already considering the "discount percent" as part of its pricing algorithm: some people will respond strongly to the discount, while others will be annoyed, so more transparent pricing will be provided for them. It's a brave new world.
Maybe HCI should look for inspiration in advertising and marketing. They've certainly learned how to motivate large numbers of people. But if we use these techniques effectively, are we just as bad as they are?
Post by julian on Apr 18, 2016 1:27:46 GMT
Based on what the authors have discussed about framing, how can we design systems that enable rational, unbiased decision-making? (if possible?)
Add more explanations to everything? It seems like any bias generated by context (variations in the framing of acts, contingencies, and outcomes) can only be overcome this way. However, this may not make sense for every single decision a person makes. I think that before creating an HCI solution for supporting unbiased decision making, we need to find out which decisions are worth supporting.
Are computers susceptible to these same biases? Should they be?
It depends on what you mean. There are no computer equivalents (without really going to extremes) for the changes in problem formulation that cause inconsistencies in human decision processes. The kinds of decision-making problems that computers can solve have to be in the same format (I guess this could be the equivalent of context), and they will likely not work if presented with a different format. I suppose in this way computers and humans are similar.
There are many decisions that deal with problems that have a low probability of occurring but have high risk or reward if they happen. How can we help people reason about these issues, even when most are "certain" they won't happen? Example: mass-extinction level asteroids.
I think that, more importantly, people should be taught to plan long term, especially poor people, who are more susceptible to bad planning. Also, in this article everything was reduced to examples with numerical values for the probabilities, which is hardly the case in real life. As an example, driving a car is statistically far more dangerous than being on a plane, and yet we have to use car-like transportation daily. Should I be aware of this, and is it even useful to know? Should I take a plane to CMU every day? These are the kinds of probabilities we do have available, and they are not exactly useful; and we have no way to measure, for example, the chance of ending up in a bad, life-changing situation by being friends with the wrong person.
Now that you are aware that you may be influenced by the chosen frame in decision making, how might your behavior change?
I should definitely not do grocery shopping before dinner! I had noticed before that grocery shopping while hungry usually resulted in buying food in excess, but I never really thought about it. It makes sense, though, that our decision-making processes are strongly colored by our inner state.
Post by aato on Apr 18, 2016 1:52:36 GMT
First of all, I was super hyped for this reading because I've finally been reading "Thinking, Fast and Slow" by Kahneman, where he talks about his work with Amos Tversky very fondly.
Re: 1) Based on what the authors have discussed about framing, how can we design systems that enable rational, unbiased decision-making?
A lot of Kahneman and Tversky's work on heuristics and biases demonstrates the ways in which really tiny changes in a prompt can massively alter people's responses. A first step in designing systems that attempt to remove bias would be to review the most common heuristics and biases and what primes them. A second step would be to examine the purpose of your system and try to get a sense of who your target user base is. The norms and everyday reality of the users will likely influence which heuristics and biases they are most susceptible to and in what ways. A final step, as julian mentions, would be to inform the user as much as possible about what the system is doing and possibly to educate the user about the potential biases at play. <- there might be some literature about whether or not this actually does anything, I can't remember right now.
I don't think the above method is exhaustive, but it's probably something the people creating and designing systems should be thinking about. Tristan Harris, co-founder of Apture and former Product Philosopher at Google, has some really interesting thoughts on design ethics in Silicon Valley. I saw him give a talk once, which was recorded here (https://www.youtube.com/watch?v=2sdlzbQhrRw), where he talks about how the psychological underpinnings that make us addicted to our phones and email are unethically designed into the technology we use, in connection with the idea of an attention economy.
Post by anhong on Apr 18, 2016 1:55:20 GMT
It's a very good question whether we should be rational or not. I think it depends on how big the decision is and the context of making it. For a lot of small decisions, I think the irrationality doesn't really matter; instead, it may introduce new experiences and serendipity into our lives. However, for decisions that involve the interests of many stakeholders, we should strive for rationality.
Another thing that came to mind is the influence of social networks on our decision making. Presenting us with how other people made a decision will drastically change our own. This is demonstrated in, e.g., Sauvik's social cybersecurity work on getting people to adopt new technologies. I think this can be extended to also present the attributes people used to make decisions; e.g., when tech-savvy users choose a product or service, they are even less likely to choose the one other people preferred if those people are not tech savvy. Considering these factors when leading people to make certain decisions might be useful.
Post by k on Apr 18, 2016 17:25:40 GMT
#5 Kahneman and Tversky's effort to revise a theory of rational choice with prospect theory makes me sad. The behavioral theory of rational choice had been thoroughly critiqued by researchers by the time of Kahneman and Tversky's writing. Here is one critique which I enjoy (also written by a Nobel-prize-winning economist) because it identifies a central weakness of Kahneman and Tversky's approach: whether it portrays decision making faithfully. See "Rational Fools: A Critique of the Behavioral Foundations of Economic Theory", www.jstor.org/stable/2264946?seq=5#page_scan_tab_contents. I see Kahneman and Tversky's S-curve as a patch on a model that is fundamentally unable to portray decision-making as it is seen by the decision maker.
Post by stdang on Apr 18, 2016 19:47:18 GMT
One potentially interesting outcome of prospect theory is that empowering an individual with this meta knowledge might enable them to counter their natural decision-making process by forcing alternative evaluation frames before making a decision. Of course, this is only helpful either in situations where there is sufficient time, or in situations where the same decisions must be made multiple times, so that fluency at evaluating multiple frames is possible. Unfortunately, I think I also remember a case from Kahneman's book "Thinking, Fast and Slow" where they discuss the limitations of metacognition at counteracting these effects: to some degree, especially under time pressure, individuals were still subject to the same decision-making tendencies. Thus it might be interesting to design decision aids that are able to discover average or individual-specific biases and highlight them in multiple equivalent problem statements in support of decision making.
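Steven's decision-aid idea could be prototyped very simply. Below is a hypothetical toy sketch (every function and variable name is my own invention, not an existing system or API) that re-renders one choice in two equivalent frames before the user commits; the numbers come from the paper's epidemic problem, where "200 of 600 saved" and "400 of 600 die" describe the same outcome.

```python
# Hypothetical decision-aid sketch: show the user the same outcome in both
# a gain frame and a loss frame before they choose. Names are illustrative.

def reframe(total, spared, kept_label, lost_label):
    """Render one outcome as an equivalent gain-frame and loss-frame statement."""
    gain_frame = f"{spared} of {total} {kept_label}"
    loss_frame = f"{total - spared} of {total} {lost_label}"
    return gain_frame, loss_frame

# The paper's epidemic problem: the two statements are logically identical.
gain, loss = reframe(600, 200, "people will be saved", "people will die")
print(gain)  # 200 of 600 people will be saved
print(loss)  # 400 of 600 people will die
```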
Post by JoselynMcD on Apr 18, 2016 20:25:00 GMT
6. Now that you are aware that you may be influenced by the chosen frame in decision making, how might your behavior change?
In undergrad, I took a business class on marketing and economics. In it we read about how all the biggest name-brand retailers were using social and cognitive psychology tenets to dupe us into buying things we don't need. Things they would manipulate included temperature, lighting, music tempo, misrepresenting the sale numbers, etc. After this class, and still to this day, when I enter one of those retailers (usually Target, one of the trickiest of foes!), I consider all the ways they are using the environment to shape my decision-making. Even with this knowledge in mind, I make poor decisions while shopping, e.g. buying shoes I didn't need because it was the last day of a sale, or buying things that say "natural" even though that is a completely meaningless construct, etc. There's a cognitive load on the shopper in trying to assess each item, its price, and the environment for rhetoric and influence, and I am certainly unable to keep up while flooded in this way. I think this paper, which is a much-needed refresher on the concept of framing, will give me a slight boost in helping me to consider how I'm being influenced, but ultimately I believe that after a time I'll be unable to dedicate enough of my cognitive resources to this end and will thus make forced errors in my shopping decisions.
Post by Nick on Apr 19, 2016 0:40:36 GMT
Steven, I love the idea of decision aids. I wonder, though, how we might react to them. Imagine a bakery has a cookie surplus and they're handing out bags of free cookies. You enthusiastically run over to take advantage of this once-in-a-lifetime opportunity when suddenly your personal decision aid steps in front of you and says, "Cookies are bad for your health, try some kale instead."
What would you do? I know what I'd do.
Don't get me wrong, I think the idea is great for those borderline situations where you might just need a reminder that you're being duped into a poor decision, but even if I'm told that the whole "free bag of cookies" thing is a devious ploy to get me to try their product, I'm still going to get those cookies. And if we move anywhere beyond a simple reminder or suggestion, we start getting into dangerous territory.
Post by Anna on Apr 19, 2016 1:28:19 GMT
Regarding Nick and Joselyn's posts: yes, basically all the time I know I'm being duped, and it's all a matter of how much I care, I think (?). Shopping can be stressful for me (thinking back to those 24 jams), so when I go, I treat it as a game, and I will alternately actively combat and willfully succumb to the barrage of marketing ploys. And okay, yes, probably I unconsciously succumb to a ton of different ploys as well. But even here, I still kind of feel in control, because it's like, yeah, we all know it's all a trick. And to Alex's point, haven't we reached a point where that's kind of the norm? And hasn't this arguably always been the norm in transactions? E.g. traders duping other traders, haggling over prices in street markets, etc. In many ways we are now more informed consumers (e.g. we can compare prices and quality online), but the sellers have accordingly upped their game.
And yes, I do think it's worth considering how to use marketing's somewhat nefarious strategies to achieve 'good' HCI ends, so long as we do so warily/transparently (well, at least transparent post-study). For example, there's been some HCI work in motivation and healthy or environmentally-friendly behaviors that might benefit from a more manipulative marketing-style approach.
The $10 play thought experiment and the $5 calculator/jacket thought experiments resonated with me. For both cases, even as I read the situations side by side and was fully cognizant that they were essentially reframes of the same amount of loss/gain, I still found myself making the same decisions that the majority of participants made. The calculator example is easier for me to explain, because the thrill I get from getting a good deal (50% off in the $10 calculator example) typically exceeds the actual monetary value of the deal. But I'm still mulling over the play example, so I think I'll stop now and pick the mulling back up tomorrow during class.
Post by francesx on Apr 19, 2016 1:54:00 GMT
Re: Now, how can we exploit these biases to our own ends? How do people do that already?
The price and discount example that Alex brings up is a very good example of how people exploit these biases to their own ends. However, other than marketing, I do not see where else we would be able to exploit them. Certainly sellers and advertisers are doing it every day, and, as Anna and Alex say, we all know it's a trick.
Re: Now that you are aware that you may be influenced by the chosen frame in decision making, how might your behavior change?
I would think not. Maybe I would need to be facing an actual decision, but the frame on its own might in some cases be important in determining whether the action I take is a rational or non-rational one.
Post by judithodili on Apr 19, 2016 2:26:12 GMT
RE: Based on what the authors have discussed about framing, how can we design systems that enable rational, unbiased decision-making? (if possible?)
I don't think this is possible, because it is not a systems problem: human nature is such that everyone has different value expectations and different lenses that color our decision-making processes. Most of our definitions of what is right or wrong have little bearing on any universally agreed-upon principles but are almost entirely dependent on the culture we are raised in. I also don't think that biases are always a bad thing. We tend to think of bias in really obvious situations, like hiring people from a certain race/school/country, etc.; however, even little things like purchasing items on amazon.com are almost entirely based on bias. The very first thing that most of us consider when purchasing an item on Amazon is how good or bad the reviews are, and we use that to guide our buying decisions. Most people like to think that since Amazon's reviewers comprise many different types of people, they are more likely to make a "well informed" decision. My personal opinion is that this "well informed" decision-making process really consists of finding people who have the same value systems as we do (e.g. they will use the product the same way we want to, value for money, etc.), subconsciously giving a higher value to their perception of the product, and using that as a driver for the purchase (creating a bias in this seemingly well-informed decision process).
RE: Are computers susceptible to these same biases? Should they be?
Absolutely!! Computers are made by humans, and humans are inherently biased so....
Post by judy on Apr 19, 2016 2:32:30 GMT
1. No. There is no such thing as unbiased decision making.
2. Yes. Joselyn described some marketing tactics nicely! Scare statistics are another one: "if we continue doing X at the same rate for Y years, we will all certainly die!"
3. Yes. Computers are susceptible.
4. I'd be interested to think about this when it comes to end-of-life decisions. I feel like a broken record, but you have to address value systems to get people to change their frame.
5. I appreciate Steven's point about empowering people with meta knowledge. It makes a lot of sense. I was thinking about the example of losing $10 at the theatre vs. losing your $10 ticket. If I lost either thing, I'd be mad at myself. However, if I already purchased the ticket, then those bastards at the theatre already have my money. It's their gain. They are profiting off a silly mistake, a misplaced ticket. How many people work at this theatre? Can I not go back to the person I purchased the ticket from; won't they remember me? ...etc. In order to see the two losses as the same, I'd have to let go of the idea that it matters that the business is making money off of me, and that seems to require more than metacognition. It's not just recognizing that I'd spend $20 in either case.
6. Have any of us really gone through life this long without being conscious that how something is framed influences how we think about it?
Post by xuwang on Apr 19, 2016 2:49:43 GMT
This paper is very similar to the paper Joseph and I will be presenting, which also talks about biases in people's decision-making processes. #1: As also mentioned in the decision-making chapter, the framing of options can affect how people choose. For example, people may only consider one part of a choice and may not pay attention to the probability of that outcome happening. I think that to design systems that enable rational decision making, the system should remind users of factors that could play a role in the decision. For example, in choosing graduate programs: what factors should be considered, and what factors are usually considered by other people (learning from others' behaviors)? Users may have the right to attribute weights to different factors, but displaying factors and alternatives for users to consider could be useful.
#5: As mentioned in the "when will we do better" paper, subjectively better decisions and objectively better decisions are different. I was wondering: when we're biased by risk seeking or risk aversion and don't make the objectively best decision, will we still feel subjectively better about the decision during the process? For example, will people be stressed out if they take a big risk on something that could result in a big reward or a big loss? I think as long as people are aware of the potential benefits and losses of their choices and make decisions based on that, those should be considered rational decisions.
Post by jseering on Apr 19, 2016 3:03:24 GMT
I think we generally seem to agree that "unbiased" decision making isn't necessarily a thing we should strive for (if it even exists), but is there a way to reframe the question into something we can solve? For example, the example of saving $5 at 50% off vs. 10% off implies that we're making irrational choices, but as Anna said, it's still thrilling when we save 50% on something, and I don't think this little bit of irrationality hurts us much in the long run. Aside: I'm not even convinced that the example is reasonable.
Perhaps we should reframe the question to ask whether we can build decision-making aids to keep us from making choices that we'll later regret. Joselyn wrote a bit about this WRT department stores, where poor (and possibly regrettable) decisions are made under ~coercion. Could we help shoppers avoid making purchases they'll regret later? Or should we be critiquing the stores for trying to push shoppers in the first place?