|
Post by rushil on Apr 19, 2016 4:51:57 GMT
Maybe HCI should look for inspiration in advertising and marketing. They've certainly learned how to encourage large numbers of people to act. But if we use these techniques effectively, are we just as bad as they are? I don't think it's bad. They are just maximizing their valued criterion -- making money. As an HCI practitioner, you can choose to maximize a different criterion, but there are definitely things we can learn from advertising and marketing.

As a personal example, I love traveling. I end up buying plane tickets months in advance if I see a good online deal (even though, that far ahead, I don't know whether I'll actually be able to make the trip -- and yes, I have missed at least one of these impulse buys). I am fully aware of the biases that lead me to buy those tickets, but I am not going to change my decision-making process. Why? Because what I value more in this particular scenario is the outcome. If I can *feel* that I am spending less money (even if I am not), and get a good outcome (being able to travel), then I don't care if some marketer makes some extra money off of me. I would see that as a win-win situation.
|
|
|
Post by Adam on Apr 19, 2016 12:04:40 GMT
I mostly agree with Rushil. However, consider a situation in which the marketer/website leads you into an irrational purchase purely through how the discount is framed. Say an item is listed at $50 with 50% off, so you pay $25 and "save" $25, while an identical item on another site is listed at $25 with 20% off, so you pay $20 and "save" only $5. You would have been better off buying the 20% off item, because what matters is the final price you pay, but the decision frame the marketer led you into is that 50% off must be a better deal than 20% off. I'm not sure that is a win-win situation; it's the marketer taking advantage of our biases and irrational thinking.

I could see there being a browser plugin of some sort that detects when you are making an irrational decision and prompts you to reconsider. For example, in the scenario I described above, it could intervene when I try to buy the product that is 50% off despite my still having to pay more than for the 20% off product. It could be as simple as a verification check (e.g., "Are you sure you want to buy this for $x more than this similar product?").

Related side note: I wonder if/how people's buying habits for plane tickets would change if airlines started using similar "discount" tactics in their search results (e.g., 20% off a more expensive flight versus 10% off a cheaper flight).
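For concreteness, here is a minimal sketch (not from the post above) of the kind of check such a plugin might run, assuming it has already scraped comparable offers from the pages being viewed; the Offer type, the checkFraming helper, and the example numbers are all hypothetical.

```typescript
// Hypothetical sketch of the "discount framing" check a browser plugin might run.
// Assumes the plugin has already scraped comparable offers for the same product.

interface Offer {
  label: string;        // e.g. "Site A: 50% off"
  listPrice: number;    // advertised pre-discount price, in dollars
  discountRate: number; // e.g. 0.5 for "50% off"
}

function finalPrice(o: Offer): number {
  return o.listPrice * (1 - o.discountRate);
}

// Returns a warning if some comparable offer advertises a smaller discount
// but has a lower final price than the offer being bought; otherwise null.
function checkFraming(chosen: Offer, alternatives: Offer[]): string | null {
  const cheaper = alternatives
    .filter(a => a.discountRate < chosen.discountRate && finalPrice(a) < finalPrice(chosen))
    .sort((a, b) => finalPrice(a) - finalPrice(b))[0];
  if (!cheaper) return null;
  const extra = (finalPrice(chosen) - finalPrice(cheaper)).toFixed(2);
  return `Are you sure? "${chosen.label}" costs $${extra} more than "${cheaper.label}", ` +
         `even though its discount (${chosen.discountRate * 100}%) looks bigger.`;
}

// The scenario above: 50% off a $50 listing vs. 20% off a $25 listing for the same item.
const warning = checkFraming(
  { label: "Site A: 50% off", listPrice: 50, discountRate: 0.5 },
  [{ label: "Site B: 20% off", listPrice: 25, discountRate: 0.2 }],
);
console.log(warning); // would trigger the verification prompt instead of letting the purchase go through
```

The check cares only about final prices, which is exactly the comparison the discount framing obscures.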
|
|
|
Post by mmadaio on Apr 19, 2016 13:58:53 GMT
Perhaps we should reframe the question to ask whether we can build decision-making aids to keep us from making choices that we'll later regret. Joselyn wrote a bit about this WRT department stores, where poor (and possibly regrettable) decisions are made under something like coercion. Could we help shoppers avoid making purchases they'll regret later? Or should we be critiquing the stores for trying to push shoppers in the first place?

Maybe this is the libertarian in me clawing to get out, but doesn't this feel just a little too paternalistic? Why should we stop people from making decisions they'll later regret? I think Joseph's last sentence about critiquing (or legislating/regulating) stores for their subtle coercions points to a better world than one in which every potentially regrettable decision I might make (and might learn from later!) is prevented by my oh-so-helpful decision agent that stops me from eating that extra Reese's cup. Or buying that completely useless new duvet.

This is what bothers me about the learned helplessness encouraged by "optimal" problem/topic selection in current intelligent tutoring systems (ITSs). Sure, students don't always know (in fact, may very rarely know) the best next topic to learn, particularly as novices in the domain. But responding to that lack of knowledge with the brazenly confident delivery of the "optimal" next problem/topic robs students of the opportunity to make those decisions for themselves, perhaps make a mistake, and learn the meta-skill of designing or arranging their learning on their own! I worry that students given the "optimal" path through a curriculum will never be able to leave the walled garden or sandbox of the ITS, or that when they do, they won't have learned how to be autonomous learners. Can the same be said for my cookie-eating habits, or my Target-shopping habits? That a personal decision aid will atrophy my decision-making abilities, in service of my optimal life-state?
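A minimal sketch (not from the post above) of the contrast being drawn here: ranking candidate problems but presenting the top few as choices, rather than auto-assigning the single "optimal" one. The Problem type, the score function, and both helpers are hypothetical and don't reflect any particular ITS.

```typescript
// Hypothetical sketch: auto-assignment vs. choice-preserving recommendation in an ITS.

interface Problem {
  id: string;
  topic: string;
}

// Assumed to exist: some estimate of how useful each problem is for this student right now
// (e.g., from a mastery model).
type ScoreFn = (p: Problem) => number;

// Auto-assignment: the system makes the decision for the student.
// (Assumes candidates is non-empty.)
function assignOptimal(candidates: Problem[], score: ScoreFn): Problem {
  return candidates.reduce((best, p) => (score(p) > score(best) ? p : best));
}

// Choice-preserving alternative: surface the top-k candidates and let the student pick,
// so they still practice the meta-skill of deciding what to work on next.
function offerChoices(candidates: Problem[], score: ScoreFn, k = 3): Problem[] {
  return [...candidates].sort((a, b) => score(b) - score(a)).slice(0, k);
}
```

The ranking model is identical in both functions; the only difference is who makes the final call.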
|
|
|
Post by Amy on Apr 19, 2016 14:34:40 GMT
I agree with Michael. When I read Adam's idea of an agent that stops you from making sub-optimal decisions... maybe it's the satisficer in me, but sometimes I really don't care whether I'm making the "best" choice, and sometimes, like Michael pointed out, I can benefit from making sub-optimal choices and learning from that experience. There might be some decisions I'd want the agent to help with, but even then I'm not convinced I could teach the agent what my version of "best" is.
|
|
|
Post by kjholste on Apr 19, 2016 19:15:26 GMT
I'm inclined to agree with Michael for the most part, but I do wonder whether students will learn the meta-skill of designing/arranging/regulating their own learning effectively in the absence of any support. To get a bit meta: if we develop a system that directly provides "optimal" support for learning the meta-skill of regulating one's own learning (assume this system is actually pretty good), are we robbing students of the opportunity to learn how to learn this meta-skill on their own? Is it a bad thing to rob them of these meta-meta-skills? And is it worse to rob students of the opportunity to learn the meta-skill than to rob them of the opportunity to learn how to learn the meta-skill?

More broadly: this reminds me of the "AI personal assistants for computer desktop organization will inevitably infantilize us" argument that arose back when we were discussing the "how people organize their desks" paper in class. The worry is valid, assuming we don't want to be infantilized, but this outcome does not seem inevitable.

Maybe even more broadly: both of the above relate to how we decide to frame AI (e.g., as personal assistants, as 'corrections' for human irrationality/stupidity/limitations/etc., or as extensions of ourselves -- cognitive aids/augmentations ['AI as IA (intelligence amplification)']), which may in turn have real consequences for how AI systems are designed.
|
|