Post by rushil on Apr 19, 2016 4:24:25 GMT
This immediately made me think of a puzzle we used to be given as kids. The question asker had a pattern in mind and gave the kids a sequence that satisfied it. The kids could then propose their own sequences, and the question asker would say whether each one fit the pattern or not; the objective was to figure out the pattern. The point of the puzzle was that the pattern was chosen so that it was much easier to discover if you tried out sequences that did *not* fit it. Probing for what fails helps you set up boundaries and make a decision quickly.
I feel like we tend to do the same with online reviews. The very tendency to look at negative reviews helps us determine which criteria a product fails on, and if one of our deciding criteria is among them, it is easy to reject the product. Using the negative valence helps us reduce the search space while making a decision.
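To make the idea concrete, here is a rough sketch of what I mean; the products, complaints, and deciding criteria are all made up for illustration:

```python
# Made-up sketch: pruning a choice set using negative evidence.
# A product is rejected as soon as a negative review hits one of the
# buyer's deciding criteria -- no need to weigh its positives at all.

deciding_criteria = {"battery life", "build quality"}  # invented dealbreakers

products = {
    "phone_a": ["screen too dim", "poor battery life"],
    "phone_b": ["slow shipping"],
    "phone_c": ["cheap build quality", "bad camera"],
}  # invented negative-review summaries

def survives(negatives):
    # Keep the product only if no complaint touches a deciding criterion.
    return not any(c in neg for c in deciding_criteria for neg in negatives)

remaining = [name for name, negs in products.items() if survives(negs)]
print(remaining)  # ['phone_b'] -- the search space shrinks quickly
```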
Post by julian on Apr 19, 2016 4:52:52 GMT
Regarding the alternative explanation, I would have liked that to be the focus of the paper. Although DIV and the other methods seem good at partially explaining some of the results, I also believe that constraints should be taken into account, and that they have a quite profound effect.
About reviews, I usually do not buy anything rated below 3.5 on Amazon. I also skim the reviews quickly and try to find patterns: several good reviews with very similar text could be an indicator that the seller is shilling and writing its own reviews, while a pattern in the negative reviews could indicate a very real problem with the product.
Post by cgleason on Apr 19, 2016 5:02:20 GMT
1) Compare the (predictions of the) DIV and/or EVSI models against your own intuitions about how you make choices based on online rating information.
I often seek out 4-star reviews because 5-star reviews are often not nuanced. The writer has obviously drunk the Kool-Aid and become a fanboy for the product, meaning that I can never learn anything from them; 1-star reviews have the same problem. 4-star reviews, on the other hand, tell me what to like with some measure of caution, which suggests that the author thought carefully about their response. Or at least, that's my perception of 4-star reviews.
4.b) Do you feel the models discussed in this paper adequately consider the potential role of trust and other social information, in guiding human inferences and decisions?
I don't really think it gets at the expertise factor. I like the additional information Amazon and Yelp give me, such as when the review was posted; it lets me understand how relevant the information is to the present day. Likewise, reviews on Reddit (often not star ratings) tend to be of high quality because membership in a subreddit self-selects for good reviewers (passionate people). I will trust a Google engineer's review of USB-C cables on Amazon more than any other anonymous review. This paper misses all of that.
Post by Adam on Apr 19, 2016 11:25:59 GMT
1) Compare the (predictions of the) DIV and/or EVSI models against your own intuitions about how you make choices based on online rating information.
When I am searching for a product or trying to find a restaurant, for example, and I turn to online reviews and ratings, the way I perceive those ratings depends on how familiar I am with the product or service and with the search space. If I am not all that familiar, I usually begin in EVSI-model fashion, gathering information and comparing alternative products very quickly; that gives me a relatively broad view of the alternatives I can choose between. Then I shift toward the DIV model once I have selected a smaller subset of those alternatives, in order to pick the best option. During the EVSI stage I am not going into the nuances of the reviews (e.g., reading negative reviews), but simply looking at the relative proportion of positive to negative reviews. Only in the DIV stage do I really delve into the details and read specific reviews.
Also, I actually tend to put more weight on positive reviews than on negative ones. To Alexandra's point about reading a really bad review for a restaurant that otherwise has really good ratings: that negative review seems to have a big effect on her decision to eat there, but I will typically ignore such bad reviews as long as they are isolated instances (e.g., they show up in only a few reviews). A very positive review will always affect my selection more than a very negative one.
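Roughly, the two-stage process I'm describing looks something like this; the options, review counts, and the 0.8 cutoff are all invented for illustration:

```python
# Invented illustration of the two-stage process: a quick EVSI-style
# screen on the proportion of positive reviews, then a DIV-style
# close read of only the surviving alternatives.

options = {
    "bistro": (120, 15),   # (positive reviews, negative reviews) -- made up
    "diner":  (40, 35),
    "cafe":   (200, 10),
}

# Stage 1 (EVSI-ish): broad, cheap screen on the positive proportion.
shortlist = [name for name, (pos, neg) in options.items()
             if pos / (pos + neg) >= 0.8]

# Stage 2 (DIV-ish): only now invest in reading actual review text,
# ignoring isolated bad reviews per my rule of thumb.
for name in shortlist:
    print(f"read detailed reviews for {name}")  # bistro, cafe survive
```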
Post by vish on Apr 20, 2016 3:36:16 GMT
Some of the football (soccer for Americans, sigh!) blogs and websites that are known for providing player ratings fall under the DIV model. It has the feel of a PROLOG/engineering-model implementation. I like it when the search takes a "constraint satisfaction" approach: it is so much easier to pick things out than to go through the arduous process of searching and reading every detail of the information.
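Something like this toy sketch is what I have in mind; the players, positions, and numeric constraints are all invented:

```python
# Toy sketch of the "constraint satisfaction" style I mean: declare the
# constraints up front, PROLOG-style, and let them prune the candidates.
# All players, positions, and numbers here are invented.

players = [
    {"name": "Player A", "position": "ST", "rating": 7.8, "age": 24},
    {"name": "Player B", "position": "CM", "rating": 8.3, "age": 29},
    {"name": "Player C", "position": "ST", "rating": 6.9, "age": 21},
]

constraints = [
    lambda p: p["position"] == "ST",   # must be a striker
    lambda p: p["rating"] >= 7.5,      # must rate at least 7.5
    lambda p: p["age"] <= 26,          # must be 26 or younger
]

# A candidate survives only if it satisfies every constraint.
picks = [p["name"] for p in players if all(c(p) for c in constraints)]
print(picks)  # ['Player A']
```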
Post by xuwang on Apr 21, 2016 2:57:17 GMT
I agree with a lot of previous posts in that I look at ratings and reviews when I buy things, and if I'm deciding between two products, one of which has reviews and the other doesn't, I'll favor the one that has reviews, because I think that gives me more information about the product. When I'm comparing products, I tend to look at negative reviews more often, because I want to know whether the product has some disadvantage that I can't accept, so that I can narrow down my choices; I always open a lot of tabs and then rule out options. As Amy mentioned, there's a difference between overall satisfaction ratings and more specific ratings: I also pay more attention to ratings of whether clothing runs small or large. And if a review has a picture attached, I'll favor that product more. When I was trying to buy two jars for cookies, I had no idea what size a jar was from its description, and I ended up buying the one that had a customer-uploaded picture showing how many chocolates the jar could hold.
Post by mrivera on Apr 21, 2016 15:06:31 GMT
I *really* rarely look at online ratings of things because I find it extremely stressful. When I do look online, I think I fall in the DIV-model camp, because I'm struggling to make a choice. But again, that's only when I'm desperate, and more when I'm looking for something new (vs. trying to differentiate between known options). For example, when I moved to Pittsburgh last summer, I really wanted to eat at new restaurants every week, so I sometimes browsed Yelp and sometimes browsed TripAdvisor, trying to "gather information that would widen a gap" while simultaneously discovering the things I needed to widen the gap between. The reason I find this stressful is that it's really impossible for me to discount negative reviews: for a restaurant that is extremely highly rated but has one hugely negative review, I can never forget about that review, and I end up wanting to go nowhere. So for me the model is more like picking fruit into a basket. Once the basket is full, I keep examining each piece, and if I find even a tiny bruise or blemish I throw it out of the basket, until nothing is left and I'm sad, and I have to eat a piece of fruit off the ground where I threw it, and now I'm sad about eating the dirty fruit.

Very interesting - I ALWAYS look at reviews of everything before going anywhere or purchasing anything. Unfortunately, my spirit is not very adventurous, and I really want someone to just tell me what to do (most times), so the idea of going to a restaurant and looking through the menu stresses me out. I'd rather have you pick the restaurant AND tell me what to try (thank you aato), or have a reviewer tell me whether the place is worth going to and what is worth trying there. I guess I fall more in the BSM camp, which pretty much echoes the maximizer/satisficer paper.

While I also look at reviews of almost everything, I inevitably decide that the raters' preferences are too polarized. If you look at any product that has been around for a decent amount of time, the reviews tend to be either 1-star or 5-star (averaging around 3 or 4). Reading the reviews gives a good overview of someone else's experience, but their experience is not necessarily going to be mine. This makes me just cave in and decide to "test" the product out myself. Interestingly enough, I think a great return policy (e.g., Amazon's) makes it really easy for me to be dissatisfied with a product and still get my money back (or potentially test another one). For restaurants, I tend to just go and pick something from my top two choices, but that ends up putting me in the maximizer's regret :I