Post by JoselynMcD on Mar 29, 2016 2:32:07 GMT
Like others in this thread, I intuitively struggle to believe that there is little to no cost to leaving structuring until later in the user's action sequence. The experimenters did their due diligence in how they performed the study, and I feel their findings are valid, yet I don't think this would apply as readily to other domains. Generally, when using a tool designed to abstract out user behavior, the real-world application will undoubtedly differ. I'd like to see a study that examines sensemaking "in the wild," so to speak, and then applies many of the same analysis tools this paper uses to suss out the true benefits and costs of structuring information during the sensemaking process. P.S. This wasn't mentioned in the paper, but Pinterest is increasingly being used by many people as an ad-hoc tool for comparing items (or trips, services, etc.), bookmarking, organizing, and sharing connected nodes - and it's extremely social in nature. I think it might be worth looking at how people are developing or re-orienting other technologies for their own sensemaking, and then creating models from there, as most people I know feel disappointed with the current options. Collaboration, anyone?
Post by anhong on Mar 29, 2016 2:48:08 GMT
I think that when the dimensions are overwhelming, simple structuring and categorizing of information might not be effective. Here, visualization techniques can play a significant role. For example, many electronics shopping websites let us add multiple objects directly for side-by-side comparison. Amazon also reduces the dimensions to more user-facing ones, such as pricing and user satisfaction. These visualizations give users more direct feedback. I agree with the others that the task was not realistic or complicated enough, and I think the tool can be another factor.
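To make the side-by-side idea concrete, here is a rough sketch of what such a comparison view could look like (Python, with invented camera data; pandas is assumed to be available - this is just an illustration, not how any of these sites actually work):

```python
import pandas as pd

# Hypothetical clips: each dict is one item a user saved, with whatever
# dimensions they happened to record while foraging.
clips = [
    {"item": "Camera A", "price": 449, "rating": 4.5, "zoom": "10x"},
    {"item": "Camera B", "price": 329, "rating": 4.1, "zoom": "5x"},
    {"item": "Camera C", "price": 599, "rating": 4.7},  # zoom never clipped
]

# Reduce to a few user-facing dimensions and show the items side by side,
# one column per item; missing values stay visible as NaN.
table = pd.DataFrame(clips).set_index("item")
comparison = table[["price", "rating", "zoom"]].T
print(comparison)
```

The point is just that once items share a handful of dimensions, the comparison view gives direct feedback about gaps (the NaN cells) as well as values.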
One question I have about the measures is how informed and confident users are in their decisions. Since they are not actually making decisions about buying a product, it might only be a sensemaking tool rather than a decision-making one, and I think there's a gap between the two.
Post by kjholste on Mar 29, 2016 2:55:22 GMT
Like Joseph, Felicia, and others, I'd also be interested in better understanding under what circumstances the creation of explicit categories might be maladaptive/suboptimal. And if these circumstances exist, is it always because structuring is not useful (or is even harmful) until a task (which leverages that mental structure) is better defined? Or might it be because humans can sometimes implicitly structure information in ways that are more adaptive in the long term (e.g., perhaps more flexible for use in a range of likely future tasks)?* If there are interesting cases where the latter is true, then might tools that adaptively support different means of structuring (and perhaps also visualizing) information according to the nature of the task at hand (and possibly other contextual variables) overcome some of the issues we expect in messier contexts? In other words: it would be interesting to try to answer Felicia's question ("... I wonder if there are any computerized tools that could help with [long-term/subconscious/delayed sensemaking]?") via a partial answer to Joseph's question ("What are our subconscious or 'gut' modes of structuring things?").

* For indirectly related work, see this cognitive modeling paper and the prior empirical work it cites: Kemp, C., & Tenenbaum, J. B. (2008). The discovery of structural form. Proceedings of the National Academy of Sciences, 105(31), 10687-10692. www.pnas.org/content/105/31/10687.full
Post by judy on Mar 29, 2016 2:56:04 GMT
I am surprised every time I start a project...research project, group project, lit review, design...by how strong the urge is to create structure from the get-go, and how every single time I do, I end up throwing out the whole structure and starting again in a less-structured way. Why is that urge to structure so strong? Is it just wanting to get an answer and be done with it? Is it out of some sense of self-importance or genius, that I think a structure can spring from my mind without first gathering/brainstorming? Is it because I'd feel lost without one?
Recently, I've also been wondering if there's value in giving in to that urge to create structure from the beginning--if I also remember not to be attached to that structure. Last year a group of us in PNT played around with the idea of "prototyping the search," creating a draft of a model early on. The idea came up again a couple of weeks ago when a group of researchers were shaking their heads at the horrible waste of time their early work had been. But is it really a waste of time? Perhaps for complicated tasks, we need some structure not only to get us started, but to give ourselves something to respond to, to shape, to poke holes in.
Post by fannie on Mar 29, 2016 3:07:19 GMT
I agree with what Felicia and some others pointed out about sensemaking without a specific goal. It also reminds me of moments when I'm talking to someone and I think "oh, I remember this related article I saw," and it becomes part of forming some model of some concept in my head (that I have no goal for). For subconscious sensemaking, there's somewhat of a start with auto-completion of related keywords, or recommendations based on what other people are looking for--I think someone in our P&T class had suggested something like this for the lit review, or something related to "you don't know what you don't know". Or, as others mentioned in the desk-organizing thread, using some kind of networked representation as a model and accessing it through relationships between the concepts. Adding the social aspect, like Joselyn mentioned, could be really interesting, especially when you save an article someone shared with you because you thought it was interesting, or you see things on Pinterest or Twitter or Tumblr that you pin/retweet/reblog, as part of making sense of things as they come up in your life. But since this is more of a constantly changing mental model in your everyday life, I wonder if an automated system might be able to tell when it is important to review "foraged" subconscious information.
Post by rushil on Mar 29, 2016 3:27:17 GMT
This is an interesting paper. Intuitively, it does make sense that structuring would be more useful toward the end. Until a person has seen a pattern, it is hard to draw a conclusion. This somewhat aligns with the other reading, where we can easily pick out differences between similar things. Once we start seeing similar things, it is easy to set out a structure by noting the differences, and hence it makes sense that having a structure toward the end was productive.
I like Ana's point about tasks that explicitly don't require building a mental model. If we are dealing with a game of imperfect information, then building a structure/mental model is detrimental. This is tangential, but I would be curious to know what the chaotic alternatives are for situations with imperfect information.
Post by stdang on Mar 29, 2016 4:15:33 GMT
I agree with Michael and a couple of others that the specific task likely contributed to the finding that there was no net cost created by Clipper. The nature of the task did not necessarily require difficult learning and organization of newly acquired concepts. The cognitive load of Clipper during a learning task seems like it would likely decrease productivity. So using Clipper during a literature review may not yield the same result, though it would be interesting to see the outcome in that problem context.
Post by Brandon on Mar 29, 2016 4:53:16 GMT
I'm really curious how something like this would apply to other information domains. While there is discussion of information foraging being applicable to many domains (patient diagnosis, research literature), this tool seems really structured toward making within-category comparisons. While I think that is useful, I don't know that it's quite as wide-reaching as I expected from the intro discussion (which frames it as a tool for capturing information).
I think a lot of the discussion of whether delaying the structuring has additional costs is probably tied up with this type of comparative domain. I could very well imagine a similar study with apartment hunting, where the mental model of what is important is changed by what features are available. However, taking Anthony's suggestion about planning a trip, which doesn't really have this zero-sum comparison, might yield a very different result.
It seems like there are probably a lot of different sense-making mental processes that are going on and it might be that a tool appropriate for one just won't work that well in another context.
Post by vish on Mar 29, 2016 4:53:35 GMT
Firstly, conveying a mental model of the tool/system to users should happen intuitively through the design of the tool/system itself. Secondly, building mental models of humans into a system will be tough, given the level of detail and the common ground that must be covered. The user-modelling approach would require the system to have knowledge of the various choices the individual makes. Having some common ground for a system to understand its users is necessary, but the in-depth approach of coercing users' mental models into the system is difficult. In this paper, the Clipper tool does a good job of covering most of the common ground possible. However, the shortcoming, as many have pointed out, is the difference in the information clipped between the start of the study and the end of the study.
Post by julian on Mar 29, 2016 5:27:10 GMT
@michael rivera: I think the task was just right for the hypothesis it was set to test. Remember that this is about sensemaking over a sensible amount of time, not over the months or years that would probably be required to understand string theory.
@mary Beth: I understand your concern about not comparing against a base case; however, I believe the vision of the paper is to be able to capture and share (over the internet) this sensemaking process, hence a tool is definitely necessary to scale up to millions.
Conditions 2 and 3 are interesting, and they make sense given our previous readings: as a concept is forming, initial exemplars or even properties of the exemplars become part of the mental model, and as the user learns more, this model gets more refined, so for C2 and C3 the models are better. In a way, maybe dissimilar stuff later makes more sense once a core structure or pattern has been recognized.
It is interesting that the car seat and camera times do not follow the same pattern across the different conditions. This probably reveals some mediation effect between the complexity (time) and the sensemaking topic.
Post by Cole on Mar 29, 2016 5:44:44 GMT
I totally buy that putting structure off until later results in better categories. Like Mike said, people are probably familiar with how to shop, but that doesn't mean they are aware of the relevant dimensions of a specific product space (except cost, perhaps). Structuring initially likely gives a lot of weight to initial categories and dimensions that just shouldn't have been created in the first place (structuring inertia). When people continue to tag, they will likely throw data into the same categories just because they have already made the effort to create them. I think that putting things off to the end allows the sensemaker to really get a feel for how important different dimensions are relative to each other (and to themselves). Additionally, they get a feel for what the ranges of those dimensions are. Now, I don't believe that just getting a sense of the weights and ranges of various product dimensions allows shoppers to pick a product any better; it's hard to visualize a multidimensional space like that.
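Just to make the weighting idea concrete, here's a toy sketch (Python; the products, dimensions, and weights below are all invented, not from the paper) of one crude way to collapse that multidimensional space once you finally know which dimensions matter and how much:

```python
# Toy example: score products once the relevant dimensions and their
# relative importance (weights) are known. All values are made up.
products = {
    "Camera A": {"image_quality": 0.9, "battery": 0.6, "price": 449},
    "Camera B": {"image_quality": 0.7, "battery": 0.8, "price": 329},
}
weights = {"image_quality": 0.5, "battery": 0.3, "price": 0.2}

def score(item):
    # Normalize price so cheaper is better, on a rough 0-1 scale.
    price_score = 1 - item["price"] / 1000
    return (weights["image_quality"] * item["image_quality"]
            + weights["battery"] * item["battery"]
            + weights["price"] * price_score)

for name, item in sorted(products.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(item):.2f}")
```

Of course, the hard part is exactly what I'm claiming: you only know sensible weights and ranges after you've foraged enough, which is why late structuring seems to win.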
P.S. Shopping is still one of the areas where I think sensemaking is sorely needed, because (thanks to the Internet) I now put way too much research into buying any product. I usually have to read dozens of reviews before I am satisfied buying coffee filters.
Post by mmadaio on Mar 29, 2016 13:04:14 GMT
One of my big issues here is that conclusions are being drawn about the structure of participants' mental model of the information space, when what is actually being evaluated is a linear list of dimensions along which individual items are judged, rather than a network-based representation with some semantic information encoded in the edges [1, 2]. As we read in the concepts chapter, a feature list is not the most robust representation of a single concept, much less of the conceptual structure and inter-relationships of elements of a mental model of an information space. Further, mental models are inherently internal, fragmentary, and dynamic, so to ask whether we need mental models is really to ask whether we need externally represented versions of them.
I agree with Mike R. (et al.) that for a simple information-seeking and decision-support task such as searching for a camera to evaluate and purchase, this would be useful (because a linear list of feature dimensions is simpler than the conceptual structure of a mental model). But for open-ended sensemaking tasks such as exploring a new research domain, the conceptual structure would need to be more complex and easily revisable to reflect the dynamic nature of mental models. Now, Kittur et al. did incorporate revision of their list of dimensions, so this seems to address people's concerns about "getting locked into" a mental model representation with such a tool. If a more richly structured conceptual network were implemented to represent the mental model, similarly allowing users to revise its structure as their mental model changed seems like it would be beneficial.
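As a rough illustration of what I mean (a hypothetical sketch in Python using networkx; the camera-domain concepts and relation labels are invented, not anything from the paper), a networked representation lets you label the relationships themselves and revise them later, which a flat list of dimensions cannot do:

```python
import networkx as nx

# Hypothetical concept graph for a camera-shopping mental model.
# Edges carry semantic relation labels, which a flat feature list loses.
g = nx.DiGraph()
g.add_edge("sensor size", "image quality", relation="influences")
g.add_edge("image quality", "low-light performance", relation="includes")
g.add_edge("price", "sensor size", relation="constrains")

# Revising the model as understanding changes is just graph editing.
g.remove_edge("price", "sensor size")
g.add_edge("budget", "sensor size", relation="constrains")

for u, v, data in g.edges(data=True):
    print(f"{u} --{data['relation']}--> {v}")
```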
Finally, I also agree that time does not seem like the best metric here. Are we really trying to make the sensemaking process "more efficient"? For researchers, this process will continue our whole lives. Why would we want to speed it up if that comes at the cost of the richness of our representations? Perhaps some other metric would be more useful here, like "new insights gained" (the "unexpected phenomena and illuminating insights" of the Malone paper, or something).
Finally (pt. 2), for our P&T project, we automatically generated such a network structure, though of papers for a lit search, not with concepts as the nodes. We also found that efficiency or accuracy metrics (number of papers found, number of "important papers" found) were not as interesting as users' perceptions, such as feeling like they were getting more done, understanding the information space better, or having some new thought about their search process or the domain. However, we also did not allow the user to change the structure of the network after it was generated, an area we wanted to explore more in the future.
[1] J. Langan-Fox, S. Code, and K. Langfield-Smith. Team mental models: Techniques, methods, and analytic approaches. Human Factors: The Journal of the Human Factors and Ergonomics Society, 42(2):242–271, 2000.
[2] D. Gentner. Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2):155–170, 1983.
Post by Amy on Mar 31, 2016 13:58:47 GMT
I agree with Judy's point that relates to our PNT project last year - sometimes it's helpful to make a bad structure, just so you can find where the holes and problems are. This relates to Michael's point that time may not be the best metric for evaluating sensemaking tasks. While speed may be more valuable if the task is short-term decision making, when the task is instead a long-term sensemaking task that doesn't revolve around making a particular decision, speed doesn't seem to matter. So structuring and restructuring can happen frequently - I wonder if there could be a study comparing early/late structuring to frequent restructuring. Also, one problem that came up in our PNT project was how to evaluate the sensemaking task if we didn't use time. Even for a decision-making task, it can be challenging to decide what the best "choice" is, but for a sensemaking task that isn't decision-related, I'm not sure how it would be evaluated.