aato
New Member
Posts: 16
Post by aato on Mar 29, 2016 1:39:53 GMT
I echo Mary Beth's sentiment [excerpt below] that it's dangerous to draw too many generalizations just for the sake of generalizable knowledge. I know last week I was disappointed with the lack of generalizable take-aways from the Name Game, but for a paper like this, I'm comfortable that the authors are using an understandably narrow study in order to understand one aspect of a psychological phenomenon. I worry, however, about trying to draw generalizations for HCI out of studies this reductive. This paper studies a narrow, specific perceptual phenomenon. Trying to pull too broadly applicable a theme from it, I worry, leans dangerously toward making specious, non-scientific, and completely opinion-based claims. In HCI we would like to be informed by psychology and cognitive science. How do we do this without completely contorting, misrepresenting, or misunderstanding less-applied science? At a certain point it's important to be critical of our science, but if we're going to poke holes in the study design for either being overly specific or making too broad a claim, we can start pointing out much more obvious issues. For example, does this really demonstrate that differences among similar things are more salient than differences among dissimilar things, or does it only prove that this is so for undergraduates at Northwestern? Overall I'm pleased with the study design, and part of the critical work of doing science is to replicate the findings and build on the knowledge here. HCI tends not to do replication studies, and I think that's a huge problem.
vish
New Member
Posts: 17
Post by vish on Mar 29, 2016 1:48:45 GMT
Here I will relate the work of Gentner and Markman to two scenarios in HRI, where a robot continuously interacts with humans and with structured physical spaces.
First, scene identification by the robot. The robot is not fed knowledge of every physical object it may encounter while moving through a physical space. The very concept of structural similarities and differences described in the paper can help the robot learn new object structures.
Second, identifying the user's intent in speech. In the paper, the authors cite "parallel connectivity" and state that if two predicates are matched, then their arguments must also match. However, I argue that this holds only at the syntactic level, not at the semantic level. How can we resolve this anomaly in agent-human conversations?
This concept would work for syntactic analysis, but much less so for semantic analysis, especially in NLP.
Post by xiangchen on Mar 29, 2016 1:53:01 GMT
My understanding of structural similarity is 'similarity at a meta level', which I think is very relevant to software learning. For example, I hypothesize that an Adobe Illustrator (AI) user will find it easier to learn Adobe Photoshop (PS) than Gimp (assuming these two do not differ significantly in difficulty, if such difficulty can be measured purely in terms of software design, independent of the learner's background). Among other things, AI and PS share similar-but-different menu hierarchies, window layouts, and design tools, which, by the findings of this paper, will promote users' identifying differences and learning from them. Thus it is important to achieve structural similarity, or coherence, when designing a family of different software products so that users can migrate from one to another with less cognitive load.
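This hypothesis could be operationalized in a crude way: treat each application's menu hierarchy as a tree and score how much structure two trees share. A minimal sketch, with entirely made-up menu contents (not the real Illustrator/Photoshop/Gimp menus):

```python
# Toy sketch: score "structural similarity" between two menu hierarchies
# as the Jaccard overlap of their menu paths. All menu contents below are
# hypothetical, invented only to illustrate the idea.

def menu_paths(menu, prefix=()):
    """Flatten a nested menu (dict of dicts) into a set of label paths."""
    paths = set()
    for label, submenu in menu.items():
        path = prefix + (label,)
        paths.add(path)
        if isinstance(submenu, dict):
            paths |= menu_paths(submenu, path)
    return paths

def structural_similarity(menu_a, menu_b):
    """Jaccard overlap: shared paths / all paths."""
    a, b = menu_paths(menu_a), menu_paths(menu_b)
    return len(a & b) / len(a | b) if a | b else 1.0

illustrator = {"File": {"New": None, "Export": None}, "Object": {"Transform": None}}
photoshop = {"File": {"New": None, "Export": None}, "Image": {"Transform": None}}
gimp = {"Filters": {"Blur": None}, "Tools": {"Transform": None}}

print(structural_similarity(illustrator, photoshop))  # higher: shared File menu
print(structural_similarity(illustrator, gimp))       # lower: no shared paths
```

On this toy measure the hypothetical AI user would find PS structurally closer than Gimp, which is exactly the condition the paper suggests makes differences easiest to spot and learn from.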
Post by nickpdiana on Mar 29, 2016 2:26:15 GMT
I keep thinking about the implications of this result socially or culturally, and it gets weird. First, it's worth mentioning that there are obviously important differences between the very controlled experimental task explored in the paper and the real-world phenomenon of identifying differences on the basis of ethnicity, culture, religion, or race (when the stakes are higher). That said, I couldn't stop hearing mantras like "We're more alike than different," and "Did you know all three religions worship the same god," and other stuff my grandmother posts on Facebook. Seriously though, if we take the results at their word (provided this phenomenon generalizes), wouldn't they suggest that a large amount of animosity between two groups might (ironically) indicate that those two groups are actually very similar? Personally, I think most of the time it's more complicated than that, but at least at the level of "The Sneetches" on the beaches this makes some sense.
Post by rushil on Mar 29, 2016 2:46:47 GMT
I agree with the last bunch of people stating that the paper illustrates that it's easier to pick out distinctions between similar things. This kind of research gives us a window into how a part of the human mind works. After reading the paper, the first two things that came to mind were: (1) Metzler mental rotation; and (2) analogy. Both rely on structural alignment in their own way and have been the basis for some interesting HCI work. For example, the Metzler rotation principle has been used to design rehabilitation tools for individuals whose cognitive abilities have been impaired. Therefore, there is definitely impact within the HCI field -- maybe not directly, but experiments like these form the basis for theories and principles that are then used to develop something "cool" within HCI.
Post by JoselynMcD on Mar 29, 2016 2:55:37 GMT
Welp, this was a challenging paper for me, in that I was grasping at straws trying to align this paper's insights with my own realm of research. Mary Beth's response was right on the nose when she extrapolated that the UI applications are some of the most obvious uses. While reading this paper, I too thought of the potential UI uses to assist users with understanding a new app or interface. Beyond that, I struggled a bit to think of ways to apply this research to my own work. I'm looking forward to the class discussion on this topic, as I hope it will expand my thinking in this area.
Post by anhong on Mar 29, 2016 3:06:23 GMT
I agree with Xu that the reason differentiating similar terms is easy is that they are often placed together for comparison. Hotel and motel are often introduced together, and we often make decisions between them; for car and roof, there's not much to compare. I do think this is highly related to categorization, since placing similar items together can assist with picking out small differences. In UI design, placing completely different items in the same list confuses users; that's why lists are categorized into sections in iOS Settings. Maybe that's also why apps can be organized into folders, pages, or even colors, so that people can easily spot the differences against a common baseline.
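The sectioning idea here is mechanically simple: group items by a shared category so that within-section comparisons happen against a common baseline. A minimal sketch, with a hypothetical settings list and category labels (not Apple's actual data):

```python
# Toy sketch: group a flat list of settings into sections by category,
# mirroring the iOS-Settings-style layout described above. The items and
# categories are hypothetical examples.
from collections import defaultdict

def group_into_sections(items):
    """items: list of (name, category) pairs -> dict category -> names."""
    sections = defaultdict(list)
    for name, category in items:
        sections[category].append(name)
    return dict(sections)

settings = [
    ("Wi-Fi", "Connectivity"),
    ("Bluetooth", "Connectivity"),
    ("Cellular", "Connectivity"),
    ("Display & Brightness", "Appearance"),
    ("Wallpaper", "Appearance"),
]

print(group_into_sections(settings))
```

Within the "Connectivity" section, Wi-Fi vs. Bluetooth vs. Cellular are alignable, so their small differences stand out; mixing "Wallpaper" into that list would force a non-alignable comparison.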
k
New Member
Posts: 9
Post by k on Mar 29, 2016 3:28:33 GMT
I agree with Michael R. and Mary Beth that this paper lacks contextualization of the findings. It could be strengthened by articulating the motivating arguments for the research questions. Why should we expect participants to respond differently in the two cases of comparing similar and dissimilar items? How does answering the authors' research questions fit within larger questions of categorization?
Also, I am not sure I understand what the authors mean by alignable differences. I'd like this idea fleshed out more: grounding in prior literature, how it is operationalized in the study, etc. The authors cite their own prior work, but they detail their procedure and what the participants did without critically summarizing what the results mean, how they fit within the current understanding of categorization, and the implications for the present work. Participants listing commonalities between pairs is not enough to get me on board with the particular claim the authors want to make.
Post by kjholste on Mar 29, 2016 3:37:22 GMT
This and subsequent studies seem quite relevant to instructional design. For example, the similarity of graphical representations (along various dimensions) might be used to inform the sequence (including blocking versus interleaving) in which they are presented to students in order to promote recognition of important distinctions and commonalities. On another note: like Mary Beth and Alexandra, I also tend to worry about "trying to draw generalization for HCI out of studies this reductive". A few years ago, I felt that narrow, specific perceptual phenomena were the only phenomena worth studying. This was primarily for epistemological reasons... it was often possible to draw strong causal claims from experiments investigating these relatively restricted/isolated phenomena. Around the time I became interested in HCI (shifting away from cognitive science), I moved towards another extreme viewpoint: preferring to study much more complex phenomena, at the cost of scientific interpretability (i.e. by automating science, and automatically exploring enormous design spaces to optimize systems towards some supposedly-desirable objective). And at this point, I'm more open to the idea that a mixture of these approaches to research may be ideal. However, I'm not sure what mixture, and how best to make inferential leaps from restricted, controlled studies (e.g. in cognitive science) to inform the design of much more complex systems (in HCI) in a principled manner -- rather than cherry-picking from the relevant literature and extrapolating in a self-serving manner (given that such literature tends to vastly underdetermine the design of complex systems).
Post by stdang on Mar 29, 2016 3:58:56 GMT
I think the 5-minute limit for the first experiment was appropriate for the given task. The objective of the time limit is to increase the likelihood that participants either persevere on harder items and as a result answer fewer items overall, or persevere less on difficult items and thus answer more of the fast items. This tendency to answer fast items will create a differential between the groups if one condition (high similarity or low similarity) is in fact differentially difficult. The interesting thing about 5 minutes is that it is appropriately scaled for completing 40 pairs, and that it is long enough to tax memory capacity such that priming effects are unlikely to be dominant, but not so long that cognitive fatigue is likely to set in. Changing the study to compare perceptual features would change the nature of the task: it would no longer tap into the mental representations of the concepts, and thus would not provide insight into the desired cognitive process. I thought the study design was quite clever and well done.
judy
New Member
Posts: 22
Post by judy on Mar 29, 2016 4:06:22 GMT
I classify this study under the category "cool that someone identified this, but also it's pretty obvious," and I'm not really sure how to evaluate it. I wonder (riffing off of Qian's point) if it's difficult for participants to point out the differences between dissimilar things because the differences are so obvious. If you asked me what's different between "light bulb" and "cat," I'd reply, "Why?" and look at you dumbfounded, like, "What's the catch? What are you really after?" If I can't come up with a sensible structure by which to compare "light bulb" and "cat," then I give up. Am I overwhelmed?
I also wonder about the role of language here. In the discussion, the authors reference a study asking children to draw pictures of "people who 'do not exist'" and then find it remarkable that children drew a person with two heads. But the prompt (if referenced correctly in the paper) was not to draw "a creature that does not exist" or "a living thing that does not exist," but "people." Of course if you anchor them with the idea of "people," they won't draw dragons. Is there something special in this finding that I'm missing?
Post by julian on Mar 29, 2016 4:58:50 GMT
I liked this paper; I think it has a very solid method, and the way they corroborated their results with a second experiment was very well done. It is also interesting to notice the relationship between this paper and the categorization reading. I think in the article they may have found a human categorization model, or at least one that is not far from it. Basically they are describing the general mechanism for finding similar and dissimilar pairs; all that is left is to describe a way to systematically group elements using this mechanism, and we have an almost-human clustering method.
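That "almost-human clustering" idea can be sketched as greedy agglomerative clustering over a pairwise similarity score. The concepts and attribute sets below are invented for illustration, not taken from the paper:

```python
# Toy sketch: cluster concepts by greedily merging the most similar pair
# until similarity drops below a threshold. Attribute sets are made up.

def similarity(a, b):
    """Jaccard overlap of two attribute sets."""
    return len(a & b) / len(a | b)

def cluster(items, threshold=0.3):
    """items: dict name -> attribute set. Clusters start as singletons."""
    clusters = [({name}, attrs) for name, attrs in items.items()]
    while len(clusters) > 1:
        # find the most similar pair of clusters
        i, j = max(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: similarity(clusters[ij[0]][1], clusters[ij[1]][1]),
        )
        if similarity(clusters[i][1], clusters[j][1]) < threshold:
            break  # remaining pairs are too dissimilar to align
        names = clusters[i][0] | clusters[j][0]
        attrs = clusters[i][1] & clusters[j][1]  # keep only shared structure
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append((names, attrs))
    return [sorted(names) for names, _ in clusters]

concepts = {
    "hotel": {"building", "lodging", "paid", "temporary"},
    "motel": {"building", "lodging", "paid", "roadside"},
    "car": {"vehicle", "wheels", "engine"},
    "bike": {"vehicle", "wheels", "pedals"},
}
print(cluster(concepts))
```

Hotel/motel and car/bike merge because each pair shares most of its attributes, while the cross-pair similarity is zero, echoing the paper's hotel-motel versus car-roof contrast.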
Post by Cole on Mar 29, 2016 6:14:41 GMT
This sort of result seems obvious in hindsight, and I think that is often the mark of great research (or research that isn't novel at all ;-)). There is a reason teachers usually say to "compare and contrast" instead of "compare or contrast". Thinking about it from a social perspective for a second, this sort of reasoning could explain why small differences can be so divisive. People fight over tiny differences in theological doctrine, political policies that are much more similar than they are different, and which burrito joint in SF is really the best. How could that be used from an HCI perspective? Perhaps highlighting the similarities between what people perceive to be very different concepts can bring more focus on outside ideas. For example, instead of Yelp just highlighting Burrito joints "similar to" the place I'm looking at, they could also show "other hole-in-the-wall restaurants nearby". This wouldn't appear to be a similar set of results to me, but it would highlight similarity along a new dimension I hadn't considered, allowing me to really dig into the differences between them.
I think Netflix is pretty good at this. "Fight-the-System 60s Movies" is not a movie genre I would have come up with on my own, but seeing the movies they place in there allows me to see similarity along some axis ("60s movies" or "Fight the System" or both) and then contrast them.
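The recommendation idea above, surfacing items that are similar along a dimension the user hadn't considered, can be sketched as a simple filter. All restaurant data here is made up for illustration:

```python
# Toy sketch: recommend items that match the anchor on a non-obvious
# attribute ("vibe") while differing on the obvious one (cuisine).
# The restaurants and attributes are hypothetical examples.

def similar_on_axis(anchor, candidates, axis):
    """Candidates matching the anchor on `axis` but with a different cuisine."""
    return [
        c["name"]
        for c in candidates
        if c[axis] == anchor[axis] and c["cuisine"] != anchor["cuisine"]
    ]

anchor = {"name": "La Taqueria", "cuisine": "burrito", "vibe": "hole-in-the-wall"}
places = [
    {"name": "El Farolito", "cuisine": "burrito", "vibe": "hole-in-the-wall"},
    {"name": "Yamo", "cuisine": "burmese", "vibe": "hole-in-the-wall"},
    {"name": "Gary Danko", "cuisine": "american", "vibe": "fine-dining"},
]

print(similar_on_axis(anchor, places, "vibe"))  # ['Yamo']
```

The shared "hole-in-the-wall" axis gives the user a common baseline, which, per the paper, is exactly what makes the remaining differences easy to dig into.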
Post by Amy on Mar 31, 2016 14:14:46 GMT
Judy's comment was helpful for me in contextualizing this paper - when someone asks you the differences between a cat and a lightbulb, you're likely not to know where to start, because it's generally not meaningful to differentiate things that have no common ground. In trying to think of how this applies to my research, and in response to Yang's comment about related work, I think this paper can be used as a strategy for how to effectively tie in seemingly unrelated fields. For example, one of my projects contrasts how professors and role-play gamers use technology. The argument is more persuasive if we first highlight how professors and role-play gamers are similar, how they share common ground, before we highlight meaningful differences between the two groups.