vish
New Member
Posts: 17
|
Post by vish on Apr 5, 2016 1:50:30 GMT
Knowledge representation in human-robot interaction:
In HRI, knowledge representation is essential for performing tasks. Although perception helps a robot represent spatial data and handle other complex tasks, it may still fall short of representing some spatial or action-based knowledge. Representing the knowledge for tasks that cannot be solved with perception alone can be done with the help of users: the robot interacts with users to obtain the knowledge it needs to perform tasks. In short, a collaborative approach to acquiring and representing knowledge.
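To make the idea concrete, here is a minimal Python sketch of that fallback loop: try stored knowledge, then perception, then ask the user and remember the answer. Everything here (the KnowledgeBase class, perceive, ask_user) is hypothetical and not from any particular HRI framework.

```python
# Minimal sketch of collaborative knowledge acquisition: when perception
# leaves a gap in the robot's knowledge, fall back to asking the user.

class KnowledgeBase:
    def __init__(self):
        self.facts = {}  # e.g., {("mug", "location"): "kitchen shelf"}

    def lookup(self, entity, attribute):
        return self.facts.get((entity, attribute))

    def store(self, entity, attribute, value):
        self.facts[(entity, attribute)] = value

def perceive(entity, attribute):
    """Stand-in for the robot's perception pipeline; returns None on failure."""
    return None  # pretend perception could not resolve this query

def ask_user(entity, attribute):
    """Stand-in for a dialogue turn with the user."""
    return input(f"I can't tell: what is the {attribute} of the {entity}? ")

def resolve(kb, entity, attribute):
    # 1. Try stored knowledge, 2. try perception, 3. ask the user and remember.
    value = kb.lookup(entity, attribute) or perceive(entity, attribute)
    if value is None:
        value = ask_user(entity, attribute)
        kb.store(entity, attribute, value)  # learned knowledge is reusable
    return value

kb = KnowledgeBase()
print(resolve(kb, "mug", "location"))  # falls through to the user this time
print(resolve(kb, "mug", "location"))  # now answered from stored knowledge
```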
|
|
toby
New Member
Posts: 21
|
Post by toby on Apr 5, 2016 1:53:22 GMT
This article gives a clear overview of what analogy is and how our understanding, and the theoretical models used to characterize analogy, have evolved. It also lays out the modern understanding of analogy as a complicated process of retrieving, mapping, inference, and relational generalization. I find the computational models of analogy particularly interesting, and I feel the models (SME, ACME, LISA) each capture some aspect of analogy. I'll be very interested to see how we can apply those models to structured knowledge bases like Wikidata/DBpedia to find analogies between concepts.
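As a toy illustration of what that might look like, here is a small Python sketch that ranks candidate analogs for a concept by shared relational roles over hand-written triples. This is my own made-up data and scoring, far cruder than SME; a real system would pull the triples from the Wikidata/DBpedia SPARQL endpoints.

```python
# Toy analogy retrieval over knowledge-base triples: rank entities by how
# well their relational roles overlap with a source concept's roles.

triples = [
    ("sun", "attracts", "planet"), ("planet", "orbits", "sun"),
    ("nucleus", "attracts", "electron"), ("electron", "orbits", "nucleus"),
    ("amazon", "flows_into", "ocean"), ("tributary", "flows_into", "amazon"),
]

def signature(entity):
    """The relational roles an entity plays, ignoring who it plays them with."""
    roles = {(p, "subject") for s, p, _ in triples if s == entity}
    roles |= {(p, "object") for _, p, o in triples if o == entity}
    return roles

def candidates(source):
    """Rank every other entity by overlap of relational roles with `source`."""
    entities = {e for s, _, o in triples for e in (s, o)} - {source}
    score = lambda e: len(signature(source) & signature(e))
    return sorted(entities, key=score, reverse=True)

print(candidates("sun"))  # "nucleus" ranks first: it plays the same roles
```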
|
|
aato
New Member
Posts: 16
|
Post by aato on Apr 5, 2016 3:25:10 GMT
Re: What are different areas/ways in which our understanding of how analogical thinking works can be leveraged/applied to advance HCI? ... Re: (Just for fun) What is your favorite analogy?

"I was taking a storytelling course in college, and I went to meet my professor because I was having trouble with one of the essay assignments. Specifically, I was not sure how to even start the essay, though I kind of knew what I wanted to say after that. My professor (a great one, by the way), after listening to my concern, called a random person from the hallway into his office. He turned to me and said, 'This is XXX.' I immediately stood up to shake hands with this random person and introduce myself: 'Hi XXXX, I am Franceska.' That's when I realized that what my professor wanted to convey, given my initial concern about how to start the essay, was 'introduce your topic.' I still remember this as one of the best (and most spontaneous!) analogies I have ever heard or experienced."

Bahaha, this is like the cruelest lesson I have ever heard. And I think if it happened to me I completely wouldn't understand that the point was to introduce yourself. I think I would have thought, "Oh, my professor wants me to ask this random person the same question; maybe this person is a good writer and my professor doesn't want to deal with me." So I wonder what kind of burden is on the analogy-maker to fully understand the context of the analogy-receiver. Not only would I not have understood that analogy, I would have been angered by the professor's inability to be clear and his desire to make me uncomfortable as a learning mechanism. (tl;dr: I don't like to be messed with when I'm trying earnestly to learn.)
|
|
|
Post by Anna on Apr 5, 2016 4:38:19 GMT
"Do you think computers can successfully create analogies? Is there something not discussed in this reading that you believe is a core part of human analogical reasoning that a computer cannot accomplish? "
I agree with Cole and Steven that computers seem to be on track to create a wide range of analogies. But to not-answer the second question: in thinking about the potential limits of computers in relation to analogies, I'm more interested in analogical interpretation than analogical construction. That is, when given an analogy or metaphor, how does a human respond in ways that a computer might not be able to? Thinking about literature: when a reader is emotionally moved by a metaphor, who's winning, the author or the reader? I suppose the cop-out answer would be both, but I would argue more benefits accrue to the reader than to the author in this case. So then, would it matter if in the future computers wrote all our analogies, if we still get to experience them?
Also, ditto to Alexandra's comment to Franceska regarding the storytelling analogy lesson.
|
|
|
Post by rushil on Apr 5, 2016 4:40:16 GMT
Someone correctly pointed out that a deep understanding of context is required to make good analogies, and that is something computers can't do really well at the moment. The subtle nature of what makes a "good analogy" is still a mystery to most humans, let alone something we can teach a computer. That mystery is also a core part of what I think is not discussed in the paper. Some people are better at looking at the meta level and coming up with analogies while others aren't, but the process of why and how isn't completely transparent. This largely lies in the relational generalization area, which is also why I think that is the hardest part.
The applications in design, however, are plenty. Besides the ones currently discussed, information visualization also uses analogies to better convey and summarize text.
|
|
|
Post by xuwang on Apr 5, 2016 5:36:52 GMT
Question 1: I think the relational shift that happens in children is similar to the difference between novices and experts. Young children tend to have lower working memory capacity and limited knowledge, so they are only able to notice salient features of objects and are not able to abstract things and map them in their minds. When they grow older (more like experts), they are able to build mature mental models of information and understand more structural/schema similarity between objects.

Question 2: I'm not sure whether this is analogy or not, but in my analysis of MOOC data (I think in a lot of other studies too), we use a lot of proxy variables, for example using students' click data on videos/quizzes/forums as a measurement of their engagement in the course. We're using counts of clicks, not considering how long they've watched videos or how many posts they've read, so this is a proxy, not an exact measurement. And a lot of the time, we're working toward making the observed variables a better proxy for the concept we're capturing.

Last question: I can't think of a favorite analogy, but I feel it's fun because I always relate things that are not seemingly related. For example, I've been saying that learning programming is like learning to play the piano, which a lot of people disagree with. I think that, apart from the algorithmic-thinking part, programming also relies a lot on practice and familiarity with methods, and is similar to playing an instrument in this way. I think analogical thinking is generally beneficial for HCI researchers, because HCI is a very interdisciplinary field and we need to talk to people from very different backgrounds. I feel it's sometimes difficult to deliver an idea directly (for example, explaining learning science to computer scientists), and it's helpful to use an analogy that's familiar to them.
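To make the proxy point concrete, here is a small Python sketch with made-up event records: a raw click-count proxy next to one that weights video clicks by watch time, which is the kind of refinement described above. The event format and weights are entirely hypothetical.

```python
# Toy engagement proxies over clickstream events: raw click counts versus
# counts where video clicks are weighted by the fraction actually watched.

events = [
    {"student": "s1", "kind": "video", "watch_frac": 0.9},
    {"student": "s1", "kind": "quiz"},
    {"student": "s2", "kind": "video", "watch_frac": 0.05},
    {"student": "s2", "kind": "video", "watch_frac": 0.1},
    {"student": "s2", "kind": "forum"},
]

def engagement_by_clicks(events, student):
    """Crude proxy: every click counts the same."""
    return sum(1 for e in events if e["student"] == student)

def engagement_weighted(events, student):
    """Slightly better proxy: video clicks count by how much was watched."""
    score = 0.0
    for e in events:
        if e["student"] != student:
            continue
        score += e.get("watch_frac", 1.0)  # non-video events count as 1
    return score

for s in ("s1", "s2"):
    print(s, engagement_by_clicks(events, s), engagement_weighted(events, s))
# s2 has more raw clicks, but s1 looks more engaged once watch time counts.
```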
|
|
judy
New Member
Posts: 22
|
Post by judy on Apr 5, 2016 12:18:17 GMT
I'm having a hard time thinking of anything insightful to say about analogies, metaphors, and schemas. This is nothing new. Even schemas are a part of everyday life. Since this is stuff I've read (like Lakoff and Johnson) and know, I don't even know how to think of this as "cog sci." I don't want to put us back a mini, but is this science? I mean, I'm happy to read it, I just don't know how to contextualize it. And on the computational models front: I remember being so disappointed to learn that AIs were actually built by humans hand-coding a bunch of different scenarios/rules. It's just human intelligence indexed and distributed to us in a different form.
|
|
|
Post by Adam on Apr 5, 2016 12:37:53 GMT
"I'm not sure whether this is analogy or not, but in my analysis of MOOC data (I think in a lot of other studies too), we use a lot of proxy variables, for example using students' click data on videos/quizzes/forums as a measurement of their engagement in the course. We're using counts of clicks, not considering how long they've watched videos or how many posts they've read, so this is a proxy, not an exact measurement. And a lot of the time, we're working toward making the observed variables a better proxy for the concept we're capturing."

This is an interesting take on what can be considered an analogy. Does using such fine-grained data as a proxy for higher-level conceptual variables count as analogy? I think it has a lot of components in common with analogy, but I'm not sure it is, on its own, considered an analogy from the perspective of the people/humans creating and using analogies. That being said, I think this touches on Q4 in the discussion questions (i.e., "Do you think computers can successfully create analogies?"). Perhaps this is a good example of how computers can create analogies. Considering the process of analogical reasoning (retrieving, mapping, inference, relational generalization), I can definitely see how using clickstream data like this to generalize to things like engagement could be considered an analogy created by the computer.
|
|
Qian
New Member
Posts: 20
|
Post by Qian on Apr 5, 2016 12:41:03 GMT
So, I didn't fully follow the section on computational models of analogy. I'm curious whether someone with more domain expertise can clarify to some degree. I can understand the process behind the Structure Mapping Engine choosing a better-aligned pair for an analogy, but I don't really see how that moves much beyond the discussion of similarity from last week. The last paragraph of the conclusion mentions that these models are limited by the fact that they are hand-coded. I guess I don't fully follow what the models are doing if the meanings have to be hand-coded. It seems to me that the "work" of making an analogy is very much tied to the meanings that the phrases/concepts represent. The paper ends by saying we need a deeper understanding of knowledge representation to do this properly. So what exactly are these models doing, then? I'm just struggling to understand their motivation/utility. If the underlying mechanics are not yet understood, why are people building them? (I'm not trying to disparage the work; I'm sure there are reasons. I just don't understand them from reading this chapter.)

I echo Brandon's first comment on the similarity between similarity and analogy from an AI perspective. If structure mapping/analogy is merely a distant or abstract kind of similarity, then AI might easily master it. As this article states in the beginning, analogical reasoning goes beyond the information initially given, using systematic connections ... to generate plausible, although fallible, inferences about the target. It seems to me it is the "fallible" part that makes an analogy an analogy, and that would also be more difficult to implement than the "plausible" part. How can we tell an algorithm is not merely making connections based on similarity? Or is it looking for a nail with the test instance as a hammer?
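One way to see the gap between plain similarity and structure mapping is that two concepts can share no attributes at all and still align perfectly on relational roles, as in the classic solar-system/atom pair from the structure-mapping literature. Here is a toy Python sketch with made-up attribute and relation sets; real SME does much more (one-to-one correspondences, systematicity), but the contrast is the point.

```python
# Contrast feature similarity with relational alignment: "sun" and "nucleus"
# share no surface attributes, yet fill the same slots in the relational
# structure, which is what structure mapping scores and feature overlap misses.

attributes = {
    "sun":     {"hot", "yellow", "massive", "luminous"},
    "nucleus": {"tiny", "positively_charged"},
}

relations = {  # (relation, role) pairs: which slot the entity fills
    "sun":     {("attracts", "agent"), ("orbits", "center")},
    "nucleus": {("attracts", "agent"), ("orbits", "center")},
}

def jaccard(a, b):
    """Set overlap as a crude similarity score in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

print(jaccard(attributes["sun"], attributes["nucleus"]))  # 0.0: no surface match
print(jaccard(relations["sun"], relations["nucleus"]))    # 1.0: full structural match
```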
|
|