Post by nhahn on Apr 3, 2016 3:08:38 GMT
(Written collaboratively by your wonderful discussion leaders)

Summary: Holyoak starts by drawing from three classical areas of psychology research: proportional analogies, metaphor, and knowledge representation. These areas paved the way to understanding how individuals form and structure analogies. Specifically, metaphor pointed to the concept of a larger schema under which items can be compared, and knowledge representation further solidified this thinking into comparisons of higher-order relations between objects. Today, analogical reasoning is understood as a complex process of:
- Retrieval: accessing structured knowledge in long-term memory
- Mapping: representing and manipulating role-filler bindings in working memory
- Inference: performing self-supervised learning to form new inferences, and
- Relational generalization: finding structured intersections between analogs to form new abstract schemas.
The entire process is governed by the core constraints of isomorphism, similarity of elements, and the goals of the reasoner. Each of these constraints depends on the memory and experience of the individual: as Holyoak notes, individuals can be biased toward different analogies by the recency of certain memories and the availability of working memory. To date, several computational models of analogy have been developed. Earlier ones focused on the central process of structural mapping (SME, ACME, IAM, Copycat); more recent ones are based on neural mechanisms (STAR, LISA). (For a concrete picture of the structured representations these models operate over, see the toy sketch at the end of this post.)

Discussion Questions:
- Why do you think there is a “relational shift” that happens in children? (This is when children go from object similarity to structural/schema-based similarity.) How do you think the development of this cognitive ability might be related to memory consolidation and abstraction?
- What are different areas/ways in which our understanding of how analogical thinking works can be leveraged/applied to advance HCI?
- Conversely, what are different areas/ways in which HCI can be leveraged/applied to augment analogical thinking in humans?
- Do you think computers can successfully create analogies? Is there something not discussed in this reading that you believe is a core part of human analogical reasoning that a computer cannot accomplish?
- Out of the components of analogical reasoning (retrieval, mapping, inference, relational generalization), which do you think is/are the hardest part of the process? Why?
- (Just for fun) What is your favorite analogy?
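To make “role-filler bindings” and “relational generalization” concrete, here is a toy sketch in Python. The Rutherford atom/solar-system example and the predicate names are our own illustrative choices; real models use far richer representations:

```python
# Toy role-filler representation of the classic "the atom is like the
# solar system" analogy. Each fact is a (relation, filler1, filler2) tuple.
solar_system = {
    ("revolves_around", "planet", "sun"),
    ("more_massive", "sun", "planet"),
    ("attracts", "sun", "planet"),
}
atom = {
    ("revolves_around", "electron", "nucleus"),
    ("more_massive", "nucleus", "electron"),
    ("attracts", "nucleus", "electron"),
}

# Relational generalization as a structured intersection: abstract the
# objects into schema variables and keep the shared relational structure.
source_vars = {"sun": "X", "planet": "Y"}
target_vars = {"nucleus": "X", "electron": "Y"}
schema = {(rel, source_vars[a], source_vars[b]) for (rel, a, b) in solar_system}

# The two analogs are isomorphic: abstracting either yields the same schema.
assert schema == {(rel, target_vars[a], target_vars[b]) for (rel, a, b) in atom}
print(schema)
# e.g. {('revolves_around', 'Y', 'X'), ('more_massive', 'X', 'Y'), ('attracts', 'X', 'Y')}
```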
Post by Cole on Apr 3, 2016 4:34:28 GMT
Re: Do you think computers can successfully create analogies?
If we take a relatively constrained view of analogies (like those you might see on the SAT), I think computers should be able to easily complete analogies given a source and target. What I am not sure of is whether computers can (currently) tell whether an analogy is a "good" analogy. As mentioned in the chapter, analogies can be incoherent or used to obfuscate. Online discussion forums usually have no shortage of analogies ("Trump is like Bernie because of X!"), but picking apart which analogies are both sufficiently valid and appropriate to the situation is not straightforward. Humans are pretty bad at this too, though, so maybe I can forgive computers for not being able to do this yet.
Part of verifying whether an analogy is valid is understanding which local constraints can be broken for the sake of global optimization. Humans can easily reason about which constraints are relevant, but it may not (currently) be easy to do so computationally.
I said "currently" a few times because I believe Strong AI will have no trouble with this.
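As a concrete baseline for the SAT-style case, here is a minimal sketch using pretrained word vectors. The gensim downloader and GloVe model are real, but treating vector arithmetic as "completing an analogy" is of course a big simplification:

```python
# Minimal sketch: SAT-style analogy completion ("man is to king as
# woman is to ?") via word-vector arithmetic: king - man + woman.
import gensim.downloader as api

# Downloads pretrained GloVe vectors on first use (roughly 100+ MB).
model = api.load("glove-wiki-gigaword-100")

result = model.most_similar(positive=["king", "woman"],
                            negative=["man"], topn=1)
print(result)  # typically [('queen', ...)] for this model
```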
Post by sciutoalex on Apr 3, 2016 20:49:18 GMT
I don't have a favorite analogy, but looking at other analogies made me realize why a useful analogy is really hard to develop. Most of the analogies on "best of" lists are humorous. I think this is because humor is universal (except when it falls flat), so the insights generated by humorous analogies can be easily understood by anyone. Have a girlfriend and a car (or a boyfriend and a dog)? Well, you'll understand many of the analogies on these lists.

For these simple analogies, how much would I need to specify to a computer to get a pleasing result? Here are 23 ways a boyfriend is like a puppy (http://elitedaily.com/dating/always-wants-bone-23-ways-boyfriend-exactly-like-puppy/939976/), each of which illustrates a different aspect of being in a relationship. How would a computer decide which one is appropriate? It all depends on the goals of the analogizer. As an analogy becomes more complex, with more nodes and edges being activated, this judgment becomes less important because there is less ambiguity in the relationships.

Agreeing with Cole, I think computers will eventually be able to create highly practical and accurate analogies. These will be great as cognitive aids. But to be clear, these highly specified analogies are not the norm, nor the kind of analogy that *currently* gives humans pleasure or insight. To produce pleasure and insight through analogy, the analogizer must choose a simpler graph structure that contains more ambiguity, but then set up the analogy so that its conclusion seems inevitable.

I'd like to end by quoting Nabokov on his process of constructing a particularly difficult chess problem. I think it's pretty clear that chess problems are, for him, an analogy for how he leads his readers through metaphor and analogy in his own works:

"I remember one particular problem I had been trying to compose for months . . . It was meant for the delectation of the very expert solver. The unsophisticated might miss the point of the problem entirely, and discover its fairly simple, 'thetic' solution without having passed through the pleasurable torments prepared for the sophisticated one. The latter would start by falling for an illusory pattern of play based on a fashionable avant-garde theme . . . which the composer had taken the greatest pains to 'plant' . . . . Having passed through this 'antithetic' inferno the by now ultrasophisticated solver would reach the simple key move . . . as somebody on a wild goose chase might go from Albany to New York by way of Vancouver, Eurasia and the Azores. The pleasant experience of the roundabout route (strange landscapes, gongs, tigers, exotic customs, the thrice-repeated circuit of a newly married couple around the sacred fire of an earthen brazier) would amply reward him for the misery of the deceit, and after that, his arrival at the simple key would provide him with a synthesis of poignant artistic delight." (Speak, Memory 291-2; originally quoted in www99.libraries.psu.edu/nabokov/walter.htm)
Post by jseering on Apr 3, 2016 21:44:49 GMT
I also have to agree with Cole that computers can currently create a limited but decent set of analogies, though their ability to accurately predict the impact of those analogies is still in the future. A good analogy is more than just finding similar things with alignable differences; it also requires understanding both the context and the audience. Political analogies are a good example of this. Whether or not a comparison is actually realistic (or appropriate), a good analogy in this context is defined by its ability to inspire or inflame. That involves a measure of understanding the particular prejudices of a specific audience, which is probably even more important than "understanding" the comparable characteristics of different objects.
With regard to particular applications of analogy, we've already seen how analogy can be useful in designing user interfaces that make sense; many icons in computer programs are analogical. Analogy is, broadly, a useful method for ideation both in research and in practice. It allows us to reframe a situation in a fairly abstract but potentially useful way, e.g. "what if we thought about this like a boxing match?" in a case that has nothing to do with boxing. Analogy can also help research concepts cross domains; signaling theory in biology inspired social signaling theory, which incorporates many of the same ideas in a different context.
Post by mkery on Apr 4, 2016 0:18:19 GMT
Re: Do you think computers can successfully create analogies?
Thinking of Microsoft’s Tay and Cole’s discussion of analogy examples in online forums, I certainly think computers can successfully create “good,” “appropriate” analogies, or at least that this is feasible in the reasonably near future. Tay was an AI learning to talk like a teenage girl, but it learned from pretty much anyone on Twitter. An AI agent with perfect “morality” is perhaps an unachievable goal because morality is a fuzzy concept, but by learning narrower biases, analogies that are good or bad, appropriate or inappropriate, all seem possible. Tay made some arguably apt analogies, appreciable perhaps only if you are a terrible person. Holyoak discusses how analogies depend on the goals of the analogizer and the biases of the audience. An agent might learn from many examples which key features of concepts can appropriately be made into an analogy, and learn which sources to listen to and bias toward. (This is a very difficult task: if an agent never encounters the word “nazi,” it may not know how to judge it, but lots of heuristics are probably possible.)
I think computers may struggle for a long time, however, with creating novel analogies such as “how is X like a house?” when X shares only metaphorical, imagined features with a house and the answer isn't already online.
Post by mrivera on Apr 4, 2016 1:50:39 GMT
RE: "Why do you think there is a “relational shift” that happens in children?.." How do you think the development of this cognitive ability might be related to memory consolidation and abstraction?
The relational shift may be rooted in our lack of exposure to the real world early in life. As children, our brains and understanding are a blank slate. When we begin to learn about and process the world, we look at objects and compare them directly, but as our knowledge of the world increases, it becomes far more difficult to compare on a direct object basis. Our concept of what a particular object is also grows: a cup isn't just a cylinder that holds liquid; it could be a cylinder that holds liquid and has a handle. To keep these concepts consistent in our brains, we abstract, creating a super-category that seems to describe a higher-level similarity. With regard to learning, the switch may be necessary to allow us to make more connections between things.
Post by anhong on Apr 4, 2016 2:19:21 GMT
I think the "relational shift" happens in children because of their growing understanding and experience of the world. At first, when they see an object, what they see are its features: shape, color, material, etc. As they gain more experience, not only more things to draw from but also a deeper understanding of the same object, they can make inferences along more dimensions. This is quite similar to computer vision and deep learning: the first several layers roughly represent blobs, edges, and shapes, and as you go higher, the model gradually learns concepts like cat or dog. By the same token, of the components of analogical reasoning, relational generalization is the hardest for computers.
Also, I agree with the posters above that computers can make decent analogies if they have the data to draw from. However, understanding which analogies fit which context and audience is relatively hard, since computers cannot yet understand context and humans effectively.
As for how analogical thinking can be applied to HCI, I think a successful example is the use of metaphor in interface design. Using existing metaphors from the real world can reduce the cognitive load of understanding something new. Beyond similarity of objects, we can also design interactions in the interface that help us make high-level analogies when transferring knowledge to something new, such as a data model or an interaction flow, through the use of visualizations and workflows.
Post by fannie on Apr 4, 2016 4:06:55 GMT
I agree with Anhong about the existing use of metaphors in HCI in our interactions with technology (such as desks, files, oracles, etc., as we discussed in previous classes). In relation to the discussion about computers creating analogies, another application could be recommendation systems, where computers might be able to “create” analogies based on behavior like browsing and spending. Analogy could also be incorporated into tutoring or training systems, where users are encouraged to learn by applying analogical thinking. In relation to my own work, reading the chapter's examples comparing countries, people, and their decisions (e.g. making decisions because someone seems analogous to Hitler, or not wanting to be like Japan attacking Pearl Harbor) made me think about analogies extended to comparing yourself to others. That could be used for perspective-taking, such as understanding why others act the way they do and whether their actions are analogous to any of your own.
For the other way around: HCI could augment analogical thinking by providing ways to better find the links between different concepts in order to form new schemas. For example, a system could help users doing a literature review when they’re trying to link ideas from different papers in order to generate new ideas, or to connect methods and problems across disciplines in order to generate new solutions. Such a system could visualize the connections or make suggestions to users about similar constructs.
Post by francesx on Apr 4, 2016 15:18:32 GMT
Re: What are different areas/ways in which our understanding of how analogical thinking works can be leveraged/applied to advance HCI?
In HCI, and specifically in the Learning Sciences, I think analogies can be a powerful tool to support the understanding and learning of difficult topics. I am not familiar with the literature in this field, but many analogies from physics come to mind: the horizontally stretched elastic fabric that bends when a heavy object is placed on it (to represent the bending of space-time by heavy bodies such as the sun), or the watermelon as an atom with its seeds as the electrons wandering around. On the other hand, analogies seem hard to find for certain domains. For example, from various interactions with teachers and researchers who work in the algebra domain, it seems there is no good analogy for linear equations. The "balance" principle is the most used one, but it cannot handle negative terms and does not distinguish between the concepts of variable and constant.
Re: (Just for fun) What is your favorite analogy?
I was taking a storytelling course in college, and I went to meet my professor because I was having trouble with one of the essay assignments. Specifically, I was not sure how to even start the essay, though beyond that I more or less knew what I wanted to say. My professor (a great one, by the way), after listening to my concern, called into his office a random person from the hallway. He turned to me and said, "This is XXX." I immediately stood up to shake hands with this random person and introduce myself: "Hi XXXX, I am Franceska." That's when I realized what my professor wanted to convey about my concern over how to start the essay: "Introduce your topic." I still remember this as one of the best (and most spontaneous!) analogies I have ever heard or experienced.
Post by JoselynMcD on Apr 4, 2016 17:32:48 GMT
RE: Conversely, what are different areas/ways in which HCI can be leveraged/applied to augment analogical thinking in humans?
In light of the fact that analogies can be very strong tools for 'swaying emotions and influencing political beliefs', it stands to reason that HCI could be leveraged to tease apart arguments that use analogies, pointing to the areas of difference or non-application. Since analogies are similar yet flawed ways to learn more about the relationship between two situations, there could be room for HCI to intervene to find the areas of difference that might not be as obvious (either by design or by accidental omission). One plausible design I wish for you to imagine is a system that utilizes world-history data sets to create analogies, or to tease apart analogies that have been used, e.g. between the US invasion of Vietnam and the 2003 US invasion of Iraq.
RE: (Just for fun) What is your favorite analogy? One doesn't immediately jump to mind, but this question reminded me of my favorite podcast "The Worst Idea of All Time" in which the two cohosts embark on a mission to watch the dreadful 'Grown Ups 2' (7% rating on Rotten Tomatoes) weekly for a year. In their weekly post-viewing review of the movie (the core of the podcast), they frequently use rich descriptive analogies that liken their experience to that of people in the most historically depraved situations on Earth - to great comic effect.
Post by stdang on Apr 4, 2016 19:56:43 GMT
Responding to whether computers can create analogies: I believe there has been significant progress in this area, depending on how you qualify whether a computer has been "successful". Finding mappings between words, semantic concepts, or even images has been accomplished by a number of computational models of metaphor and similarity. I've encountered some language-based models that utilize large repositories of knowledge (e.g. Wikipedia or the internet as a whole) to look for conceptual links and draw some form of analogy from these databases (not simply by repeating ingested data). There is a robotics speaker presenting this week on context-free grammars for manipulation. This is an extension (or analogy, if you will) of the analogy-creation paradigm, where any given manipulation motion can be drawn from known submotions as relevant, based on how the context of the problem maps to knowledge of where those submotions apply.
Another interesting problem with the computational side of analogy production is that, to a degree, we treat machines as adult humans, so they are often expected to reproduce a singular human ability (analogical reasoning) with adult proficiency at runtime. One might instead consider developmental models that perform some graded version of analogical reasoning depending on acquired experience; as knowledge increases, analogical reasoning ability may increase with it. Such models might demonstrate developmental patterns similar to children's, such as the "relational shift", as a consequence of knowledge representations changing over time.
Post by Amy on Apr 4, 2016 20:44:31 GMT
Jumping off of what Anhong said, I agree that as children gain more experiences with an object, they can start to abstract more features, but I think it also has to do with seeing multiple examples of the same object. If you've only ever seen one chair, it's hard to make accurate abstractions, but once you've seen 10 (or 100) things that are chairs, and 10 things that are not chairs, it becomes possible to make structural connections. I'm also wondering about memory consolidation. Do we reach a point where our brain has to consolidate memories, and we just haven't reached that point when we're younger? Or is there just a point when our brain becomes capable of consolidating memories?
Combining Franceska’s comment about using analogies to teach and the discussion about computer-created analogies, I’m wondering whether this is a fruitful research area for cognitive tutors or other learning tools. If a tutor could create analogies to help students learn, how much better would it be at teaching? But I think AI is still a ways away from being able to produce analogies good enough for tutoring. So is it better to just let humans do the things that humans are good at, and not worry about whether the computer can make an analogy?
Post by mmadaio on Apr 4, 2016 23:34:16 GMT
On the computational analogy-generating question, there has been some work at Georgia Tech (among other places) on developing an AI that uses conceptual blending to engage in co-creative pretend play with a child [1]. So it certainly seems possible that more complex analogies could be created by an AI (although the existing system mentioned only does entity and predicate substitution, not more substantial restructuring), though the source and target domains would likely need to be well-structured.

Even for humans, though, unprompted spontaneous retrieval is an issue, as seen in the radiation example. Crucially for a learning-science approach that leverages this work, the abstract schema is more readily accessible to an expert confronted with novel problems that share its structure. So it would seem essential that any ITS hoping to support expert analogical reasoning help students develop abstract schemas and move away from the perceptually concrete (surface) features.

One area I thought was under-addressed was the use of analogy as a rhetorical device for persuasion as opposed to reasoning. It would seem that, for President Bush, for instance, creating an analogy between the Iraq War and World War II that placed him in the role of FDR or Churchill in opposition to Hussein-as-Hitler would be beneficial for his war-time goals, but would not be particularly useful for reasoning about the rationale for going to war. Although analogies can certainly be generative, they might also lead the listener toward a seemingly "inevitable" conclusion when used for malicious purposes.

[1] www.aaai.org/ocs/index.php/AIIDE/AIIDE13/paper/viewFile/7433/7650
Post by xiangchen on Apr 4, 2016 23:50:40 GMT
I like how this article covers a lot of the theoretical/philosophical ground of analogy, rather than simply resorting to empirical evidence. Analogy as the similarity between the relationships among the constituent elements of two situations: quite self-explanatory. The differentiation of source and target is a little unexpected, but seems reasonable, as it resolves a chicken-and-egg problem. Schema, to me, seems a great example of knowledge representation: it captures a tacit aspect of reasoning, i.e., what prompts one to think of something as in some ways similar to something else.
Post by bttaylor on Apr 5, 2016 0:15:26 GMT
So, I didn't fully follow the section on computational models of analogy. I'm curious whether someone with more domain expertise can clarify it to some degree. I can understand the process by which the Structure Mapping Engine chooses a better-aligned pair for an analogy, but I don't really see how that moves much beyond last week's discussion of similarity.
The last paragraph of the conclusion mentions that these models are limited by the fact that they are hand-coded. I guess I don't fully follow what the models are doing if the meanings have to be hand-coded. It seems to me that the 'work' of making an analogy is very much tied to the meanings that the phrases/concepts represent. The chapter ends by saying we need a deeper understanding of knowledge representation to do this properly.
So what exactly are these models doing, then? I'm just struggling to understand their motivation/utility. If the underlying mechanics are not yet understood, why are people building them? (I'm not trying to disparage the work; I'm sure there are reasons. I just don't understand them from reading this chapter.)
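For what it's worth, here is my rough mental model of what the mapping step does over those hand-coded representations, as a toy sketch in Python. It is an exhaustive search over the classic water-flow/heat-flow example, not SME's actual greedy match-and-merge algorithm, and real SME also aligns non-identical relations (like pressure vs. temperature) through higher-order structure such as CAUSE, which this toy ignores:

```python
# Toy structural mapping: find the one-to-one object correspondence
# between two hand-coded relational descriptions that aligns the most
# relations (the isomorphism constraint). Illustrative sketch only.
from itertools import permutations

source_objects = ["beaker", "vial", "water"]
target_objects = ["coffee", "ice_cube", "heat"]

source_facts = {  # water flows from the beaker to the vial
    ("greater_pressure", "beaker", "vial"),
    ("connected_to", "beaker", "vial"),
    ("flows_to", "water", "vial"),
}
target_facts = {  # heat flows from the coffee to the ice cube
    ("greater_temperature", "coffee", "ice_cube"),
    ("connected_to", "coffee", "ice_cube"),
    ("flows_to", "heat", "ice_cube"),
}

def score(mapping):
    """Count source relations that also hold in the target under the mapping."""
    translated = {(rel, mapping[a], mapping[b]) for (rel, a, b) in source_facts}
    return len(translated & target_facts)

# Try every one-to-one correspondence and keep the best-scoring one.
best = max(
    (dict(zip(source_objects, perm)) for perm in permutations(target_objects)),
    key=score,
)
print(best, score(best))
# {'beaker': 'coffee', 'vial': 'ice_cube', 'water': 'heat'} 2
```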