Post by bttaylor on Apr 3, 2016 3:19:32 GMT
Background:
This reading is a chapter from Language in Mind: Advances in the Study of Language and Thought by Dedre Gentner and Susan Goldin-Meadow.
Dedre Gentner is Director of the Cognitive Science program at Northwestern University. We last encountered her in last week's paper, No Difference Without Similarity.
Overview: This chapter presents more than a decade of Gentner's research exploring the question of WHY we're so much smarter than other animals. What are the unique human capacities that lead to this superiority?
Her answer is two-fold: 1) We can learn by analogy 2) We use symbolic systems (*shoutout to Alexandra’s undergrad major) such as language and mathematics
She also argues that RELATIONAL LANGUAGE multiplies our analogic abilities
Career of Similarities: In "The Career of Similarity" (1991), Gentner laid out a developmental progression from concrete → abstract that focused on the transition from using "object similarity" to "relational similarity."
Literal similarity is the easiest to learn. Young children can only compare things that are very similar. Over time they learn small differences that result in small abstractions. The ability to compare and abstract builds and expands into concepts. Comparisons among exemplars promote abstraction and rule-learning.
Structure-Mapping in Analogies: Gentner also wrote the paper on similarity and structure-mapping we read last week. As you may recall, structure mapping is the comparison process of aligning and mapping different concepts (the groups of shapes from last week's reading). There are different types of similarities and recognizing them is a key component of analogies.
Take the example “The dog chased the cat”.
A simpler, concrete analogy would be “The coyote chased the lynx”. The objects (dog-coyote) and the relationships between them match.
A more complex, cross-mapped analogy would be “The cat chased the mouse”. The object match (cat-cat) here is inconsistent with the relationship match.
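The literal-vs-cross-mapped contrast above can be sketched in code. This is a toy illustration of the idea, not Gentner's actual structure-mapping engine; the tuple encoding and both helper functions are my own assumptions for the example.

```python
# Toy sketch of the structure-mapping contrast (not Gentner's SME algorithm).
# Each sentence becomes a relation with ordered argument slots.
base    = ("chase", "dog", "cat")      # "The dog chased the cat"
literal = ("chase", "coyote", "lynx")  # relation and object kinds both align
crossed = ("chase", "cat", "mouse")    # relation aligns, but "cat" has moved
                                       # from the chased slot to the chaser slot

def relation_matches(a, b):
    """True when both analogs use the same relation."""
    return a[0] == b[0]

def object_roles_consistent(a, b):
    """True when every object shared by both analogs fills the same slot."""
    return all(x != y or i == j
               for i, x in enumerate(a[1:])
               for j, y in enumerate(b[1:]))

print(relation_matches(base, crossed))         # True: both are chases
print(object_roles_consistent(base, crossed))  # False: the cross-mapping
```

The cross-mapped case is exactly the one where the relational match (chasing) and the object match ("cat") pull in different directions, which is why it is harder.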
Quine (1960) said that over a child’s development they move from recognizing only “brute” (concrete) similarities to “theoretical” (abstract) similarities.
Relational Language: Relational language is inherently culturally and linguistically shaped. It is also harder to learn, as evidenced by developmental studies.
Example: 4 yr olds think "uncle" is "a nice man with a pipe." It's only later that they understand the uncle's relationship to them and their mother/father (Keil and Batterman, 1984).
Amongst adults, consistent relational language makes it easier to match new examples with stored exemplars. Experts' technical vocabulary reflects how relational vocabulary can be used to better trigger knowledge retrieval.
Cognitive Development and Relational Language: Young children struggle with mapping relational characteristics (relative size, location) when those relationships are cross-matched with objects (e.g. they will choose a small cup as a match for a large cup rather than a large object). However, when presented with familiar relational terms (e.g. the daddy cup, the baby house) they are more likely to select for the appropriate relational characteristic.
These studies indicate that the use of relational language plays a role in the children's ability to recognize these relationships.
Takeaways: “Language is neither a lens through which one forever sees the world, nor a control tower for guiding cognition, but a set of tools with which to construct and manipulate representations.”
Language makes implicit labels and systems explicit. Likewise, an analogy makes common structures that were once invisible, visible. Once relational concepts are extracted, learning more words can happen by combining concepts.
Humans go beyond “the current resources of their language to develop new relational abstractions.”
Learned relational tools help us structure knowledge.
Discussion Questions: 1. We’ve been talking about the analogies we use for technology (files, desktops, mommy). Let’s think about what makes a “good” analogy. Choose one tech/interface/app and talk about the relational concepts that link it to a “material” concept.
2. What language (i.e. names for things (“cell phone”) or verbs used to describe procedures (“save”)) has been passed down from previous generations in computing that you think capture either a particularly salient concept or an outdated one? In the case of outdated language what would you change the language to?
3. How do our cultural/linguistic systems shape the concepts we encode in computers? And more importantly, what does that mean in a global tech market? How does one make technology that works with and for different cultures (or with and for their own culture)? (Check out AfriCHI: http://africhi.net)
4. Gentner suggests a process for learning difficult (abstract) concepts through the use of relational language. In your education, practice, or research, does this hold true? Do you have any examples from your own experience?
Post by julian on Apr 3, 2016 20:45:17 GMT
Answering to: Let’s think about what makes a “good” analogy. Choose one tech/interface/app and talk about the relational concepts that link it to a “material” concept.

I think one of the best and easiest-to-understand analogies in an OS is the Trash. You send stuff there that you don't need anymore, or that is broken, or that simply isn't that important when you need space on your HD. If you send something to the Trash by mistake you can always take it out. When the Trash is too full you empty it and everything in there is gone forever. This analogy is so apt that the physical trash can operates in pretty much the same way. What makes it good is that it is very simple: the relational knowledge involved in throwing something into a trash can in the real world and throwing a file into the Trash is the same. Also, this analogy translates to pretty much any culture.

Now on to the things I found very interesting about the reading:

1. Language as a higher-level abstraction enabler: I had never thought about this role for language and how, by this means, we can abstract more easily; language itself is already a level of abstraction. Language also enables us to easily transfer knowledge by de-situating or dis-embodying it up to a certain level, although I believe the key part is the separation from perception (as in the monkeys example).

2. In the conclusions, it is stated that one of our advantages is that humans come with little or no pre-installed software, and we just learn from our environment. But is that so? There are some recent studies showing that we may come with at least some physics knowledge! www.npr.org/sections/ed/2015/04/02/396812961/why-babies-love-and-learn-from-magic-tricks
Post by jseering on Apr 3, 2016 22:23:48 GMT
Language is certainly very important as an enabler for higher-level abstractions, but it's also worth thinking about the ways in which our language limits us. It has been proposed in various places that we can only effectively think thoughts that we can describe using our own existing language (see also "think of a color you've never seen before," which is a little bit different but not too far off). One of the themes of everybody's favorite high school dystopia, 1984, is that simplifying language will make revolt impossible because the people won't have any way to conceptualize it. This is a bit extreme and probably simplistic, but I think there's an element of truth. The things we know how to talk about influence the choices we make.
I think it's really important to think about what we come "pre-installed" with, per Julian's point above. Lots of work (e.g. Chomsky) has argued that we can't be completely blank slates when we're born, because we all have tendencies to make similar interpretations of phenomena like language. Understanding which aspects of us, and particularly of the way we see the world, are "fundamental," i.e. roughly written into our genetic code, has some fairly profound implications.
Post by mkery on Apr 3, 2016 23:38:59 GMT
In reaction to jseering, let’s be careful not to assume linguistic determinism: en.wikipedia.org/wiki/Linguistic_relativity The Sapir-Whorf hypothesis of linguistic relativity holds that our language influences how we think about concepts, but doesn’t determine what we can and cannot say (if you don’t have a word for something, you can still express the idea in many words). So in the 1984 example, I would suppose the simplified language may have aided a social/cultural pressure not to speak about revolt, but the language alone could not have prevented anyone from conceptualizing it. If we consider not just the language but social factors, I agree with jseering, because culture influences what we talk about and what choices we make.

In response to (2), the word “cell phone” is a simple example of a technical word which may be outdated but perhaps is also useful for categorization. Culturally, we are perhaps using the word “smart phone” less because there is an assumption that all phones will converge to “smart phones,” and thus no adjective will be needed for “phone.” A cell phone in the most modern sense is a small tablet computer that can make calls. Yet, though I can make calls from my laptop, and use my phone as a computer and rarely to make calls, I would argue the word “cell phone” is still relevant for similarity and categorization. Perhaps a phone is a way of reaching someone, at their house or as they move around. I can reach my parents by their landline phone, and my friend by their non-smart phone. Thus, it may be that words don’t need to become “outdated”; their concept can change as the categories they cover change.
Post by mrivera on Apr 4, 2016 0:27:36 GMT
In response to "(2) What language (i.e. names for things (“cell phone”) or verbs used to describe procedures (“save”)) has been passed down from previous generations in computing that you think capture either a particularly salient concept or an outdated one?"
I would say "screenshot" (both verb and noun) captures a salient concept passed down from previous generations in computing. For as long as we continue to use visual displays, this concept will be relevant. Here's an interesting thought: given the rise of audio-based interfaces (Alexa, Google Now, Siri), what will become the word for an audio snippet that we want to capture in an instant from an audio interface?
Post by anhong on Apr 4, 2016 2:50:11 GMT
For the example of the cellphone, the term originally described a phone that uses cellular technology. When Apple released the iPhone, it was already no longer just a "phone," but the analogy helped people understand and accept the transition from a traditional phone to a "smart device." We rarely use our phones to make phone calls; we surf the Internet, use apps, play games, and send messages. So is our phone a browser? A game controller? I agree with the above that the word has already transformed into a category label. Similarly, are we still using our computers to compute?
For mrivera's point, if we really mean the audio clip, maybe "soundbite"? However, even with an audio interface, we might still be visualizing the current state of the conversation in our heads, so to capture the state of that moment, I think it would still be best represented in visual form.
Post by JoselynMcD on Apr 4, 2016 19:22:37 GMT
RE: 4. Gentner suggests a process for learning difficult (abstract) concepts through the use of relational language, in your education, practice or research, does this hold true? Do you have any examples from your own experience?
It can be hard to learn or teach the principles or behaviors of things that are invisible (organic chemistry, anyone?!), so employing metaphors that use visible phenomena can be rather helpful. In my own experience, the easiest way to learn (and teach) the principles of DC circuits is through the oft-used water metaphor that is also employed in Gentner's paper. It essentially likens charge to an amount of water, batteries to pumps, resistance to a gate, voltage to pressure, and current to flow. For very simple DC circuits, the analogy is apt, but someone learning more advanced electrical engineering would immediately see that the analogy doesn't take electromagnetic fields into consideration, and thus it is no longer a strong analogy.
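The quantitative core that the water metaphor maps onto is just Ohm's law. A minimal sketch, with the function name and mapping comments being my own illustration rather than anything from the reading:

```python
# Ohm's law, the relation the water metaphor is tracking:
# voltage ~ pressure, resistance ~ a gate, current ~ flow.
def current(voltage, resistance):
    """I = V / R: more 'pressure' or a wider 'gate' means more 'flow'."""
    return voltage / resistance

# A 9 V battery across a 3-ohm resistor drives 3 A of current.
print(current(9.0, 3.0))  # 3.0
```

The analogy breaks down exactly where this equation does: once electromagnetic fields matter, I = V / R alone no longer describes the circuit.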
Post by sciutoalex on Apr 4, 2016 20:42:53 GMT
Speaking of color and language and what determines what we see, this is one of my favorite RadioLab stories: www.radiolab.org/story/211213-sky-isnt-blue/

"What is the color of honey, and 'faces pale with fear'? If you're Homer--one of the most influential poets in human history--that color is green. And the sea is 'wine-dark,' just like oxen...though sheep are violet. Which all sounds...well, really off."

What most intrigued me about this story was the slippery relationship between language, memory retrieval, and analogy. Both language and analogy aid memory retrieval, but they also color and bias it. Maybe this also helps us be smart? We've been thinking that the value of analogy is the inference it enables in new situations. But analogy also helps us recall situations. Through analogy we can recall not just immediately connected memories, but more distant memories that have faintly similar structure. That's very powerful and seems to be unique to humans. I have some inchoate thoughts about language being just an analogy for memory. But that makes no sense.
Post by francesx on Apr 4, 2016 20:58:46 GMT
3. How do our cultural/linguistic systems shape the concepts we encode in computers? And more importantly, what does that mean in a global tech market? How does one make technology that works with and for different cultures (or with and for their own culture)? (Check out AfriCHI: http://africhi.net)
I actually have a relatively funny story about this. When technology and computers became popular in my country back in the day, everything was in English, Italian, or Greek; there was no software with menus or instructions in Albanian. One or two years ago, I came across a piece of software that had been either created in or translated into Albanian, and what we know as "Save" in English was translated into Albanian with the meaning of "save" as in "I saved his life." I was a little shocked at the time, and my first reaction was, "What do they mean by this?"
One note I would like to make that might be important to this conversation (using myself as the only data point) is that language is very culture-dependent, and it also depends on how you learned the language. Saying "I love you" in Albanian feels very different from saying it in Italian, French, English, or Korean; the intensity and the way the feeling is conveyed seem different to me. Similarly, other emotions, and the words that express them, feel different or convey different intensities when I use different languages. If this holds true, then how our cultural/linguistic systems shape the concepts we encode in computers might be quite important when designing new technologies.
Post by stdang on Apr 4, 2016 21:08:47 GMT
I find myself applying Gentner's theories to learning difficult concepts quite often. Many times when I'm struggling with a particularly algorithmic or computational modeling paper, the majority of the body of the paper presents the concepts symbolically, with the appropriate mathematical operations. I often have a hard time understanding the implications of these models or algorithms, so I have to contemplate an application scenario or two and map the variables and operators to the application space. This allows me to concretely ground the abstract concepts in my own past experience with data, and it supports my understanding by relating the new, poorly understood conceptual space to a more comprehensible and familiar one.
One interesting language concept that has been passed down through historical tradition in computing is the "bug." With its roots in an actual insect ruining vacuum tubes and wiring, this concept has continued to describe the notion of a system fault. The notion of a "bug" implies that there is some single root cause (a line of code somewhere) behind a given undesired behavior or state. This primes you to look for small errors and "squash" them; for a particularly "buggy" system, you need to go through and "exterminate" the pest problem. However, it also tends to push us away from more systemic issues or algorithmic errors that sit at higher levels of abstraction. The idea of the "bug" might better be replaced with the concept of a personality with quirks. This embraces the more interconnected nature of software: "fixing" a behavior in one part of the system may lead to compromises or emergent behaviors in other parts, the same way balancing one trait of your personality affects others (being more social makes you less of a quiet observer). This personality-driven view of software leads one to consider the holistic implications of a software behavioral "fix."
Post by Amy on Apr 4, 2016 21:11:44 GMT
I think copy, cut, and paste are analogies that still make sense even though the language has been passed down. The icons even remind the user of the material concept they relate to ("cut" is a pair of scissors). Although I do think the clipboard analogy can get a bit complicated for new users, because it seems like an unnecessary material object for the analogy. Maybe the clipboard was more helpful when computers were new and users would wonder, "Where did the copy go?" but perhaps now it is less important?
I second Mary Beth's reaction: language doesn't determine what we can or can't say. Societies make up new words all the time. I think it's more interesting when technology creates some interaction that needs a new word, or gives a new meaning to a word, because there isn't an available material analogy. (Trying to think of good examples... spam? hashtag? avatars? meme?)
Post by vish on Apr 4, 2016 23:40:19 GMT
Q4) Gentner suggests a process for learning difficult (abstract) concepts through the use of relational language, in your education, practice or research, does this hold true? Do you have any examples from your own experience?
Response: My research is on facial expression interpretation for human-robot interaction. It is difficult to learn all the possible emotions that human users may display during a conversation, and many facial expressions overlap. Moreover, when the program's (a machine learning algorithm's) output fluctuates among values such as "happy", "surprised", "normal", and "content", it is necessary that the robot not misjudge this varying emotional output as unstable behaviour on the part of the human user. Instead, it should be able to interpret that the emotions observed all belong to the positive-emotions category. Therefore, it is essential to understand the behavioural pattern of the human user (via the program output) over a time period using a relational model. Thus, it holds true for my research.
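One simple way to realize the idea above, grouping fluctuating labels into a coarser valence category and smoothing over time, is a majority vote over a sliding window. This is a hypothetical sketch, not vish's actual system; the label set, mapping, and function names are all my own assumptions.

```python
from collections import Counter

# Hypothetical mapping from fine-grained classifier labels to valence
# categories (assumed for illustration, not from the post).
VALENCE = {"happy": "positive", "surprised": "positive",
           "content": "positive", "normal": "neutral",
           "sad": "negative", "angry": "negative"}

def smoothed_valence(labels, window=5):
    """Majority valence over the last `window` frames of classifier output,
    so brief label flips don't read as unstable user behaviour."""
    recent = [VALENCE.get(label, "neutral") for label in labels[-window:]]
    return Counter(recent).most_common(1)[0][0]

# Frame-by-frame output fluctuates, but the smoothed reading is stable.
frames = ["happy", "surprised", "normal", "content", "happy"]
print(smoothed_valence(frames))  # positive
```

The window acts as the "relational model over a time period": the robot responds to the pattern of recent outputs rather than to any single frame.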
Post by mmadaio on Apr 4, 2016 23:49:13 GMT
Not to get too deep in the weeds, but Joseph, HAS Chomsky demonstrated that we come "pre-installed" with a "universal grammar" module? I'm fairly certain that the linguistics community is still deeply divided on that issue.
If we don't assume an inevitability to linguistic development, particularly of relational language, then we should ask whether and how certain people develop relational language at different rates than others, or develop it more completely. As Gentner discusses, the use of relational language by caregivers was a crucial factor in children's learning and transfer, so not all children may receive the same cognitive and linguistic stimuli. She rests her second argument on the influence of cultural factors in learning, building on previous generations, but to assume that these happen inevitably, or are evenly distributed across the population, seems to ignore the vast disparities in cognitively stimulating environments for many children.
As a side note, I love her characterization of psychologists' response to Fodor ("Oh go away.." etc). I'm curious about other people's take, not just on that specific issue (that learners need a prior conceptual understanding of what a word means in order to attach a word to that meaning), but also on how scientists eager for empirical, experimentally valid results can presuppose a foundation that may not actually be sound, despite the apparent validity of their findings.
Post by xiangchen on Apr 4, 2016 23:51:06 GMT
I think this paper brings up an important aspect of cognition not captured in our previous readings: language. As we spoke of cognitive concepts (memory, categorization, analogy), we seldom paid attention to the medium through which we think of and communicate them. This paper's take on the role of language is quite enlightening: it is considered one of the two factors that make us smart. Without the full-fledged development of, and fluency in, a language, we can't even speak of a concept in our heads, let alone communicate it to others. Precise symbolic representations of concepts, best exemplified by how symbols represent mathematical concepts, are the cornerstones upon which we build new concepts and sophisticated relationships between them.
Post by Cole on Apr 4, 2016 23:52:02 GMT
@amy: Do you think that the notion of a "clipboard" has somehow constrained the functionality of that feature? On almost all systems, you can only have one thing in the clipboard at once, and cutting or copying something else causes you to overwrite it. In Emacs, however, it is (somewhat confusingly) called a kill-ring. Because it is a ring, you can "rotate" it to select other items you have cut/copied (think like a Lazy Susan). If the clipboard had instead been some other object, like a ring or even a bulletin board (which is larger), I wonder if people (designers or users) would have thought about the feature differently. I actually think that the Emacs kill-ring is overly complicated and that the single-item clipboard works 99% of the time anyways, but it's a thought.
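The single-slot clipboard versus Emacs's kill-ring is really a choice between a one-item variable and a rotatable ring buffer. A toy sketch of the contrast (this is my own illustration of the idea, not Emacs's implementation; all names are assumed):

```python
from collections import deque

class KillRing:
    """Ring-buffer take on cut/copy: old kills stay reachable by rotating,
    unlike a single-slot clipboard where each copy overwrites the last."""
    def __init__(self, size=10):
        self.ring = deque(maxlen=size)

    def kill(self, text):
        # Cut/copy pushes onto the front of the ring.
        self.ring.appendleft(text)

    def yank(self):
        # Paste returns the item currently at the front.
        return self.ring[0]

    def rotate(self):
        # Step back to an older kill (the Lazy Susan move).
        self.ring.rotate(-1)
        return self.ring[0]

kr = KillRing()
kr.kill("first")
kr.kill("second")
print(kr.yank())    # second -- a clipboard would only ever hold this
print(kr.rotate())  # first  -- the ring still remembers the older kill
```

The design question Cole raises is visible right in the data structure: the clipboard metaphor suggests one surface holding one thing, while "ring" suggests many slots you can turn through.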
I find the idea that adults constantly extend the language to incorporate more representation very interesting. I've been thinking a lot today about political speech that we have devised to instantly recall similar situations, such as the "-gate" suffix to denote a major controversy or "Papers" to denote a massive leak (Panama Papers and Drone Papers have been on my mind). These are so useful because people instantly recall prior events and get a sense of the type of relationship. Of course, this coining of political slang can also be used to make mountains out of molehills, so it might not be a net positive.