Post by vish on Apr 3, 2016 5:29:26 GMT
The reading "Complex Declarative Learning" is a chapter from the "Cambridge Handbook of Thinking and Reasoning", written by Michelene T.H. Chi and Stellan Ohlsson. Michelene T.H. Chi is the director of the Learning and Cognition Lab in the Institute for the Science of Teaching & Learning, Arizona State University. Stellan Ohlsson has been Professor of Psychology and Adjunct Professor of Computer Science at the University of Illinois at Chicago (UIC) since 1996.
Summary:
The authors discuss and provide a framework for complex declarative learning.
> The three structures of complex declarative knowledge are:
Networks: the intuition that everything is related to everything else; Theories: some knowledge elements are more important than others; and Schemas: declarative knowledge represents recurring patterns in experience.
> The seven dimensions of monotonic change in declarative knowledge:
1. Size: cumulative acquisition of pieces of knowledge.
2. Connectedness / density: the density of relations between knowledge elements.
3. Consistency: the degree to which the multiple assertions embedded in an intuitive theory are true at the same time.
4. Granularity level: the level of knowledge representation.
5. Complexity: a change in the schema used to represent concepts when the existing schema is insufficient.
6. Abstraction: given a pre-existing set of abstractions, an object or domain can be re-represented at a higher level of abstraction.
7. Vantage point: a change of perspective.
> The learning paradox: monotonic vs. non-monotonic change. Monotonic change - cumulative and re-representational changes in which the original concepts remain. Non-monotonic change - re-representation of a concept in which the original concept is abandoned.
> Non-monotonic change:
1. Non-learning responses to contradictory information:
- Assimilation
- Abeyance
- Bolstering
- Recalibration
2. Learning responses:
- Transformation via local repairs
- Bottom-up replacement
- Top-down replacement
- Transfer via analogies
- Ontological shift
Questions: 1. Think of a domain of complex declarative knowledge in which you have recently experienced change in your knowledge representation network beyond the accretion of new concept nodes (change in size). Identify whether that change was monotonic or non-monotonic, describe the dimension of change you experienced, and explain whether your reading this chapter shed light on that process of representational change.
2. Knowing what we know about learning and non-learning responses to contradictory information, how might we better motivate people to engage in non-monotonic change for knowledge representations for which they might have a personal, perhaps politically or religiously motivated investment (e.g. climate change or “anti-vaccination”)?
3. Does the fact that declarative knowledge can undergo so many kinds of transformations and be represented in different ways strengthen or weaken the argument that the best way to think about knowledge is using the declarative/procedural division?
4. Why do we have all these barriers to nonmonotonic change? Why isn't it the case that we can just update our knowledge smoothly in the face of more (or better) information?
5. Do you see examples of these kinds of knowledge changes in your own line of research? Are there instances where your research community has struggled with nonmonotonic change (or monotonic change for that matter)?
- (Michael, Nick, and Vish)
Post by sciutoalex on Apr 3, 2016 21:24:20 GMT
So I'm taking Applied Machine Learning with Carolyn Rosé, and I think my learning of this new domain has many of the dimensions of monotonic knowledge acquisition, as well as some of the qualities of non-learning responses that resist non-monotonic acquisition. The size of my knowledge has increased: I know more procedures and terms. I can combine those procedures and terms into simple patterns that I can apply to a variety of problems. The complexity of my ML schema has grown as I've learned these new concepts, and the granularity has increased as I've been forced to do practice problems. I don't think the connectedness of my knowledge or the abstraction level has greatly increased; perhaps those are more advanced dimensions that require a minimum size and complexity of knowledge before they grow. I'm not sure whether my vantage point has changed, though I feel Carolyn would like us to think about how an ML algorithm sees the world. But I don't think she has put it in those words!
Because my knowledge of the domain is neither highly connected nor represented at multiple levels of abstraction, I think it is easier for me to assimilate and evade contradictory knowledge. A piece of information can connect in multiple ways, so with my sparsely connected knowledge graph I can find somewhere to place it without being forced to confront how it contradicts my mental model. In a classroom, where I'm receiving a large stream of information, I can also evade contradictory information by focusing on the knowledge that supports my mental model. For a beginner, that might be a good thing.
I think the question of how this applies to our own learning is an illuminating one. As my short narrative points out, these dimensions are not independent; they may even be hierarchical or causal. I'd like to read more about when in the learning process the different dimensions are activated.
Post by jseering on Apr 3, 2016 22:02:56 GMT
With regard to the second question above-
I went to an interesting seminar last fall that the question reminded me of. The speaker (see Campbell, T. H., & Kay, A. C. (2014). Solution aversion: On the relation between ideology and motivated disbelief. Journal of Personality and Social Psychology, 107(5), 809) was talking about attempts to understand why some people, who in the US tend to be politically conservative, don't believe that global warming is occurring. According to him, the simple model is that these people don't understand the science, so they don't believe the proposed solutions are necessary. His model is somewhat more interesting: these people don't like the proposed solutions, and this dislike of the proposed solutions causes them to be more skeptical of the science. This is a reversal of the proposed direction of causality. In his experiments, he was able to make people more or less likely to "believe" in global warming simply by priming them with a proposed solution that they would be okay with. Conservatives became more likely (and liberals less likely) to "believe" in global warming if they were given a prompt that offered a free-market solution.
I think this work has some really interesting implications for how we approach learning about sensitive issues. It suggests that it's not simply a matter of making the people who don't understand "smarter" by increasing size/connectedness/complexity of declarative knowledge, and it also suggests that the people who already "understand" might not really understand at all; they simply accept what they are told because it matches their political beliefs. This makes the problem a lot harder than simply one of better education.
Post by mkery on Apr 3, 2016 23:13:37 GMT
I was interested that the authors mention multiple anecdotes of scientific progress, and meta-research on scientific progress, such as how the French chemist Lavoisier's conception of chemistry changed on the path to developing a major theory.
I struggle a lot with nonmonotonic knowledge while doing research in an area where, on multiple levels of abstraction, there is very little consistency or agreement among sources. After months of studying my research topic, I still become confused about what to believe. When I read a research paper in my area, much of it is simple, known, monotonic information that strengthens what I've learned before, but I struggle to recall all the intricacies of disagreement between sources, and I struggle to keep track of which sides of those disagreements I can agree with while trying to build a reasonably unified concept of my own research. I'm interested in how this work on complex declarative learning fits with a meta-understanding of research, and with the act of forming schemas/themes out of knowledge with unknown, chaotic, or unstable relationships.
Post by mrivera on Apr 4, 2016 0:58:26 GMT
jseering: On "It suggests that it's not simply a matter of making the people who don't understand "smarter" by increasing size/connectedness/complexity of declarative knowledge...", I wonder what factors contributed to the participants disliking particular solutions. Perhaps the increase in knowledge should focus on the benefits of a particular solution (rather than on the severity of global warming). In this manner, it's still about making people "smarter", but what they need to be smarter about changes.

mkery: On "I struggle a lot with nonmonotonic knowledge while doing research in an area where, on multiple levels of abstraction, there is very little consistency or agreement among sources", sounds like you should do research on Structural Alignment, since "theories of similarity generally agree that the similarity of a pair increases with its commonalities and decreases with its differences".
Post by julian on Apr 4, 2016 2:22:46 GMT
In response to: 1. Think of a domain of complex declarative knowledge in which you have recently experienced change in your knowledge representation network beyond the accretion of new concept nodes (change in size). Identify whether that change was monotonic or non-monotonic, describe the dimension of change you experienced, and explain whether your reading this chapter shed light on that process of representational change.
I'm currently taking Graduate AI, and at the very beginning I thought (and was told by others) that it was mostly a course about search and decision making. The search part was true; the decision-making part, however, turned out to be optimization. Later on it became almost obvious that decision making is really an optimization process, but without having that new concept clear, it is hard to see that optimization techniques, for example linear programming, can be used to solve decision-making problems. Notice how at first I didn't really have "AI" knowledge; nonetheless I had a pre-set-up AI network that initially had only a slight connection to optimization, and later optimization's role in the network became central. This could be seen as a monotonic change, since the connections of an existing node increased, but it could also be non-monotonic, since decision making was almost replaced by optimization.
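To make the decision-as-optimization point concrete, here's a toy sketch (my own made-up numbers, nothing from the course) of a small decision framed as a linear program, using scipy.optimize.linprog:

```python
from scipy.optimize import linprog

# Toy decision: split a 10-hour study budget between two courses to
# maximize total expected score gain (3 pts/hr for course A, 2 pts/hr
# for course B), spending at least 2 hours on each. linprog minimizes,
# so the objective coefficients are negated.
c = [-3, -2]                      # maximize 3*xA + 2*xB
A_ub = [[1, 1]]                   # xA + xB <= 10 total hours
b_ub = [10]
bounds = [(2, None), (2, None)]   # at least 2 hours on each course

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
hours_a, hours_b = res.x
print(hours_a, hours_b)
```

The "decision" (how to split the hours) falls out of the optimizer rather than being enumerated case by case, which is the sense in which decision making becomes optimization.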
2. Knowing what we know about learning and non-learning responses to contradictory information, how might we better motivate people to engage in non-monotonic change for knowledge representations for which they might have a personal, perhaps politically or religiously motivated investment (e.g. climate change or “anti-vaccination”)?
One way this could be done is through Cognitive Behavioral Therapy (CBT), which basically consists of correcting a person's belief model. This correction usually replaces negative thoughts with more positive ones. Straight from Wikipedia: "CBT helps individuals replace 'maladaptive... coping skills, cognitions, emotions and behaviors with more adaptive ones', by challenging an individual's way of thinking and the way that they react to certain habits or behaviors." The therapy process is not so straightforward and involves many sessions (a fast version of CBT takes at least 12 hours, which is about 12 sessions). However, CBT was designed to treat serious mental disorders and may not work for changing political or religious views.
Post by anhong on Apr 4, 2016 3:56:10 GMT
4. Why do we have all these barriers to nonmonotonic change? Why isn't it the case that we can just update our knowledge smoothly in the face of more (or better) information?
I think, similar to machine learning, a non-monotonic change sometimes involves more than a change in the weighting of parameters in the existing model. Instead, new features need to be added to the model, or a completely different algorithm is needed to deal with the new data distribution. So if we understand a set of knowledge in a certain way, the model we've built in our heads might be wrong, and when a new phenomenon comes along, we might need to retrain the model to fit the new data.
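As a toy sketch of that analogy (pure Python, made-up data, not anything from the chapter): a model restricted to its old representation cannot fit a new "phenomenon" by re-weighting alone, but retraining after adding a new feature, a crude stand-in for re-representation, fits it:

```python
# Monotonic change as re-weighting an existing model; non-monotonic change
# as adding a new feature (a re-representation) before retraining.

def fit(xs, ys, basis):
    """Least-squares fit of ys to the given basis functions (normal equations)."""
    X = [[f(x) for f in basis] for x in xs]
    n = len(basis)
    # Build X^T X and X^T y, then solve by Gaussian elimination.
    A = [[sum(X[r][i] * X[r][j] for r in range(len(xs))) for j in range(n)]
         for i in range(n)]
    b = [sum(X[r][i] * ys[r] for r in range(len(xs))) for i in range(n)]
    for i in range(n):                       # forward elimination
        for k in range(i + 1, n):
            m = A[k][i] / A[i][i]
            A[k] = [a - m * c for a, c in zip(A[k], A[i])]
            b[k] -= m * b[i]
    w = [0.0] * n
    for i in reversed(range(n)):             # back substitution
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return w

def mse(xs, ys, basis, w):
    preds = [sum(wi * f(x) for wi, f in zip(w, basis)) for x in xs]
    return sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(ys)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
new_ys = [x * x for x in xs]                 # "new phenomenon": quadratic data

linear = [lambda x: 1.0, lambda x: x]        # old model family: y = a + b*x
w_old = fit(xs, new_ys, linear)              # re-weighting alone still misfits

quadratic = linear + [lambda x: x * x]       # non-monotonic step: new feature
w_new = fit(xs, new_ys, quadratic)           # retrained model fits the data

print(mse(xs, new_ys, linear, w_old), mse(xs, new_ys, quadratic, w_new))
```

The first error stays large no matter how the linear weights are tuned; the second drops to essentially zero once the representation itself changes.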
Post by francesx on Apr 4, 2016 15:49:19 GMT
4. Why do we have all these barriers to nonmonotonic change? Why isn't it the case that we can just update our knowledge smoothly in the face of more (or better) information?

(Speculation) Because "more and better information" is a relative measure. If there is a clear measure of better (i.e., scientifically proven vs. not), maybe people who understand the value of "scientifically proven" can more easily update their knowledge. Or maybe it's because we choose to believe something is right, and we get confused when it turns out not to be. Confusion can lead to frustration (in learning in general), so refusing it may be a subconscious act. I am thinking here in terms of learning subjects, but also in terms of culture change.

5. Do you see examples of these kinds of knowledge changes in your own line of research? Are there instances where your research community has struggled with nonmonotonic change (or monotonic change for that matter)?

I am too new to my field to know much about this kind of thing, but if it counts, I can bring an example from physics. When Einstein came up with the general theory of relativity, a lot of Newtonian physics fans (most of them, actually) were against it. It took a while (cite: Einstein's biography) for the field to change and accept the new theory.
Post by xuwang on Apr 5, 2016 2:44:33 GMT
Question 1: I’ll consider the knowledge we learnt about design thinking and design research a non-monotonic change for me. Previously, I didn’t have a mental representation of design thinking, and reading the papers in the design mini led to a cognitive conflict with my previous understanding of scientific research. For example, in behavioral science/psychology studies, what we usually do is manipulate only one factor, with all other variables controlled, to make a causal claim about the effect of that factor. In design research, however, multiple factors are incorporated at the same time, and the goal is no longer to investigate the effect of one factor but to move users to a preferred state with the designed artifact. I think this is a non-monotonic change which removed my previous misconceptions about design research, and I also think the cognitive conflict actually helped my understanding of the topic.
Question 2 & 4: One way mentioned in the book How People Learn (Bransford et al.) is to help learners reveal their pre-existing knowledge and problems, and help them realize why they’re wrong. I think if we already have a mental model of something, when we come across new information we’ll intuitively fit that piece of information into the existing model, or adjust the model a bit to hold the new information, rather than think about breaking down the whole mental model and constructing a new one. So I think helping people reveal their prior knowledge, for example by visualizing their mental models, and letting them realize which part of their mental model is wrong and why, will help them abandon the previous model and construct a new one.
Also, inquiry-based learning has been found to be more effective than lecturing in a lot of studies, and I think it’s along the same lines. In an inquiry-based learning setting, where learners discover problems themselves and reason about them, it’ll be easier for them to adopt a new model than if someone else tells them what is right.
Post by Amy on Apr 5, 2016 3:25:53 GMT
Question 3: I didn't quite grasp the division between procedural and declarative knowledge. For example, I think that learning how to play a sport is procedural knowledge (I know 'how' to defend against this play). But what if I'm just learning to recognize different plays. Has that now moved into declarative knowledge? (I know 'that' is a zone defense) If that's a proper distinction, then I'm more interested in how procedural knowledge and declarative knowledge relate to each other, how learning one helps learn the other.
But perhaps it's different when we are talking about knowing how to physically do something instead of just mentally knowing?
Post by bttaylor on Apr 5, 2016 3:39:40 GMT
4. Why do we have all these barriers to nonmonotonic change? Why isn't it the case that we can just update our knowledge smoothly in the face of more (or better) information?
I suspect a lot of the barriers to non-monotonic change are there because they are (or were) evolutionarily helpful. If you've managed to survive with the knowledge and practices you already have, then it's probably a safer bet to stick with what you know than to abandon it in the face of anything contradictory. Given an environment where our immediate personal survival is no longer in constant jeopardy, our default responses may not be optimal. It's nice that we can stop and think about our thoughts and evaluate things at a higher level, but it's not surprising that this isn't necessarily our default response. This is why we need robot friends to help calculate probabilities that our personal experiences can't intuitively grasp.
Post by Qian on Apr 5, 2016 3:40:56 GMT
It’s exciting to see Design’s “problem reframing” claim show up in the Cog mini under the name “non-monotonic change”. In the non-learning stage, people examine and calibrate the knowledge landscape; in learning, assumptions are challenged and points of view shift.
In response to Q4 (Why do we have all these barriers to nonmonotonic change? Why isn't it the case that we can just update our knowledge smoothly in the face of more (or better) information?): I think it’s partly because not all new information instantly fits into the existing mental model/knowledge landscape. However, I also doubt that fitting it in should be the goal learning/education pursues. Does nonmonotonic change necessarily lead to “learning”, or merely to an updated point of view? Do nonmonotonic changes hurt the “fresh eye” of the learner (aka creativity, cross-domain analogy, etc.)?
Post by stdang on Apr 5, 2016 4:21:08 GMT
There is an interesting interaction between knowledge and identity that complicates some of the questions you put forth. Beyond the effort required to undergo non-monotonic learning, some knowledge is privileged differently from other knowledge. I remember a study about how well individuals learn new knowledge when it requires revising existing knowledge that pertains to their sense of self. If an individual who is a devout Orthodox Christian is taught a new fact about the origins of man, and this fact is noticed as contrary to their own beliefs while also directly activating their whole knowledge representation of God and religion, then the individual is more likely to give the new information lower weight than their existing knowledge and thus not see a reason to settle the contradiction. If my memory serves me right, this mechanism is a major barrier in some of the scenarios of knowledge inertia that you mention in question #2.
To extend this idea further and answer question #4, it is likely that this form of knowledge inertia is a powerful survival mechanism. Performing non-monotonic learning likely requires a high energy expenditure, and this energy is costly. Not all contradictions are important enough in our lives to merit the expense of revising our mental structure. In this way, we can run into contradictions all day with little to no consequence. When a relevant circumstance impacts us sufficiently, we will take notice and likely be motivated to spend this energy in order to reduce the chance of a future recurrence of the issue.
Post by Cole on Apr 5, 2016 5:01:21 GMT
So after reading the other papers and finding that schemas, examples, and analogies help with ideation, this paper got me thinking about how we might want to use the properties of learning and knowledge to help with that process. If it is true that knowledge is represented as domain specific semantic networks and that connectedness can increase among that network, would trying to blend domains be more likely to create cross-domain schemas?
I remember reading a few years ago that when you study flash cards for classes, you shouldn't do a session with just Math cards and a session with just Physics cards, etc. Having sessions across subjects (supposedly) helps you learn to transfer knowledge from one domain to another, since you are encoding the information together in time. I wonder if that is a good idea for research as well. If I read my machine learning papers right after my cog mini papers, will I be more likely to have an insight about how machine learning architectures resemble some cognitive models? What about the other way around?
On the other hand, what happens when they conflict? If my computer vision algorithm behaves in a way that is dissimilar to a biological model of vision, that doesn't seem like a problem, but it sure as hell bugs me.
Post by fannie on Apr 5, 2016 5:01:31 GMT
I think it’s easier to fall back on existing schemas than it is to rewrite what you know, especially if you’ve been taught something your whole life or most of the people around you, or people you trust, believe it. It’s also easy to just take what you heard from a few people as “truth,” either because you don’t have access to many sources or because it’s hard to process information from all the different sources. To motivate people to engage in non-monotonic change, perhaps a dramatic way to address responses like abeyance and bolstering is to continuously bring the contradictory information to attention so that it can’t be postponed, while limiting access to supporting evidence. We could also try to decrease the use of personalization/preferences (the filter bubble) to encourage exposure to diverse viewpoints.