Post by nickpdiana on Mar 20, 2016 3:05:30 GMT
Quick Recap of the Major Points
- Cognitive Tutors use and inform evidence-based curriculum
- Cognitive Tutors are built on ACT-R work (Procedural and Declarative Knowledge)
- Complex tasks can be decomposed into "Knowledge Components"
- Model tracing can reveal when there's a discrepancy between what the teacher wants (and thinks) their students are doing, and what they're actually doing
- Cognitive Tutors can also monitor student performance and adapt instruction to a specific student's needs
- We can analyze learning curves to define Knowledge Components
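On that last point, here's a rough sketch of what a learning-curve analysis might look like: if error rates for a hypothesized KC fall as a smooth power function of practice opportunities, that's evidence for the KC decomposition. (Made-up numbers below, just to illustrate the idea; not data from the paper.)

    # Sketch of a learning-curve check for a hypothesized KC:
    # a good KC should show error rates falling as a power function
    # of practice opportunities. (Illustrative data, not from the paper.)
    import numpy as np

    opportunities = np.arange(1, 9)
    error_rate = np.array([0.55, 0.38, 0.30, 0.26, 0.23, 0.21, 0.20, 0.19])

    # Power law error = a * opportunities**(-b) is linear in log-log space.
    slope, intercept = np.polyfit(np.log(opportunities), np.log(error_rate), 1)
    print(f"decay exponent b = {-slope:.2f}, scale a = {np.exp(intercept):.2f}")

A flat (non-decaying) curve for some hypothesized KC would suggest the decomposition is wrong and the KC needs to be split or redefined.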
Some Thoughts
In general, I'm a fan of thinking about learning in this way, but a critic might ask: What's missing in ACT-R? For example, consider the following line from the paper:
"...education is most efficient when it focuses students most directly on those individual knowledge components that have relatively low strength..."
It certainly makes logical sense that the most efficient learning happens when the focus is on strengthening weak skills, but imagine, for a minute, that we're talking about a human student rather than an information processor. This certainly doesn't seem like a good time for the student. In this optimal model, the student is experiencing almost constant failure. The successful completion of a problem is inefficient (though necessary for assessment) because the student isn't learning anything new (success = mastery).
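(For reference, the "strength" in that quote is formalized in ACT-R as base-level activation, which grows with practice and decays with time. Quoting the standard equation from the ACT-R literature, not this paper:

\[ B_i = \ln\Big(\sum_{j=1}^{n} t_j^{-d}\Big) \]

where \(t_j\) is the time since the j-th practice of item i and d is a decay parameter, conventionally around 0.5. Under that model, low-activation items are indeed where practice buys the most, which is the logic the quote leans on.)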
I wonder if there's some benefit to giving students time to rest on their laurels. Maybe even to encourage students to admire how far they've traveled before forging onward? Though it might also be the case that a carefully calibrated challenge is reward enough.
Are there other ways an ACT-R framework falls short of the "real thing" in education?
Post by sciutoalex on Mar 21, 2016 13:32:56 GMT
Seeing the essentially flat correct-answer curve, I had a similar set of questions to yours, Nick. How does it make students feel to progress and become smarter, but have no sense of reward or acknowledgement of their progress? I hope the cognitive tutor does give students a sense of their real improvements, and I assume the authors didn't feel such an implementation detail was worth mentioning. In some ways, constantly increasing challenge would build students' capacities for extended focus, working through difficulties, etc. These are valuable skills, but they can't be reduced to "knowledge components," can they? So it's funny: a highly cognitive model of learning might also be doing much more teaching than just skill achievement.
I also connected this reading to Bjork's remembering and forgetting paper, because it seems that Cognitive Tutors work on the basis of strengthening the information retrieval process. By seeing many instances, the student strengthens the retrieval mechanism while also abstracting the process from specific instances. I wonder how Cognitive Tutors could use Bjork's insight that forgetting can be useful to learning. Bjork mentions that changing locations is a kind of forgetting, but most elementary-school learning occurs in a single room that only changes once per year. Would using the cognitive tutor in different locations increase students' retention of skills? What if recess were twice as long but the tutors were used outdoors?
Could cognitive tutors be used for non-cognitive learning? Has that been looked at? Can a cognitive tutor teach me to procrastinate less or have more determination to work through difficult problems or situations? Could it teach me to be more comfortable in social situations?
Post by stdang on Mar 21, 2016 16:51:40 GMT
I think you raise an interesting point: Cognitive Tutors focus strictly on cognitive skills, and while those are very important, they are not the only component of a student, or of any person for that matter. Look closely while you are directly tutoring someone and you will quickly realize that mastery of the material isn't the only thing you are paying attention to in deciding what to say or do next. Students are emotional and have complex, highly variable personalities. Some students would prefer it if you let them bang their head against a wall and fail repeatedly rather than accept help. Other students like to persevere a little before asking for a hint, and then there are those students who just want the answer and want the assignment to be finished. Each of these students could have similar mastery profiles according to a cognitive tutor at some point in time, but the tutor's ability to push the rate of their learning most efficiently will likely vary because of its inability to detect and accommodate the affective state and personality of the students.
This is one of the major criticisms of cognitive tutoring paradigms that I run into most often. Computers (and robots) are frequently viewed (and portrayed) as cold calculating machines. Human teachers, on the other hand, are thinking, feeling beings who can detect and adapt to the social and emotional needs of a student. Learning is frequently perceived as inseparable from the social and emotional components of learning activities, so it is interesting to look at how the next generation of cognitive tutors can incorporate these softer human elements into their cold calculating algorithms to bridge the gap and drive more acceptance of intelligent tutoring systems.
Post by JoselynMcD on Mar 21, 2016 19:23:42 GMT
I'm so completely out of the world of intelligent tutoring systems that I feel somewhat out of sorts, so please bear with me in this post.
Nick raises a lot of the same questions that I had when reading this article. Namely, I am left considering how the responsiveness of the ACT-R system to one's needs as a learner could fall short of one's needs as a human student. I quite relished the periods when I felt really strong in an academic area that generally challenged me, like, say, Calculus, and those periods when I was among the leaders in class in understanding a topic helped buoy my self-esteem during the stretches when I felt less capable. While the paper does address some of the personality factors present in students, I still fear that a system like the one described could be the most effective at teaching me new material, yet not fully access my humanity and the social psychological factors therein.
Having just reviewed Bjork's article, I, like Alex, was struck by the thought of how some of the principles of his work re: learning, forgetting, and remembering could be incorporated into the ACT-R model in a way that alters situational factors so that forgetting could more easily be a conduit for remembering. I liked Alex's suggestions about altering the location where the student interacts with the intelligent tutor. From a design perspective, I'm curious whether temporal factors (i.e., time of day), avatar and general aesthetic modifications, tasks accomplished or games played, and the general demands of the interaction could all be manipulated in an attempt to encourage forgetting and re-learning/salience of information.
Post by mmadaio on Mar 21, 2016 21:28:54 GMT
I agree with what others have said about the frustration of students who are receiving problems whose difficulty is perfectly tailored to their level, so that they will always be "struggling". On one hand, there might be a solution within the cognitive tutor, such that students could, not just relax, but be given Knowledge Components they had mastered much earlier, to engage in retrieval and promote recall of those concepts, facts, or procedures (rough sketch of such a policy at the end of this post). Alternatively, students who have mastered one set of KCs could be paired with students who have not yet mastered that set, to act as tutors on those KCs on the days when they are not using the cognitive tutor. This would allow students who have "mastered" the procedural problem-solving steps of a KC to reinforce that mastery by being forced to articulate, explain, or restructure their understanding of it through explaining it to a peer. Note the difference in student performance on the perhaps more conceptual SAT vs. their performance on problem-solving questions after using the Algebra I tutor (on page 252).

Finally, a solution might lie in changing our expectations of what "successful" learning looks like. This is obviously much harder. We reward proficient performance in schools - with grades, graduation, scholarships, etc. - but rarely do those assessments reward students' consistency of performance over time, or the depth of their conceptual understanding (being harder to assess), or their ability to transfer that mastery to new contexts or analogous concepts. If you think about the way musicians practice, they DO practice mainly the skills they're weaker on. Of course fundamentals like scales are important, but when expert musicians practice them, it's either to reinforce them through practice to stave off forgetting, or as component skills to be integrated into a more complex skill. It's almost a truism in music education that if you're not practicing the skills you're weak at, you're wasting your practice time. But with high-stakes assessments in the classroom, and even formative assessments being used as class grades, practice is often conflated with performance - so students who should feel no stigma about their lack of mastery are punished for not being proficient at the very skills they are actively working on.

I'm also left wondering what teachers do the other 3 days a week when they're not using this. How do teachers incorporate their more fine-grained knowledge of student mastery into their subsequent teaching, if at all?

To the point about developing a socio-emotional skills (SES) or metacognitive tutor, some work is being done on that, but I'm not sure how much I would call the models we have so far for SES (curiosity, rapport, grit, etc.) "cognitive" models rather than just computational models of those skills or phenomena. For metacognition, check out Azevedo's 2002 paper on this: tinyurl.com/zdv4k3k
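Here's the sketch I mentioned: a toy selection policy that mostly targets the weakest KC but occasionally re-injects long-mastered KCs for retrieval practice. (The policy, thresholds, and data are all hypothetical, not from the paper.)

    # Toy policy: usually target the weakest KC, but occasionally
    # re-inject an already-mastered KC for spaced retrieval practice.
    # (Hypothetical policy, thresholds, and data; not from the paper.)
    import random

    mastery = {"slope": 0.35, "intercept": 0.96, "graphing": 0.55}
    MASTERED_AT = 0.95   # illustrative mastery threshold
    REVIEW_RATE = 0.2    # fraction of opportunities spent on review

    def next_kc(mastery):
        mastered = [kc for kc, p in mastery.items() if p >= MASTERED_AT]
        if mastered and random.random() < REVIEW_RATE:
            return random.choice(mastered)    # retrieval practice
        return min(mastery, key=mastery.get)  # weakest KC otherwise

    print(next_kc(mastery))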
Post by judithodili on Mar 21, 2016 22:20:31 GMT
There is a ton of Cognitive Tutor research out there on how and when to encourage students, including research showing that tutors that use a polite tone produce better learning effects. It seems like a "duh", but the point is that the way some tutors are designed already takes encouragement, reinforcement, and cognitive load into consideration in how students are tutored. I personally have no problem with breaking content down into knowledge components, because I liken cognitive tutors to real tutors (e.g., after-school supplemental tutors/homework assistants) rather than general education administrators. The function of a tutor is to fill in knowledge gaps in a way that encourages the student to master the material and be interested in learning more, and Cog. Tutors do that fairly well.
My issue with ACT-R-based and other Cog. Tutors in general is that they are designed with one kind of student in mind. There is current research showing that the way students interact with tutors is largely dependent on their cultural dispositions toward asking questions, working independently, and feeling comfortable with failure. This leads to the tutors being used in ways other than those they were designed for (which messes up the data the teacher gets about the areas students need to work on), and it also doesn't take into account the cultural diversity of the students who attend American schools. Like everything in HCI, the answer to how these tutors can support students' needs depends on a variety of factors, and I don't think they do nearly as good a job supporting this diversity. This may make them not only detrimental to some students' learning, but may also give teachers false information about the areas they need to work on with students to close their knowledge gaps.
Post by mkery on Mar 21, 2016 22:24:01 GMT
If cognitive tutors can be criticized, as Steven and others bring up, as "cold calculating machines", it's interesting that the primary metric for their success is students' scores on standardized tests, also perhaps the cold calculating side of education. In figure 1 (pg 215), the simple black-and-white image of a math problem is reminiscent of the GREs. I'm very interested in discussing, as others have brought up, how the design and environment of these cognitive tutors affect their role in the classroom. For example, there is a textbook that goes with the software. Do students have the book in front of them during computer labs, so that they can flip back in the book for reference as they would when doing their homework at their kitchen table? Is the book engaging and colorful, or plain text like the screen? Or are they immobile in their chairs, staring at a minimalist screen that looks like a GRE or an SAT, essentially an exam? To what degree are students practicing exams?
Cognitive Tutors and the study of KCs seem to provide effective empirical evidence about learning. An obvious direction, but I'm curious how presentation matters: if students can see their own KC trajectory visualized and reflect on it, do they feel more motivated to keep struggling through their weak points when they see a clear path?
Post by anhong on Mar 21, 2016 22:49:59 GMT
This paper demonstrated a model of the organic relationship between basic research, applied research, and field testing. With a theory behind a system, people can better understand how it works, why it works, and why it doesn't. Hypotheses can be generated from the theory and tested in the field. However, I feel that not all research in HCI embraces this model. Outside of learning sciences and social computing, I think very few theories are being actively used, generated, or tested in technical HCI research. Is this a problem? Why aren't there many besides Fitts's law and GOMS?
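Fitts's law is a good illustration of what such a theory buys you: a quantitative, falsifiable prediction. In the common Shannon formulation (quoted from the HCI literature, not from this paper),

\[ MT = a + b \log_2\!\Big(\frac{D}{W} + 1\Big) \]

where MT is movement time, D is the distance to the target, W is the target width, and a and b are empirically fitted constants. Very few of the theories we use in technical HCI make predictions that crisp.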
About the point of "Maybe even to encourage students to admire how far they've traveled before forging onward? Though it might also be the case that a carefully calibrated challenge is reward enough?", I think there's a difference between motivation and learning. Motivating students to keep them learning is certainly important. But in the moments when students are admiring how far they've traveled, they are not learning much. Instead, I agree with the paper's point that learning happens most efficiently when strengthening weak skills.
Post by xuwang on Mar 21, 2016 23:16:32 GMT
Hi all, I'm another discussion leader for this paper, and sorry for being super late with the post. I enjoyed reading people's discussion of the cognitive tutor, and I'll briefly summarize what has been discussed so far to frame the discussion that follows.
One major critique of the cognitive tutor in previous posts is that it acts like a "cold calculating machine": it's able to provide students with challenging questions, but a lot of the time it overlooks important social and emotional factors in the process of learning. As Michael has pointed out, in musical education people mostly practice the skills or parts that they are weakest on. I think that, similarly, in an educational context where the goal is very specific and clear (for example, learning to swim, to play a piece of music, or to drive), the most efficient way is to practice the knowledge components we're weakest on. But in classroom education, where the end goal is not only to master a certain set of problems, we'll need to incorporate more humanity and social-emotional factors into these tutors.
There has been previous work on using cognitive tutors for collaborative learning, in which a pair of students use the cognitive tutor together and are asked to discuss, collaborate, and complete the tasks jointly. I think this line of work helps address some concerns about the social-emotional factors of using cognitive tutors. I've also read previous work finding that if students are allowed to choose problems for themselves, they tend to choose problems they've already mastered, which helps explain why using cognitive tutors to emphasize the weakest knowledge components is important.
Other questions that have been brought up include:
1) What are the teachers' roles in implementing cognitive tutors? Do cognitive tutors have some effect on instructors' pedagogy?
2) Are these cognitive tutors preparing students for exams only? How are they used and implemented in classrooms?
3) How effective are cognitive tutors at teaching non-cognitive skills, e.g., collaboration, consistency, self-esteem?
4) Would it be effective to provide students with a clear view of their knowledge-component mastery trajectory (through visualization)?
5) Cognitive tutors treat all students as the same kind of student; how can we better address individual differences when designing these tutors?
Post by kjholste on Mar 21, 2016 23:39:36 GMT
* In one of the comments above, Alex Sciuto said “In some ways, constant increasing challenge would increase students' capacities for extended-focus, working through difficulties, etc. These are valuable skills, but these skills can't be reduced to "knowledge components," can they?”. But in some work on intelligent tutoring systems that support students’ acquisition of metacognitive and self-regulated learning skills, these actually are modeled as (reduced to) “knowledge components” (then again, one might argue – maybe post-hoc – that these were simply intended as ‘useful’ models and needn’t have any psychological reality).
* Nick asked: 'Are there other ways an ACT-R framework falls short of the "real thing" in education?' ACT-R has always struck me as an extremely flexible framework (not necessarily in a good way). So, as a follow-up question for discussion: Is it possible for the ACT-R framework itself to ever actually fall short of the "real thing" in education? Or is it always the case that particular models are just "incomplete" and need refinement/extension?
* "...education is most efficient when it focuses students most directly on those individual knowledge components that have relatively low strength..."
In general (focusing only on "cognitive" factors), I don't think this claim (at least, the use of the word "most") is particularly well established. It might imply that an optimal problem selection algorithm (optimal, let's say, in terms of learning gains per time spent, as measured by some pretest and posttest that are reasonably well aligned with an intelligent tutoring system's curriculum) would, at any decision step, select a problem targeting a student's weakest knowledge components (KCs). This has been operationalized in various ways in intelligent tutoring systems, such as "always select the problem with the lowest average estimated probability of mastery, across all within-problem KCs", sometimes under the label of "cognitive mastery".
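To make that operationalization concrete, here's a minimal sketch of the "lowest average estimated mastery" selection rule (hypothetical KC estimates and problem mappings; not any actual tutor's code):

    # Minimal sketch of 'cognitive mastery' problem selection:
    # choose the problem whose within-problem KCs have the lowest
    # average estimated probability of mastery.
    # (Hypothetical data; not any actual tutor's implementation.)

    mastery = {"slope": 0.35, "intercept": 0.80, "graphing": 0.60}

    problems = {
        "p1": ["slope", "intercept"],
        "p2": ["graphing", "intercept"],
        "p3": ["slope", "graphing"],
    }

    def select_problem(problems, mastery):
        def mean_mastery(kcs):
            return sum(mastery[kc] for kc in kcs) / len(kcs)
        # Lower mean mastery across a problem's KCs => selected first.
        return min(problems, key=lambda p: mean_mastery(problems[p]))

    print(select_problem(problems, mastery))  # -> 'p3' (weakest pair of KCs)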
One element that’s absent in many models of student learning that draw upon ACT-R is a model of human ‘forgetting’/decay (though there is much research in the learning sciences supporting spaced practice, and some ITS work that directly responds to this work). I think this may be in part because early intelligent tutoring systems were designed to be used over relatively short time scales, where the effects of forgetting were assumed to be negligible. Under a naïve problem selection algorithm that targets KCs with low estimated mastery, the system will present KCs with high estimated mastery less and less as time progresses, and thus will have fewer and fewer opportunities to re-measure mastery (in a context where humans forget, this will cause the student model to diverge from reality). Separately, this assumption does not capture the potential importance of more complex/higher-order sequencing effects (e.g. potential benefits of interleaved practice for concept learning, and potential prerequisite structure among KCs… even though there is no “transfer” between KCs, by definition).
Post by fannie on Mar 21, 2016 23:41:32 GMT
Thanks Xu for the summary of the discussion!
I would like to see how the cognitive tutor could be applied beyond math, or beyond SAT-testable subjects. The authors make a point about having to integrate skills into more complex tasks, which I appreciate because they try to choose real-world examples, and kids so often ask what the point is of learning something like math. But I'd be interested to see how successful the tutors are in more complex applications like physics or engineering. I would hope that they're being careful not to fall into the trap of just tutoring for exams, because I know in the past I've learned/memorized things just for the SATs and APs that I forgot fairly soon after.
I would hope that teachers are keeping track of students' progress with the tutors as well. It would be useful if they had some means of analyzing it, or suggestions for how to incorporate it into their curriculum. If the tutor prepares students for mastery as measured by exams, maybe it's the teacher's role to fill in broader knowledge (like stories told in history class, rather than the factual information that would appear on an exam). Or are there topics for which they only need to rely on the tutor?
I'm also interested in the point about tutors for non-cognitive learning that Alex brought up. For social interactions specifically, I know there's social skills training with virtual agents (e.g., MACH at the MIT Media Lab, public speaking/interview simulations), but those focus more on sensed information. Perhaps sensed information related to cognition (e.g., brain activity) could be incorporated into these tutors.
Post by jseering on Mar 22, 2016 0:11:35 GMT
I'm interested in the way many of us seem to be trying to fit the role of the tutor in with or in place of the traditional role of the teacher, and we seem to be comparing this sort of simulated problem-based education with the "real thing," which is something more applied (e.g. Fannie's real world examples). This seems like a definitional problem to me, like what we talked about with emotion. If we define emotion as something ephemeral or somehow uniquely human, machines won't ever have it, but if we describe it as something directly procedural then machines already do have it.
In the same vein, if we talk about learning as something semi-spiritual (growing as a person, etc.), it's much harder to understand whether cognitive tutors are effective. In order to create an effective teaching tool, we really need a better definition of what learning is in the sense that we mean it in the classroom. If learning is answering questions objectively correctly, then cognitive tutors can teach. If learning is growing as a person, we probably need teachers. It's a complicated question, but we should at least try to question our assumptions.
Post by francesx on Mar 22, 2016 1:18:19 GMT
Improving instruction and improving student learning are two different things, related on some levels and not on others. Instruction can (and ideally should) help student learning; however, we can have student learning without good instruction, through Cognitive Tutors and other intelligent tutoring systems. Note that by instruction I mean teacher instruction: a human teacher teaching in the classroom environment. Improving instruction on its own is not the duty of a Cognitive Tutor; rather, the information a Cognitive Tutor possesses can be used to guide and improve instruction (through other means such as tutor reports or a dashboard system).
I am somewhat familiar with the Carnegie Learning platform, and have a little experience with teachers using it, so replying to Mike's question:
"I'm also left wondering what teachers do the other 3 days a week when they're not using this? How do teachers incorporate their more fine-grained knowledge of student mastery into their subsequent teaching, if at all?"
Some of them don't. The other 3 days a week are spent off the tutor, in the classroom: pure lecture or instruction that follows and builds on the CL curriculum. To my knowledge, the tutor complements and is complemented by the classroom lecture, at least from what I have seen.
Post by nhahn on Mar 22, 2016 2:29:37 GMT
Following up on Joseph's point regarding the frighteningly deep question of "what is learning", I somewhat wonder about the ability of cognitive tutors to successfully produce true learning. While there is learning for the sake of standardized tests, etc., as mentioned by others, I view the benefit of learning as being able to apply learned concepts to unsolved problems in the world. This is somewhat in line with what the Bjork article put forward as "learning". So, to truly learn concepts, can cognitive tutors provide the variety needed to form both a strong and a fully formed abstraction? My fear is that cognitive tutors suffer from the same problem as traditional learning, introducing and recalling concepts in the same environment over and over (like the type of questions asked, etc.). Do alternative learning environments and instruction (like project-based learning) offer benefits on this front? I feel like there might be literature on this, but I don't do learning science research XD.
Post by amy on Mar 22, 2016 2:41:17 GMT
In response to the thread about cognitive tutors being "cold calculating machines" without emotions, and to push back on Judith's comment just a little, there's been research showing that students respond better to tutors that don't use polite language, but instead are slightly snarky and tease them in a friendly way. It's interesting that students learn more from the machine when the machine tries to act like a human.

I'm also wondering about teachers, and how cognitive tutor researchers are taking their opinions and needs into account. The authors stated that cognitive tutors succeed or fail based on how well the teacher can integrate them into the course. The paper mentioned wanting to incorporate teacher needs with regard to the curriculum, and offering teacher training, but I can't help thinking that a teacher would need more than an appropriate curriculum to really know how to integrate a tutor successfully.