Post by mkery on Apr 10, 2016 3:19:11 GMT
(post is collaboration between Steven, Julian, and MaryBeth)
Authors:
Stuart Card: was a Senior Research Fellow at Xerox PARC; PhD in Psychology from CMU.
Thomas Moran: Distinguished Engineer at the IBM Almaden Research Center in San Jose, California.
Allen Newell: was a researcher in computer science and cognitive psychology at the RAND Corporation and CMU; PhD from CMU-Tepper.
The chapter presents a simplified model of human cognition, the Model Human Processor, specifically meant to inform human-computer interaction. It is primarily focused on "human performance", e.g. Fitts's Law.
Summary: In this reading the human mind is approximately described and modeled as an information processor composed of three subsystems: perceptual, cognitive, and motor. Each subsystem has its own processor and memory, and each has its own characteristic set of parameters (i.e., no subsystem is simply a copy of another). Any given human activity may engage any subset of the subsystems, and which subsystems are engaged affects the activity in many respects, such as how long it takes to execute and how accurate the execution is (if execution happens at all).
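To make the model concrete, here is a minimal sketch (ours, not the authors' code) of the timing arithmetic the chapter uses, with the typical parameter values it reports; the task breakdowns in the comments are our illustrative readings of the chapter's examples:

# Typical cycle times from the chapter (the "Middleman" values), in ms;
# the book also gives ranges: roughly 50-200, 25-170, and 30-100.
TAU_PERCEPTUAL = 100  # perceptual processor cycle time
TAU_COGNITIVE = 70    # cognitive processor cycle time
TAU_MOTOR = 70        # motor processor cycle time

def simple_reaction_time():
    """One cycle of each subsystem in series: perceive the stimulus,
    decide on the response, press the key (the chapter's ~240 ms)."""
    return TAU_PERCEPTUAL + TAU_COGNITIVE + TAU_MOTOR

def min_animation_rate():
    """Frames that arrive within one perceptual cycle fuse into
    apparent motion, so ~1000 / TAU_PERCEPTUAL = 10 frames/s suffices."""
    return 1000 / TAU_PERCEPTUAL

print(simple_reaction_time())  # 240 ms
print(min_animation_rate())    # 10.0 frames per second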
Questions: (pick among the following or propose your own)
1. This reading does not explain how the 3 different processors in the Human Information-Processor are coordinated to work together (either in series or in parallel). How would you account for this coordination processor? Is it even necessary, or can each processor just communicate and coordinate directly with the others?
2. This work seems to focus on minutiae like memory storage and reaction times. Can you point to places where this is important in HCI research today? Do you think the reasons this was important in the 1980s have shifted as HCI and computers have evolved further?
a. What strengths and/or problems do you see in using this model of human processing for the design of HCI systems or the investigation of HCI phenomena?
b. Does a general model lead to improper predictions/findings that a more detailed model might not?
3. With the 3 systems given, do you believe this is a sufficient representation of human processing? Is it a sufficient representation just for application to human-computer interaction?
a. What limitations, if any, do you see? Are they general, or domain/application specific?
b. If someone could give you an updated textbook Human Model of Cognition, simplified specifically for application in HCI today, what aspects of cognition would you want this model to focus on?
c. How do you feel about the human mind being modeled in very computer-centric language like "processor" or "cycle time"? Is this helpful? Limiting?
4. In the human information-processor model there are three processors, yet none seems to deal with affect. Why is that?
5. This model is based on older findings in cognitive science, and even at the time of writing the authors admit to some possible alternatives. Do you find that the implications of some of the alternatives they present (alternative models of memory, box vs. depth of processing, etc.) would lead to different predictions or constraints within your own research?
Post by nhahn on Apr 11, 2016 21:23:37 GMT
One of the things I always worry about with these types of models is the parameter estimation step. A lot of these models incorporate a number of parameters that the model is then trained to fit for a particular scenario. After going through ARM, most people should be familiar with the problem of model fitting, specifically overfitting your data. When combining all of these "processors" together, I feel like this could become a huge problem in a model like this. Because there are so many different variables you are trying to account for with the different cognitive systems, the model could be made to agree with pretty much anything you feed it. However, I also see the value in a tool like this, because it could enable you to predict how well a particular interaction might work. I'm unsure how you strike a balance between these two; maybe someone who knows about complex cognitive models could tell me?
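To illustrate what I mean with a toy example (generic curve fitting, nothing specific to this model): give a model as many free parameters as data points and it will "explain" the data perfectly while generalizing terribly.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 8)
y = 2.0 * x + rng.normal(0, 0.1, size=x.size)  # truly linear process + noise

# 2 free parameters vs. 8 free parameters (degree-7 polynomial)
lin = np.polyfit(x, y, 1)
poly = np.polyfit(x, y, 7)  # passes through all 8 training points exactly

x_new = 1.2  # probe just outside the range the models were fit on
print(np.polyval(lin, x_new))   # near the true value, 2.4
print(np.polyval(poly, x_new))  # typically far off: flexibility != truth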
Post by Anna on Apr 11, 2016 21:55:43 GMT
In response to #2: I don't know how much of this is important in HCI research today; in HCI design and implementation, yes. But researchers aren't really grappling much with Fitts's Law anymore, are they? Or worrying about reading rate, keystroke time, etc. Well, actually, maybe I take this back: in the FIG lab and other areas that look very directly at physical ways of interacting with computational devices, this kind of stuff is still pretty research-relevant.
I think we've reached a stage where a lot of the findings are pretty settled, though, to the extent that when people design systems now, whether for research or commerce, they can follow a set of design rules based on what we know about human processing/performance in relation to computers. But at the same time, this could pigeonhole people/researchers into particular ways of designing for and thinking about computer systems that don't consider more complex and individualized models of cognition and performance.
Post by jseering on Apr 12, 2016 0:30:27 GMT
@anna, I think that's kind of the key question here. How important is this to HCI research today? I don't think our group has many people grappling with fundamental cognitive processes on the level of Fitts's Law, but is that because this approach has gone out of fashion or because we've moved on to more complicated cognitive processes? In this mini we're talking a lot about these sorts of models, but I don't think it's specifically for the purpose of iterating on them as much as for the purpose of considering how different models of the mind can inform both questions about the world around us and designs to improve it. It's certainly possible that one of us will come up with the next Fitts's law, but I think we're all more likely to take the existing version of Fitts's law (or a similar law) and mash it together with some social bits or some design bits to make something more uniquely HCI.
Post by toby on Apr 12, 2016 0:56:05 GMT
I agree with Anna that such attempts to use a model to describe the human mind and the whole information-processing process can over-abstract reality and potentially limit the design of computer systems. It's a bold assumption that the way humans interact with future artifacts follows the same theory derived from observations and experiments on how humans interact with existing artifacts. In fact, I think this effect may affect academic research more than R&D in industry, as it's not unheard of for new commercial products to succeed and get adopted by users even though their design contradicts the "best practices" suggested by previous literature.
One thing I think is worth discussing: in academia, how should we encourage work that evaluates existing theories or models in new domains? For my research, I would be a little hesitant to just take a conclusion about mobile user interaction from a paper published in '05, or even '10. But there is little incentive for anyone to replicate the study, because it's hard to get the results published if the conclusion is that the old conclusion still mostly holds true.
Take Fitts's Law: I've seen work evaluating it in the context of touch interaction or gaze interaction. I would really like to see similar work for other existing theories and new domains.
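For instance, the Fitts's Law evaluations I've seen typically fit the Shannon formulation to movement-time data and compare the fitted constants across input modalities. A rough sketch (the a and b values below are invented for illustration; real ones come from regression on study data, fit per device):

import math

def fitts_mt(distance, width, a=0.05, b=0.15):
    """Shannon formulation (MacKenzie): MT = a + b * log2(D/W + 1).
    a and b are in seconds and are hypothetical here; they would be
    fit separately for mouse, touch, gaze, etc."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# e.g., a 48 px touch target 600 px away
print(fitts_mt(600, 48))  # ~0.61 s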
Post by francesx on Apr 12, 2016 2:04:24 GMT
I first thought the link to the paper was the wrong one; with so many equations it reminded me of quantum physics... On a more serious note, I am not totally convinced by P6, the Power Law of Practice (2.4, page 57). I do understand the keystroke example the authors mention, and in general this is something that more or less each of us has seen and experienced in our own lives. What I do not really agree with is calling it a "law". Are the authors saying that this law will hold in any case or scenario? I can think of many scenarios where a learner is doing the task according to the law but still not "learning" or "understanding" it, so the task should be altered or represented differently to the user (more scaffolding, preceding it with a similar easier task, etc.).
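For reference, P6 just says that the time to do the task on the n-th trial falls off as a power of n. A quick sketch of what it predicts (the first-trial time is an arbitrary example; the exponent is the chapter's typical value):

def practice_time(trial, t1=10.0, alpha=0.4):
    """P6, Power Law of Practice: T_n = T1 * n**(-alpha).
    t1 is the first-trial time (an arbitrary 10 s here); the chapter
    gives alpha ~ 0.4, with a range of roughly 0.2-0.6."""
    return t1 * trial ** (-alpha)

for n in (1, 10, 100, 1000):
    print(n, round(practice_time(n), 2))  # 10.0, 3.98, 1.58, 0.63

# Note the law only predicts that people get faster with practice;
# it says nothing about whether they understand what they are doing.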
On a more general note, it is kind of interesting to entertain the idea that we are no more than "machines" operating on certain laws or principles. If we had a bunch of formulas that defined us and our behavior, it would be so much easier for HCI research to come up with ways to support "us" and our lives/behavior/etc.
Post by cgleason on Apr 12, 2016 2:34:36 GMT
> 1. This reading does not explain how the 3 different processors in the Human Information-Processor are coordinated to work together (either in series or in parallel). How would you account for this coordination processor? Is it even necessary, or can each processor just communicate and coordinate directly with the others?
I wouldn't account for this coordination processor, as it doesn't seem necessary. Since the coordination is likely entirely internal, it ought to have few measurable attributes, and the ones you could measure are likely already accounted for in the other processors. It also doesn't seem as important for designing systems: anything we do to affect the person has to go through the other processors anyway, so those will likely have more impact.
Now, it would be interesting to find out the coordination cost and behavior simply for curiosity's sake. I am not convinced it is really a distinct processor in the human mind, however, so the model might be incorrect. The models we humans create to explain these phenomena are overly abstract and simplistic. It's likely that coordination is really a mess of neurons spanning all of the different systems. Even with an accurate map of the brain, we may never be able to comprehend the nuances of its behavior (try debugging even a simple neural net).
Post by judy on Apr 12, 2016 3:08:54 GMT
I probably don't have enough context to talk about this paper. It feels like an exercise of its day. It was really important then to model the human mind as an information processor. It was important to try to simplify (as the chapter admits in its first sentence) the human mind into these tiny operations and tidy processing categories. There was a need to try to bridge the gap between the language of psychology and the language of computing. Does it accurately represent human processing? No. Do I read (er...skim...er...barrel through) this and come out of it with a better understanding of the mind? No. Do I think it would accurately describe how a professional basketball player navigates in the paint and then throws a no-look pass to a teammate, or how Basquiat knew when a painting was completed? I don't see how. Is it useful for HCI today? Maybe thinking about the connection between perception, cognition, and motor skills is still useful. Maybe something like Fitts's Law is useful when it comes to haptics? Can this model still inform interesting discoveries or design in virtual or mixed-reality environments? I don't know.
Post by bttaylor on Apr 12, 2016 3:30:09 GMT
> 2. This work seems to focus on minutiae like memory storage and reaction times. Can you point to places where this is important in HCI research today? Do you think the reasons this was important in the 1980s have shifted as HCI and computers have evolved further?
> a. What strengths and/or problems do you see in using this model of human processing for the design of HCI systems or the investigation of HCI phenomena?
> b. Does a general model lead to improper predictions/findings that a more detailed model might not?
I would argue that this is still relevant to HCI; it's just that processing power has grown so dramatically that we don't typically encounter system delays approaching our cognitive delays. Modern interfaces respond quickly enough that we don't really notice the delays, but that wasn't the case when this was an active area of research. Network delays can still introduce these kinds of lags (though online gaming networks certainly have ways to avoid excessive lag).
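To put numbers on that: the model's ~100 ms perceptual processor cycle is essentially where the familiar response-time budgets come from. A small sketch (the bands follow the commonly cited 0.1 s / 1 s / 10 s response-time guidelines rather than anything in this chapter):

def feels_like(latency_s):
    """Classify system latency against the usual 0.1 s / 1 s / 10 s
    response-time bands; the 0.1 s threshold lines up with the MHP's
    ~100 ms perceptual processor cycle."""
    if latency_s <= 0.1:
        return "instantaneous"  # under one perceptual cycle
    elif latency_s <= 1.0:
        return "noticed, but flow of thought is kept"
    elif latency_s <= 10.0:
        return "attention strained; progress feedback needed"
    else:
        return "attention lost; user switches tasks"

print(feels_like(0.03))  # a local UI event
print(feels_like(0.25))  # e.g. a long network round trip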
I think the larger issue is that it's hard to apply models like this in a prescriptive way. I may be able to explain why one interface is 'better' than another (or run a Fitts's-law-type study to demonstrate that it is), but it's hard to build something from this. It feels analogous to trying to use the physical properties of individual electrons to design computer chips: in theory it would work, but in practice it's really hard and you'll probably mess up some variables.
Post by kjholste on Apr 12, 2016 3:38:27 GMT
nhahn wrote:
> One of the things I always worry about with these types of models is the parameter estimation step. ... I'm unsure how you strike a balance between these two; maybe someone who knows about complex cognitive models could tell me?
Hey Nathan, I'd recommend checking out: Roberts, S., & Pashler, H. (2000). How persuasive is a good fit? A comment on theory testing. Psychological Review, 107(2), 358. laplab.ucsd.edu/articles/Roberts_Pashler2000.pdf (and also the ensuing debate). This is an old discussion of the hazards of naively using model-fit comparisons to argue for one theory or another, and it's certainly neither the first nor the most recent, but it seems to be something of a 'classic' paper within the field (I took a seminar on computational cognitive modeling a few years back, and this was chosen as the jumping-off point for all discussions regarding the validity of scientific arguments based on comparisons of complex models). I think it's also a valuable discussion because I still see instances of theoretical arguments being made in HCI research based upon these sorts of shaky comparisons among cognitive models.
Abstract:
> Quantitative theories with free parameters often gain credence when they closely fit data. This is a mistake. A good fit reveals nothing about the flexibility of the theory (how much it cannot fit), the variability of the data (how firmly the data rule out what the theory cannot fit), or the likelihood of other outcomes (perhaps the theory could have fit any plausible result), and a reader needs all 3 pieces of information to decide how much the fit should increase belief in the theory. The use of good fits as evidence is not supported by philosophers of science nor by the history of psychology; there seem to be no examples of a theory supported mainly by good fits that has led to demonstrable progress. A better way to test a theory with free parameters is to determine how the theory constrains possible outcomes (i.e., what it predicts), assess how firmly actual outcomes agree with those constraints, and determine if plausible alternative outcomes would have been inconsistent with the theory, allowing for the variability of the data.
Also, Toby wrote: "One thing I think is worth discussing: in academia, how should we encourage work that evaluates existing theories or models in new domains?"
^ This idea of running sensitivity analyses, and not just replications, is really interesting to me. In the discussion over how to encourage more replication (e.g. in psychology) generally, I'm not sure I've seen 'generalization to new contexts'/'contextual sensitivity of effect' emphasized much.
In fact, I usually see 'evaluations of theories or models in new domains' discussed in negative contexts, such as discussions of massive fishing expeditions where researchers initially do not detect the effect they 'desire' (in order to publish), and so they keep partitioning their data along various contextual dimensions until an effect suddenly seems to appear... and then come up with a just-so story for that observation. Does anyone know of existing movements focused on bringing about more of what Toby described, though?
Post by rushil on Apr 12, 2016 8:24:15 GMT
> 1. This reading does not explain how the 3 different processors in the Human Information-Processor are coordinated to work together (either in series or in parallel). How would you account for this coordination processor? Is it even necessary, or can each processor just communicate and coordinate directly with the others?
I think they didn't describe it because we simply don't have an answer. I don't think anyone fully understands how each part works, and this is their best-guess attempt at how it may work.
In response to #2: I agree with Joseph that it is more likely we would build upon the existing theory by mashing it with social/digital bits. I think understanding these existing models has less of a direct impact on ongoing HCI research; rather, it brings an understanding of an existing model and of how to challenge its underlying assumptions. For example, I think Julian used a modified version of Fitts's Law in one of his recent works, which is a good example of taking the understanding of an existing model and modifying it based on your own needs.
Post by Qian on Apr 12, 2016 12:59:30 GMT
I agree with Brandon that the models are still relevant to HCI, especially for interaction techniques. For other branches of HCI, especially those involving affect and cognition, I doubt their flexibility and practicality. In HRI, similar methods are used to model the perfect social dynamic between humans and agents. Just like the models in this article, the parameters in HRI models need to be 'overfit' to a particular user/situation to be conditionally valid. However, it remains a paradox whether humans even want robots to behave socially, even when the models are capable of making them do so. In addition to the question of coordination cost (as noted by Cole), there is also the question of whether the model can fit the various goals HCI is after.
Post by Amy on Apr 12, 2016 13:40:21 GMT
I don't know much about designing systems for people with physical or mental disabilities, but maybe that is an area where the minutiae of cognitive processes are still studied and relevant? All the details and reaction times would have much greater individual differences in that research space.
I was thinking about this partly because of question 4 and partly because of the comments on the paper ideas, but why are cognitive psychology and social psychology so clearly separated in research? Doesn't how we feel influence how we think, and vice versa?
Post by anhong on Apr 18, 2016 3:58:01 GMT
This model allows us to divide the complex functions of our brain into three components and study them separately. This divide-and-conquer approach is very important in tackling this ultimate problem. However, it is still far from simulating human brain function, partly, I think, because of the coordination between the three subsystems. How do they communicate with each other? How does a command get handled by the three subsystems? Solving these problems would enable us to simulate a passive human processor. The next big question is how to create consciousness: how do machines actively come up with actions?