Post by Felicia Ng on Apr 3, 2016 1:32:33 GMT
(This post was collaboratively written by Felicia and Alex!)
Summary:
In this paper, the authors propose a new method of crowd innovation called distributed analogical idea generation in which the creative process is broken down into 4 discrete steps that are amenable to crowdsourcing tasks: 1) Identify multiple product examples. 2) Induce a common analogical schema from these examples. 3) Identify other domains where the schema could be applicable. 4) Apply the schema to the target domain to generate a new product.
To investigate the viability of this new method, the authors conducted 3 experiments with crowdworkers on Amazon Mechanical Turk:
- In Experiment 1, the authors showed that participants generate better ideas when given an analogical schema than when using example-based approaches. (Steps 3 & 4)
- In Experiment 2, the authors examined the conditions under which participants are able to generate good schemas. (Steps 1 & 2) They found that showing more examples that support a schema enables crowdworkers to better induce the schema; showing contrasting examples did not help schema induction.
- In Experiment 3, the authors demonstrated that this crowdsourced distributed analogical idea generation process is viable, with one group generating schemas from examples and a second group generating ideas using the first group's schemas. However, only good schemas from the first group were more likely to lead to good ideas from the second group; bad schemas did not have this effect.
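The four-step pipeline summarized above could be sketched as a crowd-task workflow. This is purely illustrative: the function names, data shapes, and stubbed "crowd" responses are my own assumptions, not the authors' implementation, in which each step would be posted as an independent task (e.g. on Mechanical Turk).

```python
# Illustrative sketch of distributed analogical idea generation.
# Each function stands in for a crowdsourcing task; here the crowd
# responses are stubbed with placeholder strings.

def collect_examples(product_domain):
    # Step 1: one crowd lists example products from the domain.
    return [f"{product_domain} example {i}" for i in range(3)]

def induce_schema(examples):
    # Step 2: another crowd abstracts a common schema from the examples.
    return "schema covering: " + ", ".join(examples)

def find_target_domains(schema):
    # Step 3: a third crowd names domains where the schema might apply.
    return ["domain A", "domain B"]

def generate_idea(schema, target_domain):
    # Step 4: a fourth crowd applies the schema to a target domain.
    return f"idea applying ({schema}) to {target_domain}"

def distributed_ideation(product_domain):
    # Chain the four crowd tasks; each step consumes the prior output.
    examples = collect_examples(product_domain)
    schema = induce_schema(examples)
    return [generate_idea(schema, d) for d in find_target_domains(schema)]

ideas = distributed_ideation("travel mugs")
print(len(ideas))  # one idea per candidate target domain
```

The point of the decomposition is visible in the structure: no single worker needs to hold the whole creative process, since each stage only sees the artifact produced by the previous one.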
-----
Discussion Questions:
1) What do you think about the generalizability of the results? This paper takes the theories we read about in the other journal articles and attempts to scale them. How well do you think the authors succeeded?
1a) How would the results compare for more focused tasks? (The tasks in these experiments were all open-ended ideation - users chose both their problem and their solution. What if participants were told to solve a specific given problem?)
1b) How would the results compare for skilled vs. unskilled workers? (The participants in these experiments were all Amazon Mechanical Turk workers. What if they were PhD students? What if they were industry professionals?)
1c) How would the results compare for problems of different complexities? (The problems in these experiments were all everyday layman problems. What about scientific/technical problems that require specific expertise?)
2) Do you think this system would actually help the ideation process? Has the authors’ system actually reduced the hard creative work of product development? Or is the challenge of finding appropriate examples actually just as much work as ideation itself?
3) How could we augment an individual’s process by strategically interjecting crowd intelligence? What if the end goal was to produce smarter and more creative individuals rather than a list of design proposals?
4) How do the steps in this distributed analogical idea generation process relate to categorization in the human mind? How do they relate to alignable and non-alignable differences?
5) Theories of analogical transfer are used to motivate the creation of this new distributed process for innovation, but conversely, what do the results from these experiments tell us about how analogical thinking is done in the human mind? Do they teach us anything new?
Post by jseering on Apr 3, 2016 22:36:09 GMT
I think this is a pretty reasonable implementation of the principles of analogy to technological innovation. As we've discussed, both in this week's posts and last week, the idea that you can create new ideas by saying "what if we did something like that other thing, but in this new context" is really powerful. I think I posted here last week about how a similar approach helped me with my creative writing as an undergrad, and if my memory serves me correctly we read about a similar type of prompting in the design mini. Per the first question above, I think this approach is extremely generalizable, and I think that it would be a useful approach for any open-ended task. I don't believe that this approach would help very much in tasks that have defined right and wrong answers and a standard procedure, though I can imagine a somewhat different scenario where making analogies can help someone learn how to do something procedural. I can't think of a great example, but there's probably a situation in which a student might get better at learning geometry if it were compared with something like playing hockey.
I'm most interested in the fifth question above, but I don't have a good answer formulated. What does it mean that comparing geometry to hockey helps one student but makes no sense to another? What is the cognitive process that's facilitating learning there, and how can we use it to determine which analogies are best to help a given student learn?
Post by mrivera on Apr 3, 2016 22:55:12 GMT
@(2) This system is helpful for the creative process, but I don't necessarily believe it has "reduced the hard creative work of product development." Applying known examples and a common schema to a new problem is great, but what we would need to truly reduce effort is a system that auto-generates the schema and examples. As it stands, the system has merely re-framed the core difficulty of the creative process. In thinking about how I solve problems or do creative things, I rely very much on an analogical approach: I revisit past experiences, or perhaps something in my current problem reminds me of a previous challenge, and then I use my previous solution to guide my current one. In general, I believe our minds work by subtle associations (or at least I think mine does - this is why I find puns to be so wonderful). But puns aren't easy for everyone, and neither is solving problems with analogies, unless you already have the experiences or knowledge to produce the analogy. Finding the analogies/examples is a step in the right direction, and part of what makes this system useful is that the effort is distributed across people with many different sets of experiences and knowledge.
Post by mkery on Apr 4, 2016 0:47:39 GMT
Re: Question 1b and 1c
A main point in the setup of this paper is that diversity of examples produces more heterogeneous designs. Thus I think the nature of participants' responses would differ between the two populations (PhD students and MTurk workers); ideally, however, the solution would incorporate the perspectives of both. Sometimes a newcomer on a research project can provide important insights simply from a fresh perspective on the problem. Domain-specific problems, as these readings discuss, can be aided by making analogies across domains. Certainly making specific problems approachable to the participant is a difficulty of participant selection (don't throw equations at someone who cannot understand them), but diversity is surely desirable.
Re: Question 3
Simply discussing my research with other people helps me consider new perspectives and thus helps me be more “creative” in a way. So, I don’t think the crowd schema-generation in this paper is too far from being a helpful cognitive aid. If I could give a crowd of programmers, for example, a few examples of an interaction I was thinking of, they could help me find related existing systems and make connections I might not on my own, drawing on the experiences of many programmers.
Post by nhahn on Apr 4, 2016 2:15:33 GMT
Continuing on the discussion on novices vs experts, I would like to point to some of the other research mentioned in Holyoak’s Analogy paper. He mentioned a study looking at divergent vs convergent examples, and the accessibility of the resulting schemas for individuals. In that study, as the novices became experts, they were able to more easily access the schema converged upon and apply it to novel problems. So, while an expert might be more likely to be somewhat fixated, they can more easily apply analogies from their field of interest.
I felt that this was somewhat reflected in the second experiment, looking at schema generation from examples. Notably, because individuals most likely only had a surface level understanding of the examples provided, they could not take advantage of some more complex and interesting differences in the contrasting examples condition. I wonder if it would, then, be useful to do this type of process with a bunch of diverse experts (which I think is kind of what IDEO does?). I worry about the generalizability of this system, and if participants would be able to activate some of the higher level relationships between examples.
Post by anhong on Apr 4, 2016 3:14:13 GMT
I think it's great that using analogical methods, we can generate more effective ideas with crowds. However, if we really want this process to be distributed, we may need to consider a lot more. For example: which analogies make sense for which populations? How is the formation of the analogies themselves influenced by the local culture, and how can they impact the culture in return? Global companies, especially, might need this platform to generate culture-specific ideas and branding strategies using the intelligence of the local crowd. Maybe ask a local crowd to come up with an analogical schema and generate ideas, and then ask another crowd in the same culture to evaluate the ideas.
Also for (3), I think it depends on the definition of "smarter." If we want the person to have a wider range of ideas, then providing intelligence from a diverse crowd might work best. However, if we want the person to think really deeply about a specific problem, then maybe an expert crowd in that field will work better. For example, in the WearWrite paper, we asked crowd workers to help write articles, blog posts, and academic papers in the course of a week, and it turned out that topics like the presidential candidates got a lot more interesting content than the introduction of an academic paper.
Post by JoselynMcD on Apr 4, 2016 19:47:22 GMT
RE: 1) What do you think about the generalizability of the results? This paper takes the theories we read about in the other journal articles and attempts to scale them. How well do you think the authors succeeded?
The experimental design and the results from the studies conducted in this paper lead me to believe that the results are highly generalizable. Plus, these studies were run on Mechanical Turk, which is fast and efficient, and thus any lingering questions like Anhong's above (re: analogies that make sense for various populations) could be quickly tested and resolved. The only part I am struggling with is the framing of the problem state. The X-ray example for tumor treatment is strong and convincing: we already know what the solution should be, so we can easily say "look, they got it right, this is working." But what about scenarios where we can't realistically know what the right solution is, or scenarios where a small group of participants (maybe even one) comes up with an ingenious solution, but we have no idea how to implement it? Moreover, in the design scenario, if we were to crowd-source solutions to design problems, we'd probably get a myriad of solutions that sound great yet are not economically feasible, and thus not viable.
RE: 3) How could we augment an individual’s process by strategically interjecting crowd intelligence? What if the end goal was to produce smarter and more creative individuals rather than a list of design proposals?
I think this is an interesting way to think of the application of crowd intelligence. I don't think it would be an apt solution for daily life processing for most individuals as it could be used as a crutch or reduce the impact of novel/unique thinkers if over-employed. However, for problems that individuals face where there is a sticking point of sorts, a crowd-intelligence might be a solution that moves the idea forward.
Post by francesx on Apr 4, 2016 21:20:30 GMT
5) Theories of analogical transfer are used to motivate the creation of this new distributed process for innovation, but conversely, what do the results from these experiments tell us about how analogical thinking is done in the human mind? Do they teach us anything new?
What somewhat worries me about this approach is the implication that to create new knowledge, we might need only to map it back to a similar thing. Is that the case, or am I misinterpreting the point here? I wonder whether Einstein or Newton used a similar mapping to a different thing or domain (as Joseph mentions above, "what if we did something like that other thing, but in this new context"), or whether they came up with this new knowledge from nothing. If the mapping is the solution, then should we focus our efforts more on developing the skills that make us better able to identify and use such mappings?
Post by xiangchen on Apr 4, 2016 23:50:13 GMT
I like how this paper applies research in analogy to idea generation, which has become increasingly popular with crowdsourcing. While existing approaches are fairly stochastic and rely on chance, this paper has found a way to structurally guide people's imagination and ideation, yielding better ideas. One immediate question is how to find many good examples. It seems a non-trivial task, regardless of whether there is a specific goal or domain for the ideation. It goes without saying that good examples lead to good ideas; without a systematic way of finding such examples, there still seems to be a fair amount of chance in the analogy-based process.
Post by toby on Apr 5, 2016 1:19:27 GMT
I find the topic/research question of this paper super interesting, and the experiment design very reasonable. However, I'm a little disappointed in the result analysis: it only says the outcomes from schema-based idea generation are "better", but not how they differ from those of example-based generation. I expected to at least see a score breakdown on the three dimensions used (novelty, usefulness, and practicality).
Also, a hypothesis that came naturally to me after reading the results is that the novelty of the ideas is limited by the examples given. I would like to see some measure of within-group similarity of the ideas, or a new experimental group under the example-based condition but with a different set of examples.
Post by julian on Apr 5, 2016 2:37:07 GMT
1) What do you think about the generalizability of the results? This paper takes the theories we read about in the other journal articles and attempts to scale them. How well do you think the authors succeeded?
I think the results are definitely generalizable, but only half of the problem has been solved. The paper shows that with schemas from curated examples, interesting ideas can be generated, but what about more problem-oriented ideation, where the generated idea has to solve a specific problem? As an example, take a look at this TED talk www.ted.com/talks/tal_golesworthy_how_i_repaired_my_own_heart , where an engineer literally fixed his heart using his knowledge about pipes. This only happened because the speaker had the heart condition he fixed. My point is, finding the right analogy, and hence the right examples and schemas, is a very difficult task.
3) How could we augment an individual’s process by strategically interjecting crowd intelligence?
Probably by showing the individual the ideas and schemas generated by the crowd. I would wonder, though, how this would affect the individual depending on their own creativity; for example, would a highly creative individual benefit from this more or less than a less creative one?
5) What do the results from these experiments tell us about how analogical thinking is done in the human mind? Do they teach us anything new?
I think the experiment results may be implicitly saying that all these processes could be working as isolated stages in our brains, and not necessarily as one single process. For example, a person could simply experience situations, and some of them get stored as examples of something in particular. Later on, maybe even days later, those examples are retrieved and analyzed, and schemas are generated. Even further away in time, this person recognizes in a situation common properties with an already distilled schema, and some interesting idea is born.
Post by Amy on Apr 5, 2016 3:04:25 GMT
2) Do you think this system would actually help the ideation process? Has the authors’ system actually reduced the hard creative work of product development? Or is the challenge of finding appropriate examples actually just as much work as ideation itself?
I don't know if anyone (besides Steven) is familiar with Joel's work on IdeaGens (or if it's connected to this work at all?), but Joel made a crowdsourced brainstorming tool where some people are assigned to idea generation and some to providing schemas for 'inspiration', without anyone providing appropriate examples to the schema creators. I'm not familiar with the specific results of his work, but it seems like the knowledge discussed in this paper about how people use analogies can be utilized in a variety of ways to support innovation. So to answer your original question: yes, I think analogies do help the ideation process, and there are ways of using the authors' findings in systems that don't rely on a researcher generating examples.
Post by Cole Gleason on Apr 5, 2016 3:14:10 GMT
I'll focus mainly on the first question:
1a) Open-ended ideation is an interesting problem because it seems to be harder than verifying that an idea is good. Even if this approach isn't generalizable to other forms of ideation, I'm not sure that it matters; more focused ideation usually has some sort of process anyway.
1b) I think that PhD students or industry professionals would likely generate more practical or feasible ideas due to their domain knowledge, but I'm not sure how much more novel those ideas would be. Novelty seems to often be driven by serendipity, not intimate knowledge.
1c) More complex tasks will definitely need more intimate knowledge, however, and I'm not confident that this approach will scale to that. Generating these schemas seems harder and harder as the required domain knowledge grows.
The real benefit of the crowd approach just seems to be the differing experiences that people bring to the table. If you can somehow pull those out of people, it's possible that they may be able to make analogies that one person or team did not have the background to think of themselves.
Post by vish on Apr 5, 2016 3:14:53 GMT
Reading this paper reminded me of a paper by Di Salvo et al [http://www.cs.cmu.edu/~illah/CLASSDOCS/disalvolocalissues.pdf]. In Di Salvo's paper, they try to familiarize participants with the capabilities and limitations of robotics and sensing technologies, and then help the general audience connect the technology to local issues. Although the project was successful, there were some issues such as time constraints, user involvement, ownership, and funding. Regardless, the point of participatory design is to involve users in working with the existing system and design, and to collect their feedback; that feedback is then studied to develop and improve the system. This can involve users from various backgrounds, giving rise to variations in usage patterns that can inform a better system design.
Post by bttaylor on Apr 5, 2016 3:20:14 GMT
1) What do you think about the generalizability of the results? This paper takes the theories we read about in the other journal articles and attempts to scale them. How well do you think the authors succeeded?
It seems like this paper took fairly established methods for improving design and broke them down in a way that could be chunked by crowds. I'm curious if a similar chunking process could be effective in evaluating analogies used in educational settings. Given that analogies are to some degree culturally and language dependent, it would stand to reason that certain individuals or populations may be more or less likely to properly parse any particular analogy. Someone mentioned a watermelon being used as an analogue for an atom. I wonder if there would be a way to leverage crowd-sourcing to determine how well various pedagogical analogies are understood by different groups?
As far as idea generation goes, one of the advantages of crowd-sourcing is getting a wide variety of perspectives. I would be curious to see if the introduction of schemas would lead to less diversity in the ideation process. I get that, in theory, crowd-sourcing the schema production could mitigate this, but it makes me wonder if there are points in the ideation process that lend themselves more to fixation or idea pruning than other points.
Also, as an aside, I am wondering how well idea quality can be evaluated. I don't mean to imply that I don't buy that schema presentation resulted in better ideas, I just wonder how well 'expert' evaluators do and how they might compare to a more diversified, crowd-sourced evaluation.