Post by Cole on Mar 29, 2016 3:19:17 GMT
I would say that when I first read this, the "classical view" seemed intuitive to me, although I knew the author was setting it up just to knock it down. Even reading about the other theories makes me want to go back to the simpler classical view.
When it comes to using this knowledge to influence HCI, the simplest example comes to mind. How often has a piece of software asked you to put data into a category, and you struggled to find where it belongs? Today I put a transaction from eating out into my budget, and I couldn't decide if it belonged under "Food" or "Alcohol". Earlier I set up a Facebook page, and I didn't know if it was "Software", "Technology", or "App Page" when asked to categorize it. I think this is why many CMSes like WordPress introduced tags. It's hard to pick the one category a piece of data belongs to, but it's easy to throw 5+ tags on something. If you have autocomplete for your tag entry, you are now essentially performing the attribute listing described in the article.
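To make the one-category-vs-many-tags point concrete, here's a minimal sketch (the data model and field names are made up for illustration) of the difference between forcing a record into a single category and letting it carry a set of tags:

```python
from dataclasses import dataclass, field

# Hypothetical data model: a single category field forces one choice,
# while a tag set lets the same record live under several labels at once.

@dataclass
class Transaction:
    description: str
    category: str = ""                      # must pick one: "Food" or "Alcohol"?
    tags: set = field(default_factory=set)  # can hold both, plus more

t = Transaction("Dinner out", category="Food",
                tags={"food", "alcohol", "restaurant"})

# Tag-based lookup works regardless of which single category was picked.
print("alcohol" in t.tags)  # -> True
```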
Post by nickpdiana on Mar 29, 2016 3:19:35 GMT
I really like the point you hit on in #8 -- that maybe our models of how concepts are represented are influenced by other ways of thinking (e.g., traditional logic). In the same vein, a probabilistic model of concepts might feed off of Bayesian thinking in CS (or vice versa). A probabilistic model makes much more sense intuitively than the classical models, but even so, it makes you wonder if or how cognitive psychology might revise this model if there is some revolution in thinking.
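As a toy illustration of what a Bayesian-flavored model of categorization could look like (my own sketch, not something from the reading; all the categories, features, and probabilities are invented), category membership might be scored by how probable an item's features are under each category:

```python
# Toy sketch of a probabilistic take on categorization: each category
# stores P(feature | category), and a new item goes to whichever
# category gives it the highest (unnormalized) posterior.

categories = {
    "bird": {"flies": 0.80, "has_feathers": 0.95, "barks": 0.01},
    "dog":  {"flies": 0.01, "has_feathers": 0.01, "barks": 0.90},
}
priors = {"bird": 0.5, "dog": 0.5}

def categorize(observed_features):
    """Return the category with the highest (unnormalized) posterior."""
    scores = {}
    for cat, likelihoods in categories.items():
        score = priors[cat]
        for feature in observed_features:
            # Features we have no estimate for get a small default probability.
            score *= likelihoods.get(feature, 0.05)
        scores[cat] = score
    return max(scores, key=scores.get)

print(categorize({"has_feathers", "flies"}))  # -> bird
```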
To Felicia's point about our skill at automatic categorization, I'd like to add that we're equally skilled at encoding those categories. If we see our first three-legged dog, it doesn't take us five minutes to search through our internal file system for the right directory to store this new information in. Most of this learning is done implicitly. That's equally amazing to me.
Post by JoselynMcD on Mar 29, 2016 3:22:49 GMT
Prior to reading these chapters, I honestly hadn't spent much time considering categories and the reduction in cognitive load they provide for us. For that I am eternally grateful, but I am still slightly ambivalent about categories, given the way that categories (and humans' frequent inability to challenge them) can result in reductionist or biased ends. Humans instantaneously put new things (or people) they encounter into categories without much mental labor, and rarely do they question whether this was an appropriate, nuanced, or unbiased categorization. My hope is that the fields of AI/ML would model their categorization structures in a way that is more comfortable with ambiguity, nuance, and edge cases (that's where all the fun in the world is, anyway).
In social psychology we talk a lot about the schemas that are prompted (either consciously or not) and how those schemas can affect behaviors and attitudes; Murphy too discusses schemas, essentially an abstraction for the way we bin and stack relational information about the features belonging to particular items in a category. The way he explains it makes it seem like a sort of second tier of a feature list, more nuanced and able to be conjured when needed by the person.
As for my favorite theory: Wittgenstein all the way. I agree that important concepts are potentially unable to be defined. His rationale, while arguably too negative, is quite compelling.
Post by nhahn on Mar 29, 2016 3:41:13 GMT
I found the section regarding similarity calculation interesting and, in my eyes, somewhat short. I was surprised that more research hasn't focused on this calculation, since a different metric could produce vastly different results. For example, in ML, different similarity metrics can account for scale and shift invariance in the data, which in some cases you might want to throw out and in others you would want to preserve. I liked the multiplicative similarity metric that they presented -- it felt very similar to the weighting scheme used by neural networks. However, the discussion on categorization and feature differentiation had me considering a different similarity metric based on distributions. Rather than encoding the distribution information in a single weight, it seems that understanding that there is a wide range of possibilities, as well as what the common frequencies for those possibilities look like, would be important for judging similarity. Distributions seem well suited for handling this. Playing off of that: could a prototype of a category simply be a function of these distributions, where you select the feature values with the highest probabilities across the different feature distributions?
I'm not sure how much distribution-oriented thinking there is in the psychology literature, but it seems like it could be a useful way of thinking about feature similarity calculations.
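A minimal sketch of what I'm imagining (entirely my own toy construction, not from the chapter): each category keeps a frequency distribution per feature, similarity is the product of how probable the item's feature values are under those distributions, and the prototype is just the modal value of each distribution:

```python
from collections import Counter, defaultdict

class DistributionCategory:
    """Toy category that stores a frequency distribution per feature."""

    def __init__(self):
        self.feature_counts = defaultdict(Counter)
        self.n = 0

    def add_example(self, features):
        self.n += 1
        for feature, value in features.items():
            self.feature_counts[feature][value] += 1

    def similarity(self, features):
        # Multiply together how probable each observed value is under
        # the category's distributions (small floor for unseen values).
        score = 1.0
        for feature, value in features.items():
            prob = self.feature_counts[feature][value] / self.n if self.n else 0.0
            score *= max(prob, 0.01)
        return score

    def prototype(self):
        # The modal (most frequent) value of each feature distribution.
        return {f: counts.most_common(1)[0][0]
                for f, counts in self.feature_counts.items()}

dogs = DistributionCategory()
dogs.add_example({"legs": 4, "barks": True})
dogs.add_example({"legs": 4, "barks": True})
dogs.add_example({"legs": 3, "barks": True})   # a three-legged dog
print(dogs.prototype())                         # {'legs': 4, 'barks': True}
print(dogs.similarity({"legs": 4, "barks": True}))
```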
Post by rushil on Mar 29, 2016 3:51:06 GMT
Someone pointed out that math has no problem following strict rules and definitions. I don't think that's true. There are still unsolved mathematical problems that don't fit into the currently defined parameters of the mathematics world.
To answer one of the discussion questions, I was not surprised that the view was open to empirical shortcomings. A lot of what is discussed in the paper, and of what we know about how concepts develop, is fuzzy. If something is not completely transparent and has an "aha" moment related to it, it is most likely subject to empirical shortcomings. That does not imply it's a bad concept; it just means that we do not have sufficient knowledge or defined parameters to fully capture the concept within the existing world. Some part of it still lies on the outside.
As for the AI discussion, the recent example of Microsoft's AI chatbot (Tay) shows how far machine concept learning is from how humans are able to grasp it. A few trolls were able to get the bot to learn the most obnoxious things. In this particular case, the murky nature of how we judge the moral implications of certain things makes it challenging for a machine to fully replicate the same kind of learning.
Post by anhong on Mar 29, 2016 3:53:21 GMT
3) When it comes to categorization, clustering, and concept learning, where do you think AI and machine learning currently falls furthest from human abilities (and/or vice-versa)? Why might this be the case? 3.a) And (how) might the study of human concept learning, representation, and use inform the development of AI and machine learning (and/or vice-versa)?
I think humans are really fast at retraining an existing model given a new data point and integrating it into the new model. This is why humans learn so fast and can apply what they learn right away by adapting to the situation. The current limitation of machine learning is that the model is quite static. The training time of an expert system can be days, which is quite impressive. However, if one error is made, the whole system needs to be retrained. Also, I feel machine learning and AI researchers keep reinventing other people's wheels. There are so many classifiers people have trained, and so many algorithms people have designed. Why can't there be a meta-classifier or meta-algorithm that takes all of this previous knowledge into account and generates a meta-opinion, maybe based on voting?
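As a toy sketch of the voting idea (the individual classifiers below are made-up stubs standing in for previously trained models), a meta-classifier could just collect everyone's predictions and return the majority opinion:

```python
from collections import Counter

# Three stand-in "classifiers"; in practice these would be trained models.
def classifier_a(x): return "cat" if x["whiskers"] else "dog"
def classifier_b(x): return "cat" if x["meows"] else "dog"
def classifier_c(x): return "dog"   # a biased, not-very-good classifier

def meta_classify(x, classifiers):
    """Return the majority opinion across the individual classifiers."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

item = {"whiskers": True, "meows": True}
print(meta_classify(item, [classifier_a, classifier_b, classifier_c]))  # -> cat
```

Ensemble methods in ML already do something along these lines; scikit-learn's VotingClassifier, for example, combines a set of estimators by hard or soft voting.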
Post by fannie on Mar 29, 2016 4:10:14 GMT
#3 Human categorization definitely happens much faster than ML, and I think even at a really young age we don’t need the hundreds of examples or strict definitions that ML might need to categorize a dog as a dog or a cat as a cat; we see it and we get it. Even cartoon versions, when they look super different from the real thing. I went to one lecture of an ML class here where this was discussed a bit, and one interesting question that came up was whether it has something to do with how people have evolved: in the past we needed to easily recognize what was a predator and what was food. This somewhat reminds me of what Anhong is talking about, building on existing knowledge for new ‘meta-’ classifiers/algorithms, kind of like building on the knowledge base of people?
Post by Brandon on Mar 29, 2016 5:17:43 GMT
Re: whether animals can be classified on the basis of DNA: this seems to me to skirt the real issue on a sort of technicality. Yes, you could probably make a very fine-grained technical definition of 'dog' based on specific DNA features. But that's not what people are doing when they talk about dogs. We don't have that level of information, so our everyday usage of the term 'dog' is fuzzier than that. I guess I'm agreeing w/ mrivera that a lot of the fuzziness in categorization is the result of imperfect information. That imperfect information can be on the perceiving end (it looks like a dog from here, but maybe it's a wolf) or on the category end (if you don't have a DNA-level definition of 'dog', or of what counts as a game). Either way, the categories will be fuzzy.
Someone mentioned Pluto, which is interesting in that it's something that neither I nor anyone else has any direct perceptual experience of, and, speaking for myself at least, I have no really good working definition of a planet. The IAU evidently has a well-defined technical definition of planet. So Pluto's planet status is just a matter of how our best measurements of it align with a narrowly defined 'planet' definition, I guess? But people, who have no direct experience with Pluto, got pretty upset about that. I don't think I really have a point here.