Week of January 10th
Being a philosophy student, I have always found the question of whether we can think without language very interesting. The experience of smelling or seeing something can arguably be said to be prelinguistic. However, to be self-conscious, or to be aware of what is going on in my own brain, still appears to require language. I think this has to do with the difference between experiencing and describing that experience. A person can experience something, but without being able to describe it, even just to themselves, it seems to me that they would lose the ability to remember it. Without relating it to other experiences, the memory gets lost, and so the only aspects that survive are those that can be put into language. Despite the many rejections of the Whorfian Hypothesis, I think that over the longitudinal development of an individual, different languages have more of an effect than is immediately apparent from the Weak version, which holds only that certain contrasts might not be possible in a particular language while being easy to make in another.
Bcorrigan 18:42, 14 January 2011 (UTC)
Week of January 17th
By far the most interesting topic, perhaps because of its novelty, was learning about the phonetics of sign language. I admittedly know no sign language, though I had learned a little of the alphabet at one point, in conjunction with eurythmy, which was part of my primary school curriculum. That was a lot more intuitive, and so would not be a basis for a language, but the play of space did seem very similar for what you were expressing (though not for what you meant specifically). The one thing that struck me as odd was the difference between location and orientation. I think what rankled me was that the hand is not free to move in any orientation; its connection to the arm means that the entire arm has to move to change the orientation. This makes me think that the distinction really just multiplies the classifications for academic reasons rather than because it is necessary. Looking up ASL on Wikipedia and seeing that it is the third most used language in the States was thought-provoking, not merely because of its predominance, but also because the idea of 'using' a language feels a little strange to me. The word is necessary to include both spoken and signed language, but I wonder if signers also feel that 'use' is as inadequate to express the concept of signing as I feel it is to capture the process of talking.
15:32, 24 January 2011 (UTC)
Week of January 24th
In thinking about how telephones only use a limited frequency band, and that most of the information in speech is isolated to two bands, I started wondering how much sound we can create with our voices that falls outside the audible range. Any? After exploring Wikipedia's article on Audio Frequency, I realized that we probably don't produce any. Trying to create a sound higher than six octaves above middle C seems fairly impossible. All puns are entirely unintended.
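As a quick sanity check on that, here is a sketch of the octave arithmetic in Python (middle C at about 261.63 Hz and a 20 kHz ceiling for human hearing are the standard textbook figures):

```python
# Each octave doubles frequency, so n octaves above middle C is 261.63 * 2**n Hz.
MIDDLE_C_HZ = 261.63        # standard tuning (A4 = 440 Hz)
AUDIBLE_LIMIT_HZ = 20_000   # rough upper bound of human hearing

for octaves in range(4, 8):
    freq = MIDDLE_C_HZ * 2 ** octaves
    status = "audible" if freq <= AUDIBLE_LIMIT_HZ else "ultrasonic"
    print(f"{octaves} octaves above middle C: {freq:8.0f} Hz ({status})")

# Six octaves up is only ~16,744 Hz, still (barely) audible, so a voice
# that tops out around there never reaches ultrasound.
```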
The effects of top-down recovery of sounds, such as phonemic restoration or the McGurk Effect, are really mind-boggling. Although we may think that what we hear is the sounds that were made, and that we are merely decoding those sounds, the fact that the decoding actually changes the sounds for us is surprising. If people are sometimes incapable of understanding me because I mumble, how much does the sound actually have to be distorted before they start failing to understand me? I really should work on my enunciation and articulation.
On the topic of reading, I wonder if there is a difference between males and females in reading upside-down. The disparity in mental rotation abilities could create a difference, but probably only when a person starts. Practice would likely make the person immediately recognize the characters for what they are, instead of having to actually rotate them in their heads. I wonder this because I find that many people are surprised that I can read upside down almost as quickly as I can right side up.
Bcorrigan 15:59, 31 January 2011 (UTC)
Week of January 31st
Single words that contain entire sentences: I am sorry, but that just seems ridiculous to me, mostly from the point of view of trying to read it. I recognize that people can read faster than I can, but it still takes a certain amount of time for me to process a sentence. Perhaps they don't speak as quickly? Maybe a word like "n ä ï k ì ḿ l y ì ï à" takes longer to read on average than 'excruciating' (alright, it has one more letter, but I think the comparison is valid). Actually, I would guess that the way that we, as English speakers and readers, can recognize words wtihuot hnaivg to raed ecah letetr would prove impossible for speakers of the Kivunjo language.
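Out of curiosity, here is a minimal sketch in Python of the letter-scrambling trick itself (my own toy function, keeping the first and last letters fixed, as in the classic demonstration):

```python
import random

def scramble_interior(word: str) -> str:
    """Shuffle a word's interior letters, keeping first and last fixed."""
    if len(word) <= 3:
        return word  # too short to have a shuffleable interior
    interior = list(word[1:-1])
    random.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

sentence = "we can recognize words without having to read each letter"
print(" ".join(scramble_interior(w) for w in sentence.split()))
# one possible output: "we can rioegcnze wrods wouihtt hvinag to raed ecah letetr"
```

English readers cope with output like this surprisingly well; presumably a reader of a polysynthetic language, where one scrambled word could garble a whole sentence's worth of morphemes, would not.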
I recognize that some fonts may support this whole-word reading effect better than others.
I have heard of experiments that tested the arithmetical abilities of children who had learned their numbers in different languages, and speakers of some languages were simply faster than others. There were even individuals who could do math in several languages and had different speeds in each. I can't find a source right now, but I was taught it in a class at some point in the last year. Anyway, perhaps these languages built from such small parts of meaning allow people who think in them to actually process their thoughts faster. I believe that this would be a very difficult thing to ever assess, because of the difficulty of finding a translation close enough that speed could reasonably be compared. However, I think that it would be really interesting to test the different speeds of reading and cognition between different languages, particularly between an analytic language and a polysynthetic one.
Bcorrigan 15:48, 7 February 2011 (UTC)
Week of February 7th
So after watching the clip on WATSON right before class, and then again two minutes into class, just after handing in a paper on the philosophical possibility of artificial intelligence, I was reminded of Tesler's Theorem (as quoted by Douglas Hofstadter): "Artificial Intelligence is whatever hasn't been done yet." The idea that syntax is just computation is not a terribly novel one, particularly because the definition of syntax includes a definition for programming languages. Whether this machine, which can use language quite effectively (apparently they think it can challenge the champions), is actually intelligent of course depends on the definition. It would be hard to argue that the machine was self-reflective, especially after seeing it make such bizarre errors. However, humans also make mistakes that reveal they are not paying attention to what they are saying; Freudian slips, for one. Before I write too much off topic, we also say things by rote that we might mean upon reflection but have not actually thought about at all. The phrase "Have a good one" comes to mind.
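To make the 'syntax is just computation' point concrete, here is a minimal sketch of a recursive-descent recognizer for a toy English fragment (the grammar and word lists are my own invention, not anything from the lecture):

```python
# Toy grammar: S -> NP VP, NP -> Det N, VP -> V NP | V
DET = {"the", "a"}
NOUNS = {"machine", "champion", "paper"}
VERBS = {"challenges", "writes"}

def parse_np(words, i):
    """Return the index after a Det-N noun phrase starting at i, or None."""
    if i + 1 < len(words) and words[i] in DET and words[i + 1] in NOUNS:
        return i + 2
    return None

def parse_vp(words, i):
    """Return the index after a verb phrase (V NP, or a bare V), or None."""
    if i < len(words) and words[i] in VERBS:
        j = parse_np(words, i + 1)
        return j if j is not None else i + 1
    return None

def is_sentence(text):
    words = text.lower().split()
    i = parse_np(words, 0)
    return i is not None and parse_vp(words, i) == len(words)

print(is_sentence("the machine challenges the champion"))  # True
print(is_sentence("challenges the machine"))               # False
```

Checking whether a string belongs to the grammar is pure computation; whether running such a procedure, however elaborate, amounts to understanding is exactly the philosophical question.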
The other thing that really interested me this week was the difference between connectionist models of the mind and spreading activation theory. I really like that, despite the shift in connectionism away from dedicated nodes, vector distances can be used to give a representation of the system that still appears similar, insofar as like is grouped with like. The ability of the connectionist network of Rogers and McClelland (2004) to generate the proper verbs for what a canary can do is striking. It would be really astounding, however, if the computer were able to make statements about its own abilities, although learning these, instead of merely having a threshold that activates a particular phrase, might not happen for quite some time.
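A minimal sketch of the vector-distance idea (the feature vectors here are made up for illustration, not taken from Rogers and McClelland):

```python
import math

# Hypothetical binary feature vectors:
# [can_fly, can_sing, has_feathers, is_yellow, has_bark, has_needles]
concepts = {
    "canary": [1, 1, 1, 1, 0, 0],
    "robin":  [1, 1, 1, 0, 0, 0],
    "oak":    [0, 0, 0, 0, 1, 0],
    "pine":   [0, 0, 0, 0, 1, 1],
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

names = list(concepts)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(f"{a:>6} -- {b:<6} distance = {distance(concepts[a], concepts[b]):.2f}")
# canary sits one feature away from robin but over two away from the trees:
# like groups with like, with no dedicated "bird" node anywhere.
```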
Bcorrigan 15:58, 14 February 2011 (UTC)
Week of February 14th
All right, so if the fridge-to-animal size ratio was such that the elephant had to be removed before the giraffe could fit, why couldn't the giraffe kick the fridge door open when the King of the Jungle called a meeting? Obviously she had been slighted by him. A distinction that I found very interesting this week was the one between schema and script. Trying to wrap my head around the difference between what is experienced and what usually happens, the two seem, at first blush, to be pretty much the same thing. I guess these two terms rely heavily on multiple experiences to differentiate the possible from the probable. A rude waiter could certainly be part of the schema, but not likely part of the script. If someone told me that they had gone to a restaurant the night before, I would not assume that they had been mistreated by the staff, whereas I would assume things like sitting at tables, menus, tips, and talking to the waiter. The distinction between a particular experience and the common experience is likely hard to create with only a small sample. For example, despite never having actually been to a restaurant in France, if someone who had no French language skills and was a terrible redneck told me they had gone, I would probably expect that they had been treated with disdain. This would, however, not really stem from the restaurant script, but from a combination of the 'hick goes to town' script, where he makes a fool of himself, and the restaurant schema, where waitstaff can sometimes be jerks.
The idea that different languages use different cues left me wondering how languages could be compared on levels of understanding; studies of how changes in things like word order and inflection affect interpretation and priming were probably the source of these findings. However, I think that ERP measurements of confusion and difficulty in processing might also be fruitful, and it would be interesting to see if certain types of changes produced different degrees of effect, possibly at different times in processing. Although I concede that this might not give any more information than well-constructed measurements of priming.
Bcorrigan 20:51, 18 February 2011 (UTC)
Week of February 28th
Alright, so although not the most thought-provoking example, something that really nagged at me this week was that when I saw the word 'grastly,' I immediately knew that it was a blended word, made up of 'gross' and 'ghastly.' Although the topic was speech production, I am really interested in how I was able to make that leap. Possibly it was because 'ghastly' was not difficult to recover from the reading, and among associated words that start with 'gr' and would be entitled to only a small part in the blend, the most likely was 'gross.' This ties back into speech perception, and I would suggest that it has a similarity to Liberman et al.'s Motor Theory of perception (1967), where how a sound is likely formed is used to conclude what the sound is. This would be on a higher level, but my self-report would be that I tried to say the word, searched for the words I would most likely confuse with it, and came up with the correct ones. This was done out of context as well, not embedded in a sentence, which makes it a little more impressive.
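A minimal sketch of that search, as I imagine it (the toy lexicon and matching rule are mine, not a model from the course): look for pairs of known words whose beginning and ending jointly cover the blend.

```python
# Toy blend detector: find (head word, tail word) pairs that cover a blend.
LEXICON = ["gross", "ghastly", "grim", "grand", "nasty", "lastly"]

def blend_sources(blend, lexicon, min_overlap=2):
    """Return candidate (head, tail) word pairs whose edges cover the blend."""
    candidates = []
    for split in range(min_overlap, len(blend) - min_overlap + 1):
        head, tail = blend[:split], blend[split:]
        for h in (w for w in lexicon if w.startswith(head)):
            for t in (w for w in lexicon if w.endswith(tail)):
                if h != t:
                    candidates.append((h, t))
    return candidates

print(blend_sources("grastly", LEXICON))
# [('gross', 'ghastly'), ('gross', 'lastly'), ('grim', 'ghastly'), ...]
```

A real speaker presumably does something like this with activation levels rather than string matching, which would also explain why 'gross' beats 'grand': it is semantically primed by 'ghastly.'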
An issue that the blend example brings up is the label of "top-down" processing. In terms of categorical perception (to tie this back into this week, with the way that young children can detect changes in cross-cultural music that adults can't), it is usually suggested that this is caused by the adults having a top-down way of perceiving the music. I believe that this is not only a misnomer but actually misleading as well. I think that, if we were to separate processing into a level of distinct notes and then a higher one that uses scales or moods, it is not the case that the notes are not processed, but rather that they also activate the higher category. This higher category would then simply not change when the music changes keys in a different culture's music, so the adult would not become aware of the change, having come, through experience, to trust the categorical relationships built over many trials rather than the more difficult processing of note-to-note relationships (which is likely what the children do, not having built up the categories yet). This is a little heady for the space of a blog, but I think the important point is not that the sense data are processed first on a high level and then on a lower one; instead they progress through the levels, coming out at the high level, and are retroactively assigned their place in the lower levels, not by the processing, but by subjects who think they are taking a more active role in their minds than merely being determined by the wiring in their brains. It isn't that the important comparisons don't occur at the top level, but that to get to the top level, a lot of bottom-level processing has already occurred.
Bcorrigan 20:37, 6 March 2011 (UTC)
Week of March 7th
For the gesture lecture, I was a little taken aback at the suggestion that considering anything that cannot be written down as non-linguistic is a cultural artifact. All categorizations are cultural artifacts; for a linguist to say that there is language that people don't call language is ridiculous. However, there is something to be said for the idea that perhaps there is a contradiction in the classifications, and that un-writable gestures can have meaning that should be included in language. I looked it up in the OED, and although the primary definition only included spoken and written communication, secondary definitions broaden the scope to include anything that communicates meaning (facial expressions, gestures, etc.). Perhaps in everyday use people forget that linguistics does not only concern that which can be written down, but to suggest that the category is a cultural artifact is to forget what de Saussure said about language, namely that it is arbitrary. It is equally valid to have a different word for communication that cannot be written down as it is to bring everything under the scope of 'language.' Maybe that wasn't completely on topic, but that kind of confusion, especially from a linguist, irks me. I was also intrigued that babies learning to speak confuse 'me' and 'you' in a way that parallels how they confuse pointing to themselves and to others. The failure to grasp how one is supposed to modify actions and speech when the participants are the subject of the conversation is fairly amusing.
I had never known that evil pencils were the cause of the writer's block that I get when I sit down to write these blogs. Not actually writing these down with a pencil, though, makes the argument difficult. I would argue that the blank page is more daunting, which is actually easy to escape by adding to the previous blog post; having a title also makes the page less blank.
I also recognize phonological errors while typing. Often these are caused by the word that I am going to type next, so the first letter from the next word, or even the start of one of its embedded syllables, finds its way into the word being typed. Many typos when I was younger, though, could be pegged as merely missing the right key or not having the keyboard perfectly memorized. I think it would be interesting to look at when this shift starts, and how skilled someone has to be with a computer before they get so comfortable that they start making different types of mistakes.
Bcorrigan 21:53, 13 March 2011 (UTC)
Week of March 14th
Categorical perception. Yes, it wasn't a large part of the discussion this week, but it still grabbed my interest, and I can't seem to shake it. I had originally understood that this was learned, because babies can discriminate all categories. However, I was corrected this week: despite being able to differentiate between all categories, babies are still not as able to discriminate within categories. I had originally assumed that the pathways for these categories were just learned. This made sense because babies obviously learn categorical perception of their own native language, and learn not to distinguish between categories that do not exist within their native tongue. Further research (alright, I started on Wikipedia) led me to a paper (Tijsseling, A. and Harnad, S. (1997). Warping Similarity Space in Category Learning by Backprop Nets) showing that a connectionist back-propagation model developed categorical perception (CP). Although this suggests that CP is actually learned, and not genetic, because it was not programmed into the network examined there, there is an issue with the fact that backpropagation has not been satisfactorily described as mirroring an actual biological process within the brain. (A toy version of the effect is sketched below.)
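A minimal sketch of how CP can fall out of plain backprop (a toy 1-3-1 network of my own, not the architecture from the paper): train on a 1-D continuum with a category boundary at 0.5, then compare hidden-layer distances for equally spaced input pairs within a category and across the boundary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten stimuli on a 1-D continuum; category boundary at 0.5.
x = np.linspace(0.05, 0.95, 10).reshape(-1, 1)
y = (x > 0.5).astype(float)

# Tiny 1-3-1 sigmoid network trained with plain backprop.
W1, b1 = rng.normal(0, 1, (1, 3)), np.zeros(3)
W2, b2 = rng.normal(0, 1, (3, 1)), np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))
hidden = lambda inp: sig(inp @ W1 + b1)

def pair_dists(h):
    within = np.linalg.norm(h[1] - h[2])  # 0.15 vs 0.25: same category
    across = np.linalg.norm(h[4] - h[5])  # 0.45 vs 0.55: straddles boundary
    return within, across

print("before training:", pair_dists(hidden(x)))

lr = 1.0
for _ in range(5000):
    h = hidden(x)
    out = sig(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # output-layer delta
    d_h = (d_out @ W2.T) * h * (1 - h)    # hidden-layer delta
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0)
    W1 -= lr * (x.T @ d_h);   b1 -= lr * d_h.sum(0)

print("after training: ", pair_dists(hidden(x)))
# Inputs equally spaced on the continuum end up compressed within a category
# and spread apart across the boundary: the warping of similarity space
# that Tijsseling and Harnad describe.
```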
The mechanism that is suggested is that there is a compression within a category based on the stimuli presented, while at the boundaries there is better discrimination because the differences are spread apart. This suggests that perhaps one of the reasons it is difficult to tell whether categorical perception is learned or innate is that, despite not having innate categories, babies are still constantly presented with exemplars that, to use the perceptual magnet effect as an example, would appear clustered near the centre rather than spread all over. This could lead the baby to lump particular sounds together as their listening develops. I would be really interested in seeing whether a baby brought up hearing an even distribution of all sounds would still have the capacity to differentiate better between categories than within them. If only ethics boards could see the benefits of this test, having babies only hear computer-generated sounds for the first few months of their lives. (Although perhaps a way of measuring what they did hear over the course of the day, and supplementing it with sounds that occur at category boundaries, might be able to counterbalance natural exposure.)
Bcorrigan 15:01, 21 March 2011 (UTC)
Week of March 21st (SPRING IS HERE!)
So I was really interested in the final presentation, although I did forget to write down the presenter's name, so I can't actually cite her. That Parkinson's is less prevalent than aphasia was a really striking revelation, and although at first I was skeptical about how impressive that was, considering the range of effects of aphasia, I recognized that individuals with Parkinson's undergo a decline, so at the early stages there is not really a difference. It was really fascinating to watch the videos of the two individuals who had Broca's and Wernicke's aphasias, because I had not seen anyone like either of them before. It became very evident why there had not been a spokesperson for either aphasia before, because it was so difficult to understand either of them (alright, it was impossible to understand the fluent aphasic).
The level of understanding that the fluent aphasic had of the comic he was describing was very interesting to try to determine. How he could think that he was making sense, or what kind of sense he was making in his own head, is, to me, mind-boggling. Is the total lack of sense due to concepts not having a regulation that could keep their activations in order? Perhaps it is true, but I find it hard to believe that the only reason I make sense is that I can listen to myself. I recognize that I might make mistakes if I could not hear myself, or monitor what I was saying, but it seems that it would take a lot of crossed connections to have a thought that was associated with an entirely different word. Possibly, before the actual cognitive monitoring, there is a strong amount of feedback when the concept is attached to the right word, and this is an essential part of how we actually even think. To describe this: I might want to think of my brother, but in not activating the word "brother," other words that are normally activated with it, which would in turn help isolate the activation of the concept, might fail to fire, leading to completely different words coming out. "Monkey," perhaps, as the concepts of brother and monkey are, for me, quite closely related.
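A minimal sketch of that intuition in spreading-activation terms (a toy network of my own, not a model from the lecture): sever the direct concept-to-word link for 'brother' and the strongest remaining word is the one reached through an associated concept.

```python
# Toy spreading activation over weighted links; activation decays with each hop.
links = {
    ("brother", "word:brother"): 1.0,  # direct concept-to-word link
    ("brother", "monkey"): 0.6,        # close personal association
    ("monkey", "word:monkey"): 1.0,
}

def strongest_word(start, links, lesioned=frozenset()):
    """Spread activation from a concept and return the most active word node."""
    activation = {start: 1.0}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for (src, dst), w in links.items():
            if src == node and (src, dst) not in lesioned:
                gain = activation[node] * w
                if gain > activation.get(dst, 0.0):
                    activation[dst] = gain
                    frontier.append(dst)
    words = {n: a for n, a in activation.items() if n.startswith("word:")}
    return max(words, key=words.get)

print(strongest_word("brother", links))                                 # word:brother
print(strongest_word("brother", links, {("brother", "word:brother")}))  # word:monkey
```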
Bcorrigan 15:32, 28 March 2011 (UTC)
Week of March 28th
The debate over Ebonics was interesting both in the issues raised and in the way it was carried out. Before I would want to give an opinion, however, I would certainly want more information, particularly because the two sides seemed to be arguing against different points, instead of against each other. One thing I was left wondering was, if it is a language, how had it developed? I do not believe this was addressed. Wikipedia suggests that it is a creole, which certainly lends credence to treating it as a language, but not necessarily to the argument that it should be used to teach in schools. Perhaps offering it for language credit, like Spanish or French, would contribute to it possibly becoming an academically recognized language. Another possible course would be for it to form a creole with US English. The problem that I see with this, however, is that since English already has such a pervasive written stronghold in the US, and since people can understand each other despite the different language structures, there does not seem to be much likelihood of English syntax shifting towards any compromise. Given that Ebonics is a language, and the suggestion that young children need to understand it to communicate with the older generations, I would be interested in comparing fMRIs of these older monolinguals, English/Ebonics bilinguals, and other bilinguals to see whether the differences between bilinguals and monolinguals are observed in the Ebonics/English ones. The overlapping vocabulary could make differences difficult to identify, or it could be that the differing syntax would show large differences. This would largely just show which activations are due to syntax as opposed to vocabulary.
Bcorrigan 13:06, 4 April 2011 (UTC)
Week of April 4th
For an overall analysis of the class, I believe that I enjoyed quite a bit of it, while being frustrated with other parts. Some problems couldn't be helped, such as the room, which, having no sunlight, made it very hard for me to stay alert and attentive during classes. I think there were also too many students for the focus that the class seemed designed to have; the professor and TA had to do a lot of extra work, and knowing that there was a lot to be done made it easy to think that some marks might be affected by not getting the time they deserved. Part of this is likely due to the fact that, if I were in that position, I would find it hard to devote as much time as Dr. Newman and Sarah actually did. For that they should be commended.
The debates were quite interesting, but I found that despite the closer-than-average calls, they seemed a little one-sided, so that one team would have to completely re-frame the question in order to argue effectively, particularly in the ESL and Ebonics ones.
The Wiki idea was novel, and I don't think what I was doing really struck me until my Dad looked at it and was really impressed. Yes, it's just my Dad, but that is what really drove home the idea that someone would probably actually come across it and find it useful. The problem I found with this as the major way of marking, however, is that it meant there was no weight attributed to learning the material in class, and so if other subjects had assignments that did have weight, it was very easy to let keeping up with Psycholinguistics slide. This is likely just a problem with Neoliberal society, but I do think that the blog posts worked towards evening this out.
I think that overall I learned a lot of extracurricular skills, and am very happy with that fact. I also learned more about psycholinguistics, as well as connectionist models of the mind, and am very interested in pursuing more knowledge in that area in the future.
Bcorrigan 19:14, 8 April 2011 (UTC)