- Samantha Arseneau
- The Week of January 10, 2011
- The Week of January 17, 2011
- The Week of January 24, 2011
- The Week of January 31, 2011
- The Week of February 7, 2011
- The Week of February 14, 2011
- The Week of February 28, 2011
- The Week of March 7, 2011
- The Week of March 13, 2011
- The Week of March 21, 2011
- The Week of March 28, 2011
- The Week of April 4, 2011
- References
Samantha Arseneau is a third-year student at Dalhousie University in Halifax, Nova Scotia. She is currently completing a double major in Psychology and English and applying for the combined honours program. Upon completion of her first degree, Samantha would like to enrol in the Speech-Language Pathology Master's program. As someone who has always been interested in language and communication, Samantha jumped at the opportunity to study psycholinguistics. The study of language from a psychological perspective piqued her interest, and she hopes that the class will broaden her overall understanding of the subject. Along with her classmates in Psyo3190, she is currently developing an online textbook via Wikiversity on the topic of psycholinguistics. The project is one of great interest for Samantha, as she believes that knowledge should be accessible to all. Stay tuned for more information on the class project and Samantha's thoughts on the course!
The Week of January 10, 2011
Today in class we discussed what a language is, along with an array of history and theory. While much of the history component was a review of material I had learned in a previous class, I was struck by one piece of information presented in lecture. Dr. Newman at one point discussed sign language and its function as a language. Thinking about the different components of language and their relevance to sign language, I began to wonder about empirical research on morphemes. I know there is very little research on prefixes in spoken language, but what about in sign language? And would the findings be the same for spoken and signed languages? Would prefix sensitivity depend on the language (French Sign Language vs. Russian vs. English)? As I am doing research on morphemes in one of my other courses, I found this idea to be of great interest.
Another aspect that piqued my interest was the discussion of Kanzi the bonobo. I know that there is evidence of speech as an overlaid function in chimps, but I began to think of other research on chimps and its relevance to Kanzi. For example, Viki could only say four words but understood language very well, and Nim Chimpsky, who was raised in a rich environment, reached a ceiling effect in the syntactic correctness he could learn (Nim wasn't able to successfully combine more than 2-3 words at a time). With all of the constraints evident in language acquisition in chimps, what exactly is it about language that increases observational knowledge in these animals? And does that mean there is some evidence that language input could have a greater effect on higher-order cognitive functions than output? While I'm sure I'm probably missing something, these were, in fact, my initial reactions to the course material today. Hopefully they raised some questions for you too!
The Week of January 17, 2011
The week of lectures started off with the completion of the "Language and the Brain" lecture series. One subject that caught my attention was the portion of the lecture on aphasias. As I am writing my textbook chapter on Language and Arithmetic, I began to think about the implications that aphasias can have for one's math skills. Since math and language share similar properties, such as symbolic meaning and grammar, it would seem likely that both are severely impacted by aphasias. For instance, one study examined the association between phonological awareness and arithmetic in fifth graders, finding that phonology was related to performance on small-sized math problems, and that the problems most related to phonological awareness involved retrieval (De Smedt, Taylor, Archibald, & Ansari, 2010). As some aphasias, such as Broca's aphasia, involve difficulties with spontaneous speech (an area that would require both phonological awareness and processing), it would appear that these phonological issues could be associated with math retrieval problems, given the correlation between phonological awareness and small-sized math problems. If such is the case, then what kinds of mathematical problems arise from acquired damage to the language areas of the brain? I am still in the early phases of research for my textbook chapter, so I am still quite fuzzy on the relationships between language and arithmetic, but I hope to find a more definite answer to my question as I continue to read the literature on the topic.
The Week of January 24, 2011
One of our lectures this week focused on the topic of reading, an area of psycholinguistics that really interests me. Currently, I am doing some research in a language lab (particularly on morphemes), so some aspects of this lecture raised questions for me on the reading front. For instance, one technique typically used when studying word processing is the missing-letter effect (MLE). I have found this method useful, as it allows the researcher to quickly assess what kinds of words participants are focusing on and what implications this has for their reading abilities. I understand that this is a much more cost-effective way of studying reading, but I was wondering whether there would be benefits to completing an MLE test while also tracking eye movements. One thing that I really enjoy about the MLE test is that participants are reading for meaning, so paying attention to what they attend to can tell us a great deal about what one is testing; by adding eye tracking, we could see which kinds of words are focused on more, which part of the word individuals attend to, and how this affects their overall understanding of a text (with adults, anyway). Moreover, in regards to eye tracking, I was also wondering what kinds of patterns would arise in children who are learning to read. For instance, which part of a word would a child be more apt to focus on? Would they tend to look at the word orthographically and focus on familiar letter patterns, or would they have a decent understanding of the semantics behind the word and pay attention to morphology (for example, focusing on the base of the word as opposed to another familiar letter pattern)? Also, would the eye-tracking patterns change depending on base frequency or family size?
The Week of January 31, 2011
This week the lecture series was cut short due to a snowstorm and a holiday, so we were only able to cover the topic of morphology. During the lecture, Dr. Newman discussed ASL (American Sign Language) and how it varies morphologically from other languages. In terms of its typology, ASL is considered polysynthetic, meaning that most words are represented by one motion and that there aren't necessarily different signs for different parts of a word, like affixes. This idea of typology and ASL really caught my attention, as we also briefly discussed the overgeneralization of morphemic rules. Typically, when a child is acquiring language, they tend to overgeneralize when combining morphemes to adhere to specific speech rules. For instance, one would generally add "ed" as a suffix to create the past tense of a word (e.g., "I walked downtown"); however, a child might overgeneralize by saying "I goed to the store" instead of "I went". While this idea of overgeneralization seems to hold true for most languages and would almost appear to be a universal step in language acquisition, I wondered whether individuals who learn ASL as a first language also make similar overgeneralization errors, despite the fact that one motion can represent a morphologically complex word. If individuals learning ASL did in fact overgeneralize in the way that learners of other languages do, would there be a systematic signing pattern that would match these errors (for example, a sign that would represent "goed" as opposed to "went")? As someone who is rather naive about ASL, I am really interested in learning about the different linguistic components of the language (including morphological inflection), as well as the ways in which it varies from and is consistent with other languages.
The Week of February 7, 2011
A few topics caught my attention this week in lecture. To begin with, one of the major topics discussed in Monday's class was lexical access and the notion of priming. While we were talking about the many ways in which priming occurs, I was wondering in what ways one can actually stop priming. For example, I know that an articulatory suppression task (e.g., saying the word "the" over and over again while reading a list of words and trying to remember them for a later recall task) prevents an individual from converting a visual image into a phonological code, reducing the probability of making a phonological error on a recall task (e.g., reporting that there was a "p" in a list of words when there was really a "b"). If such is the case for memory tasks, then would keeping the phonological loop busy through articulatory suppression make someone less likely to be primed by a visual word? It may seem like a silly question, but I always find it interesting how some tasks can trick the brain into working in a certain manner.
Another subject that I found really interesting this week was syntax. Since I am writing my chapter on language and arithmetic, the formulaic structure of syntax really caught my attention. As a result, I found myself thinking of the various ways that one could manipulate said formula to compose sentences that vary on several levels, including surface frequency, overall frequency, etc. Just as it is possible to create math problems that range in difficulty through their equations, is there a specific formula that invariably creates sentences that are parallel on all levels except for the words in the sentence?
The Week of February 14, 2011
In class we discussed a few different syntactic theories and the way in which they tend to divide the scientific community; however, it seems to me that they could almost be complementary to one another. For example, one can look at words in a sentence in terms of meaning, but how can one be sure that one is in fact reading for meaning? By looking at the formulaic structure of syntax (as suggested in Chomsky's theory), one can control for all aspects of the sentence except for meaning. That way, one can be sure that individuals are reading for meaning and not using any other cues, like orthography. By combining the theories centered on meaning with those centered on formulaic structure, one can get a clearer picture of how syntax works. So maybe the theories do not need to be considered so separate after all?
The Week of February 28, 2011
This week in lecture I was fascinated by the topic of language and music. As someone with a background in music, I found it really neat to learn about the various brain areas that are active when processing music and language. While learning about the shared cognitive resources in class, I couldn't help but think of one of my friends, a fellow musician who is fantastic at his craft but remarkably tone deaf. For instance, in class we discussed a study that compared inexperienced and experienced musicians listening to music. It found that participants who were experienced in music were able to discriminate between two pieces of music that varied only slightly in their notes (both songs shared the same musical properties). Interestingly, my friend, who has been trained in music for several years, cannot discriminate between songs that are only slightly different. So I couldn't help but wonder: how can he be so good at playing and writing music without being able to hear the notes when tuning instruments or singing? This, of course, got me thinking about the brain areas that could be affecting my friend's musical abilities. We discussed in class the role that Wernicke's area plays in language and music comprehension, so I was wondering whether a deficit in Broca's area could be causing my friend's inability to produce music in key, or whether a combination of brain areas could be responsible for his strange musical abilities.
The Week of March 7, 2011
As someone doing a fair bit of research on reading and writing in one of my other classes, I found the topic of writing this week really fascinating. At one point we discussed the evolution of written language and how it developed from pictographs to logographs and eventually to phonetic representations of words. While thinking about this, I found it interesting to note that writing started off fairly syllabic and evolved into something more phonetic, which made me think that chunking would be the most intuitive type of breakdown for writing. So, if written language was intuitively chunked (i.e., syllabic), then why do most school systems focus mainly on phonetics when teaching children to write and read? It would seem to me that to teach children the skills they need to learn to read and write, it would make sense to teach them to write phonetically, but also with a chunking method, as it may be one of the more intuitive ways to learn writing. One of the best ways this could be accomplished would be through teaching a morphemic breakdown of words. By teaching children to find the "words within words" (i.e., base words), kids would have yet another skill to draw upon when learning to write. I think that this is important, as both written and spoken language are integral to our society.
The Week of March 13, 2011
The lectures this week focused on the topic of development. Most of the discussions and readings this week actually made me think about next week's topic as well (bilingualism). I come from a bilingual background, so I find it interesting to think about development in monolinguals and bilinguals and how they might differ. In particular, the assumption of mutual exclusivity caught my attention in terms of bilingualism. Under mutual exclusivity, it is assumed that infants only learn to map one word to one object. This, of course, would not seem to work for a child who is bilingual, as they would learn two separate names for everything. For instance, both "arbre" and "tree" would be mapped onto the same object. So, if the assumption of mutual exclusivity holds true, does this mean that children who are bilingual develop language skills differently than monolinguals? Moreover, does this mean that children who speak more than one language store language differently? And do other developmental assumptions hold true? Would one have to change one's techniques when speaking to one's children to help with development? Another notion that was brought up was the fact that infants can map and identify words differently depending on whom they are interacting with. If such is the case, would it be beneficial to have one parent teach one language and the other parent teach the other? I hope that we touch on some of these development questions when we start to discuss bilingualism.
The Week of March 21, 2011
I absolutely loved the guest lecture today on aphasias, and I didn't realize how prevalent they are in our society. As someone who wants to become a speech pathologist in the future, I found the opportunity to actually see videos of individuals with aphasia fascinating. One of the videos that really caught my attention involved a man with non-fluent aphasia and his attempts at communicating with his wife. In lecture we discussed the fact that many individuals with non-fluent aphasia have difficulties with written communication; however, to make himself understood, this patient often wrote down what he was trying to say. In doing so, he was for the most part able to communicate, and he could even read the words that he had written. This made me wonder about the spectrum of aphasia, and what specific neural impairments would cause the different kinds of non-fluent aphasia. If there are so many different types, and different spectrums, of aphasia, then there must be an equally large set of treatments available. This in turn made me think about the different methods of therapy for each type of aphasia. I would imagine that global aphasia would be one of the most difficult to treat, but I wonder what methods would be most effective to help someone who has deficits in both speech comprehension and production. Of course intensive therapy is the best route, but how do you communicate to someone that they have a problem when they can't understand language or can't get the words out? Would you have to start from the very basic elements of language development, or could you take another approach? Another piece of information that boggled my mind in class was that, in recovery, some types of aphasia can turn into other types. For instance, global aphasia can eventually become another, less severe, aphasia. But how does that happen?
If someone has a deficit that affects areas involved in both speech comprehension and production, then how does one of the deficits simply disappear? And if someone is able to lessen the severity of their aphasia (i.e., go from a global aphasia to a non-fluent aphasia), could they also once again develop their language skills? Can aphasia ever be completely cured?
The Week of March 28, 2011
This week marked the start of the debate section in our class. I have to say that listening to the different viewpoints on contemporary topics in the psychological field has been really neat. One debate that really caught my attention was the argument for and against the "Fast ForWord" reading intervention program. I found that both teams worked very hard to present their points in a clear and precise manner; however, there was one point brought up that unsettled me a little. This was the notion that the company that invented the intervention program must be tampering with its research data to make the program look more successful (a point brought up, of course, by the 'against' team). While I know that the group promoting the "Fast ForWord" program did not back it up with as much empirical evidence as the 'against' group, I do feel that their use of the company's results on the helpfulness of the program was valid. I understand that the company is in fact making money from the program, but would they not have to consider the ethical implications of manipulating their data to make it look more significant than it really is? Most review boards would not let a study be considered if it was not ethically valid. Not only that, but the program is designed to help individuals with dyslexia, so it seems a bit unjust to automatically assume that the company is only interested in personal gain and is not representing its data properly. I understand that some corporations do in fact fund their own research to make their products seem better than they really are, but I think that one needs evidence before accusing a company.
I also found it interesting that in these debates there never seems to be a strictly right or wrong answer. For example, the evidence illustrated that the "Fast ForWord" program was both beneficial and not. I found similar answers when I was researching information for my own debate. This coming week I will be arguing against the notion that right-hemisphere activity helps with recovery from aphasia; however, as with most topics in our debate section, there does not seem to be a single clear side to the argument. In fact, it seems that one's opinion on the subject can vary by a large amount. This leads me to wonder who decides which side is right when it comes to controversial subject matter in the psychological field. Do the researchers decide (even though the literature on these topics varies significantly)? Or is it the government? Or perhaps the public? How does one settle a contemporary issue? I can't wait to see what the next week's worth of debates has in store!
The Week of April 4, 2011
This week we were asked to focus our blog on our reactions to the way that the course was organized. I have to say that I found this course's structure to be very different from any other class that I've taken, and I've loved it! It was really nice to know that the effort I put into my work (such as the Wikiversity chapter) made a contribution to something much bigger than my own educational experience. I really feel like I was a part of something that benefited others who are interested in psycholinguistics. I also thought that the debates were a great way to illustrate controversies that can arise in the world of psycholinguistics. One aspect of the course that I didn't particularly like was the blog section. I understand that it was a method to consolidate information learned in class and to develop critical-thinking skills, but I sometimes felt that another form of assignment would have been more effective. Overall, I feel that the course was very effective in teaching me the subject material. If given the choice, I would take the course again in a heartbeat! Thanks for a great class, Dr. Newman!
De Smedt, B., Taylor, J., Archibald, L., & Ansari, D. (2010). How is phonological processing related to individual differences in children's arithmetic skills?. Developmental Science, 13(3), 508-520. doi:10.1111/j.1467-7687.2009.00897.x