User:Starfield


My name is Jillian Baker and I am an undergraduate student at Dalhousie University. I am currently in my third year of the neuroscience program, and my interests lie in vision, audition, and language.

Blog entry #0-Jan 7, 2011 As an undergraduate student, life is normally pretty scheduled and predictable. You wake up, go to class, study, write an exam or paper, go to sleep, and start all over again the next morning, term, etc... However, as a student of Aaron Newman's third-year psycholinguistics class at Dal, this pattern of activity has been altered. This year, Professor Newman has arranged the scheme of the class around Wikiversity. Upon discovering that Wikiversity does not have a psycholinguistics textbook, and finding motivation in the work of a class in Australia, Dr. Newman challenged us undergraduates to create a textbook on psycholinguistics. Each student in the class is responsible for a single chapter, along with exercises for that chapter and a blog. I was excited by this idea because it is a different, innovative way to engage students not only in learning but also in improving their writing and editing skills. For my topic I have chosen morphology, the study of the structure of words, as it has always interested me but I never had the chance to study it. I look forward to learning about psycholinguistics, writing about morphology, and reading what my fellow undergraduates write!

Blog Entry #1-Jan 13, 2011 Yesterday in class Dr. Newman asked us if it is possible to think without language. On the surface this question didn't seem very complex, but as I gave it some thought I realized that the answer depends on how you interpret "language" and "thought", on previous exposure to language, and on many other variables that weren't clear at the beginning. So is it possible to have thought with no language? In my opinion, yes, based on evidence from aphasia patients, people with little exposure to language, and so on. We discussed the Whorfian hypothesis, which has a weak version, linguistic relativism, in which there are differences in thought across languages, and a strong version, linguistic determinism, in which our knowledge of language determines how we think. I do believe that language shapes how we think. For example, if a certain language (Dani) has only two terms to describe color (mola and mili, light and dark respectively), I believe its speakers' perception would differ from that of people who speak a language rich in color terms (English). Although I believe that language shapes our thoughts, I do think it is possible to have thought without language, but that such thought is less rich than it would be with some form of language. This is due to the inability to communicate and converse with other beings and learn from one another, and to the known enhancing effects that language has on other cognitive development (evidence comes from studies with deaf children). So overall, I do believe that thought without language is possible, but that having some form of, or exposure to, language greatly enhances the quality of thought.

Blog entry #2-January 22, 2011 One phenomenon discussed in class this week was lateralization differences between right-handed and left-handed individuals, along with supposed gender differences. It is widely popularized that the left hemisphere of the brain is dominant for language in most people, especially right-handers, while left-handed people tend to be a bit less lateralized, or more right-hemisphere dominant, for language. It is also pop science that females are less lateralized than males, such that if a female and a male suffered comparable strokes, the male's language abilities would be more affected. I was surprised to learn that there are no such gender differences, and that while language does appear to be left-hemisphere dominant, this varies across populations, even among right-handed people.

Another concept that fascinated me this week was phonology. I really enjoy phonology because I'm an audition girl; I love the auditory system. We have not covered acoustic phonology yet, though, so I'll have to save that for another day. What amazed me was American Sign Language (ASL) phonology. I had never considered that ASL was once not regarded as a language because it was believed to lack phonemes. It was William Stokoe who discovered that sign language does indeed have phonemes, termed cheremes, which are based on the location, hand-shape, movement, and orientation of the sign. I began to wonder how these cheremes are processed by the brain. Are they processed in the same manner as our phonemes despite being a form of non-verbal communication? Does language acquisition take the same path as for spoken language? Or does sign language involve a different form of processing altogether? I hope that some light will be shed on these questions, and I'm looking forward to trying to find some answers!
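
Just to wrap my head around Stokoe's idea, here is a little sketch of a sign broken down into its four parameters. This is only my own toy illustration; the parameter values are made up and are not real ASL data.

```python
from dataclasses import dataclass

# Stokoe's insight: each sign decomposes into a small set of contrastive
# parameters, much as a spoken word decomposes into phonemes.
# (Toy values for illustration only -- not real ASL data.)
@dataclass(frozen=True)
class Chereme:
    location: str     # where the sign is made (e.g., "chin", "chest")
    handshape: str    # configuration of the hand (e.g., "flat-B", "fist-A")
    movement: str     # how the hand moves (e.g., "tap", "circle")
    orientation: str  # which way the palm faces (e.g., "palm-in", "palm-down")

# Two hypothetical signs differing in only one parameter form a "minimal
# pair", just like "bat" vs. "pat" in English.
sign_a = Chereme("chin", "flat-B", "tap", "palm-in")
sign_b = Chereme("chest", "flat-B", "tap", "palm-in")  # differs only in location
```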

Blog Entry #3-January 30, 2011 Please note the change in the entry numbers, to clear up confusion over submissions.

Have you ever read a chapter on reading? As you read, you begin trying to analyze how you are reading: how your eyes are moving, how you are linking ambiguous symbols to meaning, and so on. The more you try to understand how our brains are capable of something as complex as reading, the more involved it gets. Although I had a brief introduction to the topic of reading in psychology and am aware of the vast amount of research on it, I was interested to learn more about the steps involved: from beginning to recognize letters as letters and not just random symbols, to knowing logographic representations and being able to sound things out, to recognizing whole words and achieving orthographic fluency. I was unaware of how involved something like reading, which seems so natural and innate to me, really is. The concept of phonological awareness is new to me, but I was fascinated to learn about its effects on reading. Phonological awareness (PA) is the awareness of the sound structure of language and the ability to manipulate it. Various studies have demonstrated the beneficial effects of phonological awareness on reading, but the strength of the effect varies across languages. This leads to another important factor in reading, grapheme-to-phoneme correspondence (GPC), which refers to how consistently the letters of a word map onto its sounds (with high GPC, sounding out a word will give you the right pronunciation). It has been found that speakers of languages with high GPC, such as Turkish, have higher PA. What does this mean for deaf people? People who are born deaf never get the chance to gain experience with phonology, yet reading still depends on mapping sounds onto words. While many deaf people do undergo oral training, in which they can learn the vibration of the vocal cords for different sounds and learn to lip-read, the average deaf adult reads at only about a 4th-grade level. Clearly, research needs to be done to improve the techniques available to deaf people; however, this does demonstrate how important sound is to the process of reading, even though we do not always read aloud. I challenge everyone to read a paragraph and attempt to bring to consciousness how you go about reading, because it is something that seems so natural and is, at least for me, next to impossible to bring into awareness.
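
To picture what high GPC buys a reader, here is a little toy sketch of "sounding out" with a one-letter-one-sound table. The table and phoneme symbols are made up for illustration (the word "kedi" is Turkish for "cat"); this is not a real phonological analysis.

```python
# Toy grapheme-to-phoneme table for a transparent (high-GPC) spelling system:
# one letter reliably maps to one sound. (Made-up values, not real phonology.)
GPC_TABLE = {"k": "k", "e": "ɛ", "d": "d", "i": "i"}

def sound_out(word):
    """Return the phoneme string for a word, or None if any letter has no
    single reliable sound in the table (i.e., the mapping is not transparent)."""
    phonemes = [GPC_TABLE.get(letter) for letter in word]
    if None in phonemes:
        return None  # sounding out fails: correspondence breaks down
    return "".join(phonemes)

print(sound_out("kedi"))    # 'kɛdi' -- transparent spelling sounds out correctly
print(sound_out("knight"))  # None  -- English-style spelling defeats the toy table
```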

Blog Entry #4-February 4, 2011

This week in lectures we discussed the topic of morphology, which happens to be the topic of the chapter I am writing for the psycholinguistics textbook. Much of my chapter focuses on what morphology is and on the basics of morphology; this is the linguistic part of psycholinguistics. The lecture gave me some interesting possible topics for the 'psycho' part of psycholinguistics. I was fascinated by the typology of different languages, mainly by polysynthetic languages such as Inuktitut. A polysynthetic language is capable of expressing an entire sentence in one word. This is fascinating to me because I wonder how these types of languages evolved. How is it that one word came to represent an entire sentence or thought? Another topic is the past-tense debate in morphology, which I have come across in my research for the morphology chapter. The debate centers on how verbs are processed in the brain and how they are analyzed and stored in the mental lexicon. One side of the debate holds that processing is completely based on combinatorial rules (Chomsky and Halle), while the other holds that it is based on associations, in which all forms are simply stored in associative memory (Rumelhart and McClelland). More recently a dual-mechanism approach has been proposed, in which Pinker (1991) suggests that irregular verbs are stored as whole forms in the mental lexicon while regular verbs are built using the combinatorial rules. Evidence for this comes from priming experiments, in which walk primes walked and vice versa while teach does not prime taught, as well as from frequency effects. I will be discussing both sides of the past-tense debate, and the evidence currently available, further in the morphology chapter.
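
To picture the dual-mechanism proposal, here is a little sketch of the two proposed routes. This is my own toy code, not a model from the literature: a dictionary stands in for associative memory, and the fallback rule stands in for the combinatorial route.

```python
# Toy dual-mechanism past-tense generator: irregulars are retrieved as stored
# whole forms; the regular "-ed" rule applies only when lookup fails.
IRREGULARS = {"teach": "taught", "go": "went", "sing": "sang"}  # stored whole forms

def past_tense(verb):
    if verb in IRREGULARS:   # route 1: associative lookup in the "mental lexicon"
        return IRREGULARS[verb]
    return verb + "ed"       # route 2: combinatorial rule, works on any stem

print(past_tense("teach"))  # 'taught' (retrieved whole)
print(past_tense("walk"))   # 'walked' (rule-generated)
print(past_tense("wug"))    # 'wuged' -- the rule even applies to novel verbs,
                            # though this toy ignores spelling adjustments
```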

Blog #5-Feb. 13, 2011 Have you ever tried to think about putting a sentence together? Where the verb goes, the subject, the adjective, etc.? Syntax refers to the rules of sentence grammar used to combine words into meaningful, correct phrases. Syntax is something that comes very naturally to us as language develops, and it essentially becomes automatic. Personally, I find it very hard to think about grammar and apply it consciously, even to check whether my grammar is correct in a paper. Grammar and syntax are so natural to us that it is easy to forget the complexity involved. We need to be able to understand not only straightforward sentences but also things like puns and jokes. This complexity is highlighted by the limits of technology: until recently, computers were unable to "understand" our language and apply it in a meaningful way, like answering questions. IBM has shed some light on the complexity of our language through the development of "Watson", a machine developed by IBM as a question-answering system. Watson seeks out the information and then answers the question you initially asked. This means that Watson can indeed not only understand our language but respond to it, even when it involves the use of puns and jokes. Watson will be making its public debut Monday, Feb. 14, 2011 on Jeopardy!, which will be a true test of its abilities! I'll be tuning in to see how Watson performs, and how well he is able to truly understand human language.

Here is a link to IBM's Watson page if you're interested [1]

Blog post #6-Feb. 17, 2011

Have you ever sat down and tried to break down how language works: how we take ambiguous symbols, make them into words, then further combine words into meaningful phrases based on syntactic rules and semantics? Or how our brains carry out the processing of language? That's what Dalhousie's psycholinguistics course is all about. The past few months have really opened my eyes to how intricate and complex our brains really are, and I'm a third-year neuroscience student. Sure, everyone agrees that our brains are super complex, otherwise we would have cracked the code on how they work by now, but until you really stop and think about a complex process such as language and what is involved in it, you don't truly grasp the complexity. What is so amazing to me is the speed and efficiency with which the human brain performs language. If someone asks you a question, in order to answer it you are required to syntactically parse the question, understand it semantically, combine the correct, meaningful words into an answer, and then produce it, which involves the use of the motor system. Yet the brain is so fast and efficient that doing all of this often takes less than a few seconds; that is how automated the process is.

As mentioned in my previous post, Watson, a question-answering machine produced by IBM, debuted on Jeopardy! Feb. 14-16. I watched these shows and was quite interested in how they displayed the probability, or how confident Watson was, in particular answers after searching its databases. However, something else became quite evident: often Watson was not very confident in any answer, and he wasn't always that fast at answering (buzzing in), thereby allowing the other contestants a chance to answer. At the end of the competition Watson did come out victorious, but it was a fairly close result. What I find important to note is that Watson had access to thousands of sources to find his answers; the human contestants, on the other hand, actually knew the answers, had them stored in memory, and were still able to keep up with this supercomputer. This sheds light on how fast and efficient our brains truly are when it comes to cognitive processes, especially language.
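
From the on-screen confidence bars, Watson's buzzing looked like a simple threshold decision. Here is my own toy sketch of that behaviour as I understood it from watching; the numbers and the threshold are made up, and this is certainly not IBM's actual system.

```python
# Toy version of "buzz only when confident enough" (made-up numbers/threshold).
def decide_to_buzz(candidates, threshold=0.5):
    """candidates: dict mapping candidate answers to confidences in [0, 1].
    Return the best answer if it clears the threshold, else stay silent."""
    best_answer = max(candidates, key=candidates.get)
    if candidates[best_answer] >= threshold:
        return best_answer  # confident enough: buzz in with this answer
    return None             # too unsure: let the human contestants answer

print(decide_to_buzz({"Toronto": 0.14, "Chicago": 0.62}))  # 'Chicago'
print(decide_to_buzz({"Toronto": 0.30, "Chicago": 0.35}))  # None
```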

Blog Post #7-March 6, 2011

The topic of discussion that really grabbed my interest in lectures this week was the comparison of language and music. Here's a little more information about me: I am a musician. I began playing piano at the age of four and have since learned to play the clarinet and the tenor (my passion), alto, and soprano saxophones. I was in my high school orchestra as well as the jazz band, and was first tenor by grade 11. I love music so much that if it weren't for my love of science, I would have majored in classical music.

Although I have always viewed music as a way to communicate and express emotions, I had never stopped to think about how similar it really is to our day-to-day language. Certain aspects of language and music do differ; for instance, although both language and music have minimal units, what those units are (phonemes and notes) differs. Production is also different, in that everyone can speak fluently but not everyone can play music well. However, both music and language are infinitely generative; this is one of my favorite aspects of music, that you can always create new pieces. What is important to note is that generativity depends on rules, either grammatical rules for language or rules of harmony for music, which is yet another thing the two have in common.
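
To see how rules give you infinite generativity, here is a toy grammar of my own invention (not one from the lecture): a handful of rewrite rules can produce endlessly many different sentences, because the rules can call on each other recursively.

```python
import random

# A toy context-free grammar. "S" is a sentence, "NP" a noun phrase, "VP" a
# verb phrase, "PP" a prepositional phrase. Because an NP can contain a PP,
# which contains another NP, the rules recurse and the grammar can generate
# unboundedly many sentences from six small rules.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "PP"]],
    "VP": [["V", "NP"]],
    "PP": [["near", "NP"]],
    "N":  [["dog"], ["student"], ["piano"]],
    "V":  [["sees"], ["hears"]],
}

def generate(symbol="S"):
    """Expand a symbol by randomly picking one of its rewrite rules."""
    if symbol not in GRAMMAR:  # a plain word: emit it as-is
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    words = []
    for part in expansion:
        words.extend(generate(part))
    return words

print(" ".join(generate()))  # e.g. "the dog near the piano hears the student"
```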

I was interested to learn not only that music is usually left intact in aphasics, but also that it is possible to lose only the perception of harmony (amusia). Patel (2005) did an experiment demonstrating that in priming tasks (which Broca's aphasics are known to do poorly on for language), control subjects respond faster the more closely related the chords are, while in aphasics there is no difference (i.e., no priming). A follow-up ERP experiment showed that the ERP elicited by a linguistic syntactic violation is very similar to the ERP elicited by a musical syntactic violation. These pieces of evidence therefore suggest that while music and language are processed in different parts of the brain, certain areas are shared between the two, such as those required for syntax and integration (the shared syntactic resource hypothesis).

Another highly interesting topic is the development of music in children, because, much like in the development of language, infants appear to be able to hear mistunings in both native and non-native musical scales, while adults detect non-native mistunings only at chance. This is similar to language in that infants have the ability to hear all possible phonemes, but with exposure to particular ones (the native language) the others drop out. Because of this, and because we know that early exposure to multiple languages helps retain phonemes and therefore acquire languages more easily, I wonder whether exposure to music early in infancy would help a child hear and produce music more efficiently. For example, would earlier exposure allow for better tuning ability as a musician? Would they be better at taking music they hear and playing it by ear, etc.?

On a final note (no pun intended), I wanted to give some examples of music that I find especially expressive, and I warn you that I have broad tastes, from classical to Japanese and Korean, languages I only know a bit of. But because music is expressive, it doesn't require knowing the words.

Clair de Lune by Claude Debussy [2]

The Battle by Harry Gregson-Williams (Chronicles of Narnia soundtrack) [3]

Cherish by Ai Otsuka (Japanese) [4]

Friends by Ai Otsuka (Japanese) [5]

Waiting on the World to Change by John Mayer [6]


Blog 8-March 13, 2011


Over the last few lectures we have discussed topics that have not only been interesting to discuss but that have made me self-conscious to the point of annoyance. The last few classes covered speech errors and gestures, two very common phenomena that I had never really stopped to think about.

Speech errors occur every day, but no one stops to think about why and how they occur. I personally was only ever aware of them because they can sometimes make a situation very awkward and embarrassing, like when you are presenting at a conference and you say "sacrified", as in sacrificed/died. But how do these errors even occur? Well, that probably depends, at least in part, on the type of speech error made. The example above is called a blend because it is a mixture of two words; perhaps blends occur because of semantic relation (the two words have similar meanings), but one can't be sure. There are other types of speech errors, including addition (carefully → clarefully), deletion (plastic → platic), substitution (saying open when you meant close), etc. After the lecture on speech errors I became excruciatingly aware of my own. I make A LOT. On Sunday alone, I made 14 that I was aware of, and there were probably more because I was at my riding barn, where I often talk without thinking. In fact, thinking back on movies I have watched, one in particular stands out: "Mean Girls", where the dumb blonde character says "grool", as in great and cool. This makes an interesting point: speech errors are often attributed to people who are, let's say, not the brightest crayon in the box, but in actuality speech errors happen to everyone, even if they are not consciously aware of it. So next time you go to make fun of someone's speech error, remember: speech errors happen!
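
Just for fun, here is a little sketch of how the single-letter error types above could be told apart mechanically. It is my own toy; real speech errors are classified by hand from phonetic transcripts, not from spelling.

```python
# Toy classifier for single-letter speech-error types (my own sketch).
def classify_error(intended, produced):
    if produced == intended:
        return "no error"
    # Deletion: the produced form is the intended form minus one letter.
    if any(intended[:i] + intended[i + 1:] == produced
           for i in range(len(intended))):
        return "deletion"
    # Addition: the produced form is the intended form plus one extra letter.
    if any(produced[:i] + produced[i + 1:] == intended
           for i in range(len(produced))):
        return "addition"
    # Anything else needs human judgment in this toy.
    return "other (possibly a blend or a substitution)"

print(classify_error("plastic", "platic"))        # deletion
print(classify_error("carefully", "clarefully"))  # addition
print(classify_error("great", "grool"))           # other (a blend of great + cool)
```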


The topic that both intrigued me and is now driving me crazy is gestures. Personally, when I think of gestures I picture a Frenchman talking like mad with his hands. However, gestures, just like speech errors, are made by everyone. As Dr. Newman was giving this lecture, he was making some gestures purposely, but he made many that he was probably not consciously aware of. Quite honestly, it drove me nuts to try to learn about gestures while watching my professor use them pretty much constantly. However, it did make me much more aware of my own gestures and how much everyone really relies on them when talking. In their own right, gestures are a form of communication, but they also accompany language and help the listener reach a deeper understanding. Gestures also help the person talking organize their thoughts and explain them. Here is another tidbit about myself: I am a math geek, and I especially love calculus. I find it similar to trying to solve a difficult puzzle, and it captivates me. One unique thing I do when solving a math problem is solve it in the air. By this I mean that I literally visualize/gesture the entire process by writing in the air. I actually solve the question without ever showing a calculation (mind you, I write out the steps during exams). People have often asked me why I do this, but the only answer I am able to give them is that it helps me organize my steps so that I can truly understand the problem. This is one of the key functions of gestures, and I have seen it proven true through my own experience.

Just as a side note, I decided to make a rough count today of how many gestures Dr. Newman used throughout the 50-minute class. The rough estimate was something like 28, including things like pointing (deictics) and other types of gestures.

For fun, try watching this video from the Bugs Bunny cartoon. At the end of it, choose a favorite scene and try to explain it in as much detail as you remember. What you should notice is how many gestures you use as you act out the cartoon. [7]

Blog post 9-March 20, 2011-The topic of discussion this week was language development: from how language differs in children compared to adults, to the timeline of language acquisition. A few years ago, in the first year of my university degree, I learned that each language has a specified number of phonemes and that babies are capable of pronouncing and understanding the phonemes of all languages. This differs from adults in that, due to exposure to particular languages, adults can only use the phonemes that exist in the languages they spoke from an early age.

This has important consequences: babies are capable of learning any language quite easily, while adults have difficulty learning new languages containing phonemes that did not exist in their primary language(s).

Although babies are born with the ability to potentially learn any language, a critical period for language exists (thought to end around puberty). If exposure to a language has not occurred within the critical period, learning that language becomes more difficult. This generalizes to all aspects of language: if a certain aspect, for example irregular inflections, is not learned during the critical period, it will be very difficult to acquire later on.

Another concept that fascinates me is motherese. Motherese refers to how mothers tend to talk to their infants, and it generally occurs with no conscious awareness. In motherese, mothers tend to exaggerate the stress and prolong the vowel sounds. This is received positively by infants and actually helps them discriminate sounds. So although motherese is something mothers do naturally, it is universal and does have a function, which suggests that it may have been selected for during evolution.

A fad that has gained popularity in recent years is baby signing, which refers to teaching babies to communicate through signs. Pop science has often described baby signing as increasing the cognitive ability of the infant; in truth, however, there is no evidence that baby signing improves cognitive ability. Its benefits are more social than linguistic or cognitive, as signing can be learned earlier than verbal communication. There is also some controversy over whether the first "signs" are actually signs or gestures.

Here is a link to an example of baby signing. It also demonstrates the misrepresentation of the evidence behind baby signing for enhancing cognitive development. Enjoy!

[8]

Blog post 10-March 25, 2011

The last few lectures of the year covered the topic of bilingualism and whether the controversy surrounding it has any basis. Bilingualism refers to knowing two languages, but we often use the term to cover multilingualism as well. Throughout this course I have been revealing tidbits about myself as the blog posts have progressed, so here is another: I am bilingual. I speak both English and French, and although my French is a little out of practice, I am still fairly fluent. There has been a lot of talk about whether bilingualism is an impediment or a benefit during development. It has been shown that although it can initially slow development (through difficulties keeping the languages separate), by grade 5 (about age 10) there is no disadvantage to being bilingual; in fact, by this age bilingual children are capable of performing complex translations and show better attentional control. There is also substantial evidence that it is more difficult to acquire languages later in life, suggesting a critical or sensitive period for language acquisition. However, there are exceptions to this, and a cool video shown in class featured a guy who learned many languages in his 20s [9]


One question I would like to investigate is whether it is possible to learn language to some degree unconsciously. As you have seen from my music and language post, I listen to Asian music. I also spend more time watching Asian dramas, mainly Korean and Taiwanese, than I do North American TV. I watch these with English subtitles and can pick out a few common words here and there. However, in December my Mac battery died during the exam period and I drove in to Dal to get them to check it. I got there about 15 minutes before closing; they ran some diagnostics, and sure enough my battery was dying. I got a new battery and was on my merry way. I was halfway across campus, talking on the phone to a friend who does know Korean, when I realized I had left my charger at the Dal computer help desk, which was closing in 5 minutes. According to my friend, I blurted out an entire freaked-out paragraph or two in Korean that made perfect sense. As stated previously, I do not speak Korean, so I am wondering whether, at some subconscious level, it is possible to learn language through mere exposure and association but not be able to recall it on command.


This week we also had a guest lecturer come and discuss the different types of aphasia and how prevalent yet undiscussed the condition is in North America. A striking fact was that more people in the US live with aphasia than with Parkinson's disease, yet hardly anyone knows what aphasia is. The point made was that because it is, among other things, a communicative disorder, advocacy is hindered: people with aphasia cannot act as spokespersons, since the primary deficit is communication. Initially, when a lot of people see the videos displaying the language issues these people have, they think of aphasia as just an annoyance, but aphasia can be crippling. I personally link non-fluent aphasia with locked-in syndrome in certain respects. Although non-fluent aphasics tend to be able to function well apart from their language deficits, they are aware of those deficits, and it becomes frustrating: they try to express themselves but no one understands them, just as in locked-in syndrome. It may perhaps be even more mentally taxing, because these people could live potentially normal lives if they could simply communicate. I find aphasia a very striking topic, and I will be exploring the influence of the right hemisphere in recovery from aphasia for my debate topic.

Here is a link showing the symptoms of someone with non-fluent aphasia [10]


Blog post 11-April 1, 2011 Happy April Fool's Day!!!

SIDE NOTE: Sarah, you marked my blog post 11 as blog post 10. I sent you an email but have not heard back; the comments given for blog post 10 match the topic of my blog post 11. Can you maybe just reverse them and grade blog post 10 this week?

The end of my third year at Dal is nearing, and the lectures are wrapping up! Dr. Newman has a unique and creative idea for presentations: debates. Instead of the traditional presentation style, where you just stand boringly at the front of the class and present a topic that is generally of little interest to you, Dr. Newman uses debates because they are interactive and stimulating. The topics are very current issues in psycholinguistics, and each team (for and against) has four members. After both sides have presented and a Q and A session is done, the teams get a try at a rebuttal! This format allows the class to interact with one another and is a little more fun than the typical presentation.


The first debate this week was on the Oakland school board, which tried to implement Ebonics as the primary language of its schools. Like most of society, I had always thought of this language as more of a poor version of English, and was surprised to learn that it is in fact a recognized language. The for side argued that Ebonics should be the primary language taught in Oakland schools because it is the primary language of the community, and that by telling kids it's wrong in school, the schools are taking away part of their culture. They discussed its value as a language and argued that perhaps fewer students would be held back or drop out of school. The against side brought up many good points; the two that really struck me were the impact later in life and the issues with implementing it in the school system.

I have to say that I sided with the against team, despite some aggressiveness towards the other team, because I agreed with the points they discussed. They argued that by teaching students primarily in Ebonics, the schools will not adequately prepare them to ever leave that community. As demonstrated by my own ignorance, most of society thinks of Ebonics as poor English or a form of slang. This means that the form of writing required in later education would not be taught to the degree necessary to sustain post-secondary expectations, and it's possible that in job interviews Ebonics-speaking people would be judged not on their true abilities but on the way they talk. Although I understand the point of view of the for side, the practical application would be difficult, as most teachers also do not speak Ebonics. It would be rather difficult for them to teach something they know little about; for example, it would be like a French person teaching Spanish just after learning it. There are similarities, but all in all they are pretty different. I think that if there is a desire for this type of school in the community, it should be implemented in the manner of the French schools here in the HRM. The HRM's French schools are independently run but are not "private schools"; they are a public version of a French school. This would be possible to do for Ebonics as well.

The other debate this week was on the Fast ForWord program, a dyslexia therapy that is supposed to be the best out there. However, the against side brought to light that the company owns the patent on the program, and no one is able to independently research it without their permission (and probably funding). This is suspicious to me. Another point that really struck me was that after two years, the initial effects of the program do not hold up; they are short-term results, not long-term. Here is another fact about me: I am dyslexic, although these days you wouldn't be able to tell. I received therapy when I was younger and have very few issues now; in fact, I am one of the top students in the neuroscience program for my year. Had I needed to stick with a program for life in order to gain "long-term" treatment, I do not believe I would have been able to get to where I am. I also do not believe that this is fair to ask of dyslexic patients, especially when other programs do not have these issues.

Here is a link to what the Fast ForWord program claims to accomplish [11]

Blog post 12-Final post, April 8, 2011

The last blog post is a reflection on the course: what we enjoyed, what we didn't, and what we think needs improvement!

Although I wasn’t sure what to expect from a course on psycholinguistics, I found myself enjoying a lot of the material. I had no previous background with the study of language, except being to a speech therapist when I was younger. However, the topics really intrigued me as a whole, with the ones linking language to other factors of life, such as music and learning (effects of bilingualism) really grabbing my interest.

I really enjoyed how Dr. Newman integrated topics that students could relate to into lectures and debates. It made for a much more interactive class and made me personally more willing to learn the material! I also appreciated the podcasts on our course website, as they gave me the chance to still hear a lecture when I was sick; it is sometimes difficult relying on a classmate's notes. I think my favourite topic was language and music, because it really opened my eyes to the connections between the two.

My favourite part of the course was definitely the blogs, because they gave you a chance to reflect on the week's topics, explore questions that weren't brought up in class, and just get creative. I find a lot of university classes very generic: lectures...study...exam...repeat. Psycholinguistics gave us a chance to be creative, to "use the other side of the brain" as pop science would say, and this is something I have really missed since moving past high school!

The new format of marking, incorporating Wikiversity, was a good experience, and it allowed us to gain valuable research experience; however, it could also be extremely frustrating. I suggest that in the future there be a guide to the general wiki markup, such as how to do references and links, add pictures, etc., as it is very confusing for non-computer-geniuses like myself. I also think some emphasis should be taken off illustrations, as some people's topics were rather easy to find pictures for, while others, such as morphology and syntax, were much more difficult to illustrate within the chapters. I believe this would make the marking a little bit more just!

The debates were extremely enjoyable, both to take part in and to listen to. I do, however, have one suggestion. I believe it would be fairer either to flip a coin to see who goes first during the rebuttal, or to have the for side go, then the against side, and then the for side again, so that the for side gets a chance to address the points brought up by the against side, just as the against side does. During my team's debate, we rebutted first and then had to listen to the against side make up points about our presentation, which were completely untrue, and the against side was able to address the points we brought up in our rebuttal. We never got that luxury! But overall it's a fun and friendly way to do a presentation!

Overall, I really enjoyed the class, and it gave me new experiences. On top of gaining research experience, writing a chapter for other students to use was quite a challenge, and I feel that it greatly improved my writing abilities!

I just wanted to say thanks to Dr. Newman and our TA for doing a great job this semester! I really enjoyed the class, and that was in large part because of you!