Short Bio 
I am a fourth-year Psychology and French double major at the University of King's College and Dalhousie University. For my third-year course in psycholinguistics, I will be writing regular blog posts and contributing a chapter on the mental lexicon to Psycholinguistics.
I have also uploaded some images I created for my contribution to Wikimedia Commons. Here is the link to my user page there.
Blog Posts 
Linguistic Determinism (January 14, 2011) 
I've always been intrigued by the relationship between language and cognitive processes, such as thought, and where the distinctions lie between them. When I think, even as I write this blog post, I perceive my thought as occurring through words. Is this just the final stage of my thinking before oral or written production, or does my language determine my thinking in some way? There is evidence against this in terms of how people who speak different languages think about colours: speakers of languages with different numbers and types of colour words can all segment the colour spectrum equally well (Rabin and Cain, 1986, as cited in Jay, 2003). There is a caveat to this, however, in terms of how people think about odours. For example, since odour perception is far more ambiguous than colour perception, a person will think that a strawberry drink smells like a raspberry one if it is labeled “raspberry” (Engen, 1991, as cited in Jay, 2003). Also, odours with specific labels are remembered better than those with vaguer ones. Thus, while different languages may not specifically determine what their speakers think, it seems to me that language can be used in a top-down manner to help people with ambiguous perceptions and memories. Though language probably doesn’t determine thought, it does seem to help facilitate and organize it.

This makes me wonder about people who grow up without any language. If language is involved in thought, is their thought impaired in some way? I remember learning about the Wild Boy of Aveyron in one of my classes, who grew up in the wild with no language. When he was found, he seemed to lack any sort of emotional regulation, screeching and grunting every time he didn’t get his way. Perhaps this was because he didn’t have the language necessary to regulate his thoughts and mentally soothe himself. As he learned language, these episodes seemed to decrease.
Of course, the fact that he wasn’t socialized by humans in the wild could be a confounding factor, for language is intimately linked with social behaviour. I’ll end this post with a final thought: “Can language, in and of itself, become associated with thinking without the social component?” --Nathanael Crawford 13:50, 14 January 2011 (UTC)
Thinking About the Mental Lexicon (January 21, 2011) 
So, I’ve been working on my outline for the section on the mental lexicon I’m writing for Psycholinguistics. I’ve already done a fair amount of research on this topic for a research project I’m doing at Dalhousie on cognate words and how they are represented in the bilingual mental lexicon. The literature I’ve based my research on argues heavily for a “morphological” organization of the mental lexicon (both monolingual and bilingual), meaning that words are organized by their root word (Marslen-Wilson et al., 1994; Rastle et al., 2000; Lalor and Kirsner, 2001). In my course textbook, however, a broader view of the lexicon is presented, which seems to rely heavily on a semantic organization of words, whereby words that share meaning to a high degree are placed closer together and thus access each other faster (e.g., in priming studies) (Jay, 2003). What I want to know is: is there one main way words are organized in the lexicon (e.g., by a shared root form), with all the other information (e.g., semantic, phonological, etc.) linked up to that representation, or are they all conflated together? A related question I have is, “Do words map directly onto the concepts they represent, or are they completely separate?” Also, “Is conceptual knowledge different from semantic knowledge about a word?” I'm looking forward to examining these questions further. --Nathanael Crawford 23:14, 21 January 2011 (UTC)
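To make the contrast between these two organizational schemes concrete, here is a minimal toy sketch in Python (the words, clusters, and the whole "shared cluster = priming" shortcut are my own invented illustration, not a model from the literature). One lexicon is keyed by shared roots, the other by semantic neighbourhoods, and a "priming" relation falls out of cluster membership:

```python
# Two toy organizations of a tiny mental lexicon (illustrative only).

# 1) Morphological organization: entries cluster under a shared root,
#    so "darkness" and "darken" sit in the same family as "dark".
morphological = {
    "dark": ["dark", "darkness", "darken"],
    "teach": ["teach", "teacher", "teaching"],
}

# 2) Semantic organization: entries cluster by meaning, so "dark"
#    neighbours "night" and "shadow" even without any shared form.
semantic = {
    "dark": ["night", "shadow", "dim"],
    "teach": ["school", "learn", "student"],
}

def primes(organization, prime, target):
    """In this toy scheme, a word 'primes' a target
    if the two share a cluster."""
    for head, cluster in organization.items():
        members = set(cluster) | {head}
        if prime in members and target in members:
            return True
    return False

# Morphological priming: shared root
print(primes(morphological, "darkness", "darken"))  # True
# Semantic priming: shared meaning, no shared form
print(primes(semantic, "dark", "night"))            # True
# No semantic link between "darkness" and "night" in this toy lexicon
print(primes(semantic, "darkness", "night"))        # False
```

The two schemes make different predictions about which prime–target pairs should speed each other up, which is roughly what the priming studies I mention above try to tease apart.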
The McGurk Effect (January 28, 2011) 
We’ve been talking about speech perception in class for the past week, and we recently learned about a really neat perceptual effect known as the McGurk Effect. This effect occurs when the auditory and visual cues of a spoken phoneme do not match up. It primarily occurs with phonemes whose formants mirror each other on a spectrogram (e.g., /g/ and /b/ have first formants that mirror each other) (Newman, 2011a). The result: when one hears /b/ but sees the lips move to say /g/, a phoneme with an intermediary first formant is perceived, in this case /d/. What I want to know is whether this works for all consonant combinations; for example, whether sounds more similar than /b/ and /g/ produce this effect (e.g., /ʃ/ and /s/), and whether it occurs between consonants with different voicing (e.g., /z/ and /s/). It would seem to me that this effect would only occur for consonants that form distinct phonemes in a given language, with distinct phonemes that fall in between them in terms of frequency/formant shape and place of articulation. Furthermore, they could only vary along one parameter, such as place of articulation. Having looked at the paper in which this effect was originally described (McGurk & MacDonald, 1976), I found that the effect is the same for the voiceless counterparts of /b/, /d/, and /g/ (/p/, /t/, /k/), but the authors also concluded that some consonant combinations work better than others. (Would this effect be seen for fully voiced, murmured, and aspirated consonants too?) --Nathanael Crawford 20:28, 28 January 2011 (UTC)
Morphology and ASL (February 2, 2011) 
In class on Monday, we learned about the typology of morphology in different languages. At one extreme, analytic languages have a 1:1 morpheme-to-word ratio, with a separate word to mark tense, aspect, number, etc. At the other extreme, polysynthetic languages, such as Kivunjo, have many:1 meaning-to-morpheme and morpheme-to-word ratios: number, tense, aspect, and even subject and object, many of which are expressed by a single marker, can all be expressed in the same word (Newman, 2011b). We also discussed how American Sign Language could be considered a polysynthetic language. Here, visual space is used in various ways to convey number, person, etc. simultaneously. For example, we learned that one says “you (2pers.sing.subj.) give (2pers.sing.) me (1pers.sing.obj.)” by performing the sign for “give” and moving the hand straight outward simultaneously. To reverse subject and object, the same sign and motion are simply performed in reverse. We learned that adverbials can also be expressed simultaneously by facial expression. This makes me wonder how tense and aspect are marked and, more generally, whether morphology works in the same way in the mind of a deaf person, who “sees” morphology all at once, as it does for those who hear morphology linearly. Happily, I found an article that answers both these questions. Emmorey (1991) shows that the continuative aspect, for “give,” for example, is conveyed by performing the “give” sign in a circular motion. (Is this done at the same time as the action for number to say, for example, “I am giving you chocolate”?) Also, she found uniquely morphological priming for ASL signers, as there is for English speakers (Marslen-Wilson et al., 1994), but it was stronger for aspect inflections, which are more semantically rich, than for agreement inflections. --Nathanael Crawford 01:24, 3 February 2011 (UTC)
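The analytic-to-polysynthetic scale above can be illustrated with a crude morphemes-per-word index computed over hyphen-segmented glosses. This is a toy sketch; the "polysynthetic" gloss below is an invented pseudo-gloss in the spirit of the ASL "give" example, not real Kivunjo or ASL data:

```python
# Crude index of synthesis: average morphemes per word,
# computed from hyphen-segmented glosses (illustrative only).

def morphemes_per_word(glossed_sentence):
    words = glossed_sentence.split()
    morphemes = sum(len(w.split("-")) for w in words)
    return morphemes / len(words)

# Analytic extreme: roughly one morpheme per word,
# with separate words marking tense, person, etc.
analytic = "I will give it to you"

# Polysynthetic extreme: one word fusing the root with
# tense, subject, and object markers (invented pseudo-gloss).
polysynthetic = "give-FUT-2sg.SUBJ-1sg.OBJ"

print(morphemes_per_word(analytic))       # 1.0
print(morphemes_per_word(polysynthetic))  # 4.0
```

A ratio near 1 sits at the analytic end of the scale; the higher the ratio, the more synthetic the language.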
Syntax and Working Memory (February 13, 2011) 
In class on Friday, we discussed Chomsky’s Generative Grammar. Specifically, we looked at syntax (or sentence grammar), how it is different from semantics even though it contributes to understanding in many ways, and how Chomsky suggests that it is a distinct cognitive faculty. Our professor highlighted the fact that, despite the widespread acceptance of this model, Chomsky did not account for the limitations working memory imposes on syntactic processing. Perhaps I am jumping the gun for next class, but I am interested in what some of these limitations are. Though sentence grammar is indeed generative, theoretically allowing for an infinitely long “run-on” sentence (e.g., see the third paragraph of the introduction to the King James Bible), we can only store so much information in our brains for immediate processing at a time. Initially, I assumed that this simply meant that, as with digits in phone-number memorization, there was a maximum number of words or phrases that working memory could hold. From what I have read, it is actually not as simple as that. One example that illustrates this point is individuals’ ability to process “center-embedded clauses.” King and Just (1991) found that individuals took less time to read subject-relative embedded clauses, like “The bully who hit the boy yelled,” than object-relative embedded clauses, like “The bully who the boy hit yelled.” This shows that phrase or clause positioning matters more to working memory's resources than sheer number of words. Ultimately, this suggests that perhaps grammar is not as separate from other cognitive faculties (like working memory) as was initially assumed. --Nathanael Crawford 22:06, 13 February 2011 (UTC)
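One way to see why the object-relative version is harder is to tally how many noun phrases must be held in memory at once while still awaiting their verb. The little sketch below is my own toy tally, loosely in the spirit of the King and Just example; the dependency indices are hand-coded by me, purely for illustration:

```python
# A toy "noun phrases held in memory" tally for center-embedded
# clauses. Each sentence is a list of (word, verb_index) pairs:
# verb_index is the position of the verb that integrates the noun
# phrase (None for verbs, or for arguments a verb picks up
# immediately). Indices are hand-coded and illustrative only.

def peak_memory_load(sentence):
    """Return the peak number of noun phrases held in memory
    at once, i.e. introduced but still awaiting their verb."""
    peak = 0
    for i in range(len(sentence)):
        load = sum(1 for j, (_, v) in enumerate(sentence)
                   if v is not None and j <= i < v)
        peak = max(peak, load)
    return peak

# "The bully who hit the boy yelled" (subject-relative):
# "who" is resolved as soon as "hit" arrives at position 2.
subject_rel = [("bully", 4), ("who", 2), ("hit", None),
               ("boy", None), ("yelled", None)]

# "The bully who the boy hit yelled" (object-relative):
# both "who" and "boy" wait until "hit" at position 3.
object_rel = [("bully", 4), ("who", 3), ("boy", 3),
              ("hit", None), ("yelled", None)]

print(peak_memory_load(subject_rel))  # 2
print(peak_memory_load(object_rel))   # 3
```

Same number of words in both sentences, yet the object-relative ordering forces one more unresolved noun phrase to be held simultaneously, which matches the longer reading times.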
Gesture and the Mental Lexicon (February 22, 2011) 
Part of my course mark for my contribution to Psycholinguistics is based on a peer revision of another classmate’s chapter. I chose to review the chapter on gesture because I thought it would be fairly unrelated to my chapter on the mental lexicon and I wanted to learn something completely new. Well, I certainly did learn many new things, but I was most intrigued by a point made by the author that indirectly related gesture to my chapter topic. In one section of her chapter, the author makes the point that gesture has been shown to facilitate lexical access for certain types of words (specifically visuo-spatial words). Lexical access is intimately linked to the structure of the mental lexicon: how words are organized in the mind. While I contend that many aspects of words are represented in the mental lexicon, such as phonology, morphology, and semantics, she implies (or I infer) that the gestures associated with a word are represented too. Indeed, as she cites, Morrel-Samuels and Krauss (1992) have shown that the more familiar a person is with a word, the less time it takes for him or her to produce its associated gesture. This suggests that a gesture, a specific motor movement, not only facilitates access to the word it describes (i.e., its “lexical affiliate”), but is also represented along with it.
There are a couple of points to be raised from this. First, one must remember to control for this when studying how the other aspects of words (mentioned above) are represented in the mental lexicon (e.g., are we using confounding motor tasks to indicate responses, like lexical decisions, while assessing the role of morphology?). Second, what does this mean for people who communicate primarily in ASL? Are ASL signs represented like gestures in the mental lexicon, or is it more complicated than this (as I suspect)? Interesting things to think about… --Nathanael Crawford 13:59, 22 February 2011 (UTC)
Music and Reading (March 6, 2011) 
Recently in class we discussed a very interesting topic, the relationship between music and language in the brain. Do the brain regions responsible for processing in these areas overlap in any way? Are they the same or completely different? There is certainly good reason to believe that similar cognitive faculties are involved in both cases because there are many similar (if not the same) skills involved in musical and linguistic processing and production. For example, both language and music have phonology, differences in sounds indicating differences in meaning, and rules for combining those sounds appropriately. Also, there is an underlying grammar to both musical and linguistic phrases and even a symbolic notation for writing these phrases (and sounds) that must be mastered.
There is also reason to believe that music and language areas may be dissociated. A classmate of mine mentioned the movie that won the Academy Award for Best Picture this year, The King’s Speech. There is a great scene in this movie, where King George (played by Colin Firth) is having a great deal of difficulty reading a selection from Shakespeare. When his therapist puts earphones with music playing on his head and gets him to read, he does so flawlessly. The therapist even teaches him to use well-known tunes to sing what he wants to say to overcome his stuttering. This suggests that, when the normal speech production route is impaired, the musical route can compensate for it because it is distinct in some way.
Another thing that intrigued me was that King George, in this movie, was reading. In class, we focused mainly on the oral aspect of language, rather than the written… but what about reading and music? I found a very interesting article on this topic, specifically about how learning to read music can facilitate, and even enhance, children’s learning to read regular text (Hanson & Bernstorf, 2002). This paper, Linking Music Learning and Reading Instruction, cited a number of studies that showed a correlation between the various aspects of mastering music and learning to read. Specifically, the authors noted that one study showed that pitch skills and reading skills correlate highly. Though more research is needed in this area to draw stronger conclusions, the authors highlighted one main benefit to retaining music programs in schools: it gets kids excited about reading in general! For example, when kids read lyrics underneath the notes, this reading of text becomes associated with the positive emotions elicited by singing/playing music. Thus, while the exact neural mechanisms involved in both music and language may still be unclear, it is clear that interest in music should not be discouraged if we want to promote the healthy development of reading skills in children. --Nathanael Crawford 02:26, 7 March 2011 (UTC)
Speech Production and the Mental Lexicon (March 10, 2011) 
In class on Monday, we discussed models of speech production. These models all try to explain the steps involved in going from the thought of a word, its semantics, to expressing it in the correct sequence of phonemes. As we went through these models, I noticed that many of them were very similar to many of the models of the mental lexicon I read about for my chapter contribution to Psycholinguistics. In particular, Levelt and colleagues’ (1999) model of speech production intrigued me. It builds directly off the Bock and Levelt (1994) model of the mental lexicon, a spreading-activation model that accounts for, what I consider to be, all the key components of word knowledge: the context in which a word is used, its semantics, syntactic information, morphology, and phonological information. Typically, support for models such as this comes from various tasks that induce lexical access (such as lexical decision tasks). Support for models of speech production, on the other hand, comes from people's more spontaneous speech. While Levelt and colleagues’ two models are very similar, I have now realized that the model of the lexicon seems to be more static, or structural, while the model of speech production is more dynamic. The former focuses more on the components of word knowledge, while the latter focuses more on how we actually move from component to component to get to speech output.
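Since the spreading-activation idea is at the heart of both models, here is a minimal sketch of how it works. Everything here (the node names, the 0.5 decay, the two-step spread) is invented for illustration; it is only meant to show the general mechanism of activation flowing from a concept through a lemma down to phonemes, not the actual Bock and Levelt or Levelt et al. architecture:

```python
# A minimal spreading-activation sketch over a toy lexical network
# (node names, links, and the decay value are invented).
from collections import defaultdict

edges = {
    "CONCEPT:cat": ["LEMMA:cat"],
    "LEMMA:cat":   ["SYNTAX:noun", "PHON:/k/", "PHON:/ae/", "PHON:/t/"],
    "CONCEPT:dog": ["LEMMA:dog"],
    "LEMMA:dog":   ["SYNTAX:noun", "PHON:/d/", "PHON:/o/", "PHON:/g/"],
}

def spread(source, decay=0.5, steps=2):
    """Propagate activation outward from a source node,
    halving it at each link it crosses."""
    activation = defaultdict(float)
    activation[source] = 1.0
    frontier = {source}
    for _ in range(steps):
        next_frontier = set()
        for node in frontier:
            for neighbour in edges.get(node, []):
                activation[neighbour] += activation[node] * decay
                next_frontier.add(neighbour)
        frontier = next_frontier
    return dict(activation)

act = spread("CONCEPT:cat")
# The lemma ends up more active than its phonemes, and the
# unrelated "dog" nodes receive no activation at all.
print(act["LEMMA:cat"], act["PHON:/k/"])  # 0.5 0.25
```

The static picture is just the `edges` dictionary (the structure of word knowledge); the dynamic picture is the `spread` function (how we move through that structure toward speech output), which mirrors the static-versus-dynamic contrast I describe above.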
A question I have is: is lexical access different when we are trying to achieve the complicated task of speech production than when we are making a simpler motor decision about a word? Related to this, do we have one mental lexicon that is accessed differently in speech (or text) perception and speech (or text) production, or two separate, interconnected lexicons, one for each? Should we go beyond mere lexical decision and semantic categorization tasks when modelling the lexicon and include tasks that require more spontaneous production rather than just split-second decisions? --Nathanael Crawford 14:37, 10 March 2011 (UTC)
Revisiting Linguistic Determinism (March 19, 2011) 
The first topic we covered in my psycholinguistics course was the degree to which the language(s) we know can determine the very way we think about the world. I’ve learned a few things since then, which have helped me to refine the way I approach this topic.
While researching my chapter on the mental lexicon I came across a very interesting quote, which I think is very applicable:
“Although words depend on the existence of concepts, the inverse is not true: concepts can, and do, lead a life independently of words.” -- Christiane Fellbaum
The point conveyed in these elegant words is that concepts do not need words: we do not need language to think! There is actually a lot of support for this. First, just think of abstract concepts like “love.” We all have an intuition of what love is, but we’d be hard pressed to find two people who agree on an exact, word-based definition of it. Secondly, people in tip-of-the-tongue states, people with certain types of aphasia (e.g., anomia), and elderly people with word-finding problems can often eventually define a concept accurately through circumlocutions and long descriptions of the concept for which they cannot find the appropriate word. This takes a lot of effort, however, and people don’t want to expend this kind of effort often, so it is easier for them simply not to talk. When words fail altogether, people can fall back on gestures to express their thoughts and feelings. Sometimes people can even know a word but lack the ability to express it verbally, as in the case of Carly Fleischmann, a severely autistic girl who could not speak but one day miraculously began typing her thoughts out. Here is one very striking, yet heartbreaking, thing she has to say:
“It is hard to be autistic because no one understands me. People look at me and assume I am dumb because I can’t speak.” -- Carly Fleischmann
This highlights the one thing I find most disturbing about the implications of the linguistic determinism hypothesis: that people who don’t have verbal language are stupid. Whorf (the father of linguistic determinism) made the fatal error of assuming that people could only conceive of what their language allowed them to talk about. This is clearly not the case.
On Monday, we discussed in class how language has to develop within a critical period of development for people to be able to use it effectively. It seems to me that the specific language we learn during our formative years may determine the concepts we can convey to others through words, not necessarily the concepts as they are represented in our minds. Though there are certainly consequences to not learning language early in life and different outcomes for people who learn different languages, we need to remember that people who do not communicate in the ways we prefer are in no way cognitively inferior to us.
Getting back to my first post, and the Wild Boy of Aveyron, his body language clearly showed that there was so much going on in his head that he just could not convey with words. If anything, his acting out was probably more of a social phenomenon rather than a linguistic one; he was frustrated that he couldn’t communicate and hadn’t learned socially appropriate ways to deal with this frustration. In the end, he did learn more appropriate behaviours and alternative ways to communicate (e.g., cue cards), many of which are still used by clinicians today. --Nathanael Crawford 15:40, 19 March 2011 (UTC)
Bilingualism, the Brain, and the Mental Lexicon (March 23, 2011) 
Today in class we talked about bilingualism. Building off of our discussion of critical periods in language development, we discussed the effects of age of acquisition of a second language on one’s proficiency in that language. One article we discussed that piqued my interest was on the parts of the language areas of the brain that overlap in bilinguals. Kim et al. (1997) published a paper in Nature showing with fMRI that distinct parts of Broca’s area are activated when late second-language learners perform internal descriptions of their day, depending on which language they use. Early second-language learners, however, showed a high degree of overlap in the parts of Broca’s area activated for each of their languages while performing the same internal descriptions. Wernicke’s area, by contrast, shows overlap in the areas activated by each language irrespective of the age of acquisition of the second language. The authors of this study were not terribly explicit about the specific linguistic function of these areas for bilinguals, and there were a number of important limitations to the study because it was so general in nature.
Turning to my personal interests in the mental lexicon, there are a number of studies showing that different areas of the brain light up depending on the specific properties (e.g., syntactic or semantic information, open or closed class) of the words targeted by experimental tasks. I wonder whether these same areas would overlap in bilinguals with varying ages of second-language acquisition if the same tasks and designs were used, modified accordingly. There is conflicting information on this in the behavioural lexical decision and priming literature (Voga & Grainger, 2007), which looks at bilinguals’ reaction times to different types of words from the languages they know. I wonder whether the type of imaging studies I mentioned could provide some clarification here. --Nathanael Crawford 01:08, 24 March 2011 (UTC)
Reading Programs and Conflicts of Interest (March 30, 2011) 
On Monday we began our group debates on various topics related to psycholinguistics. I was really intrigued by today’s debate on the Fast ForWord® reading intervention program and how successful it has been reported to be (particularly by the Nevada Department of Education). Their report showed a 22% increase in children’s reading ability from one grade to the next, a result that puts it ahead of competing programs. There were so many competing programs that I couldn’t even count them all!
This leads right into one of the central issues debated: allegations of conflicts of interest between the creators of the Fast ForWord® program and many of the researchers whose studies show positive outcomes. My purpose here is not to argue for or against these allegations. In fact, I feel that people who use “conflicts of interest” as a main argument against the implementation of specific reading programs should think twice. Every reading program created by a private company could be accused of having conflicts of interest associated with it. It would be hard for this not to be the case; creators of these programs have to work hard to design and manufacture them and then promote them to highly critical scientific, political, and educational communities. Also, it would be hard for a reading researcher, who knows how to help children who struggle, not to base a commercial product on his or her own research.
Furthermore, different studies of different regions even show varying results in terms of their success, and there is not so much “conflict of interest” that every study is impacted by it! To me, it seems more a matter of how these reading programs are implemented once they are purchased, rather than whether they are “good tools” or not, that determines how successful they are (in general). I don’t feel that antagonizing their promoters is helpful.
In fact, it seems to me that focusing solely on “conflicts of interest” in studies can actually hinder their effectiveness. The Reading First controversy in the United States comes to mind on this topic. Here, the United States government provided funding to schools to implement any reading programs as long as they were based on “scientifically-based reading research.” In some areas, it showed little success in terms of children’s reading comprehension and phonological awareness, while in others, it was hugely successful. The ultimate downfall of this program surrounded not the scientific studies on the program, but the allegations of conflicts of interest among those who spearheaded it in Washington. The leaders were accused of promoting or allowing into Reading First reading programs they were personally invested in. Being reading experts themselves, though, they were supporting researchers and programs that they knew were legitimate; programs that fit the strict scientifically-based criteria! It seems to me that sometimes “conflicts of interest” are really conflicts of common sense… isn’t it logical that experts are personally invested in programs that they know work and find that their research supports them? --Nathanael Crawford 01:04, 31 March 2011 (UTC)
Language of Instruction Debate and Course Review (April 6, 2011) 
The debate topic for today’s class concerned whether the dominant language of a country should be the only language of instruction for immigrants. The “for” side mainly argued that this should be the case for two key reasons: 1) studies show that the younger immigrants are when they begin to learn the dominant language, the more fluent they become in it; and 2) immigrants need to learn the dominant language to be able to get jobs. The “against” side argued that this should not be the case for a number of reasons. 1) “Motivation” is the key factor in job success for immigrants, suggesting that it is important for them to be able to cultivate their native language to maintain their culture and drive to succeed in life. 2) They also stated that immigrants who maintain their native language, and learn the country’s dominant language secondarily, become better bilinguals overall. 3) It was their third point that really got me, though. They argued that immigrants tend to cluster together in communities that promote their native language. This means that there will be job opportunities for immigrants in their native language in the larger of these communities. Indeed, this is the case in large metropolitan areas here in Canada, like Toronto and Montreal.
The reason this won me over is that it made me recall a sociolinguistics class I took last year, where the professor highlighted the fact that this sort of “clustering” behaviour is actually the norm throughout the world, and that most countries are actually very linguistically diverse. Indeed, we North Americans often forget that the unilingual "world" we operate in is actually quite abnormal. Some countries in Africa don’t even have a “majority” language, but have multiple minority languages. Work may take place in any number of languages in these countries, and people get along quite fine. My point: I think that we cannot generalize what we take for granted as the linguistic norm here in Canada, and I wish both debate teams could have addressed this issue.
Shifting focus completely to the course in general, I was really intrigued by it overall. I really liked the approach of having us contribute a chapter on Wikiversity for our main mark for many reasons. First, it allowed us to pick a psycholinguistic topic that we found interesting and delve deeper into it than we would normally in a course like this. Secondly, the fact that we were responsible for our own research and could write our chapter how we wanted really motivated me to work hard and engage with the material. Thirdly, I found it very rewarding to know that others all around the world would be able to benefit from my study, and not just the professor or TA that does the marking. The same goes for the blog posts. Because we could write about what we wanted, I felt I could just write pages (I would have liked to see them be worth a greater percentage of my mark, however, due to the effort I put into them). Anyway, the course was unique overall, and I wish more of my courses were like this (promoting a more liberal, personally relevant learning style). --Nathanael Crawford 17:14, 6 April 2011 (UTC)
External Links 
- Wild Boy of Aveyron
- The McGurk Effect
- Introduction to the King James Bible
- Carly Fleischmann
- Fast ForWord®
- The Reading First Controversy
References 
- Jay, Timothy (2003). The Psychology of Language. New Jersey: Pearson Education.
- Kirsner, K., Lalor, E., & Hird, K. (1993). The bilingual lexicon: Exercise, meaning and morphology. In B. Weltens (Ed.), The bilingual lexicon (pp. 215-248). Amsterdam, Netherlands: John Benjamins Publishing Company.
- Marslen-Wilson, W., Tyler, L. K., Waksler, R., & Older, L. (1994). Morphology and meaning in the English mental lexicon. Psychological Review, 101(1), 3-33. doi:10.1037/0033-295X.101.1.3
- Rastle, K., Davis, M. H., Marslen-Wilson, W., & Tyler, L. K. (2000). Morphological and semantic effects in visual word recognition: A time-course study. Language and Cognitive Processes, 15(4-5), 507-537. doi:10.1080/01690960050119689
- Newman, A. (2011a, January 26). Psychology 3190. Class lecture on speech perception.
- McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264(5588), 746-748. Retrieved from http://www.nature.com/nature/journal/v264/n5588/abs/264746a0.html
- Newman, A. (2011b, January 31). Psychology 3190: Class lecture on morphology.
- Emmorey, K. D. (1991). Repetition priming with aspect and agreement morphology in American Sign Language. Journal of Psycholinguistic Research, 20(5), 365-388. doi:10.1007/BF01067970
- King, J., & Just, M. A. (1991). Individual differences in syntactic processing: The role of working memory. Journal of Memory and Language, 30(5), 580-602. doi:10.1016/0749-596X(91)90027-H
- Morrel-Samuels, P., & Krauss, R. M. (1992). Word familiarity predicts temporal asynchrony of hand gestures and speech. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 615-623.
- Hanson, D., & Bernstorf, E. (2002). Linking music learning to reading instruction. Music Educators Journal, 88(5), 17.
- Levelt, W. J. M., Roelofs, A., & Meyer, A. S. (1999). A theory of lexical access in speech production. Behavioral and Brain Sciences, 22(1), 1-75. doi:10.1017/S0140525X99001776
- Bock, K., & Levelt, W. (1994). Language production: Grammatical encoding. In M. A. Gernsbacher (Ed.), Handbook of psycholinguistics (pp. 945-984). San Diego, CA: Academic Press.
- Fellbaum, C. (1998). WordNet: An electronic lexical database. Cambridge, Massachusetts: MIT Press.
- Kim, K. H. S., Relkin, N. R., Lee, K., & Hirsch, J. (1997). Distinct cortical areas associated with native and second languages. Nature, 388(6638), 171-174. doi:10.1038/40623
- Newman, A. J., Pancheva, R., Ozawa, K., Neville, H. J., & Ullman, M. T. (2001). An event-related fMRI study of syntactic and semantic violations. Journal of Psycholinguistic Research, 30(3), 339-364. doi:10.1023/A:1010499119393
- Brown, C. M., Hagoort, P., & ter Keurs, M. (1999). Electrophysiological signatures of visual lexical processing: Open- and closed-class words. Journal of Cognitive Neuroscience, 11(3), 261-281. doi:10.1162/089892999563382
- Voga, M., & Grainger, J. (2007). Cognate status and cross-script translation priming. Memory & Cognition, 35(5), 938-952.