
Cognitive Science of Language Lecture Series

The Cognitive Science of Language Lecture Series is a forum where all are welcome to attend talks by established researchers on recent innovations and current trends in Language and Cognition.

2016-17 Lecture Series Lineup – please note the location of each talk

  • Wednesday April 5th – 3:30 pm to 5:30 pm BSB 121 – Dr. Suzi Lima (Federal University of Rio de Janeiro & University of Toronto) – Title: “The role of count lists in the acquisition of numerals” Abstract: A central question in the literature on the development of natural number concepts is the role of count lists in this process. In this paper we compared two theoretical perspectives on this question: Carey’s cultural construction hypothesis (count lists provide the requisite placeholder structure for children to infer the relative relations between number words and acquire numbers beyond the limits of parallel individuation) and Spelke’s language combinatorics hypothesis (the human combinatorial capacity enables children to learn higher number words as denoting set sizes composed of smaller ones, e.g., “five” is “three” and “two”). To test these competing hypotheses we conducted three studies in Yudja (approximately 294 people; Brazil). Yudja children are not exposed to count lists before school age (neither in Yudja nor in Brazilian Portuguese, which is taught as a second language at school but not spoken inside the community). Furthermore, number words are highly compositional: the higher numbers, from five to twenty, are formed by combining the word for hand or toes with number words from one to four (see the sketch after this list). If verbal count lists were unnecessary (contra Carey’s hypothesis), we would expect pre-school children to be able to learn higher number words by building sets of sets from linguistic cues, as predicted by Spelke’s proposal. While almost all Yudja children are monolingual, they occasionally hear adults use numbers greater than five in Brazilian Portuguese in the community (again, not in count-list form). Once at school, children are exposed not only to Yudja count lists but also to Brazilian Portuguese ones. Studies: Twenty adults (controls) and 28 children participated: ten not enrolled in school (4 to 7 years old; M=6.0; SD=0.47; 4F) and eighteen enrolled in school (8 to 13 years old; M=9.3; SD=1.33; 9F). Participants completed three tasks in Yudja and Brazilian Portuguese (all based on Wynn 1992): a recitation task (participants counted objects lined up on a table), a give-a-number task (children were asked to put N objects in a paper box; each number on a given list was tested twice, and the order of the numbers was randomized), and a point-to-x task (participants saw a pair of pictures and pointed to the one corresponding to the number asked; this task served as a reliability check on give-a-number). Results: The recitation task showed that pre-school children could recite a count list up to 5 in Yudja. Results in Brazilian Portuguese are less clear-cut: most children could not count beyond five, but a few could count at least to 5. Schooling clearly affected children’s numerical abilities and their development from subset-knowers to CP-knowers. In the recitation task, non-schooled children’s performance was concentrated in low numbers (0-5) in both Yudja and Brazilian Portuguese, while schooled children performed better on higher numbers. Thus, once presented with number words in a systematic and ordered fashion (count lists), children progressed from subset-knowers to CP-knowers in both languages. Moreover, the morphological transparency of the logic of counting in Yudja number words did not facilitate children’s becoming CP-knowers in Yudja. Our data from the give-a-number and point-to-x tasks showed that although pre-school children performed better in Yudja than in Brazilian Portuguese on low-range numbers, and although they encounter both count lists (Yudja and Brazilian Portuguese) simultaneously at school, they become CP-knowers in Brazilian Portuguese before they do in Yudja. Summary: Supporting Carey’s hypothesis, our results suggest that a verbal count list is necessary for a child to transition from the subset-knower stage to the CP-knower stage, and that the morphological transparency of the logic of counting does not facilitate the development of number knowledge in the early stages of acquiring those words.
  • Wednesday March 22nd – 3:30 pm to 5:30 pm BSB 121 – Dr. John Anderson (York University) – Title: “Imaging the Aging Bilingual Brain” Abstract: The process of aging involves a decline in executive control, speed of processing, and memory. The neural architecture supporting these constructs similarly decays. Senile decline is often characterized as unavoidable: if we live long enough, we will live to get dementia. There are, however, factors that reduce the rate of dementia occurrence, conferring “reserve” on those whose lifestyle choices have afforded them this extra time. Exercise, education, and maintaining strong social connections are some such factors. Being bilingual and managing two languages daily across the lifespan is also thought to be a reserve factor. This talk explores the neural underpinnings of cognitive reserve in bilinguals relative to monolingual peers using fMRI and DTI.
  • Wednesday March 8th – 3:30 pm to 5:30 pm BSB 121 – Dr. Lucie Menard (UQÀM) – Title: “Multisensory speech perception and production” Abstract: In face-to-face conversation, speech is produced and perceived through various modalities. Movements of the lips, jaw, and tongue, for instance, modulate air pressure to produce a complex waveform perceived by the listener’s ears. Visually salient articulatory movements (of the lips and jaw) also contribute to speech identification. Although many studies have been conducted on the role of visual components in speech perception, much less is known about their role in speech production. However, many studies have emphasized the important relationship between speech production and speech perception systems. If, as suggested by many researchers, perceived visual and auditory cues are not independent but instead act in synergy and complement each other, they must be involved in the speech production process. In this talk, we explore the effects of auditory and visual feedback on speech production. Congenitally blind children and adults will be considered.
  • Wednesday February 8th – 3:30 pm to 5:30 pm BSB 121 – Dr. Khalil Iskarous (University of Southern California) – Title: “Discreteness and dynamics in computation: From octopus behavior to language” Abstract: What are the general computational principles involved in cognition, especially language? Research within the cognitive science and generative linguistics traditions has sought general principles of computation, which may become specialized in language. In this talk, it will be argued that similar computations may underlie three skills: octopus behavior, syntactic structure computation (asymmetric c-command), and the planning and execution of speech production. These computations are based on competition and coordination of simple computational units, which can determine outputs characteristic of a wide variety of motor, perceptual, and cognitive skills.
  • Wednesday January 25th – 3:30 pm to 5:30 pm BSB 121 – Jessica Coon (McGill University) – Title: “The linguistics of Arrival: Aliens, fieldwork, and Universal Grammar” Abstract: If aliens arrived, could we communicate with them? How would we do it? What are the tools linguists use to decipher unknown languages? How different can human languages be from one another? Do these differences have bigger consequences for how we see the world? The recent science-fiction film Arrival touches on these and other real questions in the field of linguistics. In Arrival, linguistics professor Dr. Louise Banks (Amy Adams) is recruited by the military to translate the language of the newly-arrived Heptapods in order to answer the question everyone wants to know: why are they here? Language, it turns out, is a crucial piece of the answer.
  • Wednesday November 23rd – 3:30 pm to 5:30 pm BSB 104 – Sid Segalowitz (Brock University) – Title: “When Do We Know the Meaning of a Word (or a Picture), and What Does this Meaning Mean?” Abstract: I want us to address the question: when does the brain show evidence of having accessed the meaning of an input at a cognitive level, even if this is well before we can report on it? Once we address this question, we are led to much more difficult questions, such as “What do we mean by ‘meaning’?” And that, ultimately, is what I want to address, not only with respect to words. I will present evidence from our event-related potential studies showing that the content of visually presented stimuli differentiates during very early stages of processing, starting with the P100, a component that begins rising about 80 ms and peaks at about 100 ms after stimulus onset. This happens whether the stimuli are words, line drawings, or faces versus houses. Traditional models assume a linear sequence from input to decomposition to meaning. Our results, however, require a parallel-processing brain model. My question for you will be whether this has implications for linguistics.
  • Wednesday November 2nd, 3:30 pm, BSB 104: Ellen Lau (University of Maryland): Title: “Neural Investigations into Syntactic and Semantic Combination: from Beginning to End” Abstract: One of the great remaining mysteries of cognitive neuroscience is how structured and temporally extended sequences like sentences are encoded and navigated in memory. Accordingly, research on the neural bases of sentence processing has begun to shift from ‘violation’ paradigms to measurement of the brain activity associated with ‘normal’ comprehension. In this talk I will discuss a series of EEG, MEG and fMRI studies that take this approach towards better understanding the basic processes supporting sentence comprehension. One set of experiments investigates whether a full syntactic structure is actively maintained in working memory across the course of a sentence, such that activity is greatest near the end (Pallier et al., 2011). Although our ERP results from coordinated structures are consistent with such a model, effects observed in a parametric manipulation of structure in MEG and fMRI are better explained by syntactic prediction processes that occur near the beginning of the sentence. I will also discuss our investigations into a recently introduced time-frequency approach that highlights neural activity modulated at the same rate as syntactic constituents or phrases (Ding et al. 2015); a toy simulation of this constituent-rate logic appears after this list. Finally, I will present results from a new fMRI experiment that asks whether particular regions selectively support the computation of argument structure by comparing lexically-matched noun phrases and verb phrases (e.g. the buried treasure vs. buried the treasure).
  • Wednesday September 28, 3:30 pm, BSB 104: Lyn Turkstra (McMaster University): Title: “Measuring social cognition in spoken and written communication” Abstract: The term social cognition refers to primate cognitive functions that are specifically engaged in social interactions. There is growing evidence that social cognition is impaired in many adults with neurological communication disorders, and social cognition theories and research have profoundly influenced our understanding of these communication impairments. Most social cognition research has focused on the ability to “read the minds” of others, based on their facial expressions and other non-verbal cues. This talk will present evidence that information about others’ minds also can be conveyed by subtle verbal cues, adding to our understanding of the powerful ways in which language shapes our social world.
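As a toy illustration of the compositional numeral system described in Dr. Lima’s abstract above, the sketch below glosses a hypothetical base-5 “hand/toes” system in Python. The abstract states only that the numbers five through twenty combine the words for hand or toes with the numerals one through four; the particular gloss strings and composition rules here are invented placeholders, not actual Yudja forms.

```python
# Hypothetical glosses for a base-5 body-part numeral system. These are
# illustrative placeholders, not real Yudja words.
SMALL = {1: "one", 2: "two", 3: "three", 4: "four"}

def gloss(n: int) -> str:
    """Compose a transparent gloss for 1-20, in the spirit of the
    hand/toes system described in the abstract."""
    if n in SMALL:
        return SMALL[n]
    if n == 5:
        return "one-hand"
    if n <= 9:
        return f"one-hand and {SMALL[n - 5]}"
    if n == 10:
        return "two-hands"
    if n <= 14:
        return f"two-hands and {SMALL[n - 10]}"
    if n == 15:
        return "two-hands and one-foot's-toes"
    if n <= 19:
        return f"two-hands, one-foot's-toes and {SMALL[n - 15]}"
    if n == 20:
        return "two-hands and two-feet's-toes"
    raise ValueError("the system described covers only 1-20")

print(gloss(7))   # one-hand and two
print(gloss(17))  # two-hands, one-foot's-toes and two
```

The point of the sketch is that every numeral above four wears its arithmetic on its sleeve; the finding reported above is that this transparency alone did not turn Yudja children into CP-knowers without a count list.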
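The time-frequency approach in Dr. Lau’s abstract (Ding et al. 2015) rests on a simple signal-processing idea: if syllables arrive at a fixed rate while listeners also track the phrases and sentences built from them, spectral peaks appear at all three rates. The sketch below simulates this with sinusoids plus noise; the rates, amplitudes, and sampling parameters are arbitrary choices for the demo, not real MEG data.

```python
import numpy as np

fs = 200.0                    # sampling rate in Hz (arbitrary for the demo)
t = np.arange(0, 60, 1 / fs)  # 60 s of simulated recording

rng = np.random.default_rng(0)
signal = (1.0 * np.sin(2 * np.pi * 4 * t)    # syllable rate: 4 Hz
          + 0.5 * np.sin(2 * np.pi * 2 * t)  # two-syllable phrase rate: 2 Hz
          + 0.3 * np.sin(2 * np.pi * 1 * t)  # four-syllable sentence rate: 1 Hz
          + rng.standard_normal(t.size))     # background noise

spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# Peaks emerge at the syllable, phrase, and sentence rates, even though the
# phrase- and sentence-level components are weaker than the acoustics.
for f in (1.0, 2.0, 4.0):
    idx = int(np.argmin(np.abs(freqs - f)))
    print(f"{f:.0f} Hz amplitude: {spectrum[idx]:.3f}")
```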

Past Lectures, 2015-16

  • Tuesday April 12, 11:00 am, TSH 203: Phaedra Royle (Université de Montréal): Title: “Specific language impairment in French: Verb morphology” Abstract: Specific language impairment (SLI) is characterized by persistent difficulties affecting language abilities in otherwise normally developing children (Leonard, 2014). It remains difficult to identify young children with SLI in French. A previous study showed that the correct production of the passé composé (perfect past) in French is related to conjugation group (regular vs irregular verbs) in typical children but not in those with SLI (Royle and Elin Thordardottir, 2008). However, in that study participants were very young and showed floor responses, and verbs were not controlled for their morphophonological properties. We have recently recreated this experiment with older children and with verbs in each of four past participle categories (ending in –é, –i, –u, and Other irregulars). Children with SLI in preschool or first grade were tested using an Android application, Jeu de verbes (Marquis et al, 2012). We compared their results and error types to those of control children. Results show significant effects of linguistic group (SLI < control) and verb group (é = i = u > Other), as well as an interaction between these factors: the performance of children with SLI did not vary by conjugation group (é = i = u = O), reflecting a lack of sensitivity to inflection patterns, while control children showed this sensitivity (é = i = u > O). Children with SLI also showed different non-target productions compared to controls, with more use of the present tense in past-tense contexts. We conclude that children with SLI do not master this morphosyntactic process in the same way typical French-speaking children do.
  • Monday, April 11, 4:00 pm, TSH 201: Karsten Steinhauer (McGill University): Title: “Factors modulating ERP signatures of L2 acquisition and L1 attrition” Abstract: Event-related brain potentials (ERPs) provide an excellent method to study the temporal dynamics of language processing in real-time. This includes the fascinating neurocognitive changes that occur while a new language is being acquired. In the past 20 years, ERP research investigating sentence processing in second language (L2) learners has led to a number of models that try to address these neural changes and the role of modulating factors such as age of acquisition (AoA), language proficiency, first language (L1) background, the type of language exposure (e.g., implicit versus explicit training environments), as well as inter-individual differences in learning trajectories and processing preferences. An important limitation of this research has been that AoA and L2 proficiency levels are typically (negatively) correlated in L2 learners, such that AoA effects attributed to a “critical period” may instead simply reflect proficiency level. Attriters, whose late-acquired L2 has become the dominant language, may shed important new light on the respective roles of these factors. However, whether and to what extent L1 attrition is characterized by similar neurocognitive changes, and whether such changes may mirror those in language acquisition – but “in reverse” – remains an open empirical question that only a few recent investigations have begun to address. My talk will first provide an overview of recent findings and controversies in ERP research on L2 acquisition, especially in the domain of morpho-syntactic processing. The second part will focus on a series of large-scale ERP studies from our lab that probe brain signatures for lexical-semantic and morpho-syntactic processes in Italian immigrants who have lived for many years in Montreal (Canada). These participants describe English as their predominant language and report problems in their L1 (Italian). ERP online data have been collected for both their L1 (Italian) and their L2 (English) and are compared to the ERP profiles of English and Italian monolinguals, as well as to English-Italian bilinguals who acquired the two languages in the reverse order. Among other advantages, this complex design allows us to investigate how factors such as (i) being “bilingual” (versus monolingual), (ii) age of language acquisition (AoA), and (iii) proficiency level in each language interact and modulate the neurocognitive mechanisms underlying online language processing.
  • Wednesday, March 23, 3:30 pm, TSH 203: David Poeppel (Max Planck Institute and New York University): Title: “Speech is special and language is structured” Abstract: I discuss two new studies that focus on general questions about the cognitive science and neural implementation of speech and language. I come to (currently) unpopular conclusions about both domains. Based on experiments using fMRI, and exploiting the temporal statistics of speech, I argue for the existence of a speech-specific processing stage and a specialized neuronal substrate that has the appropriate sensitivity and selectivity for speech. Based on experiments using MEG, I discuss the basis for abstract, structural processing. These results demonstrate that, during listening to connected speech, cortical activity of different time scales is entrained, concurrently, to the time course of linguistic structures at different hierarchical levels. Critically, entrainment to hierarchical linguistic structures is dissociated from the encoding of acoustic cues and statistical relations between words. The results demonstrate syntax-driven, internal construction of hierarchical linguistic constituent structure via entrainment of cortical dynamics. My conclusions — that speech is special and language structure driven — provide new neurobiological provocations to the prevailing view that speech perception is ‘mere’ hearing and language comprehension ‘mere’ statistics.
  • Wednesday, February 24, 3:30pm, TSH 203: Guillaume Thomas (University of Toronto): Title: “Tense on Nouns: evidence from Mbya Guarani” Abstract: In English and many Indo-European languages, tense is a functional category that is largely realized as verbal inflection. Because of this fact, most syntactic theories of tense from Aristotle to Pesetsky and Torrego (2004) have characterized it as an inherently verbal category. However, this conclusion has been challenged by cross-linguistic studies that look beyond Indo-European languages. In particular, Nordlinger and Sadler (2004) have argued that tense is attested and interpreted in the nominal domain in numerous languages. Guarani languages (Tupi Guarani: Argentina, Bolivia, Brazil and Paraguay) have figured prominently in the ongoing debate on the putatively verbal nature of tense. Although Paraguayan Guarani was presented as a nominal tense language in Nordlinger and Sadler’s (2004) typology, Tonhauser (2006) argued that Nordlinger and Sadler’s analysis of Paraguayan Guarani temporal markers was misguided. Tonhauser’s arguments were in turn challenged by Thomas (2015), who argued that the interpretation of nominal temporal markers in Mbya Guarani is strikingly similar to that of English tenses, once pragmatics is factored into their analysis. In this talk, I will review existing arguments for and against the analysis of Guarani temporal markers as tenses, and I will present new arguments in favor of their analysis as nominal tenses.
  • Wednesday, January 27, 3:30pm, TSH 203: Gary Libben (Brock University): Title: “Morphological Structure and Cognitive Function” Abstract: Words such as mouse, screen, computer, monitor, keyboard, and trackpad all describe things that we associate with digital technology. For most language users, they seem to be examples of a single language structure—the word. Yet, for many morphologists, they are quite different. The words mouse and screen are monomorphemic, the word computer is derived, the word monitor contains a suffix, and the words trackpad and keyboard are compounds. A great deal of psycholinguistic research has addressed the extent to which these differences play a role in online processing and what consequences the possible answers may have for our understanding of cognitive representation and processing. In this presentation I propose that morphological structure is fundamentally a psychological phenomenon, subject to variation within an individual as a result of specific task demands and experience over time. This view has two key components: (1) Morphological Transcendence: the claim that the representation of words in the mind changes through the lifespan as a result of the experience an individual language user has with processing specific words and morphological families. (2) Morphological Superstates: the claim that morphological constituents exist psychologically in a morphological superstate up to the point at which they are measurable through acts of language production or comprehension. I discuss these claims with respect to data from English, French, German, and Hebrew.
  • Wednesday, January 20, 3:30pm, TSH 203: Margaret Grant (University of Toronto): Title: “Ambiguity and Incrementality in Sentence Processing” Abstract: Uncovering the nature of ambiguity resolution during comprehension has been a central project of the field of psycholinguistics. This aim has persisted because the nature of ambiguity resolution in comprehension has critical implications for models of sentence processing in general. In this talk, I will bring together two current lines of research on ambiguity resolution, with a focus on sentence comprehension during reading. The first line of research provides a novel direct comparison of the processing of structural and referential ambiguities. These two ambiguity types have been extensively studied in separate literatures, with the two fields of research arriving at opposite conclusions. Evidence from the processing of structural ambiguities, such as ambiguous modifier attachment, favors models in which a single analysis of ambiguous material is adopted without a cost to processing (e.g., Traxler et al., 1998; van Gompel et al, 2001). This evidence stands in contrast to models in which multiple analyses are simultaneously adopted and compete for selection (e.g., MacDonald et al., 1994). Contrary to the literature on attachment ambiguities, competition has been observed between available referents in pronoun resolution (e.g., Badecker & Straub, 2002). I will present a series of studies using a variety of methods, including eye movements during reading, self-paced reading and an ambiguity judgment task, to show that the separation in the literature between these two ambiguity types is perhaps misleading. While there is a shift in results based on differences in the reading task, both attachment and pronoun ambiguities show a similar processing profile when compared directly. The second line of research investigates the way that the processor reacts to semantic ambiguity. This new work examines the processing of Determiner Phrases that are ambiguous between an individual interpretation and an amount/degree interpretation (e.g., the pizzas in the pizzas would be tasty food for the hungry students vs. the pizzas would be enough to feed the hungry students). The results of a study of eye movements during reading suggest that the processor immediately commits to a single interpretation of the DP, with the default being determined by properties of the DP itself. I will discuss these findings in the light of semantic theories of degree/individual polysemy (e.g., Rett, 2014) and in light of previous psycholinguistic findings on polysemy and other semantic ambiguities (e.g., Frazier & Rayner, 1999; Frisson 2009). Taken together, these studies on a broad range of ambiguity types suggest that the processor may exhibit different behavior in handling one type of ambiguity given a change in task demands, and that under equivalent experimental conditions, different ambiguity types may or may not give rise to similar processor behavior.
  • Wednesday November 25, 3:30 pm, TSH 203: Philip J. Monahan (Centre for French and Linguistics, University of Toronto Scarborough, and Department of Linguistics, University of Toronto): Title: “Phonology as the Basis for Predictions: Evidence from perceptual and neurophysiological measures” Abstract: Despite significant variation in the speech signal, we comprehend spoken language with little effort. The responsible perceptual and brain mechanisms, however, remain poorly understood. First, using perceptual and neurophysiological measures, I present data suggesting that only certain features serve as the basis for predicting the speech signal. In particular, I present data from a segment identification task suggesting that [+voice] segments allow English participants to predict that the following coda segment will also be [+voice]. I then present data from a pair of MEG experiments supporting an underspecified representation for mid vowels in American English: mid-vowel standards showed reduced oscillatory power in the pre-stimulus beta-frequency band (18-26 Hz) compared to high-vowel standards (a minimal sketch of this band-power measure appears after this list). Second, I argue that listeners are sensitive to phonological long-distance dependencies during perception. Using Basque sibilant harmony as the test case, I present data from both behavioural methods and electroencephalography (EEG). These results suggest that listeners use phonological knowledge as a source for their predictions and that evidence of these predictions appears in early brain responses. Practically, this work demonstrates that theoretical concepts can be used in conjunction with an array of methods to address long-standing questions in speech perception. Moreover, these results suggest that listeners use their rich phonological knowledge predictively during online comprehension, pointing toward a class of models that posit prediction and feedback.
  • The lecture scheduled for Wednesday October 28, 2015 has unfortunately been CANCELLED.
  • Wednesday, October 21, 3:30 pm, TSH 203: Linnaea Stockall (Queen Mary, University of London): Title: “Solving Humpty-Dumpty’s Problem: how we put morphologically complex words back together again” Abstract: Over the past 15 years, considerable evidence from a range of different languages and methodologies has converged to provide clear evidence that the early stages of visual word recognition involve a mechanism of form-based morphological parsing, which operates across all potentially morphologically complex words, regardless of formal or semantic opacity (Rastle and Davis 2008, Lewis et al 2011, Royle et al 2012, Fruchter et al 2014, inter alia). Comparatively little attention, however, has been focused on how linguistic processing proceeds once morphological constituents have been identified.
    In this talk I’ll discuss the results of a number of recent and ongoing experiments using a range of methods to investigate how we rapidly access information about the constituents of morphologically complex words, and how we make use of this information to reassemble the pieces and evaluate their syntactic and semantic wellformedness. I’ll focus much of the talk on ‘fresh from the lab’ data from a project with Alec Marantz & Laura Gwilliams (NYU) and Christina Manouilidou (UPatras) that we are just now analysing, in which we are investigating the neural spatio‐temporal dynamics of access to the lexical category vs. argument structure representations of verbal stems. I’ll argue that by focusing on the apparently simple question of how we detect and make use of information about morphological constituents, we can gain significant insight into the overall architecture of the human linguistic system.
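As a companion to Monahan’s abstract above, here is a minimal sketch of the kind of dependent measure it describes: average spectral power in the beta band (18-26 Hz) of a pre-stimulus window. The data, sampling rate, and window length below are simulated and assumed purely for illustration; a real analysis would run over epoched MEG recordings.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 500.0                                  # sampling rate in Hz (assumed)
epoch = rng.standard_normal(int(0.5 * fs))  # simulated 500 ms pre-stimulus window

# Power spectral density of the window, then average power in 18-26 Hz.
freqs, psd = welch(epoch, fs=fs, nperseg=128)
beta = (freqs >= 18) & (freqs <= 26)
print(f"mean beta-band power: {psd[beta].mean():.5f}")
```

Comparing this quantity across condition-averaged epochs (mid-vowel vs. high-vowel standards) is, in essence, the contrast the abstract reports.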

Past Lectures, 2014-15

  • Wednesday, October 22, 3:30pm, MMC BSB 108: Lisa deMena Travis (McGill University) Title: Macro- and micro-parameters within and across language families Abstract: Languages vary in large and in small ways, and linguists can undertake macro-comparative work (e.g. comparing English and Mohawk) or micro-comparative work (e.g. comparing Northern Italian dialects). Often macro-comparative work is done across language families with the goal of uncovering macro-parameters, while micro-comparative work is done within a language family with the goal of uncovering micro-parameters. In this research, I undertake micro-comparative work across language families (Austronesian and Mayan) to better understand a possible macro-parameter (VP-fronting). More specifically, I hypothesize that the co-occurrence of clefting wh-constructions with V-initial languages can be explained through a macro-parameter of VP-fronting, which accounts for both V-initial word order and predicate fronting in clefting constructions. Within this macroparametric study, I investigate the status of clefting structures in an SVO language (Bahasa Indonesia) and micro-variation within the clefted structures, comparing two dialects of Malagasy, an Austronesian language, to Kaqchikel, a Mayan language. The goal is to understand some of the details of these clefting structures that allow them to be reanalyzed, leading to different settings of the macro-parameter. I argue that it is the status of the clefting particle that allows shifts in the syntactic interpretation of the structure, leading to different choices of the macro-parameter.
  • Wednesday, November 19, 3:30pm, MMC BSB 108: Lisa Archibald (University of Western Ontario) Title: Developmental Differences in Language and Immediate Memory Processes: Implications for Children with Language Learning Disabilities Abstract: Some children struggle to learn their first language despite otherwise typical development. Such children, however, do not form a cohesive group. They have difficulty with varying aspects of language, in diverse circumstances, and at different stages of development. Research conducted in the Language and Working Memory Lab has been aimed at improving our understanding of the complex basis of language learning by examining the interdependency of two cognitive systems, working memory and the developing linguistic system. Taking an epidemiological approach, we have identified groups of children with impairments in language and/or working memory and examined the differential impacts of these impairments on language processing tasks such as sentence repetition and grammaticality judgment. As well, pilot work has demonstrated both domain-specific and profile-specific treatment outcomes for children with different language and working memory profiles. These results clearly underscore the potential benefits of developing a better understanding of the underlying cognitive limitations associated with impaired functioning in individual children.
  • Wednesday, November 26, 3:30pm, MMC BSB 108: Michela Ippolito (University of Toronto) Title: Similarity in counterfactuals: grammar and discourse Abstract: In this talk I investigate the issue of the context-dependence of counterfactual conditionals and how the context constrains similarity in selecting the right set of worlds necessary in order to arrive at their correct truth-conditions. The present proposal is that similarity is constrained by what I call Consistency and Non-Triviality. Assuming a model of the discourse along the lines proposed by Roberts (1996) and Buring (2003), according to which conversational moves are answers to often implicit questions under discussion, the idea behind Non-Triviality is that a counterfactual statement answers a conditional question under discussion and, therefore, is required to make a non-trivial assertion. I show that non-accidental generalizations, which have often been taken to play an important role in the interpretation of counterfactuals, are crucial in selecting which conditional question is under discussion, and I propose a formal mechanism to identify the relevant question under discussion.
  • Wednesday, December 3, 3:30pm, MMC BSB 108: Elizabeth Cowper (University of Toronto) Title: Locative Have: An applicative account Abstract: This talk discusses work in progress. Building on earlier work by Brunson and Cowper (1992), and more recent work by Bjorkman and Cowper (2013), I propose a new analysis of sentences like those in (1) and (2).
    (1) The tree has a bird’s nest in it.
    (2) The garden has had many flowers planted in it.
    I argue that ‘have’ spells out a peripheral applicative head (Kim 2011) above Event, the head hosting viewpoint aspect, and that the subject merges in the specifier of the applicative head before moving to spec/T. The applicative head assigns an affected interpretation to its specifier. This account correctly predicts a) the interactions between ‘have’ and the spellout of other auxiliaries in the clause, and b) the special meaning associated with the construction. I will conclude with some thoughts on why the pronouns in (1) and (2) cannot be replaced with anaphors, and on the question of how many different heads are spelled out by “have”.
    Brunson, Barbara, and Elizabeth Cowper. 1992. “On the topic of ‘have’.” TWPL.
    Bjorkman, Bronwyn, and Elizabeth Cowper. 2013. “Inflectional shells and the syntax of causative ‘have’.” CLA Proceedings.
    Kim, Kyumin. 2011. “External Argument Introducers.” Ph.D. thesis, University of Toronto.
  • Wednesday, January 21, 3:30pm, DSB/505: Adrian Staub (University of Massachusetts, Amherst) Title: What does cloze probability measure? Response time and modeling evidence. Abstract: It is widely accepted that a word’s predictability influences on-line comprehension, as a more predictable word elicits shorter reading times and a smaller N400 than a less predictable one. The predictability of a word is generally operationalized in terms of cloze probability, i.e., the proportion of subjects in an off-line production task who provide the word as a continuation of the sentence. The present work investigates the process by which subjects produce a cloze response, ultimately challenging the assumption that cloze probability can be equated with predictability. In two large-scale cloze experiments, subjects read a cloze prompt in RSVP format, and their response time (RT) to initiate a verbal response was recorded. Cloze probabilities closely replicated previous norms with the same items from a standard untimed task. In both experiments, higher probability responses were issued faster than lower probability responses. In both experiments there was also a sizable, and arguably counter-intuitive, relationship between item constraint (i.e., the probability of an item’s modal response) and RT: a low probability response was issued faster in a more constraining context. We show that these two RT effects, as well as other details of the data pattern, naturally emerge from a simple evidence accumulation model. Potential responses independently race toward a threshold, with the elicited response being the first to reach the threshold. The model assumes variability between potential responses in their mean time to reach the threshold, as well as within-response trial-to-trial variability. Increased item constraint is modeled as arising from increased between-response variability in finishing time. We argue that if cloze responses are produced by an activation-based race process, it is far from obvious that cloze probability is an appropriate measure of speakers’ subjective probability distribution over upcoming words. Moreover, this model of how cloze responses are produced makes comparison of cloze probabilities between items less meaningful than is usually assumed, as the relationship between a word’s underlying activation and cloze probability is not even monotonic when comparing across items. A minimal simulation of this race process appears after this list.
  • Wednesday, February 25, 3:30pm, DSB/505: Debra Titone (McGill University) Title: What the eyes reveal about first and second language reading:  Explorations of cross-language competition, emotion and individual differences. Abstract: Eye movement investigations have been crucial for building a deep understanding of the linguistic processes and representations that support first and second language reading.  Eye movement methods are ideally suited to this task: they have great temporal precision, allow researchers to observe language processes as they naturally unfold, and enable elegant gaze contingent manipulations that address theoretical questions with great rigor and precision.  In this talk, I present data from my laboratory investigating a variety of questions of relevance to first and second language reading processes.  These include the factors that modulate the real-time comprehension of language-unique words, words that straddle a bilingual’s two known languages (e.g., CHAT, which means cat in English and a conversational exchange in French;  PIANO, which refers to the same musical object in both English and French), and words that vary with respect to their emotional charge (e.g., SEX vs. SKY).  Across studies, we are particularly interested in how differences among bilinguals in L2 ability and other cognitive capacities (e.g., executive control) affect bilingual reading performance.
  • Wednesday, March 4, 3:30pm, TSH 203: Laura Sabourin (University of Ottawa) Title: Language Processing in Bilinguals: Evidence from Lexical Organization and Cognitive Control. Abstract: Much of the current research in my lab is aimed at determining the effects of age of immersion (AoI), manner of acquisition (MoA), and proficiency on how bilinguals (and language learners) process language. Initial research at the lexical level shows that, for native speakers of English with L2 French, an early AoI is required for the lexicons to become integrated (Sabourin et al., 2014a). However, in a preliminary follow-up study of native French speakers with L2 English, it appears that even a late age of L2 immersion can result in integrated lexicons if the MoA is more naturalistic (Sabourin et al., 2014b). Previous research on cognitive control in bilinguals has not always shown a bilingual advantage (Costa et al., 2009), and its existence has been debated (Paap & Greenberg, 2013). In our investigations aimed at accounting for the conflicting results in the literature (Sabourin & Vinerte, 2014), we examined the effects of participant grouping and task difficulty on the Stroop task (which measures cognitive control). When the task uses only one language, we find no differences between simultaneous and early sequential bilinguals, two groups traditionally classified together as “early” bilinguals; when the task mixes both languages, we find a significant difference between the two groups. Based on the data collected to date in our lab (including studies at other levels of linguistic processing), I hypothesize that while AoI is often the most important factor in determining how languages are processed for many bilingual and language-learning groups, there is an important role for factors such as MoA and the context of bilingualism.
  • Wednesday, April 1, 3:30pm, TSH 203: Jon Sprouse (University of Connecticut) Title: Experimental syntax and three debates in linguistics. Abstract: Over the past 15 years or so, there has been a substantial push within theoretical syntax to adopt more formal experimental methods for data collection. The obvious question to be asked about any method is what it buys us in terms of theory construction and evaluation. In this talk, I would like to review some contributions that formal experimental methods have made to three debates within the field: (i) Is the data underlying syntactic theory valid? (ii) Can complex syntactic constraints be reduced to independently motivated aspects of sentence processing? (iii) Is there a role for innate, domain-specific knowledge in learning syntactic behaviors? My hope is that each of these topics will not only show the value of formal methods for linguistic theory, but also point the way to future work on these questions.
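To make the race model in Staub’s abstract above concrete, here is a minimal simulation sketch. The parameter values are invented for illustration (they are not the authors’ fitted parameters): each candidate completion has a mean finishing time, trial-to-trial noise is added, and the first candidate to finish is the produced cloze response. Constraint corresponds to how spread out the candidates’ mean finishing times are.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_cloze(mean_times, trial_sd=0.15, n_trials=20000):
    """Race-model sketch: candidates race to threshold; the first finisher
    is produced. Returns each candidate's cloze probability and its mean
    RT on the trials it wins."""
    mu = np.asarray(mean_times, dtype=float)
    # Within-response, trial-to-trial variability in finishing time.
    finish = rng.normal(mu, trial_sd, size=(n_trials, mu.size))
    winner = finish.argmin(axis=1)   # index of the produced response
    rt = finish.min(axis=1)          # its finishing time = response RT
    cloze = np.bincount(winner, minlength=mu.size) / n_trials
    mean_rt = [rt[winner == i].mean() if (winner == i).any() else float("nan")
               for i in range(mu.size)]
    return cloze.round(3), np.round(mean_rt, 3)

# High-constraint item: one candidate much faster than the rest
# (large between-response spread in mean finishing time).
print(simulate_cloze([0.6, 1.0, 1.05, 1.1]))
# Low-constraint item: candidates bunched together (small spread).
print(simulate_cloze([0.85, 0.9, 0.95, 1.0]))
```

In this toy parameterization, higher-probability responses come out faster within an item, and the low-probability candidate (mean 1.0) wins with faster RTs in the high-constraint item than in the low-constraint one, qualitatively matching the counter-intuitive constraint effect the abstract describes.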

Past Lectures, 2013-14

Unless otherwise indicated, all Fall talks (i.e., October 9 to December 4) take place in TSH-201, and all Winter talks take place in DSB-505.

What’s the community got to do with it?

Language is inherently variable. People alternate between two or more ways of saying the same thing in every conversation and in all communities. This variation exists at all levels of grammar from lexical choices (e.g. couch vs. sofa) to pronunciation differences (e.g. talking vs. talkin’) to morphological alternations (e.g. go slow vs. go slowly) to discourse-pragmatic phenomena (e.g. I love it vs. I like love it.). Why do people do this?

In this presentation, I outline Variationist Sociolinguistics, an area of Linguistics that studies this variation and analyses it statistically, comparatively, and in reference to the social context in which it occurs (e.g. Tagliamonte, 2012, in press). The explanation for this behavior necessarily lies in the linguistic system, but it is also highly influenced by external aspects of its use (Labov, 1970; Sankoff, 1980). In order to tap the system underlying this variation, analyses must be capable of modelling the simultaneous application of social and linguistic predictors and their interaction (Cedergren & Sankoff, 1974; Labov, 1994:3); a minimal sketch of such a model follows below. This type of behavior in language may be stable, but it may also be changing, often rapidly (Labov, 2001). This means that historical, cultural and regional information may be required to interpret its use. Comparative techniques assist the analyst in evaluating similarities and differences across relevant categorizations of the data (e.g. age, sex, ethnicity, social network) (Tagliamonte, 2002). Taken together, the methodological procedures and statistical techniques of Variationist Sociolinguistics, as I will exemplify in this presentation, provide insights into the grammatical system as well as its social embedding, and therefore rich and viable means for understanding and interpreting language behavior in socially defined populations.
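To make that modelling requirement concrete, here is a minimal sketch of a variable-rule-style analysis as a logistic regression with social and linguistic predictors entered simultaneously, including an interaction. The data, variable names, and effect sizes are all simulated and hypothetical; real variationist work would of course use coded tokens from recorded speech.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 800
df = pd.DataFrame({
    "age": rng.integers(18, 80, n),                      # social predictor
    "style": rng.choice(["casual", "careful"], n),       # social predictor
    "following": rng.choice(["vowel", "consonant"], n),  # linguistic predictor
})

# Simulate the binary variant choice, e.g. talkin' (1) vs. talking (0).
eta = (-0.03 * (df["age"] - 40)
       + 0.8 * (df["style"] == "casual")
       + 0.5 * (df["following"] == "vowel"))
df["ing_reduced"] = (rng.random(n) < 1 / (1 + np.exp(-eta))).astype(int)

# Social and linguistic predictors modelled simultaneously, with interaction.
model = smf.logit("ing_reduced ~ age + style * following", data=df).fit()
print(model.summary())
```

The `style * following` term asks whether the linguistic constraint itself shifts across social contexts, which is exactly the simultaneous-application question raised above.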

Selected References:

Cedergren, Henrietta J. & Sankoff, David (1974). Variable rules: Performance as a statistical reflection of competence.  Language 50(2): 333-355.
Labov, William (1970). The study of language in its social context.  Studium Generale 23(1): 30-87.
Labov, William (1994). Principles of linguistic change: Volume 1: Internal factors. Cambridge and Oxford: Blackwell Publishers.
Labov, William (2001). Principles of linguistic change: Volume 2: Social factors. Malden and Oxford: Blackwell Publishers.
Sankoff, Gillian (1980). A quantitative paradigm for the study of communicative competence. In Sankoff, G. (Ed.), The social life of language. Philadelphia: University of Pennsylvania Press. 47-79.
Tagliamonte, Sali A. (2002). Comparative sociolinguistics. In Chambers, J. K., Trudgill, P. & Schilling-Estes, N. (Eds.), Handbook of language variation and change. Malden and Oxford: Blackwell Publishers. 729-763.
Tagliamonte, Sali A. (2012). Variationist Sociolinguistics: Change, observation, interpretation. Malden and Oxford: Wiley-Blackwell.
Tagliamonte, Sali A. (in press). Analysing and interpreting variation in the Sociolinguistic tradition. In Krug, M. & Schlüter, J. (Eds.), Research Methods in Language Variation and Change. Cambridge: Cambridge University Press.


  • Wednesday, November 20, 3:30pm: Marc F. Joanisse (The University of Western Ontario): Title: “Measuring Implicit Phonological Processing With Eye Tracking and Event-Related Potentials” Abstract: Phonological knowledge is typically measured using explicit judgments, as in categorical perception and phonological awareness tests. Although these have provided useful assessments of general phonological ability, I argue that they are heavily influenced by sensory factors, task demands and response modality. As a result they provide at best an indirect measure of the many underlying mechanisms involved in phonological knowledge. In this talk I discuss work in my lab that pursues a different approach, in which we measure phonology implicitly during spoken word recognition. In our approach, listeners see pictures of familiar objects and then hear words that either do or don’t match what they see. Manipulating the phonological similarity of what is heard versus what is expected reveals interesting modulations in eyetracking and event-related potential (ERP) measures. I discuss how this approach can be used to study phonology in both children and adults, thus providing insights into a number of domains of language research: (1) the nature of phonological deficits in children with dyslexia; (2) the extent to which such difficulties differ from those observed in children with specific language impairment (SLI); and (3) the extent to which phonological processing differs cross-linguistically, as in the case of Mandarin, a tonal language with remarkably different phonological structure from English.
  • Wednesday, November 27, 3:30pm: Kazunaga Matsuki (McMaster University): Title: “The Roles of Thematic Knowledge in Sentence Comprehension” Abstract: People possess a great deal of knowledge about real-world events. This knowledge, specifically with respect to event participants and their relations within an event (thematic knowledge), is an important component of how people understand language. In this talk, I will present results from two sentence comprehension studies that examined different aspects of how thematic knowledge influences sentence comprehension, addressing two critical unresolved issues. First, I investigated whether manipulating thematic knowledge can lead to processing disruption in sentences that are otherwise assumed to be free of processing difficulty. This issue is particularly important for adjudicating between two major theories of sentence comprehension, two-stage and constraint-based theories. Second, I investigated how thematic knowledge affects the construction of sentential meaning representations, and how misinterpretations can occur during that process. Specifically, the study evaluated several possibilities for how misanalyses of thematic roles might occur in full passive sentences that varied in plausibility. The novel aspects of this study were in-depth analyses of the types of errors that participants make and the use of ERPs to investigate on-line processing differences. I will conclude that people’s knowledge of the roles played by specific types of participants in specific types of events immediately and continuously influences language comprehension.
  • Wednesday, December 4, 3:30pm: Sylvain Moreno (Baycrest): Title: “Brain plasticity from perception to cognition: The role of video games in altering brain function” Abstract: Neuroeducation is an emerging field in cognitive science, in which neuroscientific methods are used to study skill transfer and learning. Previous studies examining such training programs have reported mixed results (Detterman & Sternberg, 1982). Some studies found small but significant improvements in performance on untrained transfer tasks (e.g., problem-solving tasks; Lovett & Anderson, 1994), whereas other studies have found no transfer to untrained tasks (Olesen, Westerberg, & Klingberg, 2004). Yet, in spite of these mixed results, successful transfer of skills to non-music-related tasks has been demonstrated for musical training (Schellenberg, 2004; for a review, Moreno, 2009b). This presentation will outline findings related to the transfer of skills from a video-game-based music training program to untrained auditory and cognitive processing skills such as language. It will focus on three main questions: (1) Is transfer of skills possible between cognitive activities? (2) If so, how can we characterize the nature of this transfer? (3) What can the neural correlates of these transfer mechanisms tell us about transfer and learning?
  • Wednesday, January 22, 3:30pm: Lee Wurm (Wayne State University): Title: “Emotion Effects in Lexical Processing” Abstract: Models of spoken word recognition have not historically included semantic or affective information as part of the recognition process. Such effects have been presumed to arise much later. After all, how can a word’s meaning affect the recognition process before the word has been recognized? A growing body of research suggests, though, that such effects are not only early but pervasive. I will discuss some of this research, focusing on affective dimensions we developed while trying to make sense of previous findings. In several studies we have found that lexical decision times are predicted by a Danger x Usefulness interaction. In our view the interaction argues for an embodied (or “situated”) model of cognition. I will also make connections to work on memory and, if time permits, on cognitive aging.
  • Wednesday, February 5, 3:30pm: Jeff Mielke (North Carolina State University): Title: “Individual differences shape phonological typology” Abstract: Linguistic patterns can be studied at three distinct levels of granularity: the individual, the language, and the set of all languages. At the individual level, descriptions can refer to an individual’s vocal tract morphology, acquisition history, and cognitive processes. A language-level description can include patterns and associations typically shared by members of a speech community. A crosslinguistic comparison can identify the patterns that universally/frequently/rarely/never occur in language-level descriptions. Generative grammar posited a direct link between crosslinguistic universals and the fundamental sameness of individual language learners, with language as an epiphenomenal intermediate level (e.g., Chomsky and Halle 1968, Chomsky 1986). Language has also been analyzed as a dynamical system, with familiar typological patterns emerging as a consequence of language use and language change (e.g., Ohala 1981, S. Kirby 1999, Blevins 2004). An intriguing implication of the latter approach is that differences between individual language users could shape the development and structure of languages. Phonology provides a particularly good testing ground for the effects of individual differences, because language-level phonological patterns often clearly reflect constraints imposed by the central nervous system, the vocal tract, the auditory system, and social interaction, all of which vary nontrivially across individuals, and all of which are easier to investigate now than 50 years ago. I will present phonetic data and survey data to contrast individual-level variation (North American English /r/ allophony, Canadian French rhotic vowels, and VOT accommodation in English) with similar variation at the language level (/l/ velarization crosslinguistically and regional variation in English short-a tensing). I will argue on this basis that the development of familiar phonological patterns crucially depends on individual-level variation and language-level convergence. This approach also offers an account of sound change minimality through an individualized notion of what qualifies utterances as same or different.
  • Wednesday, February 26, 3:30pm: Stefan Th. Gries (University of California, Santa Barbara): Title: “Statistical methods in corpus linguistics: recent improvements and applications” Abstract: By its very nature, corpus linguistics is a discipline not just concerned with, but ultimately based on, the distributions and frequencies of linguistic forms in and across corpora. This undisputed fact notwithstanding, for many years corpus linguistics was dominated by work that was limited in both computational and statistical ways. As for the former, a lot of work is based on a small number of ready-made proprietary software packages that provide some major functions but of course cannot provide the functionality that programming languages offer. As for the latter, a lot of work is very unstatistical in nature, relying on little more than observed frequencies or percentages/conditional probabilities of linguistic elements. However, over the last 10 years or so, this picture has changed and corpus linguistics has evolved considerably, to a state where more diverse descriptive statistics and association measures, as well as multifactorial regression modeling, other statistical classification techniques, and multivariate exploratory statistics, have become quite common. In this talk, I will survey a variety of recent studies that showcase this newly developed methodological variety in both synchronic and diachronic corpus linguistics; examples will include applications of generalized linear (mixed-effects) models, different types of cluster-analytic algorithms, principal components analysis and other dimension-reduction tools, and others. (A toy computation of one classic association measure appears after this list.)
  • Wednesday, March 19, 3:30pm: Martin Hackl (MIT): Title: “On the Acquisition and Processing of Only: Scalar Presupposition and the Structure of Alternatives”
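As a toy illustration of the association measures Gries mentions above, the sketch below computes pointwise mutual information and the log-likelihood ratio (G2; Dunning 1993) from a 2x2 collocation contingency table. All of the counts are invented for illustration.

```python
import math

# Co-occurrence counts for a node word and a candidate collocate (all
# hypothetical): o11 = node with collocate, o12 = node without collocate,
# o21 = collocate without node, o22 = neither.
o11, o12, o21, o22 = 150, 9850, 4850, 985150
N = o11 + o12 + o21 + o22

# Expected cell counts under independence of node and collocate.
e11 = (o11 + o12) * (o11 + o21) / N
e12 = (o11 + o12) * (o12 + o22) / N
e21 = (o21 + o22) * (o11 + o21) / N
e22 = (o21 + o22) * (o12 + o22) / N

# Pointwise mutual information: how much more often the pair occurs
# than chance predicts (in bits).
pmi = math.log2(o11 / e11)

# Log-likelihood ratio over all four cells (more robust for rare events).
g2 = 2 * sum(o * math.log(o / e)
             for o, e in [(o11, e11), (o12, e12), (o21, e21), (o22, e22)]
             if o > 0)

print(f"PMI = {pmi:.2f} bits, G2 = {g2:.1f}")
```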

Past Lectures, 2012-13

  • Wednesday, September 19, 2012, 3:30pm: Gerard Van Herk (Memorial University): Title: “From ‘Arr!’ to -S: What pirates, yokels, and Newfoundland drag queens tell us about language’s social meanings” Abstract: Competent members of a sociolinguistic community share norms about the social meanings associated with linguistic features, as well as the linguistic features associated with social groups. When a linguistic feature is shared by multiple (marginalized) groups, it develops a broader social meaning and becomes available for performance, mimicry, joking, literary representations of types, and other sociolinguistic work. This talk will describe recent quantitative work on one such feature, the non-standard use in urbanizing Newfoundland of verbal -s (as in “I loves it!”). As the social contexts of -s use change, so do its social meanings, so that what was once a marker of rural identity becomes associated with young urban females and then drag queens. (In celebration of International Talk Like a Pirate Day, September 19.)
  • Wednesday, October 24, 2012, 3:30pm: Chris Kennedy (University of Chicago): Title: “Vagueness, Imprecision and Tolerance” Abstract: When I say “the theater is packed tonight” or “there are a lot of people in the theater tonight,” my utterance leaves a certain amount of uncertainty about the actual number of people in the theater. The same uncertainty about actual number is typically present when I say “the theater is full tonight” (even if the number of seats in the theater is known) or “there are 1000 people in the theater tonight.” In all cases, this can be traced back to the fact that we use and interpret utterances like these tolerantly: small differences in the actual number of people in the theater typically do not affect our willingness either to make these utterances or to accept a speaker’s utterance of them. However, there is an important difference between the two sets of utterances: “the theater is full” and “there are 1000 people in the theater” can be used or understood in a way that is fully precise, but “the theater is packed” and “there are a lot of people in the theater” cannot be so used or understood. This distinction — the possibility of “natural precisifications” (to use a term from Manfred Pinkal) — is one of several empirical properties that distinguish vague terms like ‘packed’ and ‘a lot’ from (potentially) imprecise ones like ‘full’ and ‘1000’. The central theoretical question is how to account for these empirical differences while at the same time explaining why both kinds of expressions can be tolerant. Do we take the shared property of tolerance to indicate that both vague and imprecise expressions have the same core semantic/pragmatic analysis, and find a way to resolve or explain away the differences; or do we take the differences to indicate that vagueness and imprecision reflect a fundamental semantic/pragmatic distinction, and find a way to accommodate the shared property of tolerance? My goal in this talk is to present some arguments in favor of the latter position. I will begin by providing linguistic and experimental evidence which argues in favor of a distinction between vagueness as a fundamentally semantic phenomenon and imprecision as a fundamentally pragmatic one. I will then argue that any reasonable pragmatic model of imprecision is one that will automatically give rise to the phenomenological properties associated with tolerance.
  • CANCELLED: Thursday, November 8, 2012: Liina Pylkkänen (New York University) — talk co-hosted with the Department of Psychology, Neuroscience and Behaviour; please refer to their website for the abstract.
  • Friday, November 9, 2012, 3:30pm, DSB-505: Alec Marantz (New York University)
    “Words and Rules Revisited: Separating the Syntagmatic and the Paradigmatic in Morphology”
    Abstract: Pinker’s influential presentation of the distinction between the combinatoric units of language (the “words”) and the mechanisms that organize the units into linguistic constituents (the “rules”) rested on a strong, but ultimately incorrect, theory about the connection between a speaker’s internalized grammar and his/her use of language: the regular syntagmatic combination of units leaves no lasting impact on the brain, while repetition of a unit strengthens or alters its representation in memory. Thus, the telltale sign of combination is a lack of frequency effects (regular past tense forms like “walked” show no behavioral consequences of whole-form frequency; only the frequency of the stem “walk” matters, so “walk” + “ed” is a syntagmatic, “rule” combination), and the telltale sign of a memorized unit is the presence of frequency effects (irregular past tense forms like “taught” show behavioral consequences of whole-form frequency; only the frequency of “taught,” not “teach,” matters, so “taught” is memorized as a “word” rather than formed via syntagmatic combination). The psycholinguistic and neurolinguistic literature of the past 30 years has demonstrated that syntagmatic combination, no matter how “regular,” does leave a trace of some sort in the brain, such that frequency effects of various sorts are characteristic of brain and behavioral evidence both for atomic items (morphemes) and for combinations of items. Nevertheless, linguistic theory does distinguish between atomic units, which “compete” for positions in syntax along the “paradigmatic” dimension of language, and combinations of units, which are organized according to the “rules” of syntax. The Neuroscience of Language Lab at NYU has been using MEG to explore the differences in the neural bases of syntagmatic and paradigmatic frequency effects, with the ultimate goal of using neural measures to help answer difficult linguistic questions. For example, work in Distributed Morphology has argued for the universal separation of the roots of lexical items (nouns, verbs, adjectives) from lexical category information (n, v, adj). Is the relationship between the root and the category-determining feature syntagmatic (involving the syntactic combination of a root and a category morpheme) or paradigmatic (involving a category feature associated with the root, but not combined with it via the syntax)? This question is parallel to Pinker’s question about the connection between the verb and past tense for English irregular verbs (is it syntactic, “rules,” or paradigmatic, “words”?), and we know the answer is “rules” in this case. Can we exploit the same general types of experiments that demonstrate that past tense in English is always computed as a syntactic combination of units to show that lexical categories also involve a syntactic relation between a root and a category morpheme? In this talk, I will present some recent findings from our lab suggesting that paradigmatic effects, quantified in terms of entropy or uncertainty about which atomic element is being processed, may be separated from syntagmatic effects, quantified in terms of the surprisal of an atom being processed relative to syntagmatic expectations built up from prior experience. If speakers processing a word stem show entropy effects over the possibilities that the same root might occur in different lexical categories (“hammer” as a noun or verb), the results would argue against the Distributed Morphology account, whereas if they instead show no such entropy effects but surprisal effects at the resolution of the category ambiguity, the results would support this account. (A toy illustration of these two quantities appears after this list.)
  • Wednesday, November 21, 2012, 3:30pm, DSB B107: Daphna Heller (University of Toronto)
    “Common ground and the probabilistic nature of referential domains”
    Abstract: Theoretical approaches to reference assume that definite descriptions such as “the candle” are used to refer to a candle which is uniquely identifiable relative to a set of entities defined by the situational context. Thus, the interpretation of definite descriptions crucially depends on listeners’ ability to correctly construct this situation-specific “referential domain”. While there is considerable experimental evidence that listeners are indeed able to use various types of information to construct referential domains in real time, some evidence seems to suggest that information about common ground is not used for this task. That is, evidence in the psycholinguistics literature is mixed regarding whether listeners incorporate the distinction between shared and private information in the earliest moments of processing. In this talk, I will review some of these apparently contradictory results (Keysar et al., 2000; Heller et al., 2008) and argue that they can be explained under a novel approach to referential domains. Specifically, I propose that instead of choosing one domain over another, listeners simultaneously consider more than one domain, weighing their relative contributions probabilistically (a toy sketch of such a weighted mixture appears after this list). I present data from two experiments in support of this approach, and discuss the implications for our understanding of referential domains more generally.
    References: Keysar, B., Barr, D. J., Balin, J. A., & Brauner, J. S. (2000). Taking perspective in conversation: The role of mutual knowledge in comprehension. Psychological Science, 11, 32-37. Heller, D., Grodner, D., & Tanenhaus, M. K. (2008). The role of perspective in identifying domains of reference. Cognition, 108, 831-836.
  • Wednesday, January 16, 2013, 3:30pm, DSB-505: Randy Newman (Acadia University)
    “Is a rows a rose, as Van Orden (1987) claimed? New evidence from ERP and fMRI research regarding the use of phonology in activating the meaning of words”
    Abstract: Learning to read requires forming associations between the sounds (i.e., phonology), spellings (i.e., orthography), and meanings (i.e., semantics) of words. While there is a general consensus that phonology influences reading, theories differ in the importance they assign to phonological information, particularly in skilled readers. So-called strong phonological theories propose that computation of phonology occurs early and automatically in the course of reading. An alternative view assigns less importance to phonology, arguing that phonological influences are dependent on factors such as reading skill and word frequency. Work in my lab at Acadia and with various collaborators employs event-related brain potential (ERP) measures and functional MRI to define the temporal and spatial dynamics of phonological processing as a means of adjudicating between these opposing views. My talk will focus on experiments that have taken advantage of the homophony of the English language to clarify phonology’s role in activating the meaning of written words. Homophones are words with identical pronunciations, but which differ in spelling and meaning (e.g., bear/bare). The rationale for using homophones is that if word meanings are activated solely from orthographic representations, then only the meaning of a presented homophone should become activated. In contrast, if phonology activates the meanings of words, then presentation of a homophone will result in activation of semantic representations associated both with the presented homophone (e.g., bare) and with its homophone mate (e.g., bear) – a so-called homophone effect. Results from a series of experiments have shown that the presence of homophone effects depends on a number of factors, including word frequency, word predictability, the word-likeness of nonword fillers, and the type of paradigm employed. The general conclusion of the research conducted thus far is that the use of phonology in contexts that most closely resemble natural reading (i.e., sentence verification paradigms) is likely early and automatic. However, in contexts where readers must make lexical decisions involving homophones presented amongst nonwords with atypical orthography (e.g., roynt), readers appear able to make decisions based on a superficial analysis of orthographic information. Implications for models of reading will be discussed.
    Background literature: Jared, D., Levy, B. A., & Rayner, K. (1999). The role of phonology in the activation of word meanings during reading: Evidence from proofreading and eye movements. Journal of Experimental Psychology: General, 128, 219-264. Newman, R. L., & Connolly, J. F. (2004). Determining the role of phonology in silent reading using event-related brain potentials. Cognitive Brain Research, 21, 94-105.
  • Wednesday, February 27, 2013, 3:30pm: Yoonjung Kang (University of Toronto)
    “Tonogenetic sound change in Korean stops and natural classes”
    Abstract: Tonogenesis is a commonly attested sound change whereby phonation contrasts of consonants give rise to, and eventually become replaced by, tonal contrasts on an adjacent vowel. Korean has a typologically uncommon three-way contrast of voiceless stops among aspirated (heavily aspirated, pha), lenis (lightly aspirated, pa), and fortis (unaspirated, p’a) stops. The stops are differentiated by voice onset time (VOT, an acoustic measure of degree of aspiration), ordered aspirated > lenis > fortis, and also by the fundamental frequency (f0, an acoustic correlate of pitch) of the immediately following vowel, ordered aspirated ≈ fortis > lenis. Studies on Seoul Korean from the last decade or so find that the two long-VOT categories (aspirated and lenis stops) are losing their VOT distinction and that their f0 difference is emerging as the primary cue for the contrast (aspirated: phal > pál (H); lenis: pal > pàl (L)). In this talk, I will draw on data from synchronic, diachronic, and dialectal variation of Korean stops to examine the development of the f0 contrast over time, both apparent and real. We find evidence that the development of the f0 contrast is adaptive; namely, the f0 distinction is further exaggerated where the VOT distinction is threatened. In those dialects where lenis stops (the middle VOT category) overlap with fortis stops in VOT, f0 is further raised following fortis stops, while in those dialects where lenis stops overlap with aspirated stops in VOT, as in Seoul Korean, f0 is further raised following aspirated stops. At the same time, we also find evidence that in Seoul Korean, f0 enhancement targets a broader natural class of sounds (i.e., all aspirated stops and fricatives) rather than narrowly targeting the segments directly involved in the threatened contrast (i.e., aspirated stops). In sum, the study shows that the tonal contrast is shaped through adaptive dispersion to maintain threatened contrasts, but the dispersion is mediated by phonological structure, i.e., a distinctive feature.
  • Wednesday, March 13, 2013, 3:30pm: Wen Cao (McMaster University/Beijing Language and Culture University)
    “Perceptual studies on Chinese Tone-3 and its focusing”
    Abstract: Being low ([+L]) is regarded as the distinctive feature of Tone-3 (T3) in Standard Chinese. However, previous work has not yielded clear conclusions on its basic/recitation pattern; different descriptions can be found in the literature, such as dipping, broken, low-level, low-falling, falling-rising, and falling-level-rising. In the first part of my talk, I will introduce my team’s recent work on Chinese tone perception. We conclude that the “falling-level-rising” pitch contour is the “ideal” isolation form of T3, and that its best value on the five-pitch-level scale is /2112/, in which the /11/ portion occupies 60% of the duration. How, then, can a low-tone syllable in a Chinese sentence be perceived as focally accented or not? In the second part of my talk, I will introduce an experiment aiming to answer this question. A total of 156 sentences containing Tone-3 words were synthesized and used as stimuli in a perceptual study. The sentences differed in the size of the fall between the two high pitches, and in the duration and phonation types of the T3 syllables. Thirty-nine subjects were asked to judge where the focus or accent was in each sentence. The results show that at least three degrees of pitch drop are involved in focus recognition: a big drop of about 10 semitones, a middle-sized drop of about 6 semitones, and a small drop of about 2 semitones (the semitone scale is illustrated in a short sketch after this list). The results suggest that the three sizes of pitch drop have different indications in Chinese intonation, depending on both the tone and the tone combination. In perception, there are various ways to realize Tone-3 focus in the Tx-T3-Ty sentence series, but in production, or for text-to-speech synthesis, the rule is simply to make a middle-sized pitch drop with a long and creaky T3 syllable. Similarly, to focus on the low-tone syllable in T3-Tx-Ty sentences, a creaky T3 syllable is essential, whereas a long T3 syllable is a strong determinant of a low-tone focus in Tx-Ty-T3 sentences.
  • Wednesday, March 27, 10:30-12pm: Evelina Fedorenko (MIT) (a talk co-hosted with PNB; please refer to their website for details)
  • Wednesday, March 27, 3:30-5pm: Ted Gibson (MIT)
    “Language for communication: Language comprehension and the communicative basis of word order”
    Abstract: Perhaps the most obvious hypothesis about the function of human language is that it is used for communication. Chomsky has famously argued that this is a flawed hypothesis, because of the existence of such phenomena as ambiguity. Furthermore, he argues that the kinds of things people tend to say are not short and simple, as would be predicted by communication theory. Contrary to Chomsky, my group applies information theory and communication theory from Shannon (1948) to attempt to explain the typical use of language in comprehension and production, together with the structure of languages themselves. First, we show that ambiguity out of context is not only not a problem for an information-theoretic approach to language, it is a feature. Second, we show that language comprehension appears to function as a noisy-channel process, in line with communication theory. Given s_i, the intended sentence, and s_p, the perceived sentence, we propose that people maximize P(s_i | s_p), which is equivalent to maximizing the product of the prior P(s_i) and the likelihood of the noise process P(s_i → s_p) (a minimal sketch of this computation appears after this list). We show that several predictions of this way of thinking about language are borne out: (1) the more noise needed to edit from one alternative to another, the lower the likelihood that the alternative will be considered; (2) in the noise process, deletions are more likely than insertions; (3) increasing the noise increases the reliance on the prior (semantics); and (4) increasing the likelihood of implausible events decreases the reliance on the prior. Third, we show that this way of thinking about language leads to a simple rethinking of the P600 from the ERP literature. The P600 wave was originally proposed to reflect people’s sensitivity to syntactic violations, but there have been many instances of data in the literature that are problematic for this interpretation. We show that the P600 is best interpreted as sensitivity to an edit of the signal, made in order to render it more easily interpretable. Finally, we discuss how thinking of language as communication can explain aspects of the origin of word order. Some recent evidence suggests that subject-object-verb (SOV) may be the default word order for human language. For example, SOV is the preferred word order in a task where participants gesture event meanings (Goldin-Meadow et al. 2008). Critically, SOV gesture production occurs not only for speakers of SOV languages, but also for speakers of SVO languages, such as English, Chinese, Spanish (Goldin-Meadow et al. 2008) and Italian (Langus & Nespor, 2010). The gesture-production task therefore plausibly reflects a default word order independent of native language. However, this leaves open the question of why there are so many SVO languages (41.2% of languages; Dryer, 2005). We propose that the high percentage of SVO languages cross-linguistically is due to communication pressures over a noisy channel. We present several gesture experiments consistent with this hypothesis, and we speculate on how a noisy-channel approach might explain several typical word order patterns that occur in the world’s languages.
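
For readers unfamiliar with the information-theoretic terms in the Marantz abstract above, here is a minimal Python sketch of how entropy and surprisal are standardly computed. It is not from the talk; the category probabilities for “hammer” are invented purely for illustration.

```python
# Toy illustration (not from the Marantz talk) of the two quantities:
# paradigmatic entropy over which category a root realizes, and
# syntagmatic surprisal of the item actually encountered.
import math

def entropy(dist):
    """Shannon entropy in bits of a {outcome: probability} distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def surprisal(dist, outcome):
    """Surprisal in bits of one observed outcome."""
    return -math.log2(dist[outcome])

# Hypothetical category distribution for the root "hammer".
hammer = {"noun": 0.7, "verb": 0.3}

print(entropy(hammer))            # paradigmatic uncertainty: ~0.88 bits
print(surprisal(hammer, "verb"))  # syntagmatic surprisal: ~1.74 bits
```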
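
The probabilistic weighing of referential domains in the Heller abstract can be pictured, very roughly, as a mixture of per-domain referent distributions. The sketch below is a hypothetical illustration, not Heller’s actual model; the two domains and the 0.7/0.3 weights are invented.

```python
# Toy sketch: listeners weigh multiple referential domains rather than
# choosing one. All domain contents and weights are invented.

def mixture(domains, weights):
    """Combine per-domain referent distributions into one distribution."""
    referents = {r for d in domains for r in d}
    return {r: sum(w * d.get(r, 0.0) for d, w in zip(domains, weights))
            for r in referents}

# "the candle": one candle is in common ground; a second is in the
# listener's private knowledge (hidden from the speaker).
shared_domain = {"candle_shared": 1.0}
egocentric_domain = {"candle_shared": 0.5, "candle_private": 0.5}

# Hypothetical weight of 0.7 on the common-ground domain.
print(mixture([shared_domain, egocentric_domain], [0.7, 0.3]))
# -> candle_shared: 0.85, candle_private: 0.15
```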
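
The pitch-drop sizes in the Cao abstract are stated in semitones. The conversion from a pair of f0 values in Hz to semitones is the standard 12 × log2 of the frequency ratio; the f0 values below are invented, chosen only to land near the three reported drop sizes.

```python
# Semitone scale used for the pitch-drop sizes; Hz values are invented.
import math

def semitones(f_high, f_low):
    """Interval in semitones between two f0 values in Hz."""
    return 12 * math.log2(f_high / f_low)

print(round(semitones(200, 112), 1))  # ~10 st: big drop
print(round(semitones(200, 141), 1))  # ~6 st: middle-sized drop
print(round(semitones(200, 178), 1))  # ~2 st: small drop
```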
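
The noisy-channel computation in the Gibson abstract amounts to scoring each candidate intended sentence by prior times noise likelihood and picking the best. The following is a toy sketch, not the group’s actual model; the sentences and all probabilities are hypothetical.

```python
# Toy noisy-channel comprehension: score each candidate intended
# sentence s_i by P(s_i) * P(s_i -> s_p). All numbers are invented.

def best_interpretation(perceived, candidates, prior, noise):
    """Return the candidate s_i maximizing P(s_i) * P(s_i -> s_p)."""
    return max(candidates, key=lambda s: prior[s] * noise[(s, perceived)])

perceived = "the mother gave the candle the daughter"
candidates = ["the mother gave the candle the daughter",
              "the mother gave the candle to the daughter"]

# Hypothetical prior: the plausible to-dative reading is far more likely.
prior = {candidates[0]: 0.01, candidates[1]: 0.99}

# Hypothetical noise model: identity is likeliest; deleting "to" is
# penalized but still plausible (deletions outrank insertions, per
# prediction (2) in the abstract).
noise = {(candidates[0], perceived): 0.9,
         (candidates[1], perceived): 0.05}

print(best_interpretation(perceived, candidates, prior, noise))
# -> "the mother gave the candle to the daughter"
```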

Past Lectures (2011-12)

  • Wednesday, January 18, 2012, 3:30pm: Dr. Masako Hirotani (Carleton University)
  • Wednesday, February 1, 2012, 3:30pm: Dr. Colin Phillips (University of Maryland)
    Senator William McMaster Invited Lecturer in Cognitive Neuroscience of Language
    “Linguistic Illusions: Where you see them, where you don’t”
  • Wednesday, February 15, 2012, 4:00pm: Dr. Keren Rice (University of Toronto)
    “Athabaskan verb templates and what underlies them”
  • Wednesday, March 21, 2012, 3:30pm: Dr. Ellen Bialystok (York University)
    “Reshaping the Mind: The Benefits of Bilingualism”
  • Wednesday, April 4, 2012, 3:30pm: Dr. Ronnie Wilbur (Purdue University)

Past Lectures (Fall 2011)

  • Wednesday, September 21, 2011, 3:30pm: Dr. Julie A. Van Dyke (Haskins Laboratories)
    “Memory interference as a determinant of poor language comprehension”
  • Wednesday, October 19, 2011, 3:30pm: Dr. Diane Massam (University of Toronto)
    “Variations on Predication: Word order and case in Niuean”
  • Wednesday, October 26, 2011, 3:30pm: Dr. Grit Liebscher (University of Waterloo)
    “Identity construction through language: The case of German Canadians”
  • Wednesday, November 2, 2011, 3:30pm: Dr. Roger Schwarzschild (Rutgers University)
    “Quantifier Domain Adverbials, Semantic Change and the Comparative”

Past Lectures (2010-2011)

  • Friday, October 1, 3:30pm: Dr. Michael Walsh Dickey (University of Pittsburgh)
    Automatic processing and recovery of complex sentences in aphasia
  • Wednesday, October 27, 3:30pm: Dr. Mathias Schulze (University of Waterloo)
    Measuring textual complexity
  • Wednesday, November 10, 2010, 3:30pm: Dr. Jennifer Cole (University of Illinois)
    Investigating the variable prosody of everyday speech
  • Wednesday, November 24, 2010, 3:30pm: Dr. Ileana Paul (University of Western Ontario)
    What do determiners do?
  • Wednesday, December 1, 2010, 3:30pm: Dr. Juan Uriagereka (University of Maryland)
    A Clash of the Interfaces
  • Wednesday, January 12, 2011, 3:30pm: Dr. James Walker (York University)
    Phonological Variation in Toronto English: Linguistic and Social Conditioning
  • Wednesday, January 19, 2011, 3:30pm: Dr. Cristina Schmitt and Dr. Alan Munn (Michigan State University)
    Acquiring Definiteness: Syntax, Semantics, Pragmatics and Acquisition
  • Wednesday, January 26, 2011, 3:30pm: Dr. Veena Dwivedi (Brock University)
    Individual Differences in Shallow Semantic Processing of Scope Ambiguity
  • Friday, February 11, 2011, 3:30pm: Dr. Florian Jaeger (University of Rochester)
    How communicative pressures may come to shape language over time
  • Wednesday, March 2, 2011, 3:30pm: Dr. Alana Johns (University of Toronto)
    The Language of the Inuit: What we don’t know.
  • Wednesday, March 16, 2011, 3:30pm: Dr. Michael Schutz (McMaster University School of the Arts)
    Deconstructing a musical illusion: causality and audio-visual integration.
  • Wednesday, April 6, 2011, 9:30 – 2:30: Student Research Day
    MUMC-2J13
    Hear talks and view posters presenting the work of student researchers in the Department of Linguistics and Languages.

Past Lectures (2009-2010)

  • January 13 – Dr. Craig Chambers, University of Toronto (Mississauga)
    Referential Anticipation in Incremental Sentence Comprehension: A Result of Shallow or Rich Linguistic Processing?
  • January 27 – Dr. Mike Kliffer, McMaster University
    Prescriptivism: An inquiry into its Cognitive Side
  • February 03 – Dr. Ian Smith, York University
    Missionary Language Practice & the Unwitting Conversion of Ceylon Portuguese
  • February 10 – Dr. Uli Sauerland, ZAS Berlin
    The Origin of Embedded Clauses
  • February 24 – Dr. Ann Bunger, University of Delaware
    The Role of Nonlinguistic Event Representation in First Language Acquisition
  • March 03 – Dr. Usha Goswami, Cambridge University
    Combining Educational Neuroscience and Cognitive Developmental Psychology: The Example of Learning to Read
  • March 10 – Dr. Steven Brown, McMaster University
    Neural Control of Vocalization and Vocal Imitation in Humans
  • March 24 – Dr. John Whitman, Cornell University
    The Formal Syntax of Alignment Change: the Case of Old Japanese