Cognitive Science of Language Lecture Series
LECTURE SERIES 2018-19
PLEASE NOTE THE LOCATION FOR EACH TALK!
This schedule will be updated as information becomes available. Graduate students taking CogSciL 725 and 726 should also consult the course website on Avenue to Learn. All graduate students in the program are strongly encouraged to attend the lectures and meet our guest speakers.
‘Speaking in Tongues’: Staging Hospitality of (Non)Translation
Location: CNH 102
Guest Speaker: Dr. YANA MEERZON
Bio: Professor Yana Meerzon teaches in the Department of Theatre, University of Ottawa. Her book publications include A Path of the Character: Michael Chekhov’s Inspired Acting and Theatre Semiotics (2005) and Performing Exile – Performing Self: Drama, Theatre, Film (2012). She also co-edited Performance, Exile and ‘America’; Adapting Chekhov: The Text and Its Mutations; History, Memory, Performance; the Routledge Companion to Michael Chekhov; and a special issue of Theatre Research in Canada on theatre and immigration. Her new book project is entitled Performance, Subjectivity, Cosmopolitanism.
Abstract: As multilingualism has become one of the defining elements of today’s world, it has also become a marker of contemporary theatre, which uses multilingual dialogue to investigate the potentials of interpersonal communication and theatrical translation, and to create communities of new cosmopolitanism (Venuti 469). Translation as linguistic (non)hospitality (Karpinsky 2017) is tightly linked with migration, “and while the political nature of language is certainly not exclusive to migration scenarios, migration enhances its visibility, highlighting the interplay of linguistic choices which are variously permitted, frowned upon, singled out for praise, or simply barred” (Polezzi 346).
The performing arts remain one of the primary public venues for constructing and discussing the power and the faults of translation and multilingualism. They allow one to ask “whether our obsessive interest in language and its identitarian qualities should necessarily be read as a reification of alterity and whether translation therefore necessarily becomes an instrument of control or whether there are spaces for translators and self-translators to act as witnesses to the experience of migration and to sustain multilingual practices which defy any rigid association between state, language, identity and the apportioning of rights” (Polezzi 347). The work of immigrant Canadian theatre is a good example of such practices.
The official politics of multiculturalism and bilingualism has served as a catalyst for its multilingual production. From Fennario’s Balconville, which imagines its audiences as fully bilingual and thus capable of following bilingual dialogue without translation, to Babayants’s multilingual In Sundry Languages, which envisions spectators familiar with many but not all of the languages used on stage, Canadian multilingual theatre has been staging a linguistic (non)hospitality in which a separation between I and myself takes place. The performative strategies of this practice constitute the focal point of the proposed talk.
Learning what to (not) take for granted
Location: TSH 203
Guest Speaker: DR. ATHULYA ARAVIND
Bio: Dr. Athulya Aravind is a post-doctoral fellow at the Harvard Lab for Developmental Studies. The primary focus of her research is first language acquisition. In particular, she examines children’s developing understanding of what structures are possible in their language, how those structures are interpreted, and how they may be used in conversation. Starting in Fall 2019, she will be an Assistant Professor at MIT’s Department of Linguistics.
Abstract: The overall conveyed meaning of an utterance is a conglomerate of inferences, distinguishable both in how they arise (their semantics) and how they can be appropriately used in conversation (their pragmatics). In this talk, I focus on the distinction between asserted content, the main new information conveyed by an utterance, and presuppositions, background information that is taken for granted. For example, the utterance “I ate an apple, too” conveys, as part of its asserted meaning, that the speaker ate an apple, and presupposes that something else has been eaten. These meaning components are not explicitly labeled as such in ordinary conversation. How does a child learning language learn to distinguish between them and identify how they are factored into the overall interpretation?
In a series of behavioral experiments, I use the conversational principles governing assertion and presupposition to probe children’s ability to distinguish between these two layers of meaning (e.g. Stalnaker 1974, Grice 1975). Cooperative speakers adhere to different rules when asserting vs. presupposing something. It is uncooperative to assert something that your listener already knows. On the flip side, it is uncooperative to presuppose something your listener doesn’t already know. I show that 4-to-6-year-olds have adult-like biases about the knowledge state of the listener, depending on the presupposed and asserted content of a speaker’s statement.
Understanding code-switched sentences and foreign-accented speech: Electrophysiological and behavioral evidence
Location: BSB 121
Guest Speaker: JANET VAN HELL
Bio: Janet van Hell is a full professor of Psychology and Linguistics at the Pennsylvania State University and co-Director of the Center for Language Science at Penn State (http://www.cls.psu.edu). She holds a secondary position as professor of Language Development and Second Language Learning at Radboud University Nijmegen, the Netherlands. She received her Ph.D. from the University of Amsterdam in 1998. Her research focuses on second language learning and bilingualism, as well as on later language development in children with typical or atypical language development. She combines behavioral, neuropsychological, and linguistic techniques to study language development and language processing. Her work is supported by grants from, among others, the National Science Foundation and the Netherlands Organization for Scientific Research.
Abstract: A unique feature of bilingual speech is that bilinguals often produce utterances that switch between languages, such as “And we reckoned Holland was too small voor ons” [for us]. Codeswitching has been studied extensively in the field of linguistics, but an emergent body of psycholinguistic studies also seeks to examine the cognitive mechanisms associated with the comprehension and production of codeswitched sentences. These psycholinguistic studies show that switching between languages often incurs a measurable processing cost, even though codeswitchers typically report that switching occurs automatically and requires no cognitive effort. I will present a series of recent behavioral and EEG studies, using ERP and time-frequency analyses, that examined the neurocognitive mechanisms associated with the comprehension of written and spoken codeswitched sentences. I will also discuss evidence showing that switching direction (switching from the first language to the second language, or vice versa) and accented speech modulate switching costs when bilinguals listen to or read code-switched sentences. Together these studies attest to the value of integrating cross-disciplinary approaches to gain more insight into the neural, cognitive, and linguistic mechanisms of the comprehension of codeswitched sentences.
Motivating durable learning: focused attention through instructional design
Location: BSB 121
Guest Speaker: JOE KIM
Bio: Joe Kim is an Associate Professor in Psychology, Neuroscience & Behaviour at McMaster University and co-ordinates the innovative McMaster Introductory Psychology (macintropsych.com) program. He also directs the Education & Cognition Lab which aims to understand how cognitive principles such as attention, memory and learning can be applied to develop evidence-based interventions in education and training. He also organizes the annual McMaster Symposium on Education & Cognition (edcog.ca) which brings together cognitive scientists, educators and policy makers to explore how cognitive science can be applied to educational policy and instructional design.
Abstract: Cognitive scientists have been systematically studying processes such as attention, memory and learning for more than 150 years. This rich resource of knowledge has been only recently applied to developing evidence-based interventions in education. A key focus of this research has been to promote learning that is durable – extending beyond short-term testing into long-term retention of information that remains with the student after the final exam. In this presentation, I will discuss three key factors that instructors can implement to promote durable learning:
1. Learning relies on sustained attention. In the class, instructors can implement methods to reduce mind wandering and students can engage in practices to promote effortful and focused attention.
2. Design of teaching materials directly guides learning. Perhaps the largest impact an instructor can make on learning is to offer thoughtfully designed class materials that adhere to multimedia learning principles. Slide design that reduces cognitive load can promote student learning.
3. Study habits such as retrieval practice strengthen long-term retention. Instructors can implement effective assessment design into the course structure and students can learn to take an active role in learning and testing.
A key message in applying cognitive principles to instructional design is that both instructors and students have important parts to play in developing habits that promote durable learning.
Academic year 2017-18
Tuesday, April 10, 2018
Room: TSH B105
Speaker: Dr. Erin White, Post-Doctoral Research Fellow, Neurosciences and Mental Health, The Hospital for Sick Children in Toronto
Title: Developing Language Networks in the Brain
During speech comprehension, how is it that we can create a meaningful representation of what was said, when different features of speech are processed by separate brain areas and on different timescales as the signal unfolds? During reading, how do we integrate visual, phonological and semantic information (recognition of letters, their sounds and the meanings of individual words) to build a meaningful representation of what we read? This is the so-called “language binding problem,” and it continues to be a central question in the neuroscience of language.
In this talk I will present evidence to suggest that functional connectivity (synchrony in the phase of EEG oscillations) can provide some answers. Typically developing children (ages 4-17) and adults (ages 18-36) engaged in various language and reading tasks, while EEG was recorded. Our results suggest that the brain integrates language information by coordinating the rhythm of neuronal activity among distributed brain regions, allowing these regions to communicate in “functional” networks. As children develop, the frequency used for this communication, its timing, and the brain areas involved, change to permit more efficient and precise language processing. This work could provide new insights into the mechanisms of various language and learning difficulties, hopefully leading to more targeted educational and remediation programs.
Wednesday, January 24, 2018
Room: BSB B103
Speaker: Dr. Aline Godfroid, Associate Professor of TESOL and Second Language Studies, Michigan State University
Title: Attention in second language acquisition: Towards an explanatory model
Attention has occupied a central place in theories of second language acquisition (SLA), dating back at least to Schmidt’s noticing hypothesis (Schmidt, 1990). The noticing hypothesis states that attention to language form, coupled with a low level of awareness, enables the representation of these forms in working memory, which may then give rise to more durable learning. Different methodologies have been proposed over the years to measure attention, including circling or underlining (e.g., Izumi, Bigelow, Fujiwara, & Fearnow, 1999), note taking (e.g., Izumi, 2002), and think-aloud protocols (e.g., Alanen, 1995). More recently, eye tracking—the real-time registration of a participant’s eye gaze—has emerged as a particularly sensitive measure of learner attention (Godfroid, Boers, & Housen, 2013), extending the measurement of the eye gaze as an index of overt attention in other disciplines (Wright & Ward, 2008). In this talk, I will present an overview of the expanding field of eye-tracking research on learner attention.
Originally framed in terms of the noticing hypothesis (Godfroid, Housen, & Boers, 2010; Smith, 2010), work on attention in SLA signaled a new direction in second-language eye-tracking research, with a goal of linking processing and acquisition. Attentional processing has traditionally been observed under incidental learning conditions, meaning the participants in an eye-tracking study engage in a natural, meaning-focused language task (e.g., reading a text, chatting with an interlocutor). Unbeknownst to them, the task contains learning targets (e.g., novel words or grammar) or language-related episodes (i.e., feedback) that can help advance their knowledge. Eye-tracking research has shown that language learners generally do attend to these target forms in the input; moreover, length of processing (as a measure of attention) is positively related to the learners’ performance on surprise vocabulary or grammar post-tests (Godfroid et al., 2013, 2017; Godfroid & Uggen, 2013; Mohamed, 2017; Pellicer-Sánchez, 2016; Smith, 2012). In an expansion of this basic paradigm, researchers have now begun to manipulate task instructions in an effort to compare incidental and intentional learning conditions (Choi, in preparation) or implicit and explicit instruction (Cintrón-Valentín & Ellis, 2015; Indrarathne & Kormos, 2017a, 2017b; Issa & Morgan-Short, forthcoming) more directly. Results have generally favored more explicit types of instruction (Choi, in preparation; Indrarathne & Kormos, 2017a, 2017b) and suggest the role of attention in the learning process may be causal (Choi, in preparation). Taken together, this growing body of eye-tracking research has the potential to corroborate empirically what theorists have posited for decades, namely that attention is pivotal to adult second-language learning.
3-Hour Workshop with Dr. Aline Godfroid
Date: Thursday January 25, 2018
Time: 9:30am to 12:30pm
Room: BSB 117
Title: Experimental tasks and paradigms in second-language vocabulary learning
A growing number of researchers investigate aspects of second-language vocabulary learning as a cornerstone of learning another language. This workshop introduces selected topics and tasks that have shaped the vocabulary research agenda in recent years and are likely to continue doing so. Fundamental to a discussion of vocabulary tasks and paradigms is the distinction between intentional and incidental learning (e.g., Hulstijn, 2001), which refers to whether or not participants are explicitly informed that their task is to learn new words. To some extent, the intentional-incidental distinction mirrors vocabulary learning that takes place in instructed and naturalistic contexts, respectively, such as the language classroom and language learning while immersed in a foreign country. I will present an overview of natural language tasks that have been used to study vocabulary learning under incidental conditions and present the major questions that have guided research in this area. For intentional learning conditions, the paired-associates learning paradigm remains the gold standard for lab-based research due to its relative ease of implementation and flexibility of use. During the workshop, I will present an overview of the methodological decisions that need to be made when designing a paired-associates learning experiment. Finally, in the area of vocabulary assessment, the multicomponential nature of vocabulary knowledge (Nation, 1990, 2001, 2013) is now well recognized, as seen in the use of multiple tests of word form, meaning, and use. Even so, these tests continue to be primarily explicit-declarative (e.g., recognition or recall tests), leaving out other dimensions of lexical knowledge. In the final part of the workshop, I will present a selective overview of real-time methodologies—reaction time measurement, priming, and eye-movement recordings—that hold promise for measuring implicit-tacit or procedural vocabulary learning and knowledge (Godfroid, under review).
The goal of this workshop is to provide a broad overview of issues in vocabulary studies combined with in-depth discussion of selected techniques to promote well-informed and well-designed research studies.
Godfroid, A. (under review). Sensitive measures of vocabulary knowledge and processing: Expanding Nation’s framework.
Hulstijn, J. H. (2001). Intentional and incidental second-language vocabulary learning: A reappraisal of elaboration, rehearsal and automaticity. In P. Robinson (ed.), Cognition and second language instruction (pp. 258-286). Cambridge: Cambridge University Press.
Nation, I. S. P. (1990). Teaching and learning vocabulary. New York: Newbury House.
Nation, I. S. P. (2001). Learning vocabulary in another language (1st ed.). Cambridge: Cambridge University Press.
Nation, I. S. P. (2013). Learning vocabulary in another language (2nd ed.). Cambridge: Cambridge University Press.
Wednesday, February 7, 2018
Room: DSB B105
Speaker: Dr. Nicholas Welch, Post-doctoral fellow in Dr. Ivona Kučerová’s Syntax Lab, McMaster University
Title: Linguistic fieldwork on threatened languages
Language endangerment and revitalization are much in the news of late. The critical state of thousands of the world’s languages raises significant practical, political and ethical questions for field linguists. Given that a trained linguist can be a valuable resource for the revitalization of a language under threat, are we justified in investigating purely theoretical issues rather than devoting ourselves full-time to language preservation? To what degree does independent linguistic research aid a language community, and to what degree is it a species of neo-colonial paternalism? If, as has been postulated, all human languages share certain fundamental architectural properties, what can the study of a small language bring to the investigation of these properties? How can one do effective research on a language with extremely few speakers?
I will discuss these issues and others, and conclude that study of endangered or neglected languages can have profound impact on the field of linguistics as a whole and on our understanding of the possible properties of human language, that responsible field linguistics entails devoting time and effort both to theoretical questions and to language preservation, and that these goals are best achieved by working closely with speakers and communities in a framework where the needs of the community are key drivers of the research agenda. Furthermore, this framework can make the work of the linguist significantly easier and has the potential to open lines of investigation that might otherwise be overlooked.
Wednesday, March 14, 2018
Room: KTH 109
Speaker: Lance L. Hawley, Ph.D., C. Psych. (Assistant Professor), is the Clinical Lead (Outpatient Psychological Services) and Co-Director of Training for the Frederick W. Thompson Anxiety Disorders Centre at Sunnybrook Health Sciences Centre. Dr. Hawley is an assistant professor in the Department of Psychiatry at the University of Toronto, and associate graduate faculty at the University of Toronto, Scarborough. He is an Associate Editor for the peer-reviewed journal Mindfulness (Springer Publications). Dr. Hawley previously worked as a staff clinical psychologist for the Mood and Anxiety Outpatient Service and the Psychological Trauma Program at the Centre for Addiction and Mental Health. His clinical focus involves providing effective individual and group psychotherapy treatment to adult outpatients experiencing mood and anxiety disorders. His research focus involves understanding cognitive mechanisms underlying optimal treatment response, using longitudinal statistical modelling approaches. Dr. Hawley has led professional training workshops and has provided clinical supervision to mental health professionals involving the treatment of mood and anxiety disorders using Cognitive Behavioral Therapy (CBT) and Mindfulness Based CBT (MBCT) approaches. He completed his clinical training in university and medical centres in Waterloo, Montreal, Hamilton and Toronto.
Title: “Mindfulness Based Interventions for Obsessive Compulsive Disorder”
Meta-analyses demonstrate the efficacy of mindfulness-based interventions (MBIs), such as Mindfulness-Based Cognitive Therapy (MBCT), across a broad range of outcomes in clinical and non-clinical samples, including reducing stress, reducing depressive symptoms, and reducing risk of relapse in recurrent depression (e.g., Chiesa & Serretti, 2009; Hofmann, Sawyer, Witt, & Oh, 2010; Piet & Hougaard, 2011). Although Cognitive Behavior Therapy (CBT) is the most efficacious treatment intervention for Obsessive Compulsive Disorder (OCD), a growing literature indicates that mindfulness-based approaches can be beneficial for managing acute mood and anxiety symptoms as well as for reducing relapse risk following treatment.
This lecture will involve an interactive discussion of mindfulness-based interventions for OCD, considering how mindfulness concepts may help clients better manage their symptoms. We will also discuss our longitudinal study, which examines the benefits of using a consumer-grade EEG-based biofeedback device (called “Muse”) that allows clients to engage in home-based mindfulness meditation practices. Specifically, this study will investigate the effects of meditation home practice on symptom alleviation, as related to specific OCD-related cognitive processes (e.g., meta-cognition, strategic vs. non-strategic mind wandering). EEG correlates of “mind wandering” will be examined in relation to symptom severity and cognitive variables. This EEG analysis will explore spectral band power differences in alpha waves, which have been closely associated with meditative state changes resulting from mindfulness meditation.
Further, we will discuss linguistic analyses that examine subjects’ perceptions of their meditation practice. This involves a computational linguistic approach to identifying recurring semantic themes involving experiential acceptance and decentering, related to the “three circles of mindfulness inquiry,” following mindfulness practice over the course of 8 weeks. It utilizes a “network analysis” statistical approach to determine how semantic themes (e.g., approach, avoidance, valence, arousal) may be associated with OCD symptom change.
2017-18 Lecture Series Line Up
Please note the location for each talk
This schedule will be updated as information becomes available. Graduate students taking CogSciL 725 and 726 should also consult the course website on Avenue to Learn. If you have any questions, please contact the Graduate Chair, Dr. Elisabet Service at firstname.lastname@example.org, or the Chair of Linguistics and Languages, Dr. Magda Stroińska at email@example.com
September 13, 2017
Title: Bringing Linguistics to Work(shop)
Speaker: Dr. Anna Marie Trester
The FrameWorks Institute and CareerLinguist.com
Time: 2:30 – 4:30 pm
Bringing Linguistics to Work(shop) has been designed to help linguistics students become more aware of (and better able to show) the transferability and applicability of our skills and training in a range of professional contexts. Focusing on story as a central tool, and sharing stories of career linguists who have found innovative ways to put linguistics to work, this workshop is designed to engender a sense of ownership, agency, and creativity in thinking about careers.
October 18, 2017
Title: Characteristics and Usefulness of Phonetic Variability
Speaker: Dr. Doug Whalen
Haskins Laboratories (Yale, New Haven) http://www.haskins.yale.edu/staff/whalen.html
Time: 11:30am – 1:00pm
Location: DSB- 505
Speech is well known to be quite variable, and this variability has both impeded and informed theoretical and practical endeavors for decades. In this talk, I will outline aspects of the consistency of variability within speakers; re-examine possible differences in variability in acoustics vs. articulation; and explore the possibility that variability is useful in establishing and maintaining flexibility both in production and in understanding the speech of others. I will also discuss new analysis methods suggesting that some variability at the kinematic level may reflect increased, rather than decreased, control at a higher level. Overall, new means of collecting and analyzing large amounts of data open new avenues for understanding variability in speech.
November 1, 2017
Title: Decomposing the Frequency by Skill Interaction
Speaker: Sascha Schroeder, Max Planck Institute for Human Development, Germany
Location: CNH -103
In this talk, we discuss two alternative approaches to explaining the commonly observed frequency-by-skill interaction in visual word recognition, which refers to the fact that frequency effects are typically stronger in less-skilled readers than in more-skilled individuals. The first approach assumes that, because low-frequency words are underrepresented in smaller language samples, low-frequency words are encountered disproportionately less often by low-skill individuals than by high-skill individuals. The second approach also assumes that low-frequency words are less familiar to low-skill than to high-skill individuals, but not disproportionately so. Instead, the relationship between previous encounters with a word – which we refer to as “individual frequency” – and response accuracy/latency in visual word recognition tasks is non-linear.
In order to evaluate these two approaches, we analyzed lexical decision data for high- and low-frequency words in three different age groups (4th grade, 6th grade, and young adults). Using simulations based on corpus data from different age groups and different languages, we first show that exposure to both low- and high-frequency words increases linearly with increasing reading exposure/age, thus ruling out the first approach to explaining the frequency-by-skill interaction. Next, we introduce a new method to estimate individual frequencies, which is based on the previous reading exposure of an individual. Using this measure, we are able to estimate the (individual) frequency effects in different age groups. Combining the frequency trajectories of different age groups, we show that there is indeed a single underlying function relating (individual) frequency and lexical decision accuracy/latency, which is non-linear. This is in line with the second approach to explaining the frequency-by-skill interaction. We discuss different potential forms for the individual frequency function and their theoretical implications for theories of visual word recognition.
November 15, 2017
Topic: Brain imaging in research on language
Speaker: Dr. Elissa Asp, English & Linguistics St. Mary’s University, Halifax, NS
Time: 11:30am – 1:00pm
Location: DSB -505
In presenting a rationale for studying ‘language in its reality’ in cognitive neuroscience, Roel Willems (2015: 7) describes natural language use as ‘dirty and complicated’. He contrasts it with the ‘sterile’ samples typical of experimental work and argues that the dirty stuff is worth investigating not just because it is after all the ‘real stuff’, but also because what we learn from controlled samples in the lab may not, or may only partially, translate to ‘real language’ used by people and supported or impaired in real brains. In this talk, I explore the value and challenges of venturing into the relative wilderness of the cognitive neuroscience of natural language use through a review of imaging studies employing tasks with (varying) claims to ecological validity, such as scene description and picture naming, and through discussion of current models of the neurocognitive networks supporting lexical and syntactic representations and tasks. One goal is to show that our statistical analyses of MEG data – which cluster regions together according to their time courses – can introduce a bit of elegance and simplicity into relatively natural language data by reducing ‘functional connectivity’ to temporal co-activation across the whole time course. However, I will also show that statistical analyses can only partially tame the wilderness. Grounding in theoretical perspectives from linguistics, the neurobiology of language and other domains, as well as in psychometrics, for designing studies and evaluating results is also necessary for there to be progress in the cognitive neuroscience of natural language use.
Willems, R. (ed.)(2015). Cognitive Neuroscience of Natural Language Use. Cambridge University Press.
November 29, 2017
Title: Nominal Linkers: The Case of Ezafe in Iranian Languages
Speaker: Dr. Arsalan Kahnemuyipour, University of Toronto Mississauga
Time: 11:30am -1:00pm
Location: CNH 103
Looking across languages, we find elements in the nominal domain which seem to have no clear meaning or function. These elements are sometimes referred to as “linkers”. In this talk, I explore a particular example of this type of linker, known as the Ezafe, found in Iranian languages. The bulk of my talk is about Persian, a Western Iranian language spoken mainly in Iran, Afghanistan and Tajikistan, with the majority of the data coming from the dialect spoken in Iran. Descriptively, Ezafe is an unstressed vowel –e (-ye after vowels) which appears between a noun and its modifier (N-e Mod), and is repeated on subsequent modifiers, if they are present, except the last one (N-e Mod1-e Mod2-e Mod3). The presence of this iterative element inside the noun phrase has puzzled syntacticians for several decades. What is its function? Is it the realization of case, agreement, or something else? I start with a discussion of the distribution of Ezafe, with a special emphasis on its correlation with the order of elements in the noun phrase. I discuss several approaches to this phenomenon and argue for a roll-up movement account which takes the base order of the noun phrase in Persian to be head final, with the surface order derived via phrasal movement to specifiers of intermediate functional projections. I then explore the status of Ezafe in several other Iranian languages to see how this analysis fares with data from those languages.
Ghomeshi, J., 1997. Non-projecting Nouns and the Ezafe Construction in Persian. Natural Language and Linguistic Theory 15: 729-788.
Kahnemuyipour, A. 2014. Revisiting the Persian Ezafe Construction: A Roll-up Movement Analysis, Lingua 150, 1-24.
Larson, R., 2009. Chinese as a Reverse Ezafe Language. Yuyanxue Luncong (Journal of Linguistics) 39, 30-85. Beijing, Peking University.
Larson, R., Yamakido, H., 2006. Zazaki “Double Ezafe” as Double Case-marking. Paper presented at the Linguistic Society of America Annual Meeting, Albuquerque, NM.
Larson, R., Yamakido, H., 2008. Ezafe and the Deep Position of Nominal Modifiers. In: McNally, L., Kennedy, C. (Eds.), Adjectives and Adverbs: Syntax, Semantics and Discourse. Oxford: Oxford University Press, 43-70.
Samiian, V., 1994. The Ezafe Construction: Some Implications for the Theory of X-bar Syntax. In: Marashi, M. (Ed.), Persian Studies in North America, Betheda, MD: Iranbooks, 17-41.
Samvelian, P., 2008. The Ezafe as a Head-marking Inflectional Affix: Evidence from Persian and Kurmanji Kurdish. In: Karimi, S., Samiian, V., and Stilo, D. (Eds.), Aspects of Iranian Linguistics, Newcastle upon Tyne: Cambridge Scholars Publishing, 339-361.
2016-17 Lecture Series Line Up – please note the location for each talk
- Wednesday April 5th – 3:30 pm to 5:30 pm BSB 121 – Dr. Suzi Lima (Federal University of Rio de Janeiro & University of Toronto) – Title: “The role of count lists in the acquisition of numerals” Abstract: A central question in the literature on the development of natural number concepts is the role of count lists in this process. In this paper we compared two theoretical perspectives on this question: Carey’s cultural construction hypothesis (count lists provide the requisite placeholder structure for children to infer the relative relations between number words and acquire numbers beyond the limits of parallel individuation) and Spelke’s language combinatorics hypothesis (the human combinatorial capacity enables children to learn higher number words as denoting set sizes composed of smaller ones, e.g., “five” is “three” and “two”). To test these competing hypotheses we conducted three studies on Yudja (approximately 294 people; Brazil). Yudja children are not exposed to count lists before school age (in either Yudja or Brazilian Portuguese, which is taught at school but not spoken inside the community). Furthermore, number words are highly compositional: the higher numbers, from five to twenty, are formed by combining the word for hand or toes with the number words from one to four. If verbal count lists were unnecessary (contra Carey’s hypothesis), we would expect preschool children to be able to learn higher number words by building sets of sets from linguistic cues, as Spelke’s proposal predicts. While almost all Yudja children are monolingual, they occasionally hear adults use numbers greater than five in Brazilian Portuguese in the community (again, not in count-list form).
Once at school, children are exposed not only to Yudja count lists but also to Brazilian Portuguese ones. Studies: 20 adults (controls) and 28 children, ten not enrolled in school (4 to 7 years old; M=6.0; Stdev=0.47; 4F) and eighteen enrolled in school (8 to 13 years old; M=9.3; Stdev=1.33; 9F), participated in three tasks in Yudja and Brazilian Portuguese (all based on Wynn 1992): the recitation task (participants counted objects lined up on a table), the give-a-number task (children were asked to put N objects in a paper box; each number on a given list was tested twice, and the order of the numbers was randomized), and the point-to-x task (participants saw a pair of pictures and had to point to the one corresponding to the number asked; this task served as a reliability check of the give-a-number task). Results: The recitation task showed that preschool children could recite a count list up to 5 in Yudja. Results in Brazilian Portuguese are less clear-cut: most children could not count beyond five, but a few could count at least to 5 in BP. Our results showed that schooling clearly affected children’s numerical abilities and their development from subset-knowers to CP-knowers. In the recitation task, non-schooled children’s performance was centered on low numbers (0-5) in both Yudja and Brazilian Portuguese; schooled children performed better on higher numbers. Thus, once presented with number words in a systematic and ordered fashion (count lists), children progressed from subset-knowers to CP-knowers in both languages. Moreover, the morphological transparency of the logic of counting in Yudja number words did not facilitate children’s becoming CP-knowers in Yudja.
Our data from the give-a-number and point-to-x tasks showed that although preschool children performed better in Yudja than in Brazilian Portuguese on low-range numbers, and although they encounter both count lists (Yudja and Brazilian Portuguese) simultaneously at school, they become CP-knowers in Brazilian Portuguese before they do in Yudja. Summary: Supporting Carey’s hypothesis, our results suggest that a verbal count list is necessary for a child to transition from a subset-knower stage to a CP-knower stage, and that morphological transparency of the logic of counting does not facilitate the development of number knowledge in the early stages of acquiring those words.
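As a purely arithmetical illustration of the compositionality described above, higher numerals built from a base word plus the words one to four can be modeled as a base-five decomposition. The English glosses and the uniform use of 'hand' are hypothetical placeholders: the abstract mentions both hand and toes, and the actual Yudja forms are not given here.

```python
# English placeholder glosses; actual Yudja numeral forms are not cited
# in the abstract, so 'hand' stands in for the base word throughout.
UNITS = {1: "one", 2: "two", 3: "three", 4: "four"}

def decompose(n):
    """Gloss a numeral 1-20 as a base-five composition: as many
    'hand' units as fit, plus a remainder word from one to four."""
    if not 1 <= n <= 20:
        raise ValueError("sketch only covers 1-20")
    q, r = divmod(n, 5)
    parts = ["hand"] * q
    if r:
        parts.append(UNITS[r])
    return parts

print(decompose(7))   # ['hand', 'two']
print(decompose(13))  # ['hand', 'hand', 'three']
```

The point of the sketch is only that the logic of counting is visible in the morphology: a child who grasped the composition rule could in principle derive higher numerals without memorizing a list, which is the possibility the studies put to the test.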
- Wednesday March 22nd – 3:30 pm to 5:30 pm BSB 121 – Dr. John Anderson (York University) – Title: “Imaging the Aging Bilingual Brain” Abstract: The process of aging involves a decline of executive control, speed of processing, and memory. The neural architecture supporting these constructs similarly decays. Often, senile decline is characterized as unavoidable: if we live long enough, we will live to get dementia. There are, however, factors which reduce the rate of dementia occurrence, conferring “reserve” on those whose lifestyle choices have afforded them this extra time. Exercise, education, and maintaining strong social connections are some such factors. Being bilingual and managing two languages daily over a lifetime is also thought to be a reserve factor. This talk explores the neural underpinnings of cognitive reserve in bilinguals relative to monolingual peers using fMRI and DTI.
- Wednesday March 8th – 3:30 pm to 5:30 pm BSB 121 – Dr. Lucie Menard (UQÀM) – Title: “Multisensory speech perception and production” Abstract: In face-to-face conversation, speech is produced and perceived through various modalities. Movements of the lips, jaw, and tongue, for instance, modulate air pressure to produce a complex waveform perceived by the listener’s ears. Visually salient articulatory movements (of the lips and jaw) also contribute to speech identification. Although many studies have been conducted on the role of visual components in speech perception, much less is known about their role in speech production. However, many studies have emphasized the important relationship between the speech production and speech perception systems. If, as suggested by many researchers, perceived visual and auditory cues are not independent but instead act in synergy and complement each other, they must be involved in the speech production process. In this talk, we explore the effects of auditory and visual feedback on speech production. Congenitally blind children and adults will be considered.
- Wednesday February 8th – 3:30 pm to 5:30 pm BSB 121 – Dr. Khalil Iskarous (University of Southern California) – Title: “Discreteness and dynamics in computation: From octopus behavior to language” Abstract: What are the general computational principles involved in cognition, especially language? Research within the cognitive science and generative linguistics traditions has sought general principles of computation, which may become specialized in language. In this talk, it will be argued that similar computations may underlie three skills: octopus behavior, syntactic structure computation (asymmetric c-command), and the planning and execution of speech production. These computations are based on competition and coordination among simple computational units, which can determine outputs characteristic of a wide variety of motor, perceptual, and cognitive skills.
- Wednesday January 25th – 3:30 pm to 5:30 pm BSB 121 – Jessica Coon (McGill University) – Title: “The linguistics of Arrival: Aliens, fieldwork, and Universal Grammar” Abstract: If aliens arrived, could we communicate with them? How would we do it? What are the tools linguists use to decipher unknown languages? How different can human languages be from one another? Do these differences have bigger consequences for how we see the world? The recent science-fiction film Arrival touches on these and other real questions in the field of linguistics. In Arrival, linguistics professor Dr. Louise Banks (Amy Adams) is recruited by the military to translate the language of the newly-arrived Heptapods in order to answer the question everyone wants answered: why are they here? Language, it turns out, is a crucial piece of the answer.
- Wednesday November 23rd – 3:30 pm to 5:30 pm BSB 104 – Sid Segalowitz (Brock University) – Title: “When Do We Know the Meaning of a Word (or a Picture), and What Does this Meaning Mean?” Abstract: I want us to address the question: when does the brain show evidence of having accessed the meaning of an input at a cognitive level, even if this is well before we can report on it? Once we address this question, we are led to much more difficult questions, such as “What do we mean by ‘meaning’?” And that ultimately is what I want to address, not only with respect to words. I will present evidence from our event-related potential studies that the content of visually presented stimuli is differentiated during very early stages of processing, starting with the P100, a component that starts rising about 80 ms and peaks at about 100 ms after stimulus onset. This happens whether the stimuli are words, line drawings, or faces versus houses. Traditional models assume a linear sequence from input to decomposition to meaning. Our results, however, require a parallel-processing brain model. My question for you will be whether this has implications for linguistics.
- Wednesday November 2nd, 3:30 pm, BSB 104: Ellen Lau (University of Maryland): Title: “Neural Investigations into Syntactic and Semantic Combination: from Beginning to End” Abstract: One of the great remaining mysteries of cognitive neuroscience is how structured and temporally extended sequences like sentences are encoded and navigated in memory. Accordingly, research on the neural bases of sentence processing has begun to shift from ‘violation’ paradigms to measurement of the brain activity associated with ‘normal’ comprehension. In this talk I will discuss a series of EEG, MEG and fMRI studies that take this approach towards better understanding the basic processes supporting sentence comprehension. One set of experiments investigates whether or not a full syntactic structure is actively maintained in working memory across the course of a sentence such that activity is greatest near the end (Pallier et al., 2011). Although our ERP results from coordinated structures are consistent with such a model, effects observed in a parametric manipulation of structure in MEG and fMRI are better explained by syntactic prediction processes that occur near the beginning of the sentence. I will also discuss our investigations into a recently-introduced time-frequency approach that highlights neural activity modulated at the same rate as syntactic constituents or phrases (Ding et al. 2015). Finally, I will present results from a new fMRI experiment that asks whether particular regions selectively support the computation of argument structure by comparing lexically-matched noun phrases and verb phrases (e.g. the buried treasure vs. buried the treasure).
- Wednesday September 28, 3:30 pm, BSB 104: Lyn Turkstra (McMaster University): Title: “Measuring social cognition in spoken and written communication” Abstract: The term social cognition refers to primate cognitive functions that are specifically engaged in social interactions. There is growing evidence that social cognition is impaired in many adults with neurological communication disorders, and social cognition theories and research have profoundly influenced our understanding of these communication impairments. Most social cognition research has focused on the ability to “read the minds” of others based on their facial expressions and other non-verbal cues. This talk will present evidence that information about others’ minds also can be conveyed by subtle verbal cues, adding to our understanding of the powerful ways in which language shapes our social world.
Past Lectures, 2015-16
- Tuesday April 12, 11:00 am, TSH 203: Phaedra Royle (Université de Montréal): Title: “Specific language impairment in French: Verb morphology” Abstract: Specific language impairment (SLI) is characterized by persistent difficulties affecting language abilities in otherwise normally developing children (Leonard, 2014). It remains difficult to identify young children with SLI in French. A previous study showed that the correct production of the passé composé (perfect past) in French is related to conjugation group (regular vs irregular verbs) in typical children but not in those with SLI (Royle and Elin Thordardottir, 2008). However, in that study participants were very young and showed floor responses, and the verbs were not controlled for their morphophonological properties. We have recently recreated this experiment with older children and with verbs in each of four past participle categories (ending in –é, –i, –u, and Other irregulars). Children with SLI in preschool or first grade were tested using an Android application, Jeu de verbes (Marquis et al, 2012). We compared their results and error types to those of control children. Results show significant effects of linguistic group (SLI < control) and verb group (é = i = u > Other), as well as an interaction between these factors: the performance of children with SLI did not vary according to verb conjugation group (é = i = u = O), reflecting a lack of sensitivity to inflection patterns, while control children showed this sensitivity (é = i = u > O). Children with SLI also showed different non-target productions compared to controls, with more use of the present tense in past-tense contexts. We conclude that children with SLI do not master this morphosyntactic process in the same way typical French children do.
- Monday, April 11, 4:00 pm, TSH 201: Karsten Steinhauer (McGill University): Title: “Factors modulating ERP signatures of L2 acquisition and L1 attrition” Abstract: Event-related brain potentials (ERPs) provide an excellent method to study the temporal dynamics of language processing in real-time. This includes the fascinating neurocognitive changes that occur while a new language is being acquired. In the past 20 years, ERP research investigating sentence processing in second language (L2) learners has led to a number of models that try to address these neural changes and the role of modulating factors such as age of acquisition (AoA), language proficiency, first language (L1) background, the type of language exposure (e.g., implicit versus explicit training environments), as well as inter-individual differences in learning trajectories and processing preferences. An important limitation of this research has been that AoA and L2 proficiency levels are typically (negatively) correlated in L2 learners, such that AoA effects attributed to a “critical period” may instead simply reflect their proficiency level. Attriters, whose late-acquired L2 has become the dominant language, may shed important new light on the respective role of these factors. However, whether and to what extent L1 attrition is characterized by similar neurocognitive changes, and whether such changes may mirror those in language acquisition – but “in reverse” – remains an open empirical question that only a few recent investigations have begun to address. My talk will first provide an overview of recent findings and controversies in ERP research on L2 acquisition, especially in the domain of morpho-syntactic processing. The second part will focus on a series of large-scale ERP studies from our lab that probe brain signatures for lexical-semantic and morpho-syntactic processes in Italian immigrants who have lived for many years in Montreal (Canada).
These participants describe English as their predominant language and report problems in their L1 (Italian). ERP online data have been collected for both their L1 (Italian) and their L2 (English) and are compared to the ERP profiles of English and Italian monolinguals, as well as to English-Italian bilinguals who acquired the two languages in the reverse order. Among other advantages, this complex design allows us to investigate how factors such as (i) being “bilingual” (versus monolingual), (ii) age of language acquisition (AoA), and (iii) proficiency levels in each language, interact and modulate neurocognitive mechanisms underlying online language processing.
- Wednesday, March 23, 3:30 pm, TSH 203: David Poeppel (Max Planck Institute and New York University): Title: “Speech is special and language is structured” Abstract: I discuss two new studies that focus on general questions about the cognitive science and neural implementation of speech and language. I come to (currently) unpopular conclusions about both domains. Based on experiments using fMRI, and exploiting the temporal statistics of speech, I argue for the existence of a speech-specific processing stage and a specialized neuronal substrate that has the appropriate sensitivity and selectivity for speech. Based on experiments using MEG, I discuss the basis for abstract, structural processing. These results demonstrate that, during listening to connected speech, cortical activity of different time scales is entrained, concurrently, to the time course of linguistic structures at different hierarchical levels. Critically, entrainment to hierarchical linguistic structures is dissociated from the encoding of acoustic cues and statistical relations between words. The results demonstrate syntax-driven, internal construction of hierarchical linguistic constituent structure via entrainment of cortical dynamics. My conclusions — that speech is special and language structure driven — provide new neurobiological provocations to the prevailing view that speech perception is ‘mere’ hearing and language comprehension ‘mere’ statistics.
- Wednesday, February 24, 3:30pm, TSH 203: Guillaume Thomas (University of Toronto): Title: “Tense on Nouns: evidence from Mbya Guarani” Abstract: In English and many Indo-European languages, tense is a functional category that is largely realized as verbal inflection. Because of this fact, most syntactic theories of tense from Aristotle to Pesetsky and Torrego (2004) have characterized it as an inherently verbal category. However, this conclusion has been challenged by cross-linguistic studies that look beyond Indo-European languages. In particular, Nordlinger and Sadler (2004) have argued that tense is attested and interpreted in the nominal domain in numerous languages. Guarani languages (Tupi Guarani: Argentina, Bolivia, Brazil and Paraguay) have figured prominently in the ongoing debate on the putatively verbal nature of tense. Although Paraguayan Guarani was presented as a nominal tense language in Nordlinger and Sadler’s (2004) typology, Tonhauser (2006) argued that Nordlinger and Sadler’s analysis of Paraguayan Guarani temporal markers was misguided. Tonhauser’s arguments were in turn challenged by Thomas (2015), who argued that the interpretation of nominal temporal markers in Mbya Guarani is strikingly similar to that of English tenses, once pragmatics is factored into their analysis. In this talk, I will review existing arguments for and against the analysis of Guarani temporal markers as tenses, and I will present new arguments in favor of their analysis as nominal tenses.
- Wednesday, January 27, 3:30pm, TSH 203: Gary Libben (Brock University): Title: “Morphological Structure and Cognitive Function” Abstract: Words such as mouse, screen, computer, monitor, keyboard, and trackpad all describe things that we associate with digital technology. For most language users, they seem to be examples of a single language structure—the word. Yet, for many morphologists, they are quite different. The words mouse and screen are monomorphemic, the word computer is derived, the word monitor contains a suffix, and the words trackpad and keyboard are compounds. A great deal of psycholinguistic research has addressed the extent to which these differences play a role in online processing and what consequences the possible answers may have for our understanding of cognitive representation and processing. In this presentation I propose that morphological structure is fundamentally a psychological phenomenon that is subject to variation within an individual as a result of specific task demands and experience over time. This view has two key components: (1) Morphological Transcendence: the claim that the representation of words in the mind changes through the lifespan as a result of the experience that an individual language user has with processing specific words and morphological families. (2) Morphological Superstates: the claim that morphological constituents exist psychologically in a morphological superstate up to the point at which they become measurable through acts of language production or comprehension. I discuss these claims with respect to data from English, French, German, and Hebrew.
- Wednesday, January 20, 3:30pm, TSH 203: Margaret Grant (University of Toronto): Title: “Ambiguity and Incrementality in Sentence Processing” Abstract: Uncovering the nature of ambiguity resolution during comprehension has been a central project of the field of psycholinguistics. This aim has persisted because ambiguity resolution has critical implications for models of sentence processing in general. In this talk, I will bring together two current lines of research on ambiguity resolution, with a focus on sentence comprehension during reading. The first line of research provides a novel direct comparison of the processing of structural and referential ambiguities. These two ambiguity types have been extensively studied in separate literatures, with the two fields of research arriving at opposite conclusions. Evidence from the processing of structural ambiguities, such as ambiguous modifier attachment, favors models in which a single analysis of ambiguous material is adopted without a cost to processing (e.g., Traxler et al., 1998; van Gompel et al., 2001). This evidence stands in contrast to models in which multiple analyses are simultaneously adopted and compete for selection (e.g., MacDonald et al., 1994). Contrary to the literature on attachment ambiguities, competition has been observed between available referents in pronoun resolution (e.g., Badecker & Straub, 2002). I will present a series of studies using a variety of methods, including eye movements during reading, self-paced reading and an ambiguity judgment task, to show that the separation in the literature between these two ambiguity types is perhaps misleading. While there is a shift in results based on differences in the reading task, both attachment and pronoun ambiguities show a similar processing profile when compared directly. The second line of research investigates the way that the processor reacts to semantic ambiguity.
This new work examines the processing of Determiner Phrases that are ambiguous between an individual interpretation and an amount/degree interpretation (e.g., the pizzas in the pizzas would be tasty food for the hungry students vs. the pizzas would be enough to feed the hungry students). The results of a study of eye movements during reading suggest that the processor immediately commits to a single interpretation of the DP, with the default being determined by properties of the DP itself. I will discuss these findings in the light of semantic theories of degree/individual polysemy (e.g., Rett, 2014) and in light of previous psycholinguistic findings on polysemy and other semantic ambiguities (e.g., Frazier & Rayner, 1999; Frisson 2009). Taken together, these studies on a broad range of ambiguity types suggest that the processor may exhibit different behavior in handling one type of ambiguity given a change in task demands, and that under equivalent experimental conditions, different ambiguity types may or may not give rise to similar processor behavior.
- Wednesday November 25, 3:30 pm, TSH 203: Philip J. Monahan (Centre for French and Linguistics, University of Toronto Scarborough, Department of Linguistics, University of Toronto): Title: “Phonology as the Basis for Predictions: Evidence from perceptual and neurophysiological measures” Abstract: Despite significant variation in the speech signal, we comprehend spoken language with little effort. The responsible perceptual and brain mechanisms, however, remain poorly understood. First, using perceptual and neurophysiological measures, I present data suggesting that only certain features serve as the basis for predicting the speech signal. In particular, I present data from a segment identification task which suggest that [+voice] segments allow English participants to predict that the following coda segment will also be [+voice]. Then, I present data from a pair of MEG experiments supporting an underspecified representation for mid vowels in American English. In particular, mid-vowel standards showed reduced oscillatory power in the pre-stimulus beta-frequency band (18-26 Hz) compared to high-vowel standards. Second, I argue that listeners are sensitive to phonological long-distance dependencies during perception. Using Basque sibilant harmony as the test case, I present data from both behavioural methods and electroencephalography (EEG). These results suggest that listeners use phonological knowledge as a source for their predictions and that evidence of these predictions appears in early brain responses. Practically, this work demonstrates that theoretical concepts can be used in conjunction with an array of methods to address long-standing questions in speech perception. Moreover, these results suggest that listeners use their rich phonological knowledge predictively during online comprehension, pointing toward a class of models that posit prediction and feedback.
- The lecture scheduled for Wednesday October 28, 2015 has unfortunately been CANCELLED.
- Wednesday, October 21, 3:30 pm, TSH 203: Linnaea Stockall (Queen Mary, University of London): Title: “Solving Humpty-Dumpty’s Problem: how we put morphologically complex words back together again” Abstract: Over the past 15 years, considerable evidence from a range of different languages and methodologies has converged to show that the early stages of visual word recognition involve a mechanism of form-based morphological parsing, which operates across all potentially morphologically complex words, regardless of formal or semantic opacity (Rastle and Davis 2008, Lewis et al 2011, Royle et al 2012, Fruchter et al 2014, inter alia). Comparatively little attention, however, has been focused on how linguistic processing proceeds once morphological constituents have been identified.
In this talk I’ll discuss the results of a number of recent and ongoing experiments using a range of methods to investigate how we rapidly access information about the constituents of morphologically complex words, and how we make use of this information to reassemble the pieces and evaluate their syntactic and semantic wellformedness. I’ll focus much of the talk on ‘fresh from the lab’ data from a project with Alec Marantz & Laura Gwilliams (NYU) and Christina Manouilidou (UPatras) that we are just now analysing, in which we are investigating the neural spatio‐temporal dynamics of access to the lexical category vs. argument structure representations of verbal stems. I’ll argue that by focusing on the apparently simple question of how we detect and make use of information about morphological constituents, we can gain significant insight into the overall architecture of the human linguistic system.
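The form-based parsing stage described above, which segments any potentially complex word regardless of opacity, can be caricatured with a purely orthographic affix-stripper. The suffix and stem lists below are tiny stand-ins, not a real lexicon, and the function is a sketch of the idea rather than any model from the talk.

```python
# Tiny stand-ins for a real lexicon and affix inventory.
SUFFIXES = ["ness", "able", "ing", "er"]
STEMS = {"teach", "corn", "dark", "walk", "read"}

def parse(word):
    """Return every form-based (stem, suffix) segmentation of `word`.

    The check is purely orthographic, so a semantically opaque form
    like 'corner' is segmented just like transparent 'teacher',
    mirroring the blind early decomposition stage."""
    return [(word[: -len(suf)], suf)
            for suf in SUFFIXES
            if word.endswith(suf) and word[: -len(suf)] in STEMS]

print(parse("teacher"))   # [('teach', 'er')]
print(parse("corner"))    # [('corn', 'er')]  (opaque, still parsed)
print(parse("banana"))    # []
```

On this view, the interesting work, evaluating whether the recovered pieces actually compose syntactically and semantically, happens only after this cheap form-based segmentation, which is exactly the later stage the talk focuses on.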
Past Lectures, 2014-15
- Wednesday, October 22, 3:30pm, MMC BSB 108: Lisa deMena Travis (McGill University) Title: Macro- and micro-parameters within and across language families Abstract: Languages vary in large and in small ways, and linguists can undertake macro-comparative work (e.g. comparing English and Mohawk) or micro-comparative work (e.g. comparing Northern Italian dialects). Often macro-comparative work is done across language families with the goal of uncovering macro-parameters, while micro-comparative work is done within a language family with the goal of uncovering micro-parameters. In this research, I undertake micro-comparative work across language families (Austronesian and Mayan) to better understand a possible macro-parameter (VP-fronting). More specifically, I hypothesize that the co-occurrence of clefting wh-constructions with V-initial languages can be explained through a macro-parameter of VP-fronting, which accounts both for V-initial word order and for predicate fronting in clefting constructions. Within this macroparametric study, I investigate the status of clefting structures in an SVO language (Bahasa Indonesia) and micro-variation within clefted structures, comparing two dialects of Malagasy, an Austronesian language, to Kaqchikel, a Mayan language. The goal is to understand some of the details of these clefting structures that allow them to be reanalyzed, leading to different settings of the macro-parameter. I argue that it is the status of the clefting particle that allows shifts in the syntactic interpretation of the structure, leading to different choices in the macro-parameter.
- Wednesday, November 19, 3:30pm, MMC BSB 108: Lisa Archibald (University of Western Ontario) Title: Developmental Differences in Language and Immediate Memory Processes: Implications for Children with Language Learning Disabilities Abstract: Some children struggle to learn their first language despite otherwise typical development. Such children, however, do not form a cohesive group. They have difficulty with varying aspects of language, in diverse circumstances, and at different stages of development. Research conducted in the Language and Working Memory Lab has been aimed at improving our understanding of the complex basis of language learning by examining the interdependency of two cognitive systems, working memory and the developing linguistic system. Taking an epidemiological approach, we have identified groups of children with impairments in language and/or working memory and examined the differential impacts of these impairments on language processing tasks such as sentence repetition and grammaticality judgment. As well, pilot work has demonstrated both domain-specific and profile-specific treatment outcomes for children with different language and working memory profiles. These results clearly underscore the potential benefits of developing a better understanding of the underlying cognitive limitations associated with impaired functioning in individual children.
- Wednesday, November 26, 3:30pm, MMC BSB 108: Michela Ippolito (University of Toronto) Title: Similarity in counterfactuals: grammar and discourse Abstract: In this talk I investigate the context-dependence of counterfactual conditionals and how the context constrains similarity in selecting the set of worlds needed to arrive at their correct truth-conditions. The present proposal is that similarity is constrained by what I call Consistency and Non-Triviality. Assuming a model of discourse along the lines proposed by Roberts (1996) and Büring (2003), according to which conversational moves are answers to often implicit questions under discussion, the idea behind Non-Triviality is that a counterfactual statement answers a conditional question under discussion and is therefore required to make a non-trivial assertion. I show that non-accidental generalizations, which have often been taken to play an important role in the interpretation of counterfactuals, are crucial in selecting which conditional question is under discussion, and I propose a formal mechanism for identifying the relevant question under discussion.
- Wednesday, December 3, 3:30pm, MMC BSB 108: Elizabeth Cowper (University of Toronto) Title: Locative Have: An applicative account Abstract: This talk discusses work in progress. Building on earlier work by Brunson and Cowper (1992), and more recent work by Bjorkman and Cowper (2013), I propose a new analysis of sentences like those in (1) and (2). (1) The tree has a bird’s nest in it.
(2) The garden has had many flowers planted in it. I argue that ‘have’ spells out a peripheral applicative head (Kim 2011) above Event, the head hosting viewpoint aspect, and that the subject merges in the specifier of the applicative head before moving to spec/T. The applicative head assigns an affected interpretation to its specifier. This account correctly predicts a) the interactions between ‘have’ and the spellout of other auxiliaries in the clause, and b) the special meaning associated with the construction. I will conclude with some thoughts on why the pronouns in (1) and (2) cannot be replaced with anaphors, and on the question of how many different heads are spelled out by “have”. Brunson, Barbara, and Elizabeth Cowper. 1992. “On the topic of ‘have’.” TWPL.
Bjorkman, Bronwyn, and Elizabeth Cowper. 2013. “Inflectional shells and the syntax of causative ‘have’.” CLA Proceedings.
Kim, Kyumin. 2011. “External Argument Introducers.” Ph.D. Thesis, U of Toronto.
- Wednesday, January 21, 3:30pm, DSB/505: Adrian Staub (University of Massachusetts, Amherst) Title: What does cloze probability measure? Response time and modeling evidence. Abstract: It is widely accepted that a word’s predictability influences on-line comprehension, as a more predictable word elicits shorter reading times and a smaller N400 than a less predictable one. The predictability of a word is generally operationalized in terms of cloze probability, i.e., the proportion of subjects in an off-line production task who provide the word as a continuation of the sentence. The present work investigates the process by which subjects produce a cloze response, ultimately challenging the assumption that cloze probability can be equated with predictability. In two large-scale cloze experiments, subjects read a cloze prompt in RSVP format, and their response time (RT) to initiate a verbal response was recorded. Cloze probabilities closely replicated previous norms with the same items from a standard untimed task. In both experiments, higher probability responses were issued faster than lower probability responses. In both experiments there was also a sizable, and arguably counter-intuitive, relationship between item constraint (i.e., the probability of an item’s modal response) and RT: A low probability response was issued faster in a more constraining context. We show that these two RT effects, as well as other details of the data pattern, naturally emerge from a simple evidence accumulation model. Potential responses independently race toward a threshold, with the elicited response being the first to reach the threshold. The model assumes variability between potential responses in their mean time to reach the threshold, as well as within-response trial-to-trial variability. Increased item constraint is modeled as arising from increased between-response variability in finishing time. 
We argue that if cloze responses are produced by an activation-based race process, it is far from obvious that cloze probability is an appropriate measure of speakers’ subjective probability distribution over upcoming words. Moreover, this model of how cloze responses are produced makes comparison of cloze probabilities between items less meaningful than is usually assumed, as the relationship between a word’s underlying activation and cloze probability is not even monotonic when comparing across items.
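The accumulation-to-threshold race described in this abstract can be illustrated with a toy simulation. This is a minimal sketch under invented parameters: the candidate words, their mean finishing times, and the lognormal noise are assumptions for illustration, not the fitted values of the authors' model.

```python
import random
import statistics

def simulate_cloze(mean_log_times, sigma=0.4, n_trials=20000, seed=1):
    """Race-model sketch: each candidate response's finishing time is
    lognormal around its own mean; the fastest candidate on a trial is
    the response produced. Returns {word: (cloze probability, mean RT)}."""
    rng = random.Random(seed)
    wins = {w: [] for w in mean_log_times}
    for _ in range(n_trials):
        times = {w: rng.lognormvariate(m, sigma) for w, m in mean_log_times.items()}
        winner = min(times, key=times.get)  # first to reach the threshold
        wins[winner].append(times[winner])
    return {w: (len(ts) / n_trials, statistics.mean(ts) if ts else float("inf"))
            for w, ts in wins.items()}

# Hypothetical candidates: a lower mean log-time means higher activation.
result = simulate_cloze({"dog": 0.0, "cat": 0.3, "fox": 0.6})
```

On these made-up parameters, the more activated candidate both wins more often (higher cloze probability) and, when it wins, wins faster, reproducing the qualitative RT pattern the abstract reports.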
- Wednesday, February 25, 3:30pm, DSB/505: Debra Titone (McGill University) Title: What the eyes reveal about first and second language reading: Explorations of cross-language competition, emotion and individual differences. Abstract: Eye movement investigations have been crucial for building a deep understanding of the linguistic processes and representations that support first and second language reading. Eye movement methods are ideally suited to this task: they have great temporal precision, allow researchers to observe language processes as they naturally unfold, and enable elegant gaze contingent manipulations that address theoretical questions with great rigor and precision. In this talk, I present data from my laboratory investigating a variety of questions of relevance to first and second language reading processes. These include the factors that modulate the real-time comprehension of language-unique words, words that straddle a bilingual’s two known languages (e.g., CHAT, which means cat in English and a conversational exchange in French; PIANO, which refers to the same musical object in both English and French), and words that vary with respect to their emotional charge (e.g., SEX vs. SKY). Across studies, we are particularly interested in how differences among bilinguals in L2 ability and other cognitive capacities (e.g., executive control) affect bilingual reading performance.
- Wednesday, March 4, 3:30pm, TSH 203: Laura Sabourin (University of Ottawa) Title: Language Processing in Bilinguals: Evidence from Lexical Organization and Cognitive Control. Abstract: Much of the current research in my lab is aimed at determining the effects of age of immersion (AoI), manner of acquisition (MoA), and proficiency on how bilinguals (and language learners) process language. Initial research data at the lexical level shows that, for native speakers of English with L2 French, an early AoI is required for lexicons to become integrated (Sabourin et al., 2014a). However, in a preliminary follow-up study looking at native French speakers with L2 English, it appears that even a late age of L2 immersion can result in integrated lexicons if the MoA is more naturalistic (Sabourin et al., 2014b). Previous research on cognitive control in bilinguals has not always shown a bilingual advantage (Costa et al., 2009), and its existence has been debated (Paap & Greenberg, 2013). In our investigations aimed at accounting for the conflicting results found in the literature (Sabourin & Vinerte, 2014), we investigated participant grouping and task difficulty effects on the Stroop task (which measures cognitive control). While we find no differences between simultaneous and early sequential bilinguals (two groups traditionally both classified as “early” bilinguals) when the task uses only one language, we do find a significant difference between the two groups when the task mixes both languages. Based on the data collected to date in our lab (including studies at other levels of linguistic processing), I hypothesize that while for many bilingual and language learning groups AoI is often the most important factor in determining how languages are processed, there is an important role for factors such as MoA and the context of bilingualism.
- Wednesday, April 1, 3:30pm, TSH 203: Jon Sprouse (University of Connecticut) Title: Experimental syntax and three debates in linguistics. Abstract: Over the past 15 years or so, there has been a substantial push within theoretical syntax to adopt more formal experimental methods for data collection. The obvious question to ask about any method is: what does it buy us in terms of theory construction and evaluation? In this talk, I would like to review some contributions that formal experimental methods have made to three debates within the field: (i) Is the data underlying syntactic theory valid?, (ii) Can complex syntactic constraints be reduced to independently motivated aspects of sentence processing?, and (iii) Is there a role for innate, domain-specific knowledge in learning syntactic behaviors? My hope is that each of these topics will not only show the value of formal methods for linguistic theory, but also point the way to future work on these questions.
Past Lectures, 2013-14
Unless otherwise indicated, all Fall talks (i.e., October 9 to December 4) take place in TSH-201, and all Winter talks take place in DSB-505.
- Wednesday, October 9, 3:30pm: Sali A. Tagliamonte (University of Toronto)
What’s the community got to do with it?
Language is inherently variable. People alternate between two or more ways of saying the same thing in every conversation and in all communities. This variation exists at all levels of grammar from lexical choices (e.g. couch vs. sofa) to pronunciation differences (e.g. talking vs. talkin’) to morphological alternations (e.g. go slow vs. go slowly) to discourse-pragmatic phenomena (e.g. I love it vs. I like love it.). Why do people do this?
In this presentation, I outline Variationist Sociolinguistics, an area of Linguistics that studies this variation and analyses it statistically, comparatively and in reference to the social context in which it occurs (e.g. Tagliamonte, 2012, in press). The explanation for this behavior necessarily lies in the linguistic system, but it also is highly influenced by external aspects of its use (Labov, 1970; Sankoff, 1980). In order to tap the system underlying this variation, analyses must be capable of modelling the simultaneous application of social and linguistic predictors and their interaction (Cedergren & Sankoff, 1974; Labov, 1994:3). This type of behavior in language may be stable, but it may also be changing, often rapidly (Labov, 2001). This means that historical, cultural and regional information may be required to interpret its use. Comparative techniques assist the analyst in evaluating similarities and differences across relevant categorizations of the data (e.g. age, sex, ethnicity, social network) (Tagliamonte, 2002). Taken together, the methodological procedures and statistical techniques of Variationist Sociolinguistics, as I will exemplify in this presentation, provide insights into the grammatical system as well as its social embedding, and therefore rich and viable means for understanding and interpreting language behavior in socially defined populations.
Cedergren, Henrietta J. & Sankoff, David (1974). Variable rules: Performance as a statistical reflection of competence. Language 50(2): 333-355.
Labov, William (1970). The study of language in its social context. Studium Generale 23(1): 30-87.
Labov, William (1994). Principles of linguistic change: Volume 1: Internal factors. Cambridge and Oxford: Blackwell Publishers.
Labov, William (2001). Principles of linguistic change: Volume 2: Social factors. Malden and Oxford: Blackwell Publishers.
Sankoff, Gillian (1980). A quantitative paradigm for the study of communicative competence. In Sankoff, G. (Ed.), The social life of language. Philadelphia: University of Pennsylvania Press. 47-79.
Tagliamonte, Sali A. (2002). Comparative sociolinguistics. In Chambers, J. K., Trudgill, P. & Schilling-Estes, N. (Eds.), Handbook of language variation and change. Malden and Oxford: Blackwell Publishers. 729-763.
Tagliamonte, Sali A. (2012). Variationist Sociolinguistics: Change, observation, interpretation. Malden and Oxford: Wiley-Blackwell.
Tagliamonte, Sali A. (in press). Analysing and interpreting variation in the Sociolinguistic tradition. In Krug, M. & Schlüter, J. (Eds.), Research Methods in Language Variation and Change. Cambridge: Cambridge University Press.
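In the variable-rule tradition that Cedergren & Sankoff (1974) initiated, modelling the simultaneous application of social and linguistic predictors and their interaction amounts to logistic regression. A minimal sketch with invented data follows; the predictors, effect sizes, and token counts are hypothetical and not drawn from any study cited above.

```python
import math
import random

def fit_logistic(xs, ys, lr=0.5, epochs=500):
    """Fit P(variant) = 1/(1+exp(-(b0 + b.x))) by batch gradient ascent:
    a toy stand-in for variable-rule (logistic regression) analysis."""
    k = len(xs[0])
    w = [0.0] * (k + 1)                        # intercept + k coefficients
    for _ in range(epochs):
        grad = [0.0] * (k + 1)
        for x, y in zip(xs, ys):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            err = y - 1 / (1 + math.exp(-z))   # observed minus predicted
            grad[0] += err
            for j, xj in enumerate(x):
                grad[j + 1] += err * xj
        w = [wi + lr * gi / len(xs) for wi, gi in zip(w, grad)]
    return w

# Invented data: predictors [young speaker?, informal context?, interaction];
# outcome 1 = "talkin'", 0 = "talking".
rng = random.Random(0)
data = []
for young in (0, 1):
    for informal in (0, 1):
        logit = -1.5 + 1.0 * young + 1.0 * informal + 0.8 * young * informal
        p = 1 / (1 + math.exp(-logit))
        for _ in range(200):
            data.append(([young, informal, young * informal],
                         1 if rng.random() < p else 0))
xs, ys = zip(*data)
coefs = fit_logistic(xs, ys)   # [b0, b_young, b_informal, b_interaction]
```

The interaction column is what lets the model capture, for example, a variant favoured by young speakers especially in informal contexts, rather than forcing the two effects to be additive.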
- Wednesday, November 20, 3:30pm: Marc F. Joanisse (The University of Western Ontario) Title: Measuring Implicit Phonological Processing With Eye Tracking and Event-Related Potentials Abstract: Phonological knowledge is typically measured using explicit judgments, as in categorical perception and phonological awareness tests. Although these have provided useful assessments of general phonological ability, I argue that they are heavily influenced by sensory factors, task demands and response modality. As a result they provide at best an indirect measure of the many underlying mechanisms involved in phonological knowledge. In this talk I discuss work in my lab that is pursuing a different approach, in which we implicitly measure phonology during spoken word recognition. In our approach, listeners see pictures of familiar objects and then hear words that either do or don’t match what they see. Manipulating the phonological similarity of what is heard versus what is expected reveals interesting modulations in eyetracking and event-related potential (ERP) measures. I discuss how this approach can be used to study phonology both in children and adults, thus providing insights into a number of domains of language research: (1) the nature of phonological deficits in children with dyslexia; (2) the extent to which such difficulties differ from those observed in children with specific language impairment (SLI); and (3) the extent to which phonological processing differs cross-linguistically, as in the case of Mandarin, a tonal language with remarkably different phonological structure from English.
- Wednesday, November 27, 3:30pm: Kazunaga Matsuki (McMaster University) Title: The Roles of Thematic Knowledge in Sentence Comprehension Abstract: People possess a great deal of knowledge about real-world events. This knowledge, specifically with respect to event participants and their relations within an event (thematic knowledge), is an important component of how people understand language. In this talk, I will present results from two sentence comprehension studies that examined different aspects of how thematic knowledge influences sentence comprehension by addressing two critical unresolved issues. First, I investigated whether manipulation of thematic knowledge can lead to processing disruption in sentences that are otherwise assumed to be free of processing difficulty. This issue is particularly important for adjudicating between two major theories of sentence comprehension, two-stage and constraint-based theories. Second, I investigated how thematic knowledge affects the construction of sentential meaning representations, and how misinterpretations can occur during that process. Specifically, the study evaluated a few possibilities regarding how misanalyses of thematic roles might occur in full passive sentences that varied in plausibility. The novel aspects of this study involved in-depth analyses of the types of errors that participants make and the use of ERPs to investigate on-line processing differences. I will conclude that people’s knowledge of the roles played by specific types of participants in specific types of events immediately and continuously influences language comprehension.
- Wednesday, December 4, 3:30pm: Sylvain Moreno (Baycrest) Title: Brain plasticity from perception to cognition: The role of video games in altering brain function Abstract: Neuroeducation is an emerging field in cognitive science, in which neuroscientific methods are used to study skill transfer and learning. Previous studies examining cognitive training programs have reported mixed results (Detterman & Sternberg, 1982). Some studies found small, but significant, improvements in performance on untrained transfer tasks (e.g., problem-solving tasks; Lovett & Anderson, 1994), whereas other studies have found no transfer to untrained tasks (Olesen, Westerberg, & Klingberg, 2004). Yet, in spite of these mixed results, successful skill transfer to non-music-related tasks has been demonstrated for musical training (Schellenberg, 2004; for a review, Moreno, 2009b). This presentation will outline findings related to transfer of skills from a video-game based music training program to untrained auditory and cognitive processing skills such as language. It will focus on three main questions: (1) Is transfer of skills possible between cognitive activities?; (2) If so, how can we qualify the nature of this transfer?; and (3) What can the neural correlates of these transfer mechanisms tell us about transfer and learning?
- Wednesday, January 22, 3:30pm: Lee Wurm (Wayne State University) Title: Emotion Effects In Lexical Processing Abstract: Models of spoken word recognition have not historically included semantic or affective information as part of the recognition process. Such effects have been presumed to be much later. After all, how can a word’s meaning affect the recognition process before the word has been recognized? A growing body of research suggests, though, that such effects are not only early but pervasive. I will discuss some of this research, focusing on affective dimensions we developed while trying to make sense of previous findings. In several studies we have found that lexical decision times are predicted by a Danger x Usefulness interaction. In our view the interaction argues for an embodied (or “situated”) model of cognition. I will also make connections to work on memory and, if time permits, on cognitive aging.
- Wednesday, February 5, 3:30pm: Jeff Mielke (North Carolina State University) Title: Individual differences shape phonological typology Abstract: Linguistic patterns can be studied at three distinct levels of granularity: the individual, the language, and the set of all languages. At the individual level, descriptions can refer to an individual’s vocal tract morphology, acquisition history, and cognitive processes. A language-level description can include patterns and associations typically shared by members of a speech community. A crosslinguistic comparison can identify the patterns that universally/frequently/rarely/never occur in language-level descriptions. Generative grammar posited a direct link between crosslinguistic universals and the fundamental sameness of individual language learners, with language as an epiphenomenal intermediate level (e.g., Chomsky and Halle 1968, Chomsky 1986). Language has also been analyzed as a dynamical system, with familiar typological patterns emerging as a consequence of language use and language change (e.g., Ohala 1981, S. Kirby 1999, Blevins 2004). An intriguing implication of the latter approach is that differences between individual language users could shape the development and structure of languages. Phonology provides a particularly nice testing ground for the interaction of individual differences, because language-level phonological patterns often clearly reflect constraints imposed by the central nervous system, the vocal tract, the auditory system, and social interaction, all of which vary nontrivially across individuals, and all of which are easier to investigate now than 50 years ago. I will present phonetic data and survey data to contrast individual-level variation (North American English /r/ allophony, Canadian French rhotic vowels, and VOT accommodation in English) with similar variation at the language level (/l/ velarization crosslinguistically and regional variation in English short-a tensing).
I will argue on this basis that the development of familiar phonological patterns crucially depends on individual-level variation and language-level convergence. This approach also offers an account of sound change minimality through an individualized notion of what qualifies utterances as same or different.
- Wednesday, February 26, 3:30pm: Stefan Th. Gries (University of California, Santa Barbara) Title: Statistical methods in corpus linguistics: recent improvements and applications Abstract: By its very nature, corpus linguistics is a discipline not just concerned with, but ultimately based on, the distributions and frequencies of linguistic forms in and across corpora. This undisputed fact notwithstanding, for many years, corpus linguistics has been dominated by work that was limited in both computational and statistical ways. As for the former, a lot of work is based on a small number of ready-made proprietary software packages that provide some major functions but can of course not provide the functionality that, for instance, programming languages provide. As for the latter, a lot of work is very unstatistical in nature, relying on little more than observed frequencies or percentages/conditional probabilities of linguistic elements. However, over the last 10 years or so, this picture has changed and corpus linguistics has evolved considerably to a state where more diverse descriptive statistics and association measures as well as multifactorial regression modeling, other statistical classification techniques, and multivariate exploratory statistics have become quite common. In this talk, I will survey a variety of recent studies that showcase this newly developed methodological variety in both synchronic and diachronic corpus linguistics; examples will include applications of generalized linear (mixed-effects) models, different types of cluster-analytic algorithms, principal components analysis and other dimension-reduction tools, and others.
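One widely used association measure of the kind the abstract mentions is the log-likelihood ratio G² (Dunning 1993), computable from a 2×2 word co-occurrence table in a few lines. The counts below are invented for illustration; this is a sketch, not any speaker's own code.

```python
import math

def g2(k11, k12, k21, k22):
    """Log-likelihood ratio (G^2) for a 2x2 contingency table:
    k11 = the two words together, k12/k21 = each without the other,
    k22 = neither. Higher G^2 means stronger association (Dunning 1993)."""
    total = k11 + k12 + k21 + k22
    r1, r2 = k11 + k12, k21 + k22            # row totals
    c1, c2 = k11 + k21, k12 + k22            # column totals
    def term(obs, exp):
        # each cell contributes obs * ln(obs/expected); empty cells contribute 0
        return obs * math.log(obs / exp) if obs > 0 else 0.0
    return 2 * (term(k11, r1 * c1 / total) + term(k12, r1 * c2 / total) +
                term(k21, r2 * c1 / total) + term(k22, r2 * c2 / total))

# Invented counts for a bigram like "strong tea" in a toy corpus:
score = g2(30, 970, 70, 98930)   # co-occurs far above chance, so G^2 is large
```

When the observed counts exactly match the independence expectation, the score is zero; it grows as co-occurrence departs from chance in either direction.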
- Wednesday, March 19, 3:30pm: Martin Hackl (MIT) Title: On the Acquisition and Processing of Only: Scalar Presupposition and the Structure of Alternatives. The abstract of the talk can be found here.
Past Lectures, 2012-13
- Wednesday, September 19, 2012, 3:30pm: Gerard Van Herk (Memorial University) Title: From “Arr!” to -S: What pirates, yokels, and Newfoundland drag queens tell us about language’s social meanings Abstract: Competent members of a sociolinguistic community share norms about the social meanings associated with linguistic features, as well as the linguistic features associated with social groups. When a linguistic feature is shared by multiple (marginalized) groups, it develops a broader social meaning, and becomes available for performance, mimicry, joking, literary representations of types, and other sociolinguistic work. This talk will describe recent quantitative work on one such feature, the non-standard use in urbanizing Newfoundland of verbal -s (as in “I loves it!”). As the social contexts of -s use change, so do its social meanings, so that what was once a marker of rural identity becomes associated with young urban females and then drag queens. (In celebration of International Talk Like a Pirate Day, September 19.)
- Wednesday, October 24, 2012, 3:30pm: Chris Kennedy (University of Chicago) Title: Vagueness, Imprecision and Tolerance Abstract: When I say “the theater is packed tonight” or “there are a lot of people in the theater tonight,” my utterance leaves a certain amount of uncertainty about the actual number of people in the theater. The same uncertainty about actual number is typically present when I say “the theater is full tonight” (even if the number of seats in the theater is known) or “there are 1000 people in the theater tonight.” In all cases, this can be traced back to the fact that we use and interpret utterances like these tolerantly: small differences in the actual number of people in the theater typically do not affect our willingness either to make these utterances or to accept a speaker’s utterance of them. However, there is an important difference between the two sets of utterances: “the theater is full” and “there are 1000 people in the theater” can be used or understood in a way that is fully precise, but “the theater is packed” and “there are a lot of people in the theater” cannot be so used or understood. This distinction — the possibility of “natural precisifications” (to use a term from Manfred Pinkal) — is one of several empirical properties that distinguish vague terms like ‘packed’ and ‘a lot’ from (potentially) imprecise ones like ‘full’ and ‘1000’. The central theoretical question is how to account for these empirical differences while at the same time explaining why both kinds of expressions can be tolerant. Do we take the shared property of tolerance to indicate that both vague and imprecise expressions have the same core semantic/pragmatic analysis, and find a way to resolve or explain away the differences; or do we take the differences to indicate that vagueness and imprecision reflect a fundamental semantic/pragmatic distinction, and find a way to accommodate the shared property of tolerance?
My goal in this talk is to present some arguments in favor of the latter position. I will begin by providing linguistic and experimental evidence which argues in favor of a distinction between vagueness as a fundamentally semantic phenomenon and imprecision as a fundamentally pragmatic one. I will then argue that any reasonable pragmatic model of imprecision is one that will automatically give rise to the phenomenological properties associated with tolerance.
- CANCELLED Thursday, November 8, 2012: Liina Pylkkänen (New York University) — talk co-hosted with the Department of Psychology, Neuroscience and Behaviour, please refer to their website for the abstract
- Friday, November 9, 2012, 3:30pm, DSB-505: Alec Marantz (New York University) Title: Words and Rules Revisited: Separating the Syntagmatic and the Paradigmatic in Morphology Abstract: Pinker’s influential presentation of the distinction between the combinatoric units of language (the “words”) and the mechanisms that organize the units into linguistic constituents (the “rules”) rested on a strong, but ultimately incorrect, theory about the connection between a speaker’s internalized grammar and his/her use of language: the regular syntagmatic combination of units leaves no lasting impact on the brain, while repetition of a unit strengthens or alters its representation in memory. Thus, the telltale sign of combination is a lack of frequency effects (no behavioral consequences of the frequency of regular past tense forms like “walked” — only the frequency of the stem “walk” matters — so “walk” + “ed” is a syntagmatic (“rule”) combination), and the telltale sign of a memorized unit is frequency effects (behavioral consequences of the frequency of irregular past tense forms like “taught” — only the frequency of “taught,” not “teach,” matters — so “taught” is memorized (“word”), rather than formed via syntagmatic combination). The psycholinguistic and neurolinguistic literature of the past 30 years has demonstrated that syntagmatic combination, no matter how “regular,” does leave a trace of some sort in the brain such that frequency effects of various sorts are characteristic of brain and behavioral evidence both for atomic items (morphemes) and for combination of items. Nevertheless, linguistic theory does distinguish between atomic units, which “compete” for positions in syntax along the “paradigmatic” dimension of language, and combination of units, which are organized according to the “rules” of syntax.
The Neuroscience of Language Lab at NYU has been using MEG to explore the differences in the neural bases of syntagmatic and paradigmatic frequency effects with the ultimate goal of using neural measures to help answer difficult linguistic questions. For example, work in Distributed Morphology has argued for the universal separation of the roots of lexical items (nouns, verbs, adjectives) from the lexical category information (n, v, adj). Is the relationship between the root and the category-determining feature syntagmatic (involving the syntactic combination of root and a category morpheme) or paradigmatic (involving a category feature associated with the root, but not combined with the root via the syntax)? This question is parallel to Pinker’s question about the connection between the verb and past tense for English irregular verbs – is it syntactic (rules) or paradigmatic (words) – and we know the answer is “rules” in this case. Can we exploit the same general types of experiments that demonstrate that past tense in English is always computed as a syntactic combination of units to show that lexical categories also involve a syntactic relation between a root and a category morpheme? In this talk, I will present some of the recent findings of our Lab that suggest that paradigmatic effects, quantified in terms of entropy or uncertainty about which atomic element is being processed, may be separated from syntagmatic effects, quantified in terms of the surprisal of an atom being processed in comparison to syntagmatic expectations built up from prior experience. If speakers processing a word stem show entropy effects over the possibilities that the same root might occur in different lexical categories (“hammer” as a noun or verb), the results would argue against the Distributed Morphology account, whereas if these speakers instead showed no such entropy effects but surprisal effects at the resolution of the category ambiguity, the results would support this account.
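The two quantities contrasted above, entropy over competing atomic elements and surprisal relative to syntagmatic expectations, have standard information-theoretic definitions that a short sketch makes concrete. The category probabilities below are invented for illustration, not estimates from the lab's data.

```python
import math

def entropy(dist):
    """Shannon entropy (bits): uncertainty over which competing atomic
    element (e.g. which lexical category of a root) is being processed."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def surprisal(dist, outcome):
    """Surprisal (bits) of one outcome relative to prior expectations."""
    return -math.log2(dist[outcome])

# Invented category distributions for two English roots:
hammer = {"noun": 0.6, "verb": 0.4}    # category-ambiguous root: high entropy
ox = {"noun": 0.99, "verb": 0.01}      # nearly unambiguous root: low entropy
```

On these made-up numbers, “hammer” carries high entropy before its category is resolved, while encountering “ox” used as a verb would carry high surprisal at the point of resolution: the two effect types the abstract proposes to tease apart.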
- Wednesday, November 21, 2012, 3:30pm, DSB B107: Daphna Heller (University of Toronto) Title: Common ground and the probabilistic nature of referential domains Abstract: Theoretical approaches to reference assume that definite descriptions such as “the candle” are used to refer to a candle which is uniquely identifiable relative to a set of entities defined by the situational context. Thus, the interpretation of definite descriptions crucially depends on listeners’ ability to correctly construct this situation-specific “referential domain”. While there is considerable experimental evidence that listeners are indeed able to use various types of information to construct referential domains in real time, some evidence seems to suggest that information about common ground is not used for this task. That is, evidence in the psycholinguistics literature is mixed regarding whether listeners incorporate the distinction between shared and private information in the earliest moments of processing. In this talk, I will review some of these apparently-contradictory results (Keysar et al., 2000; Heller et al., 2008), and argue that they can be explained under a novel approach to referential domains. Specifically, I propose that instead of choosing one domain over another, listeners simultaneously consider more than one domain, weighing probabilistically their relative contribution. I present data from two experiments in support of this approach, and discuss the implications for our understanding of referential domains more generally. Keysar, B., Barr, D.J., Balin, J.A. & Brauner, J.S. (2000). Taking perspective in conversation: The role of mutual knowledge in comprehension. Psychological Science, 11, 32-37. Heller, D., Grodner, D., & Tanenhaus, M. K. (2008). The Role of Perspective in Identifying Domains of Reference. Cognition, 108, 831-836.
- Wednesday, January 16, 2013, 3:30pm, DSB-505: Randy Newman (Acadia University) Title: Is a rows a rose as Van Orden (1987) claimed? New evidence from ERP and fMRI research regarding the use of phonology in activating the meaning of words Abstract: Learning to read requires forming associations between the sounds (i.e., phonology), spellings (i.e., orthography), and meanings (i.e., semantics) of words. While there is a general consensus that phonology influences reading, theories differ in the importance they assign to phonological information, particularly in skilled readers. So-called strong phonological theories propose that computation of phonology occurs early and automatically in the course of reading. An alternative view assigns less importance to phonology, arguing that phonological influences are dependent on factors such as reading skill and word frequency. Work in my lab at Acadia and with various collaborators employs event-related brain potential (ERP) measures and functional MRI to define the temporal and spatial dynamics of phonological processing as a means of adjudicating between these opposing views. My talk will focus on experiments that have taken advantage of the homophony of the English language to clarify phonology’s role in activating the meaning of written words. Homophones are words with identical pronunciations, but which differ in spelling and meaning (e.g., bear/bare). The rationale for using homophones is that if word meanings are activated solely from orthographic representations, then only the meaning of a presented homophone should become activated. In contrast, if phonology activates the meanings of words, then presentation of a homophone will result in activation of semantic representations associated both with the presented homophone (e.g., bare) and of its homophone mate (e.g., bear) – a so-called homophone effect.
Results from a series of experiments have shown that the presence of homophone effects depends on a number of factors including word frequency, word predictability, the word-likeness of nonword fillers and the type of paradigm employed. The general conclusion of the research conducted thus far is that the use of phonology in contexts that most closely resemble natural reading (i.e., sentence verification paradigms) is likely early and automatic. However, in contexts where readers must make lexical decisions involving homophones presented amongst nonwords with atypical orthography (e.g., roynt), readers appear able to make decisions based on a superficial analysis of orthographic information. Implications for models of reading will be discussed. Background literature: Jared, D., Levy, B. A., & Rayner, K. (1999). The role of phonology in the activation of word meanings during reading: Evidence from proofreading and eye movements. Journal of Experimental Psychology: General, 128, 219-264. Newman, R. L., & Connolly, J. F. (2004). Determining the role of phonology in silent reading using event-related brain potentials. Cognitive Brain Research, 21, 94-105.
- Wednesday, February 27, 2013, 3:30pm: Yoonjung Kang (University of Toronto)
“Tonogenetic sound change in Korean stops and natural classes”
Tonogenesis is a commonly attested sound change whereby phonation contrasts of consonants give rise to, and are eventually replaced by, tonal contrasts on an adjacent vowel. Korean has a typologically uncommon three-way contrast of voiceless stops among aspirated (heavily aspirated, pha), lenis (lightly aspirated, pa), and fortis (unaspirated, p’a) stops. The stops are differentiated by voice onset time (VOT, an acoustic measure of degree of aspiration)—aspirated > lenis > fortis—and also by the fundamental frequency (f0, an acoustic correlate of pitch) of the immediately following vowel—aspirated ≈ fortis > lenis. Studies on Seoul Korean from the last decade or so find that the two long-VOT categories (aspirated and lenis stops) are losing their VOT distinction and that their f0 difference is emerging as the primary cue for the contrast (aspirated: phal > pál (H); lenis: pal > pàl (L)).
In this talk, I will draw on data from synchronic, diachronic, and dialectal variation of Korean stops to examine the development of the f0 contrast over time, both apparent and real. We find evidence that the development of the f0 contrast is adaptive: the f0 distinction is further exaggerated where the VOT distinction is threatened. In dialects where lenis stops (the middle VOT category) overlap with fortis stops in VOT, f0 is further raised following fortis stops, while in dialects where lenis stops overlap with aspirated stops in VOT, as in Seoul Korean, f0 is further raised following aspirated stops. At the same time, we also find evidence that in Seoul Korean, f0 enhancement targets a broader natural class of sounds (i.e., all aspirated stops and fricatives) rather than narrowly targeting the segments directly involved in the threatened contrast (i.e., aspirated stops).
In sum, the study shows that the tonal contrast is shaped through adaptive dispersion to maintain threatened contrasts, but the dispersion is mediated by phonological structure, i.e., a distinctive feature.
- Wednesday, March 13, 2013, 3:30pm: Wen Cao (McMaster University/Beijing Language and Culture University)
“Perceptual studies on Chinese Tone-3 and its focusing”
Being low ([+L]) is regarded as the distinctive feature of Tone-3 (T3) in Standard Chinese. However, previous work has not yielded clear conclusions on its basic/citation pattern. Different descriptions can be found in the literature, such as: dipping, broken, low-level, low-falling, falling-rising, falling-level-rising, etc. In the first part of my talk, I will introduce my team’s recent work on Chinese tone perception. We conclude that the “falling-level-rising” pitch contour is the “ideal” isolation form of T3, and that its best value on the five-pitch-level scale is /2112/, in which the /11/ stretch occupies 60% of the syllable’s duration.
How, then, can a low-tone syllable in a Chinese sentence be perceived as focally accented or not? In the second part of my talk, I will introduce an experiment aimed at answering this question. A total of 156 sentences containing Tone-3 words were synthesized and used as stimuli in a perceptual study. The sentences differed in the size of the fall between the two high pitches, and in the duration and phonation types of the T3 syllables. Thirty-nine subjects were asked to judge where the focus or accent was in each sentence. The results show that at least three degrees of pitch drop are involved in focus recognition: a big drop of about 10 semitones, a middle-sized drop of about 6 semitones, and a small drop of about 2 semitones. The results suggest that the three sizes of pitch drop signal different things in Chinese intonation, depending on both the tone and the tone combination. In perception, there are various ways to realize Tone-3 focus in the Tx-T3-Ty sentence series, but in production, or for text-to-speech synthesis, the rule is simply to make a middle-sized pitch drop with a long and creaky T3 syllable.
Similarly, to focus on the low-tone syllable in T3-Tx-Ty sentences, a creaky T3 syllable is essential. In Tx-Ty-T3 sentences, by contrast, a long T3 syllable is the strong determinant of low-tone focus.
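As a purely illustrative sketch of the T3 citation form described above (the timing split and breakpoint representation are assumptions for illustration, not Cao's synthesis parameters), the /2112/ falling-level-rising contour, with the /11/ stretch occupying 60% of the syllable's duration, could be laid out as pitch-level breakpoints like this:

```python
# Toy sketch of the "ideal" isolation contour for Tone-3: five-level
# value /2112/, with the level /11/ stretch taking 60% of the duration.
# Durations and the even fall/rise split are illustrative assumptions.

def t3_contour(duration_ms=400, levels=(2, 1, 1, 2), low_fraction=0.6):
    """Return (time_ms, pitch_level) breakpoints for a falling-level-rising T3."""
    # Split the remaining 40% of the duration evenly between the
    # initial fall (2 -> 1) and the final rise (1 -> 2).
    edge = (1.0 - low_fraction) / 2
    fractions = [0.0, edge, edge + low_fraction, 1.0]
    return [(f * duration_ms, lv) for f, lv in zip(fractions, levels)]

points = t3_contour()
```

For a 400 ms syllable this yields breakpoints at roughly 0, 80, 320, and 400 ms, so the low /11/ plateau spans the middle 60% of the syllable, matching the proportion reported in the abstract.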
- Wednesday, March 27, 10:30-12pm: Evelina Fedorenko (MIT) (a talk co-hosted with PNB)
- Wednesday, March 27, 3:30-5pm: Ted Gibson (MIT)
“Language for communication: Language comprehension and the communicative basis of word order”
Perhaps the most obvious hypothesis about the function of human language is that it is used for communication. Chomsky has famously argued that this hypothesis is flawed, because of the existence of phenomena such as ambiguity. Furthermore, he argues that the kinds of things people tend to say are not short and simple, as communication theory would predict. Contrary to Chomsky, my group applies information theory and communication theory, following Shannon (1948), to explain the typical use of language in comprehension and production, together with the structure of languages themselves. First, we show that ambiguity out of context is not only not a problem for an information-theoretic approach to language; it is a feature. Second, we show that language comprehension appears to function as a noisy-channel process, in line with communication theory. Given si, the intended sentence, and sp, the perceived sentence, we propose that people maximize P(si | sp), which is equivalent to maximizing the product of the prior P(si) and the likelihood of the noise process P(si → sp). We show that several predictions of this way of thinking about language hold: (1) the more noise needed to edit one alternative into another, the less likely that alternative is to be considered; (2) in the noise process, deletions are more likely than insertions; (3) increasing the noise increases reliance on the prior (semantics); and (4) increasing the likelihood of implausible events decreases reliance on the prior. Third, we show that this way of thinking about language leads to a simple re-thinking of the P600 from the ERP literature.
The P600 wave was originally proposed to reflect people’s sensitivity to syntactic violations, but the literature contains many findings that are problematic for this interpretation. We show that the P600 is best interpreted as sensitivity to an edit in the signal that makes the signal more easily interpretable. Finally, we discuss how thinking of language as communication can explain aspects of the origin of word order. Some recent evidence suggests that subject-object-verb (SOV) may be the default word order for human language. For example, SOV is the preferred word order in a task where participants gesture event meanings (Goldin-Meadow et al. 2008). Critically, SOV gesture production occurs not only for speakers of SOV languages, but also for speakers of SVO languages such as English, Chinese, and Spanish (Goldin-Meadow et al. 2008) and Italian (Langus & Nespor, 2010). The gesture-production task therefore plausibly reflects default word order independent of native language. However, this leaves open the question of why there are so many SVO languages (41.2% of languages; Dryer, 2005). We propose that the high percentage of SVO languages cross-linguistically is due to communicative pressures over a noisy channel. We present several gesture experiments consistent with this hypothesis, and we speculate on how a noisy-channel approach might explain several typical word-order patterns in the world’s languages.
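The noisy-channel computation in the abstract above, maximizing P(si) · P(si → sp) over candidate intended sentences, can be illustrated with a toy model. The lexicon, prior probabilities, and word-level noise costs below are invented for illustration; this is a sketch of the general idea, not Gibson's implementation:

```python
# Toy noisy-channel comprehension: infer the intended sentence s_i from
# a perceived sentence s_p by maximizing P(s_i) * P(s_i -> s_p).
# All probabilities below are made up for the sake of the example.

def edit_ops(src, dst):
    """Crude bag-of-words count of deletions and insertions from src to dst."""
    src_w, dst_w = src.split(), dst.split()
    dels = sum(1 for w in src_w if w not in dst_w)  # words lost in transmission
    ins = sum(1 for w in dst_w if w not in src_w)   # words added by noise
    return dels, ins

def noise_likelihood(s_i, s_p, p_del=0.1, p_ins=0.05):
    # Deletions are assumed more probable than insertions (prediction 2).
    dels, ins = edit_ops(s_i, s_p)
    return (p_del ** dels) * (p_ins ** ins)

def infer(s_p, prior):
    """Rank candidate intended sentences by P(s_i) * P(s_i -> s_p)."""
    scored = {s_i: p * noise_likelihood(s_i, s_p) for s_i, p in prior.items()}
    return sorted(scored.items(), key=lambda kv: -kv[1])

# Hypothetical priors: the semantically plausible sentence is far more probable.
prior = {
    "the mother gave the candle to the daughter": 0.95,
    "the mother gave the candle the daughter": 0.05,  # implausible reading
}

# The perceived string is missing "to"; the model prefers recovering the
# plausible sentence via one deletion over taking the string literally.
ranking = infer("the mother gave the candle the daughter", prior)
best, _ = ranking[0]
```

With these made-up numbers, the plausible sentence wins (0.95 × 0.1 = 0.095 versus 0.05 × 1 = 0.05), mirroring prediction (3): a strong prior can override the literal, implausible signal when a small amount of noise explains the discrepancy.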
Past Lectures (2011-12)
- Wednesday, January 18, 2012, 3:30pm: Dr. Masako Hirotani (Carleton University)
- Wednesday, February 1, 2012, 3:30pm: Dr. Colin Phillips (University of Maryland)
Senator William McMaster Invited Lecturer in Cognitive Neuroscience of Language
“Linguistic Illusions: Where you see them, where you don’t”
- Wednesday, February 15, 2012, 4:00pm: Dr. Keren Rice (University of Toronto)
“Athabaskan verb templates and what underlies them”
- Wednesday, March 21, 2012, 3:30pm: Dr. Ellen Bialystok (York University)
“Reshaping the Mind: The Benefits of Bilingualism”
- Wednesday, April 4, 2012, 3:30pm: Dr. Ronnie Wilbur (Purdue University)
Past Lectures (Fall 2011)
- Wednesday, September 21, 2011, 3:30pm: Dr. Julie A. Van Dyke (Haskins Laboratories)
“Memory interference as a determinant of poor language comprehension”
- Wednesday, October 19, 2011, 3:30pm: Dr. Diane Massam (University of Toronto)
“Variations on Predication: Word order and case in Niuean”
- Wednesday, October 26, 2011, 3:30pm: Dr. Grit Liebscher (University of Waterloo)
“Identity construction through language: The case of German Canadians”
- Wednesday, November 2, 2011, 3:30pm: Dr. Roger Schwarzschild (Rutgers University)
“Quantifier Domain Adverbials, Semantic Change and the Comparative”
Past Lectures (2010-2011)
- Friday, October 1, 3:30pm: Dr. Michael Walsh Dickey (University of Pittsburgh)
Automatic processing and recovery of complex sentences in aphasia
- Wednesday, October 27, 3:30pm: Dr. Mathias Schulze (University of Waterloo)
Measuring textual complexity
- Wednesday, November 10, 2010, 3:30pm: Dr. Jennifer Cole (University of Illinois)
Investigating the variable prosody of everyday speech
- Wednesday, November 24, 2010, 3:30pm: Dr. Ileana Paul (University of Western Ontario)
What do determiners do?
- Wednesday, December 1, 2010, 3:30pm: Dr. Juan Uriagereka (University of Maryland)
A Clash of the Interfaces
- Wednesday, January 12, 2011, 3:30pm: Dr. James Walker (York University)
Phonological Variation in Toronto English: Linguistic and Social Conditioning
- Wednesday, January 19, 2011, 3:30pm: Dr. Cristina Schmitt and Dr. Alan Munn (Michigan State University)
Acquiring Definiteness: Syntax, Semantics, Pragmatics and Acquisition
- Wednesday, January 26, 2011, 3:30pm: Dr. Veena Dwivedi (Brock University)
Individual Differences in Shallow Semantic Processing of Scope Ambiguity
- Friday, February 11, 2011, 3:30pm: Dr. Florian Jaeger (University of Rochester)
How communicative pressures may come to shape language over time
- Wednesday, March 2, 2011, 3:30pm: Dr. Alana Johns (University of Toronto)
The Language of the Inuit: What we don’t know.
- Wednesday, March 16, 2011, 3:30pm: Dr. Michael Schutz (McMaster University School of the Arts)
Deconstructing a musical illusion: causality and audio-visual integration.
- Wednesday, April 6, 2011, 9:30 – 2:30: Student Research Day
Hear talks and view posters presenting the work of student researchers in the Department of Linguistics and Languages.
Past Lectures (2009-2010)
- January 13 – Dr. Craig Chambers, University of Toronto (Mississauga)
Referential Anticipation in Incremental Sentence Comprehension: A Result of Shallow or Rich Linguistic Processing?
- January 27 – Dr. Mike Kliffer, McMaster University
Prescriptivism: An inquiry into its Cognitive Side
- February 03 – Dr. Ian Smith, York University
Missionary Language Practice & the Unwitting Conversion of Ceylon Portuguese
- February 10 – Dr. Uli Sauerland, ZaS Berlin
The Origin of Embedded Clauses
- February 24 – Dr. Ann Bunger, University of Delaware
The Role of Nonlinguistic Event Representation in First Language Acquisition
- March 03 – Dr. Usha Goswami, Cambridge University
Combining Educational Neuroscience and Cognitive Developmental Psychology: The Example of Learning to Read
- March 10 – Dr. Steven Brown, McMaster University
Neural Control of Vocalization and Vocal Imitation in Humans
- March 24 – Dr. John Whitman, Cornell University
The Formal Syntax of Alignment Change: the Case of Old Japanese