This course provides an introduction to cognitive psychology and neuroscience with an emphasis on topics relevant to the study of language. The course starts with an overview of the history of the human brain, describing the main functional areas and the questions which arise out of our attempts to characterize function. We then step back to consider the unique developmental pattern associated with the human brain, and then consider what we know about the cognitive functions that arise out of human neural processing and cognitive development.

For assessment purposes, students will be required to write a series of short (one-page) reviews covering a subset of the topics. At the end of this course, participants will have acquired the background knowledge to enable them to consider linguistic questions in terms of the processing capabilities of the mind/brain.


Questions play a central role as functional contexts for language use. As such, they are relevant in a number of settings: Questions support the interpretation of answers in a concrete language-based context. They make it possible to test knowledge, to verify whether someone has read a given text, or to explore the interpretations drawn from a given text. Questions can foster learning and they are central to assessment. In computational linguistics, the automatic generation of questions is an attractive challenge given the mix of function, meaning and grammatical characteristics that it involves. In this seminar, we survey different techniques for generating questions and their use cases.

The purpose of this course is to introduce advanced students to scalable software architectures for natural language processing, in order to facilitate the move from toy examples in a few select languages to real data as it is produced by a multilingual and multicultural society. The course is split roughly into two parts: Part one gives an introduction to UIMA and its usage for a broad range of standard NLP tasks, and is accompanied by practical exercises covering interesting phenomena in many languages. In the second, application-oriented part, we provide an introduction to the Google Web Toolkit (GWT), which has been very popular for building large web applications. In this part, students will pick a project for a multilingual application. Creative ideas are very welcome, but we are also happy to provide project ideas from our areas of interest, such as modeling learner language, localisation, translation memories and quality control, multilingual news aggregation, and derivational typology.

Introduction to linguistics for cognitive science

winter semester 2017, Tübingen

Harald Baayen

Lecture hall 002, Wilhelmstrasse 19

 

lecture series (3 ECTS)

October 24      Baayen    phonetics
October 31      -         -
November 7      Baayen    phonology
November 14     Baayen    auditory word recognition
November 21     Baayen    morphology
November 28     Baayen    morphological processing
December 5      Baayen    syntax
December 12     Baayen    language typology
December 19     Baayen    semantics
January 9       Baayen    language acquisition
January 16      Baayen    language and ageing
January 23      Baayen    dialectology
January 30      Baayen    historical linguistics
February 6      Baayen    language and thought

 

requirements

1. attendance in class;

2. a literature review and evaluation (approximately 10 pages) of one of the topics covered in class

 

modeling series (3 ECTS)

October 26      Baayen    interactive activation model
November 2      Nixon     shifting phonetic categories (lecture)
November 9      Baayen    reading out loud: CDP+ model
November 16     Baayen    auditory word recognition: TRACE
November 23     Baayen    auditory word recognition: NDL
November 30     Baayen    morphology and naive discriminative learning
December 7      Baayen    speech production: Dell, Levelt
December 14     Baayen    speech production: Rumelhart & McClelland
December 21     Baayen    semantics: LSA, word2vec
January 11      Baayen    language acquisition: Bod, Ramscar
January 18      Baayen    the consequences of experience
January 25      Baayen    reading out loud: a single route model using NDL
February 1      Baayen    animal communication and lexical learning
February 8      Baayen    project reports

 

requirements

1. attendance in class;

2. introducing one journal article in class;

3. project report on modeling visual word recognition with the interactive activation model and with naive discriminative learning, for French, English, or Dutch; this can be collaborative work.
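To give a concrete flavour of the modeling involved, the following minimal sketch implements the Rescorla-Wagner update rule on which naive discriminative learning builds, with letter bigrams as cues and word identities as outcomes; the toy lexicon, cue coding, and learning rate are illustrative assumptions, not part of the project specification.

    from collections import defaultdict

    def bigrams(word):
        # letter bigrams with '#' marking the word boundaries
        w = "#" + word + "#"
        return [w[i:i + 2] for i in range(len(w) - 1)]

    weights = defaultdict(float)   # (cue, outcome) -> association strength
    outcomes = {"hand", "band", "sand"}
    learning_rate = 0.01           # alpha * beta in Rescorla-Wagner terms (assumed value)
    lambda_max = 1.0               # maximum associative strength

    # each learning event pairs a word form (its cues) with its meaning (the outcome)
    events = [("hand", "hand"), ("band", "band"), ("sand", "sand")] * 50

    for form, meaning in events:
        cues = bigrams(form)
        for outcome in outcomes:
            activation = sum(weights[(c, outcome)] for c in cues)
            target = lambda_max if outcome == meaning else 0.0
            delta = learning_rate * (target - activation)
            for c in cues:
                weights[(c, outcome)] += delta

    # after learning, the cues of "hand" should activate the outcome "hand" most strongly
    for outcome in sorted(outcomes):
        support = sum(weights[(c, outcome)] for c in bigrams("hand"))
        print(outcome, round(support, 3))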


In this seminar we want to take a look at well-known, freely available natural language processing toolkits such as NLTK, spaCy, Stanford's CoreNLP, and OpenNLP, as well as tools for specific tasks such as TreeTagger, the CLAWS tagger, MaltParser, the Charniak parser, Minipar, the Watson parser, the Lappin and Leass coreference resolver, CherryPicker, Smmry, Summa, and others. The aim is to consider typical NLP tasks, from PoS tagging and dependency parsing to tasks at more abstract levels of description such as coreference resolution and summarization; to study the theoretical background of the respective functions; to gain insight into the corresponding implementations; and to carry out practical studies, testing performance and comparing results against each other using different appropriate corpora and quality measures.
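To illustrate the kind of tools and tasks covered, the following minimal sketch runs PoS tagging with NLTK and PoS tagging plus dependency parsing with spaCy on the same sentence; the resource and model names ('punkt', 'averaged_perceptron_tagger', 'en_core_web_sm') are the usual English defaults, assumed here only for illustration, and may differ across versions.

    import nltk
    import spacy

    # resource names are the usual English defaults; they may differ across NLTK versions
    nltk.download("punkt", quiet=True)
    nltk.download("averaged_perceptron_tagger", quiet=True)

    sentence = "The quick brown fox jumps over the lazy dog."

    # NLTK: tokenize, then tag
    tokens = nltk.word_tokenize(sentence)
    print(nltk.pos_tag(tokens))

    # spaCy: one pipeline call yields PoS tags and a dependency parse
    nlp = spacy.load("en_core_web_sm")
    doc = nlp(sentence)
    for token in doc:
        print(token.text, token.pos_, token.dep_, token.head.text)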

This course provides an introduction to the sounds of Mandarin Chinese. We will read papers about the production or perception of consonants, vowels, or tones in Mandarin Chinese. The knowledge gained through this part of the course will be complemented with hands-on experience in processing Mandarin sounds and analysing the resulting data. No previous knowledge of phonetics/phonology, Mandarin Chinese, or data analysis is required for this course.
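As an illustration of the kind of hands-on sound processing involved, the following minimal sketch extracts an F0 (pitch) contour, the basic measurement behind Mandarin tone analysis, from a recording of a syllable; the use of the parselmouth library (a Python interface to Praat) and the file name are assumptions made for the example, not tools prescribed by the course.

    import parselmouth

    snd = parselmouth.Sound("ma_tone1.wav")   # hypothetical recording of a Mandarin syllable
    pitch = snd.to_pitch()                    # Praat's default pitch tracking

    times = pitch.xs()                            # analysis frame times in seconds
    f0 = pitch.selected_array["frequency"]        # F0 in Hz; 0 where the frame is unvoiced

    for t, hz in zip(times, f0):
        if hz > 0:                                # skip unvoiced frames
            print(f"{t:.3f} s  {hz:.1f} Hz")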

Psycholinguistics uses a wide range of techniques to uncover the mental processes and representations through which humans produce and understand language. This course provides an overview of experimental methods in psycholinguistics. Students will gain hands-on experience with experimental research by participating in experiments on the German language in the lab and online. Course grades are based on class attendance, method presentation, homework completion, experiment participation, and data analysis.

Unsupervised machine learning is a collection of methods for inferring (hidden) structure from `unlabeled' data. Considering the labor-intensive and time-consuming nature of creating labeled data and the abundance of unlabeled data, it is clear that unsupervised methods are attractive in many fields, including computational linguistics (CL) and natural language processing (NLP). Besides these practical motivations, unsupervised learning is also instrumental in investigating many problems in linguistics and the cognitive sciences.

In this course we will study unsupervised methods for solving some typical NLP tasks, such as tokenization, part-of-speech tagging, morphological analysis, and parsing. We will also review some research-oriented applications of unsupervised methods in linguistics, for example their use in modeling human language processing and acquisition and in investigating linguistic variation.

The course will take a practical approach: in addition to reading and discussing important and recent research, we will build practical models and applications during the course.
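As a small taste of what such unsupervised methods look like in practice, the following minimal sketch induces word classes by clustering words according to their left and right neighbours, without using any labels; the toy corpus, the feature coding, the number of clusters, and the use of scikit-learn are illustrative assumptions, not part of the course syllabus.

    import numpy as np
    from sklearn.cluster import KMeans

    corpus = [
        "the dog chased the cat",
        "a cat chased a mouse",
        "the mouse saw the dog",
        "a dog saw a cat",
    ]

    # collect left- and right-neighbour counts for every word type
    vocab = sorted({w for s in corpus for w in s.split()})
    index = {w: i for i, w in enumerate(vocab)}
    counts = np.zeros((len(vocab), 2 * len(vocab)))
    for sentence in corpus:
        words = sentence.split()
        for i, w in enumerate(words):
            if i > 0:
                counts[index[w], index[words[i - 1]]] += 1                # left context
            if i < len(words) - 1:
                counts[index[w], len(vocab) + index[words[i + 1]]] += 1   # right context

    # cluster words with similar context profiles; no labels are used anywhere
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(counts)
    for cluster in range(3):
        print(cluster, [w for w in vocab if labels[index[w]] == cluster])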


This course offers an introduction at the advanced undergraduate/beginning graduate level to the study of language acquisition, in particular Second Language Acquisition (SLA). The course surveys the major approaches to SLA, their goals, research methodology, and major findings, emphasizing the interdisciplinary link to linguistic modeling and cognition.