Language and Communication
- Created by: jxw145
- Created on: 30-09-22 15:17
Communication
COMMUNICATION= “when one organism (the transmitter) encodes information into a signal which passes to another organism (the receiver) which decodes the signal and is capable of responding appropriately” (Beattie & Ellis, p.2)
VERBAL COMMUNICATION= spoken/written transmission of a message- this includes languages, dialects and group-specific language, and constructed languages (e.g. Esperanto)
NON-VERBAL COMMUNICATION= non-linguistic aspects, e.g. body language, gestures, emoticons. Language itself also has non-verbal elements such as tone, rhythm and stress.
Language
LANGUAGE= a type of communication: a structured system of symbols (words) and the rules (grammar) by which they are combined. There is no single correct definition of what language is, but any definition has to be internally consistent
-e.g., Language is a system:
…to communicate thoughts, feelings, info
…of arbitrary signs (words) that refer to things in the world and have meaning (e.g., not just onomatopoeia)
…to combine these signs (syntax- limited number of words and rules combine to form unlimited number of expressions)
…that allows us to go beyond the here and now
…that is used by a group
What makes a language a language?
-There is debate over when a language counts as a language rather than just a different dialect (e.g., Londoner vs Geordie, or standard German vs Swiss German), and the distinction can be political
-There are between 3,000 and 8,000 languages, and languages die out at a rate of about one every two weeks
-All European languages together make up about 3% of the total
-Most common first languages (L1): Chinese, Spanish, English
-L1+L2 combined: English (about 20% of the world population)
-A problem in psychology is that most research has concentrated on a small group of European languages, which makes the field biased
Relevance in different domains
*Language can be a useful tool to use when looking at a range of different domains such as:
-Education (what makes someone a good reader? what are the individual cognitive differences that make someone good?)
-Clinical (aphasia, schizophrenia, ASD, dyslexia, stuttering, anxiety, speech and lang therapy)
-Development (lang learning and interaction with cog development)
-2nd lang learning
-Automated language processing (automated transcription, speech-to-text translation, Siri)
-Social and cultural (different accents: how do people react to others depending on their accent? different focuses in language, different expressions, theory of mind)
-Forensic (analysis of individual speech patterns, voice identification)
-Marketing
-Legal (lack of oxford comma in contract costs company millions in overtime dispute)
Language design features (Hockett, 1960)
-Communication does not equal language (e.g., animal communication)
-Hockett (1960) highlighted 13 design features that distinguish language from (mere) communication
-some features are more essential than others, but a communication system needs all of them in order to be called a "language"
-Important ones for human language:
Semanticity: words are symbols/signs that express meaning; other animals have only limited inventories of signals
Arbitrariness: no intrinsic relation between (most) words and their meaning (but onomatopoeia).
Displacement: not tied to here & now, can talk about past, future, somewhere else; hypotheticals (if… then…).
Productivity/Generativity: new language can be generated (a code sketch after this list illustrates the idea).
- a finite collection of sounds and words allows an infinite number of messages
- as long as we obey the rules of the language, any message can be understood by other language users
Prevarication: we can lie (other animals can deceive but no evidence they can lie as humans do)
Reflexiveness: we can use language to talk about language.
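To make productivity concrete, here is a minimal sketch in Python, assuming an invented toy grammar and vocabulary (not a real grammar of English): a handful of rewrite rules, one of which is recursive, is enough to generate an unbounded set of sentences.

```python
import random

# A toy grammar: finite rules, but NP can contain a VP which can contain
# another NP, so the set of generatable sentences is unbounded.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],  # second rule is recursive
    "VP": [["V", "NP"], ["V"]],
    "N":  [["dog"], ["cat"], ["linguist"]],
    "V":  [["saw"], ["chased"], ["slept"]],
}

def generate(symbol="S"):
    """Expand a symbol by randomly choosing one of its rewrite rules."""
    if symbol not in GRAMMAR:            # terminal word: return as-is
        return [symbol]
    rule = random.choice(GRAMMAR[symbol])
    words = []
    for part in rule:
        words.extend(generate(part))
    return words

print(" ".join(generate()))  # e.g. "the cat that chased the dog slept"
```

Because the recursion can be applied again and again, the same finite rule set keeps yielding longer, novel sentences- which is the point of the productivity feature.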
Language design features 2
Vocal-auditory: from mouth to ear – advantage= frees up our hands (imagine a building crew having to always stop working when they want to communicate something.)
Rapid fading: speech signals fade quickly. We do have writing (something no other species has), but this is a late invention, evolutionarily speaking.
Interchangeability: anyone can reproduce anything someone else has said (e.g., a female can say whatever a male says). In other animals, communication is often gender-limited. For example, a male frog can emit a call to attract a female, and the female can indicate her response, but it is unlikely that the female can use the male's calls.
Specialisation: speech is specifically designed for communication and has no other purpose. Also highly specialised: innate ability to distinguish speech sounds at birth + preference at birth for speech-like sounds.
Language design features 3
Some/most of these features are shared with other types of non-human communication. The list is very much oriented towards speech – e.g., sign language is overlooked.
*It really depends on what YOUR definition of language is e.g., do you think it's crucial to be able to go beyond the here and now?
*According to Hockett, a communication system needs all 13 of these features to be called a real language
Maybe the naming of things isn't so arbitrary?
-implication for the evolution of language: the naming of objects might not be entirely arbitrary; instead it could be influenced by what the object looks like
-bouba/kiki effect
-sound symbolism (individual sounds or clusters of sounds can convey meaning- “gl” words for shiny things like gleam and glint)
-young children (2.5 yrs) show a similar bias; there is even some evidence of the bias in 4-month-old infants
-people with ASD show a much-reduced bias (56%)
Animal language
Animals certainly can communicate (but then, so do flowers and bacteria)
Bee dance- Novel messages, but only about food
Dolphins- can communicate that there is something new in the water; no evidence of syntax use (though some evidence they can understand human syntax). Can they combine distinctive sounds? Can they say: "Tomorrow I'd like to have some herring"?
Songbirds- overlap with human language acquisition: babbling, critical period, left-hemisphere specialisation
Ape communication
Apes
-have a very rich communication system and are highly social
-95-98.5% genetic overlap with humans and similar brain asymmetries (including an enlarged Broca's area, though perhaps for a different function, e.g., making complex hand movements rather than complex speech sounds)
-IQ roughly that of a 3-year-old; research focused mainly on chimps, later on bonobos (bonobos are more intelligent and vocal)
-teaching them to speak is near impossible (they have a different articulatory apparatus/throat from humans; as a consequence of ours, humans run a perpetual risk of choking)
-can use sign lang or artificial lexigrams (computerised symbols- “Yerkish”)
Ape studies
GUA: Kellogg & Kellogg (1933). VIKI: raised by Keith & Catherine Hayes (1952).
WASHOE: Gardner & Gardner (1969). Caught in the wild at age one, then brought up as a human child. Taught ASL. By 4 years she had acquired 85 signs (e.g., more, eat, listen, gimme, you, me, hurry). She also produced sign combinations such as you-drink, baby-mine. Some sensitivity to word order and some new combinations (water bird)- however, was this just a one-off/mistake? She taught signs to her adopted son.
NIM CHIMPSKY Terrace et al. (1979). Learned about 125 ASL signs and made sign combinations (e.g., play-me). Longer combinations heavily redundant: Give orange me give eat orange me eat orange give me eat orange give me you.
40% simple repetitions; rarely signed spontaneously. No novel combinations (unlike children).
Chimps vs children
The structure of language
1. Phonetics (speech sounds)- the description and classification of speech sounds: how sounds are produced (articulatory), transmitted and perceived (auditory), and the physical properties of sounds (acoustic)
-sounds and letters are not the same thing (did he believe that Caesar could see...)
2. Phonology (sound system)- concerned with the way speech sounds form systems in a given language (phones, phonemes, minimal pairs)
3. Morphology (word formation)
4. Syntax (sentence structure)
5. Semantics (meaning)
6. Pragmatics (language in context)
Definitions
phonetics= Phonetics is a branch of linguistics that studies how humans produce and perceive sounds, or in the case of sign languages, the equivalent aspects of sign
pragmatics= the branch of linguistics dealing with language in use and the contexts in which it is used, including such matters as deixis, the taking of turns in conversation, text organization, presupposition, and implicature.
phones= a phone is any distinct speech sound or gesture, regardless of whether the exact sound is critical to the meanings of words (the things you actually hear). E.g., some people pronounce R as a rolling R, but that is just a different realisation of the same sound
Definitions 2
phonemes= The mental representation of the sound -smallest unit of speech distinguishing one word (or word element) from another, as the element p in “tap,” which separates that word from “tab,” “tag,” and “tan.”
minimal pairs= two linguistic units that differ in a single distinctive feature or constituent ("trunk" and "drunk"- because they express different meanings, the T and the D are phonemes)
allophone= any of the various phonetic realisations of a phoneme in a language, which do not contribute to distinctions of meaning, e.g., the aspirated [pʰ] in "pill" vs the unaspirated [p] in "spill" in English
onomatopoeia= the formation of a word from a sound associated with what is named
sound symbolism= In linguistics, sound symbolism is the resemblance between sound and meaning. For example, the English word ding may sound similar to the actual sound of a bell. Linguistic sound may be perceived as similar to not only sounds, but also to other sensory properties, such as size, vision, touch, or smell, or abstract domains, such as emotion or value judgment.
Definitions 3
FoxP2 gene= FOXP2 is a member of the family of forkhead transcription factors expressed in areas of the brain including the neocortex, striatum, thalamus, and cerebellum, which are thought to be important for language and the coordination of sequential motor output required for speech
APHASIA= a comprehension and communication (reading, speaking, or writing) disorder resulting from damage or injury to specific areas of the brain.
LEXEME= (Linguistics) - a minimal meaningful unit of language, the meaning of which cannot be understood from that of its component morphemes. Take off (In the senses to mimic, to become airborne, etc.) is a lexeme, as well as the independent morphemes take and off
Language and thought
-To what extent does language restrict our thinking? Can we have non-verbal thoughts?
-We know that cognition influences language, but does language influence thinking? YES (preconceptions and frequencies of exposure: a doctor is assumed to be male and a nurse to be female)
Sapir-Whorf hypothesis
Do people with a different language also think differently?
There are 2 versions of this hypothesis- a strong and a weak one:
linguistic determinism and linguistic relativism
LINGUISTIC DETERMINISM (strong)- thoughts are limited and constrained by language; language determines our thinking (so people with a different language think differently)- no good evidence
*Looking at vocabulary differences is not good evidence for linguistic determinism (Inuit snow example)
*Need to measure BEHAVIOUR
LINGUISTIC RELATIVISM (weak)- people who speak a different language perceive and experience the world differently; language influences but does not determine thought- good evidence
Linguistic relativism evidence
*Boroditsky et al. (2002)- in Spanish, "bridge" is masculine, so it is described in more masculine terms (strong, dangerous, sturdy), but in German it is feminine, so it is described in more feminine terms (beautiful, elegant)
*The way we describe an object affects how we think about its use (Glucksberg & Weisberg, 1966)- e.g., "box of matches" vs "box and matches"
*Language affects encoding in space: if a language is egocentric, we describe things relative to our own frame of reference (left, right, next to)
-if a language is allocentric, we describe things in an absolute frame of reference (north, south)
*Language affects encoding in time: English speakers think of time horizontally, but Mandarin speakers think of time vertically
-in priming tasks, speakers are faster at true/false judgements when the prime coincides with their way of conceptualising time
Pinker (1994)- thought precedes language (there is a universal language of thought- mentalese). If thoughts depended on words, how could we ever coin new words? And if you don't have a language, can you not think?
spoken word production
-from thought to speech
What steps do we need to take to communicate our ideas?
Griffin and Ferreira (2006)
- Conceptualisation- WHAT to express (message planning; pre-linguistic and language-neutral- Pinker's mentalese, a hypothetical mental system resembling language)- a universal stage
- Formulation- HOW to express it (word selection- lemmas; sound processing*- lexemes)- translating the pre-verbal message into a linguistic form
- lexicalisation- select the words you are going to utter
- syntactic planning- put these words together to form a sentence
- phonological encoding- turn the words into sounds
- phonetic planning- plan exactly how to pronounce these words
- Articulation- expressing it (pronunciation)
*Sound processing, in contrast, involves constructing the phonological form of a selected word by retrieving its individual sounds and organizing them into stressed and unstressed syllables (phonological encoding) and then specifying the motor programs to realize those syllables (phonetic encoding). The final process is articulation, that is, the execution of motor programs to pronounce the sounds of a word.
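As a rough illustration of the stage sequence above, here is a hedged toy sketch (the two-word lexicon, the IPA-ish strings and the one-line "syntactic planning" are all invented; this is not Griffin and Ferreira's actual model):

```python
# Toy production pipeline: lemma (meaning + syntax) then lexeme (sound form).
LEXICON = {
    "CAT":   {"lemma": ("cat", "noun"),    "lexeme": "/kaet/"},
    "SLEEP": {"lemma": ("sleep", "verb"),  "lexeme": "/sli:p/"},
}

def produce(message):
    """message: pre-linguistic concepts (conceptualisation already done)."""
    # lexicalisation: select the lemmas (meaning + word class, no sound yet)
    lemmas = [LEXICON[c]["lemma"] for c in message]
    # crude "syntactic planning": put the noun before the verb
    sentence = sorted(lemmas, key=lambda lemma: lemma[1] != "noun")
    # phonological encoding: only now retrieve the sound forms (lexemes)
    sounds = [LEXICON[word.upper()]["lexeme"] for word, _ in sentence]
    return " ".join(sounds)            # articulation stand-in

print(produce(["CAT", "SLEEP"]))       # -> "/kaet/ /sli:p/"
```

The point of the sketch is the ordering: meaning and syntax are available before any sound form is touched, which is the two-stage claim the rest of this section defends.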
WEAVER++
-The most famous model is WEAVER++ (Word-form encoding by activation and verification) which adds a component of self-monitoring (helps to self-correct):
- Internal monitoring (of what you’re going to say)
- External monitoring (during speech)
*You get the lemmas (word meanings) from your mental lexicon (a store of all the words that you know, which also includes word forms)
*Morphological encoding can be, e.g., adding a plural
Evidence for lexicalization
-evidence from: speech errors, picture naming and picture-word interference, tip of the tongue (ToT)
SPEECH ERRORS
-we make about 15 speech sounds per second (2-3 words per sec, 150 words per min)
-production is automatic; we pay less attention to speech production than to comprehension
-about 1 or 2 errors for every 1,000 words (7-22 errors a day)
What do they tell us? They tell us how the mind has prepared what you want to say
-Freud thought slips reveal our repressed thoughts- not supported
-Dell thinks they reflect a person's capacity for using language and its components- supported
-"we just swap words or sounds and that's it"- but NO, errors do not occur at random
speech error types
-8 major types that can appear at all levels (phoneme, morpheme, word)
Shifts: One speech segment disappears from its appropriate location and appears somewhere else.
Exchanges are double shifts. Two linguistic units change places.
Additions add linguistic material.
Substitution: One segment is replaced by an intruder. The source of the intrusion is not in the sentence.
Blends are a subcategory of lexical selection errors. More than one item is being considered during speech production. Consequently, the two intended items fuse together.
Perseveration: an earlier segment persists and reappears later. Freudian slip: he meant to say "I'm glad you're here," but what came out was "I'm mad you're here."
common properties of speech errors
-Phonemes exchange with phonemes in similar positions: consonants with consonants, vowels with vowels (e.g., "you have hissed all my mystery lessons" for "missed all my history lessons")
-Novel words follow the phonological rules of the language ("perple" is person + people, not "peorslpe")
-Evidence comes from databases of spontaneous errors and from experimentally induced errors (e.g., the SLIP technique, Baars & Motley, 1974)
Speech errors can shed light on the basics of speech planning.
The plural morpheme <s> is treated separately from the stem (e.g., "maniac"). The pronunciation of the plural is then adapted to the word the <s> attaches to, which means this is done in yet another stage.
speech planning
Two different processes:
- Retrieving the words
- constructing a syntactic frame in which the words are slotted (plural ending + other grammatical elements e.g., past tense are part of the frame)
Support for this - Two types of errors:
- Word errors: not restricted by distance and always of the same word type (e.g., a noun exchanges with a noun)- happen early
- Sound errors: close together and can cross word types- happen later
Lexicalisation
*The process of turning thoughts into sounds
The 1st stage is retrieving a representation of the lexical meaning (word meaning) and syntax, called the LEMMA
-each word is represented by a lemma- these are syntactic and semantic but not phonological (you do not retrieve everything you know about the concept of cats, but a representation of the word "cat" and the fact that it is a noun)
The 2nd stage is retrieving the concrete phonological sound form (the LEXEME)
Lexicalisation 2
Fay & Cutler (1977) found two distinct types of whole-word substitutions, separable by the overlap between the error and the target word: semantic substitutions (related in meaning to the target) and malapropisms (large phonological overlap between the intended and the produced word).
-there is neurological evidence- double dissociation for retrieving lexical meaning and lexical form
Lemma/lexeme
LEMMA= The lemma is the base form under which the word is entered (in a dictionary) and assigned its place- typically the stem/simplest form e.g., the lemma “build” represents builds, building, built
LEXEME= A minimal meaningful unit of language, the meaning of which cannot be understood from that of its component morphemes (take off)
-A Freudian slip is a verbal or memory mistake linked to the unconscious mind. Also known as parapraxis, these slips supposedly reveal secret thoughts and feelings that people hold.
Evidence- ToT
-you know its meaning but can't find the word itself
-ToT often comes with partial information (initial sound/some sounds, number of syllables, correct stress pattern)
-often phonologically related words get activated e.g., for oxymoron- oxygen, oxytocin, moron (interlopers for oxymoron)
-they are evidence for 2-stage model of lexicalization (meaning vs. sound)
-completed the first stage but can't complete the second stage SO retrieval of meaning independent of sound
ToT theories
- Blocking hypothesis (Jones & Langford 1987)
-interlopers prevent activation of the right word
- If blocking is correct: words with more phonological neighbours (similar-sounding words) should result in more ToT states, but the opposite is true (Harley & Bown, 1998)
-presenting a phonological neighbor should increase blocking but it actually reduces it- phono neighbours act more like retrieval cues
- Transmission-deficit hypothesis (Burke et al., 1991)
-due to weak links between the meaning and the word form, only limited activation of the target word form
-problem with transmission from the lemma to the lexeme
-3 reasons (low frequency word- don't say often, not seen it recently, ageing)
*Evidence favours transmission-deficit account
Transmission-deficit account
- Bilingual speakers have more ToT states- the idea is that they have slightly weaker links between meaning and sound than monolingual speakers (weaker because a bilingual uses each language less than a monolingual does)
- More ToT evidence for the distinction between syntactic properties and sound: alternative words are almost always of the same grammatical class (both nouns), and grammatical gender in Italian is non-semantic but can still be retrieved in a ToT state
- dyslexic children have more ToT states- no difference in recalling the semantic meaning, but more errors at the phonological stage or in the link between lemma and lexeme
Evidence- picture naming
*Long-term priming- after 10-12 mins you are faster at naming a picture you named before
*What is it that primes picture naming- meaning? form?
*Homophone priming does not persist, so it was not the word form that gave the priming- must be meaning?
BUT there is no facilitation across languages, so it is not meaning either
-It is the LINK between form and meaning that is important
picture-word interference
-naming pictures as quick as possible whilst ignoring distractor words
-If the word is semantically related to picture (cat picture, dog word)- you will be slower
-If phonologically related (picture cat, word cap)- you will be faster
-you can manipulate the time course to determine how quickly the semantic and phonological info come into the system, using different SOAs (stimulus onset asynchronies)- the distractor word is presented either just before the picture, at the same time, or just after (±150 ms)
- if you have semantically related distractor just before picture of cat you are slower
- but later on, if you hear phono related word you will speed up
*Means that the semantics come in earlier than phonology and that is evidence for the 2-stage model
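The predicted pattern can be summarised in a couple of lines of Python (qualitative predictions only, taken from the notes above; no real reaction-time data):

```python
# Predicted effect of distractor type x SOA on picture naming, relative to an
# unrelated distractor (per the two-stage model: semantics early, sound late).
PREDICTIONS = {
    ("semantic",     "early (-150 ms)"): "slower (interference at lemma selection)",
    ("phonological", "late (+150 ms)"):  "faster (facilitation at phonological encoding)",
}

for (dtype, soa), effect in PREDICTIONS.items():
    print(f"{dtype:13s} distractor, SOA {soa:15s} -> {effect}")
```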
is lexicalization discrete or interactive?
Discrete: first stage MUST be completed before second stage can begin! Interactive = one level of processing impacts on the operation of another level.
Cascading = information from one level is passed on to the next level before processing has completed. “Leakage” between levels and non-target lemmas become partially activated.
cascaded processing
-Mediated priming (Levelt et al., 1991): saying "sheep" facilitates recognition of "goat" (a semantic relative). BUT does "goat" then go on to facilitate its phonological neighbours (e.g., "goal")?
- Early inhibitory priming of "goat" (semantic)- at the lemma level
- Late priming of "sheet" (phonological)- at the lexeme level
- NO priming of "goal", which is evidence against cascading/feedback
However, there is some evidence supporting cascading- Peterson and Savoy (1998)- mediated priming for near synonyms e.g., couch primes sofa which primes soda
Feedback evidence
Lexical bias- when people make speech errors, the errors are more likely to result in existing words than in novel ones, e.g., "hissed my mystery lesson" rather than "missed my lystery messon"
-perhaps because at the phonological level there is feedback, and non-words don't have a lemma representation (they don't exist), so you are less likely to activate them
Similarity effects- mixed substitution errors (both phono and sem related to target words) occur more often than chance e.g., comma and colon
HOWEVER, diff view argues output monitoring rather than feedback:
*We monitor what we are saying or going to say
-we are more likely to catch and suppress a non-existing word than a real word
-we are less likely to have slips that result in taboo words, e.g., "hit shed"
*In general, the evidence is largely in favour of a weakly interactive system (Dell's interactive model)
Word reading
-reading is a very important cognitive skill in a modern society
-costs of problems with reading
- 796 million people cannot read and write
- the UK is relatively literate, but 16% are considered functionally illiterate
- 17% of 15-year-olds do not attain a minimum level of proficiency
- huge economic and social costs of low literacy (costs the UK economy around £2bn a year; socially, a higher likelihood of depression, drug use, prison, etc.)
Main techniques used in psycholinguistics:
- LDT (lexical decision task)- decide whether a letter string is a word or not; sometimes priming is involved
- Naming- name words aloud (one can measure how long it takes to start naming the word)
- Eye-tracking- people simply read sentences or texts while their eye movements are recorded
Visual word recognition
*First stage in the reading process (getting from letters to the meaning of a word)
-Words are made up of a small set of symbols in combination
-what representations are used to go from words to meaning/used to access the mental lexicon?
-mental lexicon= systematic organisation of words in our brain (60,000-70,000 words)
-what are the units, calculated from the visual input, that are used to address the mental dictionary? (single letters, grapheme clusters, syllables, morphemes?)
Are graphemes used to access visual words?
Graphemes are letters and letter groups that correspond to one sound (phoneme), hence they act as a "functional bridge" between phonology and orthography. "Bread" has 4 graphemes: "b" "r" "ea" "d"; "broad" has 4; "shower" has 3
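A minimal sketch of how grapheme segmentation could work, assuming a tiny invented grapheme inventory and greedy longest-match (real inventories are far larger and segmentation is genuinely ambiguous):

```python
# Tiny illustrative grapheme inventory; multi-letter graphemes are tried first.
GRAPHEMES = {"ea", "oa", "ow", "er", "sh", "b", "r", "d", "s", "h", "w"}

def segment(word):
    """Greedy longest-match segmentation of a word into graphemes."""
    units, i = [], 0
    while i < len(word):
        for size in (2, 1):                  # prefer two-letter graphemes
            chunk = word[i:i + size]
            if chunk in GRAPHEMES:
                units.append(chunk)
                i += size
                break
        else:
            raise ValueError(f"no grapheme matches at {word[i:]!r}")
    return units

print(segment("bread"))   # ['b', 'r', 'ea', 'd']  -> 4 graphemes
print(segment("shower"))  # ['sh', 'ow', 'er']     -> 3 graphemes
```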
-If graphemes are perceptual units:
- finding "A" should take longer in "boad" than in "brash"
- the grapheme "OA" would need to be broken up to find the "A"
-If there are no grapheme units:
- finding "A" in "boad" and "brash" should take equally long
Result?
Are morphemes access units in reading?
Morphemes are the smallest meaningful units of language (a morpheme can be a word itself ("deck") or part of a word ("de-brief")). Do we read "unreal" as a single word or through its parts, "un" and "real"?
There are some complications, i.e., pseudo-affixed words- e.g., a "corner" is not something that "corns" the way a "farmer" farms
Morphemes?
-Condition 2 is faster with fewer errors, even though the prime "TEA" is shorter than "TEASP"
-Condition 1 is slower, so fewer letters is not better in general
Conclusion- morphemes are access units as well (not a bigger or smaller unit)
Pseudo suffixes?
Are letters processed in parallel or serially?
Are there word length effects in reading?
Results
-clear effect for non-words: since we don't have these words in our mental lexicon, we need serial grapheme-phoneme conversion to pronounce them (convert letter-by-letter into sound)
-weak effect for LF words, as they have fewer word neighbours (neighbourhood effect)- less help from similar-looking words
-SO, no clear evidence for a length effect
-if reading proceeds letter-by-letter, we expect to see a length effect for both LF and HF words
conclusion- letters are processed in parallel (kind of)
Are words that share letters activated during recognition?
-During the process of recognition, we activate a set of words that are coded in a similar way- this is known as Coltheart's N or the orthographic neighbourhood
-this is investigated with form-based (orthographic) priming, e.g., if "loup" is presented briefly, will "loud" get pre-activated in the lexicon?
Conclusion- are words that share letters activated during recognition?
YES- a prime that shares letters help a target to be recognised faster (for word and non-word primes)
Are words that share letters connected in the lexicon?
YES- but negatively, not positively: if a prime word is CONSCIOUSLY recognised, it inhibits other similar words, e.g., lord - loud
-not so for non-word primes e.g., loup - loud
How does info flow in the system?
Are there feedback connections between letters and words?
Results
Conclusion
- Info from words helps in letter identification
- there is feedback from words to letters
- this is known as the word superiority effect
Other basic phenomena
frequency effect- HF words recognised faster than LF words
age of acquisition (AoA) effect- words you learned at a younger age are recognised faster
semantic priming effect- e.g., faster to recognise "cat" as a word if it is preceded by the prime "dog" (prime and target need to be closely related)
FREQUENCY
*Most robust predictor- a word that is more frequent is recognised faster and more accurately
-this is adaptive: it makes sense to have faster access to something you will encounter more often
-readers already pick up the frequency of the next word within about 100 ms of looking at the word before it
Orthographic similarity
*Orthographic neighbour= a word that is spelled the same except for one letter, e.g., "bear": "beer", "beat", "rear", etc.
-a word must compete with its neighbours to be recognised
-but there is a faster reaction time for words with more orthographic neighbours, because the neighbours provide extra support for recognising the word's letters
-some words come from a dense orthographic neighbourhood, others from a sparse one
-individual differences: better readers are better at suppressing neighbours
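Coltheart's N is straightforward to compute: count the words of the same length that differ at exactly one letter position. A minimal sketch (the mini-lexicon is a stand-in; real studies use large lexical databases):

```python
def is_neighbour(w1, w2):
    """Orthographic neighbours: same length, differing at exactly one position."""
    return (len(w1) == len(w2)
            and sum(a != b for a, b in zip(w1, w2)) == 1)

def coltheart_n(word, lexicon):
    return [w for w in lexicon if is_neighbour(word, w)]

LEXICON = {"bear", "beer", "beat", "bead", "rear", "boar", "bet"}
print(sorted(coltheart_n("bear", LEXICON)))
# -> ['bead', 'beat', 'beer', 'boar', 'rear']  ("bet" is excluded: wrong length)
```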
Phonological similarity
PHONOLOGICAL SIMILARITY
*Phonological neighbour= a word that differs in one sound/phoneme, e.g., "gate": "get", "got", "hate", "bait"
-faster reaction time for words with many phonological neighbours, because they provide extra support for recognising the word
Regularity and consistency
REGULARITY *Whether a word follows the most statistically reliable spelling-to-sound correspondence rules
regular= hour, sour, flour
irregular= pour
CONSISTENCY *Whether the word is pronounced like similarly spelled words
consistent= kind- mind, bind, find
inconsistent= have- gave, save, cave
**highly correlated but can be separated e.g., cost is regular but inconsistent (host, most)
-regular and consistent words are recognised faster
Semantics
*Words that are semantically richer are recognised faster (more semantic features/neighbours, higher imageability, higher sensory experience, more concrete, degree of emotional valence (positive, neutral, negative), degree of arousal (high, low))
-sleep= positive and low-arousal
-snake= negative and high-arousal
*High-arousal and negative-valence words take longer to process
*This seems strange given that we said we go from the letters all the way to meaning, yet now meaning appears to affect recognising the letters
-it makes sense, if you think interactively, that something at the semantic level gives feedback on the partial activation that you have
Models of word recognition
The 3 models we are about to discuss differ on 2 main dimensions:
How are word entries searched in the mental dictionary?
- word entries are searched one at a time (serially)
- word entries are searched in parallel
How does info flow?
- strictly bottom up/one way (letters>words)
- interactive (letters-words)
Forster's search model
(A simple serial search model) -There are a number of steps that take you from individual letters to the meaning of a word
- recognise letters (e.g., COW)
- access units- e.g., first letter- chooses a bin such as C bin for COW (could be letters, graphemes, syllables, meaning)
- once in bin, entries are searched 1 by 1- serial search- bins are ordered by frequency
- once you have accessed word COW, you can use master file to get meaning
Forster's model 2
*Note there is a parallel stage to decide which bin to search, but the time to find a word is based on the serial (frequency-based) search
*Words searched 1 by 1 not letters searched 1 by 1
*Simple model- do we need anything more complicated/more parallel…
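A minimal sketch of the serial, frequency-ordered bin search (the bin contents, their order and the master-file entry are invented for illustration):

```python
# Bins keyed by an access code (here: first letter); entries sorted by
# frequency, most frequent first (invented order).
BINS = {
    "c": ["can", "come", "cow", "cob"],
}
MASTER_FILE = {"cow": "large farm animal that moos"}   # meaning lookup

def recognise(word):
    bin_entries = BINS.get(word[0].lower(), [])         # parallel stage: pick the bin
    for steps, entry in enumerate(bin_entries, 1):      # serial stage: one by one
        if entry == word:
            return MASTER_FILE.get(word), steps         # more steps = slower recognition
    return None, None

meaning, steps = recognise("cow")
print(meaning, f"(found after {steps} comparisons)")    # LF words need more comparisons
```

The serial, frequency-ordered loop is what gives the model its frequency effect: low-frequency words sit deeper in the bin, so they take more comparisons to reach.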
The Logogen model (Morton's)
-sends info about letters to all word detectors (logogens) all at once
-info only feeds forwards from letters to words
-each logogen has a threshold level e.g., based on frequency (high threshold for LF word and low threshold for HF word)
-when you have accumulated enough activation to pass the threshold, the logogen fires and the word is recognised
-you also have a context system which accounts for predictability (if word is very predictable in a sentence e.g., “I'm going to write a letter and in the corner of the envelope put a...stamp”- stamp is very predictable so don't need as much info in order to recognise it)
The Logogen model 2
-info accumulates from bottom up in 1 direction from features>letters>words
-if not enough info has accumulated, the letter/word is not recognised e.g., cob will be activated by cow but not enough to be recognised
*Newer model: logogens for diff modalities-
- visual and auditory logogens
- maybe even reading, writing, listening and speaking logogens
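A minimal sketch of logogen-style evidence accumulation, with invented activation values and thresholds (a lower threshold for the HF word, and a context boost standing in for the context system):

```python
# Each logogen accumulates activation from matching letters and fires when it
# crosses its threshold. HF words have lower thresholds (invented numbers).
LOGOGENS = {
    "cow": {"threshold": 2.5},   # high-frequency: lower threshold
    "cob": {"threshold": 3.0},   # low-frequency: higher threshold
}

def present(letters, context_boost=None):
    recognised = []
    for word, props in LOGOGENS.items():
        activation = sum(1.0 for a, b in zip(word, letters) if a == b)
        if context_boost == word:    # predictable words need less sensory evidence
            activation += 1.0
        if activation >= props["threshold"]:
            recognised.append(word)
    return recognised

print(present("cow"))                       # ['cow'] -- 'cob' reaches 2.0 < 3.0
print(present("co?", context_boost="cob"))  # ['cob'] -- context pushes it over threshold
```

Note how the first call reproduces the note above: "cob" is partially activated by "cow" but never fires.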
R&M Interactive Activation and Competition (IAC) model
- extracting features e.g., does letter consist of a horizontal line, a vertical line etc.
- those features then activate individual letters- all letters are activated all at once (parallel) and letters activate words all at once
- difference from logogen model is that words feed activation back to letters (within level and between level connections) AND
- letter perception is faster/stronger for letters in active words because of feedback (you will recognise a letter faster if it is in a word- word superiority effect)
IAC 2
IAC 3
IAC model summary:
- parallel activation of letters and words
- feedback from words to letters
- HF words have higher resting levels of activation (not lower thresholds- diff from logogen)
- all words have the same threshold to cross for recognition
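One simplified IAC-style update step, with invented resting levels and parameters, showing the two key ingredients: bottom-up letter-to-word activation and top-down word-to-letter feedback (the basis of the word superiority effect):

```python
# Simplified IAC step: letters excite words bottom-up; active words then feed
# activation back to candidate letters (invented weights, no inhibition).
RESTING = {"work": 0.2, "word": 0.2, "fork": 0.0}   # HF words rest higher

def step(letter_input, bottom_up=0.3, feedback=0.5):
    # Bottom-up: each unambiguous matching letter excites the word.
    word_act = {
        w: rest + bottom_up * sum(a == b for a, b in zip(w, letter_input) if b != "?")
        for w, rest in RESTING.items()
    }
    # Top-down: candidate letters for the ambiguous slot get word feedback.
    slot = letter_input.index("?")
    letter_act = {}
    for w, act in word_act.items():
        letter_act[w[slot]] = letter_act.get(w[slot], 0.0) + feedback * act
    return word_act, letter_act

words, letters = step("wor?")
print(words)    # {'work': 1.1, 'word': 1.1, 'fork': 0.6}
print(letters)  # {'k': 0.85, 'd': 0.55} -- the ambiguous letter is easier to
                # identify in a word context than it would be in isolation
```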
Model table
Parallel distributed processing model (PDP)
*An extension of the interactive activation model
-what happens is:
- you get a word in and process it orthographically
- in order to pronounce it you have to activate its phonology (either directly, orthography→phonology, in which case you wouldn't activate the meaning of the word, or via meaning)
-these mappings are implemented by hidden units- in between orthography and phonology, orthography and meaning, etc., there are layers of hidden units
-initially these units are set at random- you guess and get feedback
-eventually you adjust weights/connections and link orthography to phonology and meaning
-importantly, there are no rules in this model, but it behaves as if there are rules
-e.g., the NETtalk neural network
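A minimal PDP-flavoured sketch, assuming nothing about the actual published implementations: a tiny hidden-layer network with random initial weights learns an invented "orthography-to-phonology" mapping purely from error feedback- no explicit rules anywhere.

```python
import math, random

random.seed(0)

# Invented binary patterns standing in for letter features -> sound features.
DATA = [([1, 0, 1, 0], [1, 0]),
        ([0, 1, 0, 1], [0, 1]),
        ([1, 1, 0, 0], [1, 1])]

H = 3  # hidden units between "orthography" and "phonology"
w_ih = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(H)]
w_ho = [[random.uniform(-1, 1) for _ in range(H)] for _ in range(2)]

def sig(x):
    return 1 / (1 + math.exp(-x))

def forward(x):
    h = [sig(sum(w * xi for w, xi in zip(row, x))) for row in w_ih]
    o = [sig(sum(w * hi for w, hi in zip(row, h))) for row in w_ho]
    return h, o

for _ in range(5000):                 # guess, get feedback, adjust the weights
    x, t = random.choice(DATA)
    h, o = forward(x)
    d_o = [(t[k] - o[k]) * o[k] * (1 - o[k]) for k in range(2)]
    d_h = [h[j] * (1 - h[j]) * sum(d_o[k] * w_ho[k][j] for k in range(2))
           for j in range(H)]
    for k in range(2):
        for j in range(H):
            w_ho[k][j] += 0.5 * d_o[k] * h[j]
    for j in range(H):
        for i in range(4):
            w_ih[j][i] += 0.5 * d_h[j] * x[i]

for x, t in DATA:
    print(t, [round(v, 2) for v in forward(x)[1]])  # outputs move toward targets
```

The weights start random and end up encoding the mapping, so the network behaves "as if" it had rules, which is the PDP claim in miniature.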
Dual route model of reading
*We need to be able to read regular/predictable words, irregular/unpredictable words, and non-words
*The non-lexical route can also be used for regular words
-the lexical route has to be used for irregular words
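A minimal sketch of the two routes, with an invented mini-lexicon and grapheme-phoneme correspondence (GPC) rules; clearing the lexicon crudely mimics the surface dyslexia pattern discussed below:

```python
# Lexical route: whole-word lookup. Non-lexical route: GPC rules, letter by letter.
LEXICON = {"pint": "/paint/", "have": "/hav/"}
GPC = {"p": "p", "i": "i", "n": "n", "t": "t",
       "h": "h", "a": "a", "v": "v", "e": "", "l": "l", "g": "g"}

def read_aloud(word):
    if word in LEXICON:                  # lexical route: needed for irregular words
        return LEXICON[word]
    return "/" + "".join(GPC[ch] for ch in word) + "/"  # non-lexical: handles non-words

print(read_aloud("pint"))    # /paint/ via the lexical route
print(read_aloud("plage"))   # /plag/ via GPC rules (a non-word)

# Crudely simulating surface dyslexia: with the lexical route gone, 'pint'
# goes through GPC and comes out regularised -- rhyming with 'mint'.
LEXICON.clear()
print(read_aloud("pint"))    # /pint/ -- a regularisation error
```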
Testing dual route model
Can test this model by looking at neuropsychology (ppl who have suffered brain damage)- e.g., if there is damage to the lexical system, there would be difficulty reading irregular words- regularisation errors (pint would rhyme with mint)
-Can provide evidence that there are multiple routes
Surface dyslexia- the lexical route is impaired: irregular/exception word reading is poor (regularisation errors, e.g., "pint" read to rhyme with "mint"), while regular and non-word reading is preserved
Non-lexical route impaired evidence
Non-lexical route impaired (is there a double dissociation?)- exception and regular word reading are good, but non-word reading is bad
-low-frequency word reading is worse than high-frequency (GPC contributes for LF words)
-the type of error you would expect with non-words is a lexicalisation error (a non-word activates the closest word via the lexical route, e.g., "plage" read as "page")
Phonological dyslexia
double dissociation
*These data strongly indicate independent capacities (the logic of double dissociation), BUT we should confirm this by looking at neurotypical participants
- however, we don't get strong evidence:
*Lexical neighbours influence both pronunciation times and errors for non-words, SO not all non-words are processed in the same way via the dual route model
Regularity effects on reading words
Summary
Workshop speech errors
- semantic errors more common than phonological errors- what you say is more important than how you say it
- grammatical class is preserved more often in semantic than in phonological errors
Syntax
*A lot of early work in psycholinguistics was inspired by Noam Chomsky
-distinguishes between competence and performance
Competence= your knowledge of the language (whether what we hear is correct)- allows grammaticality judgements, even of sentences we have never heard before (linguists are most interested in this)
Performance= the actual use of language in concrete situations- not necessarily the same as our competence, due to memory limitations, hesitations, errors, distractions (psycholinguists are more interested in this)
-also influential in aspects of language development
Grammaticality judgments
Grammaticality judgments are not the same as sensicality judgments (they are not based on rules learned at school, prior experience, or meaning, but on implicit knowledge of the syntactic rules of your language)
The boy found the ball. - grammatical
Linda slept soundly. - grammatical
Linda slept the baby. - not grammatical
Bill tries Rob to be a gentleman. - not grammatical
Up the hill ran Jack and Jill. - grammatical
Colourless green ideas sleep furiously. - grammatical (semantically odd, but fine grammatically)
Colourless ideas green sleep furiously. - not grammatical
*Important part of Chomsky’s model is that grammar=generative (finite number of rules to generate infinite number of sentences)- due to a property of language known as recursion (when a rule refers to a version of itself in its definition)
Phrase structure trees
-Each sentence of a language can be described in terms of hierarchical groupings of its constituent words labelled for syntactic category
-we can represent how words or groups of words (constituents) relate to each other
-phrase structure tree or tree diagram
-This does not mean we have these trees in our heads or construct them during production/comprehension (however we somehow generate some kind of structure representing who did what to whom during parsing)
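One way to make the hierarchical grouping concrete is to write a tree as nested tuples- purely a notational sketch, not a claim about mental representation. The two readings of the umbrella sentence discussed below correspond to two different groupings of the same words:

```python
# "The girl hit the man with the umbrella": two constituent structures.
reading_instrument = (
    "S", ("NP", "the girl"),
         ("VP", ("V", "hit"),
                ("NP", "the man"),
                ("PP", "with the umbrella")))   # PP attaches to VP: hit using the umbrella

reading_possession = (
    "S", ("NP", "the girl"),
         ("VP", ("V", "hit"),
                ("NP", ("NP", "the man"),
                       ("PP", "with the umbrella"))))  # PP inside NP: the man holds it

def constituents(tree, depth=0):
    """Pretty-print the hierarchical grouping."""
    if isinstance(tree, str):
        print("  " * depth + tree)
        return
    label, *children = tree
    print("  " * depth + label)
    for child in children:
        constituents(child, depth + 1)

constituents(reading_instrument)
```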
On-line incremental parsing
*Parsing means that you are constructing a syntactic structure on the basis of the words as they arrive, based on our syntactic knowledge
-incremental means you are starting to do this as soon as words come in rather than waiting to hear the whole sentence
-people can interpret sentences differently, which results in different syntactic structures- so, by looking at reading times, you can see which structure people were actually building
e.g., “the girl hit the man with the umbrella” can be interpreted in diff ways and syntactic structure would be different
How do we know, when we are reading, which structure to follow? -
Parsing models
Parsing models try to explain how we build up syntactic structure- how we put words/chunks of words together
One of the main differences between the models has to do with the question of encapsulation: are diff sources of knowledge (e.g., syntax, semantics, discourse…) separate, specialized components, and/or do they interact with each other?
*Generally accepted that they do interact but when?
- Is this immediate, or are there independent stages? e.g., syntactic info first and then the rest (e.g., semantics)- this fits with 2-stage, modular accounts/serial processing
-or e.g., is all info being used at the same time- interactive account/interactive processing
method= test sentences which can be misinterpreted (umbrella) and see which structure takes longer- if one structure takes longer, this means that you had built a different structure first and you had to recover from that
*This shows that there are certain biases in syntactic processing
Modular accounts
-Frazier (1987): The Garden-path model
Stage 1- parsing is done solely on the basis of syntactic preferences- there are two principles for this:
- Minimal attachment (go for the simplest structure, i.e., the one with the fewest nodes)
- Late closure (incorporate words into the currently open phrase or clause if possible, i.e., link incoming material with the most recent material). When the two principles conflict, MA takes precedence
Stage 2- if the parse is incompatible with (following) syntactic, semantic, thematic etc. information, reanalysis occurs
Modular accounts 2
Interactive accounts
-constraint-based models
-all potentially relevant sources of information (constraints) can be used immediately to help syntactic parsing (inc semantics, discourse, frequency of a syntactic construction)
-all possible syntactic analyses are generated in parallel with the activation of each analysis dependent upon the support available at that moment
Definitions 4
pragmatics- helps us look beyond the literal meaning of words and utterances and allows us to focus on how meaning is constructed in specific contexts. When we communicate with other people, there is a constant negotiation of meaning between the listener and the speaker. Pragmatics looks at this negotiation and aims to understand what people mean when they communicate with each other.
morpheme-*any of the minimal grammatical units of a language, each constituting a word or meaningful part of a word, that cannot be divided into smaller independent grammatical parts, as the, write, or the -ed of waited.
generative grammar- a precisely formulated set of rules whose output is all (and only) the sentences of a language
phrase structure rules- They are used to break down a natural language sentence into its constituent parts, also known as syntactic categories
Definitions 5
parsing- resolve (a sentence) into its component parts and describe their syntactic roles
modular- in this context: made up of separate, specialised, encapsulated processing components (modules), e.g., a syntax module that initially operates independently of semantics
garden-path sentence- a grammatical sentence that initially leads the reader toward the wrong parse, producing temporary ambiguity or confusion (e.g., "The horse raced past the barn fell")
minimal attachment- Minimal attachment is a strategy of parsimony- The parser builds the simplest syntactic structure possible (that is, the one with the fewest phrasal nodes)
late closure- the principle that new words (or "incoming lexical items") tend to be associated with the phrase or clause currently being processed rather than with structures farther back in the sentence
Semantics
-Reading proceeds incrementally: each incoming word is processed roughly immediately- but to what degree?
-How do we get to the correct interpretation of a word in context? - different mappings between language and concepts
-What semantic info gets activated upon reading a word?
-Unlike syntactic processing/parsing, there is an unlimited number of possibilities (a sentence with 10 words and 20 candidate interpretations per position = 20^10, roughly 10^13 combinations)
-In order to handle this, semantic processor needs to be flexible so it can deal with the variety of input quickly
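As a worked line (taking the figures above of 20 candidate interpretations at each of 10 word positions):

$\underbrace{20 \times 20 \times \cdots \times 20}_{10\ \text{positions}} = 20^{10} = 1024 \times 10^{10} \approx 10^{13}$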
Lexical ambiguity
Homonym: word with 2 unrelated interpretations/meanings
-Can tell us something about modularity and the info flow in a cognitive system (e.g., whether the modules of syntax, pragmatics and semantics interact with each other immediately, or whether there is a more directional flow)
older models:
- selective access model- context restricts access to contextually appropriate meaning
- ordered access model- activation on basis of meaning frequency, tried against context
- parallel access- all meanings activated
older experiments:
- some evidence for multiple access, some for selective access- highly dependent on task
Cross-modal priming
Eye tracking and homonyms
-When there is no disambiguating info before a biased homonym ("Last night, the port had a strange flavour"), it takes longer to read the region after "port" than the region after "soup"
-When there is disambiguating info before, it takes longer to look at the word "port" itself than at "soup". *This indicates that as soon as readers see "port", they activate the more frequent (harbour) meaning, and there is competition
-When there is no disambiguating info before a balanced homonym, it takes longer to read "coach" than "cabin", because the two meanings of "coach" are similar in frequency and so compete
Newer eye-tracking research
-effects of meaning frequency and context (the frequency of the two meanings matters, and so does whether there is a disambiguating context beforehand)
-proposed the reordered access model
-Hybrid of exhaustive and selective access models
-Prior context/disambiguating context can give “contextual boost”, increase the activation level of one meaning
-Homonyms can be biased or balanced, depending on the relative frequencies of the meanings
-subordinate bias effect= the subordinate/less frequent meaning will always get you into trouble
Re-ordered access model
biased- if there is a disambiguating context beforehand telling you it should be the less frequent meaning, that meaning gets a boost- then both meanings are in competition with each other
balanced- without any contextual information, the meanings compete as soon as you read "pupil"; but if there is disambiguating information beforehand, that meaning gets a contextual boost- no more competition
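A minimal sketch of the reordered-access logic (the frequencies, boost and competition margin are invented numbers): each meaning starts at a frequency-based activation, a disambiguating context adds a boost, and competition- hence longer reading times- arises when the top two activations end up close.

```python
def reordered_access(meanings, context=None, boost=0.5, margin=0.2):
    """meanings: {sense: relative frequency}; context: sense the prior context supports."""
    act = dict(meanings)
    if context:
        act[context] += boost          # contextual boost from disambiguating info
    ranked = sorted(act.items(), key=lambda kv: -kv[1])
    competition = ranked[0][1] - ranked[1][1] < margin
    return ranked, competition         # competition -> longer reading times

# Biased homonym 'port': dominant 'harbour' sense vs subordinate 'wine' sense.
print(reordered_access({"harbour": 0.8, "wine": 0.2}))
# -> harbour wins outright, no competition

print(reordered_access({"harbour": 0.8, "wine": 0.2}, context="wine"))
# -> the boosted subordinate sense now rivals the dominant one: competition
#    (the subordinate bias effect)

# Balanced homonym: the senses compete unless context boosts one of them.
print(reordered_access({"coach_bus": 0.5, "coach_trainer": 0.5}, context="coach_trainer"))
# -> the boost resolves the competition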
Lexical polysemy
(Multiple senses)
*Words that have different interpretations, but the interpretations are semantically related to each other (senses rather than meanings, unlike in lexical ambiguity)
-a mild case of one-word-to-many-interpretations mapping
-most words are polysemes, as a lot of words have distinct interpretations which are semantically related
e.g., book is very tattered (physical), or book is very scary (content)
Metonymy
*In order to investigate lexical polysemy- use metonymy
Metonymy 2
*As soon as you see school, which interpretation/sense are you going to go for?
school can be interpreted in many diff ways with many diff senses
-what are the possibilities? literal-first, figurative-first
Access procedures
According to parallel models, all existing senses of a word are activated. There are two versions of parallel models. In the first, unranked parallel, all senses are activated to the same degree. In the second, ranked parallel, the degree of activation depends on, e.g., frequency of occurrence. (It doesn't need to be frequency; a version of this model might take the saliency of the two senses as a criterion, or whether one sense is more basic than the other.)
A ranked parallel model using frequency is very similar to what is proposed for homonyms, where the degree of activation of the two meanings is affected by how frequent the two meanings are.
Place-for-institution metonymy (e.g., "school" as a building vs as an institution)
Predictions
Results
Only difficulty for words that did not have an established metonymic sense ("talked to the bridge", "protested during Finland")
-No difficulty when the metonymic sense is lexicalised (metonymic is as fast as literal)- you can read "he talked to the school" as fast as "he walked to the school"
-Seems to indicate that, unlike homonyms, not all one-to-many mappings are difficult
-Different senses of a word do not produce longer reading times, but different meanings of a word do
-Sense frequency doesn't have an effect- whether "school" is used more often for the building or for the institution doesn't affect reading times
So, the results cannot be explained by the literal-first, figurative-first, or ranked parallel models.
Results 2
So, maybe the unranked parallel model?
-In which you activate all the senses at the same time, to the same degree.
BUT words have a lot of senses- whenever we read a word, do we really activate all these different interpretations all the time?
And why is there a frequency effect with homonyms but not with metonyms?
Alternative model
-Maybe people activate one single abstract, underspecified meaning, which is the same for all semantically related senses (and then use context to home in on a specific sense)
+context not used as a judge, but a tool to get to the right specific interpretation
+wrong sense hardly ever assigned
+simple, quick and flexible process
Pragmatics
Pragmatics= the distinction between what a speaker’s words literally mean and what the speaker might mean by his words
Now that words have been identified and syntactic structure built, how do we understand the meaning of text and discourse? Through:
- word meanings for individual words
- the principle of compositionality (word meanings and how they are combined)
- inferences
- the reader's own comprehension skill
*The reader assumes the text is coherent (not just random sentences thrown together) and cohesive (characters, events, objects remain the same)
Inferences
Inferences= the process of developing information that goes beyond the literal meaning of the text.
-the framing problem- when do you stop? How many inferences do you draw, and of what type?
Three main types:
- Logical- based on word meaning e.g., “fish” implies lives in sea
- Bridging- aka backward inference- relate new info to previous info in order to maintain coherence e.g., “James was driving to London. The car kept overheating”
- Elaborative- aka forward inference- extend the text with your own world knowledge- these become part of our memory of the text, practically indistinguishable from the original material
e.g., "The director and cameraman were ready to shoot close-ups when suddenly the actor fell from the 14th floor."- the inference would be that the actor is dead
Inferences 2
When are inferences made? Are they automatic?
-Minimalist approach vs constructionist approach
-Hybrid approach (search-after-meaning)
Minimalist- only 2 kinds of inferences encoded during reading automatically
- Those necessary for local coherence- connections between sentences that are no further than 1 or 2 sentences apart
- Those based on quickly and easily available info (based in STM)
-basically, logical and bridging inferences (+ a few easily available elaborative ones)
-most elaborative inferences are made at the time of recall (hence not during processing)
Inferences 3
*An inference that would take too much time is not drawn immediately
Constructionist/search-after-meaning approach
Constructionist approach- numerous elaborative inferences are typically drawn during reading
-In general, strong constructionist approach not well supported
Search after meaning approach-
*It turns out that when we process language, we very often process it only to quite a superficial, shallow and relatively incomplete level- we want to get the gist quickly, so we tend to ignore things that require more processing- this leads to errors
Individual differences in comprehension skills
-Less skilled comprehenders draw fewer inferences and are poorer at integrating meaning across sentences
-related to differences in verbal working memory, attention, suppression of irrelevant/inappropriate material etc.
*Understanding puns hinges on the ability to suppress the contextually more predictable meaning
Psychological essentialism
-for natural kinds (like animals), we assume psychological essentialism (identity judgments based on innate, essential underlying qualities such as DNA)
-for artefacts, identity judgments are based on superficial, perceptual features
*Younger children (under 4) don't reason that way yet- they rely on perceptual features
*For older children and adults, the identity of natural kinds does not change with surface appearance
So, we want to see whether we can find evidence for this online- that people do this as soon as they encounter this information
Psychological essentialism 2
Theory of mind
*Also interested in looking at how ppl take the perspective of someone during reading
-ToM is important for communicative tools (irony, sarcasm etc.)
Theory of mind 2
*So, we have evidence for immediate essentialist reasoning- a donkey with stripes stays a donkey, while a coffeepot turned into a bird feeder is now a bird feeder, because an artefact doesn't have essentialist qualities.
-with theory of mind this was more puzzling:
*We did not find any evidence that an inference was made based on taking someone else's perspective for the natural kinds- we didn't find a difference between the donkey and the zebra.
What's happening here?
-people have calculated their own perspective and the third person's perspective, and these two perspectives are not in agreement
Language development
(Different from learning a 2nd language- children are not taught the grammatical rules of their first language, or how they should combine words or word segments, and they are not given definitions of words)
How do they do this?
Stages of development:
Vocabulary growth:
±12 mo: production of the first word; ±15 mo: about 25 words or word fragments (e.g. "ba" for ball); ±2 yo: about 300 words; ±5 yo: 10,000-15,000 words- 10-20 new words a day (the vocabulary spurt/burst, or naming explosion); ±18 yo: 60,000 words
*4 main stages in development, very similar in all languages
Stages of language
-sound discrimination- babies are very good at discriminating and producing different sounds universally (including sounds found in other languages); they then become more language-specific
Pre birth
*Language processing already begins before birth:
-Fetuses hear (impoverished) sounds in the womb- high frequencies are blocked by the amniotic fluid- they can't hear individual words, but can hear prosody: rhythm, stress, intonation, duration
DeCasper & Spence (1986): 16 mothers read The Cat in the Hat aloud twice a day during the final 6.5 weeks of pregnancy (≈5 hrs total). Tested at birth with the same or a different story (The King, the Mice, and the Cheese). Newborns preferred the familiar story- they can distinguish prosody.
(When they heard The Cat in the Hat, they sucked harder on the dummy- the non-nutritive sucking method)
Early speech perception
*Very young infants are predisposed to listen to speech sounds and are aware of fine distinctions between them
Eimas et al. (1971); Kuhl & Meltzoff (1997): categorical perception in infants (can they distinguish the sounds of minimal pairs?): [ba] - sucking - habituation/boredom - [pa] - faster sucking
Under 1 y.o.: sensitive to speech sound distinctions that occur in other languages as well.
By 1 y.o.: sensitivity to foreign contrasts has diminished significantly. Pre-programmed to distinguish speech sounds? (Chinchillas can do it too.)
Early speech production
*Infants start off with crying, cooing (happy sounds) and laughing- this seems to be stimulus-controlled (an involuntary response to emotional states) and universal (containing all possible sounds)- the cooing of deaf children equals the cooing of hearing children
*4-7 months- more vocal play (speech-like sounds, vowels before consonants)
*Then the babbling stage- no other animals have this- it is hypothesised that babbling has the function of practising speech-like gestures or sounds, helping infants control the motor systems- deaf children, when exposed to sign language, will babble with their hands- the easy sounds (front of mouth, such as p, m, d) are learned earlier than hard sounds (f, r, l)
*Then the one-word stage (slow at first, then a rapid explosion around 18 months)- children can learn new words for objects after only 1 exposure (fast mapping), but it is unclear whether they retain these words if the words are not reinforced.
Contribution of infant
Contribution of infant 2
*All these things will help to initiate and maintain communication with others
*Intentional communication emerges between 8-10 months
-will show some interpretable reaction to some words
-showing, giving, pointing
-12 months- recognition of ±50 words and production of first words
Liszkowski (2006, 2008)- pointing in infants:
12-month-olds point to objects the experimenter needs (e.g. a key). This implies: understanding of the other's intention; understanding of the need for information; understanding of the effect of pointing (directing attention); and a wish to share information and to help
Contribution of the parent to communication
*Special type of speech parents use with their children
What helps lang learning
CDS (child-directed speech) can facilitate language learning but might not be essential for it.
- Clear turn-taking; clear eye contact and pointing; adaptation to the child's age; marking of word and phrase (syntactic) boundaries
- Word order: new words are placed in utterance-final position, which helps with word recognition (Thiessen et al., 2005)
- CDS is fairly universal (but not present in all cultures, e.g., the Kaluli of Papua New Guinea)
*In most instances the caregiver is talking to someone else, not the child- but just overhearing speech does not help with language development
-the child only gets about 20 minutes of direct input per day, so CDS facilitates language learning a bit; but given the large variation in the quality and quantity of input children are exposed to, one would expect much more variation in linguistic skills between children than there actually is
Definitions 6
holophrastic- the learning of linguistic elements as whole chunks by very young children acquiring their first language, for example it's all gone learned as allgone.
telegraphic- One of the well-known characterizations of children's early multiword utterances is that they resemble telegrams: they omit all items which are not essential for conveying the gist of the message.
high amplitude sucking- the HAS technique capitalises on infants' sucking reflex: infants hear a sound stimulus every time they produce a strong or "high-amplitude" suck. The number of high-amplitude sucks produced is used as an index of interest. Variants of the procedure can be used to test infants' discrimination of and preference for a variety of language stimuli.
categorical perception- is the perception of different sensory phenomena as being qualitatively, or categorically, different- The Japanese language doesn't differentiate between the /l/ and /r/ sounds, so through categorical perception, the brains of native Japanese speakers have learned to treat the two sounds as the same by actually hearing the same sound when each is spoken. However, because those sounds are differentiated in English (changing the l in 'lag' to an r will change the meaning of the word), native English speakers have learned to hear a difference between the two.
fast mapping- the ability to acquire a word rapidly on the basis of minimal information
motherese- the simple form of language mothers often use when talking to their babies:
Lang acquisition theories
- Behaviorist account (the older one)- language is used in response to stimuli; children learn language through imitation and reinforcement; newborns are a blank slate
ISSUES-
-the input children get is quite impoverished/degenerate (when people talk there are lots of hesitations, disfluencies, incomplete sentences, etc.) and insufficient (children are not exposed to enough examples to work out the underlying rules)
-imitation/reinforcement does not always happen- children often produce ungrammatical language that they never heard before, and parents rarely correct grammar (what matters more to the parent is whether what the child says is true- truth value)
Lang acquisition theories 2
- Nativist/Innatist accounts (e.g., Chomsky, Pinker)
-lang capacity is innate- LAD (language acquisition device)
-children do not need explicit instruction, don’t rely on imitation and reinforcement
-children worldwide learn grammar at approximately the same age: the idea is that you are born with Universal Grammar, which is then refined into the grammar of your specific language
- Constructivist/Cognitive accounts (e.g., Piaget)
-Not only due to genetic predisposition or imitation
-Lang is driven by cognitive development- children first need to develop mentally (e.g., creating schemas) before lang dev can happen
Lang acquisition theories 3
- Social accounts (Vygotsky, Bruner)
-lang has a social origin- focus on social interaction, social learning
-adults are very important by modelling and explaining concepts, culture
-LASS (Language Acquisition Support System)- Bruner
-It may be that some of these social skills are innate, and that language is built upon those. But it is unlikely that this can explain all aspects of language acquisition.
NATURE/NURTURE Behavior results from an interaction of nature and nurture. The main issue is to determine to what extent language development requires innate language specific mechanisms or general-purpose learning mechanisms that work on the language input received.
Learning meaning of a word
LEARNING MEANING OF A WORD
-How does the child make the connection between the sound and what is being referred to?
Early words: Short and easy to say. Frequent and relevant concepts. Include items from different syntactic (e.g., Nouns and Verbs) and semantic categories.
Errors:
- overextension: e.g., all animals are called "doggie"
- underextension: e.g., only roses are "flowers"
- overextension is more prevalent early on; underextension is less common and appears a bit later
- invented words: e.g., "circle toast" for bagel
Connection between sound and meaning
What are the mechanisms or principles that the child has in order to make that connection between the sound and the meaning?
-might be low level learning mechanisms (e.g., classical conditioning but this has a minor role in word learning)
-child needs to have an idea of what concepts they are- basic ontological categories (objects, properties, events, agents)- conceptual prerequisites
-child needs to have linguistic prerequisites- assuming that words have meaning, mapping is symbolic and consistent across time and speakers
Predispositions
PREDISPOSITIONS
*There are a number of innate predispositions/biases/assumptions that children have when it comes to assigning meanings to sounds:
- whole object assumption- assumes that a word is a label that refers to the whole object rather than to its parts
- shape bias- child will extend names to objects that are similar in shape rather than similar in colour, texture, function- starts at around 2 years
- mutual exclusivity assumption- an object can only have one label, so children assign a novel word to an object they don't already have a label for (see the sketch after this list)
- taxonomic assumption- assumption that a novel word which refers to one thing will also refer to similar things (can lead to overextensions); the relationship is taxonomic rather than thematic ("dog" is used for different types of dogs, but not for a dog's bone)
- Basic level category assumption- assume that a novel word refers to basic level (dog) rather than the superordinate (animal) or subordinate (poodle) level
- Noun-category bias- nouns are easier to learn than other syntactic categories (adjectives, verbs, etc.) as they are conceptually easier; while verbs are universally difficult to learn, there do seem to be differences across languages (some languages are more "verb-friendly")
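A toy illustration of the mutual exclusivity assumption mentioned above. This is a sketch only: the object names and the made-up novel word "blicket" are invented for the example, not taken from the source.

# Toy sketch of the mutual exclusivity assumption (illustrative only):
# hearing a novel word, the child maps it onto the object that does not
# already have a label, since each object is assumed to have one label.

def map_novel_word(novel_word, visible_objects, known_lexicon):
    """Guess the referent of a novel word via mutual exclusivity.

    visible_objects: objects in view (e.g., ["dog", "whisk"])
    known_lexicon: objects the child already has a label for
    """
    unlabelled = [obj for obj in visible_objects if obj not in known_lexicon]
    # An object can only have one label, so the novel word must name
    # the (single) object the child has no word for yet.
    return {novel_word: unlabelled[0]} if len(unlabelled) == 1 else None

# The child already knows a word for the dog but not for the whisk,
# so the novel word "blicket" is mapped onto the whisk.
print(map_novel_word("blicket", ["dog", "whisk"], {"dog"}))  # {'blicket': 'whisk'}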
Morphological development
*Rule-based linguistic development
Critical age hypothesis
CRITICAL AGE HYPOTHESIS
*Certain types of behaviour need to develop within a critical/sensitive period in order to develop normally (e.g., imprinting in ducklings)
*Something similar happens in human language development:
- lang is innate (LAD-Chomsky)
- biological events related to lang (e.g., lateralization) can only happen during a limited period of maturation
- during this critical period (+/- until onset of puberty) linguistic input is necessary for normal lang development
- Genie case study
Syntax
SYNTAX
Who taught you the rules of your language? Why do children learn language so quickly?
Why are languages so similar across the world? (All have nouns and verbs; all have a preferred word order, with the vast majority SVO (42%) or SOV (45%).)
*Suggests some kind of innate biases in language structure
Chomsky: language acquisition is guided by an innate device called the LAD or Universal Grammar (UG). The LAD provides the rules (invariant) and principles that allow a child to learn any language in the world. -No human culture on earth exists without language.
“Language learning is not really something that the child does; it is something that happens to a child placed in an appropriate environment, much as the child’s body grows and matures in a predetermined way when provided with appropriate nutrition and environmental stimulation.” (Chomsky, 1973)
Universal grammar
Universal grammar (UG) has a limited number of principles common to all languages. -defines the range of possible human languages. e.g., all languages have a subject even if not expressed.
UG provides a limited range of options or parameters (“switches”) which can be set and explains why there is variation amongst languages. The child’s job is to determine from the input which parameter is appropriate for their language- May take some time. -This limits errors and explains speed of learning despite incomplete input.
Preferred word order SVO SOV …
Pro-drop languages allow subject-less sentences (English "She does not want to eat" vs. Italian "Non vuole mangiare", '(she) does not want to eat'). *Often a certain parameter setting has broader implications: for example, if a language is pro-drop, then it has no dummy pronouns ("It's raining" vs. Italian "piove", 'rains').
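The "switches" metaphor can be pictured with a toy model. This is a sketch under the assumption that parameters are simple booleans; the class and property names are invented for illustration, not part of any linguistic theory. Setting the single pro-drop parameter also fixes whether the language needs dummy pronouns.

# Toy model (illustrative only) of UG parameters as binary "switches".
from dataclasses import dataclass

@dataclass
class GrammarParameters:
    pro_drop: bool  # the "switch": may the subject of a sentence be omitted?

    @property
    def uses_dummy_pronouns(self) -> bool:
        # Broader implication of the single switch: a pro-drop language
        # has no need for a semantically empty subject ("It's raining").
        return not self.pro_drop

italian = GrammarParameters(pro_drop=True)   # "Piove"
english = GrammarParameters(pro_drop=False)  # "It's raining"

print(italian.uses_dummy_pronouns)  # False
print(english.uses_dummy_pronouns)  # True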
Evidence- syntax
Vocabulary and syntax
VOCABULARY AND SYNTAX
- Acquisition of syntax goes hand in hand with lexical learning
- Clear correlation between lexical and syntactic development.
- First evidence of grammatical knowledge: when words are combined.
- Multi-word utterances arise when child knows about 50 words (~2 years of age).
- Transition stage: 2 words as unanalyzed wholes: Iwant, gonna, Idontknow,…
Calculating MLU
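MLU (mean length of utterance) is the standard index of early syntactic development (Brown, 1973): the total number of morphemes in a speech sample divided by the number of utterances, usually taken over a sample of about 100 utterances. A minimal sketch, assuming the morphemes have already been segmented by hand (the example utterances are invented):

utterances = [
    ["want", "cookie"],         # "want cookie"    -> 2 morphemes
    ["doggie", "run", "-ing"],  # "doggie running" -> 3 morphemes
    ["all", "gone"],            # "all gone"       -> 2 morphemes
]

def mlu(utterances):
    """Mean length of utterance: total morphemes / total utterances."""
    return sum(len(u) for u in utterances) / len(utterances)

print(f"MLU = {mlu(utterances):.2f}")  # (2 + 3 + 2) / 3 = 2.33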
Stages of syntactic development
Use of context during processing
USE OF CONTEXT DURING PROCESSING
*Do 5-year-olds use context to help them process ambiguous information (Trueswell et al., 1999)? Is this different from adults?
*Use of garden-path sentences (sentences that one analyses in a certain way, but whose initial analysis turns out to be incorrect)
-Most famous one: "The horse raced past the barn fell"
-Use of an eye-tracker
Use of context during processing 2
DEPENDENT VARIABLES (note: "dependent", not "dependant")
*Measured: percentage of looks at the incorrect destination (the empty napkin); percentage of incorrect placements of objects (putting the frog on the napkin instead of in the box)
-some children even put the frog on the napkin and then put the frog+napkin in the box
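A hypothetical sketch of how such percentages could be computed from coded trials. The data format is invented for illustration; this is not Trueswell et al.'s (1999) actual analysis pipeline.

trials = [
    # one (invented) row per trial, coded from the eye-tracking record
    {"looked_at_empty_napkin": True,  "put_frog_on_napkin": False},
    {"looked_at_empty_napkin": True,  "put_frog_on_napkin": True},
    {"looked_at_empty_napkin": False, "put_frog_on_napkin": False},
    {"looked_at_empty_napkin": False, "put_frog_on_napkin": False},
]

def percent(trials, key):
    """Percentage of trials on which the coded event occurred."""
    return 100 * sum(t[key] for t in trials) / len(trials)

print(percent(trials, "looked_at_empty_napkin"))  # 50.0: % looks to wrong destination
print(percent(trials, "put_frog_on_napkin"))      # 25.0: % incorrect placements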
Results
Conclusions and key messages
CONCLUSIONS
*In adults, context (1 or 2 frogs) biases the interpretation of the ambiguous sentence
*Children do not use this context ("on the napkin" is always interpreted as a destination)
KEY MESSAGES
- There is a critical age period when language learning is easier
- Universal Grammar (with LAD) allows the learning of any language ↔ solely input-driven (“nurture”)
- syntactic development can be measured using MLU
- Children do not use contextual clues as well/extensively as adults