Hadar, U., Wenkert-Olenik, D., Krauss, R.M., & Soroker, N. Gesture and the processing of speech: Neuropsychological evidence. Brain and Language (in press).


Patterns of speech-related ('coverbal') gestures were investigated in three groups of right-handed, brain-damaged patients and in matched controls. One group had anomic aphasia with a primarily semantic impairment ('semantic'); one group had a primarily phonological impairment, reflected in both repetition and naming ('phonologic'); a third group had a primarily conceptual impairment, with relatively good naming ('conceptual'). Coverbal gestures were video recorded during the description of complex pictures and analyzed for physical properties, timing in relation to speech, and ideational content. The semantic and phonologic subjects produced a large number of ideational gestures relative to their lexical production, whereas the corresponding production of the conceptual subjects was similar to that of the unimpaired controls. The composition of ideational gestures in the semantic and phonologic groups was similar to that of the controls, while the conceptual subjects produced fewer iconic gestures (i.e., gestures that show in their form the content of a word or phrase). The iconic gestures of the conceptual patients tended to start further from their lexical affiliates than those of all other subjects. We conclude that ideational gestures probably facilitate word retrieval, as well as reflect the transfer of information between propositional and non-propositional representations during message construction. We suggest that conceptual and lexical processes differ in the way they constrain ideational gestures.

Hadar, U., & Krauss, R.M. Iconic gestures: The grammatical categories of lexical affiliates. Journal of Neurolinguistics (in press).

Iconic gestures occur during continuous speech and show in their form a meaning related to the meaning articulated in speech. In most cases the related speech unit is a word, called the 'lexical affiliate' of the gesture. Grammatical analysis of lexical affiliates may enhance the understanding of the processes that mediate between speech and gesture production. From this perspective, the lexical category of the affiliated word points to the underlying mediation: concrete nouns suggest mediation through imagery, prepositions suggest mediation through spatial representations, and manual verbs suggest mediation through motor schemata.

The present investigation offers a grammatical analysis of 408 lexical affiliates produced during continuous speech by healthy and brain-damaged subjects. Lexical affiliates fell into seven categories: concrete nouns, manual verbs, other verbs, prepositions, adjectives, abstract nouns and quantifiers. The first four categories were the largest, accounting for 87% of all affiliates. This implies that all of the proposed systems are involved in gesture production. The lexical composition of the affiliates was then compared to the lexical composition of the speech sample as a whole. Results showed that manual verbs tended to affiliate with gestures beyond their share of continuous speech, which suggests that motor schemata are more strongly linked than the other systems to processes of gesture production. Analysis of the brain-damaged corpus suggests that gesture shaping is probably processed in systems localized to the right hemisphere, since these patients showed the greatest dissimilarity from the healthy subjects.


Hadar, U., Burstein, A., Krauss, R.M., & Soroker, N. Ideational gestures and speech: A neurolinguistic investigation. Language and Cognitive Processes (in press).

Patterns of speech-related ('coverbal') gestures were investigated in two groups of right-handed, brain-damaged patients and in matched controls. One group ('Aphasic') had primarily anomic deficits and one group ('Visuo-spatial') had visual and spatial deficits, but no aphasia. Coverbal gesture was video recorded during the description of complex pictures and analyzed for physical properties, timing in relation to speech, and ideational content. Aphasic patients produced a large number of ideational gestures relative to their lexical production and pictorial input, while the corresponding production of the visuo-spatial patients was small. Controls showed intermediate values. The composition of ideational gestures was similar in the aphasic and control groups, while the visuo-spatial subjects produced fewer iconic gestures, i.e., fewer gestures that show in their form the content of a word or phrase. We conclude that ideational gestures probably facilitate word retrieval, as well as reflect the transfer of information between propositional and non-propositional representations during message construction. We suggest that conceptual and linguistic representations probably must be re-encoded in a visuo-spatial format in order to produce ideational gestures.

Chawla, P., Krauss, R.M., & Krieger, S. Conversational visual cues and memory for narrative (under editorial review).

Three experiments investigated the effect that seeing a narrator has on a listener's ability to recall the narrative. In all three experiments, subjects either heard or heard-and-saw videotaped descriptions of the plot of an animated action cartoon, and then, after performing a distractor task, retold the story. Subjects who saw the speaker recalled the narratives better than those who only heard them (Experiment 1). The effect was not an artifact of increased attention to the verbal message induced by the visual display (Experiment 2), and was not found when the videotape showed only the speakers' faces (Experiment 3).

Chiu, C.-y., Hong, Y.-y., & Krauss, R. M. Gaze direction and speech dysfluency in conversation (under editorial review).

In Experiment 1, 114 Chinese-English bilingual undergraduates gave directions to six campus destinations to a bilingual addressee either in Cantonese (their first language) or in English. During two of the descriptions, they were required to gaze fixedly at the addressee, during another two descriptions to gaze fixedly at an inanimate object, and during the remaining two they were allowed to look where they chose. Regardless of the language used, subjects spoke less fluently when required to gaze at their addressee than when they gazed fixedly at an inanimate object or were allowed to gaze where they chose; the latter two conditions did not differ with respect to the frequency of dysfluencies. In Experiment 2, 40 undergraduates performed the same task, in the same three gaze conditions, speaking Cantonese. Half addressed their directions to another undergraduate, the remainder to a high school student. The effect of gaze condition replicated the results of Experiment 1. More filled pauses were found in directions addressed to the high school student, especially when speakers were required to fixate their gaze on the listener. The results support a "cognitive interference" explanation of gaze patterns in interpersonal communication.

Dushay, R.A., & Krauss, R.M. Lexical gestures, linguistic fluency and listener comprehension (under editorial review).

Female native English speakers taking fourth-semester Spanish were videotaped as they described novel graphic designs and synthesized sounds in English and in Spanish. Their fluency and gestural behavior were coded. Their recorded descriptions were then played back in either audio-visual or audio-only form to Spanish-English bilingual listeners, who tried to select the stimulus being described from a set of similar stimuli. Speakers gestured less when speaking Spanish than they did when speaking English. Listeners more accurately identified the described stimulus when the description was in English than when it was in Spanish, but seeing a speaker's gestures did not enhance the communicativeness of her message regardless of whether she spoke English or Spanish. It was concluded that speakers did not employ gestures to compensate for their linguistic deficiencies, and that the gestures they did employ made little contribution to their listeners' comprehension.

Morsella, E.M., & Krauss, R.M. Electromyography of the arm during retrieval of abstract and concrete words (under editorial review).

Electromyographic (EMG) activity of the dominant forearm was recorded during lexical retrieval tasks involving abstract and concrete words. Using a within-subjects design, participants first tried to identify target words from their definitions; then they generated sentences employing the same words. On both tasks, EMG amplitudes were significantly greater for concrete than for abstract words. The relationship between EMG amplitude and conceptual attributes of the target words was also examined. EMG amplitude was positively related to a word's judged spatiality, concreteness, drawability, and manipulability. The findings are consistent with the view that lexical gestures serve a facilitative role in lexical retrieval, and shed light on the ways word-concepts are represented in the mental lexicon.

Krauss, R.M., Haber, R., & Morsella, E. Inferring speakers' physical attributes from their voices.

Two experiments examined listeners' ability to make accurate inferences about speakers from the nonlinguistic content of their speech. In Experiment I, naïve listeners heard male and female speakers articulating two test sentences and tried to select which of a pair of photographs depicted the speaker. On average they selected the correct photo 76.5% of the time, and all performed at a level reliably better than chance. In Experiment II, judges heard the test sentences and estimated the speakers' age, height and weight. A comparison group made the same estimates from photographs of the speakers. Although estimates made from photos were more accurate than those made from voices, for age and height the differences were quite small in magnitude--a little more than a year in age and less than half an inch in height. When judgments were pooled, estimates made from photos were not uniformly superior to those made from voices.

Morsella, E., & Krauss, R.M. Movements facilitate speech production: A gestural feedback model.

Although the hand and arm movements (gestures) that accompany speech traditionally have been regarded as communicative, accumulating evidence suggests that they play a functional role for the speaker. According to the Gestural Feedback Model (GFM), lexical gestures play a role in speech production by increasing the semantic activation of words grounded in sensorimotor features, hence facilitating retrieval of the word form. In Experiment 1, the magnitude of muscle activation observed during lexical retrieval was predicted by the concreteness and spatiality of the target word. In Experiment 2, more gesturing occurred when participants described recently viewed visual objects from memory than when the objects were visually present. Fluency decreased when speakers were prevented from gesturing. The implications of these findings for the GFM and for theories of semantic representation are discussed.