Abstracts of Publications

2017

Cartmill, E., Rissman, L., Novack, M., & Goldin-Meadow, S. The development of iconicity in children's co-speech gesture and homesign. Language, Interaction and Acquisition 8:1 (2017), doi:10.1075/lia.8.1.03car

Gesture can illustrate objects and events in the world by iconically reproducing elements of those objects and events. Children do not begin to express ideas iconically, however, until after they have begun to use conventional forms. In this paper, we investigate how children’s use of iconic resources in gesture relates to the developing structure of their communicative systems. Using longitudinal video corpora, we compare the emergence of manual iconicity in hearing children who are learning a spoken language (co-speech gesture) to the emergence of manual iconicity in a deaf child who is creating a manual system of communication (homesign). We focus on one particular element of iconic gesture – the shape of the hand (handshape). We ask how handshape is used as an iconic resource in 1–5-year-olds, and how it relates to the semantic content of children’s communicative acts. We find that patterns of handshape development are broadly similar between co-speech gesture and homesign, suggesting that the building blocks underlying children's ability to iconically map manual forms to meaning are shared across different communicative systems: those where gesture is produced alongside speech, and those where gesture is the primary mode of communication.


Brentari, D., & Goldin-Meadow, S. Language emergence.  Annual Review of Linguistics, Abstract, PDF

Language emergence describes moments in historical time when nonlinguistic systems become linguistic. Because language can be invented de novo in the manual modality, this offers insight into the emergence of language in ways that the oral modality cannot. Here we focus on homesign, gestures developed by deaf individuals who cannot acquire spoken language and have not been exposed to sign language. We contrast homesign with (a) gestures that hearing individuals produce when they speak, as these cospeech gestures are a potential source of input to homesigners, and (b) established sign languages, as these codified systems display the linguistic structure that homesign has the potential to assume. We find that the manual modality takes on linguistic properties, even in the hands of a child not exposed to a language model. But it grows into full-blown language only with the support of a community that transmits the system to the next generation.


Cooperrider, K., & Goldin-Meadow, S.  When gesture becomes analogy. Topics in Cognitive Science, doi: 10.1111/tops.12276, Abstract, PDF

Analogy researchers do not often examine gesture, and gesture researchers do not often borrow ideas from the study of analogy. One borrowable idea from the world of analogy is the importance of distinguishing between attributes and relations. Gentner (1983, 1988) observed that some metaphors highlight attributes and others highlight relations, and called the latter analogies. Mirroring this logic, we observe that some metaphoric gestures represent attributes and others represent relations, and propose to call the latter analogical gestures. We provide examples of such analogical gestures and show how they relate to the categories of iconic and metaphoric gestures described previously. Analogical gestures represent different types of relations and different degrees of relational complexity, and sometimes cohere into larger analogical models. Treating analogical gestures as a distinct phenomenon prompts new questions and predictions, and illustrates one way that the study of gesture and the study of analogy can be mutually informative.


Ozcaliskan, S., Lucero, C., & Goldin-Meadow, S.  Blind speakers show language-specific patterns in co-speech gesture but not silent gesture.  Cognitive Science, doi: 10.1111/cogs.12502, Abstract, PDF

Sighted speakers of different languages vary systematically in how they package and order components of a motion event in speech. These differences influence how semantic elements are organized in gesture, but only when those gestures are produced with speech (co-speech gesture), not without speech (silent gesture). We ask whether the cross-linguistic similarity in silent gesture is driven by the visuospatial structure of the event. We compared 40 congenitally blind adult native speakers of English or Turkish (20/language) to 80 sighted adult speakers (40/language; half with, half without blindfolds) as they described three-dimensional motion scenes. We found an effect of language on co-speech gesture, not on silent gesture—blind speakers of both languages organized their silent gestures as sighted speakers do. Humans may have a natural semantic organization that they impose on events when conveying them in gesture without language—an organization that relies on neither visuospatial cues nor language structure.


Brookshire, G., Lu, J., Nusbaum, H., Goldin-Meadow, S., & Casasanto, D.  Visual cortex entrains to sign language.  PNAS, doi: 10.1073/pnas.1620350114, PDF

Despite immense variability across languages, people can learn to understand any human language, spoken or signed. What neural mechanisms allow people to comprehend language across sensory modalities? When people listen to speech, electrophysiological oscillations in auditory cortex entrain to slow (<8 Hz) fluctuations in the acoustic envelope. Entrainment to the speech envelope may reflect mechanisms specialized for auditory perception. Alternatively, flexible entrainment may be a general-purpose cortical mechanism that optimizes sensitivity to rhythmic information regardless of modality. Here, we test these proposals by examining cortical coherence to visual information in sign language. First, we develop a metric to quantify visual change over time. We find quasiperiodic fluctuations in sign language, characterized by lower frequencies than fluctuations in speech. Next, we test for entrainment of neural oscillations to visual change in sign language, using electroencephalography (EEG) in fluent speakers of American Sign Language (ASL) as they watch videos in ASL. We find significant cortical entrainment to visual oscillations in sign language <5 Hz, peaking at ∼1 Hz. Coherence to sign is strongest over occipital and parietal cortex, in contrast to speech, where coherence is strongest over the auditory cortex. Nonsigners also show coherence to sign language, but entrainment at frontal sites is reduced relative to fluent signers. These results demonstrate that flexible cortical entrainment to language does not depend on neural processes that are specific to auditory speech perception. Low-frequency oscillatory entrainment may reflect a general cortical mechanism that maximizes sensitivity to rhythmic information, regardless of sensory modality.
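The abstract above describes the analysis in prose: quantify visual change over time in the ASL videos, then measure low-frequency coherence between that signal and the EEG. Purely as a hedged sketch (not the authors' code), with the frame-differencing metric, sampling rates, and variable names all assumed for illustration, the two steps might look like this in Python:

    # Illustrative sketch only: a simple visual-change metric plus coherence with EEG.
    # Frame rate, EEG rate, window length, and the luminance-difference metric are assumptions.
    import numpy as np
    from scipy.signal import coherence, resample

    fs_video = 30.0   # assumed video frame rate (Hz)
    fs_eeg = 250.0    # assumed EEG sampling rate (Hz)

    # Stand-in data: 60 s of "video" as (time, height, width) luminance and one EEG channel.
    rng = np.random.default_rng(0)
    frames = rng.random((int(fs_video * 60), 48, 64))
    eeg = rng.standard_normal(int(fs_eeg * 60))

    # 1) Visual change: mean absolute luminance difference between successive frames.
    visual_change = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

    # 2) Resample the visual-change signal to the EEG rate so the two series align in time.
    visual_change_rs = resample(visual_change, eeg.size)

    # 3) Magnitude-squared coherence between visual change and EEG, inspecting frequencies below 5 Hz.
    freqs, coh = coherence(visual_change_rs, eeg, fs=fs_eeg, nperseg=int(fs_eeg * 4))
    low = freqs < 5.0
    print("peak coherence below 5 Hz at %.2f Hz (coh = %.3f)"
          % (freqs[low][np.argmax(coh[low])], coh[low].max()))

scipy.signal.coherence here simply stands in for whatever spectral estimator the authors used; the point is that frame-to-frame luminance change yields a one-dimensional "visual envelope" that can be compared with neural signals in the same way the acoustic envelope is for speech.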


Wakefield, E.M., Novack, M., & Goldin-Meadow, S.  Unpacking the ontogeny of gesture understanding:  How movement becomes meaningful across development.  Child Development, DOI: 10.1111/cdev.12817, PDF

Gestures, hand movements that accompany speech, affect children’s learning, memory, and thinking (e.g., Goldin-Meadow, 2003). However, it remains unknown how children distinguish gestures from other kinds of actions. In this study, 4- to 9-year-olds (n = 339) and adults (n = 50) described one of three scenes: (a) an actor moving objects, (b) an actor moving her hands in the presence of objects (but not touching them), or (c) an actor moving her hands in the absence of objects. Participants across all ages were equally able to identify actions on objects as goal directed, but the ability to identify empty-handed movements as representational actions (i.e., as gestures) increased with age and was influenced by the presence of objects, especially in older children.


Congdon, E.L., Novack, M.A., Brooks, N., Hemani-Lopez, N., O’Keefe, L., & Goldin-Meadow, S.  Better together:  Simultaneous presentation of speech and gesture in math instruction supports generalization and retention.  Learning and Instruction, 2017. doi: 10.1016/j.learninstruc.2017.03.005. PDF

When teachers gesture during instruction, children retain and generalize what they are taught (Goldin-Meadow, 2014). But why does gesture have such a powerful effect on learning? Previous research shows that children learn most from a math lesson when teachers present one problem-solving strategy in speech while simultaneously presenting a different, but complementary, strategy in gesture (Singer & Goldin-Meadow, 2005). One possibility is that gesture is powerful in this context because it presents information simultaneously with speech. Alternatively, gesture may be effective simply because it involves the body, in which case the timing of information presented in speech and gesture may be less important for learning. Here we find evidence for the importance of simultaneity: 3rd grade children retain and generalize what they learn from a math lesson better when given instruction containing simultaneous speech and gesture than when given instruction containing sequential speech and gesture. Interpreting these results in the context of theories of multimodal learning, we find that gesture capitalizes on its synchrony with speech to promote learning that lasts and can be generalized.


Rissman, L., & Goldin-Meadow, S.  The development of causal structure without a language model.  Language Learning and Development, doi: 10.1080/15475441.2016.1254633. PDF

Across a diverse range of languages, children proceed through similar stages in their production of causal language: their initial verbs lack internal causal structure, followed by a period during which they produce causative overgeneralizations, indicating knowledge of a productive causative rule. We asked in this study whether a child not exposed to structured linguistic input could create linguistic devices for encoding causation and, if so, whether the emergence of this causal language would follow a trajectory similar to the one observed for children learning language from linguistic input. We show that the child in our study did develop causation-encoding morphology, but only after initially using verbs that lacked internal causal structure. These results suggest that the ability to encode causation linguistically can emerge in the absence of a language model, and that exposure to linguistic input is not the only factor guiding children from one stage to
the next in their production of causal language.


Goldin-Meadow, S. Using our hands to change our minds. WIREs Cognitive Science, doi: 10.1002/wcs.1368. PDF

Jean Piaget was a master at observing the routine behaviors children produce as they go from knowing less to knowing more about a task, and making inferences not only about how children understand the task at each point, but also about how they progress from one point to the next. This article examines a routine behavior that Piaget overlooked—the spontaneous gestures speakers produce as they explain their solutions to a problem. These gestures are not mere hand waving. They reflect ideas that the speaker has about the problem, often ideas that are not found in that speaker's talk. Gesture can do more than reflect ideas—it can also change them. Observing the gestures that others produce can change a learner’s ideas, as can producing one’s own gestures. In this sense, gesture behaves like any other action. But gesture differs from many other actions in that it also promotes generalization of new ideas. Gesture represents the world rather than directly manipulating the world (gesture does not move objects around) and is thus a special kind of action. As a result, the mechanisms by which gesture and action promote learning may differ. Because it is both an action and a representation, gesture can serve as a bridge between the two and thus be a powerful tool for learning abstract ideas.


Goldin-Meadow, S. What the hands can tell us about language emergence. Psychonomic Bulletin & Review, 2017, 24(1), 213-218, doi:10.3758/s13423-016-1074-x, PDF

Why, in all cultures in which hearing is possible, has language become the province of speech and the oral modality? I address this question by widening the lens with which we look at language to include the manual modality. I suggest that human communication is most effective when it makes use of two types of formats––a discrete and segmented code, produced simultaneously along with an analog and mimetic code. The segmented code is supported by both the oral and the manual modalities. However, the mimetic code is more easily handled by the manual modality. We might then expect mimetic encoding to be done preferentially in the manual modality (gesture), leaving segmented encoding to the oral modality (speech). This argument rests on two assumptions: (1) The manual modality is as good at segmented encoding as the oral modality; sign languages, established and idiosyncratic, provide evidence for this assumption. (2) Mimetic
encoding is important to human communication and best handled by the manual modality; co-speech gesture provides evidence for this assumption. By including the manual modality in two contexts––when it takes on the primary function of
communication (sign language), and when it takes on a complementary communicative function (gesture)––in our analysis
of language, we gain new perspectives on the origins and continuing development of language.


Goldin-Meadow, S., & Yang, C. Statistical evidence that a child can create a combinatorial linguistic system without external linguistic input: Implications for  language evolution. Neuroscience & Biobehavioral Reviews, doi: 10.1016/j.neubiorev.2016.12.016 PDF

Can a child who is not exposed to a model for language nevertheless construct a communication system characterized by combinatorial structure? We know that deaf children whose hearing losses prevent them from acquiring spoken language, and whose hearing parents have not exposed them to sign language, use gestures, called homesigns, to communicate. In this study, we call upon a new formal analysis that characterizes the statistical profile of grammatical rules and, when applied to child language data, finds that young children’s language is consistent with a productive grammar rather than rote memorization of specific word combinations in caregiver speech. We apply this formal analysis to homesign, and find that homesign can also be characterized as having productive grammar. Our findings thus provide evidence that a child can create a combinatorial linguistic system without external linguistic input, and offer unique insight into how the capacity of language evolved as part of human biology.
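The abstract refers to a formal analysis of when a rule counts as productive rather than memorized, but does not spell it out. Assuming (my assumption; the abstract does not name it) that the analysis is of the kind developed by co-author Yang, the Tolerance Principle, the criterion can be written as: a rule that could apply to $N$ lexical items, of which $e$ are exceptions, is productive only if $e \le \theta_N = \frac{N}{\ln N}$. For example, with $N = 100$ candidate items the threshold is $\theta_{100} = 100/\ln 100 \approx 22$ exceptions; with more exceptions than that, the statistical profile looks like item-by-item memorization rather than a productive grammar.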


2016

Cooperrider, K., Gentner, D. & Goldin-Meadow, S. Spatial analogies pervade complex relational reasoning: Evidence from spontaneous gestures. Cognitive Research: Principles and Implications. doi: 10.1186/s41235-016-0024-5. PDF

How do people think about complex phenomena like the behavior of ecosystems? Here we hypothesize that people reason about such relational systems in part by creating spatial analogies, and we explore this possibility by examining spontaneous gestures. In two studies, participants read a written lesson describing positive and negative feedback systems and then explained the differences between them. Though the lesson was highly abstract and people were not instructed to gesture, people produced spatial gestures in abundance during their explanations. These gestures used space to represent simple abstract relations (e.g., increase) and sometimes more complex relational structures (e.g., negative feedback). Moreover, over the course of their explanations, participants’ gestures often cohered into larger analogical models of relational structure. Importantly, the spatial ideas evident in the hands were largely unaccompanied by spatial words. Gesture thus suggests that spatial analogies are pervasive in complex relational reasoning, even when language does not.


Wakefield, E. M., Hall, C., James, K. H., & Goldin-Meadow, S. Representational gesture as a tool for promoting word learning in young children. In Proceedings of the 41st Annual Boston University Conference on Language Development, Boston, MA, 2016.

The movements we produce or observe others produce can help us learn. Two forms of movement that are commonplace in our daily lives are actions, hand movements that directly manipulate our environment, and gestures, hand movements that accompany speech and represent ideas but do not lead to physical changes in the environment. Both action and gesture have been found to influence cognition, facilitating our ability to learn and remember new information (e.g., Calvo-Merino, Glaser, Grezes, Passingham, & Haggard, 2005; Casile & Giese, 2006; Chao & Martin, 2000; Cook, Mitchell, & Goldin-Meadow, 2008; Goldin-Meadow, Cook, & Mitchell, 2009; Goldin-Meadow et al., 2012; James, 2010; James & Atwood, 2009; James & Gauthier, 2006; James & Maouene, 2009; James & Swain, 2011; Longcamp, Anton, Roth, & Velay, 2003; Longcamp, Tanskanen, & Hari, 2006; Pulvermüller, 2001; Wakefield & James, 2015). However, the two types of movement may affect learning in different ways. In previous work, the effects of action and gesture on learning have been considered separately (but see Novack, Congdon, Hemani-Lopez, & Goldin-Meadow, 2014). Our goal here is to directly compare children’s ability to learn from actions on objects versus gestures off objects. We consider this question in the realm of word learning, specifically, teaching children verbs for actions that are performed on objects. We also ask whether learning through these movements unfolds differently when movements are produced versus observed by a child. More broadly, our study is a first step in understanding how information is learned, generalized, and retained based on whether it is expressed through action or gesture.


Novack, M., & Goldin-Meadow, S.  Gesture as representational action:  A paper about function.  Psychonomic Bulletin and Review, doi:10.3758/s13423-016-1145-z. PDF

A great deal of attention has recently been paid to gesture and its effects on thinking and learning. It is well established that the hand movements that accompany speech are an integral part of communication, ubiquitous across cultures, and a unique feature of human behavior. In an attempt to understand this intriguing phenomenon, researchers have focused on pinpointing the mechanisms that underlie gesture production. One proposal––that gesture arises from simulated action (Hostetter & Alibali, Psychonomic Bulletin & Review, 15, 495–514, 2008)––has opened up discussions about action, gesture, and the relation between the two. However, there is another side to understanding a phenomenon and that is to understand its function. A phenomenon’s function is its purpose rather than its precipitating cause––the why rather than the how. This paper sets forth a theoretical framework for exploring why gesture serves the functions that it does, and reviews where the current literature fits, and fails to fit, this proposal. Our framework proposes that whether or not gesture is simulated action in terms of its mechanism, it is clearly not reducible to action in terms of its function. Most notably, because gestures are abstracted representations and are not actions tied to particular events and objects, they can play a powerful role in thinking and learning beyond the particular, specifically, in supporting generalization and transfer of knowledge.


Goldin-Meadow, S., & Brentari, D.  Gesture, sign and language:  The coming of age of sign language and gesture studies.  Behavioral and Brain Sciences, doi:10.1017/S0140525X15001247. PDF

Characterizations of sign language have swung from the view that sign is nothing more than a language of pictorial gestures with no linguistic structure, to the view that sign is no different from spoken language and has the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign, gesture, and speech. We conclude that signers gesture just as speakers do–both produce imagistic gestures along with categorical signs/words, and we call for new technology to help us better calibrate the borders between sign and gesture.


Andric, M., Goldin-Meadow, S., Small, S. & Hasson, U. Repeated movie viewings produce similar local activity patterns but different network configurations.  Neuroimage, 2016, doi: 10.1016/j.neuroimage.2016.07.061. PDF

People seek novelty in everyday life, but they also enjoy viewing the same movies or reading the same novels a second time. What changes and what stays the same when re-experiencing a narrative? In examining this question with functional neuroimaging, we found that brain activity reorganizes in a hybrid, scale-dependent manner when individuals processed the same audiovisual narrative a second time. At the most local level, sensory systems (occipital and temporal cortices) maintained a similar temporal activation profile during the two viewings. Nonetheless, functional connectivity between these same lateral temporal regions and other brain regions was stronger during the second viewing. Furthermore, at the level of whole-brain connectivity, we found a significant rearrangement of network partition structure: lateral temporal and inferior frontal regions clustered together during the first viewing but merged within a fronto-parietal cluster in the second. Our findings show that repetition maintains local activity profiles. However, at the same time, it is associated with multiple network-level connectivity changes on larger scales, with these changes strongly involving regions considered core to language processing.


Asaridou, S., Demir-Lira, O.E., Goldin-Meadow, S., & Small, S.L.  The pace of vocabulary growth during preschool predicts cortical structure at school age.  Neuropsychologia, 2016, doi: 10.1016/j.neuropsychologia.2016.05.018. PDF

Children vary greatly in their vocabulary development during preschool years. Importantly, the pace of this early vocabulary growth predicts vocabulary size at school entrance. Despite its importance for later academic success, not much is known about the relation between individual differences in early vocabulary development and later brain structure and function. Here we examined the association between vocabulary growth in children, as estimated from longitudinal measurements from 14 to 58 months, and individual differences in brain structure measured in 3rd and 4th grade (8–10 years old). Our results show that the pace of vocabulary growth uniquely predicts cortical thickness in the left supramarginal gyrus. Probabilistic tractography revealed that this region is directly connected to the inferior frontal gyrus (pars opercularis) and the ventral premotor cortex, via what is most probably the superior longitudinal fasciculus III. Our findings demonstrate, for the first time, the relation between the pace of vocabulary learning in children and a specific change in the structure of the cerebral cortex, specifically, cortical thickness in the left supramarginal gyrus. They also highlight the fact that differences in the pace of vocabulary growth are associated with the dorsal language stream, which is thought to support speech perception and articulation.


Cooperrider, K., Gentner, D., & Goldin-Meadow, S. Gesture reveals spatial analogies during complex relational reasoning. Proceedings of the 38th Annual Meeting of the Cognitive Science Society (pp. 692-697). Austin, TX: Cognitive Science Society, 2016. PDF

How do people think about complex relational phenomena like the behavior of the stock market? Here we hypothesize that people reason about such phenomena in part by creating spatial analogies, and we explore this possibility by examining people’s spontaneous gestures. Participants read a written lesson describing positive and negative feedback systems and then explained the key differences between them. Though the lesson was highly abstract and free of concrete imagery, participants produced spatial gestures in abundance during their explanations. These spatial gestures, despite being fundamentally abstract, showed clear regularities and often built off of each other to form larger spatial models of relational structure—that is, spatial analogies. Importantly, the spatial richness and systematicity revealed in participants’ gestures was largely divorced from spatial language. These results provide evidence for the spontaneous use of spatial analogy during complex relational reasoning.


Novack, M.A., Wakefield, E.M., Congdon, E.L., Franconeri, S., & Goldin-Meadow, S.  There is more to gesture than meets the eye:  Visual attention to gesture's referents cannot account for its facilitative effects during math instruction. Proceedings of the 37th Annual Meeting of the Cognitive Science Society (pp. 2141-2146). Austin, TX: Cognitive Science Society, 2016. PDF

Teaching a new concept with gestures – hand movements that accompany speech – facilitates learning above-and-beyond instruction through speech alone (e.g., Singer & Goldin-Meadow, 2005). However, the mechanisms underlying this phenomenon are still being explored. Here, we use eyetracking to explore one mechanism – gesture’s ability to direct visual attention. We examine how children allocate their visual attention during a mathematical equivalence lesson that either contains gesture or does not. We show that gesture instruction improves posttest performance, and additionally that gesture does change how children visually attend to instruction: children look more to the problem being explained, and less to the instructor. However, looking patterns alone cannot explain gesture’s effect, as posttest performance is not predicted by any of our looking-time measures. These findings suggest that gesture does guide visual attention, but that attention alone cannot account for its facilitative learning effects.


Goldin-Meadow, S.   What the hands can tell us about language emergence. Psychonomic Bulletin and Review, doi:  10.3758/s13423-016-1074-x, PDF

Why, in all cultures in which hearing is possible, has language become the province of speech and the oral modality? I address this question by widening the lens with which we look at language to include the manual modality. I suggest that human communication is most effective when it makes use of two types of formats––a discrete and segmented code, produced simultaneously along with an analog and mimetic code. The segmented code is supported by both the oral and the manual modalities. However, the mimetic code is more easily handled by the manual modality. We might then expect mimetic encoding to be done preferentially in the manual modality (gesture), leaving segmented encoding to the oral modality (speech). This argument rests on two assumptions: (1) The manual modality is as good at segmented encoding as the oral modality; sign languages, established and idiosyncratic, provide evidence for this assumption. (2) Mimetic encoding is important to human communication and best handled by the manual modality; co-speech gesture provides evidence for this assumption. By including the manual modality in two contexts––when it takes on the primary function of communication (sign language), and when it takes on a complementary communicative function (gesture)––in our analysis of language, we gain new perspectives on the origins and continuing development of language.


Ozcaliskan, S., Lucero, C., & Goldin-Meadow, S. Is seeing gesture necessary to gesture like a native speaker?  Psychological Science, doi:10.1177/0956797616629931. PDF

Speakers of all languages gesture, but there are differences in the gestures that they produce. Do speakers learn language-specific gestures by watching others gesture or by learning to speak a particular language? We examined this question by studying the speech and gestures produced by 40 congenitally blind adult native speakers of English and Turkish (n= 20/language), and comparing them with the speech and gestures of 40 sighted adult speakers in each language (20 wearing blindfolds, 20 not wearing blindfolds). We focused on speakers’ descriptions of physical motion, which display strong cross-linguistic differences in patterns of speech and gesture use. Congenitally blind speakers of English and Turkish produced speech that resembled the speech produced by sighted speakers of their native language. More important, blind speakers of each language used gestures that resembled the gestures of sighted speakers of that language. Our results suggest that hearing a particular language is sufficient to gesture like a native speaker of that language.


Ozcaliskan, S., Lucero, C. & Goldin-Meadow, S. Does language shape silent gesture? Cognition, 2016, 148, 10-18, doi: 10.1016/j.cognition.2015.12.001. PDF

Languages differ in how they organize events, particularly in the types of semantic elements they express and the arrangement of those elements within a sentence. Here we ask whether these cross-linguistic differences have an impact on how events are represented nonverbally; more specifically, on how events are represented in gestures produced without speech (silent gesture), compared to gestures produced with speech (co-speech gesture). We observed speech and gesture in 40 adult native speakers of English and Turkish (N = 20/language) asked to describe physical motion events (e.g., running down a path)—a domain known to elicit distinct patterns of speech and co-speech gesture in English- and Turkish-speakers. Replicating previous work (Kita & Özyürek, 2003), we found an effect of language on gesture when it was produced with speech—co-speech gestures produced by English-speakers differed from co-speech gestures produced by Turkish-speakers. However, we found no effect of language on gesture when it was produced on its own—silent gestures produced by English-speakers were identical in how motion elements were packaged and ordered to silent gestures produced by Turkish-speakers. The findings provide evidence for a natural semantic organization that humans impose on motion events when they convey those events without language.


Trueswell, J., Lin, Y., Armstrong III, B., Cartmill, E., Goldin-Meadow, S. & Gleitman, L. Perceiving referential intent: Dynamics of reference in natural parent–child interactions. Cognition, 2016, 148, 117-135, doi:10.1016/j.cognition.2015.11.002. PDF

Two studies are presented which examined the temporal dynamics of the social-attentive behaviors that co-occur with referent identification during natural parent–child interactions in the home. Study 1 focused on 6.2 h of videos of 56 parents interacting during everyday activities with their 14–18 month-olds, during which parents uttered common nouns as parts of spontaneously occurring utterances. Trained coders recorded, on a second-by-second basis, parent and child attentional behaviors relevant to reference in the period (40 s) immediately surrounding parental naming. The referential transparency of each interaction was independently assessed by having naïve adult participants guess what word the parent had uttered in these video segments, but with the audio turned off, forcing them to use only non-linguistic evidence available in the ongoing stream of events. We found a great deal of ambiguity in the input along with a few potent moments of word-referent transparency; these transparent moments have a particular temporal signature with respect to parent and child attentive behavior: it was the object’s appearance and/or the fact that it captured parent/child attention at the moment the word was uttered, not the presence of the object throughout the video, that predicted observers’ accuracy. Study 2 experimentally investigated the precision of the timing relation, and whether it has an effect on observer accuracy, by disrupting the timing between when the word was uttered and the behaviors present in the videos as they were originally recorded. Disrupting timing by only ±1 to 2 s reduced participant confidence and significantly decreased their accuracy in word identification. The results enhance an expanding literature on how dyadic attentional factors can influence early vocabulary growth. By hypothesis, this kind of time-sensitive data-selection process operates as a filter on input, removing many extraneous and ill-supported word-meaning hypotheses from consideration during children’s early vocabulary learning.


Novack, M., Wakefield, E. & Goldin-Meadow, S. What makes a movement a gesture? Cognition, 2016, 146, 339-348, doi:10.1016/j.cognition.2015.10.014. PDF

Theories of how adults interpret the actions of others have focused on the goals and intentions of actors engaged in object-directed actions. Recent research has challenged this assumption, and shown that movements are often interpreted as being for their own sake (Schachner & Carey, 2013). Here we postulate a third interpretation of movement—movement that represents action, but does not literally act on objects in the world. These movements are gestures. In this paper, we describe a framework for predicting when movements are likely to be seen as representations. In Study 1, adults described one of three scenes: (1) an actor moving objects, (2) an actor moving her hands in the presence of objects (but not touching them) or (3) an actor moving her hands in the absence of objects. Participants systematically described the movements as depicting an object-directed action when the actor moved objects, and favored describing the movements as depicting movement for its own sake when the actor produced the same movements in the absence of objects. However, participants favored describing the movements as representations when the actor produced the movements near, but not on, the objects. Study 2 explored two additional features—the form of an actor’s hands and the presence of speech-like sounds—to test the effect of context on observers’ classification of movement as representational. When movements are seen as representations, they have the power to influence communication, learning, and cognition in ways that movement for its own sake does not. By incorporating representational gesture into our framework for movement analysis, we take an important step towards developing a more cohesive understanding of action-interpretation.


2015

Abner, N., Cooperrider, K., & Goldin-Meadow, S.  Gesture for linguists: A handy primer. Language and Linguistics Compass, 2015, 9/11, 437-449, doi:10.1111/lnc3.12168. PDF

Humans communicate using language, but they also communicate using gesture – spontaneous movements
of the hands and body that universally accompany speech. Gestures can be distinguished from other
movements, segmented, and assigned meaning based on their forms and functions. Moreover, gestures
systematically integrate with language at all levels of linguistic structure, as evidenced in both production
and perception. Viewed typologically, gesture is universal, but nevertheless exhibits constrained variation
across language communities (as does language itself). Finally, gesture has rich cognitive dimensions in
addition to its communicative dimensions. In overviewing these and other topics, we show that the study
of language is incomplete without the study of its communicative partner, gesture.


Horton, L., Goldin-Meadow, S., Coppola, M., Senghas, A., & Brentari, D. Forging a morphological system out of two
dimensions: Agentivity and number. Open Linguistics, 2015, 1, 596-613, doi: 10.1515/opli-2015-0021. PDF

Languages have diverse strategies for marking agentivity and number. These strategies are negotiated to create combinatorial systems. We consider the emergence of these strategies by studying features of movement in a young sign language in Nicaragua (NSL). We compare two age cohorts of Nicaraguan signers (NSL1 and NSL2), adult homesigners in Nicaragua (deaf individuals creating a gestural system without linguistic input), signers of American and Italian Sign Languages (ASL and LIS), and hearing individuals asked to gesture silently. We find that all groups use movement axis and repetition to encode agentivity and number, suggesting that these properties are grounded in action experiences common to all participants. We find another feature – unpunctuated repetition – in the sign systems (ASL, LIS, NSL, Homesign) but not in silent gesture. Homesigners and NSL1 signers use the unpunctuated form, but limit its use to No-Agent contexts; NSL2 signers use the form across No-Agent and Agent contexts. A single individual can thus construct a marker for number without benefit of a linguistic community (homesign), but generalizing this form across agentive conditions requires an additional step. This step does not appear to be achieved when a linguistic community is first formed (NSL1), but requires transmission across generations of learners (NSL2).


Brooks, N., & Goldin-Meadow, S.  Moving to learn:  How guiding the hands can set the stage for learning.  Cognitive Science, 2015, doi: 10.1111/cogs.12292. PDF

Previous work has found that guiding problem-solvers’ movements can have an immediate effect on their ability to solve a problem. Here we explore these processes in a learning paradigm. We ask whether guiding a learner’s movements can have a delayed effect on learning, setting the stage for change that comes about only after instruction. Children were taught movements that were either relevant or irrelevant to solving mathematical equivalence problems and were told to produce the movements on a series of problems before they received instruction in mathematical equivalence. Children in the relevant movement condition improved after instruction significantly more than children in the irrelevant movement condition, despite the fact that the children showed no improvement in their understanding of mathematical equivalence on a ratings task or on a paper-and-pencil test taken immediately after the movements but before instruction. Movements of the body can thus be used to sow the seeds of conceptual change. But those seeds do not necessarily come to fruition until after the learner has received explicit instruction in the concept, suggesting a “sleeper effect” of gesture on learning.


Novack, M., Goldin-Meadow, S., & Woodward, A.  Learning from gesture:  How early does it happen? Cognition, 2015, 142, 138-147. doi: 10.1016/j.cognition.2015.05.018. PDF

Iconic gesture is a rich source of information for conveying ideas to learners. However, in order to learn from iconic gesture, a learner must be able to interpret its iconic form—a nontrivial task for young children. Our study explores how young children interpret iconic gesture and whether they can use it to infer a previously unknown action. In Study 1, 2- and 3-year-old children were shown iconic gestures that illustrated how to operate a novel toy to achieve a target action. Children in both age groups successfully figured out the target action more often after seeing an iconic gesture demonstration than after seeing no demonstration. However, the 2-year-olds (but not the 3-year-olds) figured out fewer target actions after seeing an iconic gesture demonstration than after seeing a demonstration of an incomplete-action and, in this sense, were not yet experts at interpreting gesture. Nevertheless, both age groups seemed to understand that gesture could convey information that can be used to guide their own actions, and that gesture is thus not movement for its own sake. That is, the children in both groups produced the action displayed in gesture on the object itself, rather than producing the action in the air (in other words, they rarely imitated the experimenter’s gesture as it was performed). Study 2 compared 2-year-olds’ performance following iconic vs. point gesture demonstrations. Iconic gestures led children to discover more target actions than point gestures, suggesting that iconic gesture does more than just focus a learner’s attention, it conveys substantive information about how to solve the problem, information that is accessible to children as young as 2. The ability to learn from iconic gesture is thus in place by toddlerhood and, although still fragile, allows children to process gesture, not as meaningless movement, but as an intentional communicative representation.


Novack, M., & Goldin-Meadow, S. Learning from gesture: How our hands change our minds. Educational Psychology Review, 2015, 27(3), 405-412, doi: 10.1007/s10648-015-9325-3. PDF

When people talk, they gesture, and those gestures often reveal information that cannot be found in speech. Learners are no exception. A learner’s gestures can index moments of conceptual instability, and teachers can make use of those gestures to gain access into a student’s thinking. Learners can also discover novel ideas from the gestures they produce during a lesson or from the gestures they see their teachers produce. Gesture thus has the power not only to reflect a learner’s understanding of a problem but also to change that understanding. This review explores how gesture supports learning across development and ends by offering suggestions for ways in which gesture can be recruited in educational settings.


Goldin-Meadow, S. From Action to Abstraction: Gesture as a mechanism of change. Developmental Review, 2015, doi: 10.1016/j.dr.2015.07.007. PDF

Piaget was a master at observing the routine behaviors children produce as they go from knowing less to knowing more about a task, and making inferences not only about how the children understood the task at each point, but also about how they progressed from one point to the next. In this paper, I examine a routine behavior that Piaget overlooked – the spontaneous gestures speakers produce as they explain their solutions to a problem. These gestures are not mere hand waving. They reflect ideas that the speaker has about the problem, often ideas that are not found in that speaker’s talk. But gesture can do more than reflect ideas – it can also change them. In this sense, gesture behaves like any other action;
both gesture and action on objects facilitate learning problems on which training was given. However, only gesture promotes transferring the knowledge gained to problems that require generalization. Gesture is, in fact, a special kind of action in that it represents the world rather than directly manipulating the world (gesture does not move objects around). The mechanisms by which gesture and action promote learning may therefore differ – gesture is able to highlight components of an action that promote abstract learning while leaving out details that could tie learning to a specific context. Because it is both an action and a representation, gesture can serve as a bridge between the two and thus be a powerful tool for learning
abstract ideas.


Gunderson, E., Spaepen, E., Gibson, D., Goldin-Meadow, S., & Levine, S. Gesture as a window onto children's number knowledge. Cognition, 2015, 144, 14-28, doi:10.1016/j.cognition.2015.07.008. PDF

Before learning the cardinal principle (knowing that the last word reached when counting a set represents the size of the whole set), children do not use number words accurately to label most set sizes. However, it remains unclear whether this difficulty reflects a general inability to conceptualize and communicate about number, or a specific problem with number words. We hypothesized that children’s gestures might reflect knowledge of number concepts that they cannot yet express in speech, particularly for numbers they do not use accurately in speech (numbers above their knower-level). Number gestures are iconic in the sense that they are item-based (i.e., each finger maps onto one item in a set) and therefore may be easier to map onto sets of objects than number words, whose forms do not map transparently onto the number of items in a set and, in this sense, are arbitrary. In addition, learners in transition with respect to a concept often produce gestures that convey different information than the accompanying speech. We examined the number words and gestures 3- to 5-year-olds used to label small set sizes exactly (1–4) and larger set sizes approximately (5–10). Children who had not yet learned the cardinal principle were more than twice as accurate when labeling sets of 2 and 3 items with gestures than with words, particularly if the values were above their knower-level. They were also better at approximating set sizes 5–10 with gestures than with words. Further, gesture was more accurate when it differed from the accompanying speech (i.e., a gesture–speech mismatch). These results show that children convey numerical information in gesture that they cannot yet convey in speech, and raise the possibility that number gestures play a functional role in children’s development of number concepts.


Suskind, D., Leffel, K. R., Leininger, L., Gunderson, E. A., Sapolich, S. G., Suskind, E., Hernandez, M.W., Goldin-Meadow, S., Graf, E. & Levine, S. A Parent-Directed Language Intervention for Children of Low Socioeconomic Status: A Randomized Controlled Pilot Study. Journal of Child Language, Available on CJO 2015. doi:10.1017/S0305000915000033, PDF

We designed a parent-directed home-visiting intervention targeting socioeconomic status (SES) disparities in children’s early language environments. A randomized controlled trial was used to evaluate whether the intervention improved parents’ knowledge of child language development and increased the amount and diversity of parent talk. Twenty-three mother–child dyads (12 experimental, 11 control, aged 1;5–3;0) participated in eight weekly hour-long home-visits. In the experimental group, but not the control group, parent knowledge of language development increased significantly one week and four months after the intervention. In lab-based observations, parent word types and tokens and child word types increased significantly one week, but not four months, post-intervention. In home-based observations, adult word tokens, conversational turn counts, and child vocalization counts increased significantly during the intervention, but not post-intervention. The results demonstrate the malleability of child-directed language behaviors and knowledge of child language development among low-SES parents.


Goldin-Meadow, S., Brentari, D., Coppola, M., Horton, L., & Senghas, A. Watching language grow in the manual modality: Nominals, predicates, and handshapes. Cognition, 2015, 135, 381-395. PDF

All languages, both spoken and signed, make a formal distinction between two types of terms in a proposition – terms that identify what is to be talked about (nominals) and terms that say something about this topic (predicates). Here we explore conditions that could lead to this property by charting its development in a newly emerging language – Nicaraguan Sign Language (NSL). We examine how handshape is used in nominals vs. predicates in three Nicaraguan groups: (1) homesigners who are not part of the Deaf community and use their own gestures, called homesigns, to communicate; (2) NSL cohort 1 signers who fashioned the first stage of NSL; (3) NSL cohort 2 signers who learned NSL from cohort 1. We compare these three groups to a fourth: (4) native signers of American Sign Language (ASL), an established sign language. We focus on handshape in predicates that are part of a productive classifier system in ASL; handshape in these predicates varies systematically across agent vs. no-agent contexts, unlike handshape in the nominals we study, which does not vary across these contexts. We found that all four groups, including homesigners, used handshape differently in nominals vs. predicates – they displayed variability in handshape form across agent vs. no-agent contexts in predicates, but not in nominals. Variability thus differed in predicates and nominals: (1) In predicates, the variability across grammatical contexts (agent vs. no-agent) was systematic in all four groups, suggesting that handshape functioned as a productive morphological marker on predicate signs, even in homesign. This grammatical use of handshape can thus appear in the earliest stages of an emerging language. (2) In nominals, there was no variability across grammatical contexts (agent vs. no-agent), but there was variability within- and across-individuals in the handshape used in the nominal for a particular object. This variability was striking in homesigners (an individual homesigner did not necessarily use the same handshape in every nominal he produced for a particular object), but decreased in the first cohort of NSL and remained relatively constant in the second cohort. Stability in the lexical use of handshape in nominals thus does not seem to emerge unless there is pressure from a peer linguistic community. Taken together, our findings argue that a community of users is essential to arrive at a stable nominal lexicon, but not to establish a productive morphological marker in predicates. Examining the steps a manual communication system takes as it moves toward becoming a fully-fledged language offers a unique window onto factors that have made human language what it is.


Goldin-Meadow, S. Gesture as a window onto communicative abilities:  Implications for diagnosis and intervention. SIG 1 Perspectives on Language Learning and Education, 2015, 22, 50-60. doi:10.1044/lle22.2.50.

Speakers around the globe gesture when they talk, and young children are no exception. In fact, children's first foray into communication tends to be through their hands rather than their mouths. There is now good evidence that children typically express ideas in gesture before they express the same ideas in speech. Moreover, the age at which these ideas are expressed in gesture predicts the age at which the same ideas are first expressed in speech. Gesture thus not only precedes, but also predicts, the onset of linguistic milestones. These facts set the stage for using gesture in two ways in children who are at risk for language delay. First, gesture can be used to identify individuals who are not producing gesture in a timely fashion, and can thus serve as a diagnostic tool for pinpointing subsequent difficulties with spoken language. Second, gesture can facilitate learning, including word learning, and can thus serve as a tool for intervention, one that can be implemented even before a delay in spoken language is detected.


Demir, O.E., Rowe, M., Heller, G., Goldin-Meadow, S., & Levine, S.C.  Vocabulary, syntax, and narrative development in typically developing children and children with early unilateral brain injury:  Early parental talk about the there-and-then matters. Developmental Psychology, 2015, 51(2), 161-175. doi: 10.1037/a0038476. PDF

This study examines the role of a particular kind of linguistic input—talk about the past and future, pretend, and explanations, that is, talk that is decontextualized—in the development of vocabulary, syntax, and narrative skill in typically developing (TD) children and children with pre- or perinatal brain injury (BI). Decontextualized talk has been shown to be particularly effective in predicting children’s language skills, but it is not clear why. We first explored the nature of parent decontextualized talk and found it to be linguistically richer than contextualized talk in parents of both TD and BI children. We then found, again for both groups, that parent decontextualized talk at child age 30 months was a significant predictor of child vocabulary, syntax, and narrative performance at kindergarten, above and beyond the child’s own early language skills, parent contextualized talk and demographic factors. Decontextualized talk played a larger role in predicting kindergarten syntax and narrative outcomes for children with lower syntax and narrative skill at age 30 months, and also a larger role in predicting kindergarten narrative outcomes for children with BI than for TD children. The difference between the 2 groups stemmed primarily from the fact that children with BI had lower narrative (but not vocabulary or syntax) scores than TD children. When the 2 groups were matched in terms of narrative skill at kindergarten, the impact that decontextualized talk had on narrative skill did not differ for children with BI and for TD children. Decontextualized talk is thus a strong predictor of later language skill for all children, but may be particularly potent for children at the lower-end of the distribution for language skill. The findings also suggest that variability in the language development of children with BI is influenced not only by the biological characteristics of their lesions, but also by the language input they receive.


Goldin-Meadow, S. Studying the mechanisms of language learning by varying the learning environment and the learner.  Language, Cognition & Neuroscience. doi:10.1080/23273798.2015.1016978. PDF

Language learning is a resilient process, and many linguistic properties can be developed under a wide range of learning environments and learners. The first goal of this review is to describe properties of language that can be developed without exposure to a language model – the resilient properties of language – and to explore conditions under which more fragile properties emerge. But even if a linguistic property is resilient, the developmental course that the property follows is likely to vary as a function of learning environment and learner, that is, there are likely to be individual differences in the learning trajectories children follow. The second goal is to consider how the resilient properties are brought to bear on language learning when a child is exposed to a language model. The review ends by considering the implications of both sets of findings for mechanisms, focusing on the role that the body and linguistic input play in language learning.


Demir, O.E., Levine, S., & Goldin-Meadow, S.  A tale of two hands: Children’s gesture use in narrative production predicts later narrative structure in speech. Journal of Child Language, 2015, 42(3), 662-681. doi:10.1017/S0305000914000415. PDF

Speakers of all ages spontaneously gesture as they talk. These gestures predict children’s milestones in vocabulary and sentence structure. We ask whether gesture serves a similar role in the development of narrative skill. Children were asked to retell a story conveyed in a wordless cartoon at age five and then again at six, seven, and eight. Children’s narrative structure in speech improved across these ages. At age five, many of the children expressed a character’s viewpoint in gesture, and these children were more likely to tell better-structured stories at the later ages than children who did not produce character-viewpoint gestures at age five. In contrast, framing narratives from a character’s perspective in speech at age five did not predict later narrative structure in speech. Gesture thus continues to act as a harbinger of change even as it assumes new roles in relation to discourse.


Ozyurek, A., Furman, R., & Goldin-Meadow, S.  On the way to language:  event segmentation in homesign and gesture.  Journal of Child Language, 2015, 42(1), 64-94.  doi:10.1017/S0305000913000512. PDF

Languages typically express semantic components of motion events such as manner (roll) and path (down) in separate lexical items. We explore how these combinatorial possibilities of language arise by focusing on (i) gestures produced by deaf children who lack access to input from a conventional language (homesign); (ii) gestures produced by hearing adults and children while speaking; and (iii) gestures used by hearing adults without speech when asked to do so in elicited descriptions of motion events with simultaneous manner and path. Homesigners tended to conflate manner and path in one gesture, but also used a mixed form, adding a manner and/or path gesture to the conflated form sequentially. Hearing speakers, with or without speech, used the conflated form, gestured manner, or path, but rarely used the mixed form. Mixed form may serve as an intermediate structure on the way to the discrete and sequenced forms found in natural languages.


2014

Goldin-Meadow, S. Widening the lens: What the manual modality reveals about language, learning, and cognition.  Philosophical Transactions of the Royal Society, Series B, 2014, 369, doi: 10.1098/rstb.2013.0295. PDF

The goal of this paper is to widen the lens on language to include the manual modality. We look first at hearing children who are acquiring language from a spoken language model and find that even before they use speech to communicate, they use gesture. Moreover, those gestures precede, and predict, the acquisition of structures in speech. We look next at deaf children whose hearing losses prevent them from using the oral modality, and whose hearing parents have not presented them with a language model in the manual modality. These children fall back on the manual modality to communicate and use gestures, which take on many of the forms and functions of natural language. These homemade gesture systems constitute the first step in the emergence of manual sign systems that are shared within deaf communities and are full-fledged languages. We end by widening the lens on sign language to include gesture and find that signers not only gesture, but they also use gesture in learning contexts just as speakers do. These findings suggest that what is key in gesture’s ability to predict learning is its ability to add a second representational format to communication, rather than a second modality. Gesture can thus be language, assuming linguistic forms and functions, when other vehicles are not available; but when speech or sign is possible, gesture works along with language, providing an additional representational format that can promote learning. 


Beaudoin-Ryan, L. & Goldin-Meadow, S.  Teaching moral reasoning through gesture.  Developmental Science, 2014, doi: 10.1111/desc.12180 PDF

Stem-cell research. Euthanasia. Personhood. Marriage equality. School shootings. Gun control. Death penalty. Ethical dilemmas regularly spark fierce debate about the underlying moral fabric of societies. How do we prepare today’s children to be fully informed and thoughtful citizens, capable of moral and ethical decisions? Current approaches to moral education are controversial, requiring adults to serve as either direct (‘top-down’) or indirect (‘bottom-up’) conduits of information about morality. A common thread weaving throughout these two educational initiatives is the ability to take multiple perspectives – increases in perspective taking ability have been found to precede advances in moral reasoning. We propose gesture as a behavior uniquely situated to augment perspective taking ability. Requiring gesture during spatial tasks has been shown to catalyze the production of more sophisticated problem-solving strategies, allowing children to profit from instruction. Our data demonstrate that requiring gesture during moral reasoning tasks has similar effects, resulting in increased perspective taking ability subsequent to instruction.


Ping, R., Goldin-Meadow, S., & Beilock, S. Understanding gesture: Is the listener's motor system involved? Journal of Experimental Psychology: General, 2014, 143(1), 195-204. doi: 10.1037/a0032246. PDF

Listeners are able to glean information from the gestures that speakers produce, seemingly without conscious awareness. However, little is known about the mechanisms that underlie this process. Research on human action understanding shows that perceiving another's actions results in automatic activation of the motor system in the observer, which then affects the observer's understanding of the actor's goals. We ask here whether perceiving another's gesture can similarly result in automatic activation of the motor system in the observer. In Experiment 1, we first established a new procedure in which listener response times are used to study how gesture impacts sentence comprehension. In Experiment 2, we used this procedure, in conjunction with a secondary motor task, to investigate whether the listener's motor system is involved in this process. We showed that moving arms and hands (but not legs and feet) interferes with the listener's ability to use information conveyed in a speaker's hand gestures. Our data thus suggest that understanding gesture relies, at least in part, on the listener's own motor system.


Goldin-Meadow, S. How gesture works to change our minds. Trends in Neuroscience and Education, 2014, doi:10.1016/j.tine.2014.01.002. PDF

When people talk, they gesture. We now know that these gestures are associated with learning—they can index moments of cognitive instability and reflect thoughts not yet found in speech. But gesture has the potential to do more than just reflect learning—it might be involved in the learning process itself. This review focuses on two non-mutually exclusive possibilities: (1) The gestures we see others produce have the potential to change our thoughts. (2) The gestures that we ourselves produce have the potential to change our thoughts, perhaps by spatializing ideas that are not inherently spatial. The review ends by exploring the mechanisms responsible for gesture's impact on learning, and by highlighting ways in which gesture can be effectively used in educational settings.


Novack, M.A., Congdon, E.L., Hemani-Lopez, N., & Goldin-Meadow, S. From action to abstraction: Using the hands to learn math. Psychological Science, 2014, doi:10.1177/0956797613518351. PDF

Previous research has shown that children benefit from gesturing during math instruction. We asked whether gesturing promotes learning because it is itself a physical action, or because it uses physical action to represent abstract ideas. To address this question, we taught third-grade children a strategy for solving mathematical-equivalence problems that was instantiated in one of three ways: (a) in a physical action children performed on objects, (b) in a concrete gesture miming that action, or (c) in an abstract gesture. All three types of hand movements helped children learn how to solve the problems on which they were trained. However, only gesture led to success on problems that required generalizing the knowledge gained. The results provide the first evidence that gesture promotes transfer of knowledge better than direct action on objects and suggest that the beneficial effects gesture has on learning may reside in the features that differentiate it from action.


Cartmill, E. A., Hunsicker, D., & Goldin-Meadow, S. Pointing and naming are not redundant: Children use gesture to modify nouns before they modify nouns in speech. Developmental Psychology, 2014, doi: 10.1037/a0036003 PDF

Nouns form the first building blocks of children’s language but are not consistently modified by other words until around 2.5 years of age. Before then, children often combine their nouns with gestures that indicate the object labeled by the noun, for example, pointing at a bottle while saying “bottle.” These gestures are typically assumed to be redundant with speech. Here we present data challenging this assumption, suggesting that these early pointing gestures serve a determiner-like function (i.e., point at bottle + “bottle” = that bottle). Using longitudinal data from 18 children (8 girls), we analyzed all utterances containing nouns and focused on (a) utterances containing an unmodified noun combined with a pointing gesture and (b) utterances containing a noun modified by a determiner. We found that the age at which children first produced point + noun combinations predicted the onset age for determiner + noun combinations. Moreover, point + noun combinations decreased following the onset of determiner + noun constructions. Importantly, combinations of pointing gestures with other types of speech (e.g., point at bottle + “gimme” = gimme that) did not relate to the onset or offset of determiner + noun constructions. Point + noun combinations thus appear to selectively predict the development of a new construction in speech. When children point to an object and simultaneously label it, they are beginning to develop their understanding of nouns as a modifiable unit of speech.


Trofatter, C., Kontra, C., Beilock, S., & Goldin-Meadow, S. Gesturing has a larger impact on problem-solving than action, even when action is accompanied by words. Language, Cognition and Neuroscience, 2014, doi:10.1080/23273798.2014.905692. PDF

The coordination of speech with gesture elicits changes in speakers' problem-solving behaviour beyond the changes elicited by the coordination of speech with action. Participants solved the Tower of Hanoi puzzle (TOH1); explained their solution using speech coordinated with either Gestures (Gesture + Talk) or Actions (Action + Talk), or demonstrated their solution using Actions alone (Action); then solved the puzzle again (TOH2). For some participants (Switch group), disc weights during TOH2 were reversed (smallest = heaviest). Only in the Gesture + Talk Switch group did performance worsen from TOH1 to TOH2; for all other groups, performance improved. In the Gesture + Talk Switch group, producing more one-handed gestures about the smallest disc during the explanation was associated with worse subsequent performance, a relation not found in the other groups. These findings contradict the hypothesis that gesture affects thought by promoting the coordination of task-relevant hand movements with task-relevant speech, and lend support to the hypothesis that gesture grounds thought in action via its representational properties.


Fay, N., Lister, C., Ellison, T.M., & Goldin-Meadow, S.  Creating a communication system from scratch: Gesture beats vocalization hands down. Frontiers in Psychology (Language Sciences), 2014, 5, 354. doi: 10.3389/fpsyg.2014.00354 PDF

How does modality affect people’s ability to create a communication system from scratch? The present study experimentally tests this question by having pairs of participants communicate a range of pre-specified items (emotions, actions, objects) over a series of trials to a partner using either non-linguistic vocalization, gesture or a combination of the two. Gesture-alone outperformed vocalization-alone, both in terms of successful communication and in terms of the creation of an inventory of sign-meaning mappings shared within a dyad (i.e., sign alignment). Combining vocalization with gesture did not improve performance beyond gesture-alone. In fact, for action items, gesture-alone was a more successful means of communication than the combined modalities. When people do not share a system for communication they can quickly create one, and gesture is the best means of doing so.


Demir, O. E., Fisher, J. A., Goldin-Meadow, S., & Levine, S. C. Narrative processing in typically developing children and children with early unilateral brain injury: Seeing gestures matters. Developmental Psychology, 2014, 50(3), 815-828. doi: 10.1037/a0034322. PDF

Narrative skill in kindergarteners has been shown to be a reliable predictor of later reading comprehension and school achievement. However, we know little about how to scaffold children’s narrative skill. Here we examine whether the quality of kindergarten children’s narrative retellings depends on the kind of narrative elicitation they are given. We asked this question with respect to typically developing (TD) kindergarten children and children with pre- or perinatal unilateral brain injury (PL), a group that has been shown to have difficulty with narrative production. We compared children’s skill in retelling stories originally presented to them in 4 different elicitation formats: (a) wordless cartoons, (b) stories told by a narrator through the auditory modality, (c) stories told by a narrator through the audiovisual modality without co-speech gestures, and (d) stories told by a narrator in the audiovisual modality with co-speech gestures. We found that children told better structured narratives in response to the audiovisual + gesture elicitation format than in response to the other 3 elicitation formats, consistent with findings that co-speech gestures can scaffold other aspects of language and memory. The audiovisual + gesture elicitation format was particularly beneficial for children who had the most difficulty telling a well-structured narrative, a group that included children with larger lesions associated with cerebrovascular infarcts.


Goldin-Meadow, S., Levine, S.C., Hedges, L. V., Huttenlocher, J., Raudenbush, S., & Small, S. New evidence about language and cognitive development based on a longitudinal study:  Hypotheses for intervention, American Psychologist, 2014, 69(6), 588-599. PDF

We review findings from a four-year longitudinal study of language learning conducted on two samples: a sample of typically developing children whose parents vary substantially in socioeconomic status, and a sample of children with pre- or perinatal brain injury. This design enables us to study language development across a wide range of language learning environments and a wide range of language learners. We videotaped samples of children's and parents' speech and gestures during spontaneous interactions at home every four months, and then we transcribed and coded the tapes. We focused on two behaviors known to vary across individuals and environments - child gesture and parent speech - behaviors that have the possibility to index, and perhaps even play a role in creating, differences across children in linguistic and other cognitive skills. Our observations have led to four hypotheses that have promise for the development of diagnostic tools and interventions to enhance language and cognitive development and brain plasticity after neonatal injury. One kind of hypothesis involves tools that could identify children who may be at risk for later language deficits. The other involves interventions that have the potential to promote language development. We present our four hypotheses as a summary of the findings from our study because there is scientific evidence behind them and because this evidence has the potential to be put to practical use in improving education.


Goldin-Meadow, S.  In search of resilient and fragile properties of language. Journal of Child Language, 2014, 41, 64-77. PDF

Young children are skilled language learners. They apply their skills to the language input they receive from their parents and, in this way, derive patterns that are statistically related to their input. But being an excellent statistical learner does not explain why children who are not exposed to usable linguistic input nevertheless communicate using systems containing the fundamental properties of language. Nor does it explain why learners sometimes alter the linguistic input to which they are exposed (input from either a natural or an artificial language). These observations suggest that children are prepared to learn language. Our task now, as it was in 1974, is to figure out what they are prepared with – to identify properties of language that are relatively easy to learn, the resilient properties, as well as properties of language that are more difficult to learn, the fragile properties. The new tools and paradigms for describing and explaining language learning that have been introduced into the field since 1974 offer great promise for accomplishing this task.


Goldin-Meadow, S. The impact of time on predicate forms in the manual modality: Signers, homesigners, and silent gesturers. Topics in Cognitive Science, doi: 10.1111/tops.12119. PDF

It is difficult to create spoken forms that can be understood on the spot. But the manual modality, in large part because of its iconic potential, allows us to construct forms that are immediately understood, thus requiring essentially no time to develop. This paper contrasts manual forms for actions produced over three time spans—by silent gesturers who are asked to invent gestures on the spot; by homesigners who have created gesture systems over their life spans; and by signers who have learned a conventional sign language from other signers—and finds that properties of the predicate differ across these time spans. Silent gesturers use location to establish co-reference in the way established sign languages do, but they show little evidence of the segmentation sign languages display in motion forms for manner and path, and little evidence of the finger complexity sign languages display in handshapes in predicates representing events. Homesigners, in contrast, not only use location to establish co-reference but also display segmentation in their motion forms for manner and path and finger complexity in their object handshapes, although they have not yet decreased finger complexity to the levels found in sign languages in their handling handshapes. The manual modality thus allows us to watch language as it grows, offering insight into factors that may have shaped and may continue to shape human language.


Goldin-Meadow, S., Namboodiripad, S., Mylander, C., Ozyurek, A., & Sancar, B. The resilience of structure built around the predicate: Homesign gesture systems in Turkish and American deaf children. Journal of Cognition and Development, 2014, doi: 10.1080/15248372.2013.803970. PDF

Deaf children whose hearing losses prevent them from accessing spoken language and whose hearing parents have not exposed them to sign language develop gesture systems, called homesigns, which have many of the properties of natural language—the so-called resilient properties of language. We explored the resilience of structure built around the predicate—in particular, how manner and path are mapped onto the verb—in homesign systems developed by deaf children in Turkey and the United States. We also asked whether the Turkish homesigners exhibit sentence-level structures previously identified as resilient in American and Chinese homesigners. We found that the Turkish and American deaf children used not only the same production probability and ordering patterns to indicate who does what to whom, but also used the same segmentation and conflation patterns to package manner and path. The gestures that the hearing parents produced did not, for the most part, display the patterns found in the children’s gestures. Although cospeech gesture may provide the building blocks for homesign, it does not provide the blueprint for these resilient properties of language. 


LeBarton, E. S., Raudenbush, S., & Goldin-Meadow, S. Experimentally-induced increases in early gesture lead to increases in spoken vocabulary. Journal of Cognition and Development, 2014, doi: 10.1080/15248372.2013.858041. PDF

Differences in vocabulary that children bring with them to school can be traced back to the gestures they produce at 1;2, which, in turn, can be traced back to the gestures their parents produce at the same age (Rowe & Goldin-Meadow, 2009b). We ask here whether child gesture can be experimentally increased and, if so, whether the increases lead to increases in spoken vocabulary. Fifteen children aged 1;5 participated in an 8-week at-home intervention study (6 weekly training sessions plus follow-up 2 weeks later) in which all were exposed to object words, but only some were told to point at the named objects. Before each training session and at follow-up, children interacted naturally with caregivers to establish a baseline against which changes in communication were measured. Children who were told to gesture increased the number of gesture meanings they conveyed, not only during training but also during interactions with caregivers. These experimentally-induced increases in gesture led to larger spoken repertoires at follow-up.


Applebaum, L., Coppola, M., & Goldin-Meadow, S.  Prosody in a communication system developed without a language model.  Sign Language and Linguistics, 2014, 17(2), 181-212. doi: 10.1075/sll.17.2.02app. PDF

Prosody, the “music” of language, is an important aspect of all natural languages, spoken and signed. We ask here whether prosody is also robust across learning conditions. If a child were not exposed to a conventional language and had to construct his own communication system, would that system contain prosodic structure? We address this question by observing a deaf child who received no sign language input and whose hearing loss prevented him from acquiring spoken language. Despite his lack of a conventional language model, this child developed his own gestural system. In this system, features known to mark phrase and utterance boundaries in established sign languages were used to consistently mark the ends of utterances, but not to mark phrase or utterance internal boundaries. A single child can thus develop the seeds of a prosodic system, but full elaboration may require more time, more users, or even more generations to blossom.


2013

Hunsicker, D., & Goldin-Meadow, S.  How handshape can distinguish between nouns and verbs in homesign.  Gesture, 2013, 13(3), 354-376. PDF

All established languages, spoken or signed, make a distinction between nouns and verbs. Even a young sign language emerging within a family of deaf individuals has been found to mark the noun-verb distinction, and to use handshape type to do so. Here we ask whether handshape type is used to mark the noun-verb distinction in a gesture system invented by a deaf child who does not have access to a usable model of either spoken or signed language. The child produces homesigns that have linguistic structure, but receives from his hearing parents co-speech gestures that are structured differently from his own gestures. Thus, unlike users of established and emerging languages, the homesigner is a producer of his system but does not receive it from others. Nevertheless, we found that the child used handshape type to mark the distinction between nouns and verbs at the early stages of development. The noun-verb distinction is thus so fundamental to language that it can arise in a homesign system that is not shared with others. We also found that the child abandoned handshape type as a device for distinguishing nouns from verbs at just the moment when he developed a combinatorial system of handshape and motion components that marked the distinction. The way the noun-verb distinction is marked thus depends on the full array of linguistic devices available within the system.


Brentari, D., Coppola, M., Jung, A., & Goldin-Meadow, S.  Acquiring word class distinctions in American Sign Language:  Evidence from handshape.  Language Learning and Development, 2013, 9(2), 130-150. PDF

Handshape works differently in nouns versus a class of verbs in American Sign Language (ASL) and thus can serve as a cue to distinguish between these two word classes. Handshapes representing characteristics of the object itself (object handshapes) and handshapes representing how the object is handled (handling handshapes) appear in both nouns and a particular type of verb, classifier predicates, in ASL. When used as nouns, object and handling handshapes are phonemic—that is, they are specified in dictionary entries and do not vary with grammatical context. In contrast, when used as classifier predicates, object and handling handshapes do vary with grammatical context for both morphological and syntactic reasons. We ask here when young deaf children learning ASL acquire the word class distinction signaled by handshape. Specifically, we determined the age at which children systematically vary object versus handling handshapes as a function of grammatical context in classifier predicates but not in the nouns that accompany those predicates. We asked 4–6-year-old children, 7–10-year-old children, and adults, all of whom were native ASL signers, to describe a series of vignettes designed to elicit object and handling handshapes in both nouns and classifier predicates. We found that all of the children behaved like adults with respect to all nouns, systematically varying object and handling handshapes as a function of type of item and not grammatical context. The children also behaved like adults with respect to certain classifiers, systematically varying handshape type as a function of grammatical context for items whose nouns have handling handshapes. The children differed from adults in that they did not systematically vary handshape as a function of grammatical context for items whose nouns have object handshapes. These findings extend previous work by showing that children require developmental time to acquire the full morphological system underlying classifier predicates in sign language, just as children acquiring complex morphology in spoken languages do. In addition, we show for the first time that children acquiring ASL treat object and handling handshapes differently as a function of their status as nouns vs. classifier predicates, and thus display a distinction between these word classes as early as 4 years of age.


Gentner, D., Özyurek, A., Gurcanli, O., & Goldin-Meadow, S.  Spatial language facilitates spatial cognition:  Evidence from children who lack language input.  Cognition, 2013, 127(3), 318–330. PDF

Does spatial language influence how people think about space? To address this question, we observed children who did not know a conventional language, and tested their performance on nonlinguistic spatial tasks. We studied deaf children living in Istanbul whose hearing losses prevented them from acquiring speech and whose hearing parents had not exposed them to sign. Lacking a conventional language, the children used gestures, called homesigns, to communicate. In Study 1, we asked whether homesigners used gesture to convey spatial relations, and found that they did not. In Study 2, we tested a new group of homesigners on a Spatial Mapping Task, and found that they performed significantly worse than hearing Turkish children who were matched to the deaf children on another cognitive task. The absence of spatial language thus went hand-in-hand with poor performance on the nonlinguistic spatial task, pointing to the importance of spatial language in thinking about space.


Ozcaliskan, S., Levine, S., & Goldin-Meadow, S. Gesturing with an injured brain: How gesture helps children with early brain injury learn linguistic constructions. Journal of Child Language, 2013, 40(5), 69-105. PDF

Children with pre/perinatal unilateral brain lesions (PL) show remarkable plasticity for language development. Is this plasticity characterized by the same developmental trajectory that characterizes typically developing (TD) children, with gesture leading the way into speech? We explored this question, comparing eleven children with PL – matched to thirty TD children on expressive vocabulary – in the second year of life. Children with PL showed similarities to TD children for simple but not complex sentence types. Children with PL produced simple sentences across gesture and speech several months before producing them entirely in speech, exhibiting parallel delays in both gesture + speech and speech-alone. However, unlike TD children, children with PL produced complex sentence types first in speech-alone. Overall, the gesture–speech system appears to be a robust feature of language learning for simple – but not complex – sentence constructions, acting as a harbinger of change in language development even when that language is developing in an injured brain.


Goldin-Meadow, S., & Alibali, M.W. Gesture's role in speaking, learning, and creating language. Annual Review of Psychology, 2013, 123, 448-453. PDF

When speakers talk, they gesture. The goal of this review is to investigate the contribution that these gestures make to how we communicate and think. Gesture can play a role in communication and thought at many timespans. We explore, in turn, gesture’s contribution to how language is produced and understood in the moment; its contribution to how we learn language and other cognitive skills; and its contribution to how language is created over generations, over childhood, and on the spot. We find that the gestures speakers produce when they talk are integral to communication and can be harnessed in a number of ways. (a) Gesture reflects speakers’ thoughts, often their unspoken thoughts, and thus can serve as a window onto cognition. Encouraging speakers to gesture can thus provide another route for teachers, clinicians, interviewers, etc., to better understand their communication partners. (b) Gesture can change speakers’ thoughts. Encouraging gesture thus has the potential to change how students, patients, witnesses, etc., think about a problem and, as a result, alter the course of learning, therapy, or an interchange. (c) Gesture provides building blocks that can be used to construct a language. By watching how children and adults who do not already have a language put those blocks together, we can observe the process of language creation. Our hands are with us at all times and thus provide researchers and learners with an ever-present tool for understanding how we talk and think.


Hunsicker, D., & Goldin-Meadow, S. Hierarchical structure in a self-created communication system: Building nominal constituents in homesign. Language, 2013, 732-763. PDF

Deaf children whose hearing losses are so severe that they cannot acquire spoken language and whose hearing parents have not exposed them to sign language nevertheless use gestures, called HOMESIGNS, to communicate. Homesigners have been shown to refer to entities by pointing at that entity (a demonstrative, that). They also use iconic gestures and category points that refer, not to a particular entity, but to its class (a noun, bird). We used longitudinal data from a homesigner called David to test the hypothesis that these different types of gestures are combined to form larger, multi-gesture nominal constituents (that bird). We verified this hypothesis by showing that David’s multi-gesture combinations served the same semantic and syntactic functions as demonstrative gestures or noun gestures used on their own. In other words, the larger unit substituted for the smaller units and, in this way, functioned as a nominal constituent. Children are thus able to refer to entities using multi-gesture units that contain both nouns and demonstratives, even when they do not have a conventional language to provide a model for this type of hierarchical constituent structure.


Shneidman, L. A., Arroyo, M. E., Levine, S., & Goldin-Meadow, S. What counts as effective input for word learning? Journal of Child Language, 2013, 40(3), 672-686. PDF

The talk children hear from their primary caregivers predicts the size of their vocabularies. But children who spend time with multiple individuals also hear talk that others direct to them, as well as talk not directed to them at all. We investigated the effect of linguistic input on vocabulary acquisition in children who routinely spent time with one vs. multiple individuals. For all children, the number of words primary caregivers directed to them at age 2;6 predicted vocabulary size at age 3;6. For children who spent time with multiple individuals, child-directed words from ALL household members also predicted later vocabulary and accounted for more variance in vocabulary than words from primary caregivers alone. Interestingly, overheard words added no predictive value to the model. These findings suggest that speech directed to children is important for early word learning, even in households where a sizable proportion of input comes from overheard speech.


Andric, M., Solodkin, A., Buccino, G., Goldin-Meadow, S., Rizzolatti, G., & Small, S. L. Brain function overlaps when people observe emblems, speech, and grasping. Neuropsychologia, 2013, 51(8), 1619-1629. PDF

A hand grasping a cup and a hand gesturing “thumbs-up”, while both manual actions, have different purposes and effects. Grasping directly affects the cup, whereas gesturing “thumbs-up” has an effect through an implied verbal (symbolic) meaning. Because grasping and emblematic gestures (“emblems”) are both goal-oriented hand actions, we pursued the hypothesis that observing each should evoke similar activity in neural regions implicated in processing goal-oriented hand actions. However, because emblems express symbolic meaning, observing them should also evoke activity in regions implicated in interpreting meaning, which is most commonly expressed in language. Using fMRI to test this hypothesis, we had participants watch videos of an actor performing emblems, speaking utterances matched in meaning to the emblems, and grasping objects. Our results show that lateral temporal and inferior frontal regions respond to symbolic meaning, even when it is expressed by a single hand action. In particular, we found that left inferior frontal and right lateral temporal regions are strongly engaged when people observe either emblems or speech. In contrast, we also replicate and extend previous work that implicates parietal and premotor responses in observing goal-oriented hand actions. For hand actions, we found that bilateral parietal and premotor regions are strongly engaged when people observe either emblems or grasping. These findings thus characterize converging brain responses to shared features (e.g., symbolic or manual), despite their encoding and presentation in different stimulus modalities.


Göksun, T., Goldin-Meadow, S., Newcombe, N., & Shipley, T. Individual differences in mental rotation: What does gesture tell us? Cognitive Processing, 2013, 14, 153-162. PDF

Gestures are common when people convey spatial information, for example, when they give directions or describe motion in space. Here, we examine the gestures speakers produce when they explain how they solved mental rotation problems (Shepard and Metzler in Science 171:701–703, 1971). We asked whether speakers gesture differently while describing their problems as a function of their spatial abilities. We found that low-spatial individuals (as assessed by a standard paper-and-pencil measure) gestured more to explain their solutions than high-spatial individuals. While this finding may seem surprising, finer-grained analyses showed that low-spatial participants used gestures more often than high-spatial participants to convey “static only” information but less often than high-spatial participants to convey dynamic information. Furthermore, the groups differed in the types of gestures used to convey static information: high-spatial individuals were more likely than low-spatial individuals to use gestures that captured the internal structure of the block forms. Our gesture findings thus suggest that encoding block structure may be as important as rotating the blocks in mental spatial transformation.


Gunderson, E. A., Gripshover, S. J., Romero, C., Dweck, C. S., Goldin-Meadow, S., & Levine, S. C. Parent praise to 1- to 3-year-olds predicts children's motivational frameworks 5 years later. Child Development, doi: 10.1111/cdev.12064. PDF

In laboratory studies, praising children’s effort encourages them to adopt incremental motivational frameworks—they believe ability is malleable, attribute success to hard work, enjoy challenges, and generate strategies for improvement. In contrast, praising children’s inherent abilities encourages them to adopt fixed-ability frameworks. Does the praise parents spontaneously give children at home show the same effects? Although parents’ early praise of inherent characteristics was not associated with children’s later fixed-ability frameworks, parents’ praise of children’s effort at 14–38 months (N=53) did predict incremental frameworks at 7–8 years, suggesting that causal mechanisms identified in experimental work may be operating in home environments.


Dick, A. S., Mok, E. H., Beharelle, A. R., Goldin-Meadow, S., & Small, S. Frontal and temporal contributions to understanding the iconic gestures that accompany speech, Human Brain Mapping, doi: 10.1002/hbm.22222. PDF

In everyday conversation, listeners often rely on a speaker’s gestures to clarify any ambiguities in the verbal message. Using fMRI during naturalistic story comprehension, we examined which brain regions in the listener are sensitive to speakers’ iconic gestures. We focused on iconic gestures that contribute information not found in the speaker’s talk, compared with those that convey information redundant with the speaker’s talk. We found that three regions—left inferior frontal gyrus triangular (IFGTr) and opercular (IFGOp) portions, and left posterior middle temporal gyrus (MTGp)—responded more strongly when gestures added information to nonspecific language, compared with when they conveyed the same information in more specific language; in other words, when gesture disambiguated speech as opposed to reinforced it. An increased BOLD response was not found in these regions when the nonspecific language was produced without gesture, suggesting that IFGTr, IFGOp, and MTGp are involved in integrating semantic information across gesture and speech. In addition, we found that activity in the posterior superior temporal sulcus (STSp), previously thought to be involved in gesture-speech integration, was not sensitive to the gesture-speech relation. Together, these findings clarify the neurobiology of gesture-speech integration and contribute to an emerging picture of how listeners glean meaning from gestures that accompany speech.


Ozcaliskan, S., Gentner, D., & Goldin-Meadow, S. Do iconic gestures pave the way for children's early verbs? Applied Psycholinguistics, doi: 10.1017/S0142716412000720. PDF

Children produce a deictic gesture for a particular object (point at dog) approximately 3 months before they produce the verbal label for that object (“dog”; Iverson & Goldin-Meadow, 2005). Gesture thus paves the way for children’s early nouns. We ask here whether the same pattern of gesture preceding and predicting speech holds for iconic gestures. In other words, do gestures that depict actions precede and predict early verbs? We observed spontaneous speech and gestures produced by 40 children (22 girls, 18 boys) from age 14 to 34 months. Children produced their first iconic gestures 6 months later than they produced their first verbs. Thus, unlike the onset of deictic gestures, the onset of iconic gestures conveying action meanings followed, rather than preceded, children’s first verbs. However, iconic gestures increased in frequency at the same time as verbs did and, at that time, began to convey meanings not yet expressed in speech. Our findings suggest that children can use gesture to expand their repertoire of action meanings, but only after they have begun to acquire the verb system underlying their language.


So, W-C., Kita, S., & Goldin-Meadow, S. When do speakers use gesture to specify who does what to whom in a narrative? The role of language proficiency and type of gesture.  Journal of Psycholinguistic Research, doi: 10.1007/s10936-012-9230-6. PDF

Previous research has found that iconic gestures (i.e., gestures that depict the actions, motions or shapes of entities) identify referents that are also lexically specified in the co-occurring speech produced by proficient speakers. This study examines whether concrete deictic gestures (i.e., gestures that point to physical entities) bear a different kind of relation to speech, and whether this relation is influenced by the language proficiency of the speakers. Two groups of speakers who had different levels of English proficiency were asked to retell a story in English. Their speech and gestures were transcribed and coded. Our findings showed that proficient speakers produced concrete deictic gestures for referents that were not specified in speech, and iconic gestures for referents that were specified in speech, suggesting that these two types of gestures bear different kinds of semantic relations with speech. In contrast, less proficient speakers produced concrete deictic gestures and iconic gestures whether or not referents were lexically specified in speech. Thus, both type of gesture and proficiency of speaker need to be considered when accounting for how gesture and speech are used in a narrative context.


Coppola, M., Spaepen, E., & Goldin-Meadow, S. Communicating about number without a language model: Number devices in homesign grammar. Cognitive Psychology, 2013, 67, 1-25. PDF

All natural languages have formal devices for communicating about number, be they lexical (e.g., two, many) or grammatical (e.g., plural markings on nouns and/or verbs). Here we ask whether linguistic devices for number arise in communication systems that have not been handed down from generation to generation. We examined deaf individuals who had not been exposed to a usable model of conventional language (signed or spoken), but had nevertheless developed their own gestures, called homesigns, to communicate. Study 1 examined four adult homesigners and a hearing communication partner for each homesigner. The adult homesigners produced two main types of number gestures: gestures that enumerated sets (cardinal number marking), and gestures that signaled one vs. more than one (non-cardinal number marking). Both types of gestures resembled, in form and function, number signs in established sign languages and, as such, were fully integrated into each homesigner’s gesture system and, in this sense, linguistic. The number gestures produced by the homesigners’ hearing communication partners displayed some, but not all, of the homesigners’ linguistic patterns. To better understand the origins of the patterns displayed by the adult homesigners, Study 2 examined a child homesigner and his hearing mother, and found that the child’s number gestures displayed all of the properties found in the adult homesigners’ gestures, but his mother’s gestures did not. The findings suggest that number gestures and their linguistic use can appear relatively early in homesign development, and that hearing communication partners are not likely to be the source of homesigners’ linguistic expressions of non-cardinal number. Linguistic devices for number thus appear to be so fundamental to language that they can arise in the absence of conventional linguistic input.


Spaepen, E., Flaherty, M., Coppola, M., Spelke, E., & Goldin-Meadow, S. Generating a lexicon without a language model: Do words for number count? Journal of Memory and Language, 2013, doi: 10.1016/j.jml.2013.05.004. PDF

Homesigns are communication systems created by deaf individuals without access to conventional linguistic input. To investigate how homesign gestures for number function in short-term memory compared to homesign gestures for objects, actions, or attributes, we conducted memory span tasks with adult homesigners in Nicaragua, and with comparison groups of unschooled hearing Spanish speakers and deaf Nicaraguan Sign Language signers. There was no difference between groups in recall of gestures or words for objects, actions or attributes; homesign gestures therefore can function as word units in short-term memory. However, homesigners showed poorer recall of numbers than the other groups: increasing the numerical value of the to-be-remembered quantities negatively affected recall in homesigners but not in the comparison groups. When developed without linguistic input, gestures for number do not seem to function as summaries of the cardinal values of the sets (four), but rather as indexes of items within a set (one–one–one–one).


Cartmill, E.A., Armstrong, B.F., Gleitman, L.R., Goldin-Meadow, S., Medina, T.N., & Trueswell, J. D. Quality of early parent input predicts child vocabulary three years later. Proceedings of the National Academy of Sciences of the United States of America, 2013, 110(28), 11278-11283. doi: 10.1073/pnas.1309518110. PDF

Children vary greatly in the number of words they know when they enter school, a major factor influencing subsequent school and workplace success. This variability is partially explained by the differential quantity of parental speech to preschoolers. However, the contexts in which young learners hear new words are also likely to vary in referential transparency; that is, in how clearly word meaning can be inferred from the immediate extralinguistic context, an aspect of input quality. To examine this aspect, we asked 218 adult participants to guess 50 parents’ words from (muted) videos of their interactions with their 14- to 18-mo-old children. We found systematic differences in how easily individual parents’ words could be identified purely from this socio-visual context. Differences in this kind of input quality correlated with the size of the children’s vocabulary 3 y later, even after controlling for differences in input quantity. Although input quantity differed as a function of socioeconomic status, input quality (as here measured) did not, suggesting that the quality of nonverbal cues to word meaning that parents offer to their children is an individual matter, widely distributed across the population of parents.


2012

Brentari, D., Coppola, M., Mazzoni, L., & Goldin-Meadow, S. When does a system become phonological? Handshape production in gesturers, signers, and homesigners. Natural Language and Linguistic Theory, 2012, 30(1), 1-31. PDF

Sign languages display remarkable crosslinguistic consistencies in the use of handshapes. In particular, handshapes used in classifier predicates display a consistent pattern in finger complexity: classifier handshapes representing objects display more finger complexity than those representing how objects are handled. Here we explore the conditions under which this morphophonological phenomenon arises. In Study 1, we ask whether hearing individuals in Italy and the United States, asked to communicate using only their hands, show the same pattern of finger complexity found in the classifier handshapes of two sign languages: Italian Sign Language (LIS) and American Sign Language (ASL). We find that they do not: gesturers display more finger complexity in handling handshapes than in object handshapes. The morphophonological pattern found in conventional sign languages is therefore not a codified version of the pattern invented by hearing individuals on the spot. In Study 2, we ask whether continued use of gesture as a primary communication system results in a pattern that is more similar to the morphophonological pattern found in conventional sign languages or to the pattern found in gesturers. Homesigners have not acquired a signed or spoken language and instead use a self-generated gesture system to communicate with their hearing family members and friends. We find that homesigners pattern more like signers than like gesturers: their finger complexity in object handshapes is higher than that of gesturers (indeed as high as signers); and their finger complexity in handling handshapes is lower than that of gesturers (but not quite as low as signers). Generally, our findings indicate two markers of the phonologization of handshape in sign languages: increasing finger complexity in object handshapes, and decreasing finger complexity in handling handshapes. These first indicators of phonology appear to be present in individuals developing a gesture system without benefit of a linguistic community. Finally, we propose that iconicity, morphology, and phonology each play an important role in the system of sign language classifiers to create the earliest markers of phonology at the morphophonological interface.


Sauter, A., Uttal, D., Alman, A. S., Goldin-Meadow, S., & Levine, S. C. Learning what children know about space from looking at their hands: The added value of gesture in spatial communication. Journal of Experimental Child Psychology, 2012, 111(4), 587-606. PDF

This article examines two issues: the role of gesture in the communication of spatial information and the relation between communication and mental representation. Children (8–10 years) and adults walked through a space to learn the locations of six hidden toy animals and then explained the space to another person. In Study 1, older children and adults typically gestured when describing the space and rarely provided spatial information in speech without also providing the information in gesture. However, few 8-year-olds communicated spatial information in speech or gesture. Studies 2 and 3 showed that 8-year-olds did understand the spatial arrangement of the animals and could communicate spatial information if prompted to use their hands. Taken together, these results indicate that gesture is important for conveying spatial relations at all ages and, as such, provides us with a more complete picture of what children do and do not know about communicating spatial relations.


Rowe, M. L., Raudenbush, S. W., & Goldin-Meadow, S. The pace of early vocabulary growth helps predict later vocabulary skill. Child Development, 2012, 83(2), 508-525. PDF

Children vary widely in the rate at which they acquire words: some start slow and speed up, others start fast and continue at a steady pace. Do early developmental variations of this sort help predict vocabulary skill just prior to kindergarten entry? This longitudinal study starts by examining important predictors (socioeconomic status [SES], parent input, child gesture) of vocabulary growth between 14 and 46 months (n=62) and then uses growth estimates to predict children's vocabulary at 54 months. Velocity and acceleration in vocabulary development at 30 months predicted later vocabulary, particularly for children from low-SES backgrounds. Understanding the pace of early vocabulary growth thus improves our ability to predict school readiness and may help identify children at risk for starting behind.


Dick, A., Goldin-Meadow, S., Solodkin, A., & Small, S. Gesture in the developing brain. Developmental Science, 2012, 15(2), 165-180. PDF

Speakers convey meaning not only through words, but also through gestures. Although children are exposed to co-speech gestures from birth, we do not know how the developing brain comes to connect meaning conveyed in gesture with speech. We used functional magnetic resonance imaging (fMRI) to address this question and scanned 8- to 11-year-old children and adults listening to stories accompanied by hand movements, either meaningful co-speech gestures or meaningless self-adaptors. When listening to stories accompanied by both types of hand movement, both children and adults recruited inferior frontal, inferior parietal, and posterior temporal brain regions known to be involved in processing language not accompanied by hand movements. There were, however, age-related differences in activity in posterior superior temporal sulcus (STSp), inferior frontal gyrus, pars triangularis (IFGTr), and posterior middle temporal gyrus (MTGp) regions previously implicated in processing gesture. Both children and adults showed sensitivity to the meaning of hand movements in IFGTr and MTGp, but in different ways. Finally, we found that hand movement meaning modulates interactions between STSp and other posterior temporal and inferior parietal regions for adults, but not for children. These results shed light on the developing neural substrate for understanding meaning contributed by co-speech gesture.


Goldin-Meadow, S., Shield, A., Lenzen, D., Herzig, M., & Padden, C. The gestures ASL signers use tell us when they are ready to learn math. Cognition, 2012, 123, 448-453. PDF

The manual gestures that hearing children produce when explaining their answers to math problems predict whether they will profit from instruction in those problems. We ask here whether gesture plays a similar role in deaf children, whose primary communication system is in the manual modality. Forty ASL-signing deaf children explained their solutions to math problems and were then given instruction in those problems. Children who produced many gestures conveying different information from their signs (gesture-sign mismatches) were more likely to succeed after instruction than children who produced few, suggesting that mismatch can occur within-modality, and paving the way for using gesture-based teaching strategies with deaf learners.


Cook, S. W., Yip, T., & Goldin-Meadow, S. Gestures, but not meaningless movements, lighten working memory load when explaining math. Language and Cognitive Processes, 2012, 27, 594-610. PDF

Gesturing is ubiquitous in communication and serves an important function for listeners, who are able to glean meaningful information from the gestures they see. But gesturing also functions for speakers, whose own gestures reduce demands on their working memory. Here we ask whether gesture’s beneficial effects on working memory stem from its properties as a rhythmic movement, or as a vehicle for representing meaning. We asked speakers to remember letters while explaining their solutions to math problems and producing varying types of movements. Speakers recalled significantly more letters when producing movements that coordinated with the meaning of the accompanying speech, i.e., when gesturing, than when producing meaningless movements or no movement. The beneficial effects that accrue to speakers when gesturing thus seem to stem not merely from the fact that their hands are moving, but from the fact that their hands are moving in coordination with the content of speech.


Cartmill, E. A., Beilock, S., & Goldin-Meadow, S. A word in the hand: Action, gesture, and mental representation in human evolution. Philosophical Transactions of the Royal Society, Series B, 2012, 367, 129-143. PDF

The movements we make with our hands both reflect our mental processes and help to shape them. Our actions and gestures can affect our mental representations of actions and objects. In this paper, we explore the relationship between action, gesture and thought in both humans and non-human primates and discuss its role in the evolution of language. Human gesture (specifically representational gesture) may provide a unique link between action and mental representation. It is kinaesthetically close to action and is, at the same time, symbolic. Non-human primates use gesture frequently to communicate, and do so flexibly. However, their gestures mainly resemble incomplete actions and lack the representational elements that characterize much of human gesture. Differences in the mirror neuron system provide a potential explanation for non-human primates’ lack of representational gestures; the monkey mirror system does not respond to representational gestures, while the human system does. In humans, gesture grounds mental representation in action, but there is no evidence for this link in other primates. We argue that gesture played an important role in the transition to symbolic thought and language in human evolution, following a cognitive leap that allowed gesture to incorporate representational elements.


Quandt, L. C., Marshall, P.J., Shipley, T.F., Beilock, S.L., & Goldin-Meadow, S. Sensitivity of alpha and beta oscillations to sensorimotor characteristics of action: An EEG study of action production and gesture observation. Neuropsychologia, 2012, 50(12), 2745-2751. PDF

The sensorimotor experiences we gain when performing an action have been found to influence how our own motor systems are activated when we observe others performing that same action. Here we asked whether this phenomenon applies to the observation of gesture. Would the sensorimotor experiences we gain when performing an action on an object influence activation in our own motor systems when we observe others performing a gesture for that object? Participants were given sensorimotor experience with objects that varied in weight, and then observed video clips of an actor producing gestures for those objects. Electroencephalography (EEG) was recorded while participants first observed either an iconic gesture (pantomiming lifting an object) or a deictic gesture (pointing to an object) for an object, and then grasped and lifted the object indicated by the gesture. We analyzed EEG during gesture observation to determine whether oscillatory activity was affected by the observer’s sensorimotor experiences with the object represented in the gesture. Seeing a gesture for an object previously experienced as light was associated with a suppression of power in alpha and beta frequency bands, particularly at posterior electrodes. A similar pattern was found when participants lifted the light object, but over more diffuse electrodes. Moreover, alpha and beta bands at right parieto-occipital electrodes were sensitive to the type of gesture observed (iconic vs. deictic). These results demonstrate that sensorimotor experience with an object affects how a gesture for that object is processed, as measured by the gesture-observer’s EEG, and suggest that different types of gestures recruit the observer’s own motor system in different ways.


Demir, O.E., So, W-C., Ozyurek, A., & Goldin-Meadow, S. Turkish- and English-speaking children display sensitivity to perceptual context in the referring expressions they produce in speech and gesture. Language and Cognitive Processes. 2012, 27(6), 844-867. PDF

Speakers choose a particular expression based on many factors, including availability of the referent in the perceptual context. We examined whether, when expressing referents, monolingual English- and Turkish-speaking children: (1) are sensitive to perceptual context, (2) express this sensitivity in language-specific ways, and (3) use co-speech gestures to specify referents that are underspecified. We also explored the mechanisms underlying children’s sensitivity to perceptual context. Children described short vignettes to an experimenter under two conditions: The characters in the vignettes were present in the perceptual context (perceptual context); the characters were absent (no perceptual context). Children routinely used nouns in the no perceptual context condition, but shifted to pronouns (English-speaking children) or omitted arguments (Turkish-speaking children) in the perceptual context condition. Turkish-speaking children used underspecified referents more frequently than English-speaking children in the perceptual context condition; however, they compensated for the difference by using gesture to specify the forms. Gesture thus gives children learning structurally different languages a way to achieve comparable levels of specification while at the same time adhering to the referential expressions dictated by their language.


Shneidman, L. A., & Goldin-Meadow, S. Language input and acquisition in a Mayan village: How important is directed speech? Developmental Science, 2012, 15(5), 659-673. PDF

Theories of language acquisition have highlighted the importance of adult speakers as active participants in children’s language learning. However, in many communities children are reported to be directly engaged by their caregivers only rarely (Lieven, 1994). This observation raises the possibility that these children learn language from observing, rather than participating in, communicative exchanges. In this paper, we quantify naturally occurring language input in one community where directed interaction with children has been reported to be rare (Yucatec Mayan). We compare this input to the input heard by children growing up in large families in the United States, and we consider how directed and overheard input relate to Mayan children’s later vocabulary. In Study 1, we demonstrate that 1-year-old Mayan children do indeed hear a smaller proportion of total input in directed speech than children from the US. In Study 2, we show that for Mayan (but not US) children, there are great increases in the proportion of directed input that children receive between 13 and 35 months. In Study 3, we explore the validity of using videotaped data in a Mayan village. In Study 4, we demonstrate that word types directed to Mayan children from adults at 24 months (but not word types overheard by children or word types directed from other children) predict later vocabulary. These findings suggest that adult talk directed to children is important for early word learning, even in communities where much of children’s early language input comes from overheard speech.


Kontra, C. E., Goldin-Meadow, S., & Beilock, S.L. Embodied learning across the lifespan. Topics in Cognitive Science, 2012, 1-9. PDF

Developmental psychologists have long recognized the extraordinary influence of action on learning (Held & Hein, 1963; Piaget, 1952). Action experiences begin to shape our perception of the world during infancy (e.g., as infants gain an understanding of others’ goal-directed actions; Woodward, 2009) and these effects persist into adulthood (e.g., as adults learn about complex concepts in the physical sciences; Kontra, Lyons, Fischer, & Beilock, 2012). Theories of embodied cognition provide a structure within which we can investigate the mechanisms underlying action’s impact on thinking and reasoning. We argue that theories of embodiment can shed light on the role of action experience in early learning contexts, and further that these theories hold promise for using action to scaffold learning in more formal educational settings later in development.


Goldin-Meadow, S., Levine, S. C., Zinchenko, E., Yip, T.K-Y, Hemani, N., & Factor, L. Doing gesture promotes learning a mental transformation task better than seeing gesture. Developmental Science, 2012, 15(6), 876-884. PDF

Performing action has been found to have a greater impact on learning than observing action. Here we ask whether a particular type of action – the gestures that accompany talk – affects learning in a comparable way. We gave 158 6-year-old children instruction in a mental transformation task. Half the children were asked to produce a Move gesture relevant to the task; half were asked to produce a Point gesture. The children also observed the experimenter producing either a Move or Point gesture. Children who produced a Move gesture improved more than children who observed the Move gesture. Neither producing nor observing the Point gesture facilitated learning. Doing gesture promotes learning better than seeing gesture, as long as the gesture conveys information that could help solve the task.


2011

Spaepen, E., Coppola, M., Spelke, E., Carey, S. & Goldin-Meadow, S. Number without a language model. Proceedings of the National Academy of Sciences of the United States of America, 2011, 108(8), 3163-3168. PDF

Cross-cultural studies suggest that access to a conventional language containing words that can be used for counting is essential to develop representations of large exact numbers. However, cultures that lack a conventional counting system typically differ from cultures that have such systems, not only in language but also in many other ways. As a result, it is difficult to isolate the effects of language on the development of number representations. Here we examine the numerical abilities of individuals who lack conventional language for number (deaf individuals who do not have access to a usable model for language, spoken or signed) but who live in a numerate culture (Nicaragua) and thus have access to other aspects of culture that might foster the development of number. These deaf individuals develop their own gestures, called homesigns, to communicate. We show that homesigners use gestures to communicate about number. However, they do not consistently extend the correct number of fingers when communicating about sets greater than three, nor do they always correctly match the number of items in one set to a target set when that target set is greater than three. Thus, even when integrated into a numerate society, individuals who lack input from a conventional language do not spontaneously develop representations of large exact numerosities.


Goldin-Meadow, S. Learning through gesture. WIREs (Wiley Interdisciplinary Reviews): Cognitive Science, published online, March 2011. DOI: 10.1002/wcs.132. PDF

When people talk, they move their hands—they gesture. Although these movements might appear to be meaningless hand waving, in fact they convey substantive information that is not always found in the accompanying speech. As a result, gesture can provide insight into thoughts that speakers have but do not know they have. Even more striking, gesture can mark a speaker as being in transition with respect to a task—learners who are on the verge of making progress on a task routinely produce gestures that convey information that is different from the information conveyed in speech. Gesture can thus be used to predict who will learn. In addition, evidence is mounting that gesture not only presages learning but also can play a role in bringing that learning about. Gesture can cause learning indirectly by influencing the learning environment or directly by influencing learners themselves. We can thus change our minds by moving our hands.


Franklin, A., Giannakidou, A., & Goldin-Meadow, S. Negation and structure building in a home sign system. Cognition, 2011, 118(3), 398-416. PDF

Deaf children whose hearing losses are so severe that they cannot acquire spoken language, and whose hearing parents have not exposed them to sign language, use gestures called homesigns to communicate. Homesigns have been shown to contain many of the properties of natural languages. Here we ask whether homesign has structure building devices for negation and questions. We identify two meanings (negation, question) that correspond semantically to propositional functions, that is, to functions that apply to a sentence (whose semantic value is a proposition, p) and yield another proposition that is more complex (¬p for negation; ?p for question). Combining p with ¬ or ? thus involves sentence modification. We propose that these negative and question functions are structure building operators, and we support this claim with data from an American homesigner. We show that: (a) each meaning is marked by a particular form in the child’s gesture system (side-to-side headshake for negation, manual flip for question); (b) the two markers occupy systematic, and different, positions at the periphery of the gesture sentences (headshake at the beginning, flip at the end); and (c) the flip is extended from questions to other uses associated with the wh-form (exclamatives, referential expressions of location) and thus functions like a category in natural languages. If what we see in homesign is a language creation process (Goldin-Meadow, 2003), and if negation and question formation involve sentential modification, then our analysis implies that homesign has at least this minimal sentential syntax. Our findings thus contribute to ongoing debates about properties that are fundamental to language and language learning.


Demir, O. E., So, W., Ozyurek, A., Goldin-Meadow, S. Turkish- and English-speaking children display sensitivity to perceptual context in the referring expressions they produce in speech and gesture. Language and Cognitive Processes, 2011, 1-24. PDF

Speakers choose a particular expression based on many factors, including availability of the referent in the perceptual context. We examined whether, when expressing referents, monolingual English- and Turkish-speaking children: (1) are sensitive to perceptual context, (2) express this sensitivity in language-specific ways, and (3) use co-speech gestures to specify referents that are underspecified. We also explored the mechanisms underlying children’s sensitivity to perceptual context. Children described short vignettes to an experimenter under two conditions: The characters in the vignettes were present in the perceptual context (perceptual context); the characters were absent (no perceptual context). Children routinely used nouns in the no perceptual context condition, but shifted to pronouns (English-speaking children) or omitted arguments (Turkish-speaking children) in the perceptual context condition. Turkish-speaking children used underspecified referents more frequently than English-speaking children in the perceptual context condition; however, they compensated for the difference by using gesture to specify the forms. Gesture thus gives children learning structurally different languages a way to achieve comparable levels of specification while at the same time adhering to the referential expressions dictated by their language.


2010

Emmorey, K., Xu, J., Gannon, P., Goldin-Meadow, S., & Braun, A. CNS activation and regional connectivity during pantomime observation: No engagement of the mirror neuron system for deaf signers. NeuroImage, 2010, 49, 994-1005. PDF

Deaf signers have extensive experience using their hands to communicate. Using fMRI, we examined the neural systems engaged during the perception of manual communication in 14 deaf signers and 14 hearing non-signers. Participants passively viewed blocked video clips of pantomimes (e.g., peeling an imaginary banana) and action verbs in American Sign Language (ASL) that were rated as meaningless by non-signers (e.g., TO-DANCE). In contrast to visual fixation, pantomimes strongly activated fronto-parietal regions (the mirror neuron system, MNS) in hearing non-signers, but only bilateral middle temporal regions in deaf signers. When contrasted with ASL verbs, pantomimes selectively engaged inferior and superior parietal regions in hearing non-signers, but right superior temporal cortex in deaf signers. The perception of ASL verbs recruited similar regions as pantomimes for deaf signers, with some evidence of greater involvement of left inferior frontal gyrus for ASL verbs. Functional connectivity analyses with left hemisphere seed voxels (ventral premotor, inferior parietal lobule, fusiform gyrus) revealed robust connectivity with the MNS for the hearing non-signers. Deaf signers exhibited functional connectivity with the right hemisphere that was not observed for the hearing group for the fusiform gyrus seed voxel. We suggest that life-long experience with manual communication, and/or auditory deprivation, may alter regional connectivity and brain activation when viewing pantomimes. We conclude that the lack of activation within the MNS for deaf signers does not support an account of human communication that depends upon automatic sensorimotor resonance between perception and action.


So, W.C., Demir, O. E., & Goldin-Meadow, S. When speech is ambiguous, gesture steps in: Sensitivity to discourse-pragmatic principles in early childhood. Applied Psycholinguistics, 2010, 31, 209-224. PDF

Young children produce gestures to disambiguate arguments. This study explores whether the gestures they produce are constrained by discourse-pragmatic principles: person and information status. We ask whether children use gesture more often to indicate the referents that have to be specified (i.e. third person and new referents) than the referents that do not have to be specified (i.e. first or second person and given referents). Chinese- and English-speaking children were videotaped while interacting spontaneously with adults, and their speech and gestures were coded for referential expressions. We found that both groups of children tended to use nouns when indicating third person and new referents but pronouns or null arguments when indicating first or second person and given referents. They also produced gestures more often when indicating third person and new referents, particularly when those referents were ambiguously conveyed by less explicit referring expressions (pronouns, null arguments). Thus, Chinese- and English-speaking children show sensitivity to discourse-pragmatic principles not only in speech but also in gesture.


Sauer, E., Levine, S. C., & Goldin-Meadow, S. Early gesture predicts language delay in children with pre- and perinatal brain lesions. Child Development, 2010, 81, 528-539. PDF

Does early gesture use predict later productive and receptive vocabulary in children with pre- or perinatal unilateral brain lesions (PL)? Eleven children with PL were categorized into 2 groups based on whether their gesture at 18 months was within or below the range of typically developing (TD) children. Children with PL whose gesture was within the TD range developed a productive vocabulary at 22 and 26 months and a receptive vocabulary at 30 months that were all within the TD range. In contrast, children with PL below the TD range did not. Gesture was thus an early marker of which children with early unilateral lesions would eventually experience language delay, suggesting that gesture is a promising diagnostic tool for persistent delay.


Broaders, S., & Goldin-Meadow, S. Truth is at hand: How gesture adds information during investigative interviews. Psychological Science, 2010, 21(5), 623-628. PDF

The accuracy of information obtained in forensic interviews is critically important to credibility in the legal system. Research has shown that the way interviewers frame questions influences the accuracy of witnesses' reports. A separate body of research has shown that speakers gesture spontaneously when they talk and that these gestures can convey information not found anywhere in the speakers' words. In our study, which joins these two literatures, we interviewed children about an event that they had witnessed. Our results demonstrate that (a) interviewers' gestures serve as a source of information (and, at times, misinformation) that can lead witnesses to report incorrect details, and (b) the gestures witnesses spontaneously produce during interviews convey substantive information that is often not conveyed anywhere in their speech, and thus would not appear in written transcripts of the proceedings. These findings underscore the need to attend to, and document, gestures produced in investigative interviews, particularly interviews conducted with children.


Demir, O. E., Levine, S. C., Goldin-Meadow, S. Narrative skill in children with early unilateral brain injury: A possible limit to functional plasticity, Developmental Science, 2010, 13(4), 636-647. PDF

Children with pre- or perinatal brain injury (PL) exhibit marked plasticity for language learning. Previous work has focused mostly on the emergence of earlier-developing skills, such as vocabulary and syntax. Here we ask whether this plasticity for earlier-developing aspects of language extends to more complex, later-developing language functions by examining the narrative production of children with PL. Using an elicitation technique that involves asking children to create stories de novo in response to a story stem, we collected narratives from 11 children with PL and 20 typically developing (TD) children. Narratives were analysed for length, diversity of the vocabulary used, use of complex syntax, complexity of the macro-level narrative structure and use of narrative evaluation. Children’s language performance on vocabulary and syntax tasks outside the narrative context was also measured. Findings show that children with PL produced shorter stories, used less diverse vocabulary, produced structurally less complex stories at the macro-level, and made fewer inferences regarding the cognitive states of the story characters. These differences in the narrative task emerged even though children with PL did not differ from TD children on vocabulary and syntax tasks outside the narrative context. Thus, findings suggest that there may be limitations to the plasticity for language functions displayed by children with PL, and that these limitations may be most apparent in complex, decontextualized language tasks such as narrative production.


Özçalişkan, S. & Goldin-Meadow, S. Sex differences in language first appear in gesture. Developmental Science, 2010, 13(5), 752-760. PDF

Children differ in how quickly they reach linguistic milestones. Boys typically produce their first multi-word sentences later than girls do. We ask here whether there are sex differences in children’s gestures that precede, and presage, these sex differences in speech. To explore this question, we observed 22 girls and 18 boys every 4 months as they progressed from one-word speech to multi-word speech. We found that boys not only produced speech + speech (S+S) combinations (‘drink juice’) 3 months later than girls, but they also produced gesture + speech (G+S) combinations expressing the same types of semantic relations (‘eat’ + point at cookie) 3 months later than girls. Because G+S combinations are produced earlier than S+S combinations, children’s gestures provide the first sign that boys are likely to lag behind girls in the onset of sentence constructions.


Ping, R., & Goldin-Meadow, S. Gesturing saves cognitive resources when talking about non-present objects. Cognitive Science, 2010, 34(4), 602-619. PDF

In numerous experimental contexts, gesturing has been shown to lighten a speaker's cognitive load. However, in all of these experimental paradigms, the gestures have been directed to items in the "here-and-now." This study attempts to generalize gesture's ability to lighten cognitive load. We demonstrate here that gesturing continues to confer cognitive benefits when speakers talk about objects that are not present, and therefore cannot be directly indexed by gesture. These findings suggest that gesturing confers its benefits by more than simply tying abstract speech to the objects directly visible in the environment. Moreover, we show that the cognitive benefit conferred by gesturing is greater when novice learners produce gestures that add to the information expressed in speech than when they produce gestures that convey the same information as speech, suggesting that it is gesture's meaningfulness that gives it the ability to affect working memory load.


Cook, S. W., Yip, T.K., & Goldin-Meadow, S. Gesturing makes memories that last. Journal of Memory and Language, 2010, 63(4), 465-475. PDF

When people are asked to perform actions, they remember those actions better than if they are asked to talk about the same actions. But when people talk, they often gesture with their hands, thus adding an action component to talking. The question we asked in this study was whether producing gesture along with speech makes the information encoded in that speech more memorable than it would have been without gesture. We found that gesturing during encoding led to better recall, even when the amount of speech produced during encoding was controlled. Gesturing during encoding improved recall whether the speaker chose to gesture spontaneously or was instructed to gesture. Thus, gesturing during encoding seems to function like action in facilitating memory.


Beilock, S. L. & Goldin-Meadow, S. Gesture changes thought by grounding it in action. Psychological Science, 2010, 21, 1605-1610. PDF.

When people talk, they gesture. We show that gesture introduces action information into speakers’ mental representations, which, in turn, affect subsequent performance. In Experiment 1, participants solved the Tower of Hanoi task (TOH1), explained (with gesture) how they solved it, and solved it again (TOH2). For all participants, the smallest disk in TOH1 was the lightest and could be lifted with one hand. For some participants (no-switch group), the disks in TOH2 were identical to those in TOH1. For others (switch group), the disk weights in TOH2 were reversed (so that the smallest disk was the heaviest and could not be lifted with one hand). The more the switch group’s gestures depicted moving the smallest disk one-handed, the worse they performed on TOH2. This was not true for the no-switch group, nor for the switch group in Experiment 2, who skipped the explanation step and did not gesture. Gesturing grounds people’s mental representations in action. When gestures are no longer compatible with the action constraints of a task, problem solving suffers.


Goldin-Meadow, S. & Beilock, S. L. Action's influence on thought: The case of gesture. Perspectives on Psychological Science, 2010, 5, 664-674. PDF

Recent research has shown that people’s actions can influence how they think. A separate body of research has shown that the gestures people produce when they speak can also influence how they think. In this article, we bring these two literatures together to explore whether gesture has an effect on thinking by virtue of its ability to reflect real-world actions. We first argue that gestures contain detailed perceptual-motor information about the actions they represent, information often not found in the speech that accompanies the gestures. We then show that the action features in gesture do not just reflect the gesturer’s thinking––they can feed back and alter that thinking. Gesture actively brings action into a speaker’s mental representations, and those mental representations then affect behavior––at times more powerfully than do the actions on which the gestures are based. Gesture thus has the potential to serve as a unique bridge between action and abstract thought.


2009

Rowe, M.L., Levine, S. C., Fisher, J., & Goldin-Meadow, S. Does linguistic input play the same role in language learning for children with and without early brain injury? Developmental Psychology, 2009, 45, 90-102. PDF

Children with unilateral pre- or perinatal brain injury (BI) show remarkable plasticity for language learning. Previous work highlights the important role that lesion characteristics play in explaining individual variation in plasticity in the language development of children with BI. The current study examines whether the linguistic input that children with BI receive from their caregivers also contributes to this early plasticity, and whether linguistic input plays a similar role in children with BI as it does in typically developing (TD) children. Growth in vocabulary and syntactic production is modeled for 80 children (53 TD, 27 BI) between 14 and 46 months. Findings indicate that caregiver input is an equally potent predictor of syntactic growth for children with BI as for TD children. Controlling for input, lesion characteristics (lesion size, type, seizure history) also affect the language trajectories of children with BI. Thus, findings illustrate how both variability in the environment (linguistic input) and variability in the organism (lesion characteristics) work together to contribute to plasticity in language learning.


So, W.-C., Kita, S., & Goldin-Meadow, S. Using the hands to identify who does what to whom: Gesture and speech go hand-in-hand. Cognitive Science, 2009, 33, 115-125. PDF

In order to produce a coherent narrative, speakers must identify the characters in the tale so that listeners can figure out who is doing what to whom. This paper explores whether speakers use gesture, as well as speech, for this purpose. English speakers were shown vignettes of two stories and asked to retell the stories to an experimenter. Their speech and gestures were transcribed and coded for referent identification. A gesture was considered to identify a referent if it was produced in the same location as the previous gesture for that referent. We found that speakers frequently used gesture location to identify referents. Interestingly, however, they used gesture most often to identify referents that were also uniquely specified in speech. Lexical specificity in referential expressions in speech thus appears to go hand-in-hand with specification in referential expressions in gesture.


Özçalişkan, S. & Goldin-Meadow, S. When gesture-speech combinations do and do not index linguistic change. Language and Cognitive Processes, 2009, 24(2), 190-217. PDF

At the one-word stage children use gesture to supplement their speech ('eat' + point at cookie), and the onset of such supplementary gesture-speech combinations predicts the onset of two-word speech ('eat cookie'). Gesture does signal a child's readiness to produce two-word constructions. The question we ask here is what happens when the child begins to flesh out these early skeletal two-word constructions with additional arguments. One possibility is that gesture continues to be a forerunner of linguistic change as children flesh out their skeletal constructions by adding arguments. Alternatively, after serving as an opening wedge into language, gesture could cease its role as a forerunner of linguistic change. Our analyses of 40 children - from 14 to 34 months - showed that children relied on gesture to produce the first instance of a variety of constructions. However, once each construction was established in their repertoire, the children did not use gesture to flesh out the construction. Gesture thus acts as a harbinger of linguistic steps only when those steps involve new constructions, not when the steps merely flesh out existing constructions.


Goldin-Meadow, S., Cook, S. W., & Mitchell, Z. A. Gesturing gives children new ideas about math. Psychological Science, 2009, 20(3), 267-272. PDF

How does gesturing help children learn? Gesturing might encourage children to extract meaning implicit in their hand movements. If so, children should be sensitive to the particular movements they produce and learn accordingly. Alternatively, all that may matter is that children move their hands. If so, they should learn regardless of which movements they produce. To investigate these alternatives, we manipulated gesturing during a math lesson. We found that children required to produce correct gestures learned more than children required to produce partially correct gestures, who learned more than children required to produce no gestures. This effect was mediated by whether children took information conveyed solely in their gestures and added it to their speech. The findings suggest that body movements are involved not only in processing old ideas, but also in creating new ones. We may be able to lay foundations for new knowledge simply by telling learners how to move their hands.


Rowe, M.L., & Goldin-Meadow, S. Differences in early gesture explain SES disparities in child vocabulary size at school entry. Science, 2009, 323, 951-953. PDF

Children from low-socioeconomic status (SES) families, on average, arrive at school with smaller vocabularies than children from high-SES families. In an effort to identify precursors to, and possible remedies for, this inequality, we videotaped 50 children from families with a range of different SES interacting with parents at 14 months and assessed their vocabulary skills at 54 months. We found that children from high-SES families frequently used gesture to communicate at 14 months, a relation that was explained by parent gesture use (with speech controlled). In turn, the fact that children from high-SES families have large vocabularies at 54 months was explained by children's gesture use at 14 months. Thus, differences in early gesture help to explain the disparities in vocabulary that children bring with them to school.


Rowe, M.L, & Goldin-Meadow, S. Early gesture selectively predicts later language learning. Developmental Science, 2009, 12, 182-187. PDF

The gestures children produce predict the early stages of spoken language development. Here we ask whether gesture is a global predictor of language learning, or whether particular gestures predict particular language outcomes. We observed 52 children interacting with their caregivers at home, and found that gesture use at 18 months selectively predicted lexical versus syntactic skills at 42 months, even with early child speech controlled. Specifically, number of different meanings conveyed in gesture at 18 months predicted vocabulary at 42 months, but number of gesture+speech combinations did not. In contrast, number of gesture+speech combinations, particularly those conveying sentence-like ideas, produced at 18 months predicted sentence complexity at 42 months, but meanings conveyed in gesture did not. We can thus predict particular milestones in vocabulary and sentence complexity at age 3 1/2 by watching how children move their hands two years earlier.


Goldin-Meadow, S. How gesture promotes learning throughout childhood. Child Development Perspectives, 2009, 3, 106-111. PDF

The gestures children use when they talk often reveal knowledge that they do not express in speech. Gesture is particularly likely to reveal these unspoken thoughts when children are on the verge of learning a new task. It thus reflects knowledge in child learners. But gesture can also play a role in changing the child's knowledge, indirectly through its effects on the child's communicative environment and directly through its effects on the child's cognitive state. Because gesture reflects thought and is an early marker of change, it may be possible to use it diagnostically. Gesture (or its lack) may be the first sign of future developmental difficulty. And because gesture can change thought, it may prove to be useful in the home, the classroom, and the clinic as a way to alter the pace, and perhaps the course, of learning and development.


Özçalişkan, S., Goldin-Meadow, S., Gentner, D., & Mylander, C. Does language about similarity play a role in fostering similarity comparison in children? Cognition, 2009, 112, 217-228. PDF

Commenting on perceptual similarities between objects stands out as an important linguistic achievement, one that may pave the way towards noticing and commenting on more abstract relational commonalities between objects. To explore whether having a conventional linguistic system is necessary for children to comment on different types of similarity comparisons, we observed four children who had not been exposed to usable linguistic input - deaf children whose hearing losses prevented them from learning spoken language and whose hearing parents had not exposed them to sign language. These children developed gesture systems that have language-like structure at many different levels. Here we ask whether the deaf children used their gestures to comment on similarity relations and, if so, which types of relations they expressed. We found that all four deaf children were able to use their gestures to express similarity comparisons (point to cat + point to tiger) resembling those conveyed by 40 hearing children in early gesture + speech combinations (cat + point to tiger). However, the two groups diverged at later ages. Hearing children, after acquiring the word like, shifted from primarily expressing global similarity (as in cat/tiger) to primarily expressing single-property similarity (as in crayon is brown like my hair). In contrast, the deaf children, lacking an explicit term for similarity, continued to primarily express global similarity. The findings underscore the robustness of similarity comparisons in human communication, but also highlight the importance of conventional terms for comparison as likely contributors to routinely expressing more focused similarity relations.


Skipper, J.I., Goldin-Meadow, S., Nusbaum, H., & Small, S. Gestures orchestrate brain networks for language understanding. Current Biology, 2009, 19(8), 661-667. PDF

Although the linguistic structure of speech provides valuable communicative information, nonverbal behaviors can offer additional, often disambiguating cues. In particular, being able to see the face and hand movements of a speaker facilitates language comprehension. But how does the brain derive meaningful information from these movements? Mouth movements provide information about phonological aspects of speech. In contrast, cospeech gestures display semantic information relevant to the intended message. We show that when language comprehension is accompanied by observable face movements, there is strong functional connectivity between areas of cortex involved in motor planning and production and posterior areas thought to mediate phonological aspects of speech perception. These areas are not tuned to hand and arm movements that are not meaningful. Results suggest that when gestures accompany speech, the motor system works with language comprehension areas to determine the meaning of those gestures. Results also suggest that the cortical networks underlying language comprehension, rather than being fixed, are dynamically organized by the type of contextual information available to listeners during face-to-face communication.


Dick, A. S., Goldin-Meadow, S., Hasson, U., Skipper, J., & Small, S. L. Co-speech gestures influence neural responses in brain regions associated with semantic processing. Human Brain Mapping, 2009, 30(11), 3509-3526. PDF

Everyday communication is accompanied by visual information from several sources, including co-speech gestures, which provide semantic information listeners use to help disambiguate the speaker's message. Using fMRI, we examined how gestures influence neural activity in brain regions associated with processing semantic information. The BOLD response was recorded while participants listened to stories under three audiovisual conditions and one auditory-only (speech alone) condition. In the first audiovisual condition, the storyteller produced gestures that naturally accompany speech. In the second, the storyteller made semantically unrelated hand movements. In the third, the storyteller kept her hands still. In addition to inferior parietal and posterior superior and middle temporal regions, bilateral posterior superior temporal sulcus and left anterior inferior frontal gyrus responded more strongly to speech when it was further accompanied by gesture, regardless of the semantic relation to speech. However, the right inferior frontal gyrus was sensitive to the semantic import of the hand movements, demonstrating more activity when hand movements were semantically unrelated to the accompanying speech. These findings show that perceiving hand movements during speech modulates the distributed pattern of neural activation involved in both biological motion perception and discourse comprehension, suggesting listeners attempt to find meaning, not only in the words speakers produce, but also in the hand movements that accompany speech.


2008

Cook, S. W., Mitchell, Z., & Goldin-Meadow, S. Gesture makes learning last. Cognition, 2008, 106, 1047-1058. PDF

The gestures children spontaneously produce when explaining a task predict whether they will subsequently learn that task. Why? Gesture might simply reflect a child's readiness to learn a particular task. Alternatively, gesture might itself play a role in learning the task. To investigate these alternatives, we experimentally manipulated children's gesture during instruction in a new mathematical concept. We found that requiring children to gesture while learning the new concept helped them retain the knowledge they had gained during instruction. In contrast, requiring children to speak, but not gesture, while learning the concept had no effect on solidifying learning. Gesturing can thus play a causal role in learning, perhaps by giving learners an alternative, embodied way of representing new ideas. We may be able to improve children's learning just by encouraging them to move their hands.


Rowe, M. L., Ozcaliskan, S., & Goldin-Meadow, S. Learning words by hand: Gesture's role in predicting vocabulary development. First Language, 2008, 28, 182-199. PDF

Children vary widely in how quickly their vocabularies grow. Can looking at early gesture use in children and parents help us predict this variability? We videotaped 53 English-speaking parent-child dyads in their homes during their daily activities for 90 minutes every four months between child age 14 and 34 months. At 42 months, children were given the Peabody Picture Vocabulary Test (PPVT). We found that child gesture use at 14 months was a significant predictor of vocabulary size at 42 months, above and beyond the effects of parent and child word use at 14 months. Parent gesture use at 14 months was not directly related to vocabulary development, but did relate to child gesture use at 14 months which, in turn, predicted child vocabulary. These relations hold even when background factors such as socio-economic status are controlled. The findings underscore the importance of examining early gesture when predicting child vocabulary development.


Iverson, J. M., Capirci, O., Volterra, V., & Goldin-Meadow, S. Learning to talk in a gesture-rich world: Early communication in Italian vs. American children. First Language, 2008, 28, 164 - 181. PDF

Italian children are immersed in a gesture-rich culture. Given the large gesture repertoire of Italian adults, young Italian children might be expected to develop a larger inventory of gestures than American children. If so, do these gestures impact the course of language learning? We examined gesture and speech production in Italian and US children between the onset of first words and the onset of two-word combinations. We found differences in the size of the gesture repertoires produced by the Italian vs. the American children, differences that were inversely related to the size of the children's spoken vocabularies. Despite these differences in gesture vocabulary, in both cultures we found that gesture + speech combinations reliably predicted the onset of two-word combinations, underscoring the robustness of gesture as a harbinger of linguistic development.


Goldin-Meadow, S., So, W.-C., Ozyurek, A., & Mylander, C. The natural order of events: How speakers of different languages represent events nonverbally. Proceedings of the National Academy of Sciences, 2008, 105(27), 9163-9168.  PDF

To test whether the language we speak influences our behavior even when we are not speaking, we asked speakers of four languages differing in their predominant word orders (English, Turkish, Spanish, and Chinese) to perform two nonverbal tasks: a communicative task (describing an event by using gesture without speech) and a noncommunicative task (reconstructing an event with pictures). We found that the word orders speakers used in their everyday speech did not influence their nonverbal behavior. Surprisingly, speakers of all four languages used the same order on both nonverbal tasks. This order, actor-patient-act, is analogous to the subject-object-verb pattern found in many languages of the world and, importantly, in newly developing gestural languages. The findings provide evidence for a natural order that we impose on events when describing and reconstructing them nonverbally and exploit when constructing language anew.


Ping, R. & Goldin-Meadow, S. Hands in the air: Using ungrounded iconic gestures to teach children conservation of quantity. Developmental Psychology, 2008, 44(5), 1277. PDF

Including gesture in instruction facilitates learning. Why? One possibility is that gesture points out objects in the immediate context and thus helps ground the words learners hear in the world they see. Previous work on gesture's role in instruction has used gestures that either point to or trace paths on objects, thus providing support for this hypothesis. The experiments described here investigate the possibility that gesture helps children learn even when it is not produced in relation to an object but is instead produced "in the air." Children were given instruction in Piagetian conservation problems with or without gesture and with or without concrete objects. The results indicate that children given instruction with speech and gesture learned more about conservation than children given instruction with speech alone, whether or not objects were present during instruction. Gesture in instruction can thus help learners learn even when those gestures do not direct attention to visible objects, suggesting that gesture can do more for learners than simply ground arbitrary, symbolic language in the physical, observable world.


2007

Broaders, S., Cook, S. W., Mitchell, Z., & Goldin-Meadow, S. Making children gesture reveals implicit knowledge and leads to learning. Journal of Experimental Psychology: General, 2007, Vol. 136, No. 4, 539-550. PDF

Speakers routinely gesture with their hands when they talk, and those gestures often convey information not found anywhere in their speech. This information is typically not consciously accessible, yet it provides an early sign that the speaker is ready to learn a particular task (S. Goldin-Meadow, 2003). In this sense, the unwitting gestures that speakers produce reveal their implicit knowledge. But what if a learner was forced to gesture? Would those elicited gestures also reveal implicit knowledge and, in so doing, enhance learning? To address these questions, the authors told children to gesture while explaining their solutions to novel math problems and examined the effect of this manipulation on the expression of implicit knowledge in gesture and learning. The authors found that, when told to gesture, children who were unable to solve the math problems often added new and correct problem-solving strategies, expressed only in gesture, to their repertoires. The authors also found that when these children were given instructions on the math problems later, they were more likely to succeed on the problems than children told not to gesture. Telling children to gesture thus encourages them to convey previously unexpressed, implicit ideas, which, in turn, makes them receptive to instruction that leads to learning.


Goldin-Meadow, S., Mylander, C., & Franklin, A. How children make language out of gesture: Morphological structure in gesture systems developed by American and Chinese deaf children. Cognitive Psychology, 2007, 55, 87-135. PDF

When children learn languages, they apply their language-learning skills to the linguistic input they receive. But what happens if children are not exposed to input from a conventional language? Do they engage their language-learning skills nonetheless, applying them to whatever unconventional input they have? We address this question by examining gesture systems created by four American and four Chinese deaf children. The children's profound hearing losses prevented them from learning spoken language, and their hearing parents had not exposed them to sign language. Nevertheless, the children in both cultures invented gesture systems that were structured at the morphological/word level. Interestingly, the differences between the children's systems were no bigger across cultures than within cultures. The children's morphemes could not be traced to their hearing mothers' gestures; however, they were built out of forms and meanings shared with their mothers. The findings suggest that children construct morphological structure out of the input that is handed to them, even if that input is not linguistic in form.


Goldin-Meadow, S., Goodrich, W., Sauer, E., & Iverson, J. Young children use their hands to tell their mothers what to say. Developmental Science, 2007, 10(6), 778-785. PDF

Children produce their first gestures before their first words, and their first gesture+word sentences before their first word+word sentences. These gestural accomplishments have been found not only to predate linguistic milestones, but also to predict them. Findings of this sort suggest that gesture itself might be playing a role in the language-learning process. But what role does it play? Children's gestures could elicit from their mothers the kinds of words and sentences that the children need to hear in order to take their next linguistic step. We examined maternal responses to the gestures and speech that 10 children produced during the one-word period. We found that all 10 mothers 'translated' their children's gestures into words, providing timely models for how one- and two-word ideas can be expressed in English. Gesture thus offers a mechanism by which children can point out their thoughts to mothers, who then calibrate their speech to those thoughts, and potentially facilitate language-learning.


Goldin-Meadow, S. Pointing sets the stage for learning language--and creating language. Child Development, 2007, 78, 741 - 745. PDF

Tomasello, Carpenter, and Liszkowski (2007) have argued that pointing gestures do much more than single out objects in the world. Pointing gestures function as part of a system of shared intentionality even at early stages of development. As such, pointing gestures form the platform on which linguistic communication rests, paving the way for later language learning. This commentary provides evidence that pointing gestures do establish a foundation for learning a language and, moreover, set the stage for creating a language.


Skipper, J. I., Goldin-Meadow, S., Nusbaum, H. C., & Small, S. L. Speech associated gestures, Broca's area, and the human mirror system. Brain and Language, 2007, 101, 260 - 277. PDF

Speech-associated gestures are hand and arm movements that not only convey semantic information to listeners but are themselves actions. Broca's area has been assumed to play an important role both in semantic retrieval or selection (as part of a language comprehension system) and in action recognition (as part of a "mirror" or "observation-execution matching" system). We asked whether the role that Broca's area plays in processing speech-associated gestures is consistent with the semantic retrieval/selection account (predicting relatively weak interactions between Broca's area and other cortical areas because the meaningful information that speech-associated gestures convey reduces semantic ambiguity and thus reduces the need for semantic retrieval/selection) or the action recognition account (predicting strong interactions between Broca's area and other cortical areas because speech-associated gestures are goal-directed actions that are "mirrored"). We compared the functional connectivity of Broca's area with other cortical areas when participants listened to stories while watching meaningful speech-associated gestures, speech-irrelevant self-grooming hand movements, or no hand movements. A network analysis of neuroimaging data showed that interactions involving Broca's area and other cortical areas were weakest when spoken language was accompanied by meaningful speech-associated gestures, and strongest when spoken language was accompanied by self-grooming hand movements or by no hand movements at all. Results are discussed with respect to the role that the human mirror system plays in processing speech-associated movements.


Goldin-Meadow, S. On inventing language. Daedalus: Journal of the American Academy of Arts & Sciences, Summer 2007, 100-104. PDF

[...] despite the schools' best efforts, many profoundly deaf children were unable to acquire spoken language (this was many years before cochlear implants came on the scene). The children combined gestures, which were themselves composed of parts (akin to morphemes in conventional sign languages), into sentencelike strings that were structured with grammatical rules for deletion and order. Segmentation and combination are at the heart of human language, and they formed the foundation of the deaf children's gesture systems. This small and self-contained community consequently offers a singular perspective on some classic questions in historical linguistics.


Goldin-Meadow, S. The challenge: Some properties of language can be learned without linguistic input. The Linguistic Review, 2007, 24, 417 - 421. PDF

Usage-based accounts of language-learning ought to predict that, in the absence of linguistic input, children will not communicate in language-like ways. But this prediction is not borne out by the data. Deaf children whose hearing losses prevent them from acquiring the spoken language that surrounds them, and whose hearing parents have not exposed them to a conventional sign language, invent gesture systems, called homesigns, that display many of the properties found in natural language. Children thus have biases to structure their communication in language-like ways, biases that reflect their cognitive skills. But why do the deaf children recruit this particular set of cognitive skills, and not others, to their homesign systems? In other words, what determines the biases children bring to language-learning? The answer is clearly not linguistic input.


2006

Goldin-Meadow, S. Talking and Thinking With Our Hands. Current Directions in Psychological Science, 2006, 15, 34 - 39. PDF

When people talk, they gesture. Typically, gesture is produced along with speech and forms a fully integrated system with that speech. However, under unusual circumstances, gesture can be produced on its own, without speech. In these instances, gesture must take over the full burden of communication usually shared by the two modalities. What happens to gesture in this very different context? One possibility is that there are no differences in the forms gesture takes with speech and without it--that gesture is gesture no matter what its function. But that is not what we find. When gesture is produced on its own and assumes the full burden of communication, it takes on a language-like form. In contrast, when gesture is produced in conjunction with speech and shares the burden of communication with that speech, it takes on an unsegmented, imagistic form, often conveying information not found in speech. As such, gesture sheds light on how people think and can even play a role in changing those thoughts. Gesture can thus be part of language or it can itself be language, altering its form to fit its function.


Cook, S. W., & Goldin-Meadow, S. The role of gesture in learning: Do children use their hands to change their minds? Journal of Cognition and Development, 2006, 7, 211-232. PDF

Adding gesture to spoken instructions makes those instructions more effective. The question we ask here is why. A group of 49 third and fourth grade children were given instruction in mathematical equivalence with gesture or without it. Children given instruction that included a correct problem-solving strategy in gesture were significantly more likely to produce that strategy in their own gestures during the same instruction period than children not exposed to the strategy in gesture. Those children were then significantly more likely to succeed on a posttest than children who did not produce the strategy in gesture. Gesture during instruction encourages children to produce gestures of their own, which, in turn, leads to learning. Children may be able to use their hands to change their minds.


Ehrlich, S. B., Levine, S., & Goldin-Meadow, S. The importance of gesture in children's spatial reasoning. Developmental Psychology, 2006, 42, 1259 - 1268. PDF

On average, men outperform women on mental rotation tasks. Even boys as young as 4 1/2 perform better than girls on simplified spatial transformation tasks. The goal of our study was to explore ways of improving 5-year-olds' performance on a spatial transformation task and to examine the strategies children use to solve this task. We found that boys performed better than girls before training and that both boys and girls improved with training, whether they were given explicit instruction or just practice. Regardless of training condition, the more children gestured about moving the pieces when asked to explain how they solved the spatial transformation task, the better they performed on the task, with boys gesturing about movement significantly more (and performing better) than girls. Gesture thus provides useful information about children's spatial strategies, raising the possibility that gesture training may be particularly effective in improving children's mental rotation skills.


2005

Singer, M. A., & Goldin-Meadow, S. Children learn when their teacher's gestures and speech differ. Psychological Science, 2005, 16, 85-89. PDF

Teachers gesture when they teach, and those gestures do not always convey the same information as their speech. Gesture thus offers learners a second message. To determine whether learners take advantage of this offer, we gave 160 children in the third and fourth grades instruction in mathematical equivalence. Children were taught either one or two problem-solving strategies in speech accompanied by no gesture, gesture conveying the same strategy, or gesture conveying a different strategy. The children were likely to profit from instruction with gesture, but only when it conveyed a different strategy than speech did. Moreover, two strategies were effective in promoting learning only when the second strategy was taught in gesture, not speech. Gesture thus has an active hand in learning.


Goldin-Meadow, S. & Wagner, S. M. How our hands help us learn. Trends in Cognitive Sciences, 2005, 9, 234-241. PDF

When people talk they gesture, and those gestures often reflect thoughts not expressed in their words. In this sense, gesture and the speech it accompanies can mismatch. Gesture-speech 'mismatches' are found when learners are on the verge of making progress on a task - when they are ready to learn. Moreover, mismatches provide insight into the mental processes that characterize learners when in this transitional state. Gesture is not just handwaving - it reflects how we think. However, evidence is mounting that gesture goes beyond reflecting our thoughts and can have a hand in changing those thoughts. We consider two ways in which gesture could change the course of learning: indirectly by influencing learning environments or directly by influencing learners themselves.


Iverson, J.M., & Goldin-Meadow, S. Gesture paves the way for language development. Psychological Science, 2005, 16, 367-371. PDF

In development, children often use gesture to communicate before they use words. The question is whether these gestures merely precede language development or are fundamentally tied to it. We examined 10 children making the transition from single words to two-word combinations and found that gesture had a tight relation to the children's lexical and syntactic development. First, a great many of the lexical items that each child produced initially in gesture later moved to that child's verbal lexicon. Second, children who were first to produce gesture-plus-word combinations conveying two elements in a proposition (point at bird and say "nap") were also first to produce two-word combinations ("bird nap"). Changes in gesture thus not only predate but also predict changes in language, suggesting that early gesture may be paving the way for future developments in language.


Ozcaliskan, S. & Goldin-Meadow, S. Gesture is at the cutting edge of early language development. Cognition, 2005, 96, B101-113. PDF

Children who produce one word at a time often use gesture to supplement their speech, turning a single word into an utterance that conveys a sentence-like meaning ('eat'+ point at cookie). Interestingly, the age at which children first produce supplementary gesture-speech combinations of this sort reliably predicts the age at which they first produce two-word utterances. Gesture thus serves as a signal that a child will soon be ready to begin producing multi-word sentences. The question is what happens next. Gesture could continue to expand a child's communicative repertoire over development, combining with words to convey increasingly complex ideas. Alternatively, after serving as an opening wedge into language, gesture could cease its role as a forerunner of linguistic change. We addressed this question in a sample of 40 typically developing children, each observed at 14, 18, and 22 months. The number of supplementary gesture-speech combinations the children produced increased significantly from 14 to 22 months. More importantly, the types of supplementary combinations the children produced changed over time and presaged changes in their speech. Children produced three distinct constructions across the two modalities several months before these same constructions appeared entirely within speech. Gesture thus continues to be at the cutting edge of early language development, providing stepping-stones to increasingly complex linguistic constructions.


Goldin-Meadow, S., Gelman, S., & Mylander, C. Expressing generic concepts with and without a language model. Cognition, 2005, 96, 109-126. PDF

Utterances expressing generic kinds ("birds fly") highlight qualities of a category that are stable and enduring, and thus provide insight into conceptual organization. To explore the role that linguistic input plays in children's production of generic nouns, we observed American and Chinese deaf children whose hearing losses prevented them from learning speech and whose hearing parents had not exposed them to sign. These children develop gesture systems that have language-like structure at many different levels. The specific question we addressed in this study was whether the gesture systems, developed without input from a conventional language model, would contain generics. We found that the deaf children used generics in the gestures they invented, and did so at about the same rate as hearing children growing up in the same cultures and learning English or Mandarin. Moreover, the deaf children produced more generics for animals than for artifacts, a bias found previously in adult English- and Mandarin-speakers and also found in both groups of hearing children in our current study. This bias has been hypothesized to reflect the different conceptual organizations underlying animal and artifact categories. Our results suggest that not only is a language model not necessary for young children to produce generic utterances, but the bias to produce more generics for animals than artifacts also does not require linguistic input to develop.


Goldin-Meadow, S. The two faces of gesture: Language and thought. Gesture, 2005, 5, 241-257.

Gesture is typically produced with speech, forming a fully integrated system with that speech. However, under unusual circumstances, gesture can be produced completely on its own--without speech. In these instances, gesture takes over the full burden of communication usually shared by the two modalities. What happens to gesture in these two very different contexts? One possibility is that there are no differences in the forms gesture takes in these two contexts--that gesture is gesture no matter what its function. But, in fact, that's not what we find. When gesture is produced on its own, it assumes the full burden of communication and takes on a language-like form, with sentence-level ordering rules, word-level paradigms, and grammatical categories. In contrast, when gesture is produced in conjunction with speech, it shares the burden of communication with speech and takes on a global imagistic form, often conveying information not found anywhere in speech. Gesture thus changes its form according to its function.


So, C., Coppola, M., Licciardello, V., & Goldin-Meadow, S. The seeds of spatial grammar in the manual modality. Cognitive Science, 2005, 29, 1029-1043. PDF

Sign languages modulate the production of signs in space and use this spatial modulation to refer back to entities--to maintain coreference. We ask here whether spatial modulation is so fundamental to language in the manual modality that it will be invented by individuals asked to create gestures on the spot. English speakers were asked to describe vignettes under 2 conditions: using gesture without speech, and using speech with spontaneous gestures. When using gesture alone, adults placed gestures for particular entities in non-neutral locations and then used those locations to refer back to the entities. When using gesture plus speech, adults also produced gestures in non-neutral locations but used the locations coreferentially far less often. When gesture is forced to take on the full burden of communication, it exploits space for coreference. Coreference thus appears to be a resilient property of language, likely to emerge in communication systems no matter how simple.


Goldin-Meadow, S. What language creation in the manual modality tells us about the foundations of language. Linguistic Review, 2005, 22, 199-225. PDF

Universal Grammar offers a set of hypotheses about the biases children bring to language-learning. But testing these hypotheses is difficult, particularly if we look only at language-learning under typical circumstances. Children are influenced by the linguistic input to which they are exposed at the earliest stages of language-learning. Their biases will therefore be obscured by the input they receive. A clearer view of the child's preparation for language comes from observing children who are not exposed to linguistic input. Deaf children whose hearing losses prevent them from learning the spoken language that surrounds them, and whose hearing parents have not yet exposed them to sign language, nevertheless communicate with the hearing individuals in their worlds and use gestures, called homesigns, to do so. This article explores which properties of Universal Grammar can be found in the deaf children's homesign systems, and thus tests linguistic theory against acquisition data.


Ozcaliskan, S. & Goldin-Meadow, S. Do parents lead their children by the hand? Journal of Child Language, 2005, 32, 481 - 505. PDF

The types of gesture+speech combinations children produce during the early stages of language development change over time. This change, in turn, predicts the onset of two-word speech and thus might reflect a cognitive transition that the child is undergoing. An alternative, however, is that the change merely reflects changes in the types of gesture+speech combinations that their caregivers produce. To explore this possibility, we videotaped 40 American child–caregiver dyads in their homes for 90 minutes when the children were 1;2, 1;6, and 1;10. Each gesture was classified according to type (deictic, conventional, representational) and the relation it held to speech (reinforcing, disambiguating, supplementary). Children and their caregivers produced the same types of gestures and in approximately the same distribution. However, the children differed from their caregivers in the way they used gesture in relation to speech. Over time, children produced many more reinforcing (bike+point at bike), disambiguating (that one+point at bike), and supplementary combinations (ride+point at bike). In contrast, the frequency and distribution of caregivers' gesture+speech combinations remained constant over time. Thus, the changing relation between gesture and speech observed in the children cannot be traced back to the gestural input the children receive. Rather, it appears to reflect changes in the children's own skills, illustrating once again gesture's ability to shed light on developing cognitive and linguistic processes.


2004

Goldin-Meadow, S. U-shaped changes are in the eye of the beholder. Journal of Cognition and Development, 2004, 5, 109-11. PDF

Discusses the three articles in this volume which emphasize that the initial and end points of a U are never really identical. According to the author, the question is--how identical need they be for us to call the developmental trajectory U-shaped? The virtue in thinking about a developmental path in U-shaped terms is that we focus our attention on possible reorganizations and mechanisms of change. Recurrence, regression, and U's are to a large extent a product of the level at which we choose to describe developmental change. They are, in this sense, in the eye of the beholder and may be more common than we think.


Wagner, S., Nusbaum, H., & Goldin-Meadow, S. Probing the mental representation of gesture: Is handwaving spatial? Journal of Memory and Language, 2004, 50, 395-407. PDF

What type of mental representation underlies the gestures that accompany speech? We used a dual-task paradigm to compare the demands gesturing makes on visuospatial and verbal working memories. Participants in one group remembered a string of letters (verbal working memory group) and those in a second group remembered a visual grid pattern (visuospatial working memory group) while explaining math problems. If gesture production is mediated by visuospatial representation, gesturing should interfere more with performance on the concurrent visuospatial task than the concurrent verbal task. We found, however, that participants in both groups remembered significantly more items when they gestured than when they did not gesture. Moreover, the number of items remembered depended on the meaning conveyed by gesture. When gesture conveyed the same propositional information as speech, participants remembered more items than when it conveyed different information. Thus, in contrast to simple handwaving, the demands that gesture makes on working memory appear to be propositional rather than visuospatial.


Goldin-Meadow, S. Gesture's role in the learning process. Theory into Practice, 2004, 43, 314-321. PDF

When children explain their answers to a problem, they convey their thoughts not only in speech but also in the gestures that accompany that speech. Teachers, when explaining problems to a child, also convey information in both speech and gesture. Thus, there is an undercurrent of conversation that takes place in gesture alongside the acknowledged conversation in speech. This article shows that these gestures can play a crucial, although typically unacknowledged, role in teaching and learning.


2003

Goldin-Meadow, S. & Singer, M. A. From children's hands to adults' ears: Gesture's role in the learning process. Developmental Psychology, 2003, 39(3), 509-520. PDF

Children can express thoughts in gesture that they do not express in speech--they produce gesture-speech mismatches. Moreover, children who produce mismatches on a given task are particularly ready to learn that task. Gesture, then, is a tool that researchers can use to predict who will profit from instruction. But is gesture also useful to adults who must decide how to instruct a particular child? We asked 8 adults to instruct 38 third- and fourth-grade children individually in a math problem. We found that the adults offered more variable instruction to children who produced mismatches than to children who produced no mismatches--more different types of instructional strategies and more instructions that contained two different strategies, one in speech and the other in gesture. The children thus appeared to be shaping their own learning environments just by moving their hands. Gesture not only reflects a child's understanding but can play a role in eliciting input that could shape that understanding. As such, it may be part of the mechanism of cognitive change.

2002

Kelly, S. D., Singer, M. A., Hicks, J. & Goldin-Meadow, S. A helping hand in assessing children's knowledge: Instructing adults to attend to gesture. Cognition and Instruction, 2002, 20, 1-26. PDF

The spontaneous hand gestures that accompany children's explanations of concepts have been used by trained experimenters to gain insight into children's knowledge. In this article, 3 experiments tested whether it is possible to teach adults who are not trained investigators to comprehend information conveyed through children's hand gestures. In Experiment 1, we used a questionnaire to explore whether adults benefit from gesture instruction when making assessments of young children's knowledge of conservation problems. In Experiment 2, we used a similar questionnaire, but asked adults to make assessments of older children's mathematical knowledge. Experiment 3 also concentrated on math assessments, but used a free-recall paradigm to test the extent of the adult's understanding of the child's knowledge. Taken together, the results of the experiments suggest that instructing adults to attend to gesture enhances their assessment of children's knowledge at multiple ages and across multiple domains.


Goldin-Meadow, S. Constructing communication by hand. Cognitive Development, 2002, 17, 1385-1405. PDF

I focus here on how children construct communication, looking in particular at places where the language model of the community exerts less influence on the child. I first describe the gesture systems constructed by deaf children who are unable to acquire speech and have not been exposed to a sign language. These children are constructing their communication systems in large part without benefit of conventional linguistic input. As a result, the children's gestures reflect skills that they themselves bring to the language-learning situation, skills that interact with linguistic input when that input is available. I then describe the gestures that hearing children produce when they talk. Gesture does not need to assume a language-like role for these children and indeed it does not. Nevertheless, the gestures these speaking children produce convey information and that information is often different from the information found in their talk. Gesture thus allows the children to reach beyond the confines of the language they are speaking. Both cases highlight the child's contribution to the communication process and provide unique opportunities to observe the child's skills as language-maker.


Gershkoff-Stowe, L., & Goldin-Meadow, S. Is there a natural order for expressing semantic relations? Cognitive Psychology, 2002, 45(3), 375-412. PDF

All languages rely to some extent on word order to signal relational information. Why? We address this question by exploring communicative and cognitive factors that could lead to a reliance on word order. In Study 1, adults were asked to describe scenes to another using their hands and not their mouths. The question was whether this home-made "language" would contain gesture sentences with consistent order. In addition, we asked whether reliance on order would be influenced by three communicative factors (whether the communication partner is permitted to give feedback; whether the information to be communicated is present in the context that recipient and gesturer share; whether the gesturer assumes the role of gesture receiver as well as gesture producer). We found that, not only was consistent ordering of semantic elements robust across the range of communication situations, but the same non-English order appeared in all contexts. Study 2 explored whether this non-English order is found only when a person attempts to share information with another. Adults were asked to reconstruct scenes in a non-communicative context using pictures drawn on transparencies. The adults picked up the pictures for their reconstructions in a consistent order, and that order was the same non-English order found in Study 1. Finding consistent ordering patterns in a non-communicative context suggests that word order is not driven solely by the demands of communicating information to another, but may reflect a more general property of human thought.


Zheng, M., & Goldin-Meadow, S. Thought before language: How deaf and hearing children express motion events across cultures. Cognition, 2002, 85, 145-175. PDF

Do children come to the language-learning situation with a predetermined set of ideas about motion events that they want to communicate? If so, is the expression of these ideas modified by exposure to a language model within a particular cultural context? We explored these questions by comparing the gestures produced by Chinese and American deaf children who had not been exposed to a usable conventional language model with the speech of hearing children learning Mandarin or English. We found that, even in the absence of any conventional language model, deaf children conveyed the central elements of a motion event in their communications. More surprisingly, deaf children growing up in an American culture used their gestures to express motion events in precisely the same ways as deaf children growing up in a Chinese culture. In contrast, hearing children in the two cultures expressed motion events differently, in accordance with the languages they were learning. The American children obeyed the patterns of English and rarely omitted words for figures or agents. The Chinese children had more flexibility as Mandarin permits (but does not demand) deletion. Interestingly, the Chinese hearing children's descriptions of motion events resembled the deaf children's descriptions more closely than did the American hearing children's. The thoughts that deaf children convey in their gestures thus may serve as the starting point and perhaps a default for all children as they begin the process of grammaticization - thoughts that have not yet been filtered through a language model.


Garber, P., & Goldin-Meadow, S. Gesture offers insight into problem-solving in adults and children. Cognitive Science, 2002, 26, 817-831. PDF

When asked to explain their solutions to a problem, both adults and children gesture as they talk: These gestures at times convey information that is not conveyed in speech and thus reveal thoughts that are distinct from those revealed in speech. In this study, the authors use the classic Tower of Hanoi puzzle to validate the claim that gesture and speech taken together can reflect the activation of two cognitive strategies within a single response. The Tower of Hanoi is a well-studied puzzle, known to be most efficiently solved by activating subroutines at theoretically defined choice points. When asked to explain how they solved the Tower of Hanoi puzzle, both adults and children produced significantly more gesture-speech mismatches (explanations in which speech conveyed one path and gesture another) at these theoretically defined choice points than they produced at non-choice points. Even when the participants did not solve the problem efficiently, gesture could be used to indicate where the participants were deciding between alternative paths. Gesture can, thus, serve as a useful adjunct to speech when attempting to discover cognitive processes in problem-solving.


2001

Goldin-Meadow, S. & Mayberry, R. How do profoundly deaf children learn to read? Learning Disabilities Research and Practice (Special Issue: Emergent and early literacy: Current status and research directions), 2001, 16, 221-228. PDF

Reading requires two related, but separable, capabilities: (1) familiarity with a language, and (2) understanding the mapping between that language and the printed word (Chamberlain & Mayberry, 2000; Hoover & Gough, 1990). Children who are profoundly deaf are disadvantaged on both counts. Not surprisingly, then, reading is difficult for profoundly deaf children. But some deaf children do manage to read fluently. How? Are they simply the smartest of the crop, or do they have some strategy, or circumstance, that facilitates linking the written code with language? A priori one might guess that knowing American Sign Language (ASL) would interfere with learning to read English simply because ASL does not map in any systematic way onto English. However, recent research has suggested that individuals with good signing skills are not worse, and may even be better, readers than individuals with poor signing skills (Chamberlain & Mayberry, 2000). Thus, knowing a language (even if it is not the language captured in print) appears to facilitate learning to read. Nonetheless, skill in signing does not guarantee skill in reading—reading must be taught. The next frontier for reading research in deaf education is to understand how deaf readers map their knowledge of sign language onto print, and how instruction can best be used to turn signers into readers.


Goldin-Meadow, S., Nusbaum, H., Kelly, S., & Wagner, S. Explaining math: Gesturing lightens the load. Psychological Science, 2001, 12, 516-522. PDF

Why is it that people cannot keep their hands still when they talk? One reason may be that gesturing actually lightens cognitive load while a person is thinking of what to say. We asked adults and children to remember a list of letters or words while explaining how they solved a math problem. Both groups remembered significantly more items when they gestured during their math explanations than when they did not gesture. Gesturing appeared to save the speakers' cognitive resources on the explanation task, permitting the speakers to allocate more resources to the memory task. It is widely accepted that gesturing reflects a speaker's cognitive state, but our observations suggest that, by reducing cognitive load, gesturing may also play a role in shaping that state.


Iverson, J. M., & Goldin-Meadow, S. The resilience of gesture in talk: Gesture in blind speakers and listeners. Developmental Science, 2001, 4, 416-422. PDF

Spontaneous gesture frequently accompanies speech. The question is why. In these studies, we tested two non-mutually exclusive possibilities. First, speakers may gesture simply because they see others gesture and learn from this model to move their hands as they talk. We tested this hypothesis by examining spontaneous communication in congenitally blind children and adolescents. Second, speakers may gesture because they recognize that gestures can be useful to the listener. We tested this hypothesis by examining whether speakers gesture even when communicating with a blind listener who is unable to profit from the information that the hands convey. We found that congenitally blind speakers, who had never seen gestures, nevertheless gestured as they spoke, conveying the same information and producing the same range of gesture forms as sighted speakers. Moreover, blind speakers gestured even when interacting with another blind individual who could not have benefited from the information contained in those gestures. These findings underscore the robustness of gesture in talk and suggest that the gestures that co-occur with speech may serve a function for the speaker as well as for the listener.


Phillips, S. B. V. D., Goldin-Meadow, S., & Miller, P. J. Enacting stories, seeing worlds: Similarities and differences in the cross-cultural narrative development of linguistically isolated deaf children. Human Development, 2001, 44, 311-336. PDF

The stories that children hear not only offer them a model for how to tell stories, but they also serve as a window into their cultural worlds. What would happen if a child were unable to hear what surrounds them? Would such children have any sense that events can be narrated and, if so, would they narrate those events in a culturally appropriate manner? To explore this question, we examined children who did not have access to conventional language - deaf children whose profound hearing deficits prevented them from acquiring the language spoken around them, and whose hearing parents had not yet exposed them to a conventional sign language. We observed 8 deaf children of hearing parents in two cultures, 4 European-American children from either Chicago or Philadelphia, and 4 Taiwanese children from Taipei, all of whom invented gesture systems to communicate. All 8 children used their gestures to recount stories, and those gestured stories were of the same types, and of the same structure, as those told by hearing children. Moreover, the deaf children seemed to produce culturally specific narrations despite their lack of a verbal language model, suggesting that these particular messages are so central to the culture as to be instantiated in nonverbal as well as verbal practices.


2000

Goldin-Meadow, S. Beyond words: The importance of gesture to researchers and learners. Child Development (Special Issue: New Directions for Child Development in the Twenty-First Century), 2000, 71, 231-239. PDF

Gesture has privileged access to information that children know but do not say. As such, it can serve as an additional window to the mind of the developing child, one that researchers are only beginning to acknowledge. Gesture might, however, do more than merely reflect understanding--it may be involved in the process of cognitive change itself. This question will guide research on gesture as we enter the new millennium. Gesture might contribute to change through 2 mechanisms which are not mutually exclusive: (1) indirectly, by communicating unspoken aspects of the learner's cognitive state to potential agents of change (parents, teachers, siblings, friends); and (2) directly, by offering the learner a simpler way to express and explore ideas that may be difficult to think through in a verbal format, thus easing the learner's cognitive burden. As a result, the next decade may well offer evidence of gesture's dual potential as an illuminating tool for researchers and as a facilitator of cognitive growth for learners themselves.


Goldin-Meadow, S., & Saltzman, J. The cultural bounds of maternal accommodation: How Chinese and American mothers communicate with deaf and hearing children. Psychological Science, 2000, 11, 307-314. PDF

Children with special needs typically require family accommodation to those needs. We explore here the extent to which cultural forces shape the accommodations mothers make when communicating with young deaf children. Sixteen mother-child dyads (8 Chinese, 8 American) were videotaped at home. In each culture, 4 mothers interacted with their deaf children, and 4 interacted with their hearing children. None of the deaf children knew sign language, nor spoke at age level. We found that mothers adjusted their communicative behaviors to their deaf children, but in every case, those adjustments were calibrated to cultural norms. American mothers, for example, increased their use of gesture with deaf children but stopped far short of the Chinese range, despite the obvious potential benefits of gesturing to children who cannot hear. These findings provide the first cross-cultural demonstration that children are, first and foremost, inculcated into their cultures and, only within that framework, then treated as special cases.


Iverson, J. M., Tencer, H. L., Lany, J., & Goldin-Meadow, S. The relation between gesture and speech in congenitally blind and sighted language-learners. Journal of Nonverbal Behavior, 2000, 24, 105-130. PDF

The aim of this study was to explore the role of vision in early gesturing. The authors examined gesture development in 5 congenitally blind and 5 sighted toddlers videotaped longitudinally between the ages of 14 and 28 months in their homes while engaging in free play with a parent or experimenter. All of the blind children were found to produce at least some gestures during the one-word stage of language development. However, gesture production was relatively low among the blind children relative to their sighted peers. Moreover, although blind and sighted children produced the same overall set of gesture types, the distribution of gesture types across categories differed. In addition, blind children used gestures primarily to communicate about objects that were nearby, while sighted children used them for nearby as well as distally located objects. These findings suggest that gesture may play different roles in the language-learning process for sighted and blind children. Nevertheless, it is clear that gesture is a robust phenomenon of early communicative development, emerging even in the absence of experience with a visual model.


1999

Goldin-Meadow, S. & Sandhofer, C. M. Gesture conveys substantive information about a child's thoughts to ordinary listeners. Developmental Science, 1999, 2, 67-74. PDF

The gestures that spontaneously occur in communicative contexts have been shown to offer insight into a child's thoughts. The information gesture conveys about what is on a child's mind will, of course, only be accessible to a communication partner if that partner can interpret gesture. Adults were asked to observe a series of children who participated 'live' in a set of conservation tasks and gestured spontaneously while performing the tasks. Adults were able to glean substantive information from the children's gestures, information that was not found anywhere in their speech. 'Gesture-reading' did, however, have a cost - if gesture conveyed different information from speech, it hindered the listener's ability to identify the message in speech. Thus, ordinary listeners can and do extract information from a child's gestures, even gestures that are unedited and fleeting.


Alibali, M. W., Bassok, M., Solomon, K. O., Syc, S. E., & Goldin-Meadow, S. Illuminating mental representations through speech and gesture. Psychological Science, 1999, 10, 327-333. PDF

Can the gestures people produce when describing algebra word problems offer insight into their mental representations of the problems? Twenty adults were asked to describe six word problems about constant change, and then to talk aloud as they solved the problems. Two problems depicted continuous change, two depicted discrete change, and two depicted change that could be construed as either continuous or discrete. Participants’ verbal and gestured descriptions of the problems often incorporated information about manner of change. However, the information conveyed in gesture was not always the same as the information conveyed in speech. Participants’ problem representations, as expressed in speech and gesture, were systematically related to their problem solutions. When gesture reinforced the representation expressed in the spoken description, participants were very likely to solve the problem using a strategy compatible with that representation—much more likely than when gesture did not reinforce the spoken description. The results indicate that gesture and speech together provide a better index of mental representation than speech alone.


Goldin-Meadow, S., & Alibali, M. W. Does the hand reflect implicit knowledge? Yes and no. Behavioral and Brain Sciences, 1999, 22, 766-7. PDF

Gesture does not have a fixed position in the Dienes & Perner framework. Its status depends on the way knowledge is expressed. Knowledge reflected in gesture can be fully implicit (neither factuality nor predication is explicit) if the goal is simply to move a pointing hand to a target. Knowledge reflected in gesture can be explicit (both factuality and predication are explicit) if the goal is to indicate an object. However, gesture is not restricted to these two extreme positions. When gestures are unconscious accompaniments to speech and represent information that is distinct from speech, the knowledge they convey is factuality-implicit but predication-explicit.


Goldin-Meadow, S. The role of gesture in communication and thinking. Trends in Cognitive Science, 1999, 3, 419-429. PDF

People move their hands as they talk--they gesture. Gesturing is a robust phenomenon, found across cultures, ages, and tasks. Gesture is even found in individuals blind from birth. But what purpose, if any, does gesture serve? In this review, I begin by examining gesture when it stands on its own, substituting for speech and clearly serving a communicative function. When called upon to carry the full burden of communication, gesture assumes a language-like form, with structure at word and sentence levels. However, when produced along with speech, gesture assumes a different form--it becomes imagistic and analog. Despite its form, the gesture that accompanies speech also communicates. Trained coders can glean substantive information from gesture--information that is not always identical to that gleaned from speech. Gesture can thus serve as a research tool, shedding light on speakers' unspoken thoughts. The controversial question is whether gesture conveys information to listeners who are not trained to read it. Do spontaneous gestures communicate to ordinary listeners? Or might they be produced only for speakers themselves? I suggest these are not mutually exclusive functions--gesture serves as both a tool for communication for listeners, and a tool for thinking for speakers.


Goldin-Meadow, S., Kim, S., & Singer, M. What the teacher's hands tell the student's mind about math. Journal of Educational Psychology, 1999, 91, 720-730. PDF

Does nonverbal behavior contribute to cognitive as well as affective components of teaching? We examine here one type of nonverbal behavior: spontaneous gestures that accompany talk. Eight teachers were asked to instruct 49 children individually on mathematical equivalence as it applies to addition. All teachers used gesture to convey problem-solving strategies. The gestured strategies either reinforced (matched) or differed from (mismatched) strategies conveyed in speech. Children were more likely to reiterate teacher speech if it was accompanied by matching gesture than by no gesture at all and less likely to reiterate teacher speech if it was accompanied by mismatching gesture than by no gesture at all. Moreover, children were able to glean problem-solving strategies from the teachers' gestures and recast them into their own speech. Not only do teachers produce gestures that express task-relevant information, but their students take notice.


1998

Goldin-Meadow, S. & Mylander, C. Spontaneous sign systems created by deaf children in two cultures. Nature, 1998, 391, 279-281. PDF

Deaf children whose access to usable conventional linguistic input, signed or spoken, is severely limited nevertheless use gesture to communicate(1-3). These gestures resemble natural language in that they are structured at the level both of sentence(4) and of word(5). Although the inclination to use gesture may be traceable to the fact that the deaf children's hearing parents, like all speakers, gesture as they talk(6), the children themselves are responsible for introducing language-like structure into their gestures(7). We have explored the robustness of this phenomenon by observing deaf children of hearing parents in two cultures, an American and a Chinese culture, that differ in their child-rearing practices(8-12) and in the way gesture is used in relation to speech(13). The spontaneous sign systems developed in these cultures shared a number of structural similarities: patterned production and deletion of semantic elements in the surface structure of a sentence; patterned ordering of those elements within the sentence; and concatenation of propositions within a sentence. These striking similarities offer critical empirical input towards resolving the ongoing debate about the 'innateness' of language in human infants.


Garber, P., Alibali, M. W., & Goldin-Meadow, S. Knowledge conveyed in gesture is not tied to the hands. Child Development, 1998, 69, 75-84. PDF

Children frequently gesture when they explain what they know, and their gestures sometimes convey different information than their speech does. In this study, we investigate whether children's gestures convey knowledge that the children themselves can recognize in another context. We asked fourth-grade children to explain their solutions to a set of math problems and identified the solution procedures each child conveyed only in gesture (and not in speech) during the explanations. We then examined whether those procedures could be accessed by the same child on a rating task that did not involve gesture at all. Children rated solutions derived from procedures they conveyed uniquely in gesture higher than solutions derived from procedures they did not convey at all. Thus, gesture is indeed a vehicle through which children express their knowledge. The knowledge children express uniquely in gesture is accessible on other tasks, and in this sense, is not tied to the hands.


1997

Iverson, J. & Goldin-Meadow, S. What's communication got to do with it? Gesture in children blind from birth. Developmental Psychology, 1997, 33, 453-467. PDF

It is widely accepted that gesture can serve a communicative function. The purpose of this study was to explore gesture use in congenitally blind individuals who have never seen gesture and have no experience with its communicative function. Four children blind from birth were tested in 3 discourse situations (narrative, reasoning, and spatial directions) and compared with groups of sighted and blindfolded sighted children. Blind children produced gestures, although not in all of the contexts in which sighted children gestured, and the gestures they produced resembled those of sighted children in both form and content. Results suggest that gesture may serve a function for the speaker that is independent of its impact on the listener.


Alibali, M., Flevares, L., & Goldin-Meadow, S. Assessing knowledge conveyed in gesture: Do teachers have the upper hand? Journal of Educational Psychology, 1997, 89, 183-193. PDF

Children's gestures can reveal important information about their problem-solving strategies. This study investigated whether the information children express only in gesture is accessible to adults not trained in gesture coding. Twenty teachers and 20 undergraduates viewed videotaped vignettes of 12 children explaining their solutions to equations. Six children expressed the same strategy in speech and gesture, and 6 expressed different strategies. After each vignette, adults described the child's reasoning. For children who expressed different strategies in speech and gesture, both teachers and undergraduates frequently described strategies that children had not expressed in speech. These additional strategies could often be traced to the children's gestures. Sensitivity to gesture was comparable for teachers and undergraduates. Thus, even without training, adults glean information, not only from children's words but also from their hands.


Morford, J. P., & Goldin-Meadow, S. From here to there and now to then: The development of displaced reference in homesign and English. Child Development, 1997, 68, 420-435. PDF

An essential function of human language is the ability to refer to information that is spatially and temporally displaced from the location of the speaker and the listener, that is, displaced reference. This article describes the development of this function in 4 deaf children who were not exposed to a usable conventional language model and communicated via idiosyncratic gesture systems, called homesign, and in 18 hearing children who were acquiring English as a native language. Although the deaf children referred to the nonpresent much less frequently and at later ages than the hearing children, both groups followed a similar developmental path, adding increasingly abstract categories of displaced reference to their repertoires in the same sequence. Care-givers in both groups infrequently initiated displaced reference, except with respect to communication about past events. Despite the absence of a shared linguistic code, the deaf children succeeded in evoking the non-present by generating novel gestures, by modifying the context of conventional gestures, and by pragmatic means. The findings indicate that a conventional language model is not essential for children to be able to extend their communication beyond the here and now.


1996

Goldin-Meadow, S., McNeill, D., & Singleton, J. Silence is liberating: Removing the handcuffs on grammatical expression in the manual modality. Psychological Review, 1996, 103, 34-55. PDF

Grammatical properties are found in conventional sign languages of the deaf and in unconventional gesture systems created by deaf children lacking language models. However, they do not arise in spontaneous gestures produced along with speech. The authors propose a model explaining when the manual modality will assume grammatical properties and when it will not. The model argues that two grammatical features, segmentation and hierarchical combination, appear in all settings in which one human communicates symbolically with another. These properties are preferentially assumed by speech whenever words are spoken, constraining the manual modality to a global form. However, when the manual modality must carry the full burden of communication, it is freed from the global form it assumes when integrated with speech, only to be constrained by the task of symbolic communication to take on the grammatical properties of segmentation and hierarchical combination.


1995

Goldin-Meadow, S., Mylander, C., & Butcher, C. The resilience of combinatorial structure at the word level: Morphology in self-styled gesture systems. Cognition, 1995, 56, 195-262. PDF

Combinatorial structure at both word and sentence levels is widely recognized as an important feature of language-one that sets it apart from other forms of communication. The purpose of these studies is to determine whether deaf children who were not exposed to an accessible model of a conventional language would nevertheless incorporate word-level combinatorial structure into their self-styled communication systems. In previous work, we demonstrated that, despite their lack of conventional linguistic input, deaf children in these circumstances developed spontaneous gesture systems that were structured at the level of the sentence, with regularities identifiable across gestures in a sentence, akin to syntactic structure. The present study was undertaken to determine whether these gesture systems were structured at a second level, the level of the word or gesture - that is, were there regularities within a gesture, akin to morphological structure? Further, if intra-gesture regularities were found, how wide was the range of variability in their expression? Finally, from where did these intra-gesture regularities come? Specifically, were they derived from the gestures the hearing mothers produced in their attempt to interact with their deaf children?
We found that all of the deaf children produced gestures that could be characterized by paradigms of handshape and motion combinations that formed a comprehensive matrix for virtually all of the spontaneous gestures for each child. Moreover, the morphological systems that the children developed, although similar in many respects, were sufficiently different to suggest that the children had introduced relatively arbitrary distinctions into their systems. These differences could not be traced to the spontaneous gestures their hearing mothers produced, but seemed to be shaped by the early gestures that the children themselves created.
These findings suggest that combinatorial structure at more than one level is so fundamental to human language that it can be reinvented by children who do not have access to a culturally shared linguistic system. Apparently, combinatorial structure of this sort is not maintained as a universal property of language solely by historical tradition, but also by its centrality to the structure and function of language.


1994

Goldin-Meadow, S., Butcher, C., Mylander, C. & Dodge, M. Nouns and verbs in a self-styled gesture system: What's in a name? Cognitive Psychology, 1994, 27, 259-319. PDF

A distinction between nouns and verbs is not only universal to all natural languages but it also appears to be central to the structure and function of language. The purpose of this study was to determine whether a deaf child who was not exposed to a usable model of a conventional language would nevertheless incorporate into his self-styled communication system this apparently essential distinction. We found that the child initially maintained a distinction between nouns and verbs by using one set of gestures as nouns and a separate set as verbs. At age 3;3, the child began to use some of his gestures in both grammatical roles; however, he distinguished the two uses by altering the form of the gesture (akin to morphological marking) and its position in a gesture sentence (akin to syntactic marking). Such systematic marking was not found in the spontaneous gestures produced by the child's hearing mother who used gesture as an adjunct to speech rather than as a primary communication system. A distinction between nouns and verbs thus appears to be sufficiently fundamental to human language that it can be reinvented by a child who does not have access to a culturally shared linguistic system.


Goldin-Meadow, S. & Alibali, M. W. Do you have to be right to redescribe? Behavioral and Brain Sciences, 1994, 17, 718-719.

Karmiloff-Smith's developmental perspective forces us to recognize that there are many levels at which knowledge can be represented. We first offer empirical support for a distinction made on theoretical grounds between two such levels. We then argue that "redescription" onto a new level need not await success (as Karmiloff-Smith proposes), and that this modification of the theory has important implications for the role redescription plays in development.


1993

Goldin-Meadow, S., Nusbaum, H., Garber, P. & Church, R. B. Transitions in learning: Evidence for simultaneously activated strategies. Journal of Experimental Psychology: Human Perception and Performance, 1993, 19 (1), 92-107. PDF

Children in transition with respect to a concept, when asked to explain that concept, often convey one strategy in speech and a different one in gesture. Are both strategies activated when that child solves problems instantiating the concept? While solving a math task, discordant children (who produced different strategies in gesture and speech on a pretest) and concordant children (who produced a single strategy) were given a word recall task. All of the children solved the math task incorrectly. However, if discordant children are activating two strategies to arrive at these incorrect solutions, they should expend more effort on this task than concordant children, and consequently have less capacity left over for word-recall and perform less well on it. This prediction was confirmed, suggesting that the transitional state is characterized by dual representations, both of which are activated when attempting to explain or solve a problem.


Goldin-Meadow, S., Alibali, M. W., & Church, R. B. Transitions in concept acquisition: Using the hand to read the mind. Psychological Review, 1993, 100 (2), 279-297. PDF

Thoughts conveyed through gesture often differ from thoughts conveyed through speech. In this article, a model of the sources and consequences of such gesture-speech mismatches and their role during transitional periods in the acquisition of concepts is proposed. The model makes 2 major claims: (a) The transitional state is the source of gesture-speech mismatch. In gesture-speech mismatch, 2 beliefs are simultaneously expressed on the same problem-one in gesture and another in speech. This simultaneous activation of multiple beliefs characterizes the transitional knowledge state and creates gesture-speech mismatch. (b) Gesture-speech mismatch signals to the social world that a child is in a transitional state and is ready to learn. The child's spontaneous gestures index the zone of proximal development, thus providing a mechanism by which adults can calibrate their input to that child's level of understanding.


Alibali, M. W. & Goldin-Meadow, S. Gesture-speech mismatch and mechanisms of learning: What the hands reveal about a child's state of mind. Cognitive Psychology, 1993, 25, 468-523. PDF

Previous work has shown that, when asked to explain a concept they are acquiring, children often convey one procedure in speech and a different procedure in gesture. Such children, whom we label "discordant," have been shown to be in a transitional state in the sense that they are particularly receptive to instruction--indeed more receptive to instruction than "concordant" children, who convey the same procedure in speech and gesture. This study asks whether the discordant state is transitional, not only in the sense that it predicts receptivity to instruction, but also in the sense that it is both preceded and followed by a concordant state. To address this question, children were asked to solve and explain a series of problems instantiating the concept of mathematical equivalence. The relationship between gesture and speech in each explanation was monitored over the series. We found that the majority of children who learned to correctly solve equivalence problems did so by adhering to the hypothesized path: They first produced a single, incorrect procedure. They then entered a discordant state in which they produced different procedures--one in speech and another in gesture. Finally, they again produced a single procedure, but this time a correct one. These data support the notion that the transitional state is characterized by the concurrent activation of more than one procedure, and provide further evidence that gesture can be a powerful source of insight into the processes involved in cognitive development.


1992

Perry, M., Church, R. B., & Goldin-Meadow, S. Is gesture-speech mismatch a general index of transitional knowledge? Cognitive Development, 1992, 7(1), 109-122. PDF

When asked to explain their beliefs about a concept, some children produce gestures that convey different information from the information conveyed in their speech (i.e., gesture-speech mismatches). Moreover, it is precisely the children who produce a large proportion of gesture-speech mismatches in their explanations of a concept who are particularly "ready" to benefit from instruction in that concept, and thus may be considered to be in a transitional state with respect to the concept. Church and Goldin-Meadow (1986) and Perry, Church and Goldin-Meadow (1988) studied this phenomenon with respect to two different concepts at two different ages and found that gesture-speech mismatch reliably predicts readiness to learn in both domains. In an attempt to test further the generality of gesture-speech mismatch as an index of transitional knowledge, Stone, Webb, and Mahootian (1991) explored this phenomenon in a group of 15-year-olds working on a problem-solving task. On this task, however, gesture-speech mismatch was not found to predict transitional knowledge. We present here a theoretical framework, which makes it clear why we expect gesture-speech mismatch to be a general index of transitional knowledge, and then use this framework to motivate our methodological practices for establishing gesture-speech mismatch as a predictor of transitional knowledge. Finally, we present evidence suggesting that, if these practices had been used by Stone et al., they too would have found that gesture-speech mismatch predicts transitional knowledge.


Goldin-Meadow, S., Wein, D. & Chang, C. Assessing knowledge through gesture: Using children's hands to read their minds. Cognition and Instruction, 1992, 9(3), 201-219. PDF

Is the information that gesture provides about a child's understanding of a task accessible not only to experimenters who are trained in coding gesture but also to untrained observers? Twenty adults were asked to describe the reasoning of 12 different children, each videotaped responding to a Piagetian conservation task. Six of the children on the videotape produced gestures that conveyed the same information as their nonconserving spoken explanations, and 6 produced gestures that conveyed different information from their nonconserving spoken explanations. The adult observers displayed more uncertainty in their appraisals of children who produced different information in gesture and speech than in their appraisals of children who produced the same information in gesture and speech. Moreover, the adults were able to incorporate the information conveyed in the children's gestures into their own spoken appraisals of the children's reasoning. These data suggest that, even without training, adults form impressions of children's knowledge based not only on what children say with their mouths but also on what they say with their hands.

1991

Butcher, C., Mylander, C. & Goldin-Meadow, S. Displaced communication in a self-styled gesture system: Pointing at the non-present. Cognitive Development, 1991, 6, 315-342. PDF

The ability to refer to objects or events that are not in the here and now is widely recognized as an important feature of language, one that sets it apart from other forms of communication. The purpose of this article is to determine whether a deaf child who was not exposed to a usable model of a conventional language could use his self-styled communication system to refer to objects that were out of his perceptual field. We found that, beginning at the age of 3 years and 3 months, the deaf child consistently and reliably used gesture to refer to objects that were not present in the room. Although delayed with respect to the onset of displaced communication in hearing children, the deaf child's use of gesture to refer to nonpresent objects developed despite the fact that his hearing mother rarely used her spontaneous gestures for this purpose. Thus, the techniques necessary to communicate about the "there-and-then" appear to be so fundamental to human language that they can be reinvented by a child who does not have access to a culturally shared linguistic system.


1990

Goldin-Meadow, S. & Mylander, C. The role of parental input in the development of a morphological system. Journal of Child Language, 1990, 17, 527-563. PDF

In order to isolate the properties of language whose development can withstand wide variations in learning conditions, we have observed children who have not had access to any conventional linguistic input but who have otherwise experienced normal social environments. The children we study are deaf with hearing losses so severe that they cannot naturally acquire spoken language. In previous work, we demonstrated that, despite their lack of conventional linguistic input, the children developed spontaneous gesture systems which were structured at the level of the sentence, with regularities identifiable across the gestures in a sentence, akin to syntactic structure. The present study was undertaken to determine whether one of these deaf children's gesture systems was structured at a second level, the level of the gesture - that is, were there regularities within a gesture, akin to morphologic structure?

We have found that (1) the deaf child's gestures could be characterized by a paradigm of handshape and motion combinations which formed a matrix for virtually all of his spontaneous gestures, and (2) the deaf child's gesture system was considerably more complex than the model provided by his hearing mother. These data emphasize the child's contribution to structural regularity at the intra-word level, and suggest that such structure is a resilient property of language.


Goldin-Meadow, S. & Mylander, C. Beyond the input given: The child's role in the acquisition of language. Language, 1990, 66(2), 323-355. PDF
[Reprinted in P. Bloom (Ed.), Language acquisition: Core readings. New York: Harvester Wheatsheaf, 1993.]

The child's creative contribution to the language-acquisition process is potentially most apparent in situations where the linguistic input available to the child is degraded, providing the child with ample opportunity to elaborate upon that input. The children described in this paper are deaf, with hearing losses so severe that they cannot naturally acquire spoken language, and their hearing parents have chosen not to expose them to sign language. Despite their lack of usable linguistic input, these children develop gestural communication systems which share many structural properties with early linguistic systems of young children learning from established language models. This paper reviews our findings on the structural properties of the deaf children's gesture systems and evaluates those properties in the context of data gained from other approaches to the question of the young child's language-making capacity.


1980-1989

Angiolillo, C. & Goldin-Meadow, S. Experimental evidence for agent-patient categories in child language. Journal of Child Language, 1982, 9, 627-643. PDF

Evidence provided by contrastive word order for agent and patient semantic categories in young children's spontaneous speech is confounded. Agents (effectors of the action) tend to be animate; patients (entities acted upon) tend to be inanimate. In an experiment designed to circumvent this confounding and to test young children's linguistic sensitivity to the role an entity plays in the action, nine children (2;4.0-2;11.5) described actions involving animate and inanimate entities playing both agent and patient roles. Four linguistic measures were observed. On every measure agents were treated differently from patients. For the most part, these agent-patient differences persisted when animate and inanimate entities were examined separately. These results provide evidence for the child's intention to talk about the role an entity plays, independent of its animateness, and also suggest that the child uses role-defined linguistic categories like AGENT and PATIENT to communicate these relational intentions.


1970-1979

Goldin-Meadow, S. & Feldman, H. The development of language-like communication without a language model. Science, 1977, 197, 401-403. PDF

Six 17- to 49-month-old deaf children (of hearing parents) who were unable to acquire oral language naturally and who were not exposed to a standard manual language spontaneously developed a structured sign system that had many of the properties of natural spoken language. This communication system appeared to be largely the invention of the child himself rather than of the caretakers.


Goldin-Meadow, S., Seligman, M. E. P., & Gelman, R. Language in the two-year old: Receptive and productive stages. Cognition, 1976, 4(2), 189-202. PDF

Describes two stages in the vocabulary development of twelve 1- to 2-year-olds. In the earlier "receptive" stage, the children said fewer nouns than they understood and said no verbs at all, although they understood many. The children then began to close the comprehension-production gap, entering a "productive" stage in which they said virtually all the nouns they understood plus their first verbs. Frequency and length of word combinations correlated with these vocabulary stages.
 


Last updated March 2014
