Previous research has shown that listeners' identification of English anterior sibilant fricatives changes depending on whether they are primed to believe that the talker is a woman or a man. This article explored how stable this effect is across two types of priming. Listeners identified a nine-step shack-sack continuum created by combining an /s/-/ʃ/ continuum with a natural production of a VC that had been acoustically manipulated to be gender-neutral. In the explicit-priming condition, the talker's sex was primed by showing a picture of a man or a woman. In the implicit-priming condition, the talker's sex was primed by having listeners judge the grammaticality of acoustically gender-neutral sentences that implied the talker was either a woman or a man. The effect of sex priming was strongest in the explicit condition, and it was in the expected direction: more sack judgments were elicited on trials in which a man's face served as the prime. The influence of sex priming on fricative identification was weaker in the implicit condition, though still in the expected direction. Together, these data show that gender normalization effects in fricative perception occur most strongly when the talker's gender is cued very explicitly, by showing a picture of the talker. The small size of the effect calls into question the interpretation of Strand and Johnson's effect as evidence that social variables pervasively influence speech perception.
Acknowledgments: Stimulus preparation and data collection were supported by a Research Experience for Undergraduates grant from the National Science Foundation to the University of Minnesota Center for Cognitive Sciences. Participant funds were provided by the University of Minnesota College of Liberal Arts. A pilot study of this experiment was conducted as the third author's undergraduate thesis at Vassar College, using materials developed jointly by the first and third authors. We gratefully acknowledge Janet K. Andrews for her input to that document.
© 2017 Walter de Gruyter GmbH, Berlin/Boston.
Keywords: speech perception