TY - JOUR
AU - Lee, Eun-Ju
AB - Two experiments extended the computers are social actors paradigm by examining when and why people are more likely to evince gender-typed responses to computers. In both experiments, participants played a trivia game with a computer, which they thought generated random answers. When the computer gender was manifested in cartoon characters, participants attributed greater competence and exhibited greater conformity to the male than female computers, but such differences emerged only when they were simultaneously engaged in multiple tasks (Experiment 1). To elucidate what accounts for gender stereotyping of computers, Experiment 2 tested two competing explanations, depletion of cognitive resources and reduced attention, by varying the modality of computer output (speech vs. text). The advantages of the male computer observed in Experiment 1 dissipated when the computer provided speech output, demanding greater processing attention.

Recent technological advances have augmented computer interfaces with a wide array of human-like features, such as voice recognition, synthesized speech, and animated computer agents. Although the extent to which a computer is perceived and treated as an interactant, as opposed to a channel or a tool, would vary depending on the technological attributes of the machine (J. K. Burgoon, Bonito, et al., 2000), “the fact that computers are programmed and can be used as media seems to be psychologically irrelevant when users are in the midst of an interaction” (Sundar & Nass, 2000, p. 700). Rather, people seem to automatically apply social scripts and norms toward computers, even though they are fully aware that it is unreasonable to do so (Nass & Moon, 2000; Reeves & Nass, 1996). The computers are social actors (CASA) paradigm posits that the way people interact with computers is fundamentally social (Nass & Moon, 2000; Reeves & Nass, 1996). 
A series of studies has supported this claim in a variety of domains, including gender stereotypes, politeness, reciprocity, and personality. For example, people perceived a female-voiced computer to be more informative about love and relationships than the male-voiced counterpart (Nass, Moon, & Green, 1997), conformed more to the computer exhibiting a personality similar to their own (Nass, Moon, Fogg, Reeves, & Dryer, 1995), and rated the tutor computer more positively when the evaluations were made on the same computer than on a different computer, manifesting politeness to an inanimate machine (Nass, Moon, & Carney, 1999). The current research aimed to extend the CASA paradigm in two important respects. First, it attempted to identify boundary conditions under which social responses to computers are more or less likely to occur. By varying the cognitive demands of the situation (Experiment 1) and the modality of computer output (Experiment 2), the present experiments investigated how technological and contextual features moderate the degree to which people make social attributions to computers. Second, this research addressed the question of why people respond to computers socially, despite their knowledge that “the computer is not a person and does not warrant human treatment or attribution” (Nass & Moon, 2000, p. 82). To shed light on the psychological processes underlying the application of social rules to computers, Experiment 2 evaluated two competing explanations: depletion of cognitive resources and reduced attention.

Mindlessness explanation for social responses to computers

To account for why people tend to exhibit social reactions to computers, Nass and Moon (2000) proposed a mindlessness explanation. In their now-classic study, Langer, Blank, and Chanowitz (1978) demonstrated that even a placebic reason elicited as much compliance with a small request as did an adequate reason. 
Specifically, when the experimenter’s request was followed by a bogus reason, simply reiterating the obvious (“Excuse me, I have five pages. May I use the Xerox machine, because I have to make copies?” p. 637), people were as likely to comply as when they received a real reason, which provides additional information justifying the request (“Excuse me, I have five pages. May I use the Xerox machine, because I’m in a rush?” p. 637). Since then, the idea that people often perform social interaction mindlessly has spawned considerable scholarly interest. Defined as “a state of alertness and lively awareness,” mindfulness takes the form of “active information processing, characterized by cognitive differentiation” (Langer, 1989, p. 138). Mindlessness, on the other hand, represents the mental state in which an individual’s conscious attention is paid to only a few cues defining the situation, with relative disregard for other substantive cues (Langer, 1989). Although repetitive, routinized behavior is more likely to be carried out mindlessly, as individual components of a task go unnoticed and become cognitively unavailable (Langer & Imber, 1979), repetition is not necessary to evoke mindless behavior. When exposed to personally irrelevant information, for example, people were prone to uncritically accept the information, and this “premature cognitive commitment” based on a single exposure determined subsequent behavior (Chanowitz & Langer, 1981). According to Nass and his colleagues, mindlessness often characterizes computer users’ psychological experience. When interacting with a computer, people tend to focus on human-like cues, such as interactivity and voice, and thus fail to take into account the asocial nature of the interaction (Nass & Moon, 2000). With their conscious attention fixed on the attributes traditionally linked to humans, people do not seem to “process the fact that the machine is not human” (Sundar & Nass, 2000, p. 688). 
As a result, they come to use social categories (e.g., gender and ethnicity) for computers and emit overlearned social behaviors (e.g., politeness and reciprocity) toward computers (Nass & Moon, 2000). In other words, when a computer displays resemblance to human interactants, for example, understanding their commands or talking back, people apparently become “oblivious to novel aspects of the situation” (Langer, 1992, p. 289) (“Although it acts like a human, it is a computer!”) and act upon social scripts as they normally would in human–human interaction. The present research aimed to validate the mindlessness explanation for the CASA paradigm by investigating when social responses to computers are more or less likely to occur. If mindlessness indeed accounts for why people follow social scripts in their interactions with computers (Nass & Moon, 2000), such behavior should be more pronounced when the situational features foster mindlessness. In so doing, this research focused on gender stereotyping as an example of social responses. Considering that gender as a social category is irrelevant to inanimate machines, mindlessness, which entails “an overreliance on categories and distinctions drawn in the past” (Langer, 1992, p. 289), might lead people to nonetheless ascribe gender-typed traits to a computer, just as they do in human–human encounters. In fact, consistent with research on gender stereotyping in social interaction (see Carli, 2001; Ridgeway, 2001, for reviews), studies have found that people not only attribute greater task competence to male than female computers but also assume gender-particular skills and knowledge for a computer. Specifically, participants rated a female-voiced computer as more knowledgeable about love and relationships than a male-voiced computer, whereas a male-voiced computer was perceived as more knowledgeable about technical subjects than its female counterpart (Nass et al., 1997). 
Even when the computer used clearly machine-generated voices, the product descriptions delivered by a male-voiced computer were considered to be more credible than those delivered by a female-voiced computer (Morishima, Nass, Bennett, & Lee, 2001). Moreover, the findings were replicated with gendered cartoon characters, such that a male computer’s suggestions were better received on a masculine than feminine topic, whereas the opposite was true for a female computer (Lee, 2003). Unlike previous studies that focused on demonstrating the operation of gender stereotypes in human–computer interaction (HCI), however, the current research investigated under what conditions gender stereotyping is more or less likely to occur, especially with respect to cognitive demands of the situation and the modality of computer output.

Motivational and cognitive bases for mindlessness

Research has identified several situational factors facilitating mindful information processing. For example, when the request became more costly, people were more likely to comply with the request based on the adequacy of the reason, differentiating placebic and real reasons (Langer et al., 1978). Similarly, personal relevance of the outcome encouraged people to use more complex decision strategies (McAllister, Mitchell, & Beach, 1979) and to consider information more critically (Chanowitz & Langer, 1981). When people knew that they would need to justify their opinions, their thoughts became more differentiated or complex, presumably due to the heightened motivation to defend their opinions against possible criticisms (Tetlock, 1983). Likewise, J. K. Burgoon, Berger, and Waldron (2000) have noted that when people anticipate negative outcomes or have suspicion about the source, they are more likely to process the message mindfully. 
Unlike previous studies highlighting motivational deficits as a main cause of mindless behaviors, however, the present research focused on reduced cognitive capacity as a basis for mindlessness. Although Langer (1992) claims that mindlessness theory does not assume “the existence of capacity limitations” nor does it attribute mindless information processing to such limitations, it seems unreasonable to believe that individuals can engage in “a fully conscious, active mode of processing information” (p. 301) regardless of the situational constraints that potentially impede their cognitive functioning. That is, even though “capacity limitations remain indeterminate” and mindlessness is largely determined by “how we initially view information” (p. 301), rather than by the inherently limited capacity of the human brain, there are situations in which mindful processing is temporarily inhibited, for example, by the time pressure or cognitive demands of the task. In fact, the amount of time to think about a discussion topic had differential effects on performance depending on the familiarity of the topic, as the time constraint hampered mindful consideration of topic-related thoughts, leading people to rely upon existing scripts (Langer & Weinman, 1981). Similarly, Dolinski, Ciszek, Godlewski, and Zawadzki (2002) have reported that sudden emotional changes disrupt individuals’ cognitive functions, and such temporary disruption induces mindless compliance to small requests. Particularly germane to the current research is Gilbert, Pelham, and Krull’s (1988) study on cognitive busyness and person perception. In their study, when participants were simultaneously engaged in two tasks, they failed to take into account situational constraints in inferring a target person’s personality. That is, even though they gathered the situational constraints under which the target person was operating (e.g., discussing relaxing vs. 
anxiety-provoking topics), for they were required to memorize the discussion topics, they were less likely to attribute the target’s behavior to the situation. Although the authors did not explicitly associate the findings with mindlessness, considering that mindlessness entails the failure to consider qualifying conditions for a given phenomenon and reduced awareness of alternative conceptions (Langer, 1989, 1992), dispositional attribution in the face of a strong situational constraint appears to exemplify mindlessness. If cognitive demands prompt mindlessness and if mindlessness fosters gender stereotyping of computers, people will become more likely to make gender-typed attributions to computers when the task is more cognitively demanding (Experiment 1). The finding that the two-task group was less likely to “correct” their dispositional inferences, however, can be explained by two slightly different psychological processes. First, as the authors suggested, the peripheral task (i.e., memorizing the discussion topics) might have usurped limited cognitive resources, thereby disabling the adjustment of dispositional inferences in light of the situational forces shaping the target’s behavior. Alternatively, those assigned an additional task might have allocated greater attention to the peripheral task than to the personality judgment rather than trying to perform both tasks equally well. That is, they might have focused on memorizing the discussion topics with relative disregard for the focal task. In this case, it was not the depletion of cognitive resources, but reduced conscious attention to the person perception task, that prevented people from using the situational constraints in making sense of the target’s behavior. Experiment 2 evaluated these two possibilities. 
Experiment 1

To establish that gender stereotyping of computers becomes more likely as the cognitive demands of the situation increase, Experiment 1 had half the participants mentally rehearse a random number (Gilbert & Hixon, 1991) while playing a trivia game with a computer. Because clearly gender-biased tasks (e.g., Lee, 2003; Nass et al., 1997) are likely to heighten gender salience and prime people to encode and organize incoming information on the basis of gender (Bem, 1981), for a more conservative test of gender stereotyping in HCI, gender-neutral questions were asked. H1 through H3 concern the replication of previously documented computer gender effects, and H4 serves as the key hypothesis for Experiment 1.

H1a-b: Participants will attribute greater (a) trustworthiness and (b) competence to the male than female computers.
H2: Participants will believe that the male computers are more likely to provide correct answers than their female counterparts.
H3: Participants will be more likely to accept the male than female computers’ answers.
H4: The effects of the computer gender, as specified in H1a-b through H3, will be more pronounced when participants are simultaneously engaged in two tasks, as compared to one.

Method

Participants

A total of 100 undergraduate students (61 women and 39 men) enrolled in communication courses participated in the study for extra course credit.

Procedure

Upon arrival at the laboratory, participants were told that they would play an interactive trivia game with a computer. To reduce the participants’ suspicion about the purpose of the experiment, they were first asked to choose a letter on the computer screen, ranging from A to E, to determine the computer agent to interact with. Regardless of the letter selected, the participants saw either a female or a male character representing the computer, randomly chosen from two male and two female characters. 
At this point, participants chose a number, ranging from 1 to 10, to ostensibly determine a set of questions. Again, the computer presented the same questions irrespective of the number selected. First, the computer presented a multiple-choice trivia question (e.g., “Which of the following famous couples had the shortest marriage?”). Once the participants picked their initial answer, the computer gave its answer on the following page (e.g., “Sorry, the correct answer is D. Drew Barrymore and Jeremy Thomas had the shortest marriage.”), which they knew might or might not be correct; that is, the participants were explicitly told that the computer was programmed to generate a random answer. Hence, the participants had to guess whether the computer was presenting the correct answer or not and submit their final answer accordingly. For 8 out of 12 questions, the computer presented a different answer from the participants’. To enhance their involvement in the experiment and discourage blind rejection of the computer’s answer, a $20 cash prize was offered for the person with the highest score. Thus, to maximize the likelihood of winning the prize, they needed to accept the computer’s answer when it seemed like a correct one. Once they submitted their final answer, the computer proceeded to the next question without revealing the real answer. This procedure was repeated for 12 different questions. At the end of the practice round, those in the two-task condition were given a nine-digit number to memorize and rehearse during the interaction. To give an incentive to follow the instruction, participants were told that only those who remembered the number correctly would enter a drawing to win a gift certificate. The one-task group was not exposed to this manipulation.

Results

Manipulation check

Prior to the main study, cartoon characters were pretested with 50 undergraduate students (17 men and 33 women). 
They were first presented with one of the four cartoon characters (two males, two females) and instructed to indicate how feminine and masculine the character was on a 10-point scale (1—not at all masculine/feminine, 10—very much masculine/feminine). The femininity rating was reverse coded and then combined with the masculinity rating to form a masculinity index (r=−.86, p<.0001). A 2 × 2 (Participant Gender × Character Gender) analysis of variance (ANOVA) confirmed that male characters were considered to be significantly more masculine than female characters, F(1, 46) = 113.91, p<.0001, η²=.71. Furthermore, one-sample t tests showed that male characters were perceived as significantly more masculine than the scale midpoint (11.00), t(24) = 4.37, p<.001, whereas female characters were perceived as significantly more feminine, t(24) =−14.97, p<.001. There was no significant interaction, F< 1.

Index construction

For perceived trustworthiness, participants indicated how well each of the following adjectives described the computer on a 10-point scale (1—describes very poorly, 10—describes very well): trustworthy, honest, and reliable (α=.91; M= 11.34, SD= 4.94). The ratings were summed to create the trustworthiness index. Likewise, participants rated how intelligent, knowledgeable, and competent they thought the computer was, and the scores were summed for the perceived competence index (α=.94; M= 13.40, SD= 5.68). After submitting the final answer for each question, participants estimated how likely they thought the computer was correct on an 11-point scale, ranging from 0 to 100% in 10% increments. The ratings were averaged across eight critical trials (α=.81; M= 40.46, SD= 14.15). Conformity was operationalized in terms of how many times, out of eight questions on which the computer disagreed, the participants switched to the computer’s suggestions. Conformity scores ranged from 0 to 8 (M= 4.11, SD= 1.80). 
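The index construction described above is simple arithmetic (reverse-code, sum, count). Purely as an illustration, it can be sketched in Python; the function names and the sample ratings below are hypothetical, not taken from the study, and the reverse coding assumes the usual (scale maximum + 1) − rating rule, which yields the reported midpoint of 11.00 for the two-item masculinity index.

```python
def masculinity_index(masculinity, femininity, scale_max=10):
    """Reverse-code the femininity rating (1..scale_max scale) and
    combine it with the masculinity rating, as in the pretest."""
    reversed_femininity = (scale_max + 1) - femininity
    return masculinity + reversed_femininity

def summed_index(item_ratings):
    """Sum item ratings (e.g., trustworthy/honest/reliable) into one index."""
    return sum(item_ratings)

def conformity_score(initial, computer, final):
    """Count trials on which the computer disagreed with the initial
    answer and the participant switched to the computer's answer."""
    return sum(1 for i, c, f in zip(initial, computer, final)
               if c != i and f == c)

# Hypothetical example: one pretest rater and one participant's four trials
print(masculinity_index(masculinity=9, femininity=2))   # 9 + (11 - 2) = 18
print(summed_index([8, 7, 9]))                          # 24
print(conformity_score(initial=["A", "B", "C", "D"],
                       computer=["A", "C", "C", "B"],
                       final=["A", "C", "C", "D"]))     # 1
```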
To control for the potentially contaminating effects of the participants’ self-confidence, participants also indicated how confident they were about their initial answer before seeing the computer’s. The ratings on an 11-point scale, ranging from 0 to 100% in 10% increments, were averaged across eight critical trials (α=.87; M= 30.50, SD= 18.29). Another potentially contaminating variable, the participants’ prior knowledge about the questions, was measured by the number of correct initial answers (M= 1.98, SD= 1.18).

Hypothesis tests

H1a-b predicted that participants would perceive the male computer as more trustworthy and competent than the female computer. A 2 × 2 (Computer Gender × Number of Tasks) ANOVA on perceived trustworthiness and competence, however, revealed no main effect for computer gender, F< 1 and F(1, 96) = 1.76, p=.19, respectively.1,2 Instead, significant interactions emerged between the computer gender and the number of tasks, F(1, 96) = 4.76, p<.04, η²=.05, on perceived trustworthiness; F(1, 96) = 6.18, p<.02, η²=.06, on perceived competence. Decomposition of the interaction revealed that the participants perceived the male computer to be more trustworthy than the female computer only in the two-task condition, t(53) = 2.55, p<.02 (see Table 1). For the one-task condition, there was no significant difference in perceived trustworthiness between the male and the female computers, t< 1. Likewise, participants attributed greater competence to the male computer than the female computer, only when they were engaged in multiple tasks, t(53) = 2.74, p<.009. With no extra task to perform, participants did not assess the male and the female computers’ competence differently, t< 1.

Table 1
Means and Standard Deviations of Key Variables: Experiment 1

                        Two Task                          One Task
                        Male             Female           Male             Female
Trustworthiness         11.93 (4.42)a    8.89 (4.41)b     11.90 (4.99)a    13.04 (5.25)a
Competence              14.59 (6.10)a    10.43 (5.16)b    13.86 (4.67)a    15.12 (5.53)a
Computer correctness    42.04 (16.52)a   37.37 (15.50)a   42.31 (12.21)a   42.56 (11.16)a
Conformity              4.74 (1.65)a     3.21 (1.93)b     4.19 (1.40)a     4.38 (1.79)a

Note: Standard deviations are in parentheses. Different subscripts within each task type indicate significant differences between the scores within that condition, p<.05. 
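The simple-effects comparisons reported for Table 1 (e.g., t(53) = 2.55) are two-sample t tests. As a minimal, purely illustrative sketch, the pooled-variance (Student's) t statistic can be computed with the Python standard library; the data below are hypothetical, not the study's raw scores.

```python
from statistics import mean, variance

def two_sample_t(x, y):
    """Pooled-variance two-sample t statistic and its degrees of
    freedom, as used in simple-effects decomposition of an interaction."""
    nx, ny = len(x), len(y)
    # Pool the two sample variances, weighted by their degrees of freedom
    pooled = ((nx - 1) * variance(x) + (ny - 1) * variance(y)) / (nx + ny - 2)
    t = (mean(x) - mean(y)) / (pooled * (1 / nx + 1 / ny)) ** 0.5
    return t, nx + ny - 2

# Hypothetical ratings from two conditions
t, df = two_sample_t([1, 2, 3], [4, 5, 6])
print(round(t, 2), df)  # -3.67 4
```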
In contrast to the general assessments of computer competence and trustworthiness, when participants were asked to estimate how likely the computer’s answer was correct for each question, no main or interaction effects emerged, Fs < 1. Thus, H2 was not supported. Consistent with H3, male computers elicited greater conformity than did the female computers, F(1, 96) = 3.75, p<.03, η²=.04, one-tailed. This main effect, however, was qualified by the significant interaction, F(1, 96) = 6.19, p<.02, η²=.06. That is, participants conformed more to the male than female computers, but only when they were assigned the number rehearsal task, t(53) = 3.14, p<.004. For the one-task group, there was no significant difference in conformity whether the computer was male or female, t< 1. To control for the potential effects of the participants’ subjective uncertainty and their prior knowledge on conformity, a 2 × 2 analysis of covariance (ANCOVA) was performed on conformity with the participants’ self-confidence and the number of correct initial answers as covariates. Both variables turned out to be nonsignificant covariates (Fs < 1), and covarying out these variables did not significantly alter the findings. In sum, significant interactions between the computer gender and the number of tasks on perceived trustworthiness, perceived competence, and conformity supported H4, which predicted greater gender stereotyping in the two-task condition, except for perceived correctness of the computer’s answers.

Discussion

Experiment 1 showed that computer gender had significant effects on the participants’ perceptions of and behavioral reactions to the computer, only when they were engaged in multiple tasks. 
Just as people failed to use the information that the target person was speaking on anxiety-provoking topics and attributed greater dispositional anxiety to the target when cognitively busy (Gilbert et al., 1988, experiment 1), participants seemed to forget the fact that they were dealing with a genderless machine and acted upon gender-typed social scripts, when simultaneously engaged in multiple tasks. Specifically, a comparison of the means arrayed in Table 1 suggests that gender stereotyping of computers in the two-task condition was characterized by the relatively negative appraisal and dismissal of the female computer’s performance rather than by more favorable evaluations of the male computer. Considering that previous studies reporting gender stereotyping in HCI (e.g., Lee, 2003; Nass et al., 1997) did not impose particularly strong cognitive demands, the null effect of computer gender in the one-task condition seems inconsistent at first blush. Two possibilities might account for the apparent inconsistency. First, unlike previous studies where gender stereotypes were defined as the tendency to associate the computer gender with a gender-typed task (e.g., sports vs. fashion; technology vs. love and relationships), Experiment 1 operationalized gender stereotyping in terms of the overall advantages of male influence agents using a gender-neutral task. Although people tend to attribute greater competence to men than to women, and are more likely to conform to the male than female influence agents, such a tendency is not as robust as the general proclivity to assume gender-particular skills and knowledge (Carli, 2001). Second, participants were explicitly told that the computer might present an incorrect answer and they needed to determine whether or not to accept the computer’s suggestions. Given that uncertainty of the situation (Langer, 1994) and features arousing message recipient’s suspicion prompt more thoughtful information processing (J. K. 
Burgoon, Berger, et al., 2000), such manipulation, in combination with low self-confidence ratings (30%), might have evoked greater mindfulness than people would normally exhibit in real-life situations where the computer’s correctness is reasonably assumed. Although the results comport well with the notion that cognitive overload impairs people’s ability to correct dispositional inferences (Gilbert et al., 1988), they are still open to an alternative explanation based on the differential allocation of attention. Specifically, it might have been the case that those in the two-task condition did not experience processing deficiency resulting from multitasking, but simply paid more conscious attention to the number rehearsal task, rendering the trivia game a “peripheral” task. In other words, greater gender stereotyping in the two-task condition was not necessarily due to the depletion of cognitive resources but to a more or less conscious withdrawal of attention from the task, attention that was required to differentiate between the present (HCI) and the past situations (human–human interaction) and to deactivate the social scripts enacted by perceptually salient gender cues (“It is a computer and thus, gender is not relevant.”). Experiment 2 was conducted to evaluate these competing explanations by varying the modality of computer output: speech versus written text.

Experiment 2

Two rival hypotheses were entertained with respect to how computer modality moderates the extent to which people evince gender stereotypes toward computers. On one hand, speech output might evoke greater gender stereotyping than does text for the following reasons. First, past research has shown that spoken words require more cognitive resources and effortful processing than does written text (Wong, 2001). This is especially true when synthesized speech is used. 
Even when synthesized speech was as intelligible as recorded speech, the recall of eight-digit numbers was significantly better when the numbers were presented by a male human voice than by male synthesized speech (Smither, 1993). Similarly, after using an information retrieval system in either the written or the spoken mode, participants reported greater subjective mental workload in the spoken mode (Bigot, Jamet, Rouet, & Amiel, 2006). Therefore, if limited cognitive capacity and ensuing processing deficiency under high cognitive demands are responsible for gender stereotyping of computers, such behavior will be more pronounced in the speech condition. Second, speech makes personal and social cues of the communicator more prominent than does written text (Chaiken & Eagly, 1983). Therefore, speech output might accentuate gender salience, thereby facilitating gender categorization and gender-typed social attributions. Last, considering that speech makes interfaces more anthropomorphic than does text (J. K. Burgoon, Bonito, et al., 2000), if social responses to the computer are triggered by its resemblance to humans, speech will be more likely than text to elicit gendered attributions. By contrast, several possibilities suggest greater gender stereotyping with text output. First, under normal conditions, speech calls for greater attention from message recipients than written text, encouraging more active information processing (Foos & Goolkasian, 2005). In support of this hypothesis, when participants were made to pay close attention to experimental stimuli, recall improved only for printed words, diminishing the relative advantages of spoken words (Foos & Goolkasian, 2005, experiment 4). Similarly, J. K. Burgoon, Bonito, et al. (2000) noted that interfaces incorporating speech synthesis increase a sense of involvement and mutuality, yielding more active participation from users. 
Second, researchers have reported that “novel communication situations and suspicion-arousing features of modality” (J. K. Burgoon, Berger, et al., 2000, p. 110) and “unfamiliar or otherwise troublesome communicative situations” (Motley, 1992, p. 307) encourage more conscious communicative decisions. Although its use has been on the rise, given that speech is still a much less common medium than written text in HCI, speech output might foster more mindful responses by virtue of its novelty. Third, albeit as intelligible as recorded human speech, synthetic speech still falls short in naturalness (Nass & Lee, 2001), inviting more conscious message decoding. Taken together, if gender stereotyping of computers results largely from attentional deficits, it should be more pronounced when the computer presents written text as compared to speech output.

H5-1: The effects of the computer gender predicted in H1a-b through H3 will be more pronounced with speech than text output (resource depletion hypothesis).
H5-2: The effects of the computer gender predicted in H1a-b through H3 will be more pronounced with text than speech output (reduced attention hypothesis).

In addition, Experiment 2 investigated potentially confounding effects of the language style on individuals’ perceptions of male and female computers. Possibly, the less positive responses to the female computer in the two-task condition in Experiment 1 might have stemmed in part from the use of fact-oriented language, commonly identified as the masculine language style (Eakins & Eakins, 1978; Lakoff, 1975; Mulac, 1998). Given that the comments used in Experiment 1 were strictly factual, simply verbalizing the selected answer, the incongruity between the character gender and the language style might have violated the participants’ expectations and evoked more negative reactions to the female computer (M. Burgoon & Klingle, 1998). 
To address this possibility, Experiment 2 created more feminine comments for the computer output and examined whether they would reverse the findings or at least attenuate the relative advantages of the male computer. Because some characteristics of the female language style, such as self-disclosure and references to emotions (Mulac, 1998), might unduly encourage social treatment of the computer, short comments praising participants’ performance were added to the messages.3

Method

Participants

A total of 135 undergraduate students (78 women and 57 men) enrolled in communication courses participated in the study for extra course credit.

Procedure

The experimental procedure was identical to that of Experiment 1, except for a few modifications. First, participants were not asked to choose the computer agent to work with. Instead, in the text condition, a male or a female cartoon character, randomly selected from two male and two female characters, was shown next to the message. In the speech condition, participants were presented with a “Play” button to hear the computer’s answer before submitting their final answer. They could not move to the next page unless they played the audio file first. Second, because the mindless responses were found only in the two-task condition in Experiment 1, all participants were given the number rehearsal task. Last, for maximal exposure to the positive feedback from the computer, the computer validated the participants’ answer on 7 of the 12 questions and suggested a different answer on the five critical trials.

Results

Index construction

All indices were constructed as in Experiment 1, except for two additional measures. For perceived trustworthiness, participants indicated how trustworthy, honest, and reliable the computer was on a 10-point scale (1 = describes very poorly, 10 = describes very well), and the scores were summed across the three items (α = .82).
Likewise, perceived competence was indexed by summing the participants’ ratings of the computer on intelligent, knowledgeable, and competent (α = .87). After submitting the final answer for each question, the participants estimated how likely they thought the computer’s answer was correct on an 11-point scale, ranging from 0 to 100% in 10% increments. The ratings were then averaged across the five critical trials (α = .79; M = 41.78, SD = 18.79). Conformity was measured by counting how many times, out of the five questions on which the computer disagreed, the participants switched to the computer’s suggestions (M = 2.95, SD = 1.26). In addition, as another measure of the acceptance of the computer’s suggestions, the number of times participants changed their answer to something else when the computer validated their initial answer was recorded (M = 0.71, SD = 0.98). Participants also indicated how confident they were about their initial answer before seeing the computer’s answer on an 11-point scale, ranging from 0 to 100%. The ratings were averaged across the five critical trials (α = .85; M = 30.24, SD = 20.04) and the seven filler trials (α = .83; M = 37.72, SD = 18.93), respectively. Participants’ prior knowledge was measured by the number of correct initial answers on the five critical trials (M = 1.10, SD = 0.99). Two additional measures were included in Experiment 2. First, to examine whether speech rendered the computer gender more salient than did cartoon characters, participants were asked to indicate how masculine and feminine they thought the computer was on a 10-point scale (1 = describes very poorly, 10 = describes very well). The femininity score was then subtracted from the masculinity score, resulting in an index whose positive scores indicated greater masculinity than femininity attribution and whose negative scores indicated the opposite (r = −.40; M = 2.47, SD = 2.93, for male computers; M = −3.20, SD = 3.97, for female computers).
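The index arithmetic described above (summing items after an internal-consistency check, and building a difference score from the masculinity and femininity ratings) can be sketched in a few lines. This is purely illustrative code, not part of the study; the function names and the sample ratings are hypothetical:

```python
from statistics import variance

def cronbach_alpha(ratings):
    """Cronbach's alpha for a set of respondents' ratings,
    given as one tuple of item scores per respondent."""
    k = len(ratings[0])                                   # number of scale items (e.g., 3)
    item_vars = [variance(col) for col in zip(*ratings)]  # per-item sample variance
    total_var = variance([sum(row) for row in ratings])   # variance of the summed scores
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

def masculinity_index(masculinity, femininity):
    # Positive values: greater masculinity than femininity attribution
    return masculinity - femininity

# Hypothetical (trustworthy, honest, reliable) ratings from four participants
ratings = [(9, 8, 9), (3, 4, 3), (7, 7, 6), (2, 1, 2)]
alpha = cronbach_alpha(ratings)   # a value near 1 justifies summing the items
index = masculinity_index(8, 3)   # positive: masculine attribution
```

Cronbach's α here follows the standard formula α = k/(k−1) × (1 − Σ item variances / variance of summed scores); values near 1, like the .82 and .87 reported above, justify collapsing the items into a single summed index.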
Second, participants rated how awkward, strange, and natural (reverse scored) the interaction was on the same 10-point scale (1 = describes very poorly, 10 = describes very well), and the ratings were summed to create the perceived novelty index (α = .78; M = 16.45, SD = 6.10).

Manipulation check

To examine (a) whether cartoon characters and synthesized voices manipulated the computer gender as intended and (b) whether speech rendered computer gender more salient than did cartoon characters, a 2 × 2 (Computer Gender × Computer Modality) ANOVA was computed on perceived masculinity of the computer. First, male computers were perceived as significantly more masculine than female computers, F(1, 131) = 106.56, p < .0001, ηp² = .45. Moreover, one-sample t tests also confirmed that male computers were perceived as more masculine than the scale midpoint, t(80) = 7.59, p < .0001, and female computers were perceived as more feminine than the theoretical midpoint, t(53) = −5.93, p < .0001. However, this main effect was qualified by a significant Computer Gender × Modality interaction, F(1, 131) = 25.51, p < .0001, ηp² = .16. Decomposition of the interaction within each modality revealed that with speech output, male computers were perceived as more masculine than female computers, t(67) = 10.25, p < .0001, Cohen’s d = 2.50. The same was true for the text condition, but the difference was less pronounced, t(43.71) = 3.76, p < .0001, Cohen’s d = 1.14. When the interaction was decomposed for male and female computers separately, the male computer was perceived as more masculine when it presented speech than text output, t(79) = −3.96, p < .0001. Likewise, the female computer was perceived as more feminine with speech than text output, t(52) = 3.17, p < .004. Collectively, the results indicated that speech heightened gender salience to a greater extent than did cartoon characters.
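The fractional degrees of freedom reported above, t(43.71), signal a Welch-type correction for unequal group variances in that simple-effects test. A minimal sketch of Welch’s t statistic with the Welch–Satterthwaite degrees of freedom (illustrative only; the data shown are invented, not the study’s):

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's unequal-variance t test: returns (t, df)."""
    va, vb = variance(a), variance(b)
    na, nb = len(a), len(b)
    se_a, se_b = va / na, vb / nb        # squared standard errors per group
    t = (mean(a) - mean(b)) / sqrt(se_a + se_b)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (se_a + se_b) ** 2 / (se_a ** 2 / (na - 1) + se_b ** 2 / (nb - 1))
    return t, df

# Hypothetical masculinity-index scores for male- vs. female-presented computers
t_stat, df = welch_t([4, 5, 3, 6, 5], [-2, -4, -3, -1, -5])
```

When the two groups have equal sizes and variances, the approximation reduces to the ordinary pooled df of n1 + n2 − 2; unequal variances pull the df below that value, which is how non-integer df such as 43.71 arise.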
In addition, a 2 × 2 ANOVA on perceived novelty of interaction revealed a significant main effect for computer modality, with speech output rated as significantly more novel and less familiar (M = 17.72, SD = 5.74) than text output (M = 15.12, SD = 6.22), F(1, 131) = 5.72, p < .02, ηp² = .04. No other effects were statistically significant, all Fs < 2.46, all ps > .11.

Hypothesis tests

A 2 × 2 (Computer Modality × Computer Gender) ANOVA on perceived trustworthiness showed a significant interaction between modality and computer gender, F(1, 131) = 5.11, p < .03, ηp² = .04.4,5 Decomposition of the interaction revealed that the effect of computer gender was significant only with text output, supporting the reduced attention hypothesis (H5-2). Specifically, when the computer provided text output, male computers were perceived as more trustworthy than their female counterparts, t(64) = 2.37, p < .03. In contrast, when the messages were delivered in synthetic speech, there was no significant difference between the male and the female computers in perceived trustworthiness, t < 1. Neither computer modality nor computer gender had a significant main effect, Fs < 1 (see Table 2).

Table 2. Means and Standard Deviations of Key Variables: Experiment 2

                          Speech                            Text
Computer Gender           Male            Female            Male            Female
Perceived masculinity     3.58 (2.98)a    −4.85 (3.80)b     1.21 (2.33)a    −1.68 (3.54)b
Trustworthiness           11.33 (5.74)a   12.54 (4.93)a     13.95 (4.94)b   11.00 (5.06)a
Competence                16.05 (6.54)a   16.38 (5.74)a     17.92 (5.02)a   14.54 (5.64)b
Computer correctness      41.21 (19.25)a  44.23 (19.24)a    41.47 (17.99)a  40.79 (19.54)a
Conformity                2.74 (1.36)a    3.08 (1.29)a      3.26 (1.15)a    2.71 (1.15)b

Note: Standard deviations are in parentheses. Different subscripts within each modality indicate significant differences between the scores within that condition, p < .05. Positive masculinity scores indicate greater masculinity than femininity attribution, whereas negative scores indicate the opposite.
Likewise, although the overall interaction between computer gender and modality on perceived competence failed to reach statistical significance, F(1, 131) = 3.33, p < .08, ηp² = .03, additional analyses for each modality replicated the findings on trustworthiness (Winer, 1971). With text output, male computers were perceived as more competent than female computers, t(64) = 2.57, p < .02. No corresponding difference was found with speech output, t < 1. To test how computer modality moderates the effects of computer gender on perceived correctness of the computer, a 2 × 2 (Computer Gender × Computer Modality) ANOVA was computed. Replicating Experiment 1, however, no main or interaction effects were statistically significant, all Fs < 1. A 2 × 2 ANOVA on conformity, with computer gender and computer modality as independent variables, yielded a significant interaction, F(1, 131) = 4.01, p < .05, ηp² = .03. Specifically, male computers elicited greater conformity than did their female counterparts with text output, t(64) = 1.91, p < .03, one-tailed. By contrast, when the computers generated speech output, male and female computers did not differ in their ability to induce conformity, t(67) = −1.00, p = .32. No other main or interaction effects were statistically significant, all Fs < 1.85, all ps > .17. Moreover, neither the participants’ self-confidence in their initial answers nor their prior knowledge proved to be a significant covariate of conformity, Fs < 1. If participants were more likely to accept the male computers’ suggestions than the female computers’, albeit only with text output, such a tendency might also have emerged when the computer validated the participants’ answers. Contrary to the findings on conformity, however, there were no significant main or interaction effects, all Fs < 1.61, all ps > .20.
That is, when the computer confirmed the participants’ answer, participants were no more likely to believe the male than the female computers’ answers, regardless of the output modality.

Discussion

Experiment 2 pitted two alternative hypotheses against each other to explain why people were more likely to exhibit gender stereotypes toward a computer in the two-task condition. Consistent with the reduced attention account, when speech output demanded greater conscious attention and effortful message processing, participants seemed to approach the task in a more mindful manner and corrected inappropriate social attributions enacted by salient gender cues. Even though Experiment 2 employed more feminine comments than those used in Experiment 1, participants’ responses were in general more favorable to the male than the female computers, casting doubt on the language–character congruity explanation for the relative advantages of male computers. Participants gender stereotyped the computer only with text output, even though it was the speech output that made computer gender more salient. On one hand, such findings effectively refute the conjecture that synthesized speech did not evoke gender-typed reactions because it failed to manipulate computer gender as intended. On the other hand, the results suggest a potential disjunction between “activation” and “application” of stereotypes (Gilbert & Hixon, 1991). That is, although certain contextual features, such as interaction modality, might temporarily heighten gender salience and thus increase the accessibility of gender stereotypes, they might not necessarily lead people to apply activated stereotypes to a specific target. Especially when the application of stereotypic beliefs is blatantly inappropriate, as in the current research, the same situational variable that elevates the salience of a category membership might simultaneously suppress the application of stereotypes.
Computer gender, however, had no significant main or interaction effects on some variables, questioning the generalizability of the CASA paradigm. First, in both experiments, the perceived likelihood of the computer presenting the correct answer did not vary as a function of computer gender. One possibility pertains to the way perceived correctness was operationalized. Unlike perceived trustworthiness and competence, which were based on overall impression measures, perceived correctness was examined by asking participants how likely it was that the computer had presented the correct answer for each question. When forced to specifically assess the likelihood of the computer’s offering the correct answer, participants might have become more mindful and thus less influenced by peripheral cues, such as computer gender. Second, although participants were more likely to switch to the male computers’ than the female computers’ answers when the computer disagreed with them, no such difference was found when the computer validated their initial answers; that is, people were not any more suspicious of the female computers’ answers. Possibly, a self-serving bias might account for this discrepancy. Just as explicitly random praise from the computer enhanced perceived self-performance as much as sincere praise did (Fogg & Nass, 1997), participants might have automatically accepted information that boosted their self-esteem, regardless of the source. Another possibility is restricted variance. Even though participants knew that the computer might be falsely confirming their response, sticking to their initial answer would have been the most rational choice when they did not have a better idea. The fact that participants changed their initial answer on the filler trials only about 10% of the time comports well with both accounts.

General discussion

Taken together, the present experiments extended the CASA paradigm in several important respects.
First, unlike previous studies that investigated if and how closely individuals’ responses to computers mirror those to other people in social interaction, the current research explored what situational features facilitate or inhibit such responses. For example, the finding that speech output significantly suppressed social responses to computers illustrates that individuals’ responses to computers are not invariably social. Second, this research explored what underlies the seemingly mindless gender-typed responses to computers: depletion of cognitive resources or attentional deficits. When speech output renewed participants’ attention to the focal task, whether it was due to its relative novelty in the context of HCI or inherent qualities of aural stimuli, participants became less prone to emit gender stereotypes to computers. Third, the present research ruled out a competing explanation for gender stereotyping of computers, perceptual salience of gender category cues. Although speech heightened gender salience to a greater extent than did cartoon characters, gender stereotyping of computers was less pronounced with speech output. Some limitations of the current research merit note. First, the mental state of mindlessness was not directly measured. Although mindlessness has often been examined in the form of overt behavior (Langer et al., 1978; Pollock, Smith, Knowles, & Bruce, 1998), to understand better the link between mindlessness and social responses to computers, it would be desirable to employ more proximal measures to message decoding, for example, recall memory for computer output. Second, because Experiment 2 used only synthetic speech, it remains unclear if the findings are replicable with recorded human speech. Thus, to conclude that speech, independent of particular paralinguistic attributes, indeed promotes mindfulness in HCI, future research should examine if natural speech invites different reactions than computer-generated speech does. 
Third, the nature of the experimental context might have heightened overall mindfulness, potentially limiting the external validity of the findings. Given that uncertainty of the situation provokes mindfulness (Langer, 1994) and that features arousing message recipients’ suspicion enact more thoughtful information processing (J. K. Burgoon, Berger, et al., 2000), participants might have become more mindful when they were led to doubt the truth value of the computer’s comments. In addition, offering a performance-based prize would also have increased the personal relevance of the outcome, encouraging more mindful decision making (McAllister et al., 1979). Perhaps enhanced overall mindfulness explains why synthetic speech failed to evoke gender stereotyping of computers (Experiment 2), inconsistent with previous findings (e.g., Morishima et al., 2001). Last, both experiments investigated only one example of social responses to computers, gender stereotyping. More research is needed to conclude that mindlessness underlies social responses to computers in other domains as well. Despite the seemingly robust findings demonstrating the operation of social rules in HCI, direct comparisons between HCI and human–human interaction have revealed some interesting differences between the two. For example, in a simulated group discussion, the effects of the visual representation (stick figures vs. animated cartoon characters) of interactants on source perception were more pronounced when the interactants were thought to be computer agents, as opposed to other participants (Lee & Nass, 2002). Similarly, people adjusted interpersonal distance from a computer-controlled agent in an immersive virtual environment only when the agent displayed realistic gaze, whereas they maintained the optimal level of immediacy with a human-controlled avatar regardless of gaze behavior (Bailenson, Blascovich, Beall, & Loomis, 2003, Experiment 1).
Even when the ontology of the source was not varied (computer vs. human), Nass and Brave (2005) found that claims to humanity (i.e., “I” reference) evoked negative reactions when delivered by a synthetic, as compared to a human, voice. Taken together, these findings seem to indicate that people do differentiate between interacting with versus via computer technologies, and yet, in the presence of social cues, they often fail to suppress well-rehearsed social reactions toward the machine. If so, it seems imperative to examine what kinds of interface features, used in what kinds of contexts, influence what kinds of users’ cognitive, affective, and behavioral states (Nass & Moon, 2000), which will significantly help to convert well-established, interesting observations into a meaningful theory about human communication. The present study represents an attempt to unveil specific situations that do and do not occasion social responses to computers, and why.

Notes

1. A series of 2 × 2 (Computer Gender × Number of Tasks) ANCOVAs with participant gender as a covariate established that participant gender was a nonsignificant covariate for all dependent variables and that covarying out participant gender did not significantly alter the results, all Fs < 1.38, ps > .24. Therefore, participant gender was not included in the analyses.

2. For added generalizability of the findings, each participant was randomly assigned one of the two male and two female characters. To rule out the possibility that some idiosyncratic features of each cartoon character might have significantly affected the results, a series of 2 × 2 ANCOVAs was performed within each computer gender, employing the number of tasks and individual character (1 vs. 2) as independent variables and participant gender as a covariate. Results confirmed that there were no significant main or interaction effects involving the individual character (1 vs.
2), for both male and female computers: all Fs < 1 on perceived trustworthiness; all Fs < 1.14, all ps > .29 on perceived competence; all Fs < 1 on perceived correctness of the computer; and all Fs < 1.68, all ps > .20 on conformity. Thus, the data were pooled between the two characters within each computer gender.

3. Prior to the main study, to test whether the addition of positive feedback on the participants’ performance made the computer comments more feminine, a total of 41 participants (20 women, 21 men) read either the neutral comments used in Experiment 1 or the slightly positive comments created for Experiment 2 and rated how masculine and feminine the comments were on a 10-point scale (1 = describes very poorly, 10 = describes very well). The femininity rating was then subtracted from the masculinity rating (r = −.48, p < .003). As expected, a 2 × 2 (Comment Type × Participant Gender) ANOVA yielded a significant main effect for comment type in the predicted direction; that is, the positive comments (M = −2.15, SD = 3.18) were perceived as significantly more feminine than the neutral comments (M = 0.48, SD = 2.87), F(1, 37) = 7.11, p < .02, ηp² = .16. No other main or interaction effects were statistically significant, all Fs < 1.

4. When a series of 2 × 2 (Computer Modality × Computer Gender) ANCOVAs was performed with participant gender as a covariate, participant gender proved to be a nonsignificant covariate for all dependent variables and, thus, covarying it out did not significantly alter the results, all Fs < 1. Therefore, participant gender was not included in the analyses.

5. Again, to examine whether some idiosyncratic features of each cartoon character/voice significantly affected the findings, a series of 2 × 2 ANCOVAs was performed within each computer gender, employing computer modality and individual character/voice (1 vs. 2) as independent variables and participant gender as a covariate.
Replicating Experiment 1, there were no significant main or interaction effects involving the individual character/voice (1 vs. 2), for both male and female computers, on all dependent variables, all Fs < 1. Therefore, the data were pooled between the two characters/voices within each computer gender.

References

Bailenson, J. N., Blascovich, J., Beall, A. C., & Loomis, J. M. (2003). Interpersonal distance in immersive virtual environments. Personality and Social Psychology Bulletin, 29, 1–15.
Bem, S. L. (1981). Gender schema theory: A cognitive account of sex typing. Psychological Review, 88, 354–364.
Bigot, L. L., Jamet, E., Rouet, J.-F., & Amiel, V. (2006). Mode and modal transfer effects on performance and discourse organization with an information retrieval dialogue system in natural language. Computers in Human Behavior, 22, 467–500.
Burgoon, J. K., Berger, C. R., & Waldron, V. R. (2000). Mindfulness and interpersonal communication. Journal of Social Issues, 56, 105–127.
Burgoon, J. K., Bonito, J. A., Bengtsson, B., Cederberg, C., Lundberg, M., & Allspach, L. (2000). Interactivity in human-computer interaction: A study of credibility, understanding, and influence. Computers in Human Behavior, 16, 553–574.
Burgoon, M., & Klingle, R. S. (1998). Gender differences in being influential and/or influenced: A challenge to prior explanations. In D. J. Canary & K. Dindia (Eds.), Sex differences and similarities in communication: Critical essays and empirical investigations of sex and gender in interaction (pp. 257–285). Mahwah, NJ: Erlbaum.
Carli, L. L. (2001). Gender and social influence. Journal of Social Issues, 57, 725–741.
Chaiken, S., & Eagly, A. H. (1983). Communication modality as a determinant of persuasion: The role of communicator salience. Journal of Personality and Social Psychology, 45, 241–265.
Chanowitz, B., & Langer, E. (1981). Premature cognitive commitment. Journal of Personality and Social Psychology, 41, 1051–1063.
Dolinski, D., Ciszek, M., Godlewski, K., & Zawadzki, M. (2002). Fear-then-relief, mindlessness, and cognitive deficits. European Journal of Social Psychology, 32, 435–447.
Eakins, B. W., & Eakins, R. G. (1978). Sex differences in human communication. Boston: Houghton Mifflin.
Fogg, B. J., & Nass, C. (1997). Silicon sycophants: The effects of computers that flatter. International Journal of Human-Computer Studies, 46, 551–561.
Foos, P. W., & Goolkasian, P. (2005). Presentation format effects in working memory: The role of attention. Memory and Cognition, 33, 499–513.
Gilbert, D. T., & Hixon, J. G. (1991). The trouble of thinking: Activation and application of stereotypic beliefs. Journal of Personality and Social Psychology, 60, 509–517.
Gilbert, D. T., Pelham, B. W., & Krull, D. S. (1988). On cognitive busyness: When person perceivers meet persons perceived. Journal of Personality and Social Psychology, 54, 733–740.
Lakoff, R. (1975). Language and woman’s place. New York: Harper & Row.
Langer, E. (1989). Minding matters: The consequences of mindlessness-mindfulness. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 22, pp. 137–173). San Diego, CA: Academic Press.
Langer, E. (1992). Matters of mind: Mindfulness/mindlessness in perspective. Consciousness and Cognition, 1, 289–305.
Langer, E. (1994). The illusion of calculated decisions. In R. C. Schank & E. J. Langer (Eds.), Beliefs, reasoning, and decision making: Psych-logic in honor of Bob Abelson (pp. 33–53). Hillsdale, NJ: Erlbaum.
Langer, E., Blank, A., & Chanowitz, B. (1978). The mindlessness of ostensibly thoughtful action: The role of “placebic” information in interpersonal interaction. Journal of Personality and Social Psychology, 36, 635–642.
Langer, E., & Imber, L. (1979). When practice makes imperfect: The debilitating effects of overlearning. Journal of Personality and Social Psychology, 37, 2014–2025.
Langer, E., & Weinman, C. (1981). When thinking disrupts intellectual performance: Mindfulness on an overlearned task. Personality and Social Psychology Bulletin, 7, 240–243.
Lee, E.-J. (2003). Effects of “gender” of the computer on informational social influence: The moderating role of task type. International Journal of Human-Computer Studies, 58, 347–362.
Lee, E.-J., & Nass, C. (2002). Experimental tests of normative group influence and representation effects in computer-mediated communication: When interacting via computers differs from interacting with computers. Human Communication Research, 28, 349–381.
McAllister, D., Mitchell, T., & Beach, L. (1979). The contingency model for the selection of decision strategies: An empirical test of the effects of significance, accountability, and reversibility. Organizational Behavior and Human Performance, 24, 228–244.
Morishima, Y., Nass, C., Bennett, C., & Lee, K. M. (2001). Effects of “gender” of computer-generated speech on credibility. Technical Report of IEICE, TL2001-16, 31, 557–562.
Motley, M. T. (1992). Mindfulness in solving communicators’ dilemmas. Communication Monographs, 59, 306–314.
Mulac, A. (1998). The gender-linked language effect: Do language differences really make a difference? In D. J. Canary & K. Dindia (Eds.), Sex differences and similarities in communication: Critical essays and empirical investigations of sex and gender in interaction (pp. 127–153). Mahwah, NJ: Erlbaum.
Nass, C., & Brave, S. (2005). Wired for speech: How voice activates and advances the human-computer relationship. Cambridge, MA: MIT Press.
Nass, C., & Lee, K. M. (2001). Does computer-generated speech manifest personality? Experimental tests of recognition, similarity-attraction, and consistency-attraction. Journal of Experimental Psychology: Applied, 7, 171–181.
Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56, 81–103.
Nass, C., Moon, Y., & Carney, P. (1999). Are respondents polite to computers? Social desirability and direct responses to computers. Journal of Applied Social Psychology, 29, 1093–1110.
Nass, C., Moon, Y., Fogg, B. J., Reeves, B., & Dryer, D. C. (1995). Can computer personalities be human personalities? International Journal of Human-Computer Studies, 43, 223–239.
Nass, C., Moon, Y., & Green, N. (1997). Are computers gender-neutral? Gender stereotypic responses to computers. Journal of Applied Social Psychology, 27, 864–876.
Pollock, C. L., Smith, S. D., Knowles, E. S., & Bruce, H. J. (1998). Mindfulness limits compliance with the that’s-not-all technique. Personality and Social Psychology Bulletin, 24, 1153–1157.
Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people and places. New York: Cambridge University Press.
Ridgeway, C. L. (2001). Sex, status, and leadership. Journal of Social Issues, 57, 637–655.
Smither, J. (1993). Short term memory demands in processing synthetic speech by old and young adults. Behavior and Information Technology, 12, 330–335.
Sundar, S. S., & Nass, C. (2000). Source orientation in human-computer interaction: Programmer, networker, or independent social actor? Communication Research, 27, 683–703.
Tetlock, P. (1983). Accountability and complexity of thought. Journal of Personality and Social Psychology, 45, 74–83.
Winer, B. J. (1971). Statistical principles in experimental design (2nd ed.). New York: McGraw-Hill.
Wong, W. (2001). Modality and attention to meaning and form in the input. Studies in Second Language Acquisition, 23, 345–368.

© 2008 International Communication Association

Gender Stereotyping of Computers: Resource Depletion or Reduced Attention? Journal of Communication, 58(2), June 2008, p. 301 ff. doi:10.1111/j.1460-2466.2008.00386.x