2.4 The uncanny valley effect
While demonstrating complex computing power, avatars and other forms of
synthetic life have been plagued by the challenges of the so-called
uncanny valley – the feeling of eeriness and wariness interactants
experience when confronting a synthetic being that is humanlike but not
quite human (Mori et al., 2012). Negative reactions to the pseudo-human
range from
biological (MacDorman & Entezari, 2015) to metaphysical (MacDorman,
2005; Tondu, 2015), psychological (Gray & Wegner, 2012) to cultural
(MacDorman & Ishiguro, 2006). Whether the uncanny valley effect
provokes an anthropocentrism that worries over threats to its
distinctiveness (Stein & Ohler, 2017) or a concern for psychopathy and
malevolence in the machines (Tinwell et al., 2013), the literature
across domains consistently demonstrates interactants’ wariness over
synthetic pseudohumans when they are recognized as such. One empirical
study, for example, tested film animation styles and found that
semirealistic animated films evoke stronger feelings of eeriness than
either cartoonish or human films (Kätsyri et al., 2017). Another study
found that adding an avatar to chatbot interactions intensified
deleterious uncanny valley sentiments and produced stronger
psychophysiological effects (Ciechanowski et al., 2019). Perhaps most
relevant to the aims of this study, and confirming previous research on
virtual faces (McDonnell & Breidt, 2010), Weisman and Peña (2021) found
experimentally that avatar faces' evocation of the uncanny valley effect
mediates a decrease in affect-based trust, even though the faces used in
that study appear crude compared with those available only a few years
later, at the time of the present study.
Yet AI imaging and animation have advanced considerably since most of
the studies investigating the uncanny valley were completed. AI image
generators have now evaded detection and won
international art contests (Roose, 2022), and a majority of users no
longer recognize AI-generated faces when presented to them (Miller et
al., 2023). These advancements push back against the conventional
anti-AI uncanny valley wisdom and raise new questions for the persuasive
potential of the new wave of technology. A sister study, run roughly
concurrently with this study presented in this article, tested whether
or not the avatars in a program like Synthesia could provide discrete
knowledge to users in an onboarding context. While not set in a
persuasive context, the results are directly applicable to the project
here. In that study, not only did the avatars produce comparable
learning outcomes, but more than half of the participants did not
recognize the AI-generated avatars as synthetic computer constructs
(Redacted Author Citation).