
Semantic differences in visually similar face emojis

Published online by Cambridge University Press:  26 March 2024

Lea Fricke*
Affiliation:
Germanistisches Institut, Ruhr-Universität Bochum, Bochum, Germany
Patrick G. Grosz
Affiliation:
Institutt for lingvistiske og nordiske studier, Universitetet i Oslo, Oslo, Norway
Tatjana Scheffler
Affiliation:
Germanistisches Institut, Ruhr-Universität Bochum, Bochum, Germany
Corresponding author: Lea Fricke; Email: lea.fricke@rub.de

Abstract

The literature on face emojis raises the central question whether they should be treated as pictures or conventionalized signals. Our experiment addresses this question by investigating semantic differences in visually similar face emojis. We test a prediction following from a pictorial approach: small visual features of emojis that do not correspond to human facial features should be semantically less relevant than features that represent aspects of facial expressions. We compare emoji pairs with a visual difference that either does or does not correspond to a difference in a human facial expression according to an adaptation of the Facial Action Coding System. We created two contexts per pair, each fitted to correspond to a prominent meaning of one or the other emoji. Participants had to choose a suitable emoji for each context. The rate at which the context-matching emoji was chosen was significantly above chance for both types of emoji pairs and it did not differ significantly between them. Our results show that the small differences are meaningful in all pairs whether or not they correspond to human facial differences. This supports a lexicalist approach to emoji semantics, which treats face emojis as conventionalized signals rather than mere pictures of faces.

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

1. Introduction

Face emojis are stylized representations of human facial expressions that have become an integral part of written digital communication. Written communication lacks the signals of face-to-face communication, such as facial expressions and gestures. This is one important reason why emojis, which can take over many of the functions of such body movements, have become so popular. They have been argued to fulfill the role of gestures (McCulloch & Gawne, 2018; Pasternak & Tieu, 2022; Pierini, 2021), facial expressions (e.g., Weiß et al., 2020), particles (Beisswenger & Pappert, 2019) or discourse connectives (Beisswenger & Pappert, 2019), and to facilitate the understanding of indirect meaning (Holtgraves & Robinson, 2020).

While initial research on emojis was of a predominantly descriptive or applied nature (e.g., Kralj Novak et al., 2015), typically focusing on emoji use, recent years have seen the emergence of a more theoretically oriented literature. For example, Grosz, Kaiser and Pierini (2023) and Kaiser and Grosz (2021) present an analysis of action emojis as event descriptions, which are similar to free adjuncts and are connected to the surrounding text via discourse relations. Grosz, Greenberg, De Leon and Kaiser (2023) show how face emojis comment on a target proposition by expressing how it relates to a discourse value held by the author. Cohn et al. (2019) studied the grammatical structure of emojis. They found that word-replacing emojis mostly replace nouns and adjectives in texts and that emoji sequences show only basic linear grammar patterns.

Another line of research focuses on the processing of emojis. Weissman and Tanner (2018) found in an ERP experiment that statements marked as ironic with a winking face emoji are processed similarly to statements with verbal irony. Scheffler et al. (2022) present a self-paced reading study which shows that emojis replacing nouns in a sentence can represent full lexical entries, including both meaning and phonological representations. They do, however, give rise to increased reading times compared to words. Tang et al. (2020) compared the processing of incongruous emoji-text combinations to that of incongruous words in sentences in an ERP study. Their results suggest that emoji meaning is more difficult to process than word meaning. Weissman et al. (2023) found evidence for a high degree of lexical entrenchment for emojis with conventionalized meanings. According to experiments presented by Weissman (2022, 2024), such emojis commit the author of a message to their meaning, and they can be used to tell a lie.

Moreover, a number of studies have investigated the potential of face emojis to convey emotions, as well as their link to human facial expressions. Gantiva et al. (2020) studied the cortical processing of face emojis and human faces in an ERP study and found that face emojis elicit neural responses similar to those caused by human faces. Boutet et al. (2023) compared existing emojis, photos of human faces and novel emojis that included additional visual features of human facial expressions, with regard to their potential to convey six basic emotions. Their results suggest that adding visual features from facial expressions increases the accuracy with which an emotion can be recognized. However, Kendall et al. (2016) compared schematic cartoon faces with photos of human faces in an experimental study and found that at short exposure times, emotions conveyed by the former were identified with higher accuracy than emotions conveyed by the latter. The authors conclude that simplified iconic representations can convey emotional information more efficiently than realistic pictures. Jaeger et al. (2019) asked participants to rate the emotional valence and arousal associated with emojis. They conclude that face emojis encompass a broad spectrum of emotional valence and arousal, and that they can communicate a diverse array of emotions. However, a study by Franco and Fugate (2020) identified differences between different platform renderings of the same emoji in terms of which emotions they are interpreted to convey.

A crucial question that concerns the very nature of emoji semantics has, however, not yet been directly addressed in the experimental research on emojis: whether the meaning of face emojis without symbolic components (such as dollar signs) is conventionalized like the meaning of linguistic signals, or whether it is purely iconic, in the sense that emojis should be understood as text-accompanying pictures. The study we present in this paper addresses this question by investigating semantic differences in visually similar face emojis, e.g., 😐 and 😑 (neutral face, expressionless face). Note that while the emojis displayed in the text of this paper are the emojis from the LaTeX emoji package, we tested WhatsApp emojis in our experiment.

According to the pictorial approach (Maier, 2023), which proposes a purely iconic semantics for emojis, face emojis are pictures of faces, which are interpreted as representing the face of the author of the message. For example, Maier (2023, p. 31) argues that is a stylized picture of a face and that pragmatic enrichment gives rise to the understanding that this is a rendering of what the author’s face looks like when typing the emoji. Maier (2023, p. 10) further states that there is ‘a “natural” correspondence between shapes in the sign (specifically the shapes of mouth and eyes) and properties of the speaker’s face and/or emotional state.’ Such a view makes predictions for whether two face emojis map onto the same facial expression or different ones. Consider, for example, the two emojis and , which both represent smiling, happy faces. Note that the former emoji features eyebrows in a neutral position while the latter does not. As human faces usually have eyebrows, we can assume that the two emojis map onto the same facial expression in a human face. In contrast, the two emojis and clearly map to different facial expressions, as the former emoji represents open and the latter closed eyes. Thus, we draw the following inference from the pictorial account: small visual differences between two emojis should have a privileged status in terms of semantic relevance if they correspond to differences in human faces; by contrast, differences with no correspondence to human facial features should have less semantic relevance.

By contrast, a lexicalist approach emphasizes that emojis are associated with conventionalized meanings. Cohn and Ehly (2016) describe what they call the Japanese Visual Language, which is used in mangas and whose components also occur in emojis (see also Cohn, 2013). They use the label ‘morpheme’ for ‘units of meaningful visual representation’, that is, eye and mouth shapes as well as teardrops or snot bubbles (Cohn & Ehly, 2016, p. 20). For example, a snot bubble on the nose symbolizes sleep. Relatedly, Grosz, Greenberg, De Leon and Kaiser (2023) treat emojis as a variant of emotive expressions and thus as loosely equivalent to conventionalized linguistic signals (see also Grosz, Kaiser & Pierini, 2023 and Kaiser & Grosz, 2021, who argue that face emojis are similar to expressives). They propose that the meaning of can be rendered in a way similar to the meaning of the interjections yay or whee; the contribution of can be compared with the contribution of a swear word like damn. While it is beyond dispute that face emojis exhibit stylized features of a human face, the lexicalist approach allows for the possibility that the minimal units an emoji consists of (e.g., eye or mouth shapes) carry partially conventionalized meaning regardless of whether they are plausible depictions of (components of) human faces. Accordingly, small differences in emojis can be meaningful even if they do not represent differences in human faces. While we focus on face emojis in this paper, the pictorial and the lexicalist approach also cover non-face emojis. According to Maier (2023, p. 22), object emojis convey truth-conditional content and normally ‘depict what the evaluation world looks like’. However, Maier acknowledges that object emojis can also convey metaphorical and metonymic meanings. Grosz, Kaiser and Pierini (2023) present an analysis of object emojis in sentence-final position as event descriptions with an open argument slot by which they are anaphoric to an implicit agent. Thus, also with regard to non-face emojis, the two accounts focus either on their pictorial or their linguistic quality.

Whether the difference between visually similar face emojis maps to a difference in corresponding human facial expressions can be captured via the Facial Action Coding System (FACS; Ekman & Friesen, 1978), which decomposes facial expressions into minimal Action Units (AUs) and has been adapted for the annotation of face emojis by Fugate and Franco (2021). Against the background of the FACS, we can distinguish two types of pairs of visually similar emojis: emoji pairs with a visual difference that corresponds to an AU difference ([AU+]), like the pair 😃 and 😆 (grinning face with big eyes, grinning squinting face), and emoji pairs with a visual difference that does not correspond to an AU difference ([AU−]), like the pair 😄 and 😁 (grinning face with smiling eyes, beaming face with smiling eyes).

The latter pair shares the AUs 12 + 25 + 26 + 63, representing an arched-up mouth with an open mouth space and eyes looking up, and does not differ in any other Action Units. We followed Fugate and Franco (2021) in coding the eyes as AU 63 ‘eyes looking up’. Alternatively, this eye shape could be interpreted as AU 6 ‘cheek raiser’ in both emojis. The two emojis’ visual difference lies in the presence of lower teeth in beaming face with smiling eyes (😁). Whether lower teeth show in a smile with an open mouth space is not something a person can manipulate, which is why this is not captured in the FACS coding. The pair 😃 and 😆 shares the same AUs with regard to the mouth, that is, AU 12 + 25 + 26, which represent an arched-up mouth with an open mouth space, but the two emojis have different AUs for their eyes. The eye shape of 😃 is coded as AU 5 ‘eyes wide’, whereas the eye shape of 😆 is coded as AU 43 ‘eyes closed’.
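The [AU+]/[AU−] distinction can be illustrated with a small sketch in Python. The AU sets below are taken from the codings just described; the pair-classification helper is our own illustrative device, not part of Fugate and Franco’s rubric:

```python
# AU codings for two of the tested pairs, following the adaptation of the
# Facial Action Coding System (FACS) for emojis described in the text.
AU_CODING = {
    "grinning face with smiling eyes": {12, 25, 26, 63},  # [AU-] pair:
    "beaming face with smiling eyes":  {12, 25, 26, 63},  # identical AU sets
    "grinning face with big eyes":     {12, 25, 26, 5},   # [AU+] pair:
    "grinning squinting face":         {12, 25, 26, 43},  # AU 5 vs. AU 43
}

def pair_type(emoji_a: str, emoji_b: str) -> str:
    """Classify a pair as [AU+] if the two emojis differ in at least one
    Action Unit, and as [AU-] if their AU sets are identical."""
    diff = AU_CODING[emoji_a] ^ AU_CODING[emoji_b]  # symmetric difference
    return "[AU+]" if diff else "[AU-]"
```

The symmetric difference of the two AU sets is empty exactly when the visual difference between the emojis has no counterpart in human facial actions.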

In our experiment, we investigate whether the two types of visual differences ([AU+] and [AU−] differences) lead to differences in meaning between the two emojis of a pair. In doing so, we assess the explanatory power of the pictorial and the lexicalist approach. More precisely, our experiment tests a prediction that follows from the pictorial approach, namely that small visual differences that do not correspond to AU differences should be semantically less relevant than visual differences that do correspond to AU differences, as the former do not have any clear real-life counterparts. Our results show that both types of visual differences are in fact meaningful. This is compatible with a lexicalist approach to emoji semantics, which treats face emojis as conventionalized signals rather than mere pictures of faces.

2. Method

2.1. Design and materials

We tested three [AU+] emoji pairs and three [AU−] emoji pairs, shown in Figure 1. The emojis were rendered in the WhatsApp version, as we aimed to test an emoji version that is commonly known in our sample population, which consists of emoji users in Germany. A survey with 18,243 participants, conducted between July 2020 and August 2021, found that 89% of the participants born between 1981 and 1995 and 93% of the participants born between 1996 and 2009 have WhatsApp (Initiative D21, 2022). Thus, WhatsApp is widely used in Germany, and the dominant smartphone operating system is Android (68% market share in the first quarter of 2023, according to https://www.kantarworldpanel.com/global/smartphone-os-market-share/), which, unlike iOS, does not overwrite WhatsApp emojis with its own emoji versions. The number of emoji pairs was determined based on the availability of suitable emoji pairs without an AU difference. Constraints were that each emoji should occur in only one pair and that we aimed to avoid emojis with symbolic components like or , which cannot reflect actual human faces.

Figure 1. Emoji pairs tested.

The annotation of the emojis’ AUs, given in Figure 1, is based on the coding rubric by Fugate and Franco (2021). With the exception of the emojis grinning squinting face, astonished face, slightly frowning face and frowning face, which are not annotated in their paper, our AU codings were identical to Fugate and Franco’s (2021) existing AU codings for the emojis. Note, however, that Fugate and Franco do not provide AU codings for WhatsApp emojis. Instead, we used their codings of iOS emojis for guidance, which are very similar in design to the WhatsApp emojis. In the case of the emoji pairs with an AU difference, the AU that is exclusive to one of the two emojis is marked in boldface in the figure. Note further that we follow Fugate and Franco in not coding intensities of AUs. Intensity coding of AUs in face emojis comes with the challenges that it is hard to define the neutral expression of an emoji face and that a full spectrum of AU intensities cannot be observed in face emojis.

For each emoji pair, we created two contexts, each fitted to correspond to a prominent usage or meaning of one emoji or the other. The prominent usage or meaning of the emojis was determined based on their description on Emojipedia.org and on the results of a norming study with German speakers (Scheffler & Nenchev, submitted). In the norming study, participants were shown each emoji individually, without any surrounding linguistic context, and were asked to name up to three meanings of each emoji. The wording of the prompt was Was bedeutet das Emoji? Nennen Sie bis zu drei verschiedene Bedeutungen. (‘What does the emoji mean? List up to three different meanings.’) The retrieved meanings of the two emojis in a given pair overlap to some extent, so we aimed to single out a meaning aspect that is part of only one of the emojis of a pair. The meaning aspect that was selected for each emoji and the verb phrase by which it was expressed in the experiment are listed in the third and fourth line of Figure 1, respectively.

We presented each context to the participants and asked them to choose between the two emojis of a pair in a forced-choice task. They were instructed that a person named Alex or Anna wants to write WhatsApp messages to their friend, but is not yet familiar with the use of emojis. The latter information was included to motivate the task the participant had to fulfill, namely to help Alex/Anna choose the emoji that best matched the text. For example, for grinning face with smiling eyes (😄), we had identified ‘amusement’ as a prominent meaning, while beaming face with smiling eyes (😁) was found to often express ‘intense joy’. Figure 2 shows the presentation of two experimental items for this emoji pair. Red circles mark the context-matching emoji of the respective item. An English rendering of the items’ texts is given in (1) (for the left side of the figure) and (2) (for its right side).

  (1) Alex writes to his best friend Stefan:

      I just learned that my cousin’s dog has his own advent calendar.

      Alex is amused.

      Which of the two emojis matches the message better?

  (2) Alex writes to his best friend Stefan:

      I just learned that I won 500€ in the lottery.

      Alex is overjoyed.

      Which of the two emojis matches the message better?

Figure 2. Presentation of test items in the experiment.

Our experimental design had one factor (AU difference) with two levels ([AU+]/[AU−]). This factor was tested within participants and between items. The dependent variable was the rate (per condition) at which a context-matching emoji was chosen.

We tested 12 contexts corresponding to the total number of emojis we tested. For each context, we created four lexicalizations. For example, for the ‘amusement’ context, there were three other scenarios in which Alex/Anna had just received amusing news besides the scenario presented in Figure 2. They were: their cousin named her baby Ikea, their niece wants a Playmobil graveyard for her birthday, and their nephew’s dream job is Easter bunny. Thus, we had 48 test items in total (see Supplementary Appendix). The items were distributed over four experimental lists, so that each context was only seen once by each participant. In addition to the 12 test items, each list contained 12 filler/control items. They were of the same overall form as the test items, but the two emojis of a filler item were very distinct from one another. Therefore, it was unambiguously clear which emoji should be chosen in a given context. For example, in one context, the protagonist writes to her friend that she has to cancel their date, and the text below the screenshot states that she is expecting a (female) mechanic. The two emojis to choose from in this context were mermaid and woman mechanic.
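The distribution of the 48 items (12 contexts × 4 lexicalizations) over four lists, such that each participant sees every context exactly once, can be sketched as a simple rotation scheme. This is a hypothetical reconstruction in Python for illustration only; the actual item-to-list assignment is given in the Supplementary Appendix:

```python
from itertools import product

N_CONTEXTS, N_LEX, N_LISTS = 12, 4, 4

def build_lists():
    """Assign each (context, lexicalization) item to one of four lists so
    that every list contains each context exactly once, with a different
    lexicalization per list (a Latin-square-style rotation)."""
    lists = {i: [] for i in range(N_LISTS)}
    for context, lex in product(range(N_CONTEXTS), range(N_LEX)):
        lists[(context + lex) % N_LISTS].append((context, lex))
    return lists
```

Because the list index is the sum of context and lexicalization indices modulo four, each of the four lists ends up with twelve items, one per context.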

There were two versions of the experiment. In one version, the protagonist of the stimuli and his friend (Alex and Stefan) were male, and in the other version, both the protagonist and her friend (Anna and Stefanie) were female. As the literature has reported gender differences regarding the use of emojis and attitudes towards them (e.g., Chen et al., 2018; Herring & Dainas, 2020), we aimed to control for an influence of the protagonist’s gender.

The experiment was preregistered at https://osf.io/hcma8/. Data and the R script are available at https://osf.io/k2t9p/.

2.2. Predictions

The iconic semantics of the pictorial approach to face emojis implies that [AU+] differences have a privileged status over [AU−] differences because they reflect differences in human facial features. Thus, under the pictorial approach, there should be a clear preference for the context-matching emoji in emoji pairs with an [AU+] difference. For [AU−] pairs, however, the context-matching emoji should be chosen significantly less often than for [AU+] pairs, because the two emojis of an [AU−] pair can be expected to make an equivalent semantic contribution, which would cause them to be chosen more randomly.

In contrast, according to the lexicalist approach, both types of difference can create a difference in meaning between the two emojis of a pair of visually similar emojis. Under this approach, the context-matching emoji can be expected to always be the preferred choice irrespective of whether the emoji pair features an [AU+] or an [AU−] difference.

In this experiment, we assume the pictorial approach and attempt to provide evidence against the lexicalist approach. If, in pairs with an AU difference, context-matching emojis are chosen significantly more often than at chance level and significantly more often than in pairs without an AU difference, this would be in line with the pictorial approach, which implies a privileged role of realistic properties of human faces (such as AUs), and such a result would provide evidence against the lexicalist approach. If the rate of context-matching emojis is above chance level for at least one type of emoji pair and not significantly higher for pairs with an AU difference than for pairs without an AU difference, the result is compatible with the lexicalist approach, as it suggests that all manipulations matter, not just the realistically depictive ones.

2.3. Participants

To determine the size of the participant sample, we conducted a power analysis based on a pilot version of the experiment, using the software R (R Core Team, 2022), RStudio (RStudio Team, 2022) and the packages lme4 (Bates et al., 2015) and simr (Green & MacLeod, 2016). We fitted a generalized linear mixed model (logistic regression) to the pilot data and set the slope of the factor AU difference to a value corresponding to a significant effect. The analysis yielded that more than 500 participants would be required to achieve an optimal statistical power of 0.8 for a difference between the conditions of 14 percentage points. Due to monetary constraints, we chose a sample size of 160 participants, for which a statistical power of approximately 0.73 was reached in the simulation.
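The logic of a simulation-based power analysis can be sketched in a strongly simplified form in Python. Unlike the mixed-model simulation described above, this sketch ignores the clustering of observations within participants and items (which overestimates power relative to the reported 0.73) and replaces the mixed model with a two-proportion z-test; the baseline rate of 0.66 is an illustrative assumption, while the 14-percentage-point difference and the counts (160 participants × 6 observations per condition) come from the design described in the text:

```python
import math
import random

def simulate_power(p_control: float, diff: float, n_per_cond: int,
                   n_sims: int = 500, seed: int = 1) -> float:
    """Estimate power for detecting a difference between two proportions
    with a two-sided two-proportion z-test, assuming independent trials."""
    rng = random.Random(seed)
    z_crit = 1.959964  # two-sided 5% critical value of the standard normal
    hits = 0
    for _ in range(n_sims):
        # Simulate one experiment: binary choices in each condition.
        k1 = sum(rng.random() < p_control for _ in range(n_per_cond))
        k2 = sum(rng.random() < p_control + diff for _ in range(n_per_cond))
        p1, p2 = k1 / n_per_cond, k2 / n_per_cond
        pooled = (k1 + k2) / (2 * n_per_cond)
        se = math.sqrt(2 * pooled * (1 - pooled) / n_per_cond)
        if se > 0 and abs(p1 - p2) / se > z_crit:
            hits += 1
    return hits / n_sims

# 160 participants x 6 observations per condition, 14-point difference:
power = simulate_power(p_control=0.66, diff=0.14, n_per_cond=160 * 6)
```

With independent trials the estimated power is close to 1, which illustrates why accounting for random effects, as in the simr-based analysis, yields the much more conservative estimate reported above.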

Thus, 160 participants were recruited via Prolific and received 1.40€ as compensation for their participation. They were native speakers of German residing in Germany. To prevent different levels of emoji literacy or different degrees of familiarity with emojis from affecting the results, we aimed for a homogeneous group of participants. Using Prolific’s pre-screening filters, we targeted participants between 18 and 35 years of age who use WhatsApp on an Android phone. This age group can be assumed to have an adequate level of emoji literacy (Herring & Dainas, 2020) and can be expected to be familiar with the WhatsApp emojis used in the experiment.

One participant who exceeded the specified maximum age of 35 years (based on self-reported age) was excluded. The remaining 159 participants were aged between 18 and 35 years (M = 26.61, SD = 4.32); 109 specified their gender as male, 47 as female and 3 as non-binary.

There were no exclusions based on replies to filler/control items, for which we had set a threshold of at most three mistakes. Performance on filler/control items was good: two participants made one mistake each and one participant made three mistakes.

Although we had filtered for Android users on Prolific, 6 participants (4%) reported that their smartphone operating system was iOS and 3 (2%) chose the option ‘other’ for this question. We did not exclude these participants, as the smartphone operating system was not specified as an exclusion criterion in our preregistration. Due to the general similarity in design of the 12 emojis in our study across operating systems and devices, we do not expect a difference in behavior from these participants.

The majority of our participants use WhatsApp frequently according to their answers: 68% (108) use it multiple times a day, 18% (28) daily, 11% (18) every few days and 3% (5) only once in a while. The majority also reported using emojis frequently: 30% (47) in almost every message, 44% (70) often, 19% (31) occasionally, 5% (8) rarely and 2% (3) never. As the literature has reported differences between men and women with regard to their use of and attitude toward emojis, we analyzed our data in this respect, too. As there were only three participants with non-binary gender, we did not include them in the gender-related analyses. Figure 3 shows how frequently the male and female participants of our experiment use emojis. In line with previous reports in the literature, we observe that the female participants in our study use emojis more frequently than the male participants.

Figure 3. Emoji use by gender.

The attitude towards emojis was overall positive among the participants: 22% (35) of the participants have a very positive attitude, 43% (68) a positive, 23% (36) a rather positive, 11% (17) a neutral and 2% (3) a rather negative attitude towards emojis. The answer options for this question ranged from very positive to very negative. (The options ‘negative’ and ‘very negative’ were not chosen by any participants.) Figure 4 shows the attitude towards emojis by gender. Again, corresponding to what has been reported in the literature, the female participants have a more positive attitude towards emojis than the male participants.

Figure 4. Attitude towards emojis by gender.

Finally, the participants reported a good understanding of emojis: 40% (63) stated that their understanding is very good, 35% (56) that it is good, 24% (38) that it is rather good, and 1% (2) reported a moderate understanding. The answer options ranged from very good to very bad (with ‘moderate’ in the middle of the scale).

To sum up, these data indicate that the participants of our experiment are quite familiar with emojis, have a positive attitude towards them and use emojis frequently.

2.4. Procedure

The experiment was conducted as a web-based experiment created with the software _magpie (Franke et al., 2018). Participants thus completed the experiment on their own devices. They were assigned to one of the two versions of the experiment (male or female protagonist) and to one of the experimental lists.

Each participant was tested on 6 emoji pairs in 12 contexts, that is, they saw each emoji pair twice: in one context favoring one member of the pair, and in one context favoring the other member. To prevent the two contexts of an emoji pair from occurring in immediate succession, the set of stimuli was divided into two blocks. One context of a pair occurred in block one and the other in block two. Each block contained 6 test items and 5 filler/control items, presented in pseudo-randomized order. The order of the blocks varied between lists. In half of the 48 stimuli, the (by our assumption) matching emoji was presented on the left side, and in the other half, on the right side. For test items, the order of matching/non-matching emoji was varied between lists. Each block was preceded by a filler/control item.

After completing the experimental task, participants were asked optional questions about their demographics, the frequency with which they use emojis, their attitude towards and understanding of emojis, their use of WhatsApp, and the smartphone operating system on which they use emojis (see Section 2.3).

The completion of the experiment took between 1.7 and 17.1 min (M = 3.8, SD = 2).

3. Results

For data analysis, we used the software R (version 4.2.2, R Core Team, 2022) and RStudio (version 12.0, RStudio Team, 2022). The packages tidyverse (Wickham et al., 2019) and gridExtra (version 2.3, Auguie, 2017) were used for data processing and analysis, and the packages lme4 (Bates et al., 2015) and afex (version 1.2-1, Singmann et al., 2023) were used for the inferential statistical analysis.

The rate of context-matching emojis was 68% (650) in the [AU+] condition and 71% (675) in the [AU−] condition. There were no notable differences in the results between the version with the male ([AU+] = 68% (325), [AU−] = 70% (338)) and the female ([AU+] = 69% (325), [AU−] = 71% (337)) protagonist nor were there notable differences between male ([AU+] = 69% (450), [AU−] = 71% (462)) and female ([AU+] = 66% (186), [AU−] = 71% (202)) participants.

Figure 5 shows the rate at which a context-matching emoji was chosen for the individual emoji pairs. Each of the six bar charts is accompanied by the relevant emoji pair, such that the left emoji in the pair is the context-matching emoji for the left bar in the chart, whereas the right emoji is the context-matching emoji for the right bar. The green bars show the rate at which the context-matching emoji was chosen in the respective context. For almost all contexts, the matching emoji is preferred, with matching rates above chance level. In the case of the pair neutral face and expressionless face (😐 and 😑), however, the latter emoji was preferred both in the ‘annoyance’ and in the ‘mild irritation’ context.

Figure 5. Results for individual emoji pairs.

For the inferential statistical analysis, we built a generalized linear mixed model (logistic regression) with context match (yes/no) as the dependent variable. The model included AU difference as a fixed factor, by-participant random slopes, and random intercepts for contexts and lexicalizations. The predicted probability of the choice of a context-matching emoji is 73% in the [AU−] condition and 72% in the [AU+] condition (logit difference = −0.06, SE = 0.44, z = −0.15, p = 0.88). To confirm that the factor AU difference had no significant effect, as indicated by the model output, we performed a likelihood ratio test. It yielded that the minimal difference between the two conditions is not significant (χ²(1) = 0.0218, p = 0.88). (Note that even if the small difference were significant, this result would not be compatible with the pictorial hypothesis, as the higher rate of matching emojis is in the [AU−] condition.) The intercept of the model (logit value = 0.99, SE = 0.21, z = 3.18, p = 0.002) shows that the rate of context-matching emojis is above chance level in both conditions.
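The reported estimates can be cross-checked with a few lines of Python: the inverse logit turns the reported coefficients into predicted probabilities (assuming treatment coding with the [AU−] condition as the reference level, which is consistent with the numbers above), and for one degree of freedom the likelihood-ratio p-value follows directly from the reported χ² statistic:

```python
import math

def inv_logit(x: float) -> float:
    """Map a logit value to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def chi2_sf_df1(x2: float) -> float:
    """Survival function (p-value) of the chi-square distribution with
    df = 1: P(X > x2) = erfc(sqrt(x2 / 2))."""
    return math.erfc(math.sqrt(x2 / 2.0))

# Reported estimates: intercept ([AU-] condition) and AU-difference slope.
p_au_minus = inv_logit(0.99)         # predicted probability, [AU-]
p_au_plus = inv_logit(0.99 - 0.06)   # predicted probability, [AU+]
p_value = chi2_sf_df1(0.0218)        # p-value of the likelihood ratio test
```

These three values reproduce the reported 73%, 72% and p = 0.88, respectively.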

4. Discussion

As evidenced by the clearly above-chance rate of matching emojis in both types of emoji pairs (i.e., emoji pairs with and without a difference in facial action units), each emoji is preferred over a visually similar emoji in a context that represents a prominent aspect of its meaning, as determined on the basis of the emoji’s Emojipedia.org entry and a norming experiment. Thus, subtle differences in emojis that otherwise look alike lead to differences in meaning.

An exception was the pair of the neutral face and the expressionless face. For this pair, there was a general preference for the expressionless face in both contexts. A possible explanation is that participants found the mild irritation contexts (being placed on hold for 10 minutes on a service hotline, a garbage collection strike, a malfunctioning wifi router, pouring rain when wanting to go jogging) strongly annoying rather than mildly irritating, even though it was stated that the protagonist is only ‘a bit irritated’ (etwas genervt).

Our experiment indicates that the small visual difference between the two emojis of a pair does, in fact, create a difference in meaning. This is equally the case for [AU+] differences that map to different facial actions in human faces and for [AU−] differences that represent the same facial action in a human face. This result is compatible with a theory of face emojis that assumes a symbolic semantics and allows for the possibility that the minimal components of an emoji carry conventionalized meaning, much as linguistic signals do.

Our results carry implications for the division of labor between semantics and pragmatics. The pictorial approach assumes a minimalist semantics, under which face emojis are essentially stylized pictures (of the author’s face) and nothing more. This is coupled with an independently motivated pragmatic machinery that derives the more fine-grained and rich patterns that have been observed for the interpretation of emojis. Such a model would have difficulty explaining the differences in meaning that we observed in our experiment between emojis without facial action unit differences. In principle, a different pragmatic process would need to be defined for each emoji to derive its specific contextual meaning.

By contrast, the lexicalist approach allows for a richer compositional semantics of face emojis, by which individual parts of an emoji carry conventionalized meaning partially independent of their iconicity. As mentioned in the introduction, Cohn and Ehly (2016) analyze facial shapes such as mouths and eyes as morphemes and thus connect certain forms with conventionalized meanings. The lexicalist approach can, moreover, more easily handle emojis that contain symbolic components.

We wish to stress, however, that the experimental results presented in this paper do not constitute direct evidence for the lexicalist approach; they are merely compatible with it. Our strategy was to adopt the pictorial approach and attempt to derive evidence against the lexicalist approach from its predictions. The experimental results provided no such evidence, while showing that context-matching emojis are chosen significantly above chance level in all conditions. Future work should devise ways to probe the lexicalist hypothesis more directly, for example, by investigating the compositionality of the minimal units that emojis consist of. We currently plan to investigate to what extent single features such as eye or mouth shapes are associated with certain meanings, by testing them independently of the combinations with other shapes in which they co-occur in existing emojis.

Face emojis are clearly iconic in that they represent human faces. However, given our experimental results and the fact that a number of emojis include symbolic features, we anticipate that a hybrid semantic approach will eventually be required. Such a hybrid approach would incorporate both iconic and symbolic meaning components, making it possible to account for the whole spectrum from highly iconic emojis to emojis that incorporate symbolic components.

Supplementary material

The supplementary material for this article can be found at http://doi.org/10.1017/langcog.2024.12.

Data availability statement

Data and code are available at https://osf.io/k2t9p/.

Acknowledgements

The authors would like to thank the DiFoLi group at Ruhr-Universität Bochum as well as the ViCom community for their helpful feedback on the experiment. They thank Geraldine Baumann and Dennis Reisloh for helping prepare the stimuli, Nina-Kristin Meister for critical discussion of the FACS coding, and Timo Roettger for his advice on their experimental design and the statistical analysis. Last but not least, they are grateful for the valuable feedback they received from two anonymous reviewers.

Funding statement

This study was funded by the Deutsche Forschungsgemeinschaft (DFG) through the Priority Program DFG SPP 2392 Visual Communication (ViCom), project EmDiCom (project number 501983851).

Competing interest

The authors have no competing interest to declare.

References

Auguie, B. (2017). gridExtra: Miscellaneous functions for “grid” graphics. R package version 2.3. https://CRAN.R-project.org/package=gridExtra
Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. https://doi.org/10.18637/jss.v067.i01
Beisswenger, M., & Pappert, S. (2019). Handeln mit Emojis: Grundriss einer Linguistik kleiner Bildzeichen in der WhatsApp-Kommunikation. Universitätsverlag Rhein-Ruhr.
Boutet, I., Guay, J., Chamberland, J., Cousineau, D., & Collin, C. (2023). Emojis that work! Incorporating visual cues from facial expressions in emojis can reduce ambiguous interpretations. Computers in Human Behavior Reports, 9. https://doi.org/10.1016/j.chbr.2022.100251
Chen, Z., Lu, X., Ai, W., Li, H., Mei, Q., & Liu, X. (2018). Through a gender lens: Learning usage patterns of emojis from large-scale Android users. In Proceedings of the 2018 World Wide Web Conference (pp. 763–772). International World Wide Web Conferences Steering Committee. https://doi.org/10.1145/3178876.3186157
Cohn, N. (2013). The visual language of comics: Introduction to the structure and cognition of sequential images. Bloomsbury.
Cohn, N., & Ehly, S. (2016). The vocabulary of manga: Visual morphology in dialects of Japanese visual language. Journal of Pragmatics, 92, 17–29. https://doi.org/10.1016/j.pragma.2015.11.008
Cohn, N., Engelen, J., & Schilperood, J. (2019). The grammar of emoji? Constraints on communicative pictorial sequencing. Cognitive Research: Principles and Implications, 4(33). https://doi.org/10.1186/s41235-019-0177-0
Ekman, P., & Friesen, W. V. (1978). Facial action coding system: A technique for the measurement of facial movement. Consulting Psychologists Press.
Franco, C. L., & Fugate, J. M. B. (2020). Emoji face renderings: Exploring the role emoji platform differences have on emotional interpretation. Journal of Nonverbal Behavior, 44(2), 301–328. https://doi.org/10.1007/s10919-019-00330-1
Franke, M., Ji, X., Ilieva, S., Rautenstrauch, J., & Klehr, M. (2018). _magpie, minimal architecture for the generation of portable interactive experiments. https://magpie-ea.github.io/magpie-site/
Fugate, J. M. B., & Franco, C. L. (2021). Implications for emotion: Using anatomically based facial coding to compare emoji faces across platforms. Frontiers in Psychology, 12. https://doi.org/10.3389/fpsyg.2021.605928
Gantiva, C., Sotaquirá, M., Araujo, A., & Cuervo, P. (2020). Cortical processing of human and emoji faces: An ERP analysis. Behaviour & Information Technology, 39(8), 935–943. https://doi.org/10.1080/0144929X.2019.1632933
Green, P., & MacLeod, C. J. (2016). simr: An R package for power analysis of generalized linear mixed models by simulation. Methods in Ecology and Evolution, 7(4), 493–498. https://doi.org/10.1111/2041-210X.12504
Grosz, P., Kaiser, E., & Pierini, F. (2023). Discourse anaphoricity vs. perspective sensitivity in emoji semantics. Glossa: A Journal of General Linguistics, 8(1), 1–35.
Grosz, P., Greenberg, G., De Leon, C., & Kaiser, E. (2023). A semantics of face emoji in discourse. Linguistics and Philosophy, 46, 905–957. https://doi.org/10.1007/s10988-022-09369-8
Herring, S. C., & Dainas, A. R. (2020). Gender and age influences on interpretation of emoji functions. ACM Transactions on Social Computing, 3(2), 1–26. https://doi.org/10.1145/3375629
Holtgraves, T., & Robinson, C. (2020). Emoji can facilitate recognition of conveyed indirect meaning. PLoS ONE, 15(4). https://doi.org/10.1371/journal.pone.0232361
Initiative D21. (2022). D21 Digital Index 2021/2022: Jährliches Lagebild zur Digitalen Gesellschaft. https://initiatived21.de/uploads/03_Studien-Publikationen/D21-Digital-Index/2021-22/d21digitalindex-2021_2022.pdf
Jaeger, S. R., Roigard, C. M., Jin, D., Vidal, L., & Ares, G. (2019). Valence, arousal and sentiment meanings of 33 facial emoji: Insights for the use of emoji in consumer research. Food Research International, 119, 895–907. https://doi.org/10.1016/j.foodres.2018.10.074
Kaiser, E., & Grosz, P. G. (2021). Anaphoricity in emoji: An experimental investigation of face and non-face emoji. Proceedings of the Linguistic Society of America, 6(1), 1009–1023. https://doi.org/10.3765/plsa.v6i1.5067
Kendall, L. N., Raffaelli, Q., Kingstone, A., & Todd, R. M. (2016). Iconic faces are not real faces: Enhanced emotion detection and altered neural processing as faces become more iconic. Cognitive Research: Principles and Implications, 1, 19. https://doi.org/10.1186/s41235-016-0021-8
Kralj Novak, P., Smailović, J., Sluban, B., & Mozetič, I. (2015). Sentiment of emojis. PLoS ONE, 10(12), e0144296. https://doi.org/10.1371/journal.pone.0144296
Maier, E. (2023). Emojis as pictures. Ergo, 10. https://doi.org/10.3998/ergo.4641
McCulloch, G., & Gawne, L. (2018). Emoji grammar as beat gestures. In Proceedings of the 1st International Workshop on Emoji Understanding and Applications in Social Media (Emoji2018). https://ceur-ws.org/Vol-2130/
Pasternak, R., & Tieu, L. (2022). Co-linguistic content inferences: From gestures to sound effects and emoji. Quarterly Journal of Experimental Psychology, 75(10), 1828–1843. https://doi.org/10.1177/17470218221080645
Pierini, F. (2021). Emojis and gestures: A new typology. Proceedings of Sinn und Bedeutung, 25, 720–732. https://doi.org/10.18148/sub/2021.v25i0.963
R Core Team. (2022). R: A language and environment for statistical computing. R Foundation for Statistical Computing. https://www.R-project.org/
RStudio Team. (2022). RStudio: Integrated development environment for R. https://posit.co/products/open-source/rstudio/
Scheffler, T., & Nenchev, I. (submitted). Affective, semantic, frequency, and descriptive norms for 107 face emojis. https://semanticsarchive.net/Archive/mNiNGE1M/
Scheffler, T., Brandt, L., de la Fuente, M., & Nenchev, I. (2022). The processing of emoji-word substitutions: A self-paced-reading study. Computers in Human Behavior, 127, 107076. https://doi.org/10.1016/j.chb.2021.107076
Singmann, H., Bolker, B., Westfall, J., Aust, F., & Ben-Schachar, M. S. (2023). afex: Analysis of factorial experiments. R package version 1.2-1. https://CRAN.R-project.org/package=afex
Tang, M., Chen, B., Zhao, X., & Zhao, L. (2020). Processing network emojis in Chinese sentence context: An ERP study. Neuroscience Letters, 722, 134815. https://doi.org/10.1016/j.neulet.2020.134815
Weiß, M., Bille, D., Rodrigues, J., & Hewig, J. (2020). Age-related differences in emoji evaluation. Experimental Aging Research, 46(5), 416–432. https://doi.org/10.1080/0361073X.2020.1790087
Weissman, B. (2022). Emoji semantics/pragmatics: Investigating commitment and lying. In Proceedings of the Fifth International Workshop on Emoji Understanding and Applications in Social Media (pp. 21–28). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.emoji-1.3
Weissman, B. (2024). Can an emoji be a lie? The links between emoji meaning, commitment, and lying. Journal of Pragmatics, 219, 12–29. https://doi.org/10.1016/j.pragma.2023.11.004
Weissman, B., & Tanner, D. (2018). A strong wink between verbal and emoji-based irony: How the brain processes ironic emojis during language comprehension. PLoS ONE, 13(8), 1–26. https://doi.org/10.1371/journal.pone.0201727
Weissman, B., Engelen, J., Baas, E., & Cohn, N. (2023). The lexicon of emoji? Conventionality modulates processing of emoji. Cognitive Science, 47. https://doi.org/10.1111/cogs.13275
Wickham, H., et al. (2019). Welcome to the tidyverse. Journal of Open Source Software, 4(43), 1686. https://doi.org/10.21105/joss.01686
Figure 1. Emoji pairs tested.

Figure 2. Presentation of test items in the experiment.

Figure 3. Emoji use by gender.

Figure 4. Attitude towards emojis by gender.

Figure 5. Results for individual emoji pairs.
