ABSTRACT
This research article examines the effectiveness of Google’s #LetsInternetBetter social advertising campaign in countering misinformation by analyzing consumer reactions through YouTube comments. It capitalizes on social networks as a rich data source for genuine insights into consumer opinions regarding misinformation and awareness campaigns. Employing a qualitative research method through thematic analysis of 994 comments in MAXQDA 2020, this research categorizes consumer responses into ad- and brand-related themes, further dividing them into cognition, affect, and behavioral intentions. It introduces an expanded typology of commenter types, reflecting a broad spectrum of public engagement. Findings reveal varied reactions, ranging from positive endorsements to critical skepticism, highlighting the campaign’s global outreach. The research underscores the pivotal role of direct engagement in misinformation mitigation efforts and emphasizes digital campaigns’ potential to polarize audiences. Furthermore, it highlights the need for nuanced, culturally attuned communication strategies in awareness campaigns to navigate the complexities of public perceptions. This study contributes to understanding how digital platforms influence public discourse on misinformation and offers insight for crafting more effective and inclusive social advertising campaigns.
Introduction
The rise of the digital age has significantly altered information dissemination, leading to the proliferation of fake news, which encompasses both accidental misinformation and intentional disinformation. Dubbed “The Disinformation Age,” this era has witnessed a significant increase in the production of fake news. Recognized by entities such as the World Economic Forum as a societal hazard (Del Vicario et al., 2016), fake news blurs truth and falsehood, affecting social cohesion and democratic discourse.
Combating fake news requires a collective approach (Dodda & Dubbudu, 2019) that includes enhancing digital literacy, improving regulatory actions, and engaging social media platforms to counter misinformation spread. Pivotal in distributing information, internet companies employ various strategies, such as social advertising campaigns, to increase public awareness and improve media literacy (Bak-Coleman et al., 2022). Analyzing these campaigns based on consumer responses and interactions offers insights into how these issues are framed and the effectiveness of such initiatives (Chasi & Omarjee, 2014). Understanding the diverse motivations behind consumer engagement with these campaigns through their comments provides valuable perspectives for social marketers, revealing the complex thoughts and feelings of the audience regarding misinformation and the efforts to counteract it.
Previous research has extensively documented the dynamics, consequences, and counterstrategies of misinformation in digital environments. Studies have shown that false information, due to its emotional appeal, novelty, and algorithmic amplification, spreads faster and more broadly than factual content on social media (Vosoughi et al., 2018). Scholars have also emphasized the role of digital platforms not only as passive conduits but also as active gatekeepers shaping public discourse through recommendation systems and advertising infrastructures (Gillespie, 2018; Napoli, 2019). In response to growing public concern, various interventions have been examined, including fact-checking labels (Pennycook et al., 2020), media literacy initiatives (Guess et al., 2020), and platform-led awareness campaigns, which are aimed at fostering critical information consumption. However, empirical findings reveal that such interventions often generate mixed outcomes, sometimes increasing skepticism, resistance, or perceived manipulation among audiences (Nyhan & Reifler, 2015; van der Linden et al., 2020). Within advertising and marketing scholarship, prior studies have predominantly focused on consumer responses to socially framed campaigns, such as femvertizing, corporate activism, and health communication, demonstrating that audience reactions are deeply influenced by perceptions of sincerity, credibility, and corporate intent (Mukherjee & Althuizen, 2020; Vredenburg et al., 2020). Despite this growing body of work, limited attention has been paid to misinformation-focused social advertising campaigns initiated by global digital platforms, particularly regarding how audiences interpret, negotiate, and contest such messages through participatory comment cultures. 
The present study attempts to address this gap by situating Google’s #LetsInternetBetter campaign within this evolving research landscape and examining consumer responses as a critical site for understanding platform credibility and public engagement in The Disinformation Age.
Google’s #LetsInternetBetter campaign, launched on its YouTube platform, stands as a testament to the power of digital platforms in promoting safer and more responsible internet usage. This research article examines the effectiveness of the campaign by analyzing consumer comments on the ad campaign videos. Through a detailed examination of the consumers’ ad- and brand-related comments, the study offers insights into consumer perceptions and engagement. Moreover, it expands upon Reagle’s (2015) commenter types by identifying new categories of commenters and offering a fresh perspective on audience interaction with social advertising. This analysis contributes to the understanding of digital marketing strategies responsive to fake news and mis/disinformation and highlights the dynamics of consumer engagement in the context of social advertising campaigns.
Conceptual Background
Fake News, Mis/Disinformation, Reasons, and Combating Strategies
The proliferation of mis/disinformation in the digital age, marked by the rapid spread of false information online, poses significant societal challenges (Dell, 2018; Howard et al., 2021). Recognized by the World Economic Forum as a major societal threat, fake news undermines the optimism of the 21st century, often called the Information Age, as an era of global cooperation, leading to what could more accurately be termed The Disinformation Age (Del Vicario et al., 2016; Guilbeault, 2018). This issue is exacerbated by modern technologies that enable content dissemination at unprecedented speeds, hindering the protection of human rights and peace (United Nations, 2022).
The intent to deceive distinguishes disinformation from misinformation, with disinformation being more malicious (Hameleers et al., 2022). Mis/disinformation often mimics legitimate news, blending truth and falsehood to exploit human biases (Popescu, 2020). The intentional deception inherent in disinformation aims to manipulate, leveraging modern technologies to achieve the author’s goals (Bontridder & Poullet, 2021). The virality and ambiguity of online content can lead to information cascades in which the accuracy of shared information is often not guaranteed, thus serving the interests of those seeking political or financial gain (Bastick, 2021). Therefore, it can have real-world harmful effects, especially on vulnerable groups like children (Howard et al., 2021). Yet, overreaction to false information threatens freedoms, including expression, and highlights the dilemma faced by internet and social media companies in balancing human rights with combating fake news (Aswad, 2020).
The digitalization of news and the networked nature of the internet have amplified the spread and manipulation of information, affecting public discourse during crises and elections (Dodda & Dubbudu, 2019; Qian et al., 2022; Shapovalova, 2020; Wang et al., 2022). Social media users increase the spread of disinformation through actions like sharing and liking, which, in turn, trigger platform algorithms to further disseminate the content (Buchanan, 2020). This process is facilitated by algorithms that prioritize engagement over accuracy (Buchanan & Benson, 2019; Howard et al., 2021). In this context, the phenomenon of “organic reach” is crucial, as user interactions boost the visibility of disinformation (Buchanan & Benson, 2019). Social media also enables paid promotion of content, further enhancing its reach (Silva et al., 2023; Vosoughi et al., 2018).
Group dynamics and the desire for social acceptance motivate individuals to share unverified information, with trust in the source playing a significant role in the likelihood of engagement (Buchanan & Benson, 2019; Dodda & Dubbudu, 2019). Moreover, personality traits and risk propensities of individuals affect their likelihood of engaging with disinformation, contributing to its spread (Buchanan & Benson, 2019). Furthermore, mis/disinformation spreads through bots, algorithms, and coordinated groups, reflecting a complex ecosystem influenced by political, social, and technological factors (Howard et al., 2021; Shekhar, 2018). Artificial intelligence and algorithms exacerbate the problem by creating realistic fake content and targeting it to susceptible audiences (Bontridder & Poullet, 2021; Howard et al., 2021).
To combat the spread of fake news, a multifaceted approach involving governments, non-profits, internet and social media companies, educators, and individuals is being implemented. Enhancing digital media literacy is a core strategy, with initiatives aimed at training news consumers to identify misinformation and understand its implications (Buchanan, 2020; Guess et al., 2020).
This includes large-scale training programs, such as WhatsApp’s effort to educate 100,000 people in India on identifying misinformation through social media posts and in-person events (Guess et al., 2020). Similarly, WhatsApp has launched campaigns and ads to educate users about misinformation, especially during critical times like elections (Shekhar, 2018). These platforms have also embraced labeling misinformation and working with fact-checking organizations to curb the spread of harmful content, such as anti-vaccination misinformation (Howard et al., 2021). The challenges posed by disinformation are also being tackled through various means, including the deplatforming of “superspreaders” to reduce the reach of misinformation, pre-moderation by content providers to protect children from misinformation, and the integration of parental controls in digital platforms (Howard et al., 2021). Furthermore, the European Commission has encouraged internet companies to adopt voluntary codes of practice for greater transparency and accountability in handling online disinformation (Mortera-Martinez, 2019).
Education plays a pivotal role in fighting misinformation, with media literacy education positioned as a crucial tool for empowering individuals to discern real from fake news. This includes teaching critical thinking skills and making media literacy a required part of educational systems (Dame Adjin-Tettey, 2022; Dell, 2018). To address misinformation, strategies such as correction, inoculation, and pre-bunking are employed to flag false content and forewarn people against it (Qian et al., 2022). Empirical evidence supports the efficacy of media literacy interventions in enhancing individuals’ ability to distinguish between accurate and false news, highlighting the importance of targeted education and training (Guess et al., 2020; Hameleers, 2022). Thus, media literacy education aims to create a more informed and media-savvy population that is less susceptible to disinformation campaigns.
Social Marketing, Advertising, and Fake News
In addressing the burgeoning challenges posed by corporate scandals, societal inequalities, and environmental issues, the “Better Marketing for Better World” approach emerges as a critical framework, emphasizing the integration of ethical considerations into marketing strategies (Voola et al., 2022). Today’s consumers are increasingly aligning their patronage with businesses that reflect their ethical values and are transparent about their commitment to human rights and sustainable practices (Anuradha et al., 2023; Kılıç Taran & Akbayır, 2022). With Generation Z and Millennials showing an increased sense of social justice, brands like Nike, WhatsApp, and Airbnb have successfully engaged these demographics through sincere social campaigns, demonstrating the potential of social marketing to resonate authentically with consumers (Mueller, 2023). However, the efficacy of social marketing goes beyond mere messaging, requiring a holistic approach that includes tangible support for underserved populations, a willingness among the economically disadvantaged to invest in valued services, and leveraging their creativity and entrepreneurial spirit (Smith, 2009).
As social marketing navigates the challenges posed by the pandemic, climate change, social inequalities, and digital technologies, it focuses on targeting consumers with “social goods” to spearhead societal improvements (Chasi & Omarjee, 2014; Flaherty et al., 2021). Modern campaigns leverage segmentation and demonstrate the value of innovations, often manufacturing consent through asymmetrical power dynamics between marketers and audiences (Chasi & Omarjee, 2014).
Social advertising, which is an essential component of broader social marketing efforts, seeks to transcend traditional commercial advertising by embedding a social purpose within brand narratives, enabling deeper consumer engagement (Anker et al., 2022). This shift toward social good is evident in campaigns across the West, aimed at addressing public health concerns and promoting social welfare. Such campaigns are characterized by messages tailored to segmented audiences for maximal impact (Smith, 2009).
Efforts to combat fake news include literacy campaigns, fact-checking, and pre-emptive collaborations, particularly before significant events, such as elections. These efforts aim to educate the public and counter misinformation (Dodda & Dubbudu, 2019). Media literacy campaigns improve citizens’ analytical skills, enabling critical engagement with media and technology (Dodda & Dubbudu, 2019). Similar to the way digital media literacy campaigns are executed in physical settings like schools, those conducted on digital platforms can efficiently and economically connect with their intended audiences. Governments may view these campaigns as educational initiatives, whereas companies might perceive them as forms of social marketing in their digital social advertising campaigns.
Social advertising focuses on enhancing the quality of life and facilitating social change, underscoring the importance of customer orientation, creativity, collective sensitivity, and competitive insight in crafting successful campaigns (Galan-Ladero & Alves, 2023). The use of advertising to address social issues, whether by state authorities, non-governmental organizations, or private entities, often entails the employment of fear-arousing appeals and humor to influence behavior change, particularly among demographics resistant to the advocated behaviors (Jäger & Eisend, 2013; Yılmaz & Ozturk, 2013). Furthermore, the credibility of the message source, alongside the delivery style (narrative or non-narrative), plays a key role in advertising effectiveness, influencing consumer attitudes and behaviors (Haley, 1996; Rathee & Milfeld, 2024; Yang et al., 2015). Celebrities and experts endorsing social causes further enhance campaign effectiveness by leveraging their perceived attractiveness, expertise, and trustworthiness (Kerr & Richards, 2021; Vraga et al., 2022).
The effectiveness of social advertising depends on how it integrates the components of creativity. Previous studies in the field of advertising creativity have explored its influence across cognitive, affective, and conative aspects (Feng & Xie, 2019), highlighting the unique roles of novelty and message relevance. Novelty can spark initial interest and improve short-term recall but might undermine the brand itself, whereas relevance augments long-term brand retention and information processing (Ang et al., 2014; Sheinin et al., 2011; Smith et al., 2008). In the emotional realm, novelty increases ad appreciation, potentially fostering positive attitudes, while relevance reinforces brand beliefs (Sheinin et al., 2011). Creatively integrating novelty with relevance tends to elevate ad appreciation further (Ang et al., 2014; Banerjee & Pal, 2023). Finally, cognitive and emotional reactions serve as precursors to behavioral intentions, with creative advertisements often leading to stronger purchase intentions (Smith et al., 2008). Likewise, existing studies (Feng & Xie, 2019) on consumer reactions to advertising explore three behavioral dimensions: cognitive (awareness and knowledge), affective (emotions and attitudes), and conative (intentions and actions). These studies suggest that consumer reactions to ads follow a sequential pattern: beginning with cognitive engagement (learning), moving to emotional response (feeling), and culminating in conative actions (doing) (Lawrence et al., 2013; Park et al., 2008; van Reijmersdal et al., 2010).
Audiences play a crucial role in the fake news ecosystem, influencing its spread and reaction, which in turn underscores the importance of understanding audience dynamics (Dodda & Dubbudu, 2019). Audiences’ emotional reaction to social advertising is crucial, with various studies indicating that emotional response drives consumer behavioral intentions more than rational variables (Kim, 2011; Morris et al., 2016). This understanding is fundamental in evaluating the success of social advertising communications and fostering emotional connections with the audience (Morris et al., 2016).
This study’s main objective is to explore consumer reactions to misinformation awareness campaigns. It discusses how online users respond to such campaigns by examining user-generated comments and their impact on the consumer-brand relationship and by reviewing some of the specific arguments online users make about the campaign. The paper analyzes the comments generated by online users on seven social advertising videos of Google’s #LetsInternetBetter campaign through human-centered thematic analysis. In doing so, it aims to expand knowledge on consumer responses to misinformation-related campaigns. Some recent publications have used human-based or machine-based analysis of user comments on woke advertising (Feng et al., 2021), femvertizing (Feng et al., 2019; Lima & Casais, 2021), consumer-generated advertising (Ertimur & Gilly, 2012), viral advertising (Blichfeldt & Smed, 2015), branded flash mobs (Grant et al., 2015), influencer marketing (Janiques de Carvalho & Marôpo, 2020), branded content (Waqas et al., 2020), Coronavirus Disease 2019 advertisements (Feng & Chen, 2022), and augmented reality out-of-home advertising (Feng & Xie, 2019). To the best of the authors’ knowledge, however, no previous study has analyzed user-generated comments on a misinformation-oriented social advertising campaign; the literature thus lacks research on how consumers respond to awareness-raising campaigns targeting misinformation. As the first study to analyze user-generated comments on such a campaign, this article provides valuable findings on the effectiveness of misinformation-focused awareness campaigns from the consumer’s perspective.
Methodology
This study examines the impact of misinformation and the effectiveness of awareness campaigns in addressing it. Leveraging social networks and online communities, which provide a rich data source for authentic analysis of consumer opinions on advertising campaigns (Feng et al., 2019; Feng et al., 2021; Reagle, 2015), this research highlights the value of analyzing unprompted, unbiased online discussions (Grant et al., 2015; Waqas et al., 2020). Employing thematic analysis of comments on Google’s #LetsInternetBetter campaign videos on YouTube, this approach offers insights into consumer attitudes and the campaign’s impact, underscoring the importance of direct engagement in understanding and combating misinformation (Kousha et al., 2012; Pace, 2008).
The study focuses on Google’s #LetsInternetBetter campaign on YouTube to analyze its role in misinformation and awareness efforts. This campaign was chosen for its relevance to misinformation challenges (Graham, 2017; Metaxa-Kakavouli & Torres-Echeverry, 2017) under the guidance of the purposive sampling method. This study investigates the campaign’s effectiveness in shaping user perceptions on combating false information. A search across Facebook, Instagram, and Twitter revealed that only YouTube provided ample, rich data for analysis (Waqas et al., 2020), highlighting the platform’s utility in offering detailed user responses to such initiatives.
Table 1 provides key details about the campaign videos, including view counts, likes, and comments at the time the data was collected. The #LetsInternetBetter campaign consists of seven unique ads. Each video serves a distinct educational purpose, utilizing Google’s Fact Check Explorer and other tools to debunk common misinformation topics, from celebrity clones to dubious online deals. The videos encourage critical thinking and fact-checking among viewers, highlighting the importance of verifying information through reliable sources like Google Search and Images. This approach not only educates on specific myths but also promotes a broader awareness of the prevalence of misleading information online.
This study involves collecting 994 comments from Google’s #LetsInternetBetter campaign videos on YouTube and analyzing them using MAXQDA 2020. Of these, 82 unclear comments were excluded, leaving 912 comments that reflected commenters’ varied comprehension of the #LetsInternetBetter campaign ads. The authors applied the constant comparative method for coding, progressing through primary and secondary cycles to refine and interpret codes into themes (Hollebeek, 2011; Tracy, 2020), aligning with practices for thematic analysis (Braun & Clarke, 2006). This thematic analysis uncovered significant patterns in viewer engagement and reaction. Ethical considerations were paramount; commenter anonymity was preserved, and direct interaction with the community was avoided to prevent influencing the discourse. Direct quotes from the comments were used to illustrate the findings, ensuring that the presentation of data remained respectful of community privacy and integrity (De Koster & Houtman, 2008; Kozinets, 2002). This approach facilitated a comprehensive analysis of public engagement with the campaign, highlighting patterns in viewer responses.
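The screening-and-coding workflow described above (collect comments, exclude unclear ones, tally codes into themes) can be sketched programmatically. The fragment below is an illustrative sketch only: the sample comments, the `is_unclear` heuristic, and the code labels are invented for demonstration, not the authors' actual MAXQDA procedure, which relied on human judgment.

```python
from collections import Counter

# Hypothetical sample of coded comments (the real study worked with 994
# YouTube comments, coded manually via the constant comparative method).
raw_comments = [
    {"text": "Cool ad, very helpful", "codes": ["ad_affect_positive"]},
    {"text": "???", "codes": []},  # unclear: no code could be assigned
    {"text": "These ads make me want to use bing", "codes": ["brand_behavior_negative"]},
    {"text": "Love google", "codes": ["brand_affect_positive"]},
]

def is_unclear(comment):
    # Stand-in for the human screening step: comments to which no code
    # could be assigned were excluded from analysis (82 of 994 in the study).
    return len(comment["codes"]) == 0

analyzable = [c for c in raw_comments if not is_unclear(c)]

# Tally code frequencies across retained comments, mirroring the move
# from primary-cycle codes toward broader themes.
code_counts = Counter(code for c in analyzable for code in c["codes"])

print(len(analyzable))
print(code_counts.most_common())
```

In practice, qualitative coding software such as MAXQDA performs this bookkeeping interactively; the sketch merely makes the exclusion and tallying logic explicit.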
Results
Results were categorized into ad- and brand-related responses and were further divided into cognition, affect, and behavioral intentions. Expanding Reagle’s (2015) typology, commenters were classified into nine types. Male commenters appeared to predominate, although anonymity limited demographic inferences. The comments were primarily in English, with a variety of other languages, reflecting the campaign’s global reach. Emojis were commonly used to express opinions and mostly conveyed positive sentiments toward the ad videos or the brand, with the “red heart,” indicating love, being the most common, followed by “smiling face with smiling eyes” and “face with tears of joy.” However, some emojis expressed negative emotions, reflecting dissatisfaction or critical views toward the brand. This diversity in emoji usage highlights the complex reactions and interactions of the audience with the campaign content.
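The emoji-frequency pattern reported above can be illustrated with a small counting sketch. The comment strings below are invented examples, and the character-category heuristic is an assumption for illustration, not the study's actual procedure.

```python
from collections import Counter
import unicodedata

# Invented sample comments containing emojis similar to those reported.
comments = [
    "Love google❤",
    "Nice 🥰",
    "Cool 😎",
    "I wish there were no ads😔",
    "❤❤ great ad",
]

def extract_emojis(text):
    # Rough heuristic: characters in Unicode category 'So' (Symbol, other)
    # cover most emojis and dingbats such as the heavy black heart.
    return [ch for ch in text if unicodedata.category(ch) == "So"]

emoji_counts = Counter(e for c in comments for e in extract_emojis(c))
print(emoji_counts.most_common(3))
```

A production analysis would use a dedicated emoji library (the `So` category misses some sequences, such as emojis built with modifiers or variation selectors), but the sketch captures the basic tallying idea.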
Typology of Consumer Responses
Consumer reactions to Google’s #LetsInternetBetter campaign were divided into two primary categories, following the framework proposed by Feng and Xie (2019).
Ad-Related Comments
Ad-related comments were analyzed across three subthemes: cognition, affect, and behavioral intention. Commenters noted the ads’ executional elements (like verbal cues, music, characters, narrative, and story elements) as well as message claims, offering both positive and negative feedback. Some praised the ads’ messages as “helpful” or necessary, while others criticized them as “lie,” “misleading,” or “false.” A notable comment (“A slug-selling company trying to sell their slugs isn’t doing anything ‘nefarious’”) challenged Google’s portrayal of misleading content for profit, suggesting that not all promotional efforts are deceitful. Some viewers admitted confusion over the ads’ messages, saying “don’t get it.”
Viewers showed a wide range of emotional reactions to the campaign videos, with some expressing positive sentiments toward the ads, calling them “Cool 😎,” “good stuff,” and “Nice 🥰” and others voicing negative opinions, describing the ads as “horrible,” “unnecessary,” and “beyond cringe.” These reactions were influenced by various factors, including the perceived intent of the ads (“brainwashing”) and their executional elements like narration, animation, and story elements. Emojis were frequently used to convey feelings, further highlighting the emotional engagement with the ads (Bai et al., 2019). Conversations among commenters also reflected a mix of positive and negative sentiments. A sample dialogue between two commenters is given below:
- “only 46 likes 💀”
+ “now 47”
The comments reveal mixed reactions to the ads’ potential impact on digital health, especially in societies like India and Pakistan, where susceptibility to misinformation is high (Dodda & Dubbudu, 2019). Appreciation for the ads was conveyed through expressions of gratitude, while some criticized the prevalence of ads (“I wish there [were] no ads😔😕☹”), humorously or seriously expressing their annoyance (“Anyone came here because of those ads?”) or indifference. This diversity in feedback underscores the variation in viewer engagement and in perceptions of the campaign’s effectiveness and intrusiveness. The feedback also highlights that this intrusiveness negatively affected attitudes and intentions toward both the advertising and the advertised brand (Goodrich et al., 2015; McCoy et al., 2008).
Brand-Related Comments
Brand-related comments reflected a mix of perceptions about Google’s impact and comparison with competitors. Some praised Google’s contributions (“all the good Google has done for”) and indispensability (“where people would be without them is nowhere”), while others compared Google with alternatives like DuckDuckGo, citing concerns about quality (“that app doesn’t work”) and security (“Android devices without Google Play lead to a proliferation of rogue software and malware”). Commenters also appreciated Google’s comprehensive search capabilities (“Thanks Google I can search what I want”), illustrating diverse consumer attitudes toward the brand’s role and effectiveness.
Consumers’ feelings toward Google varied widely. Some expressed admiration and affection, referring to Google with positive terms and emojis (“Love google❤,” “Google💗,” “a great friend,” “father,” “dude”), while others harshly criticized the company, using negative descriptions and emojis to convey their disapproval (“so ugly like f*ck,” “so goofy,” “damn,” “f*cking dump”). These mixed reactions to the ad campaign reflect the diverse opinions people hold about Google.
Behavioral intentions toward the brand, as a result of the campaign videos, were predominantly negative. Some indicated a preference for alternative services like Bing or Firefox (“These ads make me want to use bing,” “Switch to Firefox”), attributing their shift away from Google to the campaign itself (Alwreikat & Rjoub, 2020; Rejón-Guardia & Martínez-López, 2014). These commenters suggested that the ads contributed to growing negative sentiments toward the brand (“These ads are the exact reason why I’m beginning to hate google”).
Commenter Typology
Reagle’s (2015) commenter typology consists of “reviewers,” “likers,” “haters,” “manipulators,” and “critics.” This study expands on that by identifying four additional types: “socializers,” “help-seekers,” “inquirers,” and “demanders,” broadening our understanding of online user engagement and interactions within digital campaigns.
Reviewers in the study are defined as “commenters who share their insights to assist others in understanding a topic or making decisions.” They contribute knowledge for various reasons, such as explaining the beneficial properties of slugs (“has antioxidant & antimicrobial properties and is helpful for skin barrier recovery”) in one of the videos or highlighting the use of bots by Google and YouTube to simulate positive engagement (“literally commented on every Google post since her inception”), thereby educating and informing the community. This supports Reagle’s (2015) assertion regarding the tendency of users to leave comments under posts to share their knowledge for the benefit of others. Meanwhile, the comments calling attention to the use of bots by Google and YouTube highlight skepticism toward these brands (Harris, 2023; Metaxa-Kakavouli & Torres-Echeverry, 2017).
Likers of the campaign were categorized into those favoring the brand, the ads, or other comments. Those appreciating the ads outnumbered brand enthusiasts, challenging the findings of Banerjee and Pal (2023), which reveal that positive attitudes toward ads have the potential to spill over to the advertised brand.
Haters expressed stronger dislike toward the brand (“LET ME OUT PLZ I’M GONNA KILL YOU GOOGLE,” “I HATE THE GOOGLE CORPORATION I HATE THE GOOGLE CORPORATION”) than toward the ads (“I freaking HATE THESE ADS!!!”), with some commenters explicitly venting their frustration. Moreover, some commenters criticized others’ comments with sarcastic or dismissive remarks (“sounds like something a walking tree denier would say🥱”), while a few acknowledged insightful points made by others (“Good point”), highlighting the conflicts and arguments among consumers in this online environment (Dineva & Daunt, 2023; Dubovi & Tabak, 2020; Kienpointner, 2018). Meanwhile, manipulators were identified for spreading positive brand sentiments, often suspected to be bots or “sockpuppets” (Reagle, 2015) due to repetitive comments across videos, highlighting complex viewer interactions with the content and objectives of the campaign.
Criticism toward Google and its #LetsInternetBetter campaign was widespread, touching on issues from misinformation and data privacy to allegations of censorship and propaganda. Commenters accused Google of prioritizing its interests and collaborating with power structures to manipulate society (Metaxa-Kakavouli & Torres-Echeverry, 2017), resulting in brand hate. This corresponds to the findings of previous studies (Hegner et al., 2017; Shoja & Vaziri, 2018; Zhang & Laroche, 2020), which allege that brand hate stems from past negative experiences and conflicts with the brand. The use of bots or sockpuppets to inflate engagement metrics was also highlighted. Ads were criticized for being unavoidable (Banerjee & Pal, 2023) and not engaging enough (Ang et al., 2014), while society was criticized for gullibility (“stupid enough to believe that trees can walk and having to fact check it 💀💀💀”). This spectrum of criticisms reflects deep-seated concerns about digital ethics, privacy, and the role of tech giants in shaping public discourse.
Socializers engage with the campaign by seeking connections and sharing personal insights. They greet others, respond to comments to foster interaction, and openly share personal details like names, nationalities, and interests. This behavior reflects the social aspect of digital platforms, where users engage with content and seek to establish community bonds (Dineva & Daunt, 2023).
Viewers also use comments to seek assistance with personal issues, ranging from technical support for their accounts (“Please, Help me my account Hacking”) and devices (“Salam alaikum I’m from Kyrgyzstan I can’t restore my mail as my phone is broken and I bought a new phone and I can’t enter my email”) to requests for help with significant life challenges, such as funding for medical expenses (“Sorry if I bothered you, but is there someone generous enough to help with my son’s surgery costs?”). The latter is ironic, if genuine, because the campaign seeks to raise awareness not only of mis/disinformation but also of online fraud. If no genuine need exists, such comments can be read in two ways: (1) as mocking the campaign’s social-good intent or (2) as exploiting the campaign’s goodwill for fraudulent ends. Either reading risks harming the brand image, so the brand should respond to such comments. Yet Google did not engage in dialogue on its YouTube page, which creates the impression that the campaign was intended less to promote society’s well-being than to deflect public criticism over claims that Google and YouTube themselves help disseminate fake news, as many commenters skeptically noted (Metaxa-Kakavouli & Torres-Echeverry, 2017). Whatever the case, these comments illustrate the diverse ways in which individuals use digital platforms not only for engagement but also to seek support and solutions from the community (Kozinets, 2002).
Inquirers use comments to express their curiosity or gain understanding of unfamiliar topics. They pose various questions, from language-specific queries to requests for explanations of the content (“What does it mean,” “Tem em português? [Is it in Portuguese?]”) or navigation instructions (“I got redirected to this from an app bro what”). This behavior highlights the role of digital platforms as spaces for learning and information exchange (Alajmi, 2012; Reagle, 2015), with users leveraging community interactions to fill their knowledge gaps or clarify confusion.
Demanders focus on requests for the brand to introduce new features or support. Requests included the addition of specific emojis (“Please Add An Elephant In Emoji Kitchen”) and language support for non-English speakers (“Lo quiero en español no entiendo [I want it in Spanish I don’t understand]”). These comments reflect the users’ expectations for customization and accessibility in digital platforms, showcasing a desire for more personalized and inclusive user experiences.
Conclusion and Discussion
Misinformation and fake news, increasingly scrutinized in the digital era, distort critical decision-making and can lead to severe consequences, making efforts to mitigate their influence crucial. This paper extends research in this area by analyzing consumer reactions to Google’s initiative against misinformation and fake news, the #LetsInternetBetter campaign. The results show that ad-related comments often centered on aspects of the advertisements, such as their executional components and the claims they made, sparking a broad spectrum of cognitive and emotional responses. Likewise, brand-related comments conveyed views on Google’s influence and role, displaying a mix of endorsement and criticism. The behavioral intentions expressed by commenters, who either supported or outright rejected the brand, highlight the significant impact that digital campaigns can have on shaping consumer perspectives and actions.
Furthermore, this study underscores the pivotal role of digital platforms in shaping public perceptions of misinformation. The findings reveal a spectrum of reactions, from enthusiastic endorsements to stark criticisms, mirroring the complexity of digital discourse surrounding misinformation. This diversity aligns with literature suggesting that consumers’ engagement with digital content is deeply influenced by their prior beliefs, digital literacy levels, and trust in the platform (Howard et al., 2021; Popescu, 2020).
This research expands on Reagle’s (2015) typology by introducing new commenter categories, enriching our understanding of digital engagement. This innovation offers a nuanced lens through which to view the interactions between social media campaigns and their audiences, revealing that campaigns must navigate a delicate balance between raising awareness and fostering positive brand associations.
This study highlights the dual-edged nature of digital campaigns in combating misinformation. While aiming to educate and engage, such campaigns may inadvertently polarize or alienate portions of their audience (Buchanan, 2020; Silva et al., 2023). This underscores the importance of crafting messages that resonate across diverse audience segments, a challenge that demands nuanced understanding and strategic finesse.
Furthermore, the use of emojis and different languages in comments points to the emotional and cultural layers of digital communication. This aspect, reflecting both global reach and personal expression, emphasizes the need for campaigns to address cognitive as well as affective dimensions of misinformation (Morris et al., 2016). The observed demands for new features and language support indicate a desire for more personalized and accessible digital experiences. This feedback should inform future campaigns, suggesting a shift toward more user-centric approaches in the design of digital content (Anuradha et al., 2023; Voola et al., 2022).
The analysis of the #LetsInternetBetter campaign also offers practical implications for social marketers and advertisers. Marketers should craft campaigns that resonate across demographic and cultural groups, acknowledging the varied ways audiences perceive and react to content. This is especially important for international companies such as Google and YouTube, whose online campaigns can reach diverse users across countries and cultures. Online social advertising campaigns should also be multilingual, aligned with the consumer segmentation strategy. Likewise, recognizing the emotional and cultural significance of emojis in digital communication can help create more engaging and relatable content: most of the comments included emojis to express or reinforce consumers’ attitudes and feelings toward the ad, the brand, the social issue, and other elements. Integrating emojis into the design of social advertising campaigns may therefore enhance interaction between users and the campaign itself (Cavalheiro et al., 2022; Yakın & Eru, 2017).
Social marketing and advertising campaigns typically face skepticism and criticism from consumers, who often believe such campaigns are designed by companies or other advertisers to brainwash consumers and salve corporate consciences (Agalarova et al., 2022; Delvaux & Van den Broeck, 2023; Guess et al., 2020; Hastings & Domegan, 2014; Lee & Kotler, 2020; Mueller, 2023; Salgado Sequeiros et al., 2022; Tapan, 2022). This study, which reveals high skepticism toward the brand because of its own operations related to mis/disinformation and fake news, offers insights for social marketers and advertising professionals designing campaigns: effective strategies must address not only mis/disinformation and fake news but also target audiences’ perceptions of the brand itself. This can be achieved by attending to consumers’ demands and feedback and by interacting with them online rather than leaving the comment space unattended. Furthermore, campaigns should aim not only to inform but also to improve audiences’ ability to critically evaluate information, supporting broader efforts to combat misinformation (Guess et al., 2020; Hameleers, 2022; Mansoor, 2024).
This study examines consumer responses to Google’s #LetsInternetBetter campaign to understand how audiences cognitively, emotionally, and behaviorally engage with misinformation awareness advertising in a platform-based environment. The findings demonstrate that such campaigns generate highly polarized reactions, reflecting both support for media literacy initiatives and deep skepticism toward platform-led interventions. Although this study is limited to YouTube comments from a single campaign and relies on qualitative human coding, it offers several practical implications. For future research, scholars are encouraged to investigate such campaigns across different platforms and cultural contexts, employ mixed methods that combine human- and machine-based analysis, and adopt longitudinal designs to identify changes in audience perceptions over time.
For consumers, the results highlight the importance of approaching both online content and platform-driven awareness campaigns with critical awareness. Actively questioning message intent, verifying information through independent sources, and engaging in reflective rather than emotionally driven online interactions can help users more effectively navigate misinformation. Overall, the study underscores that tackling misinformation requires not only responsible platform practices but also informed and critically engaged digital citizens.
Ethical Statement
It is hereby declared that all rules specified in the Higher Education Institutions Scientific Research and Publication Ethics Directive were followed in this study.