“#LetsInternetBetter” or “Less Internet Better”: Consumer Responses to Google’s #LetsInternetBetter

Trakya Univ J Soc Sci. Published online 27 February 2025.
1. Akdeniz University, Faculty of Communication, Department of Advertising, Antalya, Türkiye
2. Akdeniz University, Faculty of Communication, Department of Public Relations and Promotion, Antalya, Türkiye
Received Date: 01.05.2025
Accepted Date: 15.01.2026
E-Pub Date: 27.02.2025

ABSTRACT

This research article examines the effectiveness of Google’s #LetsInternetBetter social advertising campaign in countering misinformation by analyzing consumer reactions through YouTube comments. It capitalizes on social networks as a rich data source for genuine insights into consumer opinions regarding misinformation and awareness campaigns. Employing a qualitative research method through thematic analysis of 994 comments in MAXQDA 2020, this research categorizes consumer responses into ad- and brand-related themes, further dividing them into cognition, affect, and behavioral intentions. It introduces an expanded typology of commenter types, reflecting a broad spectrum of public engagement. Findings reveal varied reactions, ranging from positive endorsements to critical skepticism, highlighting the campaign’s global outreach. The research underscores the pivotal role of direct engagement in misinformation mitigation efforts and emphasizes digital campaigns’ potential to polarize audiences. Furthermore, it highlights the need for nuanced, culturally attuned communication strategies in awareness campaigns to navigate the complexities of public perceptions. This study contributes to understanding how digital platforms influence public discourse on misinformation and offers insight for crafting more effective and inclusive social advertising campaigns.

Keywords:
Misinformation, advertising, consumer response, Google, thematic analysis

Introduction

The rise of the digital age has significantly altered information dissemination, leading to the proliferation of fake news, which encompasses both accidental misinformation and intentional disinformation. Dubbed “The Disinformation Age,” this era has witnessed a significant increase in the production of fake news. Recognized by entities such as the World Economic Forum as a societal hazard (Del Vicario et al., 2016), fake news blurs truth and falsehood, affecting social cohesion and democratic discourse.

Combating fake news requires a collective approach (Dodda & Dubbudu, 2019) that includes enhancing digital literacy, improving regulatory actions, and engaging social media platforms to counter misinformation spread. Pivotal in distributing information, internet companies employ various strategies, such as social advertising campaigns, to increase public awareness and improve media literacy (Bak-Coleman et al., 2022). Analyzing these campaigns based on consumer responses and interactions offers insights into how these issues are framed and the effectiveness of such initiatives (Chasi & Omarjee, 2014). Understanding the diverse motivations behind consumer engagement with these campaigns through their comments provides valuable perspectives for social marketers, revealing the complex thoughts and feelings of the audience regarding misinformation and the efforts to counteract it.

Previous research has extensively documented the dynamics, consequences, and counterstrategies of misinformation in digital environments. Studies have shown that false information, due to its emotional appeal, novelty, and algorithmic amplification, spreads faster and more broadly than factual content on social media (Vosoughi et al., 2018). Scholars have also emphasized the role of digital platforms not only as passive conduits but also as active gatekeepers shaping public discourse through recommendation systems and advertising infrastructures (Gillespie, 2018; Napoli, 2019). In response to growing public concern, various interventions have been examined, including fact-checking labels (Pennycook et al., 2020), media literacy initiatives (Guess et al., 2020), and platform-led awareness campaigns, which are aimed at fostering critical information consumption. However, empirical findings reveal that such interventions often generate mixed outcomes, sometimes increasing skepticism, resistance, or perceived manipulation among audiences (Nyhan & Reifler, 2015; van der Linden et al., 2020). Within advertising and marketing scholarship, prior studies have predominantly focused on consumer responses to socially framed campaigns, such as femvertising, corporate activism, and health communication, demonstrating that audience reactions are deeply influenced by perceptions of sincerity, credibility, and corporate intent (Mukherjee & Althuizen, 2020; Vredenburg et al., 2020). Despite this growing body of work, limited attention has been paid to misinformation-focused social advertising campaigns initiated by global digital platforms, particularly regarding how audiences interpret, negotiate, and contest such messages through participatory comment cultures.
The present study attempts to address this gap by situating Google’s #LetsInternetBetter campaign within this evolving research landscape and examining consumer responses as a critical site for understanding platform credibility and public engagement in The Disinformation Age.

Google’s #LetsInternetBetter campaign, launched on its YouTube platform, stands as a testament to the power of digital platforms in promoting safer and more responsible internet usage. This research article examines the effectiveness of the campaign by analyzing consumer comments on the ad campaign videos. Through a detailed examination of the consumers’ ad- and brand-related comments, the study offers insights into consumer perceptions and engagement. Moreover, it expands upon Reagle’s (2015) commenter types by identifying new categories of commenters and offering a fresh perspective on audience interaction with social advertising. This analysis contributes to the understanding of digital marketing strategies responsive to fake news and mis/disinformation and highlights the dynamics of consumer engagement in the context of social advertising campaigns.

Conceptual Background

Fake News, Mis/Disinformation, Reasons, and Combating Strategies

The proliferation of mis/disinformation in the digital age, marked by the rapid spread of false information online, poses significant societal challenges (Dell, 2018; Howard et al., 2021). Recognized by the World Economic Forum as a major societal threat, fake news undermines the optimism surrounding the 21st century, often called the Information Age, as an era of global cooperation, leading to what could more accurately be termed The Disinformation Age (Del Vicario et al., 2016; Guilbeault, 2018). This issue is exacerbated by modern technologies that enable content dissemination at unprecedented speeds, hindering the protection of human rights and peace (United Nations, 2022).

The intent to deceive distinguishes disinformation from misinformation, with disinformation being the more malicious of the two (Hameleers et al., 2022). Mis/disinformation often mimics legitimate news, blending truth and falsehood to exploit human biases (Popescu, 2020). The intentional deception inherent in disinformation aims to manipulate, leveraging modern technologies to achieve the author’s goals (Bontridder & Poullet, 2021). The virality and ambiguity of online content can lead to information cascades in which the accuracy of shared information is often not guaranteed, thus serving the interests of those seeking political or financial gain (Bastick, 2021). Disinformation can therefore have harmful real-world effects, especially on vulnerable groups like children (Howard et al., 2021). Yet overreaction to false information threatens freedoms, including freedom of expression, and highlights the dilemma faced by internet and social media companies in balancing human rights with combating fake news (Aswad, 2020).

The digitalization of news and the networked nature of the internet have amplified the spread and manipulation of information, affecting public discourse during crises and elections (Dodda & Dubbudu, 2019; Qian et al., 2022; Shapovalova, 2020; Wang et al., 2022). Social media users increase the spread of disinformation through actions like sharing and liking, which, in turn, trigger platform algorithms to further disseminate the content (Buchanan, 2020). Thus, this process is facilitated by algorithms that prioritize engagement over accuracy (Buchanan & Benson, 2019; Howard et al., 2021). Furthermore, in this context, the phenomenon of “organic reach” is crucial as user interactions boost the visibility of disinformation (Buchanan & Benson, 2019). Social media also enables paid promotion of content, further enhancing its reach (Silva et al., 2023; Vosoughi et al., 2018).

Group dynamics and the desire for social acceptance motivate individuals to share unverified information, with trust in the source playing a significant role in the likelihood of engagement (Buchanan & Benson, 2019; Dodda & Dubbudu, 2019). Moreover, personality traits and risk propensities of individuals affect their likelihood of engaging with disinformation, contributing to its spread (Buchanan & Benson, 2019). Furthermore, mis/disinformation spreads through bots, algorithms, and coordinated groups, reflecting a complex ecosystem influenced by political, social, and technological factors (Howard et al., 2021; Shekhar, 2018). Artificial intelligence and algorithms exacerbate the problem by creating realistic fake content and targeting it to susceptible audiences (Bontridder & Poullet, 2021; Howard et al., 2021).

To combat the spread of fake news, a multifaceted approach involving governments, non-profits, internet and social media companies, educators, and individuals is being implemented. Enhancing digital media literacy is a core strategy, with initiatives aimed at training news consumers to identify misinformation and understand its implications (Buchanan, 2020; Guess et al., 2020).

This includes large-scale training programs, such as WhatsApp’s effort to educate 100,000 people in India on identifying misinformation through social media posts and in-person events (Guess et al., 2020). Similarly, WhatsApp has launched campaigns and ads to educate users about misinformation, especially during critical times like elections (Shekhar, 2018). These platforms have also embraced labeling misinformation and working with fact-checking organizations to curb the spread of harmful content, such as anti-vaccination misinformation (Howard et al., 2021). The challenges posed by disinformation are also being tackled through various means, including the deplatforming of “superspreaders” to reduce the reach of misinformation, pre-moderation by content providers to protect children from misinformation, and the integration of parental controls in digital platforms (Howard et al., 2021). Furthermore, the European Commission has encouraged internet companies to adopt voluntary codes of practice for greater transparency and accountability in handling online disinformation (Mortera-Martinez, 2019).

Education plays a pivotal role in fighting misinformation, with media literacy education positioned as a crucial tool for empowering individuals to discern real from fake news. This includes teaching critical thinking skills and making media literacy a required part of educational systems (Dame Adjin-Tettey, 2022; Dell, 2018). To address misinformation, strategies such as correction, inoculation, and pre-bunking are employed to flag false content and forewarn people against it (Qian et al., 2022). Empirical evidence supports the efficacy of media literacy interventions in enhancing individuals’ ability to distinguish between accurate and false news, highlighting the importance of targeted education and training (Guess et al., 2020; Hameleers, 2022). Thus, media literacy education aims to create a more informed and media-savvy population that is less susceptible to disinformation campaigns.

Social Marketing, Advertising, and Fake News

In addressing the burgeoning challenges posed by corporate scandals, societal inequalities, and environmental issues, the “Better Marketing for a Better World” approach emerges as a critical framework, emphasizing the integration of ethical considerations into marketing strategies (Voola et al., 2022). Today’s consumers are increasingly aligning their patronage with businesses that reflect their ethical values and are transparent about their commitment to human rights and sustainable practices (Anuradha et al., 2023; Kılıç Taran & Akbayır, 2022). With Generation Z and Millennials showing an increased sense of social justice, brands like Nike, WhatsApp, and Airbnb have successfully engaged these demographics through sincere social campaigns, demonstrating the potential of social marketing to resonate authentically with consumers (Mueller, 2023). However, the efficacy of social marketing goes beyond mere messaging, requiring a holistic approach that includes tangible support for underserved populations, a willingness among the economically disadvantaged to invest in valued services, and leveraging their creativity and entrepreneurial spirit (Smith, 2009).

As social marketing navigates the challenges posed by the pandemic, climate change, social inequalities, and digital technologies, it focuses on targeting consumers with “social goods” to spearhead societal improvements (Chasi & Omarjee, 2014; Flaherty et al., 2021). Modern campaigns leverage segmentation and demonstrate the value of innovations, often manufacturing consent through asymmetrical power dynamics between marketers and audiences (Chasi & Omarjee, 2014).

Social advertising, which is an essential component of broader social marketing efforts, seeks to transcend traditional commercial advertising by embedding a social purpose within brand narratives, enabling deeper consumer engagement (Anker et al., 2022). This shift toward social good is evident in campaigns across the West, aimed at addressing public health concerns and promoting social welfare. Such campaigns are characterized by messages tailored to segmented audiences for maximal impact (Smith, 2009).

Efforts to combat fake news include literacy campaigns, fact-checking, and pre-emptive collaborations, particularly before significant events such as elections. These efforts aim to educate the public and counter misinformation (Dodda & Dubbudu, 2019). Media literacy campaigns improve citizens’ analytical skills, enabling critical engagement with media and technology (Dodda & Dubbudu, 2019). Like media literacy campaigns run in physical settings such as schools, those conducted on digital platforms can reach their intended audiences efficiently and economically. Governments may view these campaigns as educational initiatives, whereas companies might perceive them as forms of social marketing within their digital social advertising campaigns.

Social advertising focuses on enhancing the quality of life and facilitating social change, underscoring the importance of customer orientation, creativity, collective sensitivity, and competitive insight in crafting successful campaigns (Galan-Ladero & Alves, 2023). The use of advertising to address social issues, whether by state authorities, non-governmental organizations, or private entities, often entails the employment of fear-arousing appeals and humor to influence behavior change, particularly among demographics resistant to the advocated behaviors (Jäger & Eisend, 2013; Yılmaz & Ozturk, 2013). Furthermore, the credibility of the message source, alongside the delivery style, whether narrative or non-narrative, plays a key role in advertising effectiveness, influencing consumer attitudes and behaviors (Haley, 1996; Rathee & Milfeld, 2024; Yang et al., 2015). Celebrities and experts endorsing social causes further enhance campaign effectiveness by leveraging their perceived attractiveness, expertise, and trustworthiness (Kerr & Richards, 2021; Vraga et al., 2022).

The effectiveness of social advertising depends on how it integrates the components of creativity. Previous studies in the field of advertising creativity have explored its influence across cognitive, affective, and conative aspects (Feng & Xie, 2019), highlighting the unique roles of novelty and message relevance. Novelty can spark initial interest and improve short-term recall but might undermine the brand itself, whereas relevance augments long-term brand retention and information processing (Ang et al., 2014; Sheinin et al., 2011; Smith et al., 2008). In the emotional realm, novelty increases ad appreciation, potentially fostering positive attitudes, while relevance reinforces brand beliefs (Sheinin et al., 2011). Creatively integrating novelty with relevance tends to elevate ad appreciation further (Ang et al., 2014; Banerjee & Pal, 2023). Finally, cognitive and emotional reactions serve as precursors to behavioral intentions, with creative advertisements often leading to stronger purchase intentions (Smith et al., 2008). Likewise, existing studies (Feng & Xie, 2019) on consumer reactions to advertising explore three behavioral dimensions: cognitive (awareness and knowledge), affective (emotions and attitudes), and conative (intentions and actions). These studies suggest that consumer reactions to ads follow a sequential pattern: beginning with cognitive engagement (learning), moving to emotional response (feeling), and culminating in conative actions (doing) (Lawrence et al., 2013; Park et al., 2008; van Reijmersdal et al., 2010).

Audiences play a crucial role in the fake news ecosystem, influencing its spread and reaction, which in turn underscores the importance of understanding audience dynamics (Dodda & Dubbudu, 2019). Audiences’ emotional reaction to social advertising is crucial, with various studies indicating that emotional response drives consumer behavioral intentions more than rational variables (Kim, 2011; Morris et al., 2016). This understanding is fundamental in evaluating the success of social advertising communications and fostering emotional connections with the audience (Morris et al., 2016).

This study’s main objective is to explore consumer reactions to misinformation awareness campaigns. It discusses how online users respond to such campaigns by examining user-generated comments and their impact on the consumer-brand relationship and by reviewing some of the specific arguments online users make about the campaign. The paper analyzes the comments generated by online users on seven social advertising videos of Google’s #LetsInternetBetter campaign through human-centered thematic analysis. In doing so, it aims to expand knowledge on consumer responses to misinformation-related campaigns. Some recent publications have used human-based or machine-based analysis of user comments on woke advertising (Feng et al., 2021), femvertising (Feng et al., 2019; Lima & Casais, 2021), consumer-generated advertising (Ertimur & Gilly, 2012), viral advertising (Blichfeldt & Smed, 2015), branded flash mobs (Grant et al., 2015), influencer marketing (Janiques de Carvalho & Marôpo, 2020), branded content (Waqas et al., 2020), Coronavirus Disease 2019 advertisements (Feng & Chen, 2022), and augmented reality out-of-home advertising (Feng & Xie, 2019). To the best of the authors’ knowledge, however, no previous studies analyze user-generated comments on a misinformation-oriented social advertising campaign; the literature thus lacks research on how consumers respond to awareness-raising campaigns targeting misinformation. As the first study to analyze user-generated comments on such a campaign, this article provides valuable findings on the effectiveness of awareness-raising advertising campaigns from the perspective of consumers.

Methodology

This study examines the impact of misinformation and the effectiveness of awareness campaigns in addressing it. Leveraging social networks and online communities, which provide a rich data source for authentic analysis of consumer opinions on advertising campaigns (Feng et al., 2019; Feng et al., 2021; Reagle, 2015), this research highlights the value of analyzing unprompted, unbiased online discussions (Grant et al., 2015; Waqas et al., 2020). Employing thematic analysis of comments on Google’s #LetsInternetBetter campaign videos on YouTube, this approach offers insights into consumer attitudes and the campaign’s impact, underscoring the importance of direct engagement in understanding and combating misinformation (Kousha et al., 2012; Pace, 2008).

The study focuses on Google’s #LetsInternetBetter campaign on YouTube to analyze its role in misinformation and awareness efforts. The campaign was chosen through purposive sampling for its relevance to misinformation challenges (Graham, 2017; Metaxa-Kakavouli & Torres-Echeverry, 2017). This study investigates the campaign’s effectiveness in shaping user perceptions of combating false information. A search across Facebook, Instagram, and Twitter showed that only YouTube provided ample, rich data for analysis (Waqas et al., 2020), highlighting the platform’s utility in offering detailed user responses to such initiatives.

Table 1 provides key details about the campaign videos, including view counts, likes, and comments at the time the data was collected. The #LetsInternetBetter campaign consists of seven unique ads. Each video serves a distinct educational purpose, utilizing Google’s Fact Check Explorer and other tools to debunk common misinformation topics, from celebrity clones to dubious online deals. The videos encourage critical thinking and fact-checking among viewers, highlighting the importance of verifying information through reliable sources like Google Search and Images. This approach not only educates on specific myths but also promotes a broader awareness of the prevalence of misleading information online.

This study involves collecting 994 comments from Google’s #LetsInternetBetter campaign videos on YouTube and analyzing them using MAXQDA 2020. Of these, 82 unclear comments were excluded from the analysis, reflecting the varied comprehension of the #LetsInternetBetter campaign ads among commenters. The authors applied the constant comparative method for coding, progressing through primary and secondary cycles to refine and interpret codes into themes (Hollebeek, 2011; Tracy, 2020), aligning with established practices for thematic analysis (Braun & Clarke, 2006). This thematic analysis uncovered significant patterns in viewer engagement and reaction. Ethical considerations were paramount: commenter anonymity was preserved, and direct interaction with the community was avoided to prevent influencing the discourse. Direct quotes from the comments were used to illustrate the findings, ensuring that the presentation of data remained respectful of community privacy and integrity (De Koster & Houtman, 2008; Kozinets, 2002). This approach facilitated a comprehensive analysis of public engagement with the campaign.

Results

Results were categorized into ad- and brand-related responses and were further divided into cognition, affect, and behavioral intentions. Expanding Reagle’s (2015) typology, commenters were classified into nine types. Because of commenter anonymity, demographics could not be verified, although usernames suggested a predominance of male commenters. The comments were primarily in English, with a variety of other languages represented, reflecting the campaign’s global reach. Emojis were commonly used to express opinions. Most conveyed positive sentiments toward the ad videos or the brand, with the “red heart,” indicating love, being the most common, followed by “smiling face with smiling eyes” and “face with tears of joy.” Some emojis, however, expressed negative emotions, reflecting dissatisfaction or critical views of the brand. This diversity in emoji usage highlights the complex reactions and interactions of the audience with the campaign content.
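For illustration, the kind of emoji frequency tabulation reported above can be sketched programmatically. The snippet below is a minimal sketch, not the study’s MAXQDA workflow; the sample comments and target emoji set are hypothetical stand-ins for the 994-comment dataset:

```python
from collections import Counter

def count_emojis(comments, target_emojis):
    """Tally occurrences of each target emoji across a list of comments."""
    counts = Counter()
    for comment in comments:
        for emoji in target_emojis:
            counts[emoji] += comment.count(emoji)
    return counts

# Hypothetical sample comments echoing patterns reported in the findings
comments = ["Love google❤", "Google💗 ❤", "Nice 🥰", "only 46 likes 💀", "Cool 😎"]
counts = count_emojis(comments, ["❤", "💗", "💀", "🥰", "😎"])
# counts.most_common() then ranks the emojis by frequency (e.g., "red heart" first)
```

In practice, the frequency table for the full corpus was produced within MAXQDA; a script like this would serve only as an independent cross-check of the reported ranking.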

Typology of Consumer Responses

Consumer reactions to Google’s #LetsInternetBetter campaign were divided into two primary categories, following the framework proposed by Feng and Xie (2019).

Ad-Related Comments

Ad-related comments were analyzed across three subthemes: cognition, affect, and behavioral intention. Commenters noted the ads’ executional elements (like verbal cues, music, characters, narrative, and story elements) as well as message claims, offering both positive and negative feedback. Some praised the ads’ messages as “helpful” or necessary, while others criticized them as “lie,” “misleading,” or “false.” A notable comment (“A slug-selling company trying to sell their slugs isn’t doing anything ‘nefarious’”) challenged Google’s portrayal of misleading content for profit, suggesting that not all promotional efforts are deceitful. Some viewers admitted confusion over the ads’ messages, saying “don’t get it.”

Viewers showed a wide range of emotional reactions to the campaign videos, with some expressing positive sentiments toward the ads, calling them “Cool 😎,” “good stuff,” and “Nice 🥰” and others voicing negative opinions, describing the ads as “horrible,” “unnecessary,” and “beyond cringe.” These reactions were influenced by various factors, including the perceived intent of the ads (“brainwashing”) and their executional elements like narration, animation, and story elements. Emojis were frequently used to convey feelings, further highlighting the emotional engagement with the ads (Bai et al., 2019). Conversations among commenters also reflected a mix of positive and negative sentiments. A sample dialogue between two commenters is given below:

- “only 46 likes 💀”

+ “now 47”

The comments reveal mixed reactions to the ads’ potential impact on digital health, especially in societies like India and Pakistan, where susceptibility to misinformation is high (Dodda & Dubbudu, 2019). Appreciation for the ads was conveyed through expressions of gratitude, while some criticized the prevalence of ads (“I wish there [were] no ads😔😕☹”), humorously or seriously expressing their annoyance (“Anyone came here because of those ads?”) or indifference. This diversity in feedback underscores the varying viewer engagement and perceptions of the campaign’s effectiveness and intrusiveness. The feedback also highlights that this intrusiveness negatively affected attitudes and intentions toward both the advertising and the advertised brand (Goodrich et al., 2015; McCoy et al., 2008).

Brand-Related Comments

Brand-related comments reflected a mix of perceptions about Google’s impact and comparison with competitors. Some praised Google’s contributions (“all the good Google has done for”) and indispensability (“where people would be without them is nowhere”), while others compared Google with alternatives like DuckDuckGo, citing concerns about quality (“that app doesn’t work”) and security (“Android devices without Google Play lead to a proliferation of rogue software and malware”). Commenters also appreciated Google’s comprehensive search capabilities (“Thanks Google I can search what I want”), illustrating diverse consumer attitudes toward the brand’s role and effectiveness.

Consumers’ feelings toward Google varied widely. Some expressed admiration and affection, referring to Google with positive terms and emojis (“Love google❤,” “Google💗,” “a great friend,” “father,” “dude”), while others harshly criticized the company, using negative descriptions and emojis to convey their disapproval (“so ugly like f*ck,” “so goofy,” “damn,” “f*cking dump”). These mixed reactions to the ad campaign reflect the diverse opinions people hold about Google.

Behavioral intentions toward the brand, as a result of the campaign videos, were predominantly negative. Some indicated a preference for alternative services like Bing or Firefox (“These ads make me want to use bing,” “Switch to Firefox”), attributing their shift away from Google to the campaign itself (Alwreikat & Rjoub, 2020; Rejón-Guardia & Martínez-López, 2014). The consumers suggested that the ads contributed to growing negative sentiments toward the brand (“These ads are the exact reason why I’m beginning to hate google”).

Commenter Typology

Reagle’s (2015) commenter typology consists of “reviewers,” “likers,” “haters,” “manipulators,” and “critics.” This study expands on that by identifying four additional types: “socializers,” “help-seekers,” “inquirers,” and “demanders,” broadening our understanding of online user engagement and interactions within digital campaigns.

Reviewers in the study are defined as “commenters who share their insights to assist others in understanding a topic or making decisions.” They contribute knowledge for various reasons, such as explaining the beneficial properties of slugs (“has antioxidant & antimicrobial properties and is helpful for skin barrier recovery”) in one of the videos or highlighting the use of bots by Google and YouTube to simulate positive engagement (“literally commented on every Google post since her inception”), thereby educating and informing the community. This supports Reagle’s (2015) assertion that users leave comments under posts to share their knowledge for others’ benefit. Meanwhile, the comments calling attention to the use of bots by Google and YouTube highlight commenters’ skepticism toward these brands (Harris, 2023; Metaxa-Kakavouli & Torres-Echeverry, 2017).

Likers of the campaign were categorized into those favoring the brand, the ads, or other comments. Those appreciating the ads outnumbered brand enthusiasts, challenging the findings of Banerjee and Pal (2023), which suggest that a positive attitude toward ads can spill over to the advertised brand.

Haters expressed stronger dislike toward the brand (“LET ME OUT PLZ I’M GONNA KILL YOU GOOGLE,” “I HATE THE GOOGLE CORPORATION I HATE THE GOOGLE CORPORATION”) than toward the ads (“I freaking HATE THESE ADS!!!”), with some commenters explicitly venting their frustration. Moreover, some commenters criticized others’ comments with sarcastic or dismissive remarks (“sounds like something a walking tree denier would say🥱”), while a few acknowledged insightful points made by others (“Good point”), highlighting the conflicts and arguments among consumers in this online environment (Dineva & Daunt, 2023; Dubovi & Tabak, 2020; Kienpointner, 2018). Meanwhile, manipulators were identified for spreading positive brand sentiments, often suspected to be bots or “sockpuppets” (Reagle, 2015) due to repetitive comments across videos, highlighting complex viewer interactions with the content and objectives of the campaign.

Criticism toward Google and its #LetsInternetBetter campaign was widespread, touching on issues from misinformation and data privacy to allegations of censorship and propaganda. Commenters accused Google of prioritizing its interests and collaborating with power structures to manipulate society (Metaxa-Kakavouli & Torres-Echeverry, 2017), resulting in brand hate. This corresponds to the findings of previous studies (Hegner et al., 2017; Shoja & Vaziri, 2018; Zhang & Laroche, 2020), which allege that brand hate stems from past negative experiences and conflicts with the brand. The use of bots or sockpuppets to inflate engagement metrics was also highlighted. Ads were criticized for being unavoidable (Banerjee & Pal, 2023) and not engaging enough (Ang et al., 2014), while society was criticized for gullibility (“stupid enough to believe that trees can walk and having to fact check it 💀💀💀”). This spectrum of criticisms reflects deep-seated concerns about digital ethics, privacy, and the role of tech giants in shaping public discourse.

Socializers engage with the campaign by seeking connections and sharing personal insights. They greet others, respond to comments to foster interaction, and openly share personal details like names, nationalities, and interests. This behavior reflects the social aspect of digital platforms, where users engage with content and seek to establish community bonds (Dineva & Daunt, 2023).

Viewers also use comments to seek assistance with personal issues, ranging from technical support for their accounts (“Please, Help me my account Hacking”) and devices (“Salam alaikum I’m from Kyrgyzstan I can’t restore my mail as my phone is broken and I bought a new phone and I can’t enter my email”) to requests for help with significant life challenges, such as funding for medical expenses (“Sorry if I bothered you, but is there someone generous enough to help with my son’s surgery costs?”). The latter is ironic because the campaign aims to raise awareness not only of mis/disinformation but also of online fraud. If no genuine need exists, such comments can be read in two ways: (1) as mocking the campaign’s social-good intent, or (2) as exploiting that good intent for malicious ends. Either reading risks harming the brand image; thus, the brand should respond to such comments. Yet Google was found not to be dialogic on its YouTube page, which creates the impression that its sole aim with this campaign was not to care for society’s well-being but to deflect public criticism over claims that Google and YouTube themselves help disseminate fake news, as many commenters’ skepticism suggests (Metaxa-Kakavouli & Torres-Echeverry, 2017). Whatever the case, these comments illustrate the diverse ways in which individuals use digital platforms not only for engagement but also to seek support and solutions from the community (Kozinets, 2002).

Inquirers use comments to express curiosity or to gain understanding of unfamiliar topics. They pose various questions, from language-specific queries to requests for explanations of the content (“What does it mean,” “Tem em português? [Is it in Portuguese?]”) or navigation instructions (“I got redirected to this from an app bro what”). This behavior highlights the role of digital platforms as spaces for learning and information exchange (Alajmi, 2012; Reagle, 2015), with users leveraging community interactions to fill knowledge gaps or clarify confusion.

Demanders focus on requests for the brand to introduce new features or support. Requests included the addition of specific emojis (“Please Add An Elephant In Emoji Kitchen”) and language support for non-English speakers (“Lo quiero en español no entiendo [I want it in Spanish I don’t understand]”). These comments reflect the users’ expectations for customization and accessibility in digital platforms, showcasing a desire for more personalized and inclusive user experiences.

Conclusion and Discussion

Misinformation and fake news, increasingly scrutinized in the digital era, impact critical decision-making and lead to severe consequences. Efforts to mitigate their influence are crucial. This paper extends research into these areas by analyzing consumer reactions to Google’s initiative, the #LetsInternetBetter campaign, against misinformation and fake news. The results show that comments related to the ads often centered on aspects of the advertisements, such as their executional components and the claims they made, sparking a broad spectrum of mental and emotional responses. Likewise, comments about the brand demonstrated views on Google’s influence and role, displaying a mix of endorsement and criticism. The intentions to act, as shown by the commenters either supporting or outright rejecting the brand, highlight the significant impact that digital campaigns can have on shaping consumer perspectives and actions.

Furthermore, this study underscores the pivotal role of digital platforms in shaping public perceptions of misinformation. The findings reveal a spectrum of reactions, from enthusiastic endorsements to stark criticisms, mirroring the complexity of digital discourse surrounding misinformation. This diversity aligns with literature suggesting that consumers’ engagement with digital content is deeply influenced by their prior beliefs, digital literacy levels, and trust in the platform (Howard et al., 2021; Popescu, 2020).

This research expands on Reagle’s (2015) typology by introducing new commenter categories, enriching our understanding of digital engagement. This innovation offers a nuanced lens through which to view the interactions between social media campaigns and their audiences, revealing that campaigns must navigate a delicate balance between raising awareness and fostering positive brand associations.

This study highlights the dual-edged nature of digital campaigns in combating misinformation. While aiming to educate and engage, such campaigns may inadvertently polarize or alienate portions of their audience (Buchanan, 2020; Silva et al., 2023). This underscores the importance of crafting messages that resonate across diverse audience segments, a challenge that demands nuanced understanding and strategic finesse.

Furthermore, the use of emojis and different languages in comments points to the emotional and cultural layers of digital communication. This aspect, reflecting both global reach and personal expression, emphasizes the need for campaigns to address cognitive as well as affective dimensions of misinformation (Morris et al., 2016). The observed demands for new features and language support indicate a desire for more personalized and accessible digital experiences. This feedback should inform future campaigns, suggesting a shift toward more user-centric approaches in the design of digital content (Anuradha et al., 2023; Voola et al., 2022).

The findings of the #LetsInternetBetter campaign analysis also offer practical implications for social marketers and advertisers. Marketers should craft campaigns that resonate across different demographic and cultural groups, acknowledging the varied ways audiences perceive and react to content. This is all the more important for marketers at international companies such as Google and YouTube, whose online campaigns can reach diverse users across countries and cultures. Online social advertising campaigns should also be multilingual, in line with consumer segmentation strategies. Likewise, recognizing the emotional and cultural significance of emojis in digital communication can help create more engaging and relatable content: most of the comments included emojis to express or reinforce attitudes and feelings toward the ad, the brand, the social issue, and other elements. Integrating emojis into the design of social advertising campaigns may therefore enhance interaction between users and the campaign itself (Cavalheiro et al., 2022; Yakın & Eru, 2017).

Social marketing and advertising campaigns typically face skepticism and criticism from consumers, who often believe such campaigns are designed by companies or other advertisers to “brainwash” consumers and salve corporate consciences (Agalarova et al., 2022; Delvaux & Van den Broeck, 2023; Guess et al., 2020; Hastings & Domegan, 2014; Lee & Kotler, 2020; Mueller, 2023; Salgado Sequeiros et al., 2022; Tapan, 2022). This study, which reveals that skepticism toward the brand was high because of its practices regarding mis/disinformation and fake news, offers insights for social marketers and advertising professionals to consider when designing their campaigns: effective strategies should combat not only mis/disinformation and fake news but also negative perceptions of the brand itself among target audiences. This can be achieved by attending to consumers’ demands and feedback and by interacting with them online, rather than leaving the comment space unattended by the brand. Furthermore, campaigns should aim not only to inform but also to improve the audience’s ability to critically evaluate information, supporting broader efforts to combat misinformation (Guess et al., 2020; Hameleers, 2022; Mansoor, 2024).

This study examines consumer responses to Google’s #LetsInternetBetter campaign to understand how audiences cognitively, emotionally, and behaviorally engage with misinformation awareness advertising in a platform-based environment. The findings demonstrate that such campaigns generate highly polarized reactions, reflecting both support for media literacy initiatives and deep skepticism toward platform-led interventions. Although this study is limited to YouTube comments from a single campaign and relies on qualitative human coding, it offers several practical implications. For future research, scholars are encouraged to investigate such campaigns across different platforms and cultural contexts, employ mixed methods that combine human- and machine-based analysis, and adopt longitudinal designs to identify changes in audience perceptions over time.

For consumers, the results highlight the importance of approaching both online content and platform-driven awareness campaigns with critical awareness. Actively questioning message intent, verifying information through independent sources, and engaging in reflective rather than emotionally driven online interactions can help users more effectively navigate misinformation. Overall, the study underscores that tackling misinformation requires not only responsible platform practices but also informed and critically engaged digital citizens.

Ethical Statement

It is hereby declared that all rules specified in the Higher Education Institutions Scientific Research and Publication Ethics Directive were followed in this study.

Ethics Committee Approval

Since this study did not require ethics committee approval, no ethics approval was obtained.

Conflict of Interest

The author(s) declare that they have no competing interests.

Funding

No financial support was received for this research.

Author Contributions

Conceptualization: Hediye Aydoğan and Sibel Hoştut
Design: Hediye Aydoğan and Sibel Hoştut
Data Collection: Hediye Aydoğan and Sibel Hoştut
Data Processing/Analysis: Hediye Aydoğan and Sibel Hoştut
Writing - Original Draft: Hediye Aydoğan and Sibel Hoştut
Writing - Review & Editing: Hediye Aydoğan and Sibel Hoştut
Literature Review: Hediye Aydoğan and Sibel Hoştut
Visualization: Hediye Aydoğan and Sibel Hoştut
Project Administration: Hediye Aydoğan and Sibel Hoştut
Supervision/Academic Consultancy: Hediye Aydoğan and Sibel Hoştut

References

1
Agalarova, K., Zemliakova, O., Miroshnik, M., Kitchenko, O., Mironenko, N., Reshetniak, N., & Kuzmenko, O. (2022). Social advertising as a tool of social marketing and a way to form a positive brand image. AD ALTA: Journal of Interdisciplinary Research, 12 , 207-212. https://repository.kpi.kharkov.ua/entities/publication/9f6a6ce4-7447-479d-a8a1-f62319a03bfe
2
Alajmi, B. M. (2012). The intention to share: Psychological investigation of knowledge sharing behaviour in online communities. Journal of Information & Knowledge Management , 11 (03), 1250022. https://doi.org/10.1142/S0219649212500220
3
Alwreikat, A. A. M., & Rjoub, H. (2020). Impact of mobile advertising wearout on consumer irritation, perceived intrusiveness, engagement and loyalty: A partial least squares structural equation modelling analysis. South African Journal of Business Management , 51 (1). https://doi.org/10.4102/sajbm.v51i1.2046
4
Ang, S. H., Leong, S. M., Lee, Y. H., & Lou, S. L. (2014). Necessary but not sufficient: Beyond novelty in advertising creativity. Journal of Marketing Communications , 20 (3), 214-230. https://doi.org/10.1080/13527266.2012.677464
5
Anker, T. B., Gordon, R., & Zainuddin, N. (2022). Consumer-dominant social marketing: A definition and explication. European Journal of Marketing , 56 (1), 159-183. https://doi.org/10.1108/EJM-08-2020-0618
6
Anuradha, A., Shilpa, R., Thirupathi, M., Padmapriya, S., Supramaniam, G., Booshan, B., Booshan, S., Pol, N., Chavadi, C. A., & Thangam, D. (2023). Importance of sustainable marketing initiatives for supporting the sustainable development goals. In I. Gigauri, M. Palazzo, & M. A. Ferri (Eds.), Handbook of research on achieving sustainable development goals with sustainable marketing (pp. 149-169). IGI Global. https://doi.org/10.4018/978-1-6684-8681-8.ch008
7
Aswad, E. M. (2020). In a world of “fake news,” what’s a social media platform to do? Utah Law Review , 2020 , 1009-1028.
8
Bai, Q., Dan, Q., Mu, Z., & Yang, M. (2019). A systematic review of emoji: Current research and future perspectives. Frontiers in Psychology , 10. https://doi.org/10.3389/fpsyg.2019.02221
9
Bak-Coleman, J. B., Kennedy, I., Wack, M., Beers, A., Schafer, J. S., Spiro, E. S., Starbird, K., & West, J. D. (2022). Combining interventions to reduce the spread of viral misinformation. Nature Human Behaviour , 6 (10), 1372-1380. https://doi.org/10.1038/s41562-022-01388-6
10
Banerjee, S., & Pal, A. (2023). I hate ads but not the advertised brands: A qualitative study on internet users’ lived experiences with YouTube ads. Internet Research , 33 (1), 39-56. https://doi.org/10.1108/INTR-06-2021-0377
11
Bastick, Z. (2021). Would you notice if fake news changed your behavior? An experiment on the unconscious effects of disinformation. Computers in Human Behavior , 116 , 106633. https://doi.org/10.1016/j.chb.2020.106633
12
Blichfeldt, B. S., & Smed, K. M. (2015). ‘Do it to Denmark.’ Journal of Vacation Marketing , 21 (3), 289-301. https://doi.org/10.1177/1356766715573652
13
Bontridder, N., & Poullet, Y. (2021). The role of artificial intelligence in disinformation. Data & Policy , 3 , e32. https://doi.org/10.1017/dap.2021.20
14
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology , 3 (2), 77-101. https://doi.org/10.1191/1478088706qp063oa
15
Buchanan, T. (2020). Why do people spread false information online? The effects of message and viewer characteristics on self-reported likelihood of sharing social media disinformation. PLOS ONE , 15 (10), e0239666. https://doi.org/10.1371/journal.pone.0239666
16
Buchanan, T., & Benson, V. (2019). Spreading disinformation on Facebook: Do trust in message source, risk propensity, or personality affect the organic reach of “fake news”? Social Media + Society , 5 (4), 205630511988865. https://doi.org/10.1177/2056305119888654
17
Cavalheiro, B. P., Prada, M., Rodrigues, D. L., Garrido, M. V., & Lopes, D. (2022). With or without emoji? Perceptions about emoji use in different brand-consumer communication contexts. Human Behavior and Emerging Technologies , 2022 , 1-8. https://doi.org/10.1155/2022/3036664
18
Chasi, C., & Omarjee, N. (2014). It begins with you? An ubuntu-centred critique of a social marketing campaign on HIV and AIDS. Critical Arts , 28 (2), 229-246. https://doi.org/10.1080/02560046.2014.906342
19
Dame Adjin-Tettey, T. (2022). Combating fake news, disinformation, and misinformation: Experimental evidence for media literacy education. Cogent Arts & Humanities , 9 (1). https://doi.org/10.1080/23311983.2022.2037229
20
De Koster, W., & Houtman, D. (2008). ‘Stormfront is like a second home to me’: On virtual community formation by right-wing extremists. Information, Communication & Society , 11 (8), 1155-1176. https://doi.org/10.1080/13691180802266665
21
Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., Stanley, H. E., & Quattrociocchi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences , 113 (3), 554-559. https://doi.org/10.1073/pnas.1517441113
22
Dell, M. (2018). Fake news, alternative facts, and disinformation: The importance of teaching media literacy to law students. Touro Law Review , 35 , 619-648. https://doi.org/10.2139/ssrn.3002720
23
Delvaux, I., & Van den Broeck, W. (2023). Social marketing and the sustainable development goals: Scoping review (2013-2021). International Review on Public and Nonprofit Marketing , 20 (3), 573-603. https://doi.org/10.1007/s12208-023-00372-8
24
Dineva, D., & Daunt, K. L. (2023). Reframing online brand community management: Consumer conflicts, their consequences and moderation. European Journal of Marketing , 57 (10), 2653-2682. https://doi.org/10.1108/EJM-03-2022-0227
25
Dodda, T. P., & Dubbudu, R. (2019). Countering misinformation (fake news) in India: Solutions & strategies. Factly. https://factly.in/wp-content/uploads/2019/02/Countering-Misinformation-Fake-News-In-India.pdf
26
Dubovi, I., & Tabak, I. (2020). An empirical analysis of knowledge co-construction in YouTube comments. Computers & Education , 156 , 103939. https://doi.org/10.1016/j.compedu.2020.103939
27
Ertimur, B., & Gilly, M. C. (2012). So whaddya think? Consumers create ads and other consumers critique them. Journal of Interactive Marketing , 26 (3), 115-130. https://doi.org/10.1016/j.intmar.2011.10.002
28
Feng, Y., & Chen, H. (2022). Evolving consumer responses to social issue campaigns: A data-mining case of COVID-19 ads on YouTube. Journal of Interactive Advertising , 22 (2), 195-206. https://doi.org/10.1080/15252019.2022.2063770
29
Feng, Y., Chen, H., & Ahn, H.-Y. (2021). How consumers react to woke advertising: Methodological triangulation based on social media data and self-report data. Journal of Research in Interactive Marketing , 15 (4), 529-548. https://doi.org/10.1108/JRIM-09-2020-0185
30
Feng, Y., Chen, H., & He, L. (2019). Consumer responses to femvertising: A data-mining case of Dove’s “Campaign for Real Beauty” on YouTube. Journal of Advertising , 48 (3), 292-301. https://doi.org/10.1080/00913367.2019.1602858
31
Feng, Y., & Xie, Q. (2019). Demystifying novelty effects: An analysis of consumer responses to YouTube videos featuring augmented reality out-of-home advertising campaigns. Journal of Current Issues & Research in Advertising , 40 (1), 36-53. https://doi.org/10.1080/10641734.2018.1500321
32
Flaherty, T., Domegan, C., & Anand, M. (2021). The use of digital technologies in social marketing: A systematic review. Journal of Social Marketing , 11 (4), 378-405. https://doi.org/10.1108/JSOCM-01-2021-0022
33
Galan-Ladero, M. M., & Alves, H. M. (2023). Theoretical background: Social marketing & sustainable development goals (SDGs). In M. M. Galan-Ladero & H. M. Alves (Eds.), Social marketing and sustainable development goals (SDGs) (pp. 1-24). Springer. https://doi.org/10.1007/978-3-031-27377-3_1
34
Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press. https://doi.org/10.12987/9780300235029
35
Goodrich, K., Schiller, S. Z., & Galletta, D. (2015). Consumer reactions to intrusiveness of online-video advertisements. Journal of Advertising Research , 55 (1), 37-50. https://doi.org/10.2501/JAR-55-1-037-050
36
Graham, R. (2017). Google and advertising: Digital capitalism in the context of post-Fordism, the reification of language, and the rise of fake news. Palgrave Communications , 3 (1), 45. https://doi.org/10.1057/s41599-017-0021-4
37
Grant, P., Botha, E., & Kietzmann, J. (2015). Branded flash mobs: Moving toward a deeper understanding of consumers’ responses to video advertising. Journal of Interactive Advertising , 15 (1), 28-42. https://doi.org/10.1080/15252019.2015.1013229
38
Guess, A. M., Lerner, M., Lyons, B., Montgomery, J. M., Nyhan, B., Reifler, J., & Sircar, N. (2020). A digital media literacy intervention increases discernment between mainstream and false news in the United States and India. Proceedings of the National Academy of Sciences , 117 (27), 15536-15545. https://doi.org/10.1073/pnas.1920498117
39
Guilbeault, D. (2018). Digital marketing in the disinformation age. Journal of International Affairs , 71 , 33-42. https://www.jstor.org/stable/26508116
40
Haley, E. (1996). Exploring the construct of organization as source: Consumers’ understandings of organizational sponsorship of advocacy advertising. Journal of Advertising , 25 (2), 19-35. https://doi.org/10.1080/00913367.1996.10673497
41
Hameleers, M. (2022). Separating truth from lies: Comparing the effects of news media literacy interventions and fact-checkers in response to political misinformation in the US and Netherlands. Information, Communication & Society , 25 (1), 110-126. https://doi.org/10.1080/1369118X.2020.1764603
42
Hameleers, M., Brosius, A., Marquart, F., Goldberg, A. C., van Elsas, E., & de Vreese, C. H. (2022). Mistake or manipulation? Conceptualizing perceived mis- and disinformation among news consumers in 10 European countries. Communication Research , 49 (7), 919-941. https://doi.org/10.1177/0093650221997719
43
Harris, K. R. (2023). Liars and trolls and bots online: The problem of fake persons. Philosophy & Technology , 36 (2), 35. https://doi.org/10.1007/s13347-023-00640-9
44
Hastings, G., & Domegan, C. (2014). Social marketing: From tunes to symphonies. Routledge. https://doi.org/10.4324/9780203380925
45
Hegner, S. M., Fetscherin, M., & van Delzen, M. (2017). Determinants and outcomes of brand hate. Journal of Product & Brand Management , 26 (1), 13-25. https://doi.org/10.1108/JPBM-01-2016-1070
46
Hollebeek, L. (2011). Exploring customer brand engagement: Definition and themes. Journal of Strategic Marketing , 19 (7), 555-573. https://doi.org/10.1080/0965254X.2011.599493
47
Howard, P. N., Neudert, L.-M., Prakash, N., & Vosloo, S. (2021). Digital misinformation/disinformation and children. UNICEF. https://www.unicef.org/innocenti/media/856/file/UNICEF-Global-Insight-Digital-Mis-Disinformation-and-Children-2021.pdf
48
Jäger, T., & Eisend, M. (2013). Effects of fear-arousing and humorous appeals in social marketing advertising: The moderating role of prior attitude toward the advertised behavior. Journal of Current Issues & Research in Advertising , 34 (1), 125-134. https://doi.org/10.1080/10641734.2013.754718
49
Janiques de Carvalho, B., & Marôpo, L. (2020). “Tenho pena que não sinalises quando fazes publicidade”: Audiência e conteúdo comercial no canal Sofia Barbosa no YouTube. Comunicação e Sociedade , 37 , 93-107. https://doi.org/10.17231/comsoc.37(2020).2394
50
Kerr, G., & Richards, J. (2021). Redefining advertising in research and practice. International Journal of Advertising , 40 (2), 175-198. https://doi.org/10.1080/02650487.2020.1769407
51
Kienpointner, M. (2018). Impoliteness online: Hate speech in online interactions. Internet Pragmatics , 1 (2), 329-351. https://doi.org/10.1075/ip.00015.kie
52
Kim, J. K. (2011). Moderating effects of plot type and message sensation value on narrative ad processing [Doctoral dissertation, The University of Alabama]. https://ir.ua.edu/bitstreams/3f3501b8-566b-4324-913d-dd12dd7d2a8a/download
53
Kılıç Taran, B., & Akbayır, Z. (2022). Kurumsal sosyal sorumluluk iletişiminde reklam: Alana ilişkin bir içerik analizi. Türkiye İletişim Araştırmaları Dergisi , 40 , 146-172. https://doi.org/10.17829/turcom.1051481
54
Kousha, K., Thelwall, M., & Abdoli, M. (2012). The role of online videos in research communication: A content analysis of YouTube videos cited in academic publications. Journal of the American Society for Information Science and Technology , 63 (9), 1710-1727. https://doi.org/10.1002/asi.22717
55
Kozinets, R. V. (2002). The field behind the screen: Using netnography for marketing research in online communities. Journal of Marketing Research , 39 (1), 61-72. https://doi.org/10.1509/jmkr.39.1.61.18935
56
Lawrence, B., Fournier, S., & Brunel, F. (2013). When companies don’t make the ad: A multimethod inquiry into the differential effectiveness of consumer-generated advertising. Journal of Advertising , 42 (4), 292-307. https://doi.org/10.1080/00913367.2013.795120
57
Lee, N. R., & Kotler, P. (2020). Social marketing: Behavior change for social good. SAGE. https://elibrary.lspr.ac.id/lsprperpus/index.php?p=show_detail&id=6896&keywords=
58
Lima, A. M., & Casais, B. (2021). Consumer reactions towards femvertising: A netnographic study. Corporate Communications: An International Journal , 26 (3), 605-621. https://doi.org/10.1108/CCIJ-02-2021-0018
59
Mansoor, H. M. H. (2024). Media and information literacy as a model of societal balance: A grounded meta-synthesis. Heliyon , 10 (3), e25380. https://doi.org/10.1016/j.heliyon.2024.e25380
60
McCoy, S., Everard, A., Polak, P., & Galletta, D. F. (2008). An experimental study of antecedents and consequences of online ad intrusiveness. International Journal of Human-Computer Interaction , 24 (7), 672-699. https://doi.org/10.1080/10447310802335664
61
Metaxa-Kakavouli, D., & Torres-Echeverry, N. (2017). Google’s role in spreading fake news and misinformation. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3062984
62
Morris, J. D., Choi, Y., & Ju, I. (2016). Are social marketing and advertising communications (SMACs) meaningful? A survey of Facebook user emotional responses, source credibility, personal relevance, and perceived intrusiveness. Journal of Current Issues & Research in Advertising , 37 (2), 165-182. https://doi.org/10.1080/10641734.2016.1171182
63
Mortera-Martinez, C. (2019). What is Europe doing to fight disinformation. CER Bulletin , 123. https://www.cer.eu/sites/default/files/bulletin_123_cmm_article3-4.pdf
64
Mueller, T. (2023). Social action advertising: Motivators and detractors in cause-oriented behaviors. Journal of Social Marketing , 13 (2), 258-276. https://doi.org/10.1108/JSOCM-07-2022-0161
65
Mukherjee, S., & Althuizen, N. (2020). Brand activism: Does courting controversy help or hurt a brand? International Journal of Research in Marketing , 37 (4), 772-788. https://doi.org/10.1016/j.ijresmar.2020.02.008
66
Napoli, P. M. (2019). Social media and the public interest: Media regulation in the disinformation age. Columbia University Press. https://cup.columbia.edu/book/social-media-and-the-public-interest/9780231184540/
67
Nyhan, B., & Reifler, J. (2015). Displacing misinformation about events: An experimental test of causal corrections. Journal of Experimental Political Science , 2 (1), 81-93. https://doi.org/10.1017/XPS.2014.22
68
Pace, S. (2008). YouTube: An opportunity for consumer narrative analysis? Qualitative Market Research: An International Journal , 11 (2), 213-226. https://doi.org/10.1108/13522750810864459
69
Park, J., Stoel, L., & Lennon, S. J. (2008). Cognitive, affective and conative responses to visual simulation: The effects of rotation in online product presentation. Journal of Consumer Behaviour , 7 (1), 72-87. https://doi.org/10.1002/cb.237
70
Pennycook, G., McPhetres, J., Zhang, Y., Lu, J. G., & Rand, D. G. (2020). Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy-nudge intervention. Psychological Science , 31 (7), 770-780. https://doi.org/10.1177/0956797620939054
71
Popescu, M. M. (2020). Media literacy tools in combating disinformation and fake news in social media. Bulletin of the Transilvania University of Braşov Series VII - Social Sciences and Law , 13 (62), 103-112. https://www.ceeol.com/search/article-detail?id=882844
72
Qian, S., Shen, C., & Zhang, J. (2022). Fighting cheapfakes: Using a digital media literacy intervention to motivate reverse search of out-of-context visual misinformation. Journal of Computer-Mediated Communication , 28 (1). https://doi.org/10.1093/jcmc/zmac024
73
Rathee, S., & Milfeld, T. (2024). Sustainability advertising: Literature review and framework for future research. International Journal of Advertising , 43 (1), 7-35. https://doi.org/10.1080/02650487.2023.2175300
74
Reagle, J. M. (2015). Reading the comments: Likers, haters, and manipulators at the bottom of the web. The MIT Press. https://mitpress.mit.edu/9780262529884/reading-the-comments/
75
Rejón-Guardia, F., & Martínez-López, F. J. (2014). Online advertising intrusiveness and consumers’ avoidance behaviors. In F. Martínez-López (Ed.), Handbook of strategic e-business management (pp. 565-586). Springer. https://doi.org/10.1007/978-3-642-39747-9_23
76
Salgado Sequeiros, J., Molina-Collado, A., Gómez-Rico, M., & Basil, D. (2022). Examining 50 years of social marketing through a bibliometric and science mapping analysis. Journal of Social Marketing , 12 (3), 296-314. https://doi.org/10.1108/JSOCM-06-2021-0145
77
Shapovalova, E. (2020). Improving media education as a way to combat fake news. Media Education (Mediaobrazovanie) , 60 (4). https://doi.org/10.13187/me.2020.4.730
78
Sheinin, D. A., Varki, S., & Ashley, C. (2011). The differential effect of ad novelty and message usefulness on brand judgments. Journal of Advertising , 40 (3), 5-18. https://doi.org/10.2753/JOA0091-3367400301
79
Shekhar, D. (2018). The anatomy of fake news: How to understand and combat misinformation? Common Cause , 37 (4), 5-10. https://www.commoncause.in/uploadimage/publication/1401268494The%20Anatomy%20of%20Fake%20News%20by%20Dhruv%20Shekhar%20%5BCCJ%20Oct-Dec,%202018%5D.pdf
80
Shoja, A., & Vaziri, F. S. (2018). Brand hate: Analysis of determinants and outcomes of brand hate. New Marketing Research Journal , 8 (2), 165-180. https://doi.org/10.22108/nmrj.2018.104899.1305
81
Silva, M., Giovanini, L., Fernandes, J., Oliveira, D., & Silva, C. S. (2023). What makes disinformation ads engaging? A case study of Facebook ads from the Russian active measures campaign. Journal of Interactive Advertising , 23 (3), 221-240. https://doi.org/10.1080/15252019.2023.2173991
82
Smith, R. E., Chen, J., & Yang, X. (2008). The impact of advertising creativity on the hierarchy of effects. Journal of Advertising , 37 (4), 47-62. https://doi.org/10.2753/JOA0091-3367370404
83
Smith, W. A. (2009). Social marketing in developing countries. In J. French, C. Blair-Stevens, D. McVey, & R. Merritt (Eds.), Social marketing and public health: Theory and practice (pp. 319-330). Oxford Academic. https://doi.org/10.1093/acprof:oso/9780199550692.003.21
84
Tapan, A. (2022). Yeşil reklam stratejilerinin dijital etkileşiminde “vicdan azabı” duygusunun rolü. Yeni Medya Dergisi , 13 , 310-336. https://doi.org/10.55609/yenimedya.1171503
85
Tracy, S. J. (2020). Qualitative research methods: Collecting evidence, crafting analysis, communicating impact. John Wiley & Sons. https://www.wiley.com/en-be/Qualitative+Research+Methods%3A+Collecting+Evidence%2C+Crafting+Analysis%2C+Communicating+Impact%2C+3rd+Edition-p-9781119988670
86
United Nations. (2022). Countering disinformation for the promotion and protection of human rights and fundamental freedoms. UN Secretary-General Report. https://digitallibrary.un.org/record/3987886?v=pdf
87
van der Linden, S., Roozenbeek, J., & Compton, J. (2020). Inoculating against fake news about COVID-19. Frontiers in Psychology , 11 , 566790. https://doi.org/10.3389/fpsyg.2020.566790
88
van Reijmersdal, E. A., Jansz, J., Peters, O., & van Noort, G. (2010). The effects of interactive brand placements in online games on children’s cognitive, affective, and conative brand responses. Computers in Human Behavior , 26 (6), 1787-1794. https://doi.org/10.1016/j.chb.2010.07.006
89
Voola, R., Bandyopadhyay, C., Voola, A., Ray, S., & Carlson, J. (2022). B2B marketing scholarship and the UN sustainable development goals (SDGs): A systematic literature review. Industrial Marketing Management , 101 , 12-32. https://doi.org/10.1016/j.indmarman.2021.11.013
90
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science , 359 (6380), 1146-1151. https://doi.org/10.1126/science.aap9559
91
Vraga, E. K., Bode, L., & Tully, M. (2022). Creating news literacy messages to enhance expert corrections of misinformation on Twitter. Communication Research , 49 (2), 245-267. https://doi.org/10.1177/0093650219898094
92
Vredenburg, J., Kapitan, S., Spry, A., & Kemper, J. A. (2020). Brands taking a stand: Authentic brand activism or woke washing? Journal of Public Policy & Marketing , 39 (4), 444-460. https://doi.org/10.1177/0743915620947359
93
Wang, S., Su, F., Ye, L., & Jing, Y. (2022). Disinformation: A bibliometric review. International Journal of Environmental Research and Public Health , 19 (24), 16849. https://doi.org/10.3390/ijerph192416849
94
Waqas, M., Hamzah, Z. L., & Salleh, N. A. M. (2020). A typology of customer experience with social media branded content: A netnographic study. International Journal of Internet Marketing and Advertising , 14 (2), 184. https://doi.org/10.1504/IJIMA.2020.107661
95
Yakın, V., & Eru, O. (2017). An application to determine the efficacy of emoji use on social marketing ads. International Journal of Social Sciences and Education Research , 3 (1), 230-230. https://doi.org/10.24289/ijsser.270652
96
Yang, D., Lu, Y., Zhu, W., & Su, C. (2015). Going green: How different advertising appeals impact green consumption behavior. Journal of Business Research , 68 (12), 2663-2675. https://doi.org/10.1016/j.jbusres.2015.04.004
97
Yılmaz, R. A., & Ozturk, M. C. (2013). Emotional appeals are used in social advertising: Content analysis on Turkish case. Online Journal of Communication and Media Technologies , 3 , 74-90. https://doi.org/10.29333/ojcmt/2435
98
Zhang, C., & Laroche, M. (2020). Brand hate: A multidimensional construct. Journal of Product & Brand Management , 30 (3), 392-414. https://doi.org/10.1108/JPBM-11-2018-2103