A group of moms on Facebook built an island of good-faith vaccine debate – The Washington Post



“On both sides, there’s people telling the truth, at least their truth,” said Buchanan, 32, who last month became infected with the coronavirus. “But on the pro-vaccine side, there was just more logic” — and more links to solid research. “On the anti-vaccine side, there was more conspiracy.”

Now he’s going to get vaccinated.

As covid-19 cases surge in the United States, jeopardizing the reopening of schools and offices and rekindling debates about mask and vaccination mandates, the battle to win over the vaccine skeptical has taken on fresh urgency. Much of that struggle is happening on social media, where misinformation about the vaccines continues to flourish. Facebook, Twitter and YouTube have all established rules against posting false information about covid and vaccines. Yet just this weekend Facebook said the most-shared link on its site from January to March this year was an article that raised concerns that coronavirus vaccines could lead to death.

Amid the online scare stories and anti-vaccine memes, an army of local influencers and everyday users is waging a grass-roots campaign on Facebook, Reddit and other platforms to gently win over the vaccine skeptical. They’re spending hours moderating forums, responding to comments, linking to research studies, and sharing tips on how to talk to fearful family members.

“It feels a lot like covid is something that is completely out of control and there is nothing we can do, like it’s this out-of-control wildfire, and I’m just one person with a little hose,” said Kate Bilowitz, an Oakland, Calif.-based mom who works for a real estate company and co-founded Vaccine Talk. “But when people reach out to us, it feels like it’s making a little bit of a difference.”

Their work exemplifies Facebook’s stated goal to “bring the world closer together.” But Bilowitz and others who run similar forums say that the interventions made by technology companies are often counterproductive, and that software algorithms frequently delete valuable conversations mistaken for misinformation.

“Facebook is attempting to shut down misinformation by shutting down all conversation entirely,” she said. “I strongly believe that civil, evidence-based discussion works, and Facebook’s policies make it extremely difficult for that to happen.”

Facebook leans on software and its army of 15,000 human moderators to detect covid misinformation. Last month, it said it had taken down more than 20 million posts since the start of the pandemic. But it routinely misses new memes and conspiracy theories, while at times scrubbing legitimate information. At the same time, its news feed algorithms boost posts that get the most engagement, often helping the most sensational claims go viral.

Twitter and Google-owned YouTube have employed similar strategies, using algorithms to parse text posts and listen to videos, sometimes banning them immediately and other times flagging them for a human to review. The companies have peppered their sites with links to official coronavirus information from the Centers for Disease Control and Prevention and World Health Organization, as well as pushing articles from mainstream news organizations to the top of people’s feeds and search results.


While the White House is launching a project to pay micro-influencers to spread pro-vaccine messaging, a handful of carefully designed and self-policed online spaces, such as the Vaccine Talk group on Facebook, are showing that it is possible to change people’s minds, one nuanced post at a time. To do it, they’re adopting moderation systems and rules of discourse very different from those of the social media platforms.


People’s attitudes toward the coronavirus vaccines are complex, and it’s hard to quantify the role social media has played in sowing or overcoming their doubts. A recent survey by Rutgers University found that people who get their news primarily from Facebook were less likely to be vaccinated than any other group of news consumers.

But research also supports the idea that social media can be a force for pro-vaccine persuasion, under the right circumstances. A forthcoming study from researchers at the MIT Sloan School of Management, which has not yet been peer-reviewed, found that people on the fence about vaccination can be swayed by learning that others around them are getting vaccinated. And a 2020 study by health misinformation researchers Emily Vraga of the University of Minnesota and Leticia Bode of Georgetown University showed that social media users who rebut misleading claims with factual information may not persuade the original poster but can influence the beliefs of others who witness the exchanges.

People concerned about vaccine safety may be easier to convince than those who don’t trust the government or medical authorities, said Wendy King, an associate professor in the Department of Epidemiology at the University of Pittsburgh Graduate School of Public Health. Earlier this year, King surveyed more than 5 million U.S. adults about their attitudes toward coronavirus vaccines. Many who said they may not or won’t get vaccinated said they feared side effects — a sign they may be influenced by misinformation.

“They are reading a lot of nonscientific literature and a lot of social media posts about things that are happening to people post-vaccine, and they are really worried,” King said.

That’s what happened for Bilowitz, 35. She first turned to Facebook for information on childhood vaccinations in 2015, after she gave birth. Months earlier, there had been a measles outbreak at Disneyland, which officials blamed on too-low vaccination rates. She realized a Facebook group she had joined was run by vaccine opponents, part of the anti-vaccine movement that flourished on social media long before the coronavirus pandemic.

After Bilowitz and some other moms got kicked out of that group, they formed the Vaccine Talk group to focus on evidence-based information.

“The most important rule was ‘civility,’ ” Bilowitz said. “There are some groups online where people just yell at each other. We wanted to just be able to talk to one another without it getting that way.”

Vaccine Talk now has nearly 70,000 members, each of whom must gain administrators’ approval to join and commit to a code of conduct. Strict rules prohibit users from misrepresenting themselves, offering medical advice and harassing or bullying people. Another key rule: Be ready to provide citations within 24 hours for any claim you make. Twenty-five moderators and administrators in six countries monitor the posts, and those who flout the rules are kicked out.

“Usually, the hardcore anti-vaxxers cannot follow the rules,” Bilowitz said. “They are usually spamming people with their commentary. I think it’s hard for them: They are basically coming out of an echo chamber.”

Vaccine Talk represents exactly the type of conversations Facebook says it wants to cultivate. But Bilowitz said the social network’s often clumsy and heavy-handed enforcement of covid misinformation policies has made their work more difficult. In June, Facebook temporarily shut down the group because someone posted an article deemed to be misinformation. But the poster had been seeking advice on how to rebut the article.


“We were just caught up in the algorithm,” Bilowitz said, “and felt there wasn’t a human in charge of the process.”

To combat covid misinformation, Facebook has created both a banner across its site and a tab on covid-related posts that link people to authoritative information from public health organizations. The company’s head of health, Kang-Xing Jin, said surveys suggest vaccine acceptance on the part of American Facebook users has increased since January, from about 70 percent to upward of 80 percent. But he has also acknowledged the challenge in drawing a line between posts that evince earnest skepticism and those that are ideologically motivated.

Monica Buescher, a 32-year-old teacher in Vacaville, Calif., said she went “deep down the rabbit hole” of anti-vaccine misinformation when she had her second child in 2019. Convinced that shots were dangerous, she nonetheless wanted to hear the pro-vaccine side. She found her way to Vaccine Talk, which she said had a reputation among anti-vaccine groups as being “mean” for banning those who made claims without scientific evidence.

On Vaccine Talk, Buescher credits a handful of people with walking her through the scientific evidence and persuading her that routine childhood vaccines are safe and effective. Now, Buescher is helping her friends and family navigate conflicting information about coronavirus vaccines.

“It’s mostly just offering my experience, hopefully in an unassuming way that implies I’ve tried to understand both sides,” she said.

Vaccine Talk isn’t the only venue for such conversations. In other corners of the Internet where ground rules and moderators facilitate conversations that don’t spiral into shouting matches, people who had avoided getting vaccinated against the coronavirus are finding reasons to do so.

Ryan Steward, 29, a mechanic and pastor from Spartanburg, S.C., had never hesitated to get vaccines for himself or his family — until he started hearing scary stories about the covid shots.

“I wanted to find a way to break out of my echo chambers,” he said, “find something to shake me out of this fear that had been instilled in me by misinformation and by the media.”

A frequent user of automotive forums on the social media site Reddit, Steward was aware of a popular group called “r/ChangeMyView,” where people post a controversial opinion and invite others to challenge their assumptions. Those who succeed are awarded points, a system meant to incentivize users to keep their replies civil and constructive.

On Aug. 8, Steward posted there for the first time, spelling out his concerns about coronavirus vaccines. He cited the lack of full Food and Drug Administration approval (which came on Monday), the potential for long-term side effects and reports of breakthrough cases among the vaccinated — and braced himself for a cavalcade of vitriol. But aside from one nasty comment that was quickly deleted by the group’s moderators, he got the opposite.

“I understand where you’re coming from,” wrote one user. “The same concerns crossed my mind. However, you aren’t just deciding on the risks of the vaccine, or not. You’re deciding on the risks of the vaccine or the risks of covid.” Other commenters explained the FDA approval process, linked to academic studies and pointed to the pro-vaccine consensus among doctors worldwide, not just in the United States.

Steward felt grateful — and convinced.


“Within a matter of two hours, my mind had been changed completely,” he said. He showed the thread to his wife, who agreed to get vaccinated if he went first. So Steward scheduled an appointment — and received his first Moderna shot on Aug. 13.

In interviews, two moderators of ChangeMyView said they see their forum as an oasis of polite discourse in a digital landscape dominated by shouting matches. They also said it’s a constant struggle, on the part of more than 20 active moderators and a core of conscientious users, to keep it that way.

If you want to persuade someone, “approaching them with a sense of empathy is very important — and very much missing, I think, from a huge amount of conversation that happens online,” said London entrepreneur Stuart Johnson, 20, one of the subreddit’s active moderators. “Even if you’re completely right, if your post is just strongly calling out the other person or generalizing about them, they will completely close down. Their ego will no longer allow them to change their mind.”

Johnson also acknowledged, however, that the person has to be open to having their mind changed in the first place. Surveys suggest that many vaccine holdouts are more likely to be persuaded by mandates or cash payments than online debate.

The big tech companies have also made significant changes to their moderation policies during the pandemic. Last year, Facebook, Twitter and YouTube all said that posts promoting fake treatments for covid would be taken down. And they’ve worked with the Centers for Disease Control and Prevention and World Health Organization to append links to authoritative information on any post they deem to be about covid or vaccines. YouTube also has begun working with hospital groups to create new video series that offer authoritative medical information in response to popular health-related searches.

Yet online misinformation has proved resilient. A study published earlier this month by researchers at Yale University found that Twitter users learn over time to tweet more “expressions of moral outrage” when such tweets generate more likes and retweets. “Even if platform designers do not intend to amplify moral outrage,” the authors wrote, “design choices aimed at satisfying other goals such as profit maximization via user engagement can indirectly affect moral behavior because outrage-provoking content draws high engagement.”

Outside these purpose-built forums, pro-vaccine messages from everyday Americans can also have an impact. Chelsey Palmer, a 32-year-old lawyer from Jacksonville, Fla., held off getting vaccinated until late June — first because she was pregnant and then because “covid seemed like it died down a little bit.”

But in recent weeks, as the delta variant spread, she saw appeals from local medical professionals in her Facebook feed describing worsening conditions in hospitals.

“Seeing people who were here locally in Jacksonville, just sharing their own experiences of what was going on, seeing their firsthand experience, really gave me the push I needed to get” vaccinated, Palmer said.

In general, Palmer does not see Facebook as a good place to go for vaccine information, however.

“For the average person who’s not really trying to use critical thinking on social media, it’s actually a pretty dangerous place,” she said, noting that her feed is also riddled with scaremongering and conspiracy theories.

For those debating whether to get vaccinated, “I would not recommend going to social media,” Palmer said. “I would recommend going to a doctor.”



Facebook Adds New Trend Insights in Creator Studio, Which Could Help Shape Your Posting Strategy





Facebook’s looking to provide more content insight within Creator Studio with the rollout of a new ‘Inspiration Hub’ element, which highlights trending content and hashtags within categories related to your business Page.

Facebook Inspiration Hub

As you can see in these screenshots, posted by social media expert Matt Navarra, when it becomes available to you, you’ll be able to access the new Inspiration Hub from the Home tab in Creator Studio.

At the right side of the screen, you can see the first of the new insights, with trending hashtags and videos from the last 24 hours, posted by Pages similar to yours, displayed above a ‘See more’ prompt.

When you tap through to the new hub, you’ll have a range of additional filters to check out trending content from across Facebook, including Page category, content type, region, and more.


That could be hugely valuable in learning what Facebook users are responding to, and what people within your target market are engaging with in the app.

The Hub also includes insights into trending hashtags, within your chosen timeframe, which may further assist in tapping into trending discussions.


How valuable hashtags are on Facebook is still up for debate, but you’ll also note that you can filter the displayed results by platform, so you can view Instagram hashtag trends as well, which could help in maximizing your reach.

Much of this type of info has been available within CrowdTangle, Facebook’s analytics platform for journalists, for some time, but not everyone can access CrowdTangle data, which could make this an even more valuable proposition for many marketers.


Of course, overall performance really relates to your own creative, and thinking through the action that you want your audience to take when reading your posts. But in terms of detecting new content trends, including hashtag usage, caption length, videos versus image posts, and more, there’s a lot that could be gleaned from these tools and filters.

It’s a significant analytics addition – we’ve asked Facebook for more info on the rollout of the new option, and whether it’s already beyond test mode, etc. We’ll update this post if/when we hear back.



Meta Updates Policy on Cryptocurrency Ads, Opening the Door to More Crypto Promotions in its Apps





With cryptocurrencies gaining momentum, in line with the broader Web 3.0 push, Meta has today announced an update to its ad policies around cryptocurrencies, which will open the door to more crypto advertisers on its platforms.

As per Meta:

“Starting today, we’re updating our eligibility criteria for running ads about cryptocurrency on our platform by expanding the number of regulatory licenses we accept from three to 27. We are also making the list of eligible licenses publicly available on our policy page.”

Essentially, in order to run any crypto ads in Meta’s apps, that currency needs to adhere to regional licensing provisions, which vary by nation. With crypto becoming more accepted, Meta’s now looking to enable more crypto companies to publish ads on its platform, which will provide expanded opportunity for recognized crypto providers to promote their products, while also enabling Meta to make more money from crypto ads.

“Previously, advertisers could submit an application and include information such as any licenses they obtained, whether they are traded on a public stock exchange, and other relevant public background on their business. However, over the years the cryptocurrency landscape has matured and stabilized and experienced an increase in government regulation, which has helped to set clearer responsibilities and expectations for the industry. Going forward, we will be moving away from using a variety of signals to confirm eligibility and instead requiring one of these 27 licenses.”

Is that a good move? Well, as Meta notes, the crypto marketplace is maturing, and there’s now much wider recognition of cryptocurrencies as a legitimate form of payment. But cryptocurrencies are still not backed by most local financial regulators, which reduces transaction protection and oversight, and brings a level of risk to the process.


But then again, all crypto providers are required to clearly outline any such risks, and most also highlight the ongoing market volatility in the space. This expanded level of overall transparency means that most people who are investing in crypto have at least some awareness of these elements, which likely does diminish the risk factor in such promotions within Meta’s apps.

But as crypto adoption continues to expand, more of these risks will become apparent. While much of the crypto community is built on good faith and a sense of community around building something new, there are questions as to how well that can hold at scale, and what it will mean for evolving scams and criminal activity, especially as more vulnerable investors are brought into the mix.

Broader promotional capacity through Meta’s apps will certainly help to boost exposure in this respect – though again, the relative risk factors are lessened by expanded regulatory oversight outside of the company.

You can read more about Meta’s expanded crypto ad regulations here.



Meta Outlines Evolving Safety Measures in Messaging as it Seeks to Allay Fears Around the Expansion of E2E Encryption





Amid rising concern about Meta’s move to roll out end-to-end encryption by default to all of its messaging apps, Meta’s Global Head of Safety Antigone Davis has today sought to provide a level of reassurance that Meta is indeed aware of the risks and dangers that such protection can pose, and that it is building safeguards into its processes to protect against potential misuse.

Though the measures outlined don’t exactly address all the issues raised by analysts and safety groups around the world.

As a quick recap, back in 2019, Facebook announced its plan to merge the messaging functionalities of Messenger, Instagram and WhatsApp, which would then provide users with a universal inbox, with all of your message threads from each app accessible on either platform.

The idea is that this will simplify cross-connection, while also opening the door to more opportunities for brands to connect with users in the messaging tool of their choice – but it also, inherently, means that the data protection method for its messaging tools must rise to the level of WhatsApp, its most secure messaging platform, which already includes E2E encryption as the default.

Various child safety experts raised the alarm, and several months after Facebook’s initial announcement, representatives from the UK, US and Australian Governments sent an open letter to Facebook CEO Mark Zuckerberg requesting that the company abandon its integration plan.

Meta has pushed ahead, despite specific concerns that the expansion of encryption will see its messaging tools used by child trafficking and exploitation groups, and now, as it closes in on the next stage, Meta’s working to counter such claims, with Davis outlining six key elements which she believes will ensure safety within this push.


Davis has explained the various measures that Meta has added on this front, including:

  • Detection tools to stop adults from repeatedly setting up new profiles in an attempt to connect with minors they don’t know
  • Safety notices in Messenger, which provide tips on spotting suspicious behavior
  • The capacity to filter messages with selected keywords on Instagram
  • More filtering options in chat requests to help avoid unwanted contact
  • Improved education prompts to help detect spammers and scammers in messages
  • New processes to make it easier to report potential harm, including an option to select “involves a child”, which will then prioritize the report for review and action

Meta messaging security options

Which are all good, all important steps in detection, while Davis also notes that its reporting process “decrypts portions of the conversation that were previously encrypted and unavailable to us so that we can take immediate action if violations are detected”.

That’ll no doubt raise an eyebrow or two among WhatsApp users. But the broader concern is that such protections will facilitate usage by criminal groups, and reliance on self-reporting will do little to stop those networks from operating, at scale, under a more protected messaging framework within Meta’s app ecosystem.

Governments have called for ‘backdoor access’ to break Meta’s encryption for investigations into such activity, which Meta says is not possible and which it will not build into its future framework. The elements outlined by Davis do little to address this specific need, and without better detection capacity, it’s hard to see the groups opposed to Meta’s expanded encryption changing their stance and accepting that merging all of the platform’s DM options won’t also bring a rise in criminal activity organized via the same apps.


Of course, the counterargument could be that encryption is already available on WhatsApp, and that criminal activity of this type can already be undertaken within WhatsApp alone. But with a combined user count of 3.58 billion people per month across its family of apps, that’s a significantly broader interconnection of people than WhatsApp’s 2 billion active users, which, arguably, could open the door to far more potential harm and danger in this respect.

Really, there’s no right answer here. Privacy advocates will argue that encryption should be the standard, and that more people are actually more protected, on balance, by enhanced security measures. But there is also an undeniable risk in shielding even more criminal groups from detection.

Either way, right now, Meta seems determined to push ahead with the plan, which will weld all of its messaging tools together. That would also make it harder to break up its network if antitrust decisions don’t go Meta’s way and it’s pressed to sell off Instagram or WhatsApp as a result.

But expect more debate to be had, in more countries, as Meta continues to justify its decision, and regulatory and law enforcement groups seek more options to help maintain a level of accessibility for criminal investigations and detection.
