Facebook ‘Supreme Court’ Orders Social Network To Restore 4 Posts In First Rulings



Facebook created the panel of experts to review the hardest calls the social network makes about what it does and does not allow users to post.

Jeff Chiu/AP


Updated at 3:16 p.m. ET

Facebook’s oversight board on Thursday directed the company to restore several posts that the social network had removed for breaking its rules on hate speech, harmful misinformation and other matters.

The decisions are the first rulings for the board, which Facebook created last year as a kind of supreme court, casting the final votes on the hardest calls the company makes about what it does and does not allow users to post.

The rulings announced on Thursday do not include the most high-profile and high-stakes case on the board’s docket: Facebook’s suspension of former President Donald Trump from both its namesake platform and Instagram, which the company owns. Facebook banned Trump earlier this month, after a mob of his supporters stormed the U.S. Capitol.

The Trump case, which the board has 90 days to consider since receiving it last week, is seen as a crucial test of the panel’s legitimacy. The board will begin taking public comment on it on Friday.

Still, Thursday’s decisions offer an important first glimpse into how the board sees its governance role, and they indicate skepticism among members about how Facebook has communicated and enforced several key policies.

“We believe the board has the ability to provide a critical independent check on how Facebook moderates content and to begin reshaping the company’s policies over the long term,” Helle Thorning-Schmidt, a former prime minister of Denmark who is one of the board’s four co-chairs, told reporters on a press call on Thursday.


In its first batch of decisions, the board overturned Facebook’s post removals in four of five cases:

  • It overruled Facebook’s removal of a post from a user in France criticizing the government for withholding an alleged “cure” for COVID-19. Facebook had removed the post because it said it could lead to imminent harm, but the board said the user’s comments were directed at opposing government policy. “Facebook had not demonstrated the post would rise to the level of imminent harm, as required by its own rule in the Community Standards,” the board said. It also recommended that the company create a new policy specifically about health misinformation, “consolidating and clarifying the existing rules in one place.”
  • It overturned Facebook’s decision in a case in which the company took down a post from a user in North America allegedly quoting Joseph Goebbels, a Nazi official on Facebook’s list of “dangerous individuals.” The board found the post “was a criticism, not a celebration of the attitude exemplified by the alleged Goebbels quote,” board co-chair Michael McConnell, director of Stanford Law School’s Constitutional Law Center, told reporters.
  • In a case dealing with nudity, the board overturned the removal of an Instagram post promoting breast cancer awareness in Brazil that showed women’s nipples. The board pointed out that Facebook’s nudity rules include an exception for posts about breast cancer. Facebook restored the post back in December, after the board announced it would be reviewing the case.

    “I think this is a really good example of how the mere prospect of a board review has already begun to alter how Facebook acts,” McConnell said.

    The board also sounded the alarm that the Instagram post was initially removed by automated systems. “The incorrect removal of this post indicates the lack of proper human oversight which raises human rights concerns,” it said in its decision. It recommended that Facebook tell users when their posts have been taken down by automated systems and that they can appeal those decisions to a person.

  • The final two cases dealt with hate speech. In the first, the board overruled the removal of a post from a Facebook user in Myanmar that Facebook said violated its rules against hate speech by disparaging Muslims as psychologically inferior. While the board found the post “pejorative,” it concluded in its decision that, taking the full context into account, the post did not “advocate hatred or intentionally incite any form of imminent harm.”

    The board acknowledged the case was fraught because Facebook has been criticized for its role in the genocide of the country’s Muslim minority.

    Muslim Advocates, a civil rights group, slammed the ruling. “Facebook’s Oversight Board bent over backwards to excuse hate in Myanmar — a country where Facebook has been complicit in a genocide against Muslims,” said Eric Naing, a spokesperson for the group. “Instead of taking meaningful action to curb dangerous hate speech on the platform, Facebook punted responsibility to a third party board that used laughable technicalities to protect anti-Muslim hate content that contributes to genocide.”

  • The board upheld Facebook’s removal of another post for breaking hate-speech rules, however. It said the company was right to remove a post that used a slur against Azerbaijanis. “The context in which the term was used makes clear it was meant to dehumanize its target,” the board said in its decision.

Facebook has said it will abide by the board’s decisions and has already reinstated the posts the board said shouldn’t have come down.

The board also issued recommendations that Facebook be more transparent, including explaining to users whose posts are removed which rules they violated, and providing clearer definitions for policies on issues such as health misinformation and dangerous individuals. Facebook has 30 days to respond to the board.

“We believe that the board included some important suggestions that we will take to heart. Their recommendations will have a lasting impact on how we structure our policies,” Monika Bickert, Facebook’s vice president of content policy, said in a blog post.

Evelyn Douek, a Harvard Law School lecturer who studies online content moderation, said the board’s inaugural rulings “are a true shot across the bow” because they take aim not just at Facebook’s handling of individual posts but at its broader policies and enforcement.

“Now the question is, how seriously will Facebook take those recommendations and how openly will it engage with what the board said?” Douek said.

The board is funded by Facebook through an independent trust and is made up of 20 experts from around the world. They include specialists in law and human rights, a Nobel Peace Prize laureate from Yemen, the vice president of the libertarian Cato Institute and several former journalists.

Each case was reviewed by a group of five randomly selected members, with the final decision approved by the full board.

Editor’s note: Facebook is among NPR’s financial supporters.



Facebook Adds New Trend Insights in Creator Studio, Which Could Help Shape Your Posting Strategy





Facebook’s looking to provide more content insight within Creator Studio with the rollout of a new ‘Inspiration Hub’ element, which highlights trending content and hashtags within categories related to your business Page.

Facebook Inspiration Hub

As you can see in these screenshots, posted by social media expert Matt Navarra, when it becomes available to you, you’ll be able to access the new Inspiration Hub from the Home tab in Creator Studio.

At the right side of the screen, you can see the first of the new insights, with trending hashtags and videos from the last 24 hours, posted by Pages similar to yours, displayed above a ‘See more’ prompt.

When you tap through to the new hub, you’ll have a range of additional filters to check out trending content from across Facebook, including Page category, content type, region, and more.

Facebook Inspiration Hub

That could be hugely valuable in learning what Facebook users are responding to, and what people within your target market are engaging with in the app.

The Hub also includes insights into trending hashtags, within your chosen timeframe, which may further assist in tapping into trending discussions.

Facebook Inspiration Hub

How valuable hashtags are on Facebook is still up for debate, but you can also filter the displayed results by platform to surface Instagram hashtag trends as well, which could help in maximizing your reach.

Much of this type of info has been available within CrowdTangle, Facebook’s analytics platform for journalists, for some time, but not everyone can access CrowdTangle data, which could make this an even more valuable proposition for many marketers.

See also  [OPINION] When Facebook takes down a journalist's account without due process - Rappler

Of course, overall performance really relates to your own creative, and thinking through the action that you want your audience to take when reading your posts. But in terms of detecting new content trends, including hashtag usage, caption length, videos versus image posts, and more, there’s a lot that could be gleaned from these tools and filters.

It’s a significant analytics addition – we’ve asked Facebook for more info on the rollout of the new option, and whether it’s already beyond test mode, etc. We’ll update this post if/when we hear back.



Meta Updates Policy on Cryptocurrency Ads, Opening the Door to More Crypto Promotions in its Apps





With cryptocurrencies gaining momentum, in line with the broader Web 3.0 push, Meta has today announced an update to its ad policies around cryptocurrencies, which will open the door to more crypto advertisers on its platforms.

As per Meta:

“Starting today, we’re updating our eligibility criteria for running ads about cryptocurrency on our platform by expanding the number of regulatory licenses we accept from three to 27. We are also making the list of eligible licenses publicly available on our policy page.”

Essentially, in order to run any crypto ads in Meta’s apps, an advertiser needs to adhere to regional licensing provisions, which vary by nation. With crypto becoming more accepted, Meta’s now looking to enable more crypto companies to publish ads on its platform, which will provide expanded opportunity for recognized crypto providers to promote their products, while also enabling Meta to make more money from crypto ads.

“Previously, advertisers could submit an application and include information such as any licenses they obtained, whether they are traded on a public stock exchange, and other relevant public background on their business. However, over the years the cryptocurrency landscape has matured and stabilized and experienced an increase in government regulation, which has helped to set clearer responsibilities and expectations for the industry. Going forward, we will be moving away from using a variety of signals to confirm eligibility and instead requiring one of these 27 licenses.”

Is that a good move? Well, as Meta notes, the crypto marketplace is maturing, and there’s now much wider recognition of cryptocurrencies as a legitimate form of payment. But most local financial regulators still don’t oversee them, which reduces transaction protection and oversight, and brings a level of risk to the process.

See also  Facebook flags post from non-profit that supports police officers

But then again, all crypto providers are required to clearly outline any such risks, and most also highlight the ongoing market volatility in the space. This expanded level of overall transparency means that most people who are investing in crypto have at least some awareness of these elements, which likely does diminish the risk factor in such promotions within Meta’s apps.

But as crypto adoption continues to expand, more of these risks will become apparent. Much of the crypto community is built on good faith and a shared sense of building something new, but there are questions as to how well that can hold at scale, and what it will mean for evolving scams and criminal activity, especially as more vulnerable investors are brought into the mix.

Broader promotional capacity through Meta’s apps will certainly help to boost exposure in this respect – though again, the relative risk factors are lessened by expanded regulatory oversight outside of the company.

You can read more about Meta’s expanded crypto ad regulations here.



Meta Outlines Evolving Safety Measures in Messaging as it Seeks to Allay Fears Around the Expansion of E2E Encryption





Amid rising concern about Meta’s move to roll out end-to-end encryption by default to all of its messaging apps, Meta’s Global Head of Safety Antigone Davis has today sought to provide a level of reassurance that Meta is indeed aware of the risks and dangers that such protection can pose, and that it is building safeguards into its processes to protect against potential misuse.

The measures outlined don’t, however, address all the issues raised by analysts and safety groups around the world.

As a quick recap, back in 2019, Facebook announced its plan to merge the messaging functionalities of Messenger, Instagram and WhatsApp, which would then provide users with a universal inbox, with all of their message threads from each app accessible on any of the platforms.

The idea is that this will simplify cross-connection, while also opening the door to more opportunities for brands to connect with users in the messaging tool of their choice – but it also, inherently, means that the data protection method for its messaging tools must rise to the level of WhatsApp, its most secure messaging platform, which already includes E2E encryption as the default.
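To illustrate what the end-to-end property actually guarantees, here is a toy sketch in Python. The XOR “cipher” here is a deliberately simplified stand-in for the real key-exchange and authenticated-encryption machinery (e.g. the Signal Protocol WhatsApp uses); the point it demonstrates is that only the two endpoints hold the key, so any server relaying the message sees only ciphertext:

```python
import os

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Toy stand-in for a real cipher: XOR each message byte with a key byte.
    return bytes(k ^ b for k, b in zip(key, data))

message = b"meet at noon"

# Stands in for a key both endpoints derive (e.g. via Diffie-Hellman);
# crucially, the relay server never holds it.
shared_key = os.urandom(len(message))

ciphertext = xor_cipher(shared_key, message)    # sender encrypts
# ...the server relays `ciphertext` but, lacking the key, cannot read it...
recovered = xor_cipher(shared_key, ciphertext)  # recipient decrypts

assert recovered == message
```

This is why “backdoor access” requests are so contentious: without the endpoint keys, the platform itself has nothing readable to hand over.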

Various child safety experts raised the alarm, and several months after Facebook’s initial announcement, representatives from the UK, US and Australian Governments sent an open letter to Facebook CEO Mark Zuckerberg requesting that the company abandon its integration plan.

Meta has pushed ahead, despite specific concerns that the expansion of encryption will see its messaging tools used by child trafficking and exploitation groups, and now, as it closes in on the next stage, Meta’s working to counter such claims, with Davis outlining six key elements which she believes will ensure safety within this push.

See also  Myanmar junta blocks Facebook to shut down dissent as West increases pressure

Davis has explained the various measures that Meta has added on this front, including:

  • Detection tools to stop adults from repeatedly setting up new profiles in an attempt to connect with minors they don’t know
  • Safety notices in Messenger, which provide tips on spotting suspicious behavior
  • The capacity to filter messages with selected keywords on Instagram
  • More filtering options in chat requests to help avoid unwanted contact
  • Improved education prompts to help detect spammers and scammers in messages
  • New processes to make it easier to report potential harm, including an option to select “involves a child”, which will then prioritize the report for review and action

Meta messaging security options

These are all good, important steps in detection, and Davis also notes that Meta’s reporting process “decrypts portions of the conversation that were previously encrypted and unavailable to us so that we can take immediate action if violations are detected”.

That’ll no doubt raise an eyebrow or two among WhatsApp users – but the broader concern is that such protections will facilitate use by criminal groups, and reliance on self-reporting will do little to stop those networks operating, at scale, under a more protected messaging framework within Meta’s app ecosystem.

Governments have called for ‘backdoor access’ to break Meta’s encryption for investigations into such activity, which Meta says is not possible and will not be built into its future framework. The elements outlined by Davis do little to address this specific need, and without better detection capacity, it’s hard to see any of the groups opposed to Meta’s expanded encryption changing their stance, and accepting that the merging of all of the platform’s DM options will not also see a rise in criminal activity organized via the same apps.

See also  Attack Yogi & BJP, all-Hindi posts — How Priyanka Gandhi's using Facebook to reach voters

Of course, the counterargument could be that encryption is already available on WhatsApp, and that criminal activity of this type can already be undertaken within WhatsApp alone. But with a combined user count of 3.58 billion people per month across its family of apps, that’s a significantly broader interconnection of people than WhatsApp’s 2 billion active users, which, arguably, could open the door to far more potential harm and danger in this respect.

Really, there’s no right answer here. Privacy advocates will argue that encryption should be the standard, and that more people are actually more protected, on balance, by enhanced security measures. But there is also an undeniable risk in shielding even more criminal groups from detection.

Either way, right now, Meta seems determined to push ahead with the plan, which will weld all of its messaging tools together, and also make it more difficult to break up its network if antitrust decisions don’t go Meta’s way and it’s pressed to sell off Instagram or WhatsApp as a result.

But expect more debate to be had, in more countries, as Meta continues to justify its decision, and regulatory and law enforcement groups seek more options to help maintain a level of accessibility for criminal investigations and detection.
