Community Standards Enforcement Report, November 2019 Edition

Today we’re publishing the fourth edition of our Community Standards Enforcement Report, detailing our work for Q2 and Q3 2019. We are now including metrics across ten policies on Facebook and metrics across four policies on Instagram.

These metrics include:

  • Prevalence: how often content that violates our policies was viewed
  • Content Actioned: how much content we took action on because it was found to violate our policies
  • Proactive Rate: of the content we took action on, how much was detected before someone reported it to us
  • Appealed Content: how much content people appealed after we took action
  • Restored Content: how much content was restored after we initially took action
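To make the metric definitions above concrete, here is a minimal sketch of how they relate to raw counts. The function names and sample figures are illustrative assumptions, not Facebook's internal tooling or data.

```python
# Illustrative sketch of the report's metric definitions. The counts below
# are made-up examples, not figures from the report itself.

def proactive_rate(found_proactively: int, total_actioned: int) -> float:
    """Share of actioned content detected before anyone reported it."""
    return found_proactively / total_actioned

def prevalence(violating_views: int, sampled_views: int) -> float:
    """How often viewed content violates a policy, per sampled views."""
    return violating_views / sampled_views

# Example: 970 of 1,000 actioned pieces were found proactively.
print(f"{proactive_rate(970, 1_000):.1%}")   # 97.0%
# Example: 4 violating views in a sample of 10,000 views.
print(f"{prevalence(4, 10_000):.2%}")        # 0.04%
```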

We also launched a new page today so people can view examples of how our Community Standards apply to different types of content and see where we draw the line.

Adding Instagram to the Report
For the first time, we are sharing data on how we are doing at enforcing our policies on Instagram. In this first report for Instagram, we are providing data on four policy areas: child nudity and child sexual exploitation; regulated goods — specifically, illicit firearm and drug sales; suicide and self-injury; and terrorist propaganda. The report does not include appeals and restores metrics for Instagram, as appeals on Instagram were only launched in Q2 of this year, but these will be included in future reports.

While we use the same proactive detection systems to find and remove harmful content across both Instagram and Facebook, the metrics may be different across the two services. There are many reasons for this, including: the differences in the apps’ functionalities and how they’re used – for example, Instagram doesn’t have links, re-shares in feed, Pages or Groups; the differing sizes of our communities; where people in the world use one app more than another; and where we’ve had greater ability to use our proactive detection technology to date. When comparing metrics in order to see where progress has been made and where more improvements are needed, we encourage people to see how metrics change, quarter-over-quarter, for individual policy areas within an app.

What Else Is New in the Fourth Edition of the Report

  • Data on suicide and self-injury: We are now detailing how we’re taking action on suicide and self-injury content. This area is both sensitive and complex, and we work with experts to ensure everyone’s safety is considered. We remove content that depicts or encourages suicide or self-injury, including certain graphic imagery and real-time depictions that experts tell us might lead others to engage in similar behavior. We place a sensitivity screen over content that doesn’t violate our policies but that may be upsetting to some, including things like healed cuts or other non-graphic self-injury imagery in a context of recovery. We also recently strengthened our policies around self-harm and made improvements to our technology to find and remove more violating content.
    • On Facebook, we took action on about 2 million pieces of content in Q2 2019, of which 96.1% we detected proactively, and we saw further progress in Q3 when we removed 2.5 million pieces of content, of which 97.3% we detected proactively.
    • On Instagram, we saw similar progress and removed about 835,000 pieces of content in Q2 2019, of which 77.8% we detected proactively, and we removed about 845,000 pieces of content in Q3 2019, of which 79.1% we detected proactively.
  • Expanded data on terrorist propaganda: Our Dangerous Individuals and Organizations policy bans all terrorist organizations from having a presence on our services. To date, we have identified a wide range of groups, based on their behavior, as terrorist organizations. Previous reports only included our efforts specifically against al Qaeda, ISIS and their affiliates, as we focused our measurement efforts on the groups understood to pose the broadest global threat. Now, we’ve expanded the report to include the actions we’re taking against all terrorist organizations. While the rate at which we detect and remove content associated with al Qaeda, ISIS and their affiliates on Facebook has remained above 99%, the rate at which we proactively detect content affiliated with any terrorist organization on Facebook is 98.5% and on Instagram is 92.2%. We will continue to invest in automated techniques to combat terrorist content and iterate on our tactics because we know bad actors will continue to change theirs.
  • Estimating prevalence for suicide and self-injury and regulated goods: In this report, we are adding prevalence metrics for content that violates our suicide and self-injury and regulated goods (illicit sales of firearms and drugs) policies for the first time. Because we care most about how often people may see content that violates our policies, we measure prevalence: the frequency at which people may see this content on our services. For the policy areas addressing the most severe safety concerns — child nudity and sexual exploitation of children, regulated goods, suicide and self-injury, and terrorist propaganda — the likelihood that people view content that violates these policies is very low, and we remove much of it before people see it. As a result, when we sample views of content in order to measure prevalence for these policy areas, we often do not find enough, or sometimes any, violating samples to reliably estimate a metric. Instead, we can estimate an upper limit of how often someone would see content that violates these policies. In Q3 2019, this upper limit was 0.04%, meaning that for each of these policies, out of every 10,000 views on Facebook or Instagram in Q3 2019, we estimate that no more than 4 of those views contained content that violated that policy.
    • It’s also important to note that when the prevalence is so low that we can only provide upper limits, this limit may change by a few hundredths of a percentage point between reporting periods, but changes that small do not mean there is a real difference in the prevalence of this content on the platform.
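The upper-limit arithmetic quoted above can be restated in a few lines of code; the 0.04% figure is the report's own, and the 10,000-view sample size simply mirrors the "out of every 10,000 views" framing.

```python
# The Q3 2019 upper-limit prevalence figure, restated as arithmetic.
upper_limit = 0.0004          # 0.04%, expressed as a fraction
views = 10_000                # "out of every 10,000 views"

max_violating = upper_limit * views
print(f"at most {max_violating:.0f} of every {views:,} views")
# at most 4 of every 10,000 views
```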

Progress to Help Keep People Safe
Across the most harmful types of content we work to combat, we’ve continued to strengthen our efforts to enforce our policies and bring greater transparency to our work. In addition to suicide and self-injury content and terrorist propaganda, the metrics for child nudity and sexual exploitation of children, as well as regulated goods, demonstrate this progress. The investments we’ve made in AI over the last five years continue to be a key factor in tackling these issues. In fact, recent advancements in this technology have helped us detect and remove violating content at a higher rate.

For child nudity and sexual exploitation of children, we improved our processes for adding violations to our internal database, enabling us to detect and remove additional instances of the same content shared on both Facebook and Instagram.

On Facebook:

  • In Q3 2019, we removed about 11.6 million pieces of content, up from Q1 2019 when we removed about 5.8 million. Over the last four quarters, we proactively detected over 99% of the content we removed for violating this policy.

While we are including data for Instagram for the first time, we have made progress increasing content actioned and the proactive rate in this area within the last two quarters:

  • In Q2 2019, we removed about 512,000 pieces of content, of which 92.5% we detected proactively.
  • In Q3, we saw greater progress and removed 754,000 pieces of content, of which 94.6% we detected proactively.

For our regulated goods policy prohibiting illicit firearm and drug sales, continued investments in our proactive detection systems and advancements in our enforcement techniques have allowed us to build on the progress from the last report.

On Facebook:

  • In Q3 2019, we removed about 4.4 million pieces of drug sale content, of which 97.6% we detected proactively — an increase from Q1 2019 when we removed about 841,000 pieces of drug sale content, of which 84.4% we detected proactively.
  • Also in Q3 2019, we removed about 2.3 million pieces of firearm sale content, of which 93.8% we detected proactively — an increase from Q1 2019 when we removed about 609,000 pieces of firearm sale content, of which 69.9% we detected proactively.

On Instagram:

  • In Q3 2019, we removed about 1.5 million pieces of drug sale content, of which 95.3% we detected proactively.
  • In Q3 2019, we removed about 58,600 pieces of firearm sale content, of which 91.3% we detected proactively.

New Tactics in Combating Hate Speech
Over the last two years, we’ve invested in proactive detection of hate speech so that we can detect this harmful content before people report it to us and sometimes before anyone sees it. Our detection techniques include text and image matching, which means we’re identifying images and identical strings of text that have already been removed as hate speech, and machine-learning classifiers that look at things like language, as well as the reactions and comments to a post, to assess how closely it matches common phrases, patterns and attacks that we’ve seen previously in content that violates our policies against hate.
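As a toy illustration of the text-matching idea described above, content whose normalized text is identical to something already removed can be flagged without waiting for a report. The hashing scheme and normalization here are assumptions made for the sketch, not Facebook's actual matching system.

```python
import hashlib

# Toy version of "text matching": flag text identical (after simple
# normalization) to content already removed. Purely illustrative.

removed_hashes = set()

def _fingerprint(text: str) -> str:
    # Case- and whitespace-insensitive fingerprint of the text.
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def record_removal(text: str) -> None:
    """Remember the fingerprint of content removed by human reviewers."""
    removed_hashes.add(_fingerprint(text))

def matches_known_violation(text: str) -> bool:
    """Proactively flag re-posts of already-removed text."""
    return _fingerprint(text) in removed_hashes

record_removal("Example of   REMOVED violating text")
print(matches_known_violation("example of removed violating text"))  # True
print(matches_known_violation("an unrelated, benign sentence"))      # False
```

In practice such exact matching is only the first layer; the machine-learning classifiers the post describes handle the near-identical and paraphrased cases that a hash lookup cannot.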

Initially, we used these systems to proactively detect potential hate speech violations and send them to our content review teams, since people can assess context in ways AI cannot. Starting in Q2 2019, thanks to continued progress in our systems’ abilities to correctly detect violations, we began removing some posts automatically, but only when content is either identical or near-identical to text or images previously removed by our content review team as violating our policy, or where content very closely matches common attacks that violate our policy. We only do this in select instances, and it has only been possible because our automated systems have been trained on hundreds of thousands, if not millions, of different examples of violating content and common attacks. In all other cases when our systems proactively detect potential hate speech, the content is still sent to our review teams to make a final determination. With these evolutions in our detection systems, our proactive rate has climbed to 80%, from 68% in our last report, and we’ve increased the volume of content we find and remove for violating our hate speech policy.

While we are pleased with this progress, these technologies are not perfect and we know that mistakes can still happen. That’s why we continue to invest in systems that enable us to improve our accuracy in removing content that violates our policies while safeguarding content that discusses or condemns hate speech. Similar to how we review decisions made by our content review team in order to monitor the accuracy of our decisions, our teams routinely review removals by our automated systems to make sure we are enforcing our policies correctly. We also continue to review content again when people appeal and tell us we made a mistake in removing their post.

Updating our Metrics
Since our last report, we have improved the ways we measure how much content we take action on after identifying an issue in our accounting this summer. In this report, we are updating metrics we previously shared for content actioned, proactive rate, content appealed and content restored for the periods Q3 2018 through Q1 2019.

During those quarters, the issue with our accounting processes did not impact how we enforced our policies or how we informed people about those actions; it only impacted how we counted the actions we took. For example, if we find that a post containing one photo violates our policies, we want our metric to reflect that we took action on one piece of content — not two separate actions for removing the photo and the post. However, in July 2019, we found that the systems logging and counting these actions did not correctly log the actions taken. This was largely due to the difficulty of counting multiple actions that take place within a few milliseconds without missing, or overstating, any of the individual actions taken.

We’ll continue to refine the processes we use to measure our actions and build a robust system to ensure the metrics we provide are accurate. We share more details about these processes here.

The post Community Standards Enforcement Report, November 2019 Edition appeared first on About Facebook.

Facebook Newsroom

Best Practices for Designing Great Messaging Experiences on Messenger

We recently reminded our community of the upcoming policy changes to the Messenger platform that will go into effect on March 4, 2020. These policy changes were designed to improve the messaging experience between people and businesses by driving timely and personally relevant conversations — prioritizing conversations started by people and related follow-up communications.

To help businesses adapt to these new policy changes, here are some best practices to adopt when designing Messenger experiences:

1. Respond quickly and set customer expectations on response times

People expect businesses to respond quickly and provide timely updates. We have found a strong correlation between responsiveness and successful business outcomes.

2. Make it short and sweet

Make sure to communicate your key points succinctly and early on in your message. This aligns with people’s expectations for messaging as a channel and increases readability. Messages that are short and to the point can also be read clearly in message previews.

3. Leverage Messenger features to send high-value messages outside the 24-hour standard messaging window

Successful businesses know the options available to send messages outside the standard messaging window and use them effectively.

  • Message tags – use tags to send personal, timely and important non-promotional messages. Businesses can use tags to send account updates, post purchase updates, confirmed event updates, and human agent responses.
  • One-Time Notification – allows a Page to ask a user for permission to send one follow-up message after the 24-hour messaging window has ended. This can be used for cases such as back-in-stock alerts, where a person has explicitly asked the business to send a notification. Make sure that the message matches the topic the user agreed to receive the notification for, and that the message is fully communicated on the first attempt. You may also want to prompt people to interact with your notification in order to restart the standard messaging window.
  • Sponsored Messages – use sponsored messages for broadcast promotional updates to customers you’ve interacted with in Messenger. Sponsored messages support Facebook ads targeting and have built-in integrity controls to help us safeguard the user experience in Messenger.
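For developers, tagged messages like those described above go through the Messenger Send API. The sketch below builds such a request payload; the endpoint version, recipient PSID and message text are placeholder assumptions, and a real call also requires a Page access token, so consult the Messenger Platform documentation for current details.

```python
import json

# Sketch of a Send API payload for a tagged, non-promotional message sent
# outside the 24-hour window. The PSID and text are placeholders.

SEND_API = "https://graph.facebook.com/v6.0/me/messages"

def tagged_message(psid: str, text: str, tag: str) -> dict:
    """Build the JSON body for a message-tag send."""
    return {
        "recipient": {"id": psid},
        "messaging_type": "MESSAGE_TAG",   # required when a tag is used
        "tag": tag,                        # e.g. POST_PURCHASE_UPDATE
        "message": {"text": text},
    }

payload = tagged_message("<PSID>", "Your order has shipped.",
                         "POST_PURCHASE_UPDATE")
print(json.dumps(payload, indent=2))
# A real send would POST this JSON to SEND_API with an access_token parameter.
```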

4. Focus on customer value

Ensure your messages clearly communicate customer value – especially notifications sent outside the standard messaging window. Sending out low value messages makes it more likely that customers will tune out or block messages from your business altogether. Businesses using Messenger’s platform should consider adjusting push parameters for valuable messages that don’t require immediate action.

5. Provide audiences with options to choose from

Consider giving your audience additional control over the type of content they will receive via Messenger. For example, you may allow the user to select specific types of account alerts or post-purchase updates provided they comply with the Messenger platform policies.

We believe following these simple guidelines will help ensure that a business’s messaging efforts are effective and drive outcomes, while providing customers with pleasant and valuable interactions that encourage them to continue engaging with the business on Messenger.

Facebook Developers

Two Billion Users — Connecting the World Privately

We are excited to share that, as of today, WhatsApp supports more than 2 billion users around the world.

Mothers and fathers can reach their loved ones no matter where they are. Brothers and sisters can share moments that matter. Coworkers can collaborate, and businesses can grow by easily connecting with their customers.

Private conversations that once were only possible face-to-face can now take place across great distances through instant chats and video calling. There are so many significant and special moments that take place over WhatsApp and we are humbled and honored to reach this milestone.

We know that the more we connect, the more we have to protect. As we conduct more of our lives online, protecting our conversations is more important than ever.

That is why every private message sent using WhatsApp is secured with end-to-end encryption by default. Strong encryption acts like an unbreakable digital lock that keeps the information you send over WhatsApp secure, helping protect you from hackers and criminals. Messages are only kept on your phone, and no one in between can read your messages or listen to your calls, not even us. Your private conversations stay between you.

Strong encryption is a necessity in modern life. We will not compromise on security because that would make people less safe. For even more protection, we work with top security experts, employ industry leading technology to stop misuse as well as provide controls and ways to report issues — without sacrificing privacy.

WhatsApp started with the goal of creating a service that is simple, reliable and private for people to use. Today we remain as committed as when we started, to help connect the world privately and to protect the personal communication of 2 billion users all over the world.


Facebook Newsroom

Facebook, Instagram and YouTube: Government forcing companies to protect you online

Although many of the details have still to be confirmed, it’s likely the new rules will apply to Facebook, Twitter, WhatsApp, Snapchat and Instagram.

We often talk about the risks you might find online and whether social media companies need to do more to make sure you don’t come across inappropriate content.

Well, now media regulator Ofcom is getting new powers to make sure companies protect both adults and children from harmful content online.

The media regulator makes sure everyone in media, including the BBC, is keeping to the rules.

Harmful content refers to things like violence, terrorism, cyber-bullying and child abuse.

The new rules will likely apply to Facebook – who also own Instagram and WhatsApp – Snapchat, Twitter, YouTube and TikTok, and will include things like comments, forums and video-sharing.

Platforms will need to ensure that illegal content is removed quickly, and may also have to “minimise the risks” of it appearing at all.

These plans have been talked about for a while now.

The idea of new rules to tackle ‘online harms’ was originally set out by the Department for Digital, Culture, Media and Sport in May 2018.

The government has now decided to give Ofcom these new powers following research called the ‘Online Harms consultation’, carried out in the UK in 2019.

Plans allowing Ofcom to take control of social media were first spoken of in August last year.

The government will officially announce these new powers for Ofcom on Wednesday 12 February.

But we won’t know right away exactly what new rules will be introduced, or what will happen to tech or social media companies who break the new rules.

Children’s charity the NSPCC has welcomed the news. It says trusting companies to keep children safe online has failed.

“Too many times social media companies have said: ‘We don’t like the idea of children being abused on our sites, we’ll do something, leave it to us,'” said chief executive Peter Wanless.

“Thirteen self-regulatory attempts to keep children safe online have failed.”


Back in February 2018, YouTube said it was “very sorry” after Newsround found several videos not suitable for children on the YouTube Kids app.

The UK government’s Digital Secretary, Baroness Nicky Morgan, said: “There are many platforms who ideally would not have wanted regulation, but I think that’s changing.”

“I think they understand now that actually regulation is coming.”

In many countries, social media platforms are allowed to regulate themselves, as long as they stick to local laws on illegal material.

But some, including Germany and Australia, have introduced strict rules to force social media platforms to do more to protect users online.

In Australia, social media companies have to pay big fines and bosses can even be sent to prison if they break the rules.

For more information and tips about staying safe online, go to BBC Own It, and find out how to make the internet a better place for all of us.
