Community Standards Enforcement Report, November 2019 Edition

Today we’re publishing the fourth edition of our Community Standards Enforcement Report, detailing our work for Q2 and Q3 2019. The report now includes metrics across ten policies on Facebook and four policies on Instagram.

These metrics include:

  • Prevalence: how often content that violates our policies was viewed
  • Content Actioned: how much content we took action on because it was found to violate our policies
  • Proactive Rate: of the content we took action on, how much was detected before someone reported it to us
  • Appealed Content: how much content people appealed after we took action
  • Restored Content: how much content was restored after we initially took action
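
To make the count-based metrics above concrete, here is a minimal Python sketch with made-up numbers (these are not figures from the report); prevalence is measured differently, from sampled views rather than counts of actioned content:

    # Illustrative only: hypothetical counts, not figures from the report.
    content_actioned = 1_000_000    # pieces of content we took action on
    found_proactively = 975_000     # actioned before anyone reported it to us
    content_appealed = 40_000       # actioned content that people appealed
    content_restored = 12_000       # content restored after the initial action

    proactive_rate = found_proactively / content_actioned
    appeal_rate = content_appealed / content_actioned
    restore_rate = content_restored / content_actioned

    print(f"Proactive rate: {proactive_rate:.1%}")  # 97.5%
    print(f"Appealed:       {appeal_rate:.1%}")     # 4.0%
    print(f"Restored:       {restore_rate:.1%}")    # 1.2%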

We also launched a new page today so people can view examples of how our Community Standards apply to different types of content and see where we draw the line.

Adding Instagram to the Report
For the first time, we are sharing data on how we are doing at enforcing our policies on Instagram. In this first report for Instagram, we are providing data on four policy areas: child nudity and child sexual exploitation; regulated goods — specifically, illicit firearm and drug sales; suicide and self-injury; and terrorist propaganda. The report does not include appeals and restores metrics for Instagram, as appeals on Instagram were only launched in Q2 of this year, but these will be included in future reports.

While we use the same proactive detection systems to find and remove harmful content across both Instagram and Facebook, the metrics may be different across the two services. There are many reasons for this, including: the differences in the apps’ functionalities and how they’re used – for example, Instagram doesn’t have links, re-shares in feed, Pages or Groups; the differing sizes of our communities; where people in the world use one app more than another; and where we’ve had greater ability to use our proactive detection technology to date. When comparing metrics in order to see where progress has been made and where more improvements are needed, we encourage people to see how metrics change, quarter-over-quarter, for individual policy areas within an app.

What Else Is New in the Fourth Edition of the Report

  • Data on suicide and self-injury: We are now detailing how we’re taking action on suicide and self-injury content. This area is both sensitive and complex, and we work with experts to ensure everyone’s safety is considered. We remove content that depicts or encourages suicide or self-injury, including certain graphic imagery and real-time depictions that experts tell us might lead others to engage in similar behavior. We place a sensitivity screen over content that doesn’t violate our policies but that may be upsetting to some, including things like healed cuts or other non-graphic self-injury imagery in a context of recovery. We also recently strengthened our policies around self-harm and made improvements to our technology to find and remove more violating content.
    • On Facebook, we took action on about 2 million pieces of content in Q2 2019, of which 96.1% we detected proactively, and we saw further progress in Q3 when we removed 2.5 million pieces of content, of which 97.3% we detected proactively.
    • On Instagram, we saw similar progress and removed about 835,000 pieces of content in Q2 2019, of which 77.8% we detected proactively, and we removed about 845,000 pieces of content in Q3 2019, of which 79.1% we detected proactively.
  • Expanded data on terrorist propaganda: Our Dangerous Individuals and Organizations policy bans all terrorist organizations from having a presence on our services. To date, we have identified a wide range of groups, based on their behavior, as terrorist organizations. Previous reports only included our efforts specifically against al Qaeda, ISIS and their affiliates as we focused our measurement efforts on the groups understood to pose the broadest global threat. Now, we’ve expanded the report to include the actions we’re taking against all terrorist organizations. While the rate at which we detect and remove content associated with Al Qaeda, ISIS and their affiliates on Facebook has remained above 99%, the rate at which we proactively detect content affiliated with any terrorist organization on Facebook is 98.5% and on Instagram is 92.2%. We will continue to invest in automated techniques to combat terrorist content and iterate on our tactics because we know bad actors will continue to change theirs.
  • Estimating prevalence for suicide and self-injury and regulated goods: In this report, we are adding prevalence metrics for content that violates our suicide and self-injury and regulated goods (illicit sales of firearms and drugs) policies for the first time. Because we care most about how often people may see content that violates our policies, we measure prevalence, or the frequency at which people may see this content on our services. For the policy areas addressing the most severe safety concerns — child nudity and sexual exploitation of children, regulated goods, suicide and self-injury, and terrorist propaganda — the likelihood that people view content that violates these policies is very low, and we remove much of it before people see it. As a result, when we sample views of content in order to measure prevalence for these policy areas, many times we do not find enough, or sometimes any, violating samples to reliably estimate a metric. Instead, we can estimate an upper limit of how often someone would see content that violates these policies. In Q3 2019, this upper limit was 0.04%, meaning that for each of these policies, out of every 10,000 views on Facebook or Instagram in Q3 2019, we estimate that no more than 4 of those views contained content that violated that policy (a rough sketch of this kind of upper-bound estimate follows this list).
    • It’s also important to note that when the prevalence is so low that we can only provide upper limits, this limit may change by a few hundredths of a percentage point between reporting periods, but changes that small do not mean there is a real difference in the prevalence of this content on the platform.
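
The report does not describe the exact estimator behind these upper limits. As a rough illustration of how an upper bound can be put on a rate that rarely or never appears in a sample, the sketch below uses a standard one-sided binomial bound (the so-called rule of three when zero violating views are found); the sample size and confidence level are assumptions, not values from the report:

    import math

    def prevalence_upper_bound(violating_views: int, sampled_views: int,
                               confidence: float = 0.95) -> float:
        """Rough one-sided upper bound on the true rate of violating views.

        Illustrative only; with zero observed violations this reduces to the
        'rule of three': roughly 3 / sample size at 95% confidence.
        """
        if violating_views == 0:
            return -math.log(1.0 - confidence) / sampled_views
        # Crude normal-approximation bound for small non-zero counts.
        p_hat = violating_views / sampled_views
        z = 1.645  # one-sided 95% quantile
        return p_hat + z * math.sqrt(p_hat * (1.0 - p_hat) / sampled_views)

    # Zero violating views in a hypothetical sample of 10,000 views gives an
    # upper bound of about 0.03% -- the same order as the 0.04% limit above.
    print(f"{prevalence_upper_bound(0, 10_000):.4%}")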

Progress to Help Keep People Safe
Across the most harmful types of content we work to combat, we’ve continued to strengthen our efforts to enforce our policies and bring greater transparency to our work. In addition to suicide and self-injury content and terrorist propaganda, the metrics for child nudity and sexual exploitation of children, as well as regulated goods, demonstrate this progress. The investments we’ve made in AI over the last five years continue to be a key factor in tackling these issues. In fact, recent advancements in this technology have helped increase the rate at which we detect and remove violating content.

For child nudity and sexual exploitation of children, we improved our processes for adding violations to our internal database, enabling us to detect and remove additional instances of the same content shared on both Facebook and Instagram.
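
The report doesn’t detail how this internal database works; the sketch below only illustrates the general idea of matching newly uploaded media against a bank of previously confirmed violations. A real media-matching system would use perceptual hashes that tolerate re-encoding and small edits rather than the exact byte hash used here, so treat every name and detail as an assumption:

    import hashlib

    # Hypothetical bank of fingerprints for previously confirmed violations.
    known_violation_hashes: set[str] = set()

    def fingerprint(media_bytes: bytes) -> str:
        # Exact hash for illustration; production matching would use a
        # perceptual hash so near-duplicates (resized, re-encoded) still match.
        return hashlib.sha256(media_bytes).hexdigest()

    def record_violation(media_bytes: bytes) -> None:
        known_violation_hashes.add(fingerprint(media_bytes))

    def is_known_violation(media_bytes: bytes) -> bool:
        return fingerprint(media_bytes) in known_violation_hashes

    # Once a piece of content is confirmed as violating, identical re-uploads
    # can be detected and removed without a fresh review.
    record_violation(b"confirmed violating image bytes")
    print(is_known_violation(b"confirmed violating image bytes"))  # True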

On Facebook:

  • In Q3 2019, we removed about 11.6 million pieces of content, up from Q1 2019 when we removed about 5.8 million. Over the last four quarters, we proactively detected over 99% of the content we remove for violating this policy.

Although this is the first time we are including data for Instagram, we have made progress in this area over the last two quarters, increasing both content actioned and the proactive rate:

  • In Q2 2019, we removed about 512,000 pieces of content, of which 92.5% we detected proactively.
  • In Q3, we saw greater progress and removed 754,000 pieces of content, of which 94.6% we detected proactively.

For our regulated goods policy prohibiting illicit firearm and drug sales, continued investments in our proactive detection systems and advancements in our enforcement techniques have allowed us to build on the progress from the last report.

On Facebook:

  • In Q3 2019, we removed about 4.4 million pieces of drug sale content, of which 97.6% we detected proactively — an increase from Q1 2019 when we removed about 841,000 pieces of drug sale content, of which 84.4% we detected proactively.
  • Also in Q3 2019, we removed about 2.3 million pieces of firearm sales content, of which 93.8% we detected proactively — an increase from Q1 2019 when we removed about 609,000 pieces of firearm sale content, of which 69.9% we detected proactively.

On Instagram:

  • In Q3 2019, we removed about 1.5 million pieces of drug sale content, of which 95.3% we detected proactively.
  • In Q3 2019, we removed about 58,600 pieces of firearm sales content, of which 91.3% we detected proactively.

New Tactics in Combating Hate Speech
Over the last two years, we’ve invested in proactive detection of hate speech so that we can detect this harmful content before people report it to us and sometimes before anyone sees it. Our detection techniques include text and image matching, which means we’re identifying images and identical strings of text that have already been removed as hate speech, and machine-learning classifiers that look at things like language, as well as the reactions and comments to a post, to assess how closely it matches common phrases, patterns and attacks that we’ve seen previously in content that violates our policies against hate.

Initially, we used these systems to proactively detect potential hate speech violations and send them to our content review teams, since people can better assess context where AI cannot. Starting in Q2 2019, thanks to continued progress in our systems’ abilities to correctly detect violations, we began removing some posts automatically, but only when content is either identical or near-identical to text or images previously removed by our content review team as violating our policy, or where content very closely matches common attacks that violate our policy. We only do this in select instances, and it has only been possible because our automated systems have been trained on hundreds of thousands, if not millions, of different examples of violating content and common attacks. In all other cases when our systems proactively detect potential hate speech, the content is still sent to our review teams to make a final determination. With these evolutions in our detection systems, our proactive rate has climbed to 80%, from 68% in our last report, and we’ve increased the volume of content we find and remove for violating our hate speech policy.
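
As a rough illustration of the “identical or near-identical text” matching described above (the production classifiers also use images, reactions, and comments, and are far more involved), here is a minimal sketch that compares a new post against previously removed posts using Jaccard similarity over word shingles. The threshold and helper names are assumptions for illustration only:

    def shingles(text: str, n: int = 3) -> set:
        """Lowercase word n-grams used as a cheap near-duplicate signature."""
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

    def jaccard(a: set, b: set) -> float:
        return len(a & b) / len(a | b) if (a | b) else 0.0

    # Hypothetical store of signatures for posts already removed as hate speech.
    removed_signatures: list = []

    def matches_removed_content(post: str, threshold: float = 0.9) -> bool:
        sig = shingles(post)
        return any(jaccard(sig, prev) >= threshold for prev in removed_signatures)

    # Near-identical re-posts of removed text score close to 1.0 and could be
    # routed to automatic removal; everything else still goes to human review.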


While we are pleased with this progress, these technologies are not perfect and we know that mistakes can still happen. That’s why we continue to invest in systems that enable us to improve our accuracy in removing content that violates our policies while safeguarding content that discusses or condemns hate speech. Similar to how we review decisions made by our content review team in order to monitor the accuracy of our decisions, our teams routinely review removals by our automated systems to make sure we are enforcing our policies correctly. We also continue to review content again when people appeal and tell us we made a mistake in removing their post.


Updating our Metrics
Since our last report, we have improved the ways we measure how much content we take action on after identifying an issue in our accounting this summer. In this report, we are updating metrics we previously shared for content actioned, proactive rate, content appealed and content restored for the periods Q3 2018 through Q1 2019.

During those quarters, the issue with our accounting processes did not impact how we enforced our policies or how we informed people about those actions; it only impacted how we counted the actions we took. For example, if we find that a post containing one photo violates our policies, we want our metric to reflect that we took action on one piece of content — not two separate actions for removing the photo and the post. However, in July 2019, we found that the systems logging and counting these actions did not correctly log the actions taken. This was largely due to the difficulty of counting multiple actions that take place within a few milliseconds without missing, or overstating, any of the individual actions taken.
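
As an illustration of the counting problem only (not Facebook’s actual logging pipeline), the sketch below collapses log entries produced within milliseconds of each other for the same piece of content into a single counted action. The record format and time window are assumptions:

    from dataclasses import dataclass

    @dataclass
    class ActionLog:
        content_id: str    # the piece of content the action was logged against
        timestamp_ms: int  # when the enforcement action was logged

    def count_actions(logs: list, window_ms: int = 100) -> int:
        """Count one action per content item per short burst of log entries."""
        counted = 0
        last_seen: dict = {}
        for log in sorted(logs, key=lambda entry: entry.timestamp_ms):
            prev = last_seen.get(log.content_id)
            if prev is None or log.timestamp_ms - prev > window_ms:
                counted += 1  # a genuinely separate enforcement action
            last_seen[log.content_id] = log.timestamp_ms
        return counted

    # Removing a post and the photo inside it may emit two log rows a few
    # milliseconds apart; if both rows reference the parent post (an assumption
    # of this sketch), they are counted as one action on one piece of content.
    logs = [ActionLog("post_123", 1_000), ActionLog("post_123", 1_003)]
    print(count_actions(logs))  # 1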

We’ll continue to refine the processes we use to measure our actions and build a robust system to ensure the metrics we provide are accurate. We share more details about these processes here.



Introducing an Update to the Data Protection Assessment


Over the coming year, some apps with access to certain types of user data on our platforms will be required to complete the annual Data Protection Assessment. We have made a number of improvements to this process since we introduced the first iteration of the assessment last year.

The updated Data Protection Assessment will include a new developer experience that is enhanced through streamlined communications, direct support, and clear status updates. Today, we’re sharing what you can expect from these new updates and how you can best prepare for completing this important privacy requirement if your app is within scope.

If your app is in scope for the Data Protection Assessment, and you’re an app admin, you’ll receive an email and a message in your app’s Alert Inbox when it’s time to complete the annual assessment. You and your team of experts will then have 60 calendar days to complete the assessment. We’ve built a new platform that enhances the user experience of completing the Data Protection Assessment. These updates to the platform are based on learnings over the past year from our partnership with the developer community. When completing the assessment, you can expect:

  • Streamlined communication: All communications and required actions will be through the My Apps page. You’ll be notified of pending communications requiring your response via your Alerts Inbox, email, and notifications in the My Apps page.

    Note: Other programs may still communicate with you through the App Contact Email.

  • Available support: The ability to engage with Meta teams via the Support tool to seek clarification on the questions within the Data Protection Assessment prior to submission, get help with any requests for more information, or resolve violations.

    Note: To access this feature, you will need to add the app and app admins to your Business Manager. Please refer to those links for step-by-step guides.

  • Clear status updates: Easy to understand status and timeline indicators throughout the process in the App Dashboard, App Settings, and My Apps page.
  • Straightforward reviewer follow-ups: Streamlined experience for any follow-ups from our reviewers, all via developers.facebook.com.

We’ve included a brief video that provides a walkthrough of the experience you’ll have with the Data Protection Assessment:


The Data Protection Assessment elevates the importance of data security and helps us earn the trust of the billions of people who use our products and services around the world. That’s why we are committed to providing a seamless experience for our partners as you complete this important privacy requirement.

Here is what you can do now to prepare for the assessment:

  1. Make sure you are reachable: Update your developer or business account contact email and notification settings.
  2. Review the questions in the Data Protection Assessment and engage with your teams on how best to answer these questions. You may have to enlist the help of your legal and information security points of contact to answer some parts of the assessment.
  3. Review Meta Platform Terms and our Developer Policies.

We know that when people choose to share their data, we’re able to work with the developer community to safely deliver rich and relevant experiences that create value for people and businesses. It’s a privilege we share when people grant us access to their data, and it’s imperative that we protect that data in order to maintain and build upon their trust. This is why the Data Protection Assessment focuses on data use, data sharing and data security.

Data privacy is challenging and complex, and we’re dedicated to continuously improving the processes to safeguard user privacy on our platform. Thank you for partnering with us as we continue to build a safer, more sustainable platform.


Resources for Completing App Store Data Practice Questionnaires for Apps That Include the Facebook or Audience Network SDK


Updated July 18: Developers and advertising partners may be required to share information on their app’s privacy practices in third party app stores, such as Google Play and the Apple App Store, including the functionality of SDKs provided by Meta. To help make it easier for you to complete these requirements, we have consolidated information that explains our data collection practices for the Facebook and Audience Network SDKs.

Facebook SDK

To provide functionality within the Facebook SDK, we may receive and process certain contact, location, identifier, and device information associated with Facebook users and their use of your application. The information we receive depends on which SDK features third-party applications use, and we have structured the document below according to these features.

App Ads, Facebook Analytics, & App Events

Facebook App Events allow you to measure the performance of your app using Facebook Analytics, measure conversions associated with Facebook ads, and build audiences to acquire new users as well as re-engage existing users. There are a number of different ways your app can use app events to keep track of when people take specific actions such as installing your app or completing a purchase.

With the Facebook SDK, certain app events (app installs, app launches, and in-app purchases) are automatically logged and collected for Facebook Analytics unless you disable automatic event logging. Developers determine which events to send to Facebook, either from a list of standard events or via a custom event.

When developers send Facebook custom events, these events could include data types outside of standard events. Developers control sending these events to Facebook either directly via application code or in Events Manager for codeless app events. Developers can review their code and Events Manager to determine which data types they are sending to Facebook. It’s the developer’s responsibility to ensure this is reflected in their application’s privacy policy.


Advanced Matching

Developers may also send us additional user contact information in code, or via the Events Manager. Advanced matching functionality may use the following data, if sent:

  • email address, name, phone number, physical address (city, state or province, zip or postal code and country), gender, and date of birth.

Facebook Login

There are two scenarios for applications that use Facebook Login via the Facebook SDK: Authenticated Sign Up or Sign In, and User Data Access via Permissions. For authentication, a unique, app-specific identifier tied to a user’s Facebook Account enables the user to sign in to your app. For Data Access, a user must explicitly grant your app permission to access data.

Note: Since Facebook Login is part of the Facebook SDK, we may collect other information referenced here when you use Facebook Login, depending on your settings.

Device Information

We may also receive and process the following information if your app is integrated with the Facebook SDK:

  • Device identifiers;
  • Device attributes, such as device model and screen dimensions, CPU core, storage size, SDK version, OS and app versions, and app package name; and
  • Networking information, such as the name of the mobile operator or ISP, language, time zone, and IP address.

Audience Network SDK

We may receive and process the following information when you use the Audience Network SDK to integrate Audience Network ads in your app:

  • Device identifiers;
  • Device attributes, such as device model and screen dimensions, operating system, mediation platform and SDK versions; and
  • Ad performance information, such as impressions, clicks, placement, and viewability.

