Community Standards Enforcement Report, November 2019 Edition

Today we’re publishing the fourth edition of our Community Standards Enforcement Report, detailing our work for Q2 and Q3 2019. We are now including metrics across ten policies on Facebook and metrics across four policies on Instagram.

These metrics include:

  • Prevalence: how often content that violates our policies was viewed
  • Content Actioned: how much content we took action on because it was found to violate our policies
  • Proactive Rate: of the content we took action on, how much was detected before someone reported it to us
  • Appealed Content: how much content people appealed after we took action
  • Restored Content: how much content was restored after we initially took action

We also launched a new page today so people can view examples of how our Community Standards apply to different types of content and see where we draw the line.

Adding Instagram to the Report
For the first time, we are sharing data on how we are doing at enforcing our policies on Instagram. In this first report for Instagram, we are providing data on four policy areas: child nudity and child sexual exploitation; regulated goods — specifically, illicit firearm and drug sales; suicide and self-injury; and terrorist propaganda. The report does not include appeals and restores metrics for Instagram, as appeals on Instagram were only launched in Q2 of this year, but these will be included in future reports.

While we use the same proactive detection systems to find and remove harmful content across both Instagram and Facebook, the metrics may be different across the two services. There are many reasons for this, including: the differences in the apps’ functionalities and how they’re used – for example, Instagram doesn’t have links, re-shares in feed, Pages or Groups; the differing sizes of our communities; where people in the world use one app more than another; and where we’ve had greater ability to use our proactive detection technology to date. When comparing metrics in order to see where progress has been made and where more improvements are needed, we encourage people to see how metrics change, quarter-over-quarter, for individual policy areas within an app.

What Else Is New in the Fourth Edition of the Report

  • Data on suicide and self-injury: We are now detailing how we’re taking action on suicide and self-injury content. This area is both sensitive and complex, and we work with experts to ensure everyone’s safety is considered. We remove content that depicts or encourages suicide or self-injury, including certain graphic imagery and real-time depictions that experts tell us might lead others to engage in similar behavior. We place a sensitivity screen over content that doesn’t violate our policies but that may be upsetting to some, including things like healed cuts or other non-graphic self-injury imagery in a context of recovery. We also recently strengthened our policies around self-harm and made improvements to our technology to find and remove more violating content.
    • On Facebook, we took action on about 2 million pieces of content in Q2 2019, of which 96.1% we detected proactively, and we saw further progress in Q3 when we removed 2.5 million pieces of content, of which 97.3% we detected proactively.
    • On Instagram, we saw similar progress and removed about 835,000 pieces of content in Q2 2019, of which 77.8% we detected proactively, and we removed about 845,000 pieces of content in Q3 2019, of which 79.1% we detected proactively.
  • Expanded data on terrorist propaganda: Our Dangerous Individuals and Organizations policy bans all terrorist organizations from having a presence on our services. To date, we have identified a wide range of groups, based on their behavior, as terrorist organizations. Previous reports only included our efforts specifically against al Qaeda, ISIS and their affiliates as we focused our measurement efforts on the groups understood to pose the broadest global threat. Now, we’ve expanded the report to include the actions we’re taking against all terrorist organizations. While the rate at which we detect and remove content associated with al Qaeda, ISIS and their affiliates on Facebook has remained above 99%, the rate at which we proactively detect content affiliated with any terrorist organization on Facebook is 98.5% and on Instagram is 92.2%. We will continue to invest in automated techniques to combat terrorist content and iterate on our tactics because we know bad actors will continue to change theirs.
  • Estimating prevalence for suicide and self-injury and regulated goods: In this report, we are adding prevalence metrics for content that violates our suicide and self-injury and regulated goods (illicit sales of firearms and drugs) policies for the first time. Because we care most about how often people may see content that violates our policies, we measure prevalence, or the frequency at which people may see this content on our services. For the policy areas addressing the most severe safety concerns — child nudity and sexual exploitation of children, regulated goods, suicide and self-injury, and terrorist propaganda — the likelihood that people view content that violates these policies is very low, and we remove much of it before people see it. As a result, when we sample views of content in order to measure prevalence for these policy areas, many times we do not find enough, or sometimes any, violating samples to reliably estimate a metric. Instead, we can estimate an upper limit of how often someone would see content that violates these policies. In Q3 2019, this upper limit was 0.04%, meaning that for each of these policies, out of every 10,000 views on Facebook or Instagram in Q3 2019, we estimate that no more than 4 of those views contained content that violated that policy.
    • It’s also important to note that when the prevalence is so low that we can only provide upper limits, this limit may change by a few hundredths of a percentage point between reporting periods; changes that small do not mean there is a real difference in the prevalence of this content on the platform. (A brief sketch of this kind of upper-bound estimate follows this list.)
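
As a minimal illustration of how an upper limit like this can be estimated, the sketch below applies the standard statistical “rule of three” for a sample containing zero violating views. The report does not specify the exact method we use, so treat this as a generic example rather than our actual methodology.

# Illustrative only: if a random sample of n views contains zero
# violating views, an approximate 95% upper confidence bound on the
# violation rate is 3 / n (the statistical "rule of three").
def upper_bound_violation_rate(n_sampled_views):
    return 3.0 / n_sampled_views

# Zero violating views in a sample of 10,000 views bounds the rate
# at 0.03%, the same order of magnitude as the 0.04% limit above.
print(f"{upper_bound_violation_rate(10_000):.2%}")  # prints 0.03%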

Progress to Help Keep People Safe
Across the most harmful types of content we work to combat, we’ve continued to strengthen our efforts to enforce our policies and bring greater transparency to our work. In addition to suicide and self-injury content and terrorist propaganda, the metrics for child nudity and sexual exploitation of children, as well as regulated goods, demonstrate this progress. The investments we’ve made in AI over the last five years continue to be a key factor in tackling these issues, and recent advancements in this technology have helped us increase the rate at which we detect and remove violating content.

For child nudity and sexual exploitation of children, we improved our processes for adding violations to our internal database, enabling us to detect and remove additional instances of the same content shared on both Facebook and Instagram.

On Facebook:

  • In Q3 2019, we removed about 11.6 million pieces of content, up from Q1 2019 when we removed about 5.8 million. Over the last four quarters, we proactively detected over 99% of the content we remove for violating this policy.

While we are including data for Instagram for the first time, we have made progress increasing content actioned and the proactive rate in this area within the last two quarters:

  • In Q2 2019, we removed about 512,000 pieces of content, of which 92.5% we detected proactively.
  • In Q3, we saw greater progress and removed 754,000 pieces of content, of which 94.6% we detected proactively.

For our regulated goods policy prohibiting illicit firearm and drug sales, continued investments in our proactive detection systems and advancements in our enforcement techniques have allowed us to build on the progress from the last report.

On Facebook:

  • In Q3 2019, we removed about 4.4 million pieces of drug sale content, of which 97.6% we detected proactively — an increase from Q1 2019 when we removed about 841,000 pieces of drug sale content, of which 84.4% we detected proactively.
  • Also in Q3 2019, we removed about 2.3 million pieces of firearm sales content, of which 93.8% we detected proactively — an increase from Q1 2019 when we removed about 609,000 pieces of firearm sale content, of which 69.9% we detected proactively.

On Instagram:

  • In Q3 2019, we removed about 1.5 million pieces of drug sale content, of which 95.3% we detected proactively.
  • In Q3 2019, we removed about 58,600 pieces of firearm sales content, of which 91.3% we detected proactively.

New Tactics in Combating Hate Speech
Over the last two years, we’ve invested in proactive detection of hate speech so that we can detect this harmful content before people report it to us and sometimes before anyone sees it. Our detection techniques include text and image matching, which means we identify images and identical strings of text that have already been removed as hate speech. They also include machine-learning classifiers that look at things like language, as well as the reactions and comments on a post, to assess how closely it matches common phrases, patterns and attacks that we’ve seen previously in content that violates our policies against hate.

Initially, we used these systems to proactively detect potential hate speech violations and send them to our content review teams, since people can better assess context where AI cannot. Starting in Q2 2019, thanks to continued progress in our systems’ abilities to correctly detect violations, we began removing some posts automatically, but only when content is either identical or near-identical to text or images previously removed by our content review team as violating our policy, or where content very closely matches common attacks that violate our policy. We only do this in select instances, and it has only been possible because our automated systems have been trained on hundreds of thousands, if not millions, of different examples of violating content and common attacks. In all other cases when our systems proactively detect potential hate speech, the content is still sent to our review teams to make a final determination. With these evolutions in our detection systems, our proactive rate has climbed to 80%, from 68% in our last report, and we’ve increased the volume of content we find and remove for violating our hate speech policy.
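
As a rough sketch of the matching idea only (our production systems are far more sophisticated; the corpus, normalization and similarity threshold below are illustrative assumptions), exact matching can hash normalized text against previously removed strings, while near-duplicate matching uses a similarity score:

import hashlib
from difflib import SequenceMatcher

# Hypothetical corpus of text previously removed by human reviewers.
removed_texts = ["example of a previously removed attack phrase"]
removed_hashes = {hashlib.sha256(t.encode()).hexdigest() for t in removed_texts}

def normalize(text):
    # Case-fold and collapse whitespace before comparing.
    return " ".join(text.lower().split())

def matches_removed_content(text, near_threshold=0.95):
    norm = normalize(text)
    # Exact match: identical to a string already removed as hate speech.
    if hashlib.sha256(norm.encode()).hexdigest() in removed_hashes:
        return True
    # Near-identical match: very close to previously removed text.
    return any(
        SequenceMatcher(None, norm, normalize(t)).ratio() >= near_threshold
        for t in removed_texts
    )

print(matches_removed_content("Example of a previously REMOVED attack phrase"))  # True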

While we are pleased with this progress, these technologies are not perfect and we know that mistakes can still happen. That’s why we continue to invest in systems that enable us to improve our accuracy in removing content that violates our policies while safeguarding content that discusses or condemns hate speech. Similar to how we review decisions made by our content review team in order to monitor the accuracy of our decisions, our teams routinely review removals by our automated systems to make sure we are enforcing our policies correctly. We also continue to review content again when people appeal and tell us we made a mistake in removing their post.

Updating our Metrics
Since our last report, after identifying an issue in our accounting this summer, we have improved the ways we measure how much content we take action on. In this report, we are updating metrics we previously shared for content actioned, proactive rate, content appealed and content restored for the periods Q3 2018 through Q1 2019.

During those quarters, the issue with our accounting processes did not impact how we enforced our policies or how we informed people about those actions; it only impacted how we counted the actions we took. For example, if we find that a post containing one photo violates our policies, we want our metric to reflect that we took action on one piece of content — not two separate actions for removing the photo and the post. However, in July 2019, we found that the systems logging and counting these actions did not correctly log the actions taken. This was largely because counting multiple actions that take place within a few milliseconds, without missing or overstating any individual action, is difficult.
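
As a toy illustration of the counting principle (the event format and field names here are hypothetical, not our actual logging schema):

# Several logged removal events can refer to one piece of content, so the
# metric should count distinct content units rather than raw events.
events = [
    {"content_id": "post123", "action": "remove_post"},
    {"content_id": "post123", "action": "remove_photo"},  # same content, ms apart
    {"content_id": "post456", "action": "remove_post"},
]

content_actioned = len({e["content_id"] for e in events})
print(content_actioned)  # 2 pieces of content, not 3 raw actions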

We’ll continue to refine the processes we use to measure our actions and build a robust system to ensure the metrics we provide are accurate. We share more details about these processes here.

Creating Apps with App Use Cases

With the goal of making Meta’s app creation process easier for developers, we are announcing the rollout of an updated process built around App Use Cases, replacing the former product-focused process. App Use Cases enable developers to quickly create and customize apps by selecting the use case that best represents their reason for creating an app.

Currently, the product-focused app creation process requires developers to select an app type and individually request permission to API endpoints. After hearing feedback from developers that this process was, at times, confusing and difficult to navigate, we’re moving to an approach based on App Use Cases. With App Use Cases, user permissions and features are bundled with each use case, so developers can confidently select the right data access for their needs. This change sets developers up for success in creating their app and navigating app review, ensuring they get only the data access they need to accomplish their goals.

Starting today, Facebook Login is the first use case available to developers. It is the first of many use cases that will be built into the app creation process and rolled out over the course of 2023. For more information, please see our Facebook Login documentation.

Understanding Authorization Tokens and Access for the WhatsApp Business Platform

The WhatsApp Business Platform makes it easy to send WhatsApp messages to your customers and automate replies. Here, we’ll explore authentication using the Cloud API, hosted by Meta.

We’ll start with generating and using a temporary access token and then replace it with a permanent access token. This tutorial assumes you’re building a server-side application, so you won’t need additional steps to keep your WhatsApp application secrets securely stored.

Managing Access and Authorization Tokens

First, let’s review how to manage authorization tokens and safely access the API.

Prerequisites

Start by making sure you have a developer account on Meta for Developers. You’ll also need WhatsApp installed on a mobile device to send test messages to.

Creating an App

Before you can authenticate, you’ll need an application to authenticate you.

Once you’re signed in, you’ll see the Meta for Developers App Dashboard. Click Create App to get started.

Next, you’ll need to choose an app type. Choose Business.

After that, enter a display name for your application. If you have a business account to link to your app, select it. If not, don’t worry. The Meta for Developers platform creates a test business account you can use to experiment with the API. When done, click Create App.

Then, you’ll need to add products to your app. Scroll down until you see WhatsApp and click the Set up button.

Finally, choose an existing Meta Business Account or ask the platform to create a new one, and click Continue.

And with that, your app is created and ready to use. You’re automatically directed to the app’s dashboard.

Note that you have a temporary access token. For security reasons, the token expires in less than 24 hours. However, you can use it for now to test accessing the API. Later, we’ll cover how to generate a permanent access token that your server applications can use. Also, note your app’s phone number ID because you’ll need it soon.

Click the dropdown under the To field, and then click Manage phone number list.

In the popup that appears, enter the phone number of a WhatsApp account to send test messages to.

Then, scroll further down the dashboard page and you’ll see an example curl call that looks similar to this:

curl -i -X POST https://graph.facebook.com/v13.0/<PHONE_NUMBER_ID>/messages \
  -H 'Authorization: Bearer <ACCESS_TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '{
        "messaging_product": "whatsapp",
        "to": "<RECIPIENT_PHONE_NUMBER>",
        "type": "template",
        "template": {
          "name": "hello_world",
          "language": { "code": "en_US" }
        }
      }'

Note that the Meta for Developers platform inserts your app’s phone number ID and access token in place of the <PHONE_NUMBER_ID> and <ACCESS_TOKEN> placeholders shown above. If you have curl installed, paste the command into your terminal and run it. You should receive a “hello world” message in WhatsApp on your test device.

If you’d prefer, you can convert the curl request into an HTTP request in your programming language by simply creating a POST request that sets the Authorization and Content-Type headers as shown above, including the JSON payload in the request body.
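
For example, a minimal Python sketch using the requests library might look like this (the placeholder values are assumptions; substitute your own phone number ID, access token and recipient number from the dashboard):

import requests

PHONE_NUMBER_ID = "<PHONE_NUMBER_ID>"
ACCESS_TOKEN = "<ACCESS_TOKEN>"
RECIPIENT = "<RECIPIENT_PHONE_NUMBER>"

response = requests.post(
    f"https://graph.facebook.com/v13.0/{PHONE_NUMBER_ID}/messages",
    # The bearer token goes in the Authorization header; requests sets
    # the Content-Type: application/json header itself when json= is used.
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={
        "messaging_product": "whatsapp",
        "to": RECIPIENT,
        "type": "template",
        "template": {"name": "hello_world", "language": {"code": "en_US"}},
    },
)
print(response.status_code)  # 401 Unauthorized if the token is missing or invalid
print(response.json())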

Since this post is about authentication, let’s focus on that. Notice that you’ve included your app’s access token in the Authorization header. For any request to the API, you must set the Authorization header to Bearer <ACCESS_TOKEN>.

Remember that you must use your actual token in place of the <ACCESS_TOKEN> placeholder. Using bearer tokens will be familiar if you’ve worked with JWT or OAuth2 tokens before. If you’ve never seen one, a bearer token is essentially a random secret string that you, as the bearer of the token, can present to an API to prove you’re allowed to access it.

Failure to include this header causes the API to return a 401 Unauthorized response code.

Creating a Permanent Access Token

Knowing that you need to use a bearer token in the Authorization header of an HTTP request is helpful, but it’s not enough. The only access token you’ve seen so far is temporary. Chances are that you want your app to access the API for more than 24 hours, so you need to generate a longer-lasting access token.

Fortunately, the Meta for Developers platform makes this easy. All you need to do is add a System User to your business account to obtain an access token you can use to continue accessing the API. To create a system user, do the following:

  • Go to Business Settings.
  • Select the business account your app is associated with.
  • Below Users, click System Users.
  • Click Add.
  • Name the system user, choose Admin as the user role, and click Create System User.
  • Select the whatsapp_business_messaging permission.
  • Click Generate New Token.
  • Copy and save your token.

Your access token is a random string of letters and numbers. Now, try re-running the earlier request using the token you just created instead of the temporary one:

curl -i -X POST https://graph.facebook.com/v13.0/<PHONE_NUMBER_ID>/messages \
  -H 'Authorization: Bearer <ACCESS_TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '{
        "messaging_product": "whatsapp",
        "to": "<RECIPIENT_PHONE_NUMBER>",
        "type": "template",
        "template": {
          "name": "hello_world",
          "language": { "code": "en_US" }
        }
      }'

Your test device should receive a second hello message sent via the API.

Best Practices for Managing Access Tokens

It’s important to remember that you should never embed an App Access Token in a mobile or desktop application. These tokens are only for use in server-side applications that communicate with the API. Safeguard them the same way you would any other application secrets, like your database credentials, as anyone with your token has access to the API as your business.

If your application runs on a cloud services provider like AWS, Azure, GCP, or others, those platforms have tools to securely store app secrets. Alternatively, there are freely available secret stores like Vault or Conjur. While any of these options may work for you, it’s important to evaluate your options and choose what works best for your setup. At the very least, consider storing access tokens in environment variables and not in a database or a file where they’re easy to find during a data breach.
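
As a minimal sketch, reading the token from an environment variable in Python could look like this (the variable name WHATSAPP_ACCESS_TOKEN is an arbitrary choice for this example):

import os

# Assumes the token was exported before starting the server process,
# e.g. `export WHATSAPP_ACCESS_TOKEN=...`.
access_token = os.environ["WHATSAPP_ACCESS_TOKEN"]  # raises KeyError if unset
headers = {"Authorization": f"Bearer {access_token}"}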

Conclusion

In this post, you learned how to create a Meta for Developers app that leverages the WhatsApp Business Platform. You now know how the Cloud API’s bearer access tokens work, how to send an access token using an HTTP authorization header, and what happens if you send an invalid access token. You also understand the importance of keeping your access tokens safe since an access token allows an application to access a business’ WhatsApp messaging capabilities.

If you’re considering building an app for your business to manage WhatsApp messaging, why not try the Cloud API, hosted by Meta? Now that you know how to obtain and use access tokens, you can use them to access any endpoint in the API.

Now people can share directly to Instagram Reels from some of their favorite apps

More people are creating, sharing and watching Reels than ever before. We’ve seen the creator community dive deeply into video content – and use it to connect with their communities. We’re running a limited alpha test that lets creators share video content directly from select integrated apps to Instagram Reels. Now creators won’t be interrupted in their workflow, making it easier for them to share and express themselves on Reels.

“With the shift to video happening across almost all online platforms, our innovative tools and services empower creativity and fuel the creator economy and we are proud to be able to offer a powerful editing tool like Videoleap that allows seamless content creation, while partnering with companies like Meta to make sharing content that much easier.” – Zeev Farbman, CEO and co-founder of Lightricks.

Starting this month, creators can share short videos directly to Instagram Reels from some of their favorite apps, including Videoleap, Reface, Smule, VivaVideo, SNOW, B612, VITA and Zoomerang, with more coming soon. These apps and others also allow direct sharing to Facebook, which is available for any business with a registered Facebook App to use.

We hope to expand this test to more partners in 2023. If you’re interested in being a part of that beta program, please fill out this form and we will keep track of your submission. We do not currently have information to share about general availability of this integration.

Learn more here about sharing Stories and Reels to Facebook and Instagram and start building today.

FAQs

Q. What is the difference between the Instagram Content Publishing API and Instagram Sharing to Reels?

A: Sharing to Reels is different from the Instagram Content Publishing API, which allows Instagram Business accounts to schedule and publish posts to Instagram from third-party platforms. Sharing to Reels is specifically for mobile apps to display a ‘Share to Reels’ widget. The target audience for the Share to Reels widget is consumers, whereas the Content Publishing API is targeted towards businesses, including third-party publishing platforms such as Hootsuite and Sprout Social that consolidate sharing to social media platforms within their third-party app.

Q: Why is Instagram partnering with other apps?

A: Creators already use a variety of apps to create and edit videos before uploading them to Instagram Reels – now we’re making that experience faster and easier. We are currently doing a small test of an integration with mobile apps that creators know and love, with more coming soon.

Q: How can I share my video from another app to Reels on Instagram?

A: First, make sure the mobile app you’re using is up to date so the new Share to Reels option appears. Then:

  • Create and edit your video in one of our partner apps
  • Once your video is ready, tap share and then tap the Instagram Reels icon
  • You will enter the Instagram Camera, where you can customize your reel with audio, effects, Voiceover and stickers. Record any additional clips or swipe up to add an additional clip from your camera roll.
  • Tap ‘Next’ to add a caption, hashtags or a location, to tag others, or to use the paid partnership label.
  • Tap ‘Share’. Your reel will be visible where you share reels today, depending on your privacy settings.

Q: How were partners selected?

A: We are currently working with a small group of developers that focus on video creation and editing as early partners. We’ll continue to expand to apps with other types of creation experiences.

Q: When will other developers be able to access Sharing to Reels on Instagram?

A: We do not currently have a date for general availability, but are planning to expand further in 2023.

Q: Can you share to Facebook Reels from other apps?

A: Yes, Facebook offers the ability for developers to integrate with Sharing to Reels. For more information on third-party sharing opportunities, check out our entire suite of sharing offerings.
