Facebook has an invisible system that shelters powerful rule-breakers. So do other online platforms.

Last week, the Wall Street Journal published Jeff Horwitz’s investigation into the inner workings of Facebook — with some troubling findings. Internal documents suggest that Facebook’s top management dismissed or downplayed an array of problems brought to their attention by product teams, internal researchers and their own Oversight Board. These include a report on what is known as the XCheck program, which reportedly allowed nearly any Facebook employee, at their own discretion, to whitelist users who were “newsworthy,” “influential or popular” or “PR risky.” The apparent result was that more than 5.8 million users were moderated under different rules from those applied to ordinary Facebook users, or hardly moderated at all.

This system of “invisible elite tiers,” as the Journal describes it, meant that the speech of powerful and influential actors was protected while ordinary people’s speech was moderated by automated algorithms and overworked humans. As our research shows, that’s not surprising. Other platforms besides Facebook enforce different standards for different users, creating special classes of users as part of their business models.

Unequal and opaque standards can breed suspicion among users

In a recent research article, we explain how another important platform, YouTube, takes what we call a “tiered governance” approach, separating users into categories and applying different rules to each category’s videos. YouTube distinguishes among such categories as media partners, nonprofits and governments. Most important, it distinguishes between “creators” who get a slice of its ad revenue and ordinary users. Even among those paid creators, YouTube has a more subtle array of tiers according to popularity.

Facebook’s program began as a stopgap measure to avoid the public relations disasters that might happen if the platform hastily deleted content by someone powerful enough to fight back, such as a sitting president. YouTube’s program began when it created a special category of paid creators, the YouTube Partner Program, to give popular YouTubers incentives to stay on the site and make more content.

YouTube then began to create more intricate tiers, providing the most influential creators with special perks such as access to studios and camera equipment. An elite few had direct contact with handlers within the company who could help them deal with content moderation issues quickly, so that they didn’t lose money. But things changed when advertisers — YouTube’s main source of revenue — began to worry about their ads being shown together with offensive content. This drove YouTube to adjust its policies — over and over again — about which creators belonged to which tiers and what their benefits and responsibilities were, even if the creators didn’t like it.

Creators were understandably frustrated as these arrangements seemed to keep shifting under their feet. They didn’t object to different rules and sets of perks for different tiers of creators, but they did care that the whole system was opaque. Users like to know what to expect from platforms — whether they will enforce guidelines, and how much financial compensation they provide. They didn’t like the unpredictability of YouTube’s decisions, especially since those decisions had real social, financial and reputational impact.

Some were frustrated and suspicious about the platform’s real motives. Opacity and perceptions of unfairness provided fuel for conspiracy theories about why YouTube was doing what it was doing. Creators who didn’t know whether YouTube’s algorithms had demonetized or demoted their videos began to worry that they were being penalized for their political leanings. This led to anger and despair, which was worsened by YouTube’s clumsy appeals system. And it gave fodder to those eager to accuse YouTube of censorship, whether or not the accusation was true.

It’s fair to be unfair, as long as you’re fair about it

Social media companies such as YouTube and Facebook have suggested that their platforms are open, meritocratic, impartial and evenhanded. This makes it hard for them to explain why they treat different people differently. However, other systems for adjudication make distinctions, too. For example, criminal law takes into account whether the accused is a child, impaired, a repeat offender, under the influence, responding in self-defense or under justifiable duress.

Similarly, there are plausible reasons platform companies might want to treat different tiers of users in different ways. For postings about the coronavirus, for example, it made sense to set different rules for users who had established themselves as trustworthy. To decrease the spread of misinformation or harassment, platforms might reasonably want to impose higher standards rather than lower ones on users who have many followers, who hold political office and therefore have special obligations to the public, or who pay or are paid to post.

But YouTube’s experience suggests that clarity about why different users are treated differently matters for public perception. When a company such as Facebook discriminates between different tiers of users just to avoid offending powerful people and mitigate possible PR disasters, observers will treat that reasoning as less legitimate than if the company were trying to hold the powerful to account. This is especially so if the differences are kept hidden from users, the public and even Facebook’s own Oversight Board.

These allegations are likely to breed distrust, accusations of bias and suspicions about Facebook’s intentions.

Robyn Caplan is a researcher at Data & Society Research Institute. Follow her @RobynCaplan.

Updating Special Ad Audiences for housing, employment, and credit advertisers

On June 21, 2022, we announced an important settlement with the US Department of Housing and Urban Development (HUD) that will change the way we deliver housing ads to people residing in the US. Specifically, we are building into our ads system a method designed to make sure the audience that ends up seeing a housing ad more closely reflects the eligible targeted audience for that ad.

As part of this agreement, we will also be sunsetting Special Ad Audiences, a tool that lets advertisers expand their audiences for ad sets related to housing. We are choosing to sunset this for employment and credit ads as well. In 2019, in addition to eliminating certain targeting options for housing, employment and credit ads, we introduced Special Ad Audiences as an alternative to Lookalike Audiences. But the field of fairness in machine learning is a dynamic and evolving one, and Special Ad Audiences was an early way to address concerns. Now, our focus will move to new approaches to improve fairness, including the method previously announced.

What’s happening: We’re removing the ability to create Special Ad Audiences via Ads Manager beginning on August 25, 2022.

Beginning October 12, 2022, we will pause any remaining ad sets that contain Special Ad Audiences. These ad sets may be restarted once advertisers have removed all Special Ad Audiences from them. This two-month window between blocking new Special Ad Audiences and pausing existing ones gives advertisers time to adjust budgets and strategies as needed.
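The post doesn’t prescribe any tooling for this cleanup, but advertisers may want a quick way to see which ad sets still reference a Special Ad Audience before the October pause. A minimal sketch along the following lines, calling the Graph API over plain HTTP, could flag affected ad sets. The v14.0 version string and the "SPECIAL_AD_AUDIENCE" subtype value are illustrative assumptions, not values confirmed by this announcement.

```python
# Minimal sketch (not official Meta tooling): flag ad sets that still
# reference a Special Ad Audience so budgets and strategies can be
# adjusted before the October 12, 2022 pause.
# Assumptions: Graph API v14.0, and that Special Ad Audiences carry a
# "SPECIAL_AD_AUDIENCE" subtype on the custom audience object.
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"    # placeholder access token
AD_ACCOUNT_ID = "act_1234567890"      # placeholder ad account ID
GRAPH = "https://graph.facebook.com/v14.0"


def get_all(url, params):
    """Yield every object across Graph API cursor-based pagination."""
    while url:
        payload = requests.get(url, params=params).json()
        yield from payload.get("data", [])
        url = payload.get("paging", {}).get("next")
        params = None  # the "next" URL already embeds the query string


# Step 1: collect IDs of audiences with the (assumed) special-ad subtype.
special_ids = {
    aud["id"]
    for aud in get_all(
        f"{GRAPH}/{AD_ACCOUNT_ID}/customaudiences",
        {"fields": "id,name,subtype", "access_token": ACCESS_TOKEN},
    )
    if aud.get("subtype") == "SPECIAL_AD_AUDIENCE"
}

# Step 2: report ad sets whose targeting spec references any of them.
for adset in get_all(
    f"{GRAPH}/{AD_ACCOUNT_ID}/adsets",
    {"fields": "id,name,status,targeting", "access_token": ACCESS_TOKEN},
):
    audiences = (adset.get("targeting") or {}).get("custom_audiences", [])
    if any(aud["id"] in special_ids for aud in audiences):
        print(f"Ad set {adset['id']} ({adset['name']}) uses a Special Ad Audience")
```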

For more details, please visit our Newsroom post.

Impact on Advertisers Using the Marketing API on September 13, 2022

For advertisers and partners using the endpoints below, blocking the creation of new Special Ad Audiences is a breaking change across all API versions. Developers can begin implementing code changes on August 15, 2022, and will have until September 13, 2022, when the non-versioned change takes effect and the prior values are deprecated. The impacted endpoints are as follows; a brief monitoring sketch appears after the list:

For reading audience:

  • endpoint gr:get:AdAccount/customaudiences
  • field operation_status

For adset creation:

  • endpoint gr:post:AdAccount/adsets
  • field subtype

For adset editing:

  • endpoint gr:post:AdCampaign
  • field subtype

For custom audience creation:

  • endpoint gr:post:AdAccount/customaudiences
  • field subtype

For custom audience editing:

  • endpoint gr:post:CustomAudience

Please refer to the developer documentation for further details to support code implementation.
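Since the read endpoint and its operation_status field are named above as part of the breaking change, teams may want to watch how their audiences are reported once the September 13 change lands. A small polling sketch like the following could log each audience’s status. The post does not document specific operation_status codes, so the script prints what the API returns rather than interpreting it; the v14.0 version string is likewise an assumption.

```python
# Minimal sketch: log each custom audience's subtype and operation_status
# after the September 13, 2022 change, so affected audiences surface in
# routine monitoring. Status codes are reported verbatim because this
# post does not document their meanings.
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"   # placeholder access token
AD_ACCOUNT_ID = "act_1234567890"     # placeholder ad account ID
GRAPH = "https://graph.facebook.com/v14.0"  # assumed API version

url = f"{GRAPH}/{AD_ACCOUNT_ID}/customaudiences"
params = {
    "fields": "id,name,subtype,operation_status",
    "access_token": ACCESS_TOKEN,
}
while url:
    payload = requests.get(url, params=params).json()
    for aud in payload.get("data", []):
        status = aud.get("operation_status") or {}
        print(
            aud["id"],
            aud.get("name"),
            aud.get("subtype"),
            status.get("code"),
            status.get("description"),
        )
    # Follow cursor pagination; the "next" URL carries the query string.
    url = payload.get("paging", {}).get("next")
    params = None
```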

First seen at developers.facebook.com

Introducing an Update to the Data Protection Assessment

Over the coming year, some apps with access to certain types of user data on our platforms will be required to complete the annual Data Protection Assessment. We have made a number of improvements to this process since launching the first iteration of the assessment last year.

The updated Data Protection Assessment will include a new developer experience that is enhanced through streamlined communications, direct support, and clear status updates. Today, we’re sharing what you can expect from these new updates and how you can best prepare for completing this important privacy requirement if your app is within scope.

If your app is in scope for the Data Protection Assessment, and you’re an app admin, you’ll receive an email and a message in your app’s Alert Inbox when it’s time to complete the annual assessment. You and your team of experts will then have 60 calendar days to complete the assessment. We’ve built a new platform that enhances the user experience of completing the Data Protection Assessment. These updates to the platform are based on learnings over the past year from our partnership with the developer community. When completing the assessment, you can expect:

  • Streamlined communication: All communications and required actions will be through the My Apps page. You’ll be notified of pending communications requiring your response via your Alerts Inbox, email, and notifications in the My Apps page.

    Note: Other programs may still communicate with you through the App Contact Email.

  • Available support: You can engage with Meta teams via the Support tool to seek clarification on the questions within the Data Protection Assessment prior to submission, to get help with any requests for more information, or to resolve violations.

    Note: To access this feature, you will need to add the app and app admins to your Business Manager. Please refer to those links for step-by-step guides.

  • Clear status updates: Easy to understand status and timeline indicators throughout the process in the App Dashboard, App Settings, and My Apps page.
  • Straightforward reviewer follow-ups: Streamlined experience for any follow-ups from our reviewers, all via developers.facebook.com.

We’ve included a brief video that provides a walkthrough of the experience you’ll have with the Data Protection Assessment:

The Data Protection Assessment elevates the importance of data security and helps earn the trust of the billions of people who use our products and services around the world. That’s why we are committed to providing a seamless experience for our partners as you complete this important privacy requirement.

Here is what you can do now to prepare for the assessment:

  1. Make sure you are reachable: Update your developer or business account contact email and notification settings.
  2. Review the questions in the Data Protection Assessment and engage with your teams on how best to answer these questions. You may have to enlist the help of your legal and information security points of contact to answer some parts of the assessment.
  3. Review Meta Platform Terms and our Developer Policies.

We know that when people choose to share their data, we’re able to work with the developer community to safely deliver rich and relevant experiences that create value for people and businesses. It’s a privilege we share when people grant us access to their data, and it’s imperative that we protect that data in order to maintain and build upon their trust. This is why the Data Protection Assessment focuses on data use, data sharing and data security.

Data privacy is challenging and complex, and we’re dedicated to continuously improving the processes to safeguard user privacy on our platform. Thank you for partnering with us as we continue to build a safer, more sustainable platform.

First seen at developers.facebook.com

Resources for Completing App Store Data Practice Questionnaires for Apps That Include the Facebook or Audience Network SDK

First seen at developers.facebook.com
