Facebook knew about, failed to police, abusive content globally – documents | Reuters

Oct 25 (Reuters) – Facebook employees have warned for years that as the company raced to become a global service it was failing to police abusive content in countries where such speech was likely to cause the most harm, according to interviews with five former employees and internal company documents viewed by Reuters.

For over a decade, Facebook has pushed to become the world’s dominant online platform. It currently operates in more than 190 countries and boasts more than 2.8 billion monthly users who post content in more than 160 languages. But its efforts to prevent its products from becoming conduits for hate speech, inflammatory rhetoric and misinformation – some of which has been blamed for inciting violence – have not kept pace with its global expansion.

Internal company documents viewed by Reuters show Facebook has known that it hasn’t hired enough workers who possess both the language skills and the knowledge of local events needed to identify objectionable posts from users in a number of developing countries. The documents also showed that the artificial intelligence systems Facebook employs to root out such content frequently aren’t up to the task either, and that the company hasn’t made it easy for its global users to flag posts that violate the site’s rules.

Those shortcomings, employees warned in the documents, could limit the company’s ability to make good on its promise to block hate speech and other rule-breaking posts in places from Afghanistan to Yemen.

In a review posted to Facebook’s internal message board last year regarding ways the company identifies abuses on its site, one employee reported “significant gaps” in certain countries at risk of real-world violence, especially Myanmar and Ethiopia.

The documents are among a cache of disclosures made to the U.S. Securities and Exchange Commission and Congress by Facebook whistleblower Frances Haugen, a former Facebook product manager who left the company in May. Reuters was among a group of news organizations able to view the documents, which include presentations, reports and posts shared on the company’s internal message board. Their existence was first reported by The Wall Street Journal.

Facebook spokesperson Mavis Jones said in a statement that the company has native speakers worldwide reviewing content in more than 70 languages, as well as experts in humanitarian and human rights issues. She said these teams are working to stop abuse on Facebook’s platform in places where there is a heightened risk of conflict and violence.

“We know these challenges are real and we are proud of the work we’ve done to date,” Jones said.

Still, the cache of internal Facebook documents offers detailed snapshots of how employees in recent years have sounded alarms about problems with the company’s tools – both human and technological – aimed at rooting out or blocking speech that violated its own standards. The material expands upon Reuters’ previous reporting on Myanmar and other countries, where the world’s largest social network has failed repeatedly to protect users from problems on its own platform and has struggled to monitor content across languages.

Among the weaknesses cited were a lack of screening algorithms for languages used in some of the countries Facebook has deemed most “at-risk” for potential real-world harm and violence stemming from abuses on its site.

The company designates countries “at-risk” based on variables including unrest, ethnic violence, the number of users and existing laws, two former staffers told Reuters. The system aims to steer resources to places where abuses on its site could have the most severe impact, the people said.

Facebook reviews and prioritizes these countries every six months in line with United Nations guidelines aimed at helping companies prevent and remedy human rights abuses in their business operations, spokesperson Jones said.

In 2018, United Nations experts investigating a brutal campaign of killings and expulsions against Myanmar’s Rohingya Muslim minority said Facebook was widely used to spread hate speech toward them. That prompted the company to increase its staffing in vulnerable countries, a former employee told Reuters. Facebook has said it should have done more to prevent the platform being used to incite offline violence in the country.

Ashraf Zeitoon, Facebook’s former head of policy for the Middle East and North Africa, who left in 2017, said the company’s approach to global growth has been “colonial,” focused on monetization without safety measures.

More than 90% of Facebook’s monthly active users are outside the United States or Canada.

LANGUAGE ISSUES

Facebook has long touted the importance of its artificial-intelligence (AI) systems, in combination with human review, as a way of tackling objectionable and dangerous content on its platforms. Machine-learning systems can detect such content with varying levels of accuracy.

But languages spoken outside the United States, Canada and Europe have been a stumbling block for Facebook’s automated content moderation, the documents provided to the government by Haugen show. The company lacks AI systems to detect abusive posts in a number of languages used on its platform. In 2020, for example, the company did not have screening algorithms known as “classifiers” to find misinformation in Burmese, the language of Myanmar, or hate speech in the Ethiopian languages of Oromo or Amharic, a document showed.
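To make the “classifier” gap concrete, here is a minimal, hypothetical sketch of per-language screening breaking down when no model exists for a language/harm pair. The registry contents, the toy keyword scorer, and all names are illustrative placeholders, not Facebook’s actual system.

```python
from typing import Callable, Optional

def keyword_scorer(flagged: set) -> Callable[[str], float]:
    # Toy stand-in for a trained model: fraction of flagged terms present.
    def score(text: str) -> float:
        words = set(text.lower().split())
        return len(words & flagged) / max(len(flagged), 1)
    return score

# Hypothetical registry keyed by (language code, harm type). Per the
# documents, pairs such as ("my", "misinformation") and ("om", "hate_speech")
# had no classifier at all in 2020.
CLASSIFIERS = {
    ("en", "hate_speech"): keyword_scorer({"example_term"}),
    ("hi", "hate_speech"): keyword_scorer({"example_term"}),  # added 2018, per Jones
}

def screen_post(text: str, lang: str, harm: str) -> Optional[float]:
    # Returns a risk score, or None when no classifier covers the pair --
    # the coverage gap employees flagged internally.
    clf = CLASSIFIERS.get((lang, harm))
    return clf(text) if clf else None

print(screen_post("some burmese-language post", "my", "misinformation"))  # -> None, unscreened
```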

[Photo: A 3D-printed Facebook logo placed on a keyboard in an illustration taken March 25, 2020. REUTERS/Dado Ruvic/Illustration/File Photo]

These gaps can allow abusive posts to proliferate in the countries where Facebook itself has determined the risk of real-world harm is high.

Reuters this month found posts in Amharic, one of Ethiopia’s most common languages, referring to different ethnic groups as the enemy and issuing them death threats. A nearly year-long conflict in the country between the Ethiopian government and rebel forces in the Tigray region has killed thousands of people and displaced more than 2 million.

Facebook spokesperson Jones said the company now has technology to proactively detect hate speech in Oromo and Amharic and has hired more people with “language, country and topic expertise,” including people who have worked in Myanmar and Ethiopia.

In an undated document, which a person familiar with the disclosures said was from 2021, Facebook employees also shared examples of “fear-mongering, anti-Muslim narratives” spread on the site in India, including calls to oust the large minority Muslim population there. “Our lack of Hindi and Bengali classifiers means much of this content is never flagged or actioned,” the document said. Internal posts and comments by employees this year also noted the lack of classifiers in the Urdu and Pashto languages to screen problematic content posted by users in Pakistan, Iran and Afghanistan.

Jones said Facebook added hate speech classifiers for Hindi in 2018 and Bengali in 2020, and classifiers for violence and incitement in Hindi and Bengali this year. She said Facebook also now has hate speech classifiers in Urdu but not Pashto.

Facebook’s human review of posts, which is crucial for nuanced problems like hate speech, also has gaps across key languages, the documents show. An undated document laid out how its content moderation operation struggled with Arabic-language dialects of multiple “at-risk” countries, leaving it constantly “playing catch up.” The document acknowledged that, even within its Arabic-speaking reviewers, “Yemeni, Libyan, Saudi Arabian (really all Gulf nations) are either missing or have very low representation.”

Facebook’s Jones acknowledged that Arabic language content moderation “presents an enormous set of challenges.” She said Facebook has made investments in staff over the last two years but recognizes “we still have more work to do.”

Three former Facebook employees who worked for the company’s Asia Pacific and Middle East and North Africa offices in the past five years told Reuters they believed content moderation in their regions had not been a priority for Facebook management. These people said leadership did not understand the issues and did not devote enough staff and resources.

Facebook’s Jones said the California company cracks down on abuse by users outside the United States with the same intensity applied domestically.

The company said it uses AI proactively to identify hate speech in more than 50 languages. Facebook said it bases its decisions on where to deploy AI on the size of the market and an assessment of the country’s risks. It declined to say in how many countries it did not have functioning hate speech classifiers.

Facebook also says it has 15,000 content moderators reviewing material from its global users. “Adding more language expertise has been a key focus for us,” Jones said.

In the past two years, it has hired people who can review content in Amharic, Oromo, Tigrinya, Somali, and Burmese, the company said, and this year added moderators in 12 new languages, including Haitian Creole.

Facebook declined to say whether it requires a minimum number of content moderators for any language offered on the platform.

LOST IN TRANSLATION

Facebook’s users are a powerful resource for identifying content that violates the company’s standards. The company has built a system for them to do so, but has acknowledged that the process can be time-consuming and expensive for users in countries without reliable internet access. The reporting tool has also had bugs, design flaws and accessibility issues for some languages, according to the documents and digital rights activists who spoke with Reuters.

Next Billion Network, a coalition of civil society tech groups working mostly across Asia, the Middle East and Africa, said in recent years it had repeatedly flagged problems with the reporting system to Facebook management. Those included a technical defect that kept Facebook’s content review system from being able to see objectionable text accompanying videos and photos in some posts reported by users. That issue prevented serious violations, such as death threats in the text of those posts, from being properly assessed, the group and a former Facebook employee told Reuters. They said the issue was fixed in 2020.
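The defect the group describes can be pictured with a small, speculative sketch: if the review pipeline forwards only a reported post’s media and drops the accompanying caption, any violation that lives in the text never reaches reviewers. Every structure below is an illustrative guess, not Facebook’s code.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ReportedPost:
    media_urls: List[str]
    caption: Optional[str]  # a death threat may live here, not in the media

def build_review_item_buggy(post: ReportedPost) -> dict:
    # Pre-2020 behavior as described: only the media reaches reviewers,
    # so caption-borne violations are never assessed.
    return {"media": post.media_urls}

def build_review_item_fixed(post: ReportedPost) -> dict:
    # Post-fix behavior: the accompanying text travels with the report.
    return {"media": post.media_urls, "text": post.caption or ""}
```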

Facebook said it continues to work to improve its reporting systems and takes feedback seriously.

Language coverage remains a problem. A Facebook presentation from January, included in the documents, concluded “there is a huge gap in the Hate Speech reporting process in local languages” for users in Afghanistan. The recent pullout of U.S. troops there after two decades has ignited an internal power struggle in the country. So-called “community standards” – the rules that govern what users can post – are also not available in Afghanistan’s main languages of Pashto and Dari, the author of the presentation said.

A Reuters review this month found that community standards weren’t available in about half the more than 110 languages that Facebook supports with features such as menus and prompts.

Facebook said it aims to have these rules available in 59 languages by the end of the year, and in another 20 languages by the end of 2022.

Reporting by Elizabeth Culliford in New York and Brad Heath in Washington; additional reporting by Fanny Potkin in Singapore, Sheila Dang in Dallas, Ayenet Mersie in Nairobi and Sankalp Phartiyal in New Delhi; editing by Kenneth Li and Marla Dickerson

Updating Special Ad Audiences for housing, employment, and credit advertisers

On June 21, 2022, we announced an important settlement with the US Department of Housing and Urban Development (HUD) that will change the way we deliver housing ads to people residing in the US. Specifically, we are building into our ads system a method designed to make sure the audience that ends up seeing a housing ad more closely reflects the eligible targeted audience for that ad.

As part of this agreement, we will also be sunsetting Special Ad Audiences, a tool that lets advertisers expand their audiences for ad sets related to housing. We are choosing to sunset this for employment and credit ads as well. In 2019, in addition to eliminating certain targeting options for housing, employment and credit ads, we introduced Special Ad Audiences as an alternative to Lookalike Audiences. But the field of fairness in machine learning is a dynamic and evolving one, and Special Ad Audiences was an early way to address concerns. Now, our focus will move to new approaches to improve fairness, including the method previously announced.

What’s happening: We’re removing the ability to create Special Ad Audiences via Ads Manager beginning on August 25, 2022.

Beginning October 12, 2022, we will pause any remaining ad sets that contain Special Ad Audiences. These ad sets may be restarted once advertisers have removed any and all Special Ad Audiences from them. We are providing a two-month window between blocking new Special Ad Audiences and pausing existing ones to give advertisers time to adjust budgets and strategies as needed.

For more details, please visit our Newsroom post.

Impact on Advertisers using the Marketing API on September 13, 2022

For advertisers and partners using the endpoints listed below, the blocking of new Special Ad Audience creation is a breaking change on all versions. Beginning August 15, 2022, developers can start to implement the code changes, and will have until September 13, 2022, when the non-versioning change occurs and prior values are deprecated. The impacted endpoints related to this deprecation are:

For reading audience:

  • endpoint gr:get:AdAccount/customaudiences
  • field operation_status

For adset creation:

  • endpoint gr:post:AdAccount/adsets
  • field subtype

For adset editing:

  • endpoint gr:post:AdCampaign
  • field subtype

For custom audience creation:

  • endpoint gr:post:AdAccount/customaudiences
  • field subtype

For custom audience editing:

  • endpoint gr:post:CustomAudience

Please refer to the developer documentation for further details to support code implementation.
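As a practical illustration, here is a minimal sketch of auditing an ad account for Special Ad Audiences ahead of the October 12 pause, reading the customaudiences edge named in the list above. The Graph API version, the token scope, and the “SPECIAL_AD_AUDIENCE” subtype value are assumptions to verify against the developer documentation before use.

```python
import requests

GRAPH = "https://graph.facebook.com/v14.0"  # version assumed; check the docs
ACCESS_TOKEN = "YOUR_TOKEN"                 # token with ads_read (assumed scope)
AD_ACCOUNT_ID = "act_YOUR_ACCOUNT_ID"

def list_custom_audiences(account_id: str) -> list:
    # Reads the customaudiences edge with the two fields this
    # deprecation touches (subtype, operation_status).
    resp = requests.get(
        f"{GRAPH}/{account_id}/customaudiences",
        params={
            "fields": "id,name,subtype,operation_status",
            "access_token": ACCESS_TOKEN,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

# Surface audiences to remove from ad sets before October 12, 2022.
# "SPECIAL_AD_AUDIENCE" is our assumed subtype value; confirm it in the docs.
for aud in list_custom_audiences(AD_ACCOUNT_ID):
    if aud.get("subtype") == "SPECIAL_AD_AUDIENCE":
        print(f"{aud['id']}: {aud.get('name')} is a Special Ad Audience")
```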

First seen at developers.facebook.com


Introducing an Update to the Data Protection Assessment

Over the coming year, some apps with access to certain types of user data on our platforms will be required to complete the annual Data Protection Assessment. We have made a number of improvements to this process since our launch last year, when we introduced our first iteration of the assessment.

The updated Data Protection Assessment will include a new developer experience that is enhanced through streamlined communications, direct support, and clear status updates. Today, we’re sharing what you can expect from these new updates and how you can best prepare for completing this important privacy requirement if your app is within scope.

If your app is in scope for the Data Protection Assessment, and you’re an app admin, you’ll receive an email and a message in your app’s Alert Inbox when it’s time to complete the annual assessment. You and your team of experts will then have 60 calendar days to complete the assessment. We’ve built a new platform that enhances the user experience of completing the Data Protection Assessment. These updates to the platform are based on learnings over the past year from our partnership with the developer community. When completing the assessment, you can expect:

  • Streamlined communication: All communications and required actions will be through the My Apps page. You’ll be notified of pending communications requiring your response via your Alerts Inbox, email, and notifications in the My Apps page.

    Note: Other programs may still communicate with you through the App Contact Email.

  • Available support: You can engage with Meta teams via the Support tool to seek clarification on the questions within the Data Protection Assessment prior to submission, to get help with any requests for more information, or to resolve violations.

    Note: To access this feature, you will need to add the app and app admins to your Business Manager. Please refer to those links for step-by-step guides.

  • Clear status updates: Easy to understand status and timeline indicators throughout the process in the App Dashboard, App Settings, and My Apps page.
  • Straightforward reviewer follow-ups: Streamlined experience for any follow-ups from our reviewers, all via developers.facebook.com.

We’ve included a brief video that provides a walkthrough of the experience you’ll have with the Data Protection Assessment:

[Embedded video: walkthrough of the Data Protection Assessment experience]

The Data Protection Assessment elevates the importance of data security and helps earn the trust of the billions of people who use our products and services around the world. That’s why we are committed to providing our partners with a seamless experience as they complete this important privacy requirement.

Here is what you can do now to prepare for the assessment:

  1. Make sure you are reachable: Update your developer or business account contact email and notification settings.
  2. Review the questions in the Data Protection Assessment and engage with your teams on how best to answer these questions. You may have to enlist the help of your legal and information security points of contact to answer some parts of the assessment.
  3. Review Meta Platform Terms and our Developer Policies.

We know that when people choose to share their data, we’re able to work with the developer community to safely deliver rich and relevant experiences that create value for people and businesses. It’s a privilege we share when people grant us access to their data, and it’s imperative that we protect that data in order to maintain and build upon their trust. This is why the Data Protection Assessment focuses on data use, data sharing and data security.

Data privacy is challenging and complex, and we’re dedicated to continuously improving the processes to safeguard user privacy on our platform. Thank you for partnering with us as we continue to build a safer, more sustainable platform.

First seen at developers.facebook.com


Resources for Completing App Store Data Practice Questionnaires for Apps That Include the Facebook or Audience Network SDK

First seen at developers.facebook.com
