
FACEBOOK

Enforcing Against Manipulated Media


Illustration of a post being flagged and removed from News Feed

People share millions of photos and videos on Facebook every day, creating some of the most compelling and creative visuals on our platform. Some of that content is manipulated, often for benign reasons, like making a video sharper or audio clearer. But there are people who engage in media manipulation in order to mislead.

Manipulations can be made through simple technology like Photoshop or through sophisticated tools that use artificial intelligence or “deep learning” techniques to create videos that distort reality – usually called “deepfakes.” While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases.

Today we want to describe how we are addressing both deepfakes and all types of manipulated media. Our approach has several components, from investigating AI-generated content and deceptive behaviors like fake accounts, to partnering with academia, government and industry to expose the people behind these efforts.

Collaboration is key. Across the world, we’ve been driving conversations with more than 50 global experts with technical, policy, media, legal, civic and academic backgrounds to inform our policy development and improve the science of detecting manipulated media. 

As a result of these partnerships and discussions, we are strengthening our policy toward misleading manipulated videos that have been identified as deepfakes. Going forward, we will remove misleading manipulated media if it meets the following criteria:

  • It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:
  • It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.

This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words. 

Consistent with our existing policies, audio, photos or videos, whether a deepfake or not, will be removed from Facebook if they violate any of our other Community Standards, including those governing nudity, graphic violence, voter suppression and hate speech.

Videos that don’t meet these standards for removal are still eligible for review by one of our independent third-party fact-checkers, which include over 50 partners worldwide fact-checking in over 40 languages. If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad. And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false.

This approach is critical to our strategy, and one we heard about specifically in our conversations with experts. If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labeling them as false, we’re providing people with important information and context.
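To make the decision flow described above concrete, here is a minimal illustrative sketch in Python. It is not Facebook’s actual implementation: the MediaItem record, the enforcement_action function and the returned action names are hypothetical stand-ins for what are, in reality, classifier outputs and human review decisions.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class MediaItem:
        """Hypothetical record for a posted photo or video (illustration only)."""
        ai_synthesized: bool                # content merged, replaced or superimposed via AI/ML
        misleads_about_speech: bool         # would likely mislead about words the subject said
        parody_or_satire: bool
        only_omits_or_reorders_words: bool
        fact_check_rating: Optional[str]    # e.g. "false", "partly false", or None

    def enforcement_action(item: MediaItem) -> str:
        """Sketch of the enforcement flow described in the post above."""
        # Parody, satire and simple word omission or reordering are out of scope.
        if not (item.parody_or_satire or item.only_omits_or_reorders_words):
            # Removal requires BOTH criteria: AI/ML synthesis AND likely deception.
            if item.ai_synthesized and item.misleads_about_speech:
                return "remove"
        # Content that stays up can still be rated by third-party fact-checkers.
        if item.fact_check_rating in ("false", "partly false"):
            return "label as false and reduce distribution"
        return "no action"

The two points the sketch encodes are that removal requires both criteria to hold at once, and that content outside the removal policy is not simply ignored: it can still be labeled and have its distribution reduced after a fact-check.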

Our enforcement strategy against misleading manipulated media also benefits from our efforts to root out the people behind them. Just last month, we identified and removed a network using AI-generated photos to conceal its fake accounts. Our teams continue to proactively hunt for fake accounts and other coordinated inauthentic behavior.

We are also engaged in the identification of manipulated content, of which deepfakes are the most challenging to detect. That’s why last September we launched the Deepfake Detection Challenge, which has spurred people from all over the world to produce more research and open source tools to detect deepfakes. This project, supported by $10 million in grants, includes a cross-sector coalition of organizations including the Partnership on AI, Cornell Tech, the University of California, Berkeley, MIT, WITNESS, Microsoft, the BBC and AWS, among several others in civil society and the technology, media and academic communities.

In a separate effort, we’ve partnered with Reuters, the world’s largest multimedia news provider, to help newsrooms worldwide identify deepfakes and manipulated media through a free online training course. News organizations increasingly rely on third parties for large volumes of images and video, and identifying manipulated visuals is a significant challenge. This program aims to support newsrooms trying to do this work.

As these partnerships and our own insights evolve, so too will our policies toward manipulated media. In the meantime, we’re committed to investing within Facebook and working with other stakeholders in this area to find solutions with real impact.


Facebook Newsroom

FACEBOOK

Youth apologises to parents on Facebook for ‘embarrassing them’, hangs himself to death



The deceased was identified as Sumit Pardhe (Representative Image). Photo Credit: iStock Images

Key Highlights

  • A 24-year-old youth in Aurangabad allegedly hanged himself on Friday
  • The youth took the drastic step after apologising to his parents on Facebook Live

Aurangabad: A 24-year-old youth from Aurangabad, Maharashtra, allegedly ended his own life after apologising to his parents for “embarrassing them”. Before taking the drastic step, the youth went live on the social media platform Facebook and apologised to his parents.

The deceased was identified as Sumit Pardhe. Pardhe was found hanging from a tree in Paradh, Jalna on Friday morning. He was a resident of Hatti, Sillod tehsil of Aurangabad. 

‘The family is in shock and they are not in a position to speak’

Abhijit More, inspector at Paradh police station, said that the circumstances that prompted Pardhe to take the drastic step have not yet been ascertained. He added, “The family is in shock and they are not in a position to speak.”

The youth had gone to stay at his aunt’s home. On Friday morning, he left the house for a neighbouring farm, where he allegedly hanged himself. Some locals saw the body and informed the police. The youth was taken to a nearby hospital, where he was declared dead on arrival, The Times of India reported.

Youth apologised for going against parents’ wishes 

The youth had completed his master’s degree in science and used to play volleyball. During the Facebook Live session, he apologised to his parents for embarrassing them, saying that they had been forced to apologise publicly because of him. He also said that his decision to go against his parents’ wishes had caused all the problems for his family.

Reportedly, the youth had been disturbed over an incident that took place around three days before he took the extreme step. Efforts are underway to unearth the details of the incident. The police have registered a case of accidental death.


 


FACEBOOK

Israel, Arabs and Jews: Was Facebook objective? – Analysis


Last week, readers contacted The Jerusalem Post to suggest that we investigate claims that Facebook and Instagram were maliciously biasing the social media war against Israel, guided by powerful figures inside the company.

According to the claim, people pressing “report post” on blatantly antisemitic or anti-Israel content, or posts with false information about the recent military campaign, were told that the post “doesn’t violate our community guidelines.”

Reporters investigated claims about a particular Instagram employee, a Muslim woman who has posted several pro-Palestinian images on her personal Instagram account and who, activists said, is one of the people who decide what is and isn’t in line with the social media giant’s community guidelines. “If the heads of these companies support these views themselves, why is it even surprising that no one sees our side?” one Jewish activist asked.

After investigating the matter further and speaking with a number of Facebook executives, the Post concluded that the accusation wasn’t strong enough to pursue. But an article published last week in BuzzFeed News made a similar accusation, this time from the Arab side.

According to the article, “Facebook is losing trust among Arab users,” because during the ongoing Palestinian-Israeli conflict, “censorship – either perceived or documented – had made Arab and Muslim users skeptical of the platform.” The article went on to list the same claims the Jewish activists had made: that their posts were being censored while the other side’s were not, and that powerful people inside the Facebook organization were making deliberately biased calls about what meets the company’s community standards and what does not.

The article quoted heavily from The Jerusalem Post’s September 2020 profile of Jordana Cutler, Facebook’s Head of Policy for Israel and the Jewish Diaspora, who was named one of the year’s most influential Jews. The article saw proof of Facebook’s pro-Israel bias in statements by Cutler such as “My job is to represent Facebook to Israel, and represent Israel to Facebook.” Facebook’s former head of policy for the Middle East and North Africa region, Ashraf Zeitoon, was quoted as saying he was “shocked” after seeing that interview.

Zeitoon, who left Facebook in 2017, shouldn’t have been so shocked, though. Facebook maintains public policy teams in every country where it operates, tasked with liaising between the needs of the social media company and the legal and diplomatic needs of the local government.

“Jordana’s role, and the role of our public policy team around the world, is to help make sure local governments, regulators and civil society understand Facebook’s policies, and that we at Facebook understand the context of the countries where we operate. Jordana is part of a global policy team, and to suggest that her role is any kind of conflict of interest is entirely inaccurate and inflammatory,” a Facebook spokesperson said.

Israel, like other countries, expects Facebook to remove content that violates local laws, even if it does not violate Facebook’s own standards. On that front, Israel’s intervention during the Guardian of the Walls military campaign was relatively limited. Data from the cyber department of Israel’s Attorney-General shows that from May 8-26, Israeli officials made 608 requests to Facebook to remove posts, of which 54% were accepted. On Instagram, there were 190 official removal requests, with a 46% acceptance rate.

The number of Israelis reporting hate speech and incitement through the platform seemingly had a far greater impact. According to BuzzFeed News, Israel, with 5.8 million Facebook users, reported 550,000 posts to Facebook for violating its violence and hate speech policies and 155,000 posts for terrorist content during one week of fighting. Over that period, Israelis reported ten times more terrorism violations and eight times more hate violations than Palestinian users did, BuzzFeed said, citing a company employee.

Zeitoon, in a different interview given to CBS News, attributed that gap to Israel’s organizational superiority. “Israel has hacked the system and knows how to pressure Facebook to take stuff down,” he was quoted as saying. “Palestinians don’t have the capacity, experience and resources to report hate speech by Israeli citizens in Hebrew.”

Others, however, note another difference: Hamas is recognized by many governments as a terrorist organization, and Palestinians posted direct calls for violence, hate speech and content glorifying terrorism in far greater numbers than Israelis did. Ignoring that aspect of the “Palestinian voice” that people like Zeitoon say is being suppressed is irresponsible and dangerous, they claim.

Israel is justifiably quite concerned about the clear and present dangers posed by social media. Reports in the Hebrew press suggest that Prime Minister Benjamin Netanyahu even proposed blocking social media sites completely in Israel as the recent conflict began, in hopes of quelling incitement. Many have referred to the recent uptick in violence as the TikTok Intifada, a reference to the video-sharing social media network that is particularly popular among a younger demographic, and is widely seen as the source of some of the most intense incitement activity against Israel.

Facebook and TikTok both categorically assert that their automated content removal tools and human content moderators show no systemic bias toward any political cause or movement.

Facebook Israel communications manager Maayan Sarig responded sharply to the post by the Israeli activist mentioned above. “We take criticism very seriously, but false claims against specific employees are not acceptable. Our policies are conducted globally in accordance with our community rules and there is no content that is independently approved or removed by individuals. So let’s try to avoid conspiracy theories.” That sort of statement is echoed throughout the company’s internal and external communications.

TikTok likewise has told the Post that “Safety is our top priority and we do not tolerate violence, hate speech or hateful behavior.”

It is not surprising that people on both sides of the conflict accuse social platforms of being biased against their cause. But, as is often the case online, the nuances easily get drowned out by strong emotions.


FACEBOOK

Facebook & Instagram will now allow all users to hide their like counts


Facebook and Instagram are giving users more control over their content, feeds and privacy.

This week they announced new tools such as a Feed Filter Bar, Favourite Feed and Choose Who Can Comment, which aim to give people more ways to control what they see on their news feeds.

Facebook has been working on another new tool that allows users to filter offensive content from their DMs, and it has been testing hiding like counts over the past few months.

Hiding like counts is “beneficial for some and annoying to others”, says Facebook.

They added, “We’re giving you the option to hide like counts on all posts in your feed. You’ll also have the option to hide like counts on your own posts, so others can’t see how many likes your posts get. This way, if you like, you can focus on the photos and videos being shared, instead of how many likes posts get.”

According to Facebook, “changing the way people view like counts is a big shift.” 

(Image Credit: www.thoughtcatalog.com)
