Facebook Considers Changes to its Political Ads Policy

As you may have heard, Facebook’s decision not to subject political ads to fact-checking has caused a lot of concern in various sectors.

With The Social Network rolling out a range of measures to stop potential misuse of its platforms by politically-affiliated groups in the wake of the 2016 US Presidential Election, its decision to allow lies in political ads in the coming campaign seems at odds with such efforts. Add to that the fact that every other digital platform – including Google, which this week announced its own political ad restrictions – now either doesn’t accept political ads or subjects them to fact-checking, and you can see why the pressure would be on the world’s largest digital platform to re-think its stance.

And now, according to The Wall Street Journal, it just might do that.

As per WSJ:

“Facebook is considering making changes to its political-advertising policy that could include preventing campaigns from targeting only very small groups of people, people familiar with the matter said, in an effort to spurn the spread of misinformation. The company in recent weeks has weighed increasing the minimum number of people who are targeted in political ads from 100 to a few thousand, the people said.”

That could be an interesting move – micro-targeting, or focusing specific messages onto very small groups, was a key element of how Cambridge Analytica reportedly conducted its political advocacy campaigns across The Social Network. If it weren’t able to home in on such specific audience subsets, with messaging tailored to their key pain points, maybe this type of campaigning wouldn’t be as effective – and while raising the minimum audience size from 100 to a few thousand may not seem like a major shift, it could have a big impact.

This comes after Google outlined similar limitations on its targeting tools for political ads earlier this week. Google announced that it will remove Customer Match targeting as an option for political promotions, which will remove the capacity for such campaigns to upload their own lists of emails and/or phone numbers and have Google’s systems match them up with relevant online profiles.

Again, this may not seem like a major step, but it’s this kind of precise targeting that has proven significantly effective in campaigns from groups like Cambridge Analytica and the Russian IRA.

Of course, Facebook could still do more. It could subject political ads to fact-checks, it could add new labels flagging them as such, or it could, as Twitter has done, simply stop selling political ads altogether.

For its part, Facebook has said that it is considering all options, and that it’s still assessing its stance in light of ongoing discussion.

CNN reported earlier this month:

“Facebook is considering changes to how political ads can be targeted, how ads are labeled, and providing more information about who is paying for an ad.”

So it seems like all options are still on the table. Well, all but banning political ads outright – but even if it doesn’t choose to stop them entirely, there are various ways in which it could improve its approach, and not only move in line with other platforms, but also appease user and regulatory concerns. And the latter could end up being a bigger headache for Facebook in the future if it doesn’t act.

This is especially true with other platforms refining their processes – if Facebook sticks with its stance, that could open the door for digital platforms to come under a new set of government-implemented standards, bringing new penalties and limitations on how Facebook, and others, operate. That kind of accountability would also prompt increased discussion about how Facebook and other providers are run, and whether there should be rules governing their overall practices.

Facebook doesn’t want that, as it would be costly to implement and difficult to manage – and as such, it makes sense that Zuck and Co. would look to move more in line with everyone else.

There’s nothing official as yet, but expect to see Facebook’s political ads policy change within the next few months.

Social Media Today


Youth apologises to parents on Facebook for ‘embarrassing them’, hangs himself to death


The deceased was identified as Sumit Pardhe (Representative Image). Photo credit: iStock Images

Key Highlights

  • A 24-year-old youth in Aurangabad allegedly hanged himself to death on Friday
  • The youth took the drastic step after apologising to his parents on a Facebook Live

Aurangabad: A 24-year-old youth from Aurangabad, Maharashtra, allegedly ended his own life after apologising to his parents for “embarrassing them”. He went live on Facebook to make the apology before taking the drastic step.

The deceased was identified as Sumit Pardhe. Pardhe was found hanging from a tree in Paradh, Jalna on Friday morning. He was a resident of Hatti, Sillod tehsil of Aurangabad. 

‘The family is in shock and they are not in a position to speak’

Abhijit More, Paradh police station inspector, said that the circumstances that prompted Pardhe to take the drastic step have not been ascertained yet. He added, “The family is in shock and they are not in a position to speak.”

The youth had gone to stay at his aunt’s home. On Friday morning, he left the house to go to a neighbouring farm where he allegedly hanged himself to death. Some of the locals saw the body and informed the police. The youth was taken to a nearby hospital where he was declared brought dead, The Times of India reported. 

Youth apologised for going against parents’ wishes 

The youth had completed his master’s in science and used to play volleyball. During the Facebook Live session, he apologised to his parents for embarrassing them, saying they had had to apologise publicly because of him. He also said that his decision to go against his parents’ wishes had caused all the problems for his family.

Reportedly, the youth was disturbed over an incident that took place around three days before he took the extreme step. Efforts are underway to unearth the details of the incident. A case of accidental death was registered by the police. 


 


Israel, Arabs and Jews: Was Facebook objective? – Analysis


Last week, readers contacted The Jerusalem Post to suggest that we investigate claims that Facebook and Instagram were maliciously biasing the social media war against Israel, guided by powerful figures inside the company.

According to the claim, people pressing “report post” on blatantly antisemitic or anti-Israel content, or posts with false information about the recent military campaign, were told that the post “doesn’t violate our community guidelines.”

Reporters investigated a particular Instagram employee, a Muslim woman who has posted several pro-Palestinian images on her personal Instagram account, who activists said is one of the people who decide what is and isn’t in line with the social media giant’s community guidelines. “If the heads of these companies support these views themselves, why is it even surprising that no one sees our side?” one Jewish activist asked.

After investigating the matter further and speaking with a number of Facebook executives, the Post concluded that the accusation wasn’t strong enough to pursue. But an article published last week in Buzzfeed News made a similar accusation – from the Arab side.

According to the article, “Facebook is losing trust among Arab users,” because during the ongoing Palestinian-Israeli conflict, “censorship – either perceived or documented – had made Arab and Muslim users skeptical of the platform.” The article went on to list the same claims the Jewish activists had made, that their posts were being censored while the other side’s were not, and that powerful people inside the Facebook organization were making deliberately biased calls about what meets the company’s community standards and what does not.

The article quoted heavily from The Jerusalem Post’s September 2020 profile of Jordana Cutler, Facebook’s Head of Policy for Israel and the Jewish Diaspora, who was named one of the year’s most influential Jews. The article saw proof of Facebook’s pro-Israel bias in Cutler’s statements like “My job is to represent Facebook to Israel, and represent Israel to Facebook.” Facebook’s former head of policy for the Middle East and North Africa region, Ashraf Zeitoon, was quoted as saying he was “shocked” after seeing that interview.

Zeitoon, who left Facebook in 2017, shouldn’t have been so shocked though. Facebook maintains public policy teams in every country it works in, tasked with interfacing between the needs of the social media company and the legal and diplomatic needs of the local government.

“Jordana’s role, and the role of our public policy team around the world, is to help make sure local governments, regulators and civil society understand Facebook’s policies, and that we at Facebook understand the context of the countries where we operate. Jordana is part of a global policy team, and to suggest that her role is any kind of conflict of interest is entirely inaccurate and inflammatory,” a Facebook spokesperson said.

Israel, like other countries, expects Facebook to remove content that violates local laws, even if it does not breach Facebook’s own community standards. On that matter, Israel’s intervention during the Guardian of the Walls military campaign was relatively limited. Data from the cyber department of Israel’s Attorney-General shows that from May 8-26, Israeli officials submitted 608 requests to Facebook to remove posts, with 54% accepted. On Instagram, there were 190 official removal requests, with a 46% acceptance rate.

The number of Israelis reporting hate speech and incitement through the platform seemingly had a far greater impact. According to Buzzfeed News, Israel, with 5.8 million Facebook users, reported to Facebook 550,000 posts violating policies for violence and hate speech and 155,000 posts for terrorist content during one week of fighting. During the period, Israelis reported 10 times more terrorism violations and eight times more hate violations compared to Palestinian users, Buzzfeed said, citing a company employee.

Zeitoon, in a different interview given to CBS News, attributed that gap to Israel’s organizational superiority. “Israel has hacked the system and knows how to pressure Facebook to take stuff down,” he was quoted as saying. “Palestinians don’t have the capacity, experience and resources to report hate speech by Israeli citizens in Hebrew.”

Others, however, note another difference: Hamas is recognized by many governments as a terrorist organization, and Palestinians posted far more direct calls for violence, hate speech and content glorifying terrorism than Israelis did. Ignoring that aspect of the “Palestinian voice” that people like Zeitoon say is being suppressed is irresponsible and dangerous, they claim.

Israel is justifiably quite concerned about the clear and present dangers posed by social media. Reports in the Hebrew press suggest that Prime Minister Benjamin Netanyahu even proposed blocking social media sites completely in Israel as the recent conflict began, in hopes of quelling incitement. Many have referred to the recent uptick in violence as the TikTok Intifada, a reference to the video-sharing social media network that is particularly popular among a younger demographic, and is widely seen as the source of some of the most intense incitement activity against Israel.

Facebook, as well as TikTok, categorically asserts that its automated content removal tools and human content moderators show no systemic bias toward any political cause or movement.

Responding to one such post by an Israeli activist, Facebook Israel communications manager Maayan Sarig responded sharply: “We take criticism very seriously, but false claims against specific employees are not acceptable. Our policies are conducted globally in accordance with our community rules and there is no content that is independently approved or removed by individuals. So let’s try to avoid conspiracy theories.” That sort of statement is echoed throughout the company’s internal and external communications.

TikTok likewise has told the Post that “Safety is our top priority and we do not tolerate violence, hate speech or hateful behavior.”

It is not surprising that people on both sides of the conflict accuse social platforms of being biased against their cause. But, as is often the case online, the nuances easily get drowned out by strong emotions.


Facebook & Instagram will now allow all users to hide their like counts


Facebook and Instagram are giving users more control over their content, feeds and privacy.

This week they announced new tools such as a Feed Filter Bar, Favourite Feed and Choose Who Can Comment, which aim to give people more ways to control what they see on their news feeds.

Facebook has also been working on another new tool that allows users to filter offensive content from their DMs, and it has been testing the option to hide like counts over the past few months.

Hiding like counts is “beneficial for some and annoying to others”, says Facebook.

They added, “We’re giving you the option to hide like counts on all posts in your feed. You’ll also have the option to hide like counts on your own posts, so others can’t see how many likes your posts get. This way, if you like, you can focus on the photos and videos being shared, instead of how many likes posts get.”

According to Facebook, “changing the way people view like counts is a big shift.” 

