
Twitter Says Will Step Up Fight Against Misinformation on Platform With New Policy


Twitter is stepping up its fight against misinformation with a new policy cracking down on posts that spread potentially dangerous false stories. The change is part of a broader effort to promote accurate information during times of conflict or crisis.

Starting Thursday, the platform will no longer automatically recommend or emphasise posts that make misleading claims about the Russian invasion of Ukraine, including material that mischaracterises conditions in conflict zones or makes false allegations of war crimes or atrocities against civilians.

Under its new “crisis misinformation policy,” Twitter will also add warning labels to debunked claims about ongoing humanitarian crises, the San Francisco-based company said. Users won’t be able to like, forward or respond to posts that violate the new rules.

The changes make Twitter the latest social platform to grapple with the misinformation, propaganda, and rumors that have proliferated since Russia invaded Ukraine in February. That misinformation ranges from rumors spread by well-intentioned users to Kremlin propaganda amplified by Russian diplomats or fake accounts and networks linked to Russian intelligence.

“We have seen both sides share information that may be misleading and/or deceptive,” said Yoel Roth, Twitter’s head of safety and integrity, who detailed the new policy for reporters. “Our policy doesn’t draw a distinction between the different combatants. Instead, we’re focusing on misinformation that could be dangerous, regardless of where it comes from.”


The new policy will complement existing Twitter rules that prohibit digitally manipulated media, false claims about elections and voting, and health misinformation, including debunked claims about COVID-19 and vaccines.


But it could also clash with the views of Tesla billionaire Elon Musk, who has agreed to pay $44 billion (roughly Rs. 3,41,160 crore) to acquire Twitter with the aim of making it a haven for “free speech.” Musk hasn’t addressed many instances of what that would mean in practice, although he has said that Twitter should only take down posts that violate the law, which taken literally would prevent action against most misinformation, personal attacks and harassment. He has also criticised the algorithms used by Twitter and other social platforms to recommend particular posts to individuals.

The policy was written broadly to cover misinformation during other conflicts, natural disasters, humanitarian crises or “any situation where there’s a widespread threat to health and safety,” Roth said.

Twitter said it will rely on a variety of credible sources to determine when a post is misleading. Those sources will include humanitarian groups, conflict monitors and journalists.

A senior Ukrainian cybersecurity official, Victor Zhora, welcomed Twitter’s new screening policy and said that it’s up to the global community to “find proper approaches to prevent the sowing of misinformation across social networks.”


While the results have been mixed, Twitter’s efforts to address misinformation about the Ukraine conflict exceed those of other platforms that have chosen a more hands-off approach, like Telegram, which is popular in Eastern Europe.

Asked specifically about the Telegram platform, where Russian government disinformation is rampant but Ukraine’s leaders also reach a wide audience, Zhora said the question was “tricky but very important.” That’s because the kind of misinformation disseminated without constraint on Telegram “to some extent led to this war.”


Since the Russian invasion began in February, social media platforms like Twitter and Meta, the owner of Facebook and Instagram, have tried to address a rise in war-related misinformation by labeling posts from Russian state-controlled media and diplomats. They’ve also de-emphasised some material so it no longer turns up in searches or automatic recommendations.

Emerson Brooking, a senior fellow at the Atlantic Council’s Digital Forensic Research Lab and expert on social media and disinformation, said that the conflict in Ukraine shows how easily misinformation can spread online during conflict, and the need for platforms to respond.

“This is a conflict that has played out on the Internet, and one that has driven extraordinarily rapid changes in tech policy,” he said.


Affiliate links may be automatically generated – see our ethics statement for details.


Meta Oversight Board Calls for Overhaul of ‘Cross-Check’ Programme That Prioritises VIP Users


Meta Platforms’ Oversight Board recommended on Tuesday that the company revamp its system exempting high-profile users from its rules, saying the practice privileged the powerful and allowed business interests to influence content decisions.

The arrangement, called cross-check, adds a layer of enforcement review for millions of Facebook and Instagram accounts belonging to celebrities, politicians and other influential users, allowing them extra leeway to post content that violates the company’s policies.

Cross-check “prioritises users of commercial value to Meta and as structured does not meet Meta’s human rights responsibilities and company values,” Oversight Board director Thomas Hughes said in a statement announcing the decision.

The board had been reviewing the cross-check programme since last year, when whistleblower Frances Haugen exposed the extent of the system by leaking internal company documents to the Wall Street Journal.

Those documents revealed that the programme was both larger and more forgiving of influential users than Meta had previously told the Oversight Board, which is funded by the company through a trust and operates independently.


Without controls on eligibility or governance, cross-check sprawled to include nearly anyone with a substantial online following, although even with millions of members it represents a tiny slice of Meta’s 3.7 billion total users.

In 2019, the system blocked the company’s moderators from removing nude photos of a woman posted by Brazilian soccer star Neymar, even though the post violated Meta’s rules against “nonconsensual intimate imagery,” according to the WSJ report.

The board at the time of the report rebuked Meta for not being “fully forthcoming” in its disclosures about cross-check.


In the opinion it issued on Tuesday, the board said it agreed that Meta needed mechanisms to address enforcement mistakes, given the extraordinary volume of user-generated content the company moderates each day.

However, it added, Meta “has a responsibility to address these larger problems in ways that benefit all users and not just a select few.”


It made 32 recommendations that it said would structure the programme more equitably, including transparency requirements, audits of the system’s impact and a more systematic approach to eligibility.

State actors, it said, should continue to be eligible for inclusion in the programme, but based only on publicly available criteria, with no other special preferences.

The Oversight Board’s policy recommendations are not binding, but Meta is required to respond to them, normally within 60 days.

A spokeswoman for the Oversight Board said the company had asked for and received an extension in this case, so it would have 90 days to respond.

© Thomson Reuters 2022


Facebook Dating Will Allow Users to Verify Their Age Using AI Face Scanning, Meta Says


Meta on Monday announced that it is experimenting with new methods, including an AI face scanner, for users of its Facebook Dating service to verify their age.

Meta announced in a blog post that it would start prompting users on Facebook Dating to verify that they’re over 18 if the platform suspects a user is underage.

Users can then verify their age by sharing a selfie video that Facebook passes to a third-party company, Yoti, or by uploading a copy of their ID. According to Meta, Yoti uses facial cues to estimate a user’s age without identifying them.

Meta says the new age verification systems will help stop children from accessing features meant for adults. It doesn’t appear that there are any requirements for adults to verify their age on Facebook Dating.

The US social media giant has used Yoti for other age verification purposes, including vetting Instagram users who attempt to change their birthdate to make them 18 or older.


However, according to a report by The Verge, the system isn’t equally accurate for all people: Yoti’s data shows that its accuracy is worse for “female” faces and people with darker complexions.

Last year, Instagram announced that it had started prompting users to fill in their birthday details. The prompts could initially be dismissed, but the social media giant eventually made them compulsory for users who wanted to continue using Instagram. The prompts were designed to ascertain how old Instagram users were and to prevent content that isn’t suitable for young people from appearing in their feeds. At the time, Instagram had stated that the information was necessary for new features it was developing to protect young people.



Meta Threatens to Remove News From Platform if US Congress Passes Media Bill


Facebook parent Meta Platforms on Monday threatened to remove news from its platform if the US Congress passes a proposal aimed at making it easier for news organisations to negotiate collectively with companies like Alphabet’s Google and Facebook.

Sources briefed on the matter said lawmakers are considering adding the Journalism Competition and Preservation Act to a must-pass annual defense bill as a way to help the struggling local news industry. Meta spokesperson Andy Stone said in a tweet that the company would be forced to consider removing news if the law was passed, “rather than submit to government-mandated negotiations that unfairly disregard any value we provide to news outlets through increased traffic and subscriptions.”

He added the proposal fails to recognise that publishers and broadcasters put content on the platform because “it benefits their bottom line – not the other way around.”

The News Media Alliance, a trade group representing newspaper publishers, is urging Congress to add the bill to the defense bill, arguing that “local papers cannot afford to endure several more years of Big Tech’s use and abuse, and time to take action is dwindling. If Congress does not act soon, we risk allowing social media to become America’s de facto local newspaper.”

More than two dozen groups, including the American Civil Liberties Union, Public Knowledge and the Computer & Communications Industry Association, on Monday urged Congress not to approve the local news bill, saying it would “create an ill-advised antitrust exemption for publishers and broadcasters” and arguing that the bill does not ensure “funds gained through negotiation or arbitration will even be paid to journalists.”


A similar Australian law, which took effect in March 2021 after talks with the big tech firms led to a brief shutdown of Facebook news feeds in the country, has largely worked, a government report said.


Since the News Media Bargaining Code took effect, various tech firms including Meta and Alphabet have signed more than 30 deals with media outlets, compensating them for content that generated clicks and advertising dollars, the report added.

© Thomson Reuters 2022

