The silent partner cleaning up Facebook for $500m a year


In 2019, Julie Sweet, the newly-appointed chief executive of global consulting firm Accenture, held a meeting with top managers. She had a question: Should Accenture get out of some of the work it was doing for a leading client, Facebook?

For years, tensions had mounted within Accenture over a certain task that it performed for the social network. In eight-hour shifts, thousands of its full-time employees and contractors were sorting through Facebook’s most noxious posts, including images, videos and messages about suicides, beheadings and sexual acts, trying to prevent them from spreading online.

Some of those Accenture workers, who reviewed hundreds of Facebook posts in a shift, said they had started experiencing depression, anxiety and paranoia. In the United States, one worker had joined a class-action lawsuit to protest the working conditions. News coverage linked Accenture to the grisly work. So Sweet had ordered a review to discuss the growing ethical, legal and reputational risks.

At the meeting in Accenture’s Washington office, she and Ellyn Shook, head of human resources, voiced concerns about the psychological toll of the work for Facebook and the damage to the firm’s reputation, attendees said. Some executives who oversaw the Facebook account argued that the problems were manageable. They said the social network was too lucrative a client to lose.


The meeting ended with no resolution.

Facebook and Accenture have rarely talked about their arrangement or even acknowledged that they work with each other. But their secretive relationship lies at the heart of an effort by the world’s largest social media company to distance itself from the most toxic part of its business.

Toxic content

For years, Facebook has been under scrutiny for the violent and hateful content that flows through its site. Chief executive Mark Zuckerberg has repeatedly pledged to clean up the platform. He has promoted the use of artificial intelligence to weed out toxic posts and touted efforts to hire thousands of workers to remove the messages that the AI doesn't catch.

Behind the scenes, Facebook has quietly paid others to take on much of the responsibility. Since 2012, the company has hired at least 10 consulting and staffing firms globally to sift through its posts, along with a wider web of subcontractors, according to interviews and public records.

No company has been more crucial to that endeavour than Accenture. The Fortune 500 firm, better known for providing high-end tech, accounting and consulting services to multinational companies and governments, has become Facebook’s single biggest partner in moderating content, according to an examination by The New York Times.


Accenture has taken on the work – and given it a veneer of respectability – because Facebook has signed contracts with it for content moderation and other services worth at least $500 million (€422 million) a year, according to the New York Times examination. Accenture employs more than a third of the 15,000 people whom Facebook has said it has hired to inspect its posts. And while the agreements provide only a small fraction of Accenture’s annual revenue, they give it an important lifeline into Silicon Valley. Within Accenture, Facebook is known as a “diamond client”.


Their contracts, which have not previously been reported, have redefined the traditional boundaries of an outsourcing relationship. Accenture has absorbed the worst facets of moderating content and made Facebook’s content issues its own. As a cost of doing business, it has dealt with workers’ mental-health issues from reviewing the posts. It has grappled with labour activism when those workers pushed for more pay and benefits. And it has silently borne public scrutiny when they have spoken out against the work.

Those issues have been compounded by Facebook’s demanding hiring targets and performance goals and so many shifts in its content policies that Accenture struggled to keep up, 15 current and former employees said. And when faced with legal action from moderators about the work, Accenture stayed quiet as Facebook argued that it was not liable because the workers belonged to Accenture and others.

“You couldn’t have Facebook as we know it today without Accenture,” said Cori Crider, a co-founder of Foxglove, a law firm that represents content moderators. “Enablers like Accenture, for eye-watering fees, have let Facebook hold the core human problem of its business at arm’s length.”

The New York Times interviewed more than 40 current and former Accenture and Facebook employees, labour lawyers and others about the companies’ relationship, which also includes accounting and advertising work. Most spoke anonymously because of non-disclosure agreements and fear of reprisal. The New York Times also reviewed Facebook and Accenture documents, legal records and regulatory filings.


Facebook and Accenture declined to make executives available for comment. Drew Pusateri, a Facebook spokesperson, said the company was aware that content moderation “jobs can be difficult, which is why we work closely with our partners to constantly evaluate how to best support these teams”.

Stacey Jones, an Accenture spokesperson, said the work was a public service that was “essential to protecting our society by keeping the internet safe”.

Neither company mentioned the other by name.

Pornographic posts

Much of Facebook’s work with Accenture traces back to a nudity problem.

In 2007, millions of users joined the social network every month – and many posted naked photos. A settlement that Facebook reached that year with Andrew Cuomo, who was New York’s attorney general, required the company to take down pornographic posts flagged by users within 24 hours.


Facebook employees who policed content were soon overwhelmed by the volume of work, members of the team said. Sheryl Sandberg, the company’s chief operating officer, and other executives pushed the team to find automated solutions for combing through the content, three of them said.


Facebook also began looking at outsourcing, they said. Outsourcing was cheaper than hiring people and provided tax and regulatory benefits, along with the flexibility to grow or shrink quickly in regions where the company did not have offices or language expertise. Sandberg helped champion the outsourcing idea, they said, and midlevel managers worked out the details.

In 2010, Accenture scored an accounting contract with Facebook. By 2012, that had expanded to include a deal for moderating content, particularly outside the United States.

That year, Facebook sent employees to Manila, in the Philippines, and to Warsaw, in Poland, to train Accenture workers to sort through posts, two former Facebook employees involved with the trip said. Accenture’s workers were taught to use a Facebook software system and the platform’s guidelines for leaving content up, taking it down or escalating it for review.

What started as a few dozen Accenture moderators grew rapidly.


By 2015, Accenture’s office in the San Francisco Bay Area had set up a team, codenamed Honey Badger, just for Facebook’s needs, former employees said. Accenture went from providing about 300 workers in 2015 to about 3,000 in 2016. They are a mix of full-time employees and contractors, depending on the location and task.

Facebook also spread the content work to other firms, such as Cognizant and TaskUs. Facebook now provides a third of TaskUs’s business, or $150 million a year, according to regulatory filings.

The work was challenging. While more than 90 per cent of the objectionable material that comes across Facebook and Instagram is removed by AI, outsourced workers must decide whether to leave up or take down the posts that the AI doesn't catch.

They receive a performance score that is based on correctly reviewing posts against Facebook’s policies. If they make mistakes more than 5 per cent of the time, they can be fired, Accenture employees said.
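As a rough illustration of how such a threshold could work in practice, the sketch below scores a set of audited decisions and flags an error rate above 5 per cent. The formula and field names are assumptions; the article does not describe Accenture's actual scoring system.

```python
# A rough sketch of an accuracy threshold like the one described above:
# score a moderator's audited decisions and flag an error rate over 5%.
# The formula and names are assumptions, not Accenture's real system.

def error_rate(decisions):
    """decisions: list of (moderator_ruling, auditor_ruling) pairs."""
    if not decisions:
        return 0.0
    wrong = sum(1 for mine, audit in decisions if mine != audit)
    return wrong / len(decisions)

audited = [("remove", "remove"), ("leave_up", "remove"),
           ("remove", "remove"), ("leave_up", "leave_up")]
rate = error_rate(audited)
at_risk = rate > 0.05  # mistakes more than 5% of the time can mean dismissal
print(f"error rate = {rate:.0%}, at risk = {at_risk}")
```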

Psychological costs

Within Accenture, workers began questioning the effects of viewing so many hateful posts.


In Dublin, one Accenture moderator who sifted through Facebook content left a suicide note on his desk in 2018, said a mental health counsellor who was involved in the episode. The worker was found safe.

Joshua Sklar, a moderator in Austin who quit in April, said he had reviewed 500 to 700 posts a shift, including images of dead bodies after car crashes and videos of animals being tortured.

If workers went around Accenture’s chain of command and directly communicated with Facebook about content issues, they risked being reprimanded, he added. That made Facebook slower to learn about and react to problems, he said.

Facebook said anyone filtering content could escalate concerns.

Another former moderator in Austin, Spencer Darr, said in a legal hearing in June that the job had required him to make unimaginable decisions, such as whether to delete a video of a dog being skinned alive or simply mark it as disturbing. “Content moderators’ job is an impossible one,” he said.


In 2018, Accenture introduced WeCare – policies that mental-health counsellors said limited their ability to treat workers. Their titles were changed to “wellness coaches” and they were instructed not to give psychological assessments or diagnoses, but to provide “short-term support” like taking walks or listening to calming music. The goal, according to a 2018 Accenture guidebook, was to teach moderators “how to respond to difficult situations and content.”


Accenture’s Jones said the company was “committed to helping our people who do this important work succeed both professionally and personally”. Workers can see outside psychologists.

Scrutiny

By 2019, scrutiny of the industry was growing. That year, Cognizant said it was exiting content moderation after the tech site The Verge described the low pay and mental-health effects on workers at an Arizona office. Cognizant said the decision would cost it at least $240 million in revenue and lead to 6,000 job cuts.

More than one Accenture chief executive debated doing business with Facebook.

In 2017, Pierre Nanterme, Accenture’s chief at the time, questioned the ethics of the work and whether it fitted the firm’s long-term strategy of providing services with high profit margins and technical expertise, three executives involved in the discussions said.


No actions were taken. Nanterme died of cancer in January 2019.

Five months later, Sweet, a longtime Accenture lawyer and executive, was named chief executive. She soon ordered the review of the moderation business, three former colleagues said.

Last year, a worker in Austin was one of two from Accenture who joined a class-action suit against Facebook filed by US moderators. Facebook argued that it was not liable because the workers were employed by firms such as Accenture, according to court records. After the judge in the case ruled against Facebook, the company reached a $52 million settlement with the workers in May 2020.

For Sweet, the debate over the Facebook contracts stretched out over several meetings, former executives said. She subsequently made several changes.

In December 2019, Accenture created a two-page legal disclosure to inform moderators about the risks of the job. The work had “the potential to negatively impact your emotional or mental health”, the document said.


Last October, Accenture went further. It listed content moderation for the first time as a risk factor in its annual report, saying it could leave the firm vulnerable to media scrutiny and legal trouble. Accenture also restricted new moderation clients, two people with knowledge of the policy shift said. Any new contracts required approval from senior management.

But Sweet also left some things untouched, they said.

Among them: the contracts with Facebook. Ultimately, the people said, the client was too valuable to walk away from. – This article first appeared in the New York Times


Facebook Messenger Is Launching a Split Payments Feature for Users to Quickly Share Expenses


Meta has announced a new Split Payments feature in Facebook Messenger. As the name suggests, it lets you calculate and split expenses with others right from Messenger, offering an easier way to share the cost of bills and expenses, such as splitting a dinner bill with friends. Users will be able to split bills evenly or modify the contribution for each individual, including their own.

The company announced the feature in a blog post. 9to5Mac reports that the bill-splitting feature is still in beta and will be exclusive to US users at first, with the rollout beginning early next week. As mentioned, it will help users share the cost of bills, expenses, and payments. The feature is especially useful for people who share an apartment and need to split the monthly rent and other expenses with flatmates, and it could also come in handy at a group dinner with many people.

With Split Payments, users can add the number of people the expense needs to be divided among; by default, the amount entered is divided into equal parts, and each person’s contribution, including the sender’s own, can be modified. To use Split Payments, tap the Get Started button in a group chat or in the Payments Hub in Messenger. Adjust the contributions in the Split Payments option and send a notification to everyone who needs to pay. After you enter a personalised message and confirm your Facebook Pay details, the request is sent and viewable in the group chat thread.


Once someone has made a payment, you can mark their transaction as ‘completed’. Split Payments will automatically take your own share into account and calculate the amount owed accordingly.
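For illustration, here is a minimal sketch of the split arithmetic described above, assuming an even division with optional per-person overrides. The function and names are hypothetical; Messenger's actual implementation has not been published.

```python
# A minimal sketch of the split arithmetic described above: divide a total
# evenly by default, let individual contributions be overridden, and make
# the shares sum exactly to the total. All names here are illustrative.

def split_bill(total_cents, people, overrides=None):
    """Return each person's share in cents."""
    shares = dict(overrides or {})
    others = [p for p in people if p not in shares]
    if not others:                      # every share was set by hand
        return shares
    remaining = total_cents - sum(shares.values())
    base, extra = divmod(remaining, len(others))
    # Hand out leftover cents one at a time so nothing is lost to rounding.
    for i, person in enumerate(others):
        shares[person] = base + (1 if i < extra else 0)
    return shares

# Example: a $60.00 dinner for three, where Ana has agreed to pay $30.00.
print(split_bill(6000, ["ana", "ben", "cara"], {"ana": 3000}))
# -> {'ana': 3000, 'ben': 1500, 'cara': 1500}
```

Working in cents and distributing the leftover cents individually avoids rounding errors that would make the shares drift from the total.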



Tasneem Akolawala is a Senior Reporter for Gadgets 360. Her reporting expertise encompasses smartphones, wearables, apps, social media, and the overall tech industry. She reports out of Mumbai, and also writes about the ups and downs in the Indian telecom sector. Tasneem can be reached on Twitter at @MuteRiot, and leads, tips, and releases can be sent to tasneema@ndtv.com.


Facebook Owner Meta Launches New Platform, Safety Hub to Protect Women in India


Meta (formerly Facebook) on Thursday announced a slew of steps to protect women users on its platform, including the launch of StopNCII.org in India, which aims to combat the spread of non-consensual intimate images (NCII).

Meta has also launched the Women’s Safety Hub, available in Hindi and 11 other Indian languages, which will enable more women users in India to access information about tools and resources that can help them make the most of their social media experience while staying safe online.

This initiative by Meta will ensure women do not face a language barrier in accessing information, Karuna Nain, director of global safety policy at Meta Platforms, told reporters.

“Safety is an integral part of Meta’s commitment to building and offering a safe online experience across the platforms and over the years the company has introduced several industry leading initiatives to protect users online.

“Furthering our effort to bolster the safety of users, we are bringing in a number of initiatives to ensure online safety of women on our platforms,” she added.


StopNCII.org is a platform that aims to combat the spread of non-consensual intimate images (NCII).

“It gives victims control. People can come to this platform proactively, hash their intimate videos and images, share their hashes back with the platform and participating companies,” Nain said.

She explained that the platform doesn’t receive any photos or videos; instead, it gets the hash, a unique digital fingerprint that identifies a known piece of violating content. “We can proactively keep a lookout for that content on our platforms and once it's uploaded, our review team check what's really going on and take appropriate action if it violates our policies,” she added.
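For illustration, the sketch below shows the hash-matching flow Nain describes, in which only fingerprints, never the images themselves, are shared and compared. It uses a cryptographic hash for simplicity; in practice, systems like StopNCII.org rely on perceptual hashing (Facebook has open-sourced PDQ for this) so that resized or re-encoded copies of an image still match.

```python
# A minimal sketch of hash-based matching: the fingerprint, never the
# image, is shared and compared. SHA-256 is used here only for
# illustration; real systems use perceptual hashes so altered copies
# of an image still match.

import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Computed locally on the victim's device; the image never leaves it."""
    return hashlib.sha256(image_bytes).hexdigest()

# Participating platforms keep only the shared list of known hashes.
known_violating = {fingerprint(b"<image bytes hashed by the victim>")}

def check_upload(uploaded_bytes: bytes) -> bool:
    """At upload time, match the new content's hash against the list."""
    return fingerprint(uploaded_bytes) in known_violating

print(check_upload(b"<image bytes hashed by the victim>"))  # True: flag it
print(check_upload(b"unrelated content"))                   # False
```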


In partnership with the UK Revenge Porn Helpline, StopNCII.org builds on Meta’s NCII Pilot, an emergency programme that allows potential victims to proactively hash their intimate images so they can’t be proliferated on its platforms.

The first-of-its-kind platform has partnered with global organisations to support victims of NCII. In India, it has partnered with organisations such as Social Media Matters, the Centre for Social Research, and the Red Dot Foundation.


Nain added that the company hopes this becomes an industry-wide initiative, so that victims can come to one central place for help and support instead of approaching each tech platform one by one.

Also, Bishakha Datta (executive editor of Point of View) and Jyoti Vadehra from the Centre for Social Research are the first Indian members of Meta’s Global Women’s Safety Expert Advisors. The group comprises 12 other non-profit leaders, activists, and academic experts from different parts of the world and consults Meta in the development of new policies, products and programmes to better support women on its apps.

“We are confident that with our ever-growing safety measures, women will be able to enjoy a social experience which will enable them to learn, engage and grow without any challenges.

“India is an important market for us and bringing Bishakha and Jyoti onboard to our Women’s Safety Expert Advisory Group will go a long way in further enhancing our efforts to make our platforms safer for women in India,” Nain said.


Facebook Adds New Trend Insights in Creator Studio, Which Could Help Shape Your Posting Strategy


Facebook’s looking to provide more content insight within Creator Studio with the rollout of a new ‘Inspiration Hub’ element, which highlights trending content and hashtags within categories related to your business Page.


As shown in screenshots posted by social media expert Matt Navarra, you’ll be able to access the new Inspiration Hub from the Home tab in Creator Studio when it becomes available to you.

On the right side of the screen, the first of the new insights shows trending hashtags and videos from the last 24 hours, posted by Pages similar to yours, displayed above a ‘See more’ prompt.

When you tap through to the new hub, you’ll have a range of additional filters to check out trending content from across Facebook, including Page category, content type, region, and more.


That could be hugely valuable in learning what Facebook users are responding to, and what people within your target market are engaging with in the app.


The Hub also includes insights into trending hashtags within your chosen timeframe, which may further assist in tapping into trending discussions.


How valuable hashtags are on Facebook is still up for debate, but you can also filter the displayed results by platform to show Instagram hashtag trends as well, which could be very valuable in maximizing your reach.

Much of this type of info has been available within CrowdTangle, Facebook’s analytics platform for journalists, for some time, but not everyone can access CrowdTangle data, which could make this an even more valuable proposition for many marketers.

See also  Guwahati woman takes on molester, drags his scooter down the drain; Facebook post goes viral

Of course, overall performance really relates to your own creative, and thinking through the action that you want your audience to take when reading your posts. But in terms of detecting new content trends, including hashtag usage, caption length, videos versus image posts, and more, there’s a lot that could be gleaned from these tools and filters.

It’s a significant analytics addition. We’ve asked Facebook for more info on the rollout of the new option, including whether it’s already beyond test mode, and we’ll update this post if/when we hear back.
