FACEBOOK

Introducing New Brand Safety Controls for Advertisers

  • Create a safe and welcoming community
  • Maintain a high quality ecosystem of content, publishers and ads
  • Collaborate proactively with industry partners

Create a Safe and Welcoming Community

The best way to contribute to brand safety is to prevent harmful content from ever appearing on our services in the first place. We’ll never be perfect, but we continue to make investments in technology and people to limit harmful content as much as we can.

It starts with Community Standards that determine what is and isn’t allowed. These are grounded in our commitment to give people a voice, and in order for people to express themselves, they need to know they’re safe. That’s why we have over 35,000 people working on safety and security, which includes removing billions of fake accounts a year. We also invest in technology to reduce the spread of false news and help identify content that may violate our policies — often before anyone sees it. And we routinely release the Community Standards Enforcement Report to track our progress in making Facebook safe and inclusive. Our most recent report includes data on Instagram, suicide and self-injury, and expanded data on terrorist propaganda for the first time.

Maintain a High Quality Ecosystem of Content, Publishers and Ads

While our Community Standards apply to everyone on Facebook, we also have additional policies to hold publishers, creators and advertisers accountable. At the same time, we know that not all businesses are the same and some may want additional controls for placements within publisher content, like in-stream and Instant Articles on Facebook or Audience Network.

It’s why we are focused on giving advertisers more transparency and sophisticated tools to suit their brand. Today we are announcing several new brand safety controls:

  • A one-stop place in Business Manager or Ads Manager to create block lists, get delivery reports and set an account-level inventory filter, rather than applying it one campaign at a time.
  • Improved delivery reports that allow advertisers to search by account ID or publisher without having to download the report. Soon we’ll add content-level information to delivery reports.
  • A new brand safety partner, Zefr, joins DoubleVerify, Integral Ad Science and OpenSlate, to help ensure the brand safety controls and tools we create continue to serve advertisers’ needs.
  • Dynamic Content Sets that provide a content-level whitelisting tool for advertisers working with Integral Ad Science, OpenSlate, and Zefr. This initial test for in-stream placements allows these partners to routinely update and adjust the videos available to advertisers based on what best suits their brand.
  • We’re also beginning to test Publisher White Lists for Audience Network and in-stream ads on Facebook with select advertisers and will look to expand more broadly next year.

These build on the tools we already offer advertisers:

  • Controls over where their ad appears when shown within publisher or creator content
  • Publisher lists so advertisers know where their ads might appear before they run
  • Publisher delivery reports so they can understand where their ads actually ran
  • Block lists to prevent an advertiser’s ads from delivering on specific publishers
  • Inventory filter so they can choose the type of content they want associated with their business and filter out the rest
  • Partnerships with third parties to provide advertisers with even more options

All of these policies and controls contribute to a quality ecosystem among people, publishers and advertisers.

Proactive Collaboration with Industry Partners

Brand safety is a challenge for the entire advertising industry, which is why we collaborate with industry partners to share knowledge, build consensus, and work towards making all online platforms safer for businesses.

  • We completed JICWEBS’ Digital Trading Standards Group’s Brand Safety audit, receiving the IAB UK Gold Standard.
  • We’re an active part of the working group for the World Federation of Advertisers’ Global Alliance for Responsible Media (GARM), which is leading the charge with global brands, media agencies, media owners and platforms, and industry bodies to create a more sustainable and responsible digital ecosystem.
  • We’re introducing sessions with industry bodies to provide further insight into how our operations teams work to review content and enforce our Community Standards.

While we have zero tolerance for harmful content on our platforms, we recognize that doesn’t mean zero occurrence. It’s why we are tackling this challenge across the company: working with industry, enlisting expertise across subject matters, and continuing to invest in the technology, tools and advancements that advertisers are asking for. A safer Facebook and Instagram is better for everyone, including businesses, and it’s what we’ll keep working towards.


Two Billion Users — Connecting the World Privately


We are excited to share that, as of today, WhatsApp supports more than 2 billion users around the world.

Mothers and fathers can reach their loved ones no matter where they are. Brothers and sisters can share moments that matter. Coworkers can collaborate, and businesses can grow by easily connecting with their customers.

Private conversations that once were only possible face-to-face can now take place across great distances through instant chats and video calling. There are so many significant and special moments that take place over WhatsApp and we are humbled and honored to reach this milestone.

We know that the more we connect, the more we have to protect. As we conduct more of our lives online, protecting our conversations is more important than ever.

That is why every private message sent using WhatsApp is secured with end-to-end encryption by default. Strong encryption acts like an unbreakable digital lock that keeps the information you send over WhatsApp secure, helping protect you from hackers and criminals. Messages are only kept on your phone, and no one in between can read your messages or listen to your calls, not even us. Your private conversations stay between you.
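The core idea described above is that only the two endpoints hold the decryption key, so any server relaying the message sees only ciphertext. As a rough illustration, here is a toy Python sketch of that principle. This is not WhatsApp’s actual protocol (WhatsApp uses the Signal protocol); the construction below is a deliberately simplified teaching example and is not secure for real use.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream from the shared key (toy construction,
    # not a vetted cipher).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR the plaintext with the keystream; XOR-ing again reverses it.
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

# The shared key exists only on the sender's and recipient's devices.
shared_key = secrets.token_bytes(32)
ciphertext = encrypt(shared_key, b"meet at noon")

# A relaying server sees `ciphertext` but holds no key, so it cannot
# recover the message; only the endpoints can.
assert decrypt(shared_key, ciphertext) == b"meet at noon"
```

The point of the sketch is structural: the relay never handles key material, which is what makes the conversation unreadable in transit regardless of who operates the servers.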

Strong encryption is a necessity in modern life. We will not compromise on security, because that would make people less safe. For even more protection, we work with top security experts, employ industry-leading technology to stop misuse, and provide controls and ways to report issues — without sacrificing privacy.

WhatsApp started with the goal of creating a service that is simple, reliable and private for people to use. Today we remain as committed as when we started, to help connect the world privately and to protect the personal communication of 2 billion users all over the world.


Facebook Newsroom


Facebook, Instagram and YouTube: Government forcing companies to protect you online


Although many of the details have still to be confirmed, it’s likely the new rules will apply to Facebook, Twitter, WhatsApp, Snapchat, and Instagram.

We often talk about the risks you might find online and whether social media companies need to do more to make sure you don’t come across inappropriate content.

Well, now media regulator Ofcom is getting new powers to make sure companies protect both adults and children from harmful content online.

The media regulator makes sure everyone in media, including the BBC, is keeping to the rules.

Harmful content refers to things like violence, terrorism, cyber-bullying and child abuse.

The new rules will likely apply to Facebook – which also owns Instagram and WhatsApp – as well as Snapchat, Twitter, YouTube and TikTok, and will cover things like comments, forums and video-sharing.

Platforms will need to ensure that illegal content is removed quickly, and may also have to “minimise the risks” of it appearing at all.

These plans have been talked about for a while now.

The idea of new rules to tackle ‘online harms’ was originally set out by the Department for Digital, Culture, Media and Sport in May 2018.

The government has now decided to give Ofcom these new powers following research called the ‘Online Harms consultation’, carried out in the UK in 2019.

Plans allowing Ofcom to take control of social media were first spoken of in August last year.

The government will officially announce these new powers for Ofcom on Wednesday 12 February.

But we won’t know right away exactly what new rules will be introduced, or what will happen to tech or social media companies who break the new rules.

Children’s charity the NSPCC has welcomed the news. It says trusting companies to keep children safe online has failed.

“Too many times social media companies have said: ‘We don’t like the idea of children being abused on our sites, we’ll do something, leave it to us,'” said chief executive Peter Wanless.

“Thirteen self-regulatory attempts to keep children safe online have failed.”


Back in February 2018, YouTube said they were “very sorry” after Newsround found several videos not suitable for children on the YouTube Kids app.

The UK government’s Digital Secretary, Baroness Nicky Morgan, said: “There are many platforms who ideally would not have wanted regulation, but I think that’s changing.”

“I think they understand now that actually regulation is coming.”

In many countries, social media platforms are allowed to regulate themselves, as long as they stick to local laws on illegal material.

But some, including Germany and Australia, have introduced strict rules to force social media platforms to do more to protect users online.

In Australia, social media companies have to pay big fines and bosses can even be sent to prison if they break the rules.

For more information and tips about staying safe online, go to BBC Own It, and find out how to make the internet a better place for all of us.


Facebook Launches Digital Literacy Programme for Women in UP


In a bid to provide digital literacy training to 100,000 women across seven states, Facebook on Tuesday launched its ‘We Think Digital’ programme in partnership with the National Commission for Women (NCW) and the Cyber Peace Foundation on the occasion of Safer Internet Day.

“We are focusing on trying to create digital leadership amongst women and help them use technology for empowering themselves, enabling them to make smart choices and stay secure from online risks. The training looks at transforming the learning process and bringing about systemic change,” NCW Chairperson Rekha Sharma said in a statement.

Starting from the state of Uttar Pradesh, the programme will be expanded through the year to other states including Assam, West Bengal, Madhya Pradesh, Gujarat, Jharkhand, and Bihar.

“The Internet has become a driver for change in the current age. These training modules will open doors of equal opportunities for women of Uttar Pradesh and together with Facebook we want to equip and educate people and help make a positive impact,” said Uttar Pradesh Women Welfare Minister Jai Pratap Singh.

The programme has been designed with a focus on digital literacy and citizenship, addressing issues around privacy, safety, and misinformation.

The launch event was attended by 300 women trainees from across the state and also included workshops by the NCW and the Cyber Peace Foundation.

NDTV Gadgets360.com
