Snapchat Removes Only Dozens of Children From Its Platform Every Month in Britain: Ofcom

Snapchat is kicking dozens of children in Britain off its platform each month, compared with the hundreds of thousands blocked by rival TikTok, according to internal data the companies shared with Britain’s media regulator Ofcom and which Reuters has seen.
Social media platforms such as Meta’s Instagram, ByteDance’s TikTok, and Snap’s Snapchat require users to be at least 13 years old. These restrictions are intended to protect the privacy and safety of young children.
Ahead of Britain’s planned Online Safety Bill, aimed at protecting social media users from harmful content such as child pornography, Ofcom asked TikTok and Snapchat how many suspected under-13s they had kicked off their platforms in a year.
According to the data seen by Reuters, TikTok told Ofcom that between April 2021 and April 2022, it had blocked an average of around 180,000 suspected underage accounts in Britain every month, or around 2 million in that 12-month period.
In the same period, Snapchat disclosed that it had removed approximately 60 accounts per month, or just over 700 in total.
A Snap spokesperson told Reuters the figures misrepresented the scale of work the company did to keep under-13s off its platform. The spokesperson declined to provide additional context or to detail specific blocking measures the company has taken.
“We take these obligations seriously and every month in the UK we block and delete tens of thousands of attempts from underage users to create a Snapchat account,” the Snap spokesperson said.
Recent Ofcom research suggests both apps are similarly popular with underage users. Children are also more likely to set up their own private account on Snapchat, rather than use a parent’s, when compared to TikTok.
“It makes no sense that Snapchat is blocking a fraction of the number of children that TikTok is,” said a source within Snapchat, speaking on condition of anonymity.
Snapchat does block users from signing up with a date of birth that puts them under the age of 13. Reuters could not determine what protocols are in place to remove underage users once they have accessed the platform and the spokesperson did not spell these out.
Ofcom told Reuters that assessing the steps video-sharing platforms were taking to protect children online remained a primary area of focus, and that the regulator, which operates independently of the government, would report its findings later this year.
At present, social media companies are responsible for setting the age limits on their platforms. However, under the long-awaited Online Safety Bill, they will be required by law to uphold these limits, and demonstrate how they are doing it, for example through age-verification technology.
Companies that fail to uphold their terms of service face being fined up to 10 percent of their annual turnover.
In 2022, Ofcom’s research found 60 percent of children aged between eight and 11 had at least one social media account, often created by supplying a false date of birth. The regulator also found Snapchat was the most popular app for underage social media users.
Risks to young children
Social media poses serious risks to young children, child safety advocates say.
According to figures recently published by the NSPCC (National Society for the Prevention of Cruelty to Children), Snapchat accounted for 43 percent of cases in which social media was used to distribute indecent images of children.
Richard Collard, associate head of child safety online at the NSPCC, said it was “incredibly alarming” how few underage users Snapchat appeared to be removing.
Snapchat “must take much stronger action to ensure that young children are not using the platform, and older children are being kept safe from harm,” he said.
Britain, like the European Union and other countries, has been seeking ways to protect social media users, in particular children, from harmful content without damaging free speech.
Enforcing age restrictions is expected to be a key part of its Online Safety Bill, along with ensuring companies remove content that is illegal or prohibited by their terms of service.
A TikTok spokesperson said its figures spoke to the strength of the company’s efforts to remove suspected underage users.
“TikTok is strictly a 13+ platform and we have processes in place to enforce our minimum age requirements, both at the point of sign up and through the continuous proactive removal of suspected underage accounts from our platform,” they said.
© Thomson Reuters 2023
Affiliate links may be automatically generated – see our ethics statement for details.
YouTube Announces AI-Enabled Editing Products for Video Creators

YouTube will roll out a slew of artificial-intelligence-powered features for creators, the latest effort from parent company Alphabet to incorporate generative AI — technology that can create and synthesize text, images, music and other media given simple prompts — into its most important products and services.
Among the new products YouTube announced Thursday is a tool called Dream Screen that uses generative AI to add video or image backgrounds to short-form videos, which the company calls Shorts. It also announced new AI-enabled production tools to help with editing both short- and long-form videos on its platform.
“We’re unveiling a suite of products and features that will enable people to push the bounds of creative expression,” Toni Reid, YouTube’s vice president for community products, said in a blog post timed to the announcement Thursday. The Google-owned video platform first announced that it was developing the tools in March.
Google has been under pressure to show results and practical applications for its generative AI products. Some critics have been wary the company, which has long been seen as a leader in artificial intelligence, was falling behind upstarts like OpenAI or rival Microsoft, and that the products Google was rolling out weren’t yet ready for public consumption. OpenAI’s ChatGPT and a new Bing chatbot from Microsoft — which has invested $13 billion (nearly Rs. 1,08,100 crore) in OpenAI since 2019 — have been wildly popular and gained mainstream favour.
Over the past few months, Google launched its own ChatGPT competitor, Bard, and released a steady flow of updates to the product. It’s also incorporated experimental generative AI features into its most important services, including its flagship search engine, in what the company calls its experimental “search generative experience.” The product generates detailed summaries based on information it’s ingested from the internet and other digital sources in response to search queries.
The announcement of the new features also comes as YouTube is locked in fierce competition with ByteDance’s TikTok and Meta Platforms’ Instagram Reels to gain more share of the vertical, short-form video market. YouTube said it now sees more than 70 billion daily views on Shorts, and the new generative AI tools appear to be aimed at attracting even more users and creators and gaining a competitive edge over its rivals.
The company also announced YouTube Create, a mobile app aimed at helping the platform’s creators make video production work easier. The app includes AI-enabled features like editing and trimming, automatic captioning, voiceover capabilities and access to a library of filters and royalty-free music. The app is currently in beta on Android in “select markets,” the company said, and will be free of charge.
Beyond creation, YouTube said it would also provide creators with more tools to get AI-powered insights, help with automatic dubbing of videos and assist with finding music and soundtracks for videos.
© 2023 Bloomberg LP
WhatsApp Passkey Support Reportedly Rolling Out to Beta Testers on Android: How It Works

WhatsApp has begun rolling out support for a new feature that will allow you to log in to your account using the biometric authentication mechanism on your smartphone. The messaging service will soon allow you to create a passkey — a kind of login credential that eliminates the need to use or remember a password — on your device and use it to securely log in to apps and services using the facial recognition or fingerprint scanner on your device.
Feature tracker WABetaInfo spotted the new passkey feature on Tuesday in WhatsApp beta for Android 2.23.20.4, which is rolling out to beta users. However, not all users who have updated to the latest beta release will have access to the feature, which is reportedly available to a “limited number of beta testers”. Gadgets 360 was unable to access the feature on two different Android smartphones that are both enrolled in the beta program.
[Image: The new passkeys feature on WhatsApp. Photo Credit: WABetaInfo]
The new passkey feature is described as a “simple way to sign in safely” to WhatsApp in a screenshot shared by the feature tracker. This suggests that it could be used to help sign in to other devices via secure authentication on your primary device.
Authenticating using passkeys isn’t a novel concept, and the technology is slowly gaining traction online. Google already allows you to log in on a new device using fingerprint-based biometric authentication with a passkey in place of a password. These passkeys are securely stored on your device and are used once biometric authentication succeeds.
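At its core, a passkey is an asymmetric key pair: the private key never leaves your device, while the service stores only the public key and verifies a signature over a fresh challenge at each login. The sketch below is a hypothetical toy illustration of that idea using deliberately tiny textbook RSA; real passkeys use the WebAuthn protocol with hardware-backed keys and biometric gating, not this code.

```python
import hashlib
import secrets

# Tiny textbook RSA key pair -- illustration only, far too small to be secure.
p, q = 61, 53
n = p * q                            # public modulus (part of the public key)
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent: never leaves the device

def device_sign(challenge: bytes) -> int:
    """Runs on the device; in a real passkey flow this is gated by biometrics."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(digest, d, n)         # sign the hashed challenge with the private key

def server_verify(challenge: bytes, signature: int) -> bool:
    """Runs on the server, which only ever holds the public key (n, e)."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == digest

# Each login: the server issues a fresh random challenge, the device signs it.
challenge = secrets.token_bytes(16)
assert server_verify(challenge, device_sign(challenge))
```

Because the server holds no secret at all, a breach of its database leaks no credential that can be replayed, which is the security property that lets passkeys eliminate passwords.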
The screenshot posted by WABetaInfo also states that WhatsApp will store the passkey in the device’s password manager — for most users, that would be the device’s default password store that is handled by Google with autofill support. The feature is also expected to make its way to iOS, where it is likely to be stored in the iOS Keychain.
It is currently unclear whether WhatsApp will also support storing passkeys in third-party apps like Bitwarden, 1Password, or Dashlane. We can expect to learn more about how the feature works as it reaches more users in the beta program, and the feature is expected to arrive for all smartphones on the stable channel in the future.
Meta Urged by UK Not to Roll Out End-to-End Encryption on Messenger, Instagram Without Safety Measures

Britain urged Meta not to roll out end-to-end encryption on Instagram and Facebook Messenger without safety measures to protect children from sexual abuse after the Online Safety Bill was passed by parliament.
Meta, which already encrypts messages on WhatsApp, plans to implement end-to-end encryption across Messenger and Instagram direct messages, saying the technology reinforced safety and security.
Britain’s Home Secretary Suella Braverman said she supported strong encryption for online users but it could not come at the expense of children’s safety.
“Meta has failed to provide assurances that they will keep their platforms safe from sickening abusers,” she said. “They must develop appropriate safeguards to sit alongside their plans for end-to-end encryption.”
A Meta spokesperson said: “The overwhelming majority of Brits already rely on apps that use encryption to keep them safe from hackers, fraudsters and criminals.
“We don’t think people want us reading their private messages so have spent the last five years developing robust safety measures to prevent, detect and combat abuse while maintaining online security.”
Meta said it would provide an update on Wednesday on the measures it was taking, such as restricting people over 19 from messaging teens who do not follow them, and using technology to identify and take action against malicious behaviour.
“As we roll out end-to-end encryption, we expect to continue providing more reports to law enforcement than our peers due to our industry leading work on keeping people safe,” the spokesperson said.
Social media platforms will face tougher requirements to protect children from accessing harmful content when the Online Safety Bill passed by Parliament on Tuesday becomes law.
End-to-end encryption has been a bone of contention between tech companies and the government during the passage of the new law.
Messaging platforms led by WhatsApp oppose a provision that they say could force them to break end-to-end encryption.
The government, however, has said the bill does not ban the technology but instead requires companies to take action to stop child abuse and, as a last resort, to develop technology to scan encrypted messages.
Tech companies have said scanning messages and end-to-end encryption are fundamentally incompatible.
© Thomson Reuters 2023
(This story has not been edited by NDTV staff and is auto-generated from a syndicated feed.)