

UK Admits Encryption Hurdles to Online Safety Law After WhatsApp, Signal Threaten to Pull Out


The UK acknowledged possible technical hurdles in its planned crackdown on illegal online content after encrypted messaging companies including WhatsApp threatened to pull their service from the country.

Regulator Ofcom can only compel tech companies to scan platforms for illegal content such as images of child sexual abuse if it’s “technically feasible,” culture minister Stephen Parkinson told the House of Lords on Wednesday, as the chamber debated the government’s Online Safety Bill. He said the watchdog will work closely with businesses to develop and source new solutions.

“If appropriate technology does not exist which meets these requirements, Ofcom cannot require its use,” Parkinson said. Ofcom “cannot require companies to use proactive technology on private communications in order to comply” with the bill’s safety duties.

The remarks aim to allay tech companies’ concerns that scanning their platforms for illegal content could compromise the privacy and encryption of user data, giving hackers and spies a back door into private communications. In March, Meta Platforms’ WhatsApp went so far as to threaten to pull out of the UK.

“Today really looks to be a case of the Department for Science, Innovation and Technology offering some wording to the messaging companies to enable them to save face and avoid the embarrassment of having to row back from their threats to leave the UK, their second largest market in the G7,” said Andy Burrows, a tech accountability campaigner who previously worked for the National Society for the Prevention of Cruelty to Children.


Protecting Children

The sweeping legislation — which aims to make the web safer — is in its final stages in Parliament after six years of development. Parkinson said that Ofcom would nevertheless be able to require companies to “develop or source a new solution” in order to comply with the bill.


“It is right that Ofcom should be able to require technology companies to use their considerable resources and their expertise to develop the best possible protections for children in encrypted environments,” he said.

Meredith Whittaker, president of encrypted messaging app Signal, earlier welcomed a Financial Times report suggesting the government was pulling back from its standoff with technology companies, citing anonymous officials as saying there isn’t a service today that can scan messages without undermining privacy.

However, security minister Tom Tugendhat and a government spokesman said it was wrong to suggest the policy had changed.

Feasibility

“As has always been the case, as a last resort, on a case-by-case basis and only when stringent privacy safeguards have been met, it will enable Ofcom to direct companies to either use, or make best efforts to develop or source, technology to identify and remove illegal child sexual abuse content – which we know can be developed,” the spokesman said.


Ministers met big tech companies including TikTok and Meta in Westminster on Tuesday.

Language around technical feasibility has been used by the government in the past. In July Parkinson told Parliament “Ofcom can require the use of technology on an end-to-end encrypted service only when it is technically feasible.”

The NSPCC, a major advocate of the UK crackdown, said the government’s statement “reinforces the status quo in the bill and the legal requirements on tech companies remain the same.”

Accredited Tech

Ultimately, the legislation’s wording leaves it up to the government to decide what is technically feasible.

Once the bill comes into force, Ofcom can serve a company with a notice requiring it to “use accredited technology” to identify and prevent child sexual abuse or terrorist content, or face fines, according to July’s published draft of the legislation. There is currently no accredited technology because the process of identifying and approving services only begins once the bill becomes law.

Advertisement
free widgets for website

Previous attempts to solve the dilemma have revolved around so-called client-side or device-side scanning. But in 2021 Apple Inc. delayed such a system, which would have searched photos on devices for signs of child sex abuse, after fierce criticism from privacy advocates, who feared it could set the stage for other forms of tracking.
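Client-side (or device-side) scanning generally means fingerprinting media on the user's device and checking those fingerprints against a database of known abusive material before the content is encrypted and sent. The snippet below is a deliberately simplified, hypothetical sketch of that matching step, not Apple's system or anything proposed under the bill; the blocklist and the plain SHA-256 digest are stand-ins, since real deployments rely on perceptual hashes that still match after resizing or re-compression.

```typescript
import { createHash } from "crypto";

// Hypothetical blocklist of fingerprints of known illegal images.
// In real systems these would be perceptual hashes supplied by a
// child-safety organisation, not plain cryptographic digests.
const knownBadFingerprints = new Set<string>([
  // entries would be provisioned to the device by the vendor
]);

// Fingerprint an image before it reaches the encryption layer.
function fingerprint(imageBytes: Uint8Array): string {
  // Stand-in: SHA-256 only matches exact copies; a perceptual hash
  // would tolerate minor edits to the image.
  return createHash("sha256").update(imageBytes).digest("hex");
}

// Decide, on the sender's device, whether an outgoing image matches the
// blocklist. This is why critics describe the approach as scanning private
// content even though the transport itself stays end-to-end encrypted.
function shouldFlag(imageBytes: Uint8Array): boolean {
  return knownBadFingerprints.has(fingerprint(imageBytes));
}
```

Privacy advocates' core objection is that once such a matching pipeline exists on the device, the list it checks against could in principle be expanded to other categories of content, which is the "other forms of tracking" concern raised against Apple's proposal.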


Andy Yen, founder and CEO of privacy-focused VPN and messaging company Proton, said “As it stands, the bill still permits the imposition of a legally binding obligation to ban end-to-end encryption in the UK, undermining citizens’ fundamental rights to privacy, and leaves the government defining what is ‘technically feasible.’”

“For all the good intentions of today’s statement, without additional safeguards in the Online Safety Bill, all it takes is for a future government to change its mind and we’re right back where we started,” he said. 

© 2023 Bloomberg LP 





YouTube Announces AI-Enabled Editing Products for Video Creators


YouTube will roll out a slew of artificial-intelligence-powered features for creators, the latest effort from parent company Alphabet to incorporate generative AI — technology that can create and synthesize text, images, music and other media given simple prompts — into its most important products and services.

Among the new products YouTube announced Thursday is a tool called Dream Screen that uses generative AI to add video or image backgrounds to short-form videos, which the company calls Shorts. It also announced new AI-enabled production tools to help with editing both short- and long-form videos on its platform.

“We’re unveiling a suite of products and features that will enable people to push the bounds of creative expression,” Toni Reid, YouTube’s vice president for community products, said in a blog post timed to the announcement Thursday. The Google-owned video platform first announced that it was developing the tools in March.

Google has been under pressure to show results and practical applications for its generative AI products. Some critics have been wary that the company, which has long been seen as a leader in artificial intelligence, was falling behind upstarts like OpenAI or rival Microsoft, and that the products Google was rolling out weren’t yet ready for public consumption. OpenAI’s ChatGPT and a new Bing chatbot from Microsoft — which has invested $13 billion (nearly Rs. 1,08,100 crore) in OpenAI since 2019 — have been wildly popular and gained mainstream favour.

Over the past few months, Google launched its own ChatGPT competitor, Bard, and released a steady flow of updates to the product. It has also incorporated generative AI features into its most important services, including its flagship search engine, in what the company calls its experimental “search generative experience.” The product generates detailed summaries based on information it has ingested from the internet and other digital sources in response to search queries.


The announcement of the new features also comes as YouTube is locked in fierce competition with ByteDance’s TikTok and Meta Platforms’ Instagram Reels for a greater share of the vertical, short-form video market. YouTube said it now sees more than 70 billion daily views on Shorts, and the new generative AI tools appear to be aimed at attracting even more users and creators and gaining a competitive edge over its rivals.


The company also announced YouTube Create, a mobile app aimed at making video production easier for the platform’s creators. The app includes AI-enabled features like editing and trimming, automatic captioning, voiceover capabilities and access to a library of filters and royalty-free music. The app is currently in beta on Android in “select markets,” the company said, and will be free of charge.

Beyond creation, YouTube said it would also provide creators with more tools to get AI-powered insights, help with automatic dubbing of videos and assist with finding music and soundtracks for videos.

© 2023 Bloomberg LP 





WhatsApp Passkey Support Reportedly Rolling Out to Beta Testers on Android: How It Works


WhatsApp has begun rolling out support for a new feature that will allow you to log in to your account using the biometric authentication mechanism on your smartphone. The messaging service will soon allow you to create a passkey — a kind of login credential that eliminates the need to use or remember a password — and use it to sign in to the app securely via your device’s facial recognition or fingerprint scanner.

Feature tracker WABetaInfo spotted the new passkey feature in WhatsApp beta for Android 2.23.20.4 on Tuesday; the update is rolling out to beta users. However, not all users who have updated to the latest beta release will have access to the feature, which is reportedly available to a “limited number of beta testers”. Gadgets 360 was unable to access the feature on two different Android smartphones that are both enrolled in the beta program.

The new Passkeys feature on WhatsApp (Photo Credit: WABetaInfo)

The new passkey feature is described as a “simple way to sign in safely” to WhatsApp in a screenshot shared by the feature tracker. This suggests that it could be used to help sign in to other devices via secure authentication on your primary device.

Authenticating using passkeys isn’t a novel concept, and the technology is slowly gaining traction online — Google already allows you to log in to a new device by using fingerprint-based biometric authentication for passkeys in place of a password. These passkeys are stored securely on your device and are used once biometric authentication is provided.
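For context, passkeys as implemented by Google, Apple and others are built on the WebAuthn/FIDO2 standard: the device generates a public/private key pair, keeps the private key protected behind biometrics, and registers only the public key with the service. The browser-based sketch below uses the standard navigator.credentials.create() API purely as an illustration of that flow; it is not WhatsApp's implementation (on Android the app would go through the platform's credential manager rather than a web page), and the relying-party ID, user details and locally generated challenge are hypothetical placeholders.

```typescript
// Illustrative WebAuthn passkey registration (browser environment).
// All identifiers below are placeholders; the challenge would normally
// be issued by the service's server, not generated locally.
async function registerPasskey(): Promise<void> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rp: { id: "example.com", name: "Example Service" },
    user: {
      id: new TextEncoder().encode("user-1234"),
      name: "user@example.com",
      displayName: "Example User",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
    authenticatorSelection: {
      authenticatorAttachment: "platform", // use the device's own authenticator
      residentKey: "required",             // discoverable credential, i.e. a passkey
      userVerification: "required",        // face or fingerprint prompt
    },
  };

  // The OS asks for Face ID / a fingerprint; the private key never leaves
  // the device, and only the resulting public key goes to the server.
  const credential = await navigator.credentials.create({ publicKey });
  console.log("Passkey created:", credential?.id);
}
```

Because only the public key is shared, a leak of the service's database does not expose anything a phisher can reuse, which is the main security argument for passkeys over passwords.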


The screenshot posted by WABetaInfo also states that WhatsApp will store the passkey in the device’s password manager — for most users, that would be the device’s default password store that is handled by Google with autofill support. The feature is also expected to make its way to iOS, where it is likely to be stored in the iOS Keychain.


It is currently unclear whether WhatsApp will also support storing passkeys in third-party apps like Bitwarden, 1Password, or Dashlane. We can expect to learn more about how the feature works when it is rolled out to more users in the beta program; it is expected to arrive for all smartphones on the stable channel in the future.





Meta Urged by UK Not to Roll Out End-to-End Encryption on Messenger, Instagram


Britain urged Meta not to roll out end-to-end encryption on Instagram and Facebook Messenger without safety measures to protect children from sexual abuse, after the Online Safety Bill was passed by Parliament.

Meta, which already encrypts messages on WhatsApp, plans to implement end-to-end encryption across Messenger and Instagram direct messages, saying the technology reinforced safety and security.

Britain’s Home Secretary Suella Braverman said she supported strong encryption for online users but it could not come at the expense of children’s safety.

“Meta has failed to provide assurances that they will keep their platforms safe from sickening abusers,” she said. “They must develop appropriate safeguards to sit alongside their plans for end-to-end encryption.”

A Meta spokesperson said: “The overwhelming majority of Brits already rely on apps that use encryption to keep them safe from hackers, fraudsters and criminals.


“We don’t think people want us reading their private messages so have spent the last five years developing robust safety measures to prevent, detect and combat abuse while maintaining online security.”

It said it would provide an update on Wednesday on the measures it was taking, such as restricting people over 19 from messaging teens who do not follow them and using technology to identify and take action against malicious behaviour.

“As we roll out end-to-end encryption, we expect to continue providing more reports to law enforcement than our peers due to our industry leading work on keeping people safe,” the spokesperson said. 

Social media platforms will face tougher requirements to protect children from accessing harmful content when the Online Safety Bill, passed by Parliament on Tuesday, becomes law.


End-to-end encryption has been a bone of contention between tech companies and the government over the new law.


Messaging platforms led by WhatsApp oppose a provision that they say could force them to break end-to-end encryption.

The government, however, has said the bill does not ban the technology but instead requires companies to take action to stop child abuse and, as a last resort, to develop technology to scan encrypted messages.

Tech companies have said scanning messages and end-to-end encryption are fundamentally incompatible.
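That claimed incompatibility follows from how end-to-end encryption works: keys live only on the two endpoints, so the service relaying a message stores and forwards ciphertext it cannot read, and any scanning would have to happen on the device before encryption or by weakening the scheme itself. The sketch below uses the open-source tweetnacl library purely to illustrate the point; it is not WhatsApp's or Signal's actual protocol, which layer far more on top (key agreement, forward secrecy, and so on).

```typescript
import nacl from "tweetnacl";

// Each endpoint holds its own key pair; the server never sees secret keys.
const alice = nacl.box.keyPair();
const bob = nacl.box.keyPair();

const nonce = nacl.randomBytes(nacl.box.nonceLength);
const plaintext = new TextEncoder().encode("hello bob");

// Alice encrypts for Bob. This ciphertext is all the server handles,
// and there is nothing in it the server can scan for known content.
const ciphertext = nacl.box(plaintext, nonce, bob.publicKey, alice.secretKey);

// Only Bob, with his secret key, can recover the original message.
const recovered = nacl.box.open(ciphertext, nonce, alice.publicKey, bob.secretKey);
console.log(new TextDecoder().decode(recovered!)); // "hello bob"
```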

© Thomson Reuters 2023





