In Social Media Safety Messages, Pictures Should Match the Words, Study Finds

When using social media to nudge people toward safe and healthy behaviour, it’s critical to make sure the words match the pictures, according to a new study.

Parents of young children who viewed social media posts were better able to recall safety messages, such as how to put a baby safely to sleep, when the images in the posts aligned with the messages in the text, the researchers found.

The study appears in the Journal of Health Communication.

“Many times, scientists and safety experts aren’t involved in decisions about social media for health agencies and other organisations, and we end up seeing images that have nothing to do with the safety message or, worse, images that contradict the guidance,” said lead author Liz Klein, an associate professor of public health at The Ohio State University.

Take the safe-sleep example: the researchers found posts that advocated a bumper-free crib for baby but used an image of an infant in a crib with bumpers.

They saw posts about preventing head injury with bike helmets illustrated by pictures of kids without bike helmets.

“In this study, we were trying to understand how much those mismatches matter — do people understand the message even if the picture isn’t right? Does the picture really matter?” Klein said.

Their answers came from research using eye-tracking technology to gauge the attention young parents paid to various posts, and subsequent tests to see what they recalled about the safety messages.

When the 150 parents in the study were shown a trio of posts with matched imagery and text and three other posts with mismatched visual and written messages, they spent far longer on the matched posts — 5.3 seconds, compared to the 3.3 seconds their eyes lingered on the mismatched posts.

Further, the matched messages appeared to make a difference in understanding and recall of safety messages. After accounting for differences in health literacy and social media use among participants, the researchers found that each second of viewing time on matched posts was associated with a 2.8 percent increase in a safety knowledge score.

“With nearly 70 percent of adults reporting use of social media, and many parents using social media and other internet sources to keep current on injury prevention strategies, social media is a great opportunity to broadcast safety and injury prevention messages,” said study co-author Lara McKenzie, a principal investigator in the Center for Injury Research and Policy at Nationwide Children’s Hospital in Columbus.

“As more health organisations and public health agencies use social media to share health information with the public, the findings of our study underscore the need to ensure that the imagery and text in social media posts are aligned,” added McKenzie.

Klein said she understands that those managing social media accounts may be drawn to images that are the most attention-grabbing. But when it comes to health and safety, this study suggests that making sure the image and the text are sending the same message is more important.

“If you want people to put their medicine up and out of reach of children, kids to wear their bike helmets, or new parents to remember that babies should always go to sleep on their backs, alone and in a crib — that’s where matching matters. Maybe save the eye-grabbing stuff and the humorous posts for different purposes,” Klein said.

Klein said the findings in this study likely extend beyond child safety messaging to any number of health and safety campaigns. However, she added that there’s more work to be done to understand how best to harness the power of social media for different types of public health communication.

“We need to pay more attention to how we communicate with the people we’re trying to influence with health and safety guidance. All of us can do a better job of thinking about how we use our social media accounts to contribute to better public health,” she said.

NDTV Gadgets360.com

WhatsApp ‘Delete for Everyone’ Feature Gets Extension to Over 2 Days

WhatsApp’s ‘Delete for Everyone’ feature has been extended. You can now delete a wrongly sent message up to a little over two days after sending it. Until now, the Meta-owned instant messaging platform allowed users to delete a wrongly sent message within a window of one hour, eight minutes, and 16 seconds. The extension was first mentioned in February this year. The change comes as WhatsApp announced three new privacy features intended to make conversations on the app more secure.

WhatsApp shared a post on Twitter announcing that “you will have a little over 2 days to delete your messages from your chats after you hit send.” WhatsApp rolled out the roughly one-hour limit for deleting messages from a chat in 2018; the feature to delete messages for everyone originally gave users only seven minutes after hitting send. WABetaInfo, a website that tracks WhatsApp features before they are released widely, replied to WhatsApp’s post on Twitter, noting that the new time limit for “Delete Message for Everyone” is 2 days and 12 hours.

To be able to delete messages within this window, both you and all the recipients need to be on the latest version of WhatsApp. WhatsApp has not clarified whether the feature is limited to Android or iOS, but it should be available on both WhatsApp for Android and WhatsApp for iOS. Deleting a message for everyone is simple: tap and hold the message (image, video, or document) you want to remove, tap Delete, and then select “Delete for everyone”.

As mentioned, the change comes as WhatsApp introduced three new privacy features aimed at giving users more control over their conversations and more privacy. The new features let you exit group chats without notifying everyone, control who can see when you’re online, and block screenshots of “view once” messages.

WhatsApp already provides other privacy features, such as end-to-end encryption for calls and messages by default, disappearing messages, end-to-end encrypted backups, two-step verification, and the ability to block and report unwanted chats.

Is India at Risk of Chinese-Style Surveillance Capitalism?: Andy Mukherjee

After five years of negotiations involving the government, tech companies, and civil society activists, the world’s largest democracy is sending its debate on privacy back to the drawing board. The Indian government has junked the personal data protection bill and decided to replace it with “a comprehensive legal framework.” If the current anarchy wasn’t bad enough, nobody knows what the revamped regime will contain: whether it will put individuals first, as in Europe, or promote vested commercial and party-state interests, as in China.

Back in 2017, India’s liberals were hopeful. In July that year, New Delhi set up a panel under retired Justice B.N. Srikrishna to frame data protection norms. The very next month, the country’s Supreme Court held privacy to be a part of a constitutionally guaranteed right to life and liberty. But the optimism didn’t take long to fade. The law introduced in parliament in December 2019 gave the government unfettered access to personal data in the name of sovereignty and public order — a move that will “turn India into an Orwellian State,” Srikrishna cautioned.

Those fears are coming true even without a privacy law. Razorpay, a Bengaluru-based payment gateway, was recently compelled by the police to supply data on donors to Alt News, a fact-checking portal. Although the records were obtained legally, as part of an investigation against the website’s cofounder, there was no safeguard against their misuse. The risk that authorities could target opponents of the ruling Bharatiya Janata Party led to howls of protest about the stifling of dissent under Prime Minister Narendra Modi.

The backdrop to India’s privacy debate has changed. Six years ago, mobile data was expensive, and most people, especially in villages, used feature phones. That’s no longer the case. By 2026, India will have 1 billion smartphone users, and the consumer digital economy is poised for a 10-fold surge in the current decade to $800 billion (roughly Rs. 63,71,600 crore). To get a loan from the private sector or a subsidy from the state, citizens now need to part with far more personal data than in the past: dodgy lending apps ask for access to entire lists of phone contacts. The Modi government manages the world’s largest repository of biometric information and has used it to distribute $300 billion (roughly Rs. 23,89,440 crore) in benefits directly to voters. Rapid digitisation without a strong data protection framework is leaving the public vulnerable to exploitation.

Europe’s general data protection regulation isn’t perfect. But at least it holds natural persons to be the owners of their names, email addresses, location, ethnicity, gender, religious beliefs, biometric markers, and political opinions. Instead of following that approach, India sought to give the state an upper hand against both individuals and private-sector data collectors. Large global tech firms, such as Alphabet, Meta Platforms, and Amazon, were concerned about the now-dropped bill’s insistence on storing “critical” personal data only in India for national security reasons. Not only does localisation get in the way of efficient cross-border data storage and processing, but as China has shown with Didi Global, it can also be weaponised. The ride-hailing app was forced to delist in the US months after it went public there against Beijing’s wishes and was eventually slapped with a $1.2 billion (roughly Rs. 9,550 crore) fine for data breaches that “severely affected national security.”

Still, the scrapping of the Indian bill will bring little cheer to Big Tech if its replacement turns out to be even more draconian. Both Twitter and Meta’s WhatsApp have initiated legal proceedings against the Indian government: the former against “arbitrary” directions to block handles or take down content, and the latter against demands to make encrypted messages traceable. The government’s power to impose fines of up to 4 percent of global revenue, as envisaged in the discarded data protection law, can come in handy to make tech firms fall in line, so it’s unlikely that New Delhi will dilute it in the new legislation.

For individuals, the big risk is the authoritarian tilt in India’s politics. The revamped framework may accord even less protection to citizens from a Beijing-inspired mix of surveillance state and surveillance capitalism than the abandoned law. According to the government, it was the 81 amendments sought by a joint parliamentary panel that made the current bill untenable. One such demand was to exempt any government department from privacy regulations as long as New Delhi is satisfied that state agencies follow just, fair, reasonable, and proportionate procedures. That’s too much of a carte blanche. To prove overreach, for instance in the Alt News donors case, citizens would have to mount expensive legal battles. But to what end? If the law doesn’t bat for the individual, courts will offer little help.

Minority groups in India have the most at stake. S. Q. Masood, an activist in the southern city of Hyderabad, sued the state of Telangana after the police stopped him on the street during the COVID-19 lockdown, asked him to remove his mask, and took his picture. “Being Muslim and having worked with minority groups that are frequently targeted by the police, I’m concerned that my photo could be matched wrongly and that I could be harassed,” Masood told the Thomson Reuters Foundation. The zeal with which authorities are embracing technologies to profile individuals by pulling together information scattered across databases shows a hankering for a Chinese-style system of command and control.

The abandoned Indian data protection legislation also wanted to allow voluntary verification of social-media users, ostensibly to check fake news. But as researchers at the Internet Freedom Foundation have pointed out, collection of identity documents by platforms like Facebook would leave users vulnerable to more sophisticated surveillance and commercial exploitation. Worse still, what starts out as voluntary may become mandatory if platforms start denying some services without identity checks, depriving whistleblowers and political dissidents of the right to anonymity. Since that wasn’t exactly a bug in the rejected law, expect it to be a feature of India’s upcoming privacy regime as well.

© 2022 Bloomberg LP

Snap Launches Parental Control Tool Family Center, Lets Parents Check Teens’ Contacts

Snap, owner of the popular messaging app Snapchat, rolled out its first parental control tools on Tuesday. They will allow parents to see who their teens are talking to, but not the substance of their conversations.

The new feature, called Family Center, is launching at a time when social media companies have been criticised over a lack of protection for kids. In October, Snap and its tech peers TikTok and YouTube testified before US lawmakers, who accused the companies of exposing young users to bullying or steering them toward harmful content.

Instagram also testified in a Senate hearing in December over children’s online safety, after a Facebook whistleblower leaked internal documents that she said showed the app harmed some teens’ mental health and body image.

Parents can invite their teens to join Family Center on Snapchat, and once the teens consent, parents will be able to view their kids’ friends list and who they have messaged on the app in the past seven days. They can also confidentially report any concerning accounts.

However, parents will not be able to see private content or messages sent to and from their teens, said Jeremy Voss, Snap’s head of messaging products, in an interview.

“It strikes the right approach for enhancing safety and well-being, while still protecting autonomy and privacy,” he said.

Snap said it plans to launch additional features in the coming months, including notifications to parents when their teen reports abuse from a user.

Prior to Family Center, Snap already had some teen protection policies in place. By default, profiles for Snapchat users under 18 are private, and they only show up as a suggested friend in search results when they have friends in common with another user. Users must be at least 13 years old to sign up.

Snap’s new tools follow a similar move by Instagram, which launched its Family Center in March, allowing parents to view what accounts their teens follow and how much time they spend on the app.

© Thomson Reuters 2022
