
In Social Media Safety Messages, Pictures Should Match the Words, Study Finds


When using social media to nudge people toward safe and healthy behaviour, it’s critical to make sure the words match the pictures, according to a new study.

After looking at social media posts, parents of young children were better able to recall safety messages such as how to put a baby safely to sleep when the images in the posts aligned with the messages in the text, the researchers found.

The study appears in the Journal of Health Communication.

“Many times, scientists and safety experts aren’t involved in decisions about social media for health agencies and other organisations, and we end up seeing images that have nothing to do with the safety message or, worse, images that contradict the guidance,” said lead author Liz Klein, an associate professor of public health at The Ohio State University.

Take the safe sleep example. The researchers found posts that advocated a bumper-free crib for babies but used an image of an infant in a crib with bumpers.

They saw posts about preventing head injury with bike helmets illustrated by pictures of kids without bike helmets.

“In this study, we were trying to understand how much those mismatches matter — do people understand the message even if the picture isn’t right? Does the picture really matter?” Klein said.

Their answers came from research using eye-tracking technology to gauge the attention young parents paid to various posts, and subsequent tests to see what they recalled about the safety messages.

When the 150 parents in the study were shown a trio of posts with matched imagery and text and three other posts with mismatched visual and written messages, they spent far longer on the matched posts — 5.3 seconds, compared to the 3.3 seconds their eyes lingered on the mismatched posts.


Further, the matched messages appeared to make a difference in understanding and recall of safety messages. After accounting for differences in health literacy and social media use among participants, the researchers found that each second of viewing time on matched posts was associated with a 2.8 percent increase in a safety knowledge score.
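That figure reads like the coefficient from an adjusted linear regression of knowledge score on viewing time. The sketch below is only a rough illustration of how such an analysis might be set up in Python; the data file, column names, and model form are assumptions made for this example, not details taken from the paper.

    # Rough illustration of an adjusted analysis like the one described above:
    # regress a safety-knowledge score on seconds spent viewing matched posts,
    # controlling for health literacy and social media use. The CSV file and
    # column names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("eye_tracking_participants.csv")  # hypothetical data file

    model = smf.ols(
        "knowledge_score ~ matched_dwell_seconds + health_literacy + social_media_use",
        data=df,
    ).fit()
    print(model.summary())

    # A coefficient of roughly 2.8 on matched_dwell_seconds would correspond to
    # the reported 2.8 percent increase in the knowledge score per second of
    # viewing time, assuming the score is expressed on a 0-100 scale.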

“With nearly 70 percent of adults reporting use of social media, and many parents using social media and other internet sources to keep current on injury prevention strategies, social media is a great opportunity to broadcast safety and injury prevention messages,” said study co-author Lara McKenzie, a principal investigator in the Center for Injury Research and Policy at Nationwide Children’s Hospital in Columbus.

“As more health organisations and public health agencies use social media to share health information with the public, the findings of our study underscore the need to ensure that the imagery and text in social media posts are aligned,” added McKenzie.

Klein said she understands that those managing social media accounts may be drawn to images that are the most attention-grabbing. But when it comes to health and safety, this study suggests that making sure the image and the text are sending the same message is more important.

“If you want people to put their medicine up and out of reach of children, kids to wear their bike helmets, or new parents to remember that babies should always go to sleep on their backs, alone and in a crib — that’s where matching matters. Maybe save the eye-grabbing stuff and the humorous posts for different purposes,” Klein said.


Klein said the findings in this study likely extend beyond child safety messaging to any number of health and safety campaigns. However, she added that there’s more work to be done to understand how best to harness the power of social media for different types of public health communication.

“We need to pay more attention to how we communicate with the people we’re trying to influence with health and safety guidance. All of us can do a better job of thinking about how we use our social media accounts to contribute to better public health,” she said.

NDTV Gadgets360.com


Twitter Admits Policy ‘Errors’ After Far Right Abuses Its New Rules on Posting Pictures


Twitter’s new picture permission policy was aimed at combating online abuse, but US activists and researchers said Friday that far-right backers have employed it to protect themselves from scrutiny and to harass opponents.

Even the social network admitted the rollout of the rules, which say anyone can ask Twitter to take down images of themselves posted without their consent, was marred by malicious reports and its teams’ own errors.

It was just the kind of trouble anti-racism advocates worried was coming after the policy was announced this week.

Their concerns were quickly validated, with anti-extremism researcher Kristofer Goldsmith tweeting a screenshot of a far-right call-to-action circulating on Telegram: “Due to the new privacy policy at Twitter, things now unexpectedly work more in our favor.”

“Anyone with a Twitter account should be reporting doxxing posts from the following accounts,” the message said, with a list of dozens of Twitter handles.

Gwen Snyder, an organizer and researcher in Philadelphia, said her account was blocked this week after a report to Twitter about a series of 2019 photos she said showed a local political candidate at a march organized by extreme-right group Proud Boys.

Rather than go through an appeal with Twitter, she opted to delete the images and alert others to what was happening.

“Twitter moving to eliminate (my) work from their platform is incredibly dangerous and is going to enable and embolden fascists,” she told AFP.

In announcing the privacy policy on Tuesday, Twitter noted that “sharing personal media, such as images or videos, can potentially violate a person’s privacy, and may lead to emotional or physical harm.”


But the rules don’t apply to “public figures or individuals when media and accompanying Tweets are shared in the public interest or add value to public discourse.”

By Friday, Twitter noted the rollout had been rough: “We became aware of a significant amount of coordinated and malicious reports, and unfortunately, our enforcement teams made several errors.”

“We’ve corrected those errors and are undergoing an internal review to make certain that this policy is used as intended,” the firm added.


Facebook Messenger Is Launching a Split Payments Feature for Users to Quickly Share Expenses


Meta has announced a new Split Payments feature in Facebook Messenger. As the name suggests, it will let you calculate and split expenses with others right from Facebook Messenger, offering an easier way to share the cost of bills and expenses, such as splitting a dinner bill with friends. Using the new feature, Facebook Messenger users will be able to split bills evenly or modify the contribution for each individual, including their own.

The company announced the new Split Payments feature in a blog post. 9to5Mac reports that the bill-splitting feature is still in beta and will be exclusive to US users at first, with the rollout beginning early next week. As mentioned, it will help users share the cost of bills, expenses, and payments. The feature is especially useful for people who share an apartment and need to split the monthly rent and other expenses with flatmates, and it could also come in handy at a group dinner with many people.

With Split Payments, users can add the number of people the expense needs to be divided among; by default, the amount entered will be divided into equal parts. A user can also modify each person’s contribution, including their own. To use Split Payments, click the Get Started button in a group chat or in the Payments Hub in Messenger. Users can adjust the contributions in the Split Payments option and send a notification to everyone who needs to make a payment. Once you enter a personalised message and confirm your Facebook Pay details, the request is sent and becomes viewable in the group chat thread.


Once someone has made their payment, you can mark their transaction as ‘completed’. Split Payments will automatically take your own share into account and calculate the amount owed accordingly.
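The arithmetic behind the feature is simple: split the total evenly by default, allow any individual share to be overridden, and recompute the rest. The sketch below illustrates that logic in Python; it is a stand-in for the idea described above, not Meta’s code, and every name in it is invented.

    # Illustrative even split with optional per-person overrides, mirroring the
    # behaviour described above. Not Meta's implementation; rounding leftovers
    # and currency formatting are ignored for simplicity.
    from decimal import Decimal, ROUND_HALF_UP

    def split_bill(total, people, overrides=None):
        """Return each person's share of `total`, split evenly unless overridden."""
        overrides = {p: Decimal(str(v)) for p, v in (overrides or {}).items()}
        shares = dict(overrides)
        remaining = [p for p in people if p not in overrides]
        if remaining:
            each = (Decimal(str(total)) - sum(overrides.values())) / len(remaining)
            each = each.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
            for person in remaining:
                shares[person] = each
        return shares

    # Example: a 90.00 dinner for three where one person only owes 20.00.
    print(split_bill(90, ["you", "asha", "ben"], overrides={"ben": 20}))
    # {'ben': Decimal('20'), 'you': Decimal('35.00'), 'asha': Decimal('35.00')}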



Tasneem Akolawala is a Senior Reporter for Gadgets 360. Her reporting expertise encompasses smartphones, wearables, apps, social media, and the overall tech industry. She reports out of Mumbai, and also writes about the ups and downs in the Indian telecom sector. Tasneem can be reached on Twitter at @MuteRiot, and leads, tips, and releases can be sent to tasneema@ndtv.com.


What do Meta’s New Safety Initiatives to Protect Women Really Mean for Women in India?


Meta, formerly Facebook, announced a series of initiatives aimed at protecting women users on the company’s social media platforms. The initiatives include the launch of StopNCII.org in India, a platform that aims to combat the spread of non-consensual intimate images (NCII), and a Safety Hub for Women that will give more women users access to information about resources that can help them make the most of their social media experience. Meta has also appointed the first Indian members to the company’s Global Women’s Safety Expert Advisors group.

“Safety is really core to our mission at Facebook,” Karuna Nain, Director of Global Safety Policy at Meta Platforms, told reporters on Thursday while announcing the initiatives. She elaborated that the social media behemoth works to keep its platforms safe on three fronts: implementing clear policies, building cutting-edge tools and technology, and working with organisations on the frontlines of these issues around the world.

How does StopNCII.org work?

According to Meta, StopNCII.org empowers victims who are concerned about their intimate images being abused, and gives them control over such content.

“If someone threatens you, you can report it so that we can take action on that content,” Nain said. StopNCII.org has been developed in partnership with the UK Revenge Porn Helpline and 50 other organisations around the world, and it has been built with feedback from victims, victim advocates, and privacy and safety advocates.

What is striking, though, is that despite the large number of teen users on Facebook and Instagram, StopNCII.org is not accessible to users under the age of 18. If you are under 18 and want to register a case, the platform displays a message saying, “We are sorry, but we cannot help with your case,” and directs the user to a list of NGOs that can be contacted for help.


Also, at this point the StopNCII platform is available only in English, and Nain said that it would take a few more months before the platform supports Indian languages. Given the widespread use of Facebook in a number of Indian languages, this will limit the scope of its impact, something that has been seen in the past with the company’s efforts to combat misinformation as well.

A safety hub for women

The Women’s Safety Hub is a part of Meta’s Safety Centre. It is a centralised resource that gathers the information women need to navigate the company’s social media platforms safely and securely, so that they know what tools they have at their disposal.

The Women’s Safety Hub contains information including Meta’s policies on different issues, tools, and on-demand training. The hub is available in 12 Indian languages, including Hindi, Marathi, Punjabi, Bengali, Malayalam, Tamil, Telugu, Urdu, Gujarati, and Assamese.

Meta also has a Women’s Safety Experts Group in place, which the company consults on an ongoing basis about its policies, products, and the resources it should be offering on its platforms.

Bishakha Datta, Executive Editor of Point of View, a Mumbai-based non-profit, and Jyoti Vadehra, Head of Media & Communications at the Centre for Social Research, a Delhi-based advocacy group for women, are the first Indian members of Meta’s Global Women’s Safety Expert Advisors.

The group comprises 12 other non-profit leaders, activists, and academic experts from different parts of the world, and it advises Meta on the development of new policies, products, and programmes to better support women on its apps.


Would women be safer on Meta’s social media platforms now?

Nain said that since 2016 Meta has invested over $13 billion (roughly Rs. 97,640 crore) in tools and technology to keep its platforms safe and secure, and that it is on track to spend more than $5 billion (roughly Rs. 37,555 crore) on safety and security in 2021.

“Our commitment to making our platform safe and secure isn’t just something that we talk about. We put real investment behind these efforts. We have around 40,000 people who work on these efforts across the company.”

When asked which specific initiatives the money was spent on, Nain said only that it is being spent on “…people who work on this space, the technology that we are building, for example, the initiatives that we will announce today or that would come as part of this.”

What do you do if someone is threatening to share your intimate images?

  1. Go to https://stopncii.org/
  2. Click on the Create Your Case button
  3. Confirm that you are 18 years or older
  4. Provide details about the image, including who is in the picture, by selecting from the drop-down list
  5. Select the image(s)/video(s) on your device that you would like to protect
  6. A unique “hash”, or digital fingerprint, is generated and shared with the participating companies (Facebook and Instagram); a minimal hashing sketch follows this list
  7. Create a Personal Identification Number (PIN) to use to check your case status
  8. Check the box consenting to your hashes being shared with the participating companies
  9. Click Submit
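For illustration only, the sketch below shows what generating a fingerprint of an image locally can look like in Python, using a generic SHA-256 digest so that only the hash, never the photo itself, would need to leave the device. StopNCII.org relies on its own image-matching technology, so this is merely a stand-in for the idea in step 6, not the service’s actual method, and the file name is made up.

    # Generic local hashing example: compute a digest of an image file so that
    # only this fingerprint, not the image, would be shared. This is plain
    # SHA-256, not the hashing scheme StopNCII.org actually uses.
    import hashlib
    from pathlib import Path

    def fingerprint(path):
        """Return a hex digest identifying the file's exact bytes."""
        digest = hashlib.sha256()
        with Path(path).open("rb") as handle:
            for chunk in iter(lambda: handle.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    print(fingerprint("my_photo.jpg"))  # hypothetical local file

Note that a cryptographic hash like this only matches byte-identical files; services that need to match re-encoded or resized copies of an image generally rely on perceptual hashing instead.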
