
This LinkedIn Job Posting Does Not Exist

As the debate over bots on Twitter plays out in the courts of Chancery and public opinion, another social media company is being forced to tackle scams that pose a far bigger risk to users.

LinkedIn has become the latest target of inauthentic accounts, with perpetrators appearing far more sophisticated and cunning than those afflicting Twitter. Even bigger dangers abound because customers expect more from the business-networking site owned by Microsoft than they do from the short-message service Elon Musk may end up buying.

Scams aren’t unique to LinkedIn. Twitter, Facebook, Instagram and basically the entire internet have been platforms for nefarious actors for years, from variations on the Nigerian prince fraud to phishing attacks that trick users into downloading malicious code and handing over their credentials.

Yet recent LinkedIn campaigns have come extraordinarily close to replicating real people with the help of one of the most powerful websites on the internet.

ThisPersonDoesNotExist.com creates headshots using artificial intelligence complete with jewelry and a scenic backdrop. It’s eerily good, and allows anyone to create a deep-fake persona that passes as the real thing. Add in web-scraping tools, which copy data from actual LinkedIn resumes, and you too can become Victor Sites, Chief Information Security Officer at Chevron.

That’s precisely what’s happened. Hundreds of times over. Brian Krebs, a noted author and cybersecurity investigator, discovered the profile of Sites and cross-checked it against the real CISO of Chevron. Compounding the perception of reality is that a Google search for that role returns the fake profile alongside the real one. There are countless similar phonies on the site, he noted.

A confounding aspect of the problem is determining motive.

Earlier this year, the Federal Bureau of Investigation warned that one objective is to lure people into fraudulent cryptocurrency investment schemes by gaining trust before taking the victim’s money. Researchers at security firm Mandiant also found evidence that North Korean hackers were using such profiles to land remote jobs inside cryptocurrency firms. These positions could then give the actors access to tools and intelligence that could aid money laundering and the handling of illicit funds, Bloomberg News reported.

There are also more mundane purposes. As National Public Radio found earlier this year, dummy accounts have been deployed to cast a wide net as companies seek to hire candidates. Those who take the bait then get passed on to human resources. “Think telemarketing for the digital age,” NPR’s Shannon Bond wrote. The plethora of motives, from gaining inside access and stealing money to marketing calls and phishing attacks, opens up a broad array of jobs that could be created to lure victims. And there are many more fake profiles whose goals and motives aren’t immediately obvious.

What’s clear, though, is that LinkedIn’s cachet as the social network for serious professionals makes it the perfect platform for lulling members into a false sense of security. Although Musk is using the perception that Twitter is infested with bots as an excuse to wriggle out of his purchase agreement, there’s no evidence to suggest that the rate of fake accounts on LinkedIn is any lower.

Yet it is true that consumers place far more faith in it than in its rivals. Both Facebook and Twitter rated among the worst in surveys assessing perceptions of deceptive content and privacy protection, while LinkedIn came out on top, according to research published by Insider Intelligence last year. That air of professionalism goes a long way toward explaining LinkedIn’s user and revenue growth since Microsoft bought the company six years ago.

While the two companies were once neck and neck, LinkedIn now brings in twice the sales and has narrowed the gap in revenue per user. Its 850 million members are almost four times Twitter’s 238 million.

Exacerbating the security risk is the vast amount of data that LinkedIn collates and publishes, data that underpins its whole business model yet lacks any robust verification mechanism. A Twitter user, by contrast, can gather a vast following while remaining anonymous.

There are two simple steps LinkedIn could take to vastly improve its platform, Krebs noted in a recent post. First, add a “created on” date, which Twitter already deploys, to highlight which profiles are recent versus long-established. A second, more powerful feature would be domain verification, which would confirm that a member has an email account at the organization where they claim to be employed.
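
Domain verification along those lines is conceptually simple. The sketch below is only a rough illustration of the idea, not LinkedIn’s actual system: the ORG_DOMAINS directory and the send_email callback are hypothetical stand-ins, and a real deployment would also have to cope with shared mail providers, subsidiaries and token expiry.

```python
import secrets

# Hypothetical directory of organizations and the email domains they own.
ORG_DOMAINS = {
    "Chevron": {"chevron.com"},
    "Microsoft": {"microsoft.com"},
}

def start_domain_verification(claimed_org, work_email, send_email):
    """Begin verifying that a member controls an email address at the
    organization they claim to work for. Returns a one-time token, or None
    when the address does not belong to the claimed employer's domain."""
    domain = work_email.rsplit("@", 1)[-1].lower()
    if domain not in ORG_DOMAINS.get(claimed_org, set()):
        return None  # claimed employer and email domain do not match
    token = secrets.token_urlsafe(16)
    send_email(work_email, f"Confirm your employment by entering code {token}")
    return token

def confirm_domain_verification(expected_token, submitted_token):
    """The member proves control of the inbox by echoing the token back."""
    return secrets.compare_digest(expected_token, submitted_token)
```

In practice such a check would need to be repeated periodically, since people change employers without updating their profiles.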

“We work every day to keep our members safe and this includes our automated systems paired with teams of experts to stop the vast majority of fake accounts before they appear in our community,” Oscar Rodriguez, LinkedIn Senior Director of Trust, Privacy and Equity, wrote in an emailed response to Bloomberg Opinion. “We also ask members to report suspicious profiles and content to us so that we can take action.”

The company declined to say whether it was considering adding a creation date or domain verification, or to outline any changes it has made in recent months to tackle the spate of deep-fake profiles.

LinkedIn has a chance to learn from its rivals’ mistakes, but it needs to take action quickly before the situation gets out of hand.

 


Twitter Under Elon Musk Leaning on Automation to Moderate Content Against Hate Speech

Elon Musk’s Twitter is leaning heavily on automation to moderate content, doing away with certain manual reviews and favoring restrictions on distribution rather than removing certain speech outright, its new head of trust and safety told Reuters.

Twitter is also more aggressively restricting abuse-prone hashtags and search results in areas including child exploitation, regardless of potential impacts on “benign uses” of those terms, said Twitter Vice President of Trust and Safety Product Ella Irwin.

“The biggest thing that’s changed is the team is fully empowered to move fast and be as aggressive as possible,” Irwin said on Thursday, in the first interview a Twitter executive has given since Musk’s acquisition of the social media company in late October.

Her comments come as researchers are reporting a surge in hate speech on the social media service, after Musk announced an amnesty for accounts suspended under the company’s previous leadership that had not broken the law or engaged in “egregious spam.”

The company has faced pointed questions about its ability and willingness to moderate harmful and illegal content since Musk slashed half of Twitter’s staff and issued an ultimatum to work long hours that resulted in the loss of hundreds more employees.

And advertisers, Twitter’s main revenue source, have fled the platform over concerns about brand safety.

On Friday, Musk vowed “significant reinforcement of content moderation and protection of freedom of speech” in a meeting with French President Emmanuel Macron.

Irwin said Musk encouraged the team to worry less about how their actions would affect user growth or revenue, saying safety was the company’s top priority. “He emphasizes that every single day, multiple times a day,” she said.

The approach to safety Irwin described at least in part reflects an acceleration of changes that had already been planned since last year around Twitter’s handling of hateful conduct and other policy violations, according to former employees familiar with that work.

One approach, captured in the industry mantra “freedom of speech, not freedom of reach,” entails leaving up certain tweets that violate the company’s policies but barring them from appearing in places like the home timeline and search.

Twitter has long deployed such “visibility filtering” tools around misinformation and had already incorporated them into its official hateful conduct policy before the Musk acquisition. The approach allows for more freewheeling speech while cutting down on the potential harms associated with viral abusive content.
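
In code, that kind of visibility filtering amounts to keeping a tweet in the database while excluding it from high-reach surfaces. The sketch below is a generic illustration built on its own assumptions (the Tweet structure and surface names are invented here), not Twitter’s implementation.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    id: int
    text: str
    violates_policy: bool = False  # flagged under a policy, but left up
    removed: bool = False          # actually taken down

AMPLIFYING_SURFACES = {"home_timeline", "search", "trends"}

def visible_on(tweet: Tweet, surface: str) -> bool:
    """'Freedom of speech, not freedom of reach': a removed tweet is gone
    everywhere, while a flagged-but-not-removed tweet stays on its author's
    profile yet is kept out of amplifying surfaces."""
    if tweet.removed:
        return False
    if tweet.violates_policy and surface in AMPLIFYING_SURFACES:
        return False
    return True

# A flagged tweet still shows on the profile page but not in search results.
t = Tweet(id=1, text="...", violates_policy=True)
assert visible_on(t, "profile") and not visible_on(t, "search")
```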

The number of tweets containing hateful content on Twitter rose sharply in the week before Musk tweeted on November 23 that impressions, or views, of hateful speech were declining, according to the Center for Countering Digital Hate – in one example of researchers pointing to the prevalence of such content, while Musk touts a reduction in visibility.

Tweets containing anti-Black terms that week were triple the number seen in the month before Musk took over, while tweets containing a gay slur were up 31 percent, the researchers said.

‘More risks, move fast’

Irwin, who joined the company in June and previously held safety roles at other companies including Amazon.com and Google, pushed back on suggestions that Twitter did not have the resources or willingness to protect the platform.

She said layoffs did not significantly impact full-time employees or contractors working on what the company referred to as its “Health” divisions, including in “critical areas” like child safety and content moderation.

Two sources familiar with the cuts said that more than 50 percent of the Health engineering unit was laid off. Irwin did not immediately respond to a request for comment on the assertion, but previously denied that the Health team was severely impacted by layoffs.

She added that the number of people working on child safety had not changed since the acquisition, and that the product manager for the team was still there. Irwin said Twitter backfilled some positions for people who left the company, though she declined to provide specific figures for the extent of the turnover.

She said Musk was focused on using automation more, arguing that the company had in the past erred on the side of using time- and labor-intensive human reviews of harmful content.

“He’s encouraged the team to take more risks, move fast, get the platform safe,” she said.

On child safety, for instance, Irwin said Twitter had shifted toward automatically taking down tweets reported by trusted figures with a track record of accurately flagging harmful posts.
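
A trusted-reporter rule of the kind Irwin describes can be thought of as a simple gate on a reporter’s track record. The thresholds and function names in this sketch are illustrative assumptions only; Twitter’s actual criteria have not been made public.

```python
from dataclasses import dataclass

@dataclass
class Reporter:
    reports_made: int
    reports_upheld: int  # past reports later confirmed accurate by reviewers

    @property
    def accuracy(self) -> float:
        return self.reports_upheld / self.reports_made if self.reports_made else 0.0

# Illustrative thresholds; the real criteria are not public.
MIN_REPORTS = 50
MIN_ACCURACY = 0.98

def handle_report(reporter: Reporter, tweet_id: int, take_down, queue_for_review) -> None:
    """Auto-remove content flagged by a reporter with a long, accurate track
    record; everything else falls back to human review."""
    if reporter.reports_made >= MIN_REPORTS and reporter.accuracy >= MIN_ACCURACY:
        take_down(tweet_id)
    else:
        queue_for_review(tweet_id)
```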

Carolina Christofoletti, a threat intelligence researcher at TRM Labs who specializes in child sexual abuse material, said she has noticed Twitter recently taking down some content as fast as 30 seconds after she reports it, without acknowledging receipt of her report or confirmation of its decision.

In the interview on Thursday, Irwin said Twitter took down about 44,000 accounts involved in child safety violations, in collaboration with cybersecurity group Ghost Data.

Twitter is also restricting hashtags and search results frequently associated with abuse, like those aimed at looking up “teen” pornography. Past concerns about the impact of such restrictions on permitted uses of the terms were gone, she said.

The use of “trusted reporters” was “something we’ve discussed in the past at Twitter, but there was some hesitancy and frankly just some delay,” said Irwin.

“I think we now have the ability to actually move forward with things like that,” she said.

© Thomson Reuters 2022



Elon Musk Introduces ‘Live Tweeting’ Feature Amid Ongoing Plans for Twitter

Twitter CEO Elon Musk on Saturday added a new ‘live tweeting’ feature, which is now active on the platform.

Author Matt Taibbi became the first user to try the new feature, with his cryptic tweet “Thread: THE TWITTER FILES.”

Taking to Twitter, Musk wrote, “Here we go!!” with popcorn emoticons.

Earlier, he tweeted, “We’re double-checking some facts, so probably start live tweeting in about 40 mins.”

The Griftopia author followed his first tweet, which read “Thread: THE TWITTER FILES”, with another: “What you’re about to read is the first installment in a series, based upon thousands of internal documents obtained by sources at Twitter.”

He added, “The ‘Twitter Files’ tell an incredible story from inside one of the world’s largest and most influential social media platforms. It is a Frankensteinian tale of a human-built mechanism grown out of the control of its designer.”

Twitter’s new boss is also working on “purging a lot” of spam/scam accounts.

On Thursday, Musk took to his Twitter account and shared an update on the purge. He tweeted, “Twitter is purging a lot of spam/scam accounts right now, so you may see your follower count drop.”

Musk is also planning to up Twitter’s character limit from 280 to 1000.

A few days ago, a social media user tagged Musk and tweeted, “Idea on explanding character limit to 1000.”

In response, Musk wrote, “It’s on the todo list.”

The character limit has been one of the prime differences between Twitter and other social media services. Musk has shown interest in the idea of increasing the character limit on a number of occasions since his takeover of the platform, as per a report by Mashable. On November 27, a Twitter user suggested that Musk increase the platform’s character limit from 280 to 420.

“Good idea,” Musk wrote in response. Prior to that, another user had suggested he “get rid of character limits.”

“Absolutely,” the multi-billionaire responded.

It now remains to be seen when Musk will finally make the changes to the character limit.


WhatsApp Begins Beta Testing Android Tablet Support, iOS Testers Get ‘Search by Date’ Feature

WhatsApp is rolling out an update for its beta testers on Android that introduces support for WhatsApp for tablets. Select Android beta testers can now link their existing WhatsApp account on their phones with the tablet version of the app. Until now, a WhatsApp account on an Android phone could not be accessed on a secondary Android device. Meanwhile, some beta testers on WhatsApp for iOS are reportedly getting access to a new feature that will allow them to quickly jump to messages based on the date they were sent or received.

As spotted by WhatsApp feature tracker WABetaInfo, users who have signed up for WhatsApp’s beta programme will begin to see an in-app banner announcing the feature. A screengrab on the website shows a banner at the top of chats that reads “Have an Android tablet? WhatsApp for tablet is available for beta testers.” The banner will be visible as part of the WhatsApp beta 2.22.25.8 update for Android, which makes the popular messaging app compatible with tablets. Gadgets 360 staff members who are part of the beta programme also received the banner on their smartphones.

However, the feature tracker states that the new tablet version of the app might not be feature-complete. “Note that some features may still not be available when installing WhatsApp on your tablet, for example, the ability to share a new status update, live locations, and broadcast lists,” the report said.

Notably, WhatsApp users can already access their primary account on computers and via WhatsApp for Web. The feature is finally coming to Android tablets, allowing beta testers to access their messages on both their smartphone and their tablet.

Meanwhile, WABetaInfo also reports that beta testers on iOS are getting a new update that adds the ability to search for messages by date. WhatsApp version 22.24.0.77, via the TestFlight beta programme, will make it easier for users to jump to specific dates in a chat window. To check if the feature is available, users can look for a calendar icon inside the search option for a chat. The icon lets them use the ‘Jump to Date’ feature to look for specific messages sent on that day.
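
Under the hood, jumping to a date in a chat is essentially a binary search over messages sorted by timestamp. The sketch below illustrates only that general idea; the message structure is an assumption, not WhatsApp’s actual data model.

```python
from bisect import bisect_left
from datetime import date, datetime

# Assumed shape: a chat is a list of (timestamp, text) tuples sorted by time.
chat = [
    (datetime(2022, 11, 1, 9, 30), "Meeting at 10?"),
    (datetime(2022, 11, 20, 18, 5), "Flight details attached"),
    (datetime(2022, 12, 2, 8, 0), "Happy Friday"),
]

def jump_to_date(messages, target: date) -> int:
    """Return the index of the first message sent on or after the target date."""
    timestamps = [ts for ts, _ in messages]
    return bisect_left(timestamps, datetime.combine(target, datetime.min.time()))

print(chat[jump_to_date(chat, date(2022, 11, 20))])  # the November 20 message
```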

WhatsApp has been busy in the past few weeks, rolling out multiple features on both Android and iOS. The app recently rolled out a ‘Message Yourself’ feature on both platforms that lets users text themselves, in case they need to note down important information, reminders, or store files. Additionally, iOS users also received an update that adds the ability to include captions for forwarded media.

