Twitter likely to roll out ‘Reactions’ feature soon
After unveiling several features this year, micro-blogging site Twitter is reportedly readying new features, including Reactions, Downvotes and Sorted Replies for iOS users.
According to reverse engineer Nima Owji, the Reactions feature, which started being tested a couple of months ago, is set to launch soon, reports 9To5Mac.
With four new reactions, “tears of joy,” “thinking face,” “clapping hands” and “crying face,” this feature is designed to give users the ability to better show how conversations make them feel and to give users “a better understanding of how their Tweets are received”.
Citing the reverse engineer, the report also mentioned that the micro-blogging site is now able to store data about the downvotes feature, another indicator that this function will be released sooner rather than later. The report notes that the company has changed the position of the downvote button as well, and has even added a new tab explaining how downvotes work.
This month, the company rolled out its in-app tipping feature to all Android users above the age of 18, following the iOS launch in September.
Twitter said the “Tips” feature is geared toward users looking to get a little financial support from their followers via Cash App, PayPal, Venmo, and Patreon, directly through the app.
Twitter Investigating Bug Causing Unexpected Logouts on iOS 15
Posts on Twitter over the last several hours have shown users experiencing the bug, with some sharing frustrations that the app is requiring them to log back into Twitter upon every app launch. While some of the reports lack the specificity that the bug is happening on iOS devices, it seems likely to be the case following the acknowledgment from Twitter itself.
WHY IS TWITTER LOGGING ME OUT OF ALL OF MY ACCS???? I HAVE 8 TWITTER ACCS AND DO YOU KNOW HOW HAED TO LOG IN ALL OF THEM???????? IVE BEEN DOING IT 2 TIME ALREADY SINCE OCTOBER
— kyle (@leeknowonIyfans) November 24, 2021
I almost got a heart attack when I tried to get in my Twitter and it wanted me to log in?? I never logged out 😭😩😭
— Enny Does It All❤ (@Queen_Enny19) November 24, 2021
Users impacted by the bug are advised to ensure they’re running the latest Twitter version from the App Store and monitor the company’s support account for updates.
Should Twitter politely warn users not to tweet hate speech?
Warning Twitter users of the potential consequences of tweeting hate speech can temporarily reduce their hateful language on the platform, research suggests.
“Debates over the effectiveness of social media account suspensions and bans on abusive users abound, but we know little about the impact of either warning a user of suspending an account or of outright suspensions in order to reduce hate speech,” explains Mustafa Mikdat Yildirim, a doctoral candidate at New York University’s Center for Social Media and Politics and lead author of the paper in the journal Perspectives on Politics.
“Even though the impact of warnings is temporary, the research nonetheless provides a potential path forward for platforms seeking to reduce the use of hateful language by users.”
In the aftermath of decisions by Twitter and other social media platforms to suspend large numbers of accounts, in particular those of former President Donald Trump following the January 6, 2021 attack on the US Capitol, many have asked about the effectiveness of measures aimed at curbing hate speech and other messages that may incite violence.
In the paper, the researchers examined one approach—issuing warnings of possible suspensions resulting from the use of hate speech—to determine its efficacy in diminishing future use of this type of language.
To do so, the paper’s authors designed a series of experiments aimed at instilling the possible consequences of the use of hate and related speech.
“To effectively convey a warning message to its target, the message needs to make the target aware of the consequences of their behavior and also make them believe that these consequences will be administered,” they write.
In constructing their experiments, the authors focused on the followers of users whose accounts had been suspended for posting tweets that used hateful language in order to find a group of users for whom they could create credible warning messages. The researchers reasoned that the followers of those who had been suspended and who also used hateful language might consider themselves potential “suspension candidates” once they learned someone they followed had been suspended—and therefore be potentially willing to moderate their behavior following a warning.
To identify such candidates, on July 21, 2020 the team downloaded more than 600,000 tweets posted in the prior week that contained at least one word from hateful-language dictionaries used in previous research. During this period, Twitter was flooded with hateful tweets targeting both Asian and Black communities, tied to the coronavirus pandemic and the Black Lives Matter protests.
From this group of users of hateful language, the researchers obtained a sample of approximately 4,300 followers of users whom Twitter had suspended during this period (i.e., “suspension candidates”).
These followers were divided into six treatment groups and one control group. The researchers tweeted one of six possible warning messages to these users, all prefaced with this sentence: “The user [@account] you follow was suspended, and I suspect that this was because of hateful language.” It was followed by different types of warnings, ranging from “If you continue to use hate speech, you might get suspended temporarily” to “If you continue to use hate speech, you might lose your posts, friends, and followers, and not get your account back.” The control group did not receive any messages.
Overall, the users who received these warning messages reduced the ratio of tweets containing hateful language by up to 10% a week later (there was no significant reduction among those in the control group). And, in cases in which the messaging to users was polite (“I understand that you have every right to express yourself but please keep in mind that using hate speech can get you suspended.”), the decline reached 15 to 20%. (Based on previous scholarship, the authors concluded that respectful and polite language would be more likely to be seen as legitimate.) However, the impact of the warnings dissipated a month later.
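The study's core measurement can be illustrated with a short sketch: the share of a user's tweets containing at least one word from a hate-speech dictionary, compared before and after a warning. The word list and sample tweets below are invented placeholders, not the actual lexicon or data the researchers used.

```python
# Hypothetical sketch of the study's measurement: the ratio of tweets
# containing dictionary words, and its relative reduction after a warning.

HATE_DICTIONARY = {"slura", "slurb"}  # placeholder terms, not the real lexicon

def hateful_ratio(tweets):
    """Fraction of tweets containing at least one dictionary word."""
    if not tweets:
        return 0.0
    flagged = sum(
        1 for t in tweets
        if any(word in t.lower().split() for word in HATE_DICTIONARY)
    )
    return flagged / len(tweets)

def relative_reduction(before, after):
    """Percent drop in the hateful-tweet ratio after the warning."""
    r0, r1 = hateful_ratio(before), hateful_ratio(after)
    return 0.0 if r0 == 0 else (r0 - r1) / r0 * 100

# Invented example: 2 of 4 tweets flagged before, 1 of 4 after.
before = ["you are a slura", "nice day", "slurb again", "hello"]
after = ["nice day", "hello", "you are a slura", "good morning"]
print(round(relative_reduction(before, after)))  # → 50
```

A reduction computed this way is relative to each user's own baseline, which is how the reported 10–20% declines should be read.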