Opinion | The Endless Facebook Apology – The New York Times



Kara Swisher

In March of 2018, I interviewed Marc Benioff, the chief executive of Salesforce, at the top of the company’s San Francisco tower. He offered up an astonishing metaphor when I asked him for his take on the impact of social media companies.

“Facebook is the new cigarettes,” Benioff said. “It’s addictive. It’s not good for you.” As it did with cigarette companies, “the government needs to step in,” he added. “The government needs to really regulate what’s happening.”

At the time, I thought it was a flashy reach by an executive who often went out on verbal limbs to make brazen points. But today, after the latest series of investigations into the sketchy acts of the social media giant, Benioff seems like Nostradamus.

In the past weeks, The Wall Street Journal published “The Facebook Files” — well-reported pieces that rely on whistle-blowers who are now tossing incriminating documents over the wall at a furious pace.

The Journal’s series includes: internal reports showing that Facebook was fully aware of Instagram’s deleterious impact on the mental health of teen girls, even as it moved full steam ahead with an Instagram for Kids product; internal documents indicating that the company lied to its independent Oversight Board when it said it gave only a small number of celebs, pols and other grandees a wide berth to break its rules on the platform when, in fact, the free pass was given to millions; and the latest revelation that Facebook makes people angry, in part because of the futile efforts of its leader, Mark Zuckerberg, to stop the endless rage.

Even when Zuckerberg tries to do the right thing, and loudly, The Journal’s reporting shows how the platform he built is used to undermine his efforts, as we’ve seen with anti-vaccination misinformation.

“Facebook made a heralded change to its algorithm in 2018 designed to improve its platform — and arrest signs of declining user engagement. Mr. Zuckerberg declared his aim was to strengthen bonds between users and improve their well-being by fostering interactions between friends and family. Within the company, the documents show, staffers warned the change was having the opposite effect. It was making Facebook, and those who used it, angrier,” The Journal reported. “Mr. Zuckerberg resisted some fixes proposed by his team, the documents show, because he worried they would lead people to interact with Facebook less.”

It’s important to have this proof of Facebook’s duplicity. But these revelations come as a shock to no one who has been paying attention to the slippery machinations at the company over the years.

What’s most revealing is the persistence of the tired old, so-so-sorry, we’ll-do-better excuses that its executives trot out when the company is called out for its destructive products.

At this point, it’s probably best for Facebook executives to say nothing, since every time they do they trip all over themselves in their weird analogies — which are often centered on the idea that humanity sucked before Facebook.

Yes, fine, mankind has not always bathed itself in glory. But nowadays the human race seems even more abhorrent, and in many more twisted and amplified ways, and it’s because of Facebook, the biggest and least accountable communications and media platform in history.

As The Times’s Kevin Roose noted on Twitter about Facebook’s reaction to the Journal pieces: “It’s just such a weird tactic. Like if Chipotle was getting criticized for having salmonella in its guac or whatever and the CEO’s response was like ‘well, scaled food production has had many benefits for humanity, including freeing us from being hunter-gatherers.’”

The stylings of the company’s head of Instagram, Adam Mosseri, are perhaps ground zero for this pointless logrolling.

“Cars create way more value in the world than they destroyed. And I think social media is similar,” he said to Peter Kafka on Recode Media. After giving that feeble analogy, Mosseri was frustrated that he got dunked on because his critics apparently failed to note that he discussed regulation, too, with Kafka. (Listen to the whole interview, to make Mosseri feel better, as it was substantive.)

About the problems for teen girls, Mosseri tried to shine up the, well, you know, noting in another tweet: “The WSJ’s story today on research we’re doing to understand young people’s experiences on IG casts our findings in a negative light, but speaks to important issues. We stand by this work and believe more companies should be doing the same.”

Obviously, you don’t get claps for doing your job. Nor should you get credit when you do the very least to fix problems like these.

So, sadly, I am coming around to the idea that Benioff’s once-over-the-top metaphor — that social media companies like Facebook are as bad for us as cigarette companies — might not be so far off the mark.

Let me say up front, I am not a tech-product reviewer, and this is not a tech review, so take what I say here with a grain of salt. Or rather, with a heaping tablespoon of sugar.

The latest investment trend to occupy the self-absorbed I’ll-never-die efforts of tech dudes — and they are mostly dudes — is continuous glucose monitoring.

C.G.M. is aimed at delivering a fine-grain look at what is being called our “metabolic” health, with devices that have typically been used by those with illnesses like diabetes. The goal is to give a wide range of people more data to grok about glucose-level reactions to the foods we eat, when we eat them, and in what combination.

There are lots of C.G.M. devices out there, all trying to attract the attention of the same groups of consumers who are already counting steps, hours of sleep, meditation effectiveness and much more. The goal is to commercialize and popularize the idea that everything you do physically can be measured digitally.

The C.G.M. app that I tried is from a start-up called Levels, which recently grabbed $12 million in Silicon Valley funding. It’s not the only one getting big investment rounds recently in this fast-growing space, which includes January AI ($8.8 million) and Supersapiens ($13.5 million).

Interest from the tech sector is not a surprise; these guys have long embraced the idea of the “quantified body.” It’s a tiresome term known to anyone who has spent any time around start-up entrepreneurs, who talk about their optimal intermittent fasting schedules ad nauseam.

Earlier entries into this space — so-called wearables — came out about a decade ago. Those include Fitbit, Nike+, Jawbone UP, the Oura Ring, and Whoop. And we can’t forget the all-purpose Apple Watch, which ended up besting them all with close to 34 million devices sold in 2020.

I have owned every one of these and took to calling them “unwearables,” since they came and went like the latest cooking gadget. I have a drawer at home with three Apple Watches, four Fitbits, an Oura Ring and so, so many Ups, as well as others I’ve lost track of.

Besides being mostly bulky, their overall efficacy escaped me. While it’s nice to know my step count, or my sleep patterns, the payoff for wearing these devices, as if I were some kind of pet experiment to tech, was minimal. That is largely because — other than getting links to articles that would help me understand that I should sleep more than four hours a night (duh) or buzzing reminders to stand up more during the workday (double duh) — most of these apps never gave me what I consider truly actionable information.

There have been some more helpful signals of late that wearables will become more useful, including some evidence that indicates that devices like Oura might be able to see some illnesses early, using data from things like heart rate variability and body temperature; some may even be able to pick up early indications of Covid.

One important feature of C.G.M. devices is that they offer data that may be useful. Knowing your steps, for example, is interesting, but that information tells you little about how the steps impact your body. It’s the same for a range of other data you might get from monitoring devices — all informative, but mostly lacking insight that you can use to make changes.

With a C.G.M. device, you can see how your body reacts to specific foods. In my case, the device knew that pita bread was evil incarnate for me — shooting my glucose numbers off the charts. It gave specific data about what I felt — an inevitable energy crash whenever I ate bread in the morning, even as I craved it. The Levels co-founder and chief medical officer, Casey Means, called such breads “blood sugar bombs.”

People with diabetes have long used C.G.M. monitors for just these reasons, but now everyone is in the market. When I talked with Means over Zoom, she reeled off some anonymized data from 6,000 beta users — there are over 100,000 on a wait list — that shows the foods that impact most people badly. Along with cake, bagels and cookies, some of the big surprises have been granola, oatmeal and even potatoes. Worst takeout: pizza, Chinese and Thai.

“It looks like an epidemic of metabolic dysfunction,” joked Means. “I see it realistically as making important data more accessible and perhaps help shift the food industry if people begin to demand different options.”

Means said that in order to be most effective, such devices must eventually become cheap and easy to use for a large number of people (I paid about $395 for mine), so the collective real-time data can be used across populations.

Not everyone is convinced. Some have called these devices a waste of time and money with little benefit to those who mostly live in the normal blood glucose range. They say that the information you get is largely useless, even as others think any monitoring and analysis can set in motion behavioral changes that could help limit the glucose fluctuation.

We’ll see, but it’s an interesting investing space to watch, as more money pours in. No matter what: Put down that doughnut.

Has there been anything more entertaining this week than watching people react to a tweet by the rap star Nicki Minaj about an alleged reaction to the Covid vaccine by her cousin’s friend in Trinidad?

It’s certainly easy to dunk on her — she claimed the man’s testicles became swollen — and many did, largely with humor (including me). Though her claim was refuted by the health minister of Trinidad and Tobago, Minaj doubled down on exaggerations by saying she had been invited to the White House (they offered a call with a health expert) and that Twitter had disabled her ability to post (it had not); she is now asserting (on Instagram) that she is being attacked by the amorphous “Establishment” so that “no one will ever ask questions again.”

All of which is codswallop from a celebrity seeking attention and relevance, of course. Cancel culture, as Minaj seems to be implying? More like fact-checking.

Amazon said this week that it will hire 125,000 more employees, adding to the nearly 450,000 it has hired since the pandemic started — and the company is dangling an average wage of $18 an hour for these jobs. It also said it would pay 100 percent of college tuition for hourly workers who stay longer than 90 days.

It’s all part of a push by many employers to attract and retain workers amid a dearth of them. But what’s most interesting is that the stimulus checks meant to give relief to workers during Covid have done what union organizing was unable to do at the e-commerce giant: compel it to pay its workers more.

That’s all good, but we should note that Big Tech companies like Amazon have never rewarded shareholders and their executives more, and these changes are no cause for back-patting on their part.

As the writer Dave Eggers — whose new book, “The Every,” imagines a world in which Amazon and Google are merged (Yipes!) — noted to me in a Sway interview this week: “The Bezos way, paying people $15 an hour, a sub-living wage, they hold on to that like it’s such a badge of honor.” Referring to how Amazon touts that it offers health care from day one, along with that $15 an hour, he said: “I don’t understand how that is such a point of pride.”




‘Vaccine Talk’ Facebook Group Is A Carefully Moderated Forum For Vaccine Questions …





Kate Bilowitz moderates a Facebook group where people exchange views on vaccinations. She shares what moderating it has been like during the pandemic.



Facebook exec says stablecoins ‘probably’ require more regulation – Yahoo Finance



SEC Chair Gary Gensler put forward a wide-ranging view of potential cryptocurrency regulation at a Senate hearing this week, saying that a type of digital asset called stablecoins may be considered a security.

The comments come as the Treasury Department works with other federal agencies to draft a report by next month on potential regulations for stablecoins, a form of cryptocurrency that pegs its value to a commodity or currency, like the U.S. dollar.

New rules could draw support from a top industry player, Facebook’s (FB) David Marcus, who has spearheaded the tech giant’s soon-to-launch digital wallet called Novi. Marcus also sits on the board of the Diem Association, a coalition of corporate and non-profit members that aim to bring out a stablecoin called Diem that will be exchanged over the new digital wallet from Facebook.

In a new interview, taped prior to Gensler’s comments on Tuesday, Marcus told Yahoo Finance stablecoins “probably” will require additional regulation, which should focus on consumer protection as well as the prevention of illegal payments like money laundering.

“Do we need more regulation?” says Marcus, head of F2, also known as Facebook Financial. “The answer is probably ‘yes.’”

“The first thing is really consumer protection,” he adds. “Do consumers understand what they’re buying? And what guarantees do they have to get their money out in an adverse event? And so that pertains to, if you’re talking about stablecoins specifically, what are the reserves made of?

“Are they fully backed reserves? Or are they not fully backed? And if they are fully backed, what are they backed with?” he adds.

During Gensler’s testimony before the Senate Banking Committee on Tuesday, Democratic Senator Elizabeth Warren (D-MA) asked about the possibility of crypto investors attempting to withdraw money during a market crash. Gensler said the SEC could not do much to help investors since crypto exchanges like Coinbase (COIN) had not registered with the SEC. 

Treasury Secretary Janet Yellen last month urged speedy adoption of stablecoin rules in remarks to regulators.

Marcus said investor risks found in stablecoins depend on the commodities that back a given cryptocurrency.

“In my view, very high quality stable coins are only backed by cash and very short term treasuries,” he says. “That’s it.”

“Then you could add a capital buffer on top of that, to basically cover unexpected operational losses, or what have you to add another layer of protection,” he says.

David Marcus, CEO of Facebook’s Calibra digital wallet service, arrives for a House Financial Services Committee hearing on Facebook’s proposed cryptocurrency on Capitol Hill in Washington, Wednesday, July 17, 2019. (AP Photo/Andrew Harnik)

Facebook aims to release Novi along with Diem by the end of the year, Marcus told Axios earlier this month. Diem, which emerged from Facebook’s effort to develop a cryptocurrency that began under the name Libra in 2017, will be pegged to the U.S. dollar, Marcus said.

Libra faced backlash from regulators and lawmakers when it was announced in 2019, and ultimately lost support from corporate backers like Visa (V) and PayPal (PYPL). 

Speaking to Yahoo Finance, Marcus said concerns over illicit payments with stablecoins offer an opportunity for regulators to improve the clarity of rules governing such transactions, even though stablecoins are currently only rarely used for everyday payments.

“We’re very motivated to solving payments use case but stable coins are mainly used right now for exchanges when people are buying and selling other crypto assets,” he says.

“There are provisions around anti-money laundering, combating the financing of terrorism, sanctions enforcement — and I think the rules are pretty clear,” he says. “This actually offers an opportunity to get better at it than the current system is, which I think it will be.”



Facebook has an invisible system that shelters powerful rule-breakers. So do other online platforms.





Last week, the Wall Street Journal published Jeff Horwitz’s investigation into the inner workings of Facebook — with some troubling findings. Internal documents suggest that Facebook’s top management dismissed or downplayed an array of problems brought to their attention by product teams, internal researchers and their own Oversight Board. These include a report on what is known as the XCheck program, which reportedly allowed nearly any Facebook employee, at their own discretion, to whitelist users who were “newsworthy,” “influential or popular” or “PR risky.” The apparent result was that more than 5.8 million users were moderated according to different rules than ordinary Facebook users, or hardly moderated at all.

This system of “invisible elite tiers,” as the Journal describes it, meant that the speech of powerful and influential actors was protected while ordinary people’s speech was moderated by automated algorithms and overworked humans. As our research shows, that’s not surprising. Other platforms besides Facebook enforce different standards for different users, creating special classes of users as part of their business models.

Unequal and opaque standards can breed suspicion among users

In a recent research article, we explain how another important platform, YouTube, takes what we call a “tiered governance” approach, separating users into categories and applying different rules to each category’s videos. YouTube distinguishes among such categories as media partners, nonprofits and governments. Most important, it distinguishes between “creators” who get a slice of its ad revenue and ordinary users. Even among those paid creators, YouTube has a more subtle array of tiers according to popularity.

Facebook’s program began as a stopgap measure to avoid the public relations disasters that might happen if the platform hastily deleted content by someone powerful enough to fight back, such as a sitting president. YouTube’s program began when it created a special category of paid creators, the YouTube Partner Program, to give popular YouTubers incentives to stay on the site and make more content.

YouTube then began to create more intricate tiers, providing the most influential creators with special perks such as access to studios and camera equipment. An elite few had direct contact with handlers within the company who could help them deal with content moderation issues quickly, so that they didn’t lose money. But things changed when advertisers — YouTube’s main source of revenue — began to worry about their ads being shown together with offensive content. This drove YouTube to adjust its policies — over and over again — about which creators belonged to which tiers and what their benefits and responsibilities were, even if the creators didn’t like it.

Creators were understandably frustrated as these arrangements seemed to keep shifting under their feet. They didn’t object to different rules and sets of perks for different tiers of creators, but they did care that the whole system was opaque. Users like to know what to expect from platforms — whether they will enforce guidelines, and how much financial compensation they provide. They didn’t like the unpredictability of YouTube’s decisions, especially since those decisions had real social, financial and reputational impact.

Some were frustrated and suspicious about the platform’s real motives. Opacity and perceptions of unfairness provided fuel for conspiracy theories about why YouTube was doing what it was doing. Creators who didn’t know if YouTube’s algorithms had demonetized or demoted their videos began to worry that their political leanings were being penalized. This led to anger and despair, which was worsened by YouTube’s clumsy appeals system. And it gave fodder to those eager to accuse YouTube of censorship, whether or not it was true.

It’s fair to be unfair, as long as you’re fair about it

Social media companies such as YouTube and Facebook have suggested that their platforms are open, meritocratic, impartial and evenhanded. This makes it hard for them to explain why they treat different people differently. However, other systems for adjudication make distinctions, too. For example, criminal law takes into account whether the accused is a child, impaired, a repeat offender, under the influence, responding in self-defense or under justifiable duress.

Similarly, there are plausible reasons platform companies might want to treat different tiers of users in different ways. For example, for postings about the coronavirus, it made sense to establish different rules for those who had established themselves as trustworthy. To decrease the spread of misinformation or harassment, platforms might reasonably want to impose higher standards rather than lower ones on users who had many followers, who held political office and had special obligations to the public, or who paid or received money to post.

But YouTube’s experience suggests that clarity about why different users are treated differently matters for public perception. When a company such as Facebook discriminates between different tiers of users just to avoid offending powerful people and mitigate possible PR disasters, observers will treat that reasoning as less legitimate than if the company were trying to hold the powerful to account. This is especially so if the differences are kept hidden from users, the public and even Facebook’s own Oversight Board.

These allegations are likely to breed distrust, accusations of bias and suspicions about Facebook’s intentions.

Robyn Caplan is a researcher at Data & Society Research Institute. Follow her @RobynCaplan.
