Israel/Palestine: Facebook Censors Discussion of Rights Issues


(Washington, DC) – Facebook has wrongfully removed and suppressed content by Palestinians and their supporters, including about human rights abuses carried out in Israel and Palestine during the May 2021 hostilities, Human Rights Watch said today. The company’s acknowledgment of errors and attempts to correct some of them are insufficient and do not address the scale and scope of reported content restrictions, or adequately explain why they occurred in the first place.

Facebook should take up the Facebook Oversight Board’s recommendation on September 14, 2021, to commission an independent investigation into content moderation regarding Israel and Palestine, particularly in relation to any bias or discrimination in its policies, enforcement, or systems, and to publish the investigation’s findings. Facebook has 30 days from the day the decision was issued to respond to the board’s recommendations.

“Facebook has suppressed content posted by Palestinians and their supporters speaking out about human rights issues in Israel and Palestine,” said Deborah Brown, senior digital rights researcher and advocate at Human Rights Watch. “With the space for such advocacy under threat in many parts of the world, Facebook censorship threatens to restrict a critical platform for learning and engaging on these issues.”

An escalation in violence in parts of Israel and the Occupied Palestinian Territory (OPT) during May led people to turn to social media to document, raise awareness, and condemn the latest cycle of human rights abuses. There were efforts to force Palestinians out of their homes, brutal suppression of demonstrators, assaults on places of worship, communal violence, indiscriminate rocket attacks, and airstrikes that killed civilians.

Human Rights Watch documented that Instagram, which is owned by Facebook, removed posts, including reposts of content from mainstream news organizations. In one instance, Instagram removed a screenshot of headlines and photos from three New York Times opinion articles for which the Instagram user added commentary that urged Palestinians to “never concede” their rights. The post did not transform the material in any way that could reasonably be construed as incitement to violence or hatred.

In another instance, Instagram removed a photograph of a building with a caption that read, “This is a photo of my family’s building before it was struck by Israeli missiles on Saturday May 15, 2021. We have three apartments in this building.” The company also removed the reposting of a political cartoon whose message was that Palestinians are oppressed and not fighting a religious war with Israel.

All of these posts were removed for containing “hate speech or symbols” according to Instagram. These removals suggest that Instagram is restricting freedom of expression on matters of public interest. The fact that these three posts were reinstated after complaints suggests that Instagram’s detection or reporting mechanisms are flawed and result in false positives. Even when social media companies reinstate wrongly suppressed material, the error impedes the flow of information concerning human rights at critical moments, Human Rights Watch said.

Users and digital rights organizations also reported hundreds of deleted posts, suspended or restricted accounts, disabled groups, reduced visibility, lower engagement with content, and blocked hashtags. Human Rights Watch reviewed screenshots from people who were sharing content about the escalating violence and who reported restrictions on their accounts, including not being able to post content, livestream videos on Instagram, post videos on Facebook, or even like a post.

Human Rights Watch was not able to verify or determine that each case constituted an unjustified restriction due to lack of access to the underlying data needed for verification, and because Facebook refused to comment on specific details of various cases and accounts citing privacy obligations. The range and volume of restrictions reported warrant an independent investigation.

The Oversight Board recommended that Facebook engage an external, independent entity to conduct a thorough examination to determine whether Facebook has applied its content moderation in Arabic and Hebrew without bias, and that the report and its conclusions should be made public. This recommendation echoes multiple calls from human rights and digital rights organizations for a public audit.

In addition to removing content based on its own policies, Facebook often does so at the behest of governments. The Israeli government has been aggressive in seeking to remove content from social media. The Israeli Cyber Unit, based within the State Attorney’s Office, flags and submits requests to social media companies to “voluntarily” remove content. Instead of going through the legal process of filing a court order based on Israeli criminal law to take down online content, the Cyber Unit makes appeals directly to platforms based on their own terms of service. A 2018 report by Israel’s State Attorney’s office notes an extremely high compliance rate with these voluntary requests, 90 percent across all platforms.

Human Rights Watch is not aware that Facebook has ever disputed this claim. In a letter to Human Rights Watch, the company stated that it has “one single global process for handling government requests for content removal.” Facebook also provided a link to its process for assessing content that violates local law, but that does not address voluntary requests from governments to remove content based on the company’s terms of service.

Noting the role of governments in content removal, the Oversight Board recommended that Facebook make this process transparent and distinguish between government requests that led to global removals based on violations of the company’s Community Standards and requests that led to removal or geo-blocking based on violations of local law. Facebook should implement this recommendation, and in particular disclose the number and nature of requests for content removal by the Israeli Government’s Cyber Unit and how it responded to them, Human Rights Watch said.

Protecting free expression on issues related to Israel and Palestine is especially important in light of shrinking space for discussion. In addition to Israeli authorities, Palestinian authorities in the West Bank and Gaza have systematically clamped down on free expression, while in several other countries, including the US and Germany, steps have been taken to restrict the space for some forms of pro-Palestine advocacy.


Human Rights Watch wrote to Facebook in June 2021 to seek the company’s comment and to inquire about temporary measures and longstanding practices around the moderation of content concerning Israel and Palestine. The company responded by acknowledging that it had already apologized for “the impact these actions have had on their community in Israel and Palestine and on those speaking about Palestinian matters globally,” and provided further information on its policies and practices. However, the company did not answer any of the specific questions from Human Rights Watch or meaningfully address any of the issues raised.

“Facebook provides a particularly critical platform in the Israeli and Palestinian context, where Israeli authorities are committing crimes against humanity of apartheid and persecution against millions, and Palestinians and Israelis have committed war crimes,” Brown said. “Instead of respecting people’s right to speak out, Facebook is silencing many people arbitrarily and without explanation, replicating online some of the same power imbalances and rights abuses we see on the ground.”

Removal and Suppression of Human Rights and Other Content

In May, the escalating tensions between Israel and Palestinians culminated in 11 days of fighting between Israeli forces and Palestinian armed groups based in the Gaza Strip. From May 6 to 19, 7amleh, the Arab Center for the Advancement of Social Media (pronounced “hamla” in Arabic, meaning “campaign”), reported documenting “a dramatic increase of censorship of Palestinian political speech online.”

In the two-week period alone, 7amleh said it documented 500 cases of what it described as content being taken down, accounts closed, hashtags hidden, the reach of specific content reduced, archived content deleted, and access to accounts restricted. Facebook and Instagram accounted for 85 percent of those restrictions.

The digital rights group Sada Social says it documented more than 700 instances of social media networks restricting access to or removing Palestinian content in May alone. On May 7, a group of 30 human rights and digital rights organizations denounced social media companies for “systematically silencing users protesting and documenting the evictions of Palestinian families from their homes in the neighborhood of Sheikh Jarrah in Jerusalem.”

In addition to removing content, Facebook affixed a sensitive warning label to some posts requiring users to click through a screen that says that the content might be “upsetting.” Human Rights Watch found evidence that Facebook affixed such warnings to posts that raised awareness about human rights issues without exposing the viewer to upsetting content such as graphic violence or racial epithets.

For example, on May 24, Instagram affixed such a label to multiple stories posted by Mohammed el-Kurd, a Palestinian activist and resident of Sheikh Jarrah, including a story that contained a reposted image from another user’s Instagram feed of an Israeli police truck and another truck with Hebrew writing on it. The image raised awareness about a high court ruling and the presence of soldiers in the Sheikh Jarrah neighborhood. As of September 30 this image remains on the other user’s Instagram feed, without a sensitive warning label.

In a July letter to Human Rights Watch, Facebook said that it uses warnings to accommodate “different sensitivities about graphic and violent content” among people who use its platforms. For that reason, it adds a warning label to “incredibly graphic or violent content so that it is not available to people under the age of 18,” and so that users are “aware of the graphic or violent nature of the content before they click to see it.” The post in question does not include content that could be considered “graphic or violent,” based on Facebook’s standard.

Facebook said that “some labels would apply to entire carousels of images even if only one is violating.” Hiding content behind a label that prevents it from being viewed by default restricts access to that content. This may be an appropriate step for certain types of graphic and violent content, but labeling all photos when only a subset of them deserves a label is an arbitrary restriction on expression, Human Rights Watch said. Human Rights Watch cannot confirm what other images were in the carousel.

According to 7amleh, 46 percent of the Instagram takedowns it documented occurred without the company giving the user any prior warning or notice. In an additional 20 percent of the cases, Instagram notified the user but did not provide a specific justification for restricting the content.

Human Rights Watch also reviewed screenshots from social media users who reported that their posts had less engagement and fewer views from other users than they typically do, or that content from their accounts was not showing up in feeds of other users, a sign that Facebook and Instagram may have made adjustments to their recommendation algorithm to demote certain content.

The Oversight Board investigated one instance of content concerning the escalation in violence in May being removed and, on September 15, issued a decision finding that Facebook acted wrongfully. The user had on May 10 shared a news article reporting on a threat by Izz al-Din al-Qassam Brigades, the military wing of the Palestinian group Hamas, to fire rockets in response to a flare-up in Israel’s repression of Palestinians in occupied East Jerusalem. The Board recognized that re-publication of a news item on a matter of urgent public concern is protected expression and that removing the post restricted such expression without reducing offline harm.

The Board acknowledged receiving public comments from various parties alleging that Facebook has disproportionately removed or demoted content from Palestinian users and content in Arabic, especially in comparison to its treatment of posts threatening anti-Arab or anti-Palestinian violence within Israel. The Board also said it received public comments alleging that Facebook had not done enough to remove content that incites violence against Israeli civilians.


Designating Organizations as “Dangerous”: A Danger to Free Expression

In some cases, Facebook removed the content under its Dangerous Individuals and Organizations Community Standard, which does “not allow organizations or individuals that proclaim a violent mission or are engaged in violence to have a presence on Facebook.” This was the basis for removing the post with a news article about the Izz al-Din al-Qassam Brigades. The Oversight Board criticized the “vagueness” of this policy in its decision.

Facebook relies on the list of organizations that the US has designated as a “foreign terrorist organization,” among other lists. That list includes political movements that also have armed wings, such as the Popular Front for the Liberation of Palestine and Hamas. By deferring to the broad and sweeping US designations, Facebook prohibits leaders, founders, or prominent members of major Palestinian political movements from using its platform. It does this even though, as far as is publicly known, US law does not prohibit groups on the list from using free and freely available platforms like Facebook, and does not consider allowing groups on the list to use platforms tantamount to “providing material support” in violation of US law.

Facebook’s policy also calls for removing praise or support for major Palestinian political movements, even when those expressions of support contain no explicit advocacy of violence.

Facebook should make its list of Dangerous Individuals and Organizations public. It should ensure that the related policy and enforcement do not restrict protected expression, including about terrorism, human rights abuses, and political movements, consistent with international human rights standards, in line with the Oversight Board’s recommendations. In particular, it should clarify which of the organizations banned by Israeli authorities are included under its Dangerous Individuals and Organizations policy.

Reliance on Automation

The audit to determine whether Facebook’s content moderation has been applied without bias should include an examination of the use of automated content moderation. According to Facebook’s periodic transparency reports on how it enforces its policies, for the period of April to June 2021 the company’s automated tools detected 99.7 percent of the content deemed to potentially violate its Dangerous Individuals and Organizations policy before a human flagged it. For hate speech, the figure was 97.6 percent on Facebook and 95.1 percent on Instagram for the same period.

Automated content moderation is notoriously poor at interpreting contextual factors that can be key to determining whether a post constitutes support for or glorification of terrorism. This can lead to overbroad limits on speech and inaccurate labeling of speakers as violent, criminal, or abusive. Automated content moderation of content that platforms consider to be “terrorist and violent extremist” has in other contexts led to the removal of evidence of war crimes and human rights atrocities from social media platforms, in some cases before investigators know that the potential evidence exists.

Processes intended to remove extremist content, in particular the use of automated tools, have sometimes perversely led to removing speech opposed to terrorism, including satire, journalistic material, and other content that would, under rights-respecting legal frameworks, be considered protected speech. For example, Facebook’s algorithms reportedly misinterpreted a post from an independent journalist who once headed the BBC’s Arabic News service that condemned Osama bin Laden as constituting support for him. As a result, the journalist was blocked from livestreaming a video of himself shortly before a public appearance. This kind of automatic content removal hampers journalism and other writing, and jeopardizes the future ability of judicial mechanisms to provide remedy for victims and accountability for perpetrators of serious crimes.

The audit of Facebook’s practices should investigate the role that designating a group as terrorist plays in automated content moderation. In one incident, Instagram restricted the hashtag #AlAqsa (#الاقصى or #الأقصى) and removed posts about Israeli police violence at the al-Aqsa mosque in Jerusalem, before Facebook acknowledged an error and reportedly reinstated some of the content.

BuzzFeed News reported that an internal Facebook post noted that the content had been taken down because al-Aqsa “is also the name of an organization sanctioned by the United States Government,” Al-Aqsa Martyrs’ Brigades. Human Rights Watch reviewed four screenshots that documented that Instagram had limited posts using the #AlAqsa hashtag and posts about Palestinian demonstrations at al-Aqsa. Israeli forces responded to demonstrations at the al-Aqsa mosque by firing teargas, stun grenades, and rubber-coated steel bullets, including inside the mosque. The Israeli response left 1,000 Palestinians injured between May 7 and May 10. At least 32 Israeli officers were also injured.

The use of automated tools to moderate content has accelerated due to the ever-expanding growth of user-generated content online. It is important for companies like Facebook to recognize the limitations of such tools and increase their investment in people to review content to avoid, or at least more quickly correct, enforcement errors, in particular in sensitive situations.

In a letter to Human Rights Watch, Facebook referred to the incident as “an error that temporarily restricted content.” The audit should investigate how automation may have played a role in this erroneous enforcement of Facebook policies.

Lack of Transparency Around Government Requests

An independent audit should also evaluate Facebook’s relationship with the Israeli government’s Cyber Unit, which creates a parallel enforcement system for the government to seek to censor content without official legal orders. While Facebook regularly reports on legal orders, it does not report on government requests based on alleged violations of its community standards.


This process may circumvent judicial procedures for addressing illegal speech and lead to government-initiated restrictions on legal speech, without informing the targeted social media users. As a result, users are denied the due process rights they would have if the government sought to restrict the content through legal channels. On April 12, the Israeli Supreme Court rejected a petition filed by Adalah and the Association for Civil Rights in Israel seeking to stop the Cyber Unit’s operations.

Facebook declined to answer the Oversight Board’s questions about the number of requests the Israeli government made to remove content during the May 2021 hostilities. The company only said, in relation to the case that the Board ruled on, “Facebook has not received a valid legal request from a government authority related to the content the user posted in this case.”

Acceding to Israeli governmental requests raises concern, since Israeli authorities criminalize political activity in the West Bank using draconian laws to restrict peaceful speech and to ban more than 430 organizations, including all the major Palestinian political movements, as Human Rights Watch has documented. These sweeping restrictions on civil rights are part of the Israeli government’s crimes against humanity of apartheid and persecution against millions of Palestinians.

Technical Glitches Don’t Explain the Full Picture

Facebook has acknowledged several issues affecting Palestinians and their content, some of which it attributed to “technical glitches” and human error. However, these explanations do not account for the range of restrictions and suppression of content observed.

In other situations of political crisis or public emergencies, Facebook has announced so-called “break glass” measures. These include restricting the spread of live video on its platforms and adjustments to its algorithms that classify and rank content to reduce the likelihood that users will see content that potentially violates its policies. Facebook has reportedly deployed such measures in Ethiopia, Myanmar, Sri Lanka, and the US. Facebook has not publicly acknowledged any special measures it has taken in the context of content about Israel and Palestine, aside from setting up a “special operations center” to monitor content on its platforms regarding the May 2021 escalation in Israel and Palestine. Human Rights Watch requested information about the “special operations center,” but Facebook did not respond.

This latest spate of content takedowns is part of a wider pattern of reported censorship of Palestinians and their supporters by social media companies, which civil society organizations have documented for years. These restrictions highlight the need to commission a comprehensive, independent audit that examines Facebook’s underlying policies and enforcement of those policies for bias.

Social Media Companies’ Responsibilities

Businesses have a responsibility to respect human rights by identifying and addressing the human rights impacts of their operations, and providing meaningful access to a remedy. For social media companies, this responsibility includes being transparent and accountable in their moderation of content to ensure that decisions to take content down are not overly broad or biased.

The Santa Clara Principles on Transparency and Accountability in Content Moderation provide important guidance for how companies should carry out their responsibilities in upholding freedom of expression. Based on those principles, companies should clearly explain to users why their content or their account has been taken down, including the specific clause of the Community Standards that the content was found to violate.

Companies should also explain how the content was detected, evaluated, and removed – for example, by users, automation, or human content moderators – and provide a meaningful opportunity for timely appeal of any content removal or account suspension. Facebook has endorsed the Santa Clara Principles, but hasn’t fully applied them.

Need for an Independent Investigation

Facebook should ensure that investigators closely consult with civil society at the outset of the investigation, so that the investigation reflects the most pressing human rights concerns of those affected by its policies. It should make the outcome of the independent investigation public, as it did with its human rights impact assessment on Myanmar and civil rights audit in the US, and present its findings to Facebook’s executive leadership. Facebook should continuously consult with civil society about how its recommendations are being carried out.

Human Rights Watch raised several questions in the letter to Facebook to which the company did not respond. The investigation should address these questions in connection with the May hostilities, and more generally, including:

  • What changes did Facebook make to its algorithms to demote or reduce the spread of speech that it determined most likely violates policies on hate speech, violence and incitement, or dangerous individuals and organizations?
  • What automated detection methods were used, including what terms and classifiers flagged content for potential hate speech or violence and incitement so that it could be automatically demoted or removed?
  • What error rates for enforcement were deemed to be acceptable?
  • What policies were applied to content concerning Israel and Palestine that are not public?
  • Does Facebook have any firewalls in place to prevent undue influence of its public policy staff, including former Israeli and other government officials, over content moderation decisions with regard to Israel and Palestine?

Note: A member of Human Rights Watch staff is on the Facebook Oversight Board in his personal capacity. The staff member does not work on issues related to human rights and technology at Human Rights Watch. Any position Human Rights Watch takes on the Facebook Oversight Board is independent and is not informed or influenced by his membership on it.


POV: Facebook’s Change to Meta Blurs Lines Even Further


COM’s Michelle Amazeen worries whether people will know the difference between real-world and virtual experiences

When Facebook announced it was changing its name to Meta in October, the 2008 Pixar movie WALL-E was the first thing that came to my mind. The sci-fi movie is about a robot left on an uninhabitable Earth to clean up the garbage left behind by humans. Rampant consumerism and corporate greed have left Earth a wasteland, and humans have been evacuated to outer space. In this same way, I envision Facebook abandoning the real world for the virtual “metaverse”—shared online environments where people can interact. They leave behind unimaginable quantities of disinformation, amplified by their algorithms, along with harassment, hate speech, and angry partisans.

To move beyond my initial reaction and gain more insight into the implications of Facebook’s name change (and strategic plans) from a communication research perspective, I turned to two research fellows who study emerging media within the Communication Research Center (CRC) at Boston University’s College of Communication (COM).

Media psychologist James Cummings, a COM assistant professor of emerging media studies, indicates that a metaverse—if successful—would produce new issues in information processing and would place a new emphasis on theories of interpersonal communication—rather than just mass communication. As I feared, he also says it has the potential to augment existing media effects of concern related to social networking, namely misinformation, persuasion, addiction, and distraction.

First, Cummings explains there would be major implications for how billions of people select, process, and are influenced by media content. To be successful, the metaverse platforms will need to transform current modes of information processing and digital communication interactions into much more immersive, cognitively absorbing experiences.

“For instance,” he says, “the mainstreaming of consumer-facing immersive ‘virtual reality’ [VR]—which typically places high demands on users’ processing—will be coming in an age of media multitasking. Interfaces will need to figure out how to immerse users while still permitting them to access different information streams.”


Similarly, he says, mainstreaming “augmented reality” (AR) experiences will also mean requiring users to skillfully juggle attentional demands. People will suddenly be forced to multitask between virtual and real-world stimuli. These are common practices for hobbyists, but may present more challenges for a broader population of users.

Thus, Cummings suggests, if the metaverse is the ecosystem of devices and experiences that Facebook CEO Mark Zuckerberg envisions, users will be switching back and forth between different types of immersive experiences and stimuli, from reality to augmented reality to virtual reality. This scenario may present new and interesting psychological experiences, with the effects of in-person messages (e.g., chatting with a friend in the same room), mediated messages (e.g., reading a news alert on your phone), and augmented messages (e.g., a holographic personal assistant) all interdependent and blurring together.

Second, Cummings expects that a successful metaverse would mean exchanges with virtual content and people that are much more like face-to-face or interpersonal interactions. “This will require the designers of these platforms to master key elements of media richness theory and factors influencing users’ sense of spatial and social presence,” he explains. For instance, social networking in the metaverse may not only consist of the informational experiences we are used to today (e.g., reading text, watching videos, viewing pictures), but increasingly also perceptual experiences (e.g., a sense of being transported into the story, a feeling of being next to someone on the other side of the globe, noticing nonverbal behaviors).

Finally, Cummings indicates that immersive media are ripe for a whole new breed of covert persuasion—such as “native advertising,” or ads that mimic their surroundings—to the extent that users confuse the perceptually plausible with the real. He’s particularly interested in seeing the impact of immersion on users’ perceptions of message authorship and authorial intent.


Indeed, back in the real world, native advertising has been widely adopted to covertly promote not only commercial products, but also political candidates. Candidates are increasingly relying upon “influencers” to post supportive messages on Facebook and other social media without consistently disclosing they are being paid to do so, blurring the critical line between what is real news and what is merely paid advertising. As I have previously addressed here, if the regulatory agencies that oversee advertising—both commercial and political—have not been able to keep up with the digital transformation of our media ecosystem, how will they be able to regulate the metaverse?

For Chris Wells, a COM associate professor of emerging media studies, the promise and pitfalls of the metaverse depend entirely on how Facebook rolls it out. For example, the radical network effects we see from social media rely to some degree on the extremely shortened forms of communication—short texts and short videos—that allow information scanning and selection on a very rapid scale. He indicates the pseudo-social presence of virtual reality would seem to reduce the number of people you can actually interact with. “How will the metaverse be organized and who will you be able to interact with?” Wells asks. Are people going to have coffee virtually? Virtual meetings? He suggests that a site such as Second Life may offer rudimentary evidence of the kinds of interactions that emerge when people engage with strangers in a massive virtual world.

Presumably, Wells suggests, Facebook will still have to provide a great deal of content moderation in the metaverse if people are to have any interactions outside tightly defined networks. “Given Facebook’s track record with their current platform,” he says, “this could well be an unmitigated disaster; but expecting this may lead them to tightly control who interacts with whom and in what ways.”


Second Life notwithstanding, Wells also questions who will actually want to engage in such a virtual space. “My read of the pandemic is that people don’t particularly want to keep sitting in their bedrooms and interacting through Zoom,” he says.

“Will wearing an Oculus headset make that a lot better? I’m not sure,” he adds. “But I also suspect that there are at least a lot of people for whom going to a virtual concert or playing virtual chess with a friend in the park are paltry substitutes for the real thing.”

Wells concedes that there are a lot of millennials and Gen Zs who spend a lot of time in their bedrooms on video games, with digital avatars, and so forth. One possibility, he says, is that the metaverse becomes a niche space for these sorts of folks.

As these metaverse developments take shape, CRC fellows are well positioned to monitor these emerging media uses and perceptual effects. The CRC has multiple Oculus virtual reality headsets that can be paired with our psychophysiological measurement tools. For as technology takes us to new realms, we have a responsibility back in reality to analyze and understand how humans are affected.

Michelle Amazeen is a College of Communication associate professor and director of COM’s Communication Research Center.

“POV” is an opinion page that provides timely commentaries from students, faculty, and staff on a variety of issues: on-campus, local, state, national, or international. Anyone interested in submitting a piece, which should be about 700 words long, should contact John O’Rourke at orourkej@bu.edu. BU Today reserves the right to reject or edit submissions. The views expressed are solely those of the author and are not intended to represent the views of Boston University.


Facebook’s centralized metaverse a threat to the decentralized ecosystem?


Facebook has been planning its foray into the metaverse for some time now — possibly even several years. But it’s only recently that its ambitious expansion plans have catapulted the concept into mainstream headlines across the globe. Renaming the parent company to Meta was perhaps the biggest, boldest statement of intent the firm could make. Suddenly, major news outlets were awash with explainer articles, while finance websites have been bubbling with excitement about the investment opportunities in this newly emerging sector. 

However, within the crypto sphere, the response has been understandably more muted. After all, decentralized versions of the metaverse have been in development around these parts for several years now. Even worse, the tech giants’ cavalier attitude to user privacy and data harvesting has informed many of the most cherished principles in the blockchain and crypto sector.

Nevertheless, metaverse tokens such as Decentraland (MANA) and Sandbox (SAND) enjoyed extensive rallies on the back of the news, and within a few days of Facebook’s announcement, decentralized metaverse project The Sandbox received $93 million in funding from investors, including Softbank.

But now that the dust has settled, do the company-formerly-known-as-Facebook’s plans represent good news for nonfungible token (NFT) and metaverse projects in crypto? Or does Meta have the potential to sink this still-nascent sector?

What is known so far?

Facebook hasn’t released many details about what can be expected from its version of the metaverse. A promotional video featuring company co-founder and CEO Mark Zuckerberg himself, along with his metaverse avatar, looked suitably glossy. Even so, it offered scant information about how things will actually work under the hood. However, based on precedent and what is known, some distinctions can be made between what Facebook is likely to be planning and the established decentralized metaverse projects.

Facebook’s efforts to launch a cryptocurrency give some indication of whether it will adopt decentralized infrastructure. Diem, formerly Libra, is a currency run by a permissioned network of centralized companies. David Marcus, who heads up Diem, has confirmed that the project, and by extension Facebook, is also considering NFTs integrated with Novi, the Diem-compatible wallet.

Based on all this, it’s fair to say that the Facebook metaverse would have an economy centered around the Diem currency, with NFT-based assets issued on the permissioned Diem network.

Announcing @Meta — the Facebook company’s new name. Meta is helping to build the metaverse, a place where we’ll play and connect in 3D. Welcome to the next chapter of social connection. pic.twitter.com/ywSJPLsCoD

— Meta (@Meta) October 28, 2021

The biggest difference between Facebook’s metaverse, and crypto’s metaverse projects, is that the latter operates on open, permissionless, blockchain architecture. Any developer can come and build a metaverse application on an open blockchain, and any user can acquire their own virtual real estate and engage with virtual assets.


Critically, one of the biggest benefits of a decentralized, open architecture is that users can join and move around barrier-free between different metaverses. Interoperability protocols reduce friction between blockchains, allowing assets, including cryptocurrencies, stablecoins, utility tokens, NFTs, loyalty points, or anything else to be transferable across chains.

So the most crucial question regarding Facebook’s plans is around the extent to which the company plans for its metaverse to be interoperable, and metaverse assets to be fungible with other, non-Facebook issued assets.

From the standpoint of the decentralized metaverse, it doesn’t necessarily sound like great news. After all, Meta’s global user base dwarfs crypto’s. But there’s another way of looking at it, according to Robbie Ferguson, co-founder of Immutable, a layer two platform for NFTs:

“Even if [Meta] decides to pursue a closed ecosystem, it is still a fundamental core admission of the value that digital ownership provides — and the fact that the most valuable battleground of the future will be who owns the infrastructure of digital universes.”

Centralization could be the most limiting factor

Based on the fact that Diem is already a closed system, it seems likely that the Facebook metaverse will also be a closed ecosystem that won’t necessarily allow direct or easy interaction with decentralized metaverses. Such a “walled garden” approach would suit the company’s monopolistic tendencies but would limit the potential for growth and for Facebook-issued NFTs to attain any real-world value.

Furthermore, as Nick Rose Ntertsas, CEO and founder of the NFT marketplace Ethernity Chain, pointed out, users are becoming weary of Facebook’s centralized dominance. He added in a conversation with Cointelegraph:

“Amidst [the pandemic-fuelled digital] transition, crypto adoption rose five-fold. At the same time, public opinion polling worldwide shows growing distrust of centralized tech platforms, and more favorable ratings of the very nature of what crypto and blockchain offer in protecting privacy, enabling peer-to-peer transactions, and championing transparency and immutability.”

This point is even more pertinent when considering that the utility of Diem has been preemptively limited by regulators before it has even launched. Regardless of how Diem could eventually be used in a Facebook metaverse, regulators have made it clear that Diem isn’t welcome in the established financial system.


So it seems evident that a closed Facebook metaverse will be limited to the point that it will be a completely different value proposition to what the decentralized metaverse projects are trying to achieve.

Meanwhile, decentralized digital platforms are already building and thriving. Does that mean there’s a risk that blockchain-based platforms could fall prey to the same fate as Instagram and WhatsApp, and get swallowed up as part of a Meta acquisition spree? Sebastien Borget, co-founder and chief operating officer of the Sandbox, believes that decentralized projects can take a different approach:

“Typically, big tech sits on the sidelines while new entrants fight for relevance and market share — and then swoops in to buy one of the strongest players. But that strategy only works if startups sell. So there has to be a different economic incentive, which is exactly why Web 3.0 is so powerful. It aligns the platform and the users to build a platform that stands on its own, where users have ownership over its governance — and ultimate success.”

A metaverse operated by tech giants?

Rather than attempting to dominate, Facebook may decide to integrate with established metaverses, games and crypto financial protocols — a potentially far more disruptive scenario. It could be seriously transformative for the crypto space, given the scale of Facebook’s user base.

Therefore, could there be a scenario where someone can move NFT assets between a Facebook metaverse and a decentralized network of metaverses? Sell Facebook-issued NFT assets on a DEX? Import a $69 million Beeple to the Facebook metaverse to exhibit in a virtual gallery?

This seems to be an unlikely scenario as it would entail substantial changes in mindset from Facebook. While it would create exponentially more economic opportunity, regulatory concerns, risk assessments, and Facebook’s historical attitude to consuming competitors rather than playing alongside them are likely to be significant blockers.



The most likely outcome seems to be that Facebook will attempt to play with established centralized tech and finance firms to bring value into its metaverse. Microsoft has already announced its own foray into the metaverse, but perhaps not as a direct competitor to what Facebook is attempting to achieve. Microsoft’s metaverse is focused on enhancing the “Teams” experience in comparison to Facebook’s VR-centric approach.

But it seems more plausible that the two firms would offer some kind of integration between their metaverse platforms than either of them would rush to partner with decentralized, open-source competitors. After all, Facebook’s original attempt to launch Libra involved other big tech and finance firms.

Make hay while the sun shines

Just as Libra created a lot of hype that was ultimately muted by regulators, the development of a Facebook metaverse seems likely to play out the same way in terms of its impact on the cryptocurrency sector.

Regulators will limit Facebook’s ability to get involved with money or finance, and the company isn’t likely to develop a sudden desire for open-source, decentralized solutions.

However, the one positive boost that Libra brought to crypto was publicity. Ntertsas believes that this, alone, is enough to provide a boost to the decentralized NFT sector, explaining:

“Meta’s plans will enable a surge in utility for NFT issuers and minters. NFTs can then be used as metaverse goods — from wearables to art, to collectibles, and even status symbols — there is an infinite use case and utility to NFTs and what they can become in the ever-growing NFT ecosystem.”

In this respect, there are plenty of opportunities for decentralized metaverse projects to muscle into the limelight with their own offerings and showcase how decentralized solutions are already delivering what Facebook is still developing. Borget urges the community to seize the moment:

“Now is the time for us to double down on building our vision of the open, decentralized and user-driven metaverse. We also have to invest time and money in explaining the benefits of our vision over what the Facebooks of the world have offered thus far.”


Facebook hackers target small business owners to scam money for ads


It took just 15 minutes for hackers to infiltrate Sydney single mum Sarah McTaggart’s Facebook page.

From there, they also took control of the account she uses to run her small business, wiping out 90 percent of the client base she has been building up for the past four years – almost in an instant.

Their target? The PayPal account she uses to buy Facebook ads for her business.

Sarah McTaggart has lost access to her business, which she runs through Facebook. (Supplied)

Ms McTaggart is among many small business owners who say they have had their Facebook pages hacked and fraudulent charges made on their PayPal or bank accounts as the scammers buy up ads with their money.

It was last Thursday evening when Ms McTaggart first noticed something was happening with her Facebook account.

“I was just watching TV and I opened up Facebook. I saw I had received and accepted a friend request from some guy in the US who I didn’t send a friend request to,” Ms McTaggart said.

“Then, about five minutes later, Facebook sent me an email saying my account had been disabled because I had breached community standards,” she said.

Hackers changed Ms McTaggart's Facebook profile to that of a flag associated with ISIS.

The hackers had used a well-known technique, previously reported on by 9news.com.au, which involves changing the profile picture of the account they have hacked to that of a flag associated with the terrorist group ISIS.

The ISIS flag breaches Facebook’s community standards and automatically triggers an alert which causes Facebook to boot the user out of their account.

In another measure designed to keep her out, the hackers also changed Ms McTaggart’s age on her account, making her too young to own a Facebook account.

Ms McTaggart said she immediately took measures to try to report the hack to Facebook and prove her identity and age, but they were unsuccessful.


Next, the hackers took control of her business page.

“I woke up the next morning and I received an email from PayPal saying a payment of $320 had been authorised for Facebook ads,” Ms McTaggart said.

Ms McTaggart said she had been unable to get the money the hackers spent on Facebook ads through her account back from PayPal. (Supplied)

Ms McTaggart had previously used the PayPal account to buy ads for her dreadlock business – Better Off Dread – where she creates and maintains dreadlocks for clients as well as selling accessories.

The mother-of-one said she was devastated to lose access to both her personal and business page.

Her business, which is largely run out of Facebook, was her livelihood, Ms McTaggart said.

“It is so distressing. Close to 90 percent of my new business inquiries come through Facebook,” she said.

“Almost all of my communications with my clients is on Facebook, so disabling my account has completely cut off my capacity to talk to any of those people.

“I’m booked out with clients until mid-January, and I have no way of confirming appointments with those people. They’ve got no way of cancelling if they are sick.”

Ms McTaggart said she was initially confident she would be able to get access to her accounts back.

“I was thinking of course this will get resolved,” she said.

But, after exhausting all of the suggestions offered by Facebook’s customer service department online, Ms McTaggart said she was left frustrated by Facebook’s lack of accountability, with no number available to call the social media giant directly.

“It just dawned on me gradually that this was quite a complex situation, and there is actually no way to speak to a human at Facebook,” she said.


PayPal had also refused to refund the $320 the hackers spent on ads, she said.

“PayPal won’t refund that as I had an advertising agreement in place with Facebook,” she said.

“And I haven’t been able to communicate with anyone at Facebook to get them to refund it.”

A list of the charges Ianni Nicolaou found on his bank account statement after he was hacked. (Supplied)

Ms McTaggart’s story is familiar to Ianni Nicolaou, a US real estate agent from Alabama.

Mr Nicolaou had his personal Facebook page and his business page hacked two months ago in August and has been unable to regain access to them both ever since.

“It’s awful. I’m a realtor and it’s absolutely necessary to use the platform these days,” Mr Nicolaou told 9news.com.au.

“I have a business page that I run advertisements through.

“I have invested money for my following, and now it’s gone – out of nowhere.”

After his accounts were hacked, Mr Nicolaou said he had also been hit with about A$1800 in charges made to the bank account linked to his Facebook business page.

“There were charges; charges after charges. They started at about $100 each and then kept getting bigger and bigger,” he said.

“What frustrated me the most is that there is no acknowledgement from Facebook. There is no-one to call at Facebook and say you have got fraudulent charges.

“I have literally tried everything but it is robots you are talking to.

“The way I feel is this is actually fraud. I can’t talk to a human who wants to help me but they are happy to take my money just fine.”

When contacted by 9news.com.au, Meta Australia spokesperson Antonia Sanda said its investigations team was working to restore both Ms McTaggart’s and Mr Nicolaou’s accounts.


“We want to keep suspicious activity off our platform and protect people’s accounts, and are working to restore these accounts to the rightful owners,” she said.

“Online phishing techniques are not unique to Facebook, however we’re making significant investments in technology to protect the security of people’s accounts.

“We strongly encourage people to strengthen their online security by turning on app-based two-factor authentication and alerts for unrecognised logins.”

Tips to stop your Facebook page getting hacked

  • Take action and report an account: People can always report an account, an ad, or a post that they feel is suspicious.
  • Don’t click on suspicious links: Don’t trust messages demanding money, offering gifts or threatening to delete or ban your account (or verifying your account on Instagram). To help you identify phishing and spam emails, you can view official emails sent from your settings within the app.
  • Don’t click on suspicious links from Meta/Facebook/Instagram: If you get a suspicious email or message or see a post claiming to be from Facebook, don’t click any links or attachments. If the link is suspicious, you’ll see the name or URL at the top of the page in red with a red triangle.
  • Don’t respond to these messages/ emails: Don’t answer messages asking for your password, social security number, or credit card information.
  • Avoid phishing: If you accidentally entered your username or password into a strange link, someone else might be able to log in to your account. Change your password regularly and don’t use the same passwords for everything.
  • Get alerts: Turn on two-factor authentication for additional account security.
  • Use extra security features: Get alerts about unrecognised logins and turn on two-factor authentication to increase your account security.