

Israel/Palestine: Facebook Censors Discussion of Rights Issues



(Washington, DC) – Facebook has wrongfully removed and suppressed content by Palestinians and their supporters, including about human rights abuses carried out in Israel and Palestine during the May 2021 hostilities, Human Rights Watch said today. The company’s acknowledgment of errors and attempts to correct some of them are insufficient and do not address the scale and scope of reported content restrictions, or adequately explain why they occurred in the first place.

Facebook should take up the Facebook Oversight Board’s recommendation on September 14, 2021, to commission an independent investigation into content moderation regarding Israel and Palestine, particularly in relation to any bias or discrimination in its policies, enforcement, or systems, and to publish the investigation’s findings. Facebook has 30 days from the day the decision was issued to respond to the board’s recommendations.

“Facebook has suppressed content posted by Palestinians and their supporters speaking out about human rights issues in Israel and Palestine,” said Deborah Brown, senior digital rights researcher and advocate at Human Rights Watch. “With the space for such advocacy under threat in many parts of the world, Facebook censorship threatens to restrict a critical platform for learning and engaging on these issues.”

An escalation in violence in parts of Israel and the Occupied Palestinian Territory (OPT) during May led people to turn to social media to document, raise awareness, and condemn the latest cycle of human rights abuses. There were efforts to force Palestinians out of their homes, brutal suppression of demonstrators, assaults on places of worship, communal violence, indiscriminate rocket attacks, and airstrikes that killed civilians.

Human Rights Watch documented that Instagram, which is owned by Facebook, removed posts, including reposts of content from mainstream news organizations. In one instance, Instagram removed a screenshot of headlines and photos from three New York Times opinion articles for which the Instagram user added commentary that urged Palestinians to “never concede” their rights. The post did not transform the material in any way that could reasonably be construed as incitement to violence or hatred.


In another instance, Instagram removed a photograph of a building with a caption that read, “This is a photo of my family’s building before it was struck by Israeli missiles on Saturday May 15, 2021. We have three apartments in this building.” The company also removed the reposting of a political cartoon whose message was that Palestinians are oppressed and not fighting a religious war with Israel.

All of these posts were removed for containing “hate speech or symbols” according to Instagram. These removals suggest that Instagram is restricting freedom of expression on matters of public interest. The fact that these three posts were reinstated after complaints suggests that Instagram’s detection or reporting mechanisms are flawed and result in false positives. Even when social media companies reinstate wrongly suppressed material, the error impedes the flow of information concerning human rights at critical moments, Human Rights Watch said.

Users and digital rights organizations also reported hundreds of deleted posts, suspended or restricted accounts, disabled groups, reduced visibility, lower engagement with content, and blocked hashtags. Human Rights Watch reviewed screenshots from people who were sharing content about the escalating violence and who reported restrictions on their accounts, including not being able to post content, livestream videos on Instagram, post videos on Facebook, or even like a post.

Human Rights Watch was not able to verify or determine that each case constituted an unjustified restriction due to lack of access to the underlying data needed for verification, and because Facebook refused to comment on specific details of various cases and accounts citing privacy obligations. The range and volume of restrictions reported warrant an independent investigation.

The Oversight Board recommended that Facebook engage an external, independent entity to conduct a thorough examination to determine whether Facebook has applied its content moderation in Arabic and Hebrew without bias, and that the report and its conclusions should be made public. This recommendation echoes multiple calls from human rights and digital rights organizations for a public audit.


In addition to removing content based on its own policies, Facebook often does so at the behest of governments. The Israeli government has been aggressive in seeking to remove content from social media. The Israeli Cyber Unit, based within the State Attorney’s Office, flags and submits requests to social media companies to “voluntarily” remove content. Instead of going through the legal process of filing a court order based on Israeli criminal law to take down online content, the Cyber Unit makes appeals directly to platforms based on their own terms of service. A 2018 report by Israel’s State Attorney’s Office notes an extremely high compliance rate with these voluntary requests, 90 percent across all platforms.

Human Rights Watch is not aware that Facebook has ever disputed this claim. In a letter to Human Rights Watch, the company stated that it has “one single global process for handling government requests for content removal.” Facebook also provided a link to its process for assessing content that violates local law, but that does not address voluntary requests from governments to remove content based on the company’s terms of service.

Noting the role of governments in content removal, the Oversight Board recommended that Facebook make this process transparent and distinguish between government requests that led to global removals based on violations of the company’s Community Standards and requests that led to removal or geo-blocking based on violations of local law. Facebook should implement this recommendation, and in particular disclose the number and nature of requests for content removal by the Israeli Government’s Cyber Unit and how it responded to them, Human Rights Watch said.

Protecting free expression on issues related to Israel and Palestine is especially important in light of shrinking space for discussion. In addition to Israeli authorities, Palestinian authorities in the West Bank and Gaza have systematically clamped down on free expression, while in several other countries, including the US and Germany, steps have been taken to restrict the space for some forms of pro-Palestine advocacy.


Human Rights Watch wrote to Facebook in June 2021 to seek the company’s comment and to inquire about temporary measures and longstanding practices around the moderation of content concerning Israel and Palestine. The company responded by acknowledging that it had already apologized for “the impact these actions have had on their community in Israel and Palestine and on those speaking about Palestinian matters globally,” and provided further information on its policies and practices. However, the company did not answer any of the specific questions from Human Rights Watch or meaningfully address any of the issues raised.


“Facebook provides a particularly critical platform in the Israeli and Palestinian context, where Israeli authorities are committing crimes against humanity of apartheid and persecution against millions, and Palestinians and Israelis have committed war crimes,” Brown said. “Instead of respecting people’s right to speak out, Facebook is silencing many people arbitrarily and without explanation, replicating online some of the same power imbalances and rights abuses we see on the ground.”

Removal and Suppression of Human Rights and Other Content

In May, the escalating tensions between Israel and Palestinians culminated in 11 days of fighting between Israeli forces and Palestinian armed groups based in the Gaza Strip. From May 6 to 19, 7amleh, the Arab Center for the Advancement of Social Media (pronounced “hamla,” Arabic for “campaign”), reported documenting “a dramatic increase of censorship of Palestinian political speech online.”

In the two-week period alone, 7amleh said it documented 500 cases of what it described as content being taken down, accounts closed, hashtags hidden, the reach of specific content reduced, archived content deleted, and access to accounts restricted. Facebook and Instagram accounted for 85 percent of those restrictions.

The digital rights group Sada Social says it documented more than 700 instances of social media networks restricting access to or removing Palestinian content in May alone. On May 7, a group of 30 human rights and digital rights organizations denounced social media companies for “systematically silencing users protesting and documenting the evictions of Palestinian families from their homes in the neighborhood of Sheikh Jarrah in Jerusalem.”


In addition to removing content, Facebook affixed a sensitive warning label to some posts requiring users to click through a screen that says that the content might be “upsetting.” Human Rights Watch found evidence that Facebook affixed such warnings to posts that raised awareness about human rights issues without exposing the viewer to upsetting content such as graphic violence or racial epithets.

For example, on May 24, Instagram affixed such a label to multiple stories posted by Mohammed el-Kurd, a Palestinian activist and resident of Sheikh Jarrah, including a story that contained a reposted image from another user’s Instagram feed of an Israeli police truck and another truck with Hebrew writing on it. The image raised awareness about a high court ruling and the presence of soldiers in the Sheikh Jarrah neighborhood. As of September 30, this image remains on the other user’s Instagram feed, without a sensitive warning label.

In a July letter to Human Rights Watch, Facebook said that it uses warnings to accommodate “different sensitivities about graphic and violent content” among people who use its platforms. For that reason, it adds a warning label to “incredibly graphic or violent content so that it is not available to people under the age of 18,” and so that users are “aware of the graphic or violent nature of the content before they click to see it.” The post in question does not include content that could be considered “graphic or violent,” based on Facebook’s standard.

Facebook said that “some labels would apply to entire carousels of images even if only one is violating.” Hiding content behind a label that prevents it from being viewed by default restricts access to that content. This may be an appropriate step for certain types of graphic and violent content, but labeling all photos when only a subset of them deserves a label is an arbitrary restriction on expression, Human Rights Watch said. Human Rights Watch cannot confirm what other images were in the carousel.

According to 7amleh, 46 percent of content that it documented as taken down from Instagram occurred without the company providing the user a prior warning or notice. In an additional 20 percent of the cases, Instagram notified the user but did not provide a specific justification for restricting the content.


Human Rights Watch also reviewed screenshots from social media users who reported that their posts had less engagement and fewer views from other users than they typically do, or that content from their accounts was not showing up in feeds of other users, a sign that Facebook and Instagram may have made adjustments to their recommendation algorithm to demote certain content.

The Oversight Board investigated one instance of content concerning the escalation in violence in May being removed and, on September 15, issued a decision finding that Facebook acted wrongfully. The user had on May 10 shared a news article reporting on a threat by Izz al-Din al-Qassam Brigades, the military wing of the Palestinian group Hamas, to fire rockets in response to a flare-up in Israel’s repression of Palestinians in occupied East Jerusalem. The Board recognized that re-publication of a news item on a matter of urgent public concern is protected expression and that removing the post restricted such expression without reducing offline harm.

The Board acknowledged receiving public comments from various parties alleging that Facebook has disproportionately removed or demoted content from Palestinian users and content in Arabic, especially in comparison to its treatment of posts threatening anti-Arab or anti-Palestinian violence within Israel. The Board also said it received public comments alleging that Facebook had not done enough to remove content that incites violence against Israeli civilians.


Designating Organizations as “Dangerous”: A Danger to Free Expression

In some cases, Facebook removed the content under its Dangerous Individuals and Organizations Community Standard, which does “not allow organizations or individuals that proclaim a violent mission or are engaged in violence to have a presence on Facebook.” This was the basis for removing the post with a news article about the Izz al-Din al-Qassam Brigades. The Oversight Board criticized the “vagueness” of this policy in its decision.


Facebook relies on the list of organizations that the US has designated as a “foreign terrorist organization,” among other lists. That list includes political movements that also have armed wings, such as the Popular Front for the Liberation of Palestine and Hamas. By deferring to the broad and sweeping US designations, Facebook prohibits leaders, founders, or prominent members of major Palestinian political movements from using its platform. It does this even though, as far as is publicly known, US law does not prohibit groups on the list from using free and freely available platforms like Facebook, and does not consider allowing groups on the list to use platforms tantamount to “providing material support” in violation of US law.

Facebook’s policy also calls for removing praise or support for major Palestinian political movements, even when those expressions of support contain no explicit advocacy of violence.

Facebook should make its list of Dangerous Individuals and Organizations public. It should ensure that the related policy and enforcement do not restrict protected expression, including about terrorism, human rights abuses, and political movements, consistent with international human rights standards, in line with the Oversight Board’s recommendations. In particular, it should clarify which of the organizations banned by Israeli authorities are included under its Dangerous Individuals and Organizations policy.

Reliance on Automation

The audit to determine whether Facebook’s content moderation has been applied without bias should include an examination of the use of automated content moderation. According to Facebook’s periodic transparency reporting on how it enforces its policies, for the period of April to June 2021, the company’s automated tools detected 99.7 percent of the content it deemed to potentially violate its Dangerous Individuals and Organizations policy before a human flagged it. For hate speech, the figure was 97.6 percent on Facebook and 95.1 percent on Instagram for the same period.


Automated content moderation is notoriously poor at interpreting contextual factors that can be key to determining whether a post constitutes support for or glorification of terrorism. This can lead to overbroad limits on speech and inaccurate labeling of speakers as violent, criminal, or abusive. Automated content moderation of content that platforms consider to be “terrorist and violent extremist” has in other contexts led to the removal of evidence of war crimes and human rights atrocities from social media platforms, in some cases before investigators know that the potential evidence exists.

Processes intended to remove extremist content, in particular the use of automated tools, have sometimes perversely led to removing speech opposed to terrorism, including satire, journalistic material, and other content that would, under rights-respecting legal frameworks, be considered protected speech. For example, Facebook’s algorithms reportedly misinterpreted a post from an independent journalist who once headed the BBC’s Arabic News service that condemned Osama bin Laden as constituting support for him. As a result, the journalist was blocked from livestreaming a video of himself shortly before a public appearance. This kind of automatic content removal hampers journalism and other writing, and jeopardizes the future ability of judicial mechanisms to provide remedy for victims and accountability for perpetrators of serious crimes.

The audit of Facebook’s practices should investigate the role that designating a group as terrorist plays in automated content moderation. In one incident, Instagram restricted the hashtag #AlAqsa (#الاقصى or #الأقصى) and removed posts about Israeli police violence at the al-Aqsa mosque in Jerusalem, before Facebook acknowledged an error and reportedly reinstated some of the content.

BuzzFeed News reported that an internal Facebook post noted that the content had been taken down because al-Aqsa “is also the name of an organization sanctioned by the United States Government,” Al-Aqsa Martyrs’ Brigades. Human Rights Watch reviewed four screenshots that documented that Instagram had limited posts using the #AlAqsa hashtag and posts about Palestinian demonstrations at al-Aqsa. Israeli forces responded to demonstrations at the al-Aqsa mosque by firing teargas, stun grenades, and rubber-coated steel bullets, including inside the mosque. The Israeli response left 1,000 Palestinians injured between May 7 and May 10. At least 32 Israeli officers were also injured.

The use of automated tools to moderate content has accelerated due to the ever-expanding growth of user-generated content online. It is important for companies like Facebook to recognize the limitations of such tools and increase their investment in people to review content to avoid, or at least more quickly correct, enforcement errors, in particular in sensitive situations.


In a letter to Human Rights Watch, Facebook referred to the incident as “an error that temporarily restricted content.” The audit should investigate how automation may have played a role in this erroneous enforcement of Facebook policies.

Lack of Transparency Around Government Requests

An independent audit should also evaluate Facebook’s relationship with the Israeli government’s Cyber Unit, which creates a parallel enforcement system for the government to seek to censor content without official legal orders. While Facebook regularly reports on legal orders, it does not report on government requests based on alleged violations of its community standards.


This process may circumvent judicial procedures for addressing illegal speech, and allow government-initiated restrictions on legal speech without informing the targeted social media users. The result denies them the due process rights they would have if the government sought to restrict the content through legal processes. On April 12, the Israeli Supreme Court rejected a petition filed by Adalah and the Association for Civil Rights in Israel seeking to stop the Cyber Unit’s operations.

Facebook declined to answer the Oversight Board’s questions about the number of requests the Israeli government made to remove content during the May 2021 hostilities. The company only said, in relation to the case that the Board ruled on, “Facebook has not received a valid legal request from a government authority related to the content the user posted in this case.”


Acceding to Israeli governmental requests raises concern, since Israeli authorities criminalize political activity in the West Bank using draconian laws to restrict peaceful speech and to ban more than 430 organizations, including all the major Palestinian political movements, as Human Rights Watch has documented. These sweeping restrictions on civil rights are part of the Israeli government’s crimes against humanity of apartheid and persecution against millions of Palestinians.

Technical Glitches Don’t Explain the Full Picture

Facebook has acknowledged several issues affecting Palestinians and their content, some of which it attributed to “technical glitches” and human error. However, these explanations do not account for the range of restrictions and suppression of content observed.

In other situations of political crisis or public emergencies, Facebook has announced so-called “break glass” measures. These include restricting the spread of live video on its platforms and adjustments to its algorithms that classify and rank content to reduce the likelihood that users will see content that potentially violates its policies. Facebook has reportedly deployed such measures in Ethiopia, Myanmar, Sri Lanka, and the US. Facebook has not publicly acknowledged any special measures it has taken in the context of content about Israel and Palestine, aside from setting up a “special operations center” to monitor content on its platforms regarding the May 2021 escalation in Israel and Palestine. Human Rights Watch requested information about the “special operations center,” but Facebook did not respond.

This latest spate of content takedowns is part of a wider pattern of reported censorship of Palestinians and their supporters by social media companies, which civil society organizations have documented for years. These restrictions highlight the need to commission a comprehensive, independent audit that examines Facebook’s underlying policies and enforcement of those policies for bias.


Social Media Companies’ Responsibilities

Businesses have a responsibility to respect human rights by identifying and addressing the human rights impacts of their operations, and providing meaningful access to a remedy. For social media companies, this responsibility includes being transparent and accountable in their moderation of content to ensure that decisions to take content down are not overly broad or biased.

The Santa Clara Principles on Transparency and Accountability in Content Moderation provide important guidance for how companies should carry out their responsibilities in upholding freedom of expression. Based on those principles, companies should clearly explain to users why their content or their account has been taken down, including the specific clause of the Community Standards that the content was found to violate.

Companies should also explain how the content was detected, evaluated, and removed – for example, by users, automation, or human content moderators – and provide a meaningful opportunity for timely appeal of any content removal or account suspension. Facebook has endorsed the Santa Clara Principles, but hasn’t fully applied them.

Need for an Independent Investigation


Facebook should ensure that investigators closely consult with civil society at the outset of the investigation, so that the investigation reflects the most pressing human rights concerns from those affected by its policies. It should make the outcome of the independent investigation public, as it did with its human rights impact assessment on Myanmar and civil rights audit in the US, and present its findings to Facebook’s executive leadership. Facebook should continuously consult with civil society about how its recommendations are being carried out.

Human Rights Watch raised several questions in the letter to Facebook to which the company did not respond. The investigation should address these questions in connection with the May hostilities, and more generally, including:

  • What changes did Facebook make to its algorithms to demote or reduce the spread of speech that it determined most likely violates policies on hate speech, violence and incitement, or dangerous individuals and organizations?
  • What automated detection methods were used, including what terms and classifiers were being used to flag content for potential hate speech or violence and incitement allowing them to be flagged automatically for demotion and/or removal?
  • What error rates for enforcement were deemed to be acceptable?
  • What policies were applied to content concerning Israel and Palestine that are not public?
  • Does Facebook have any firewalls in place to prevent undue influence of its public policy staff, including former Israeli and other government officials, over content moderation decisions with regard to Israel and Palestine?

Note: A member of Human Rights Watch staff is on the Facebook Oversight Board in his personal capacity. The staff member does not work on issues related to human rights and technology at Human Rights Watch. Any position Human Rights Watch takes on the Facebook Oversight Board is independent and is not informed or influenced by his membership on it.



Enabling developers to create innovative AIs on Messenger and WhatsApp





Every week, over 1 billion people connect with businesses on our messaging apps. Many of these conversations are made possible by the thousands of developers who build innovative and engaging experiences on Messenger, Instagram and WhatsApp.

Since opening access to our Llama family of large language models, we’ve seen lots of momentum and innovation, with more than 30 million downloads to date. As our messaging services continue to evolve, we believe the technology from Llama and other generative AI models has the potential to enhance business messaging through more natural, conversational experiences.

At Connect, Meta announced that developers will be able to build third-party AIs – a term we use to refer to our generative AI-powered assistants – for our messaging services.

To make it easy for any developer to get started, we’re simplifying the developer onboarding process and providing access to APIs for AIs that make it possible to build new conversational experiences within our messaging apps.

All developers will be able to access the new onboarding experience and features on Messenger in the coming weeks. For WhatsApp, we’ll be opening a Beta program in November – if you’re interested in participating please sign up to the waitlist here to learn more.


We’ll keep everyone updated as we make these tools available to more developers later this year. We look forward to your feedback and seeing what you create.



Introducing Facebook Graph API v18.0 and Marketing API v18.0





Today, we are releasing Facebook Graph API v18.0 and Marketing API v18.0. As part of this release, we are highlighting changes below that we believe are relevant to parts of our developer community. These changes include announcements, product updates, and notifications on deprecations that we believe are relevant to your application(s)’ integration with our platform.

For a complete list of all changes and their details, please visit our changelog.

General Updates

Consolidation of Audience Location Status Options for Location Targeting

As previously announced in May 2023, we have consolidated Audience Location Status to our current default option of “People living in or recently in this location” when advertisers choose the type of audience to reach within their Location Targeting selections. This update consolidates other previously available options and removes the “People traveling in this location” option.

We are making this change as part of our ongoing efforts to deliver more value to businesses, simplify our ads system, and streamline our targeting options in order to increase performance efficiency and remove options that have low usage.

This update will apply to new or duplicated campaigns. Existing campaigns created prior to launch will not be entered in this new experience unless they are in draft mode or duplicated.
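In the Marketing API, this choice surfaces in the ad set targeting spec. The sketch below shows a minimal targeting spec using the consolidated default; the `location_types` values (`"home"`, `"recent"`) follow the public targeting spec as we understand it, and the removed traveling option corresponded to the `"travel_in"` value — treat the exact field names as assumptions to verify against the current docs:

```python
# Minimal ad set targeting spec (illustrative sketch, not a full ad set).
# "home" + "recent" together correspond to the consolidated default option
# "People living in or recently in this location"; the removed
# "People traveling in this location" option mapped to "travel_in".
targeting_spec = {
    "geo_locations": {
        "countries": ["US"],
        "location_types": ["home", "recent"],
    },
}

# The deprecated traveling option should no longer appear in new specs.
assert "travel_in" not in targeting_spec["geo_locations"]["location_types"]
```

Existing campaigns keep their old behavior until duplicated or re-drafted, so only new or duplicated campaigns need a spec shaped like this.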


Add “add_security_recommendation” and “code_expiration_minutes” to WA Message Templates API

Earlier this year, we released WhatsApp’s authentication solution, which enabled creating and sending authentication templates with native buttons and preset authentication messages. With the release of Graph API v18, we’re making improvements to the retrieval of authentication templates, making the end-to-end authentication template process easier for BSPs and businesses.

With Graph API v18, BSPs and businesses can have better visibility into preset authentication message template content after creation. Specifically, payloads will return preset content configuration options, in addition to the text used by WhatsApp. This improvement can enable BSPs and businesses to build “edit” UIs for authentication templates that can be constructed on top of the API.


Note that errors may occur when upgrading to Graph API v18 if BSPs or businesses take the entire response from the GET request and provide it back to the POST request to update templates. To resolve this, drop the body/header/footer text fields before passing the payload back to the API.
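The cleanup step described above can be sketched as a small helper. The component shape (`components` entries with a `type` of `BODY`/`HEADER`/`FOOTER` carrying a `text` field) follows the public message-templates payload format; treat the exact field names as assumptions to confirm against the WhatsApp Business Management API docs:

```python
def prepare_template_update(template: dict) -> dict:
    """Return a copy of a GET /message_templates entry with the body/header/
    footer 'text' fields dropped, so the payload can be POSTed back to update
    the template under Graph API v18 without triggering an error."""
    cleaned = {k: v for k, v in template.items() if k != "components"}
    cleaned["components"] = []
    for component in template.get("components", []):
        if component.get("type", "").upper() in {"BODY", "HEADER", "FOOTER"}:
            # Drop only the generated text; keep configuration options such
            # as add_security_recommendation or code_expiration_minutes.
            component = {k: v for k, v in component.items() if k != "text"}
        cleaned["components"].append(component)
    return cleaned


# Example: an authentication template as returned by the GET request
# (hypothetical values for illustration).
fetched = {
    "name": "auth_code",
    "components": [
        {"type": "BODY", "text": "{{1}} is your code", "add_security_recommendation": True},
        {"type": "BUTTONS", "buttons": [{"type": "OTP", "otp_type": "COPY_CODE"}]},
    ],
}
update_payload = prepare_template_update(fetched)
```

Here the preset configuration options survive the round trip while the generated text, which the API rejects on update, is stripped.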

Re-launching dev docs and changelogs for creating Call Ads

  • Facebook Reels Placement for Call Ads

    Meta is releasing the ability to deliver Call Ads through the Facebook Reels platform. Call ads allow users to call businesses in the moment of consideration when they view an ad, and help businesses drive more complex discussions with interested users. This is an opportunity for businesses to advertise with call ads based on people’s real-time behavior on Facebook. At the ad set level within Ads Manager, businesses can choose to add “Facebook Reels” under the Placements section.
  • Re-Launching Call Ads via API

    On September 12, 2023, we’re providing updated guidance on how to create Call Ads via the API. We are introducing documentation solely for Call Ads, so that 3P developers can more easily create Call Ads campaigns and know how to view insights about their ongoing call ad campaigns, including call-related metrics. In the future, we also plan to support Call Add-ons via our API platform. Developers should have access to the general permissions necessary to create general ads in order to create Call Ads via the API platform.

    Please refer to developer documentation for additional information.

Deprecations & Breaking Changes

Graph API changes for user granular permission feature

We are updating two Graph API endpoints for WhatsAppBusinessAccount. These endpoints are as follows:

  • Retrieve message templates associated with WhatsAppBusiness Account
  • Retrieve phone numbers associated with WhatsAppBusiness Account

With v18, we are rolling out a new “user granular permission” feature. All existing users who are already added to WhatsAppBusinessAccount will be backfilled and will continue to have access (no impact).

The admin has the flexibility to change these permissions. If the admin removes a user’s access to view message templates or phone numbers, that user will start receiving an error message stating that they do not have permission to view message templates or phone numbers, on all versions v18 and older.
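Integrations that poll these endpoints should handle this error path explicitly rather than treating it as a transient failure. A minimal sketch, assuming the standard Graph API error envelope (`{"error": {"message": ..., "code": ...}}`); the specific code checked here (10, the common OAuth permission code) is an assumption to verify against the error-codes documentation:

```python
def is_permission_error(response_body: dict) -> bool:
    """Return True when a Graph API JSON response body carries a
    permission-denied error, e.g. after an admin revokes a user's access
    to message templates or phone numbers.

    Assumption: code 10 is the usual OAuth permission-denied code; the
    message check is a fallback for related permission errors."""
    error = response_body.get("error") or {}
    message = str(error.get("message", ""))
    return error.get("code") == 10 or "permission" in message.lower()


# Example error body of the shape returned to a user whose access was revoked
# (hypothetical message text for illustration).
denied = {
    "error": {
        "message": "(#10) You do not have permission to view message templates",
        "type": "OAuthException",
        "code": 10,
    }
}
ok = {"data": [{"name": "order_update"}]}
```

A caller can branch on `is_permission_error(...)` to surface a clear “ask your WhatsAppBusinessAccount admin to restore access” message instead of retrying.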


Deprecate legacy metrics naming for IG Media and User Insights

Starting on September 12, Instagram will remove duplicative and legacy insights metrics from the Instagram Graph API in order to provide developers with a single source of metrics.

This upgrade reduces confusion and increases the reliability and quality of our reporting.

Ninety days after this launch (i.e., on December 11, 2023), we will remove all of these duplicative and legacy insights metrics from the Instagram Graph API on all versions, in order to be more consistent with the Instagram app.

We appreciate all the feedback that we’ve received from our developer community, and look forward to continuing to work together.

Please review the media insights and user insights developer documentation to learn more.


Deprecate all Facebook Wi-Fi v1 and Facebook Wi-Fi v2 endpoints

Facebook Wi-Fi was designed to improve the experience of connecting to Wi-Fi hotspots at businesses. It allowed a merchant’s customers to get free Wi-Fi simply by checking in on Facebook. It also allowed merchants to control who could use their Wi-Fi and for how long, and integrated with ads to enable targeting to customers who had used the merchant’s Wi-Fi. This product was deprecated on June 12, 2023. As the partner notice period has ended, all endpoints used by Facebook Wi-Fi v1 and Facebook Wi-Fi v2 have been deprecated and removed.

API Version Deprecations:

As part of Facebook’s versioning schedule for Graph API and Marketing API, please note the upcoming deprecations:

Graph API

  • September 14, 2023: Graph API v11.0 will be deprecated and removed from the platform
  • February 8, 2024: Graph API v12.0 will be deprecated and removed from the platform
  • May 28, 2024: Graph API v13.0 will be deprecated and removed from the platform

Marketing API

  • September 20, 2023: Marketing API v14.0 will be deprecated and removed from the platform
  • September 20, 2023: Marketing API v15.0 will be deprecated and removed from the platform
  • February 06, 2024: Marketing API v16.0 will be deprecated and removed from the platform

To avoid disruption to your business, we recommend migrating all calls to the latest API version that launched today.
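One low-friction way to stay ahead of this schedule is to pin the API version in a single place and track the published removal dates, so a migration is a one-line change. A minimal sketch; the URL shape follows the standard versioned Graph API endpoint format, and the dates come from the schedule above:

```python
from datetime import date

# Removal dates taken from the deprecation schedule published above.
GRAPH_API_SUNSET = {
    "v11.0": date(2023, 9, 14),
    "v12.0": date(2024, 2, 8),
    "v13.0": date(2024, 5, 28),
}


def graph_url(path: str, version: str = "v18.0") -> str:
    """Build a versioned Graph API URL; pinning the version here means
    migrating to a new release touches one default value."""
    return f"https://graph.facebook.com/{version}/{path.lstrip('/')}"


def is_sunset(version: str, today: date) -> bool:
    """True when the given Graph API version is past its removal date
    (unknown versions are assumed still live)."""
    sunset = GRAPH_API_SUNSET.get(version)
    return sunset is not None and today >= sunset
```

For example, `graph_url("me/accounts")` yields a v18.0 URL, while `is_sunset("v11.0", date(2024, 1, 1))` flags a version that has already been removed.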

Facebook Platform SDK

As part of our 2-year deprecation schedule for Platform SDKs, please note the upcoming deprecations and sunsets:

  • October 2023: Facebook Platform SDK v11.0 or below will be sunset
  • February 2024: Facebook Platform SDK v12.0 or below will be sunset



Allowing Users to Promote Stories as Ads (via Marketing API)





Before today (August 28, 2023), advertisers could not promote images and/or videos used in Instagram Stories as ads via the Instagram Marketing API. This limitation created unwanted friction for our partners and their customers.

After consistently hearing about this pain point from our developer community, we have removed this friction: as of August 28, 2023, advertisers can seamlessly promote image and/or video media used in Instagram Stories as ads via the Instagram Marketing API.

We appreciate all the feedback received from our developer community, and hope to continue improving your experience.

Please review the developer documentation to learn more.
