After Facebook Leaks, Here Is What Should Come Next

Every year or so, a new Facebook scandal emerges. These blowups follow a fairly standard pattern, at least in the U.S. First, new information is revealed that the company misled users about an element of the platform—data sharing and data privacy, extremist content, ad revenue, responses to abuse—the list goes on. Next, following a painful news cycle for the company, Mark Zuckerberg puts on a sobering presentation for Congress about the value that Facebook provides to its users, and the work that they’ve already done to resolve the issue. Finally, there is finger-wagging, political jockeying, and within a month or two, a curious thing happens: Congress does nothing.

It’s not for lack of trying, of course—much like Facebook, Congress is a many-headed beast, and its members rarely agree on the specific problems besetting American life, let alone the solutions. But this year may be different.

For the last month, Facebook has been at the center of a lengthy, damaging news cycle brought on by the release of thousands of pages of leaked documents, sent to both Congress and news outlets by former Facebook data scientist Frances Haugen. The documents show the company struggling internally with the negative impacts of both Facebook and its former-rival, now-partner platform, Instagram. (Facebook’s attempt to rebrand as Meta should not distract from the takeaways of these documents, so we will continue to call the company Facebook here.)

In addition to internal research and draft presentations released several weeks ago, thousands of new documents were released last week, including memos, chats, and emails. Together, these documents paint a picture of a company that is seriously grappling with (and often failing in) its responsibility as the largest social media platform.

Many of the problems highlighted by these documents are not particularly new. People looking in at the black box of Facebook’s decision-making have come to similar conclusions in several areas; those conclusions have simply now been proven. Regardless, we may finally be at a tipping point.

When Mark Zuckerberg went in front of Congress to address his company’s role in the Cambridge Analytica scandal over three years ago, America’s lawmakers seemed to have trouble agreeing on basic things like how the company’s business model worked, not to mention the underlying causes of its issues or how to fix them. But since then, policymakers and politicians have had time to educate themselves. Several more hearings addressing the problems with Big Tech writ large, and with Facebook in particular, have helped government develop a better shared understanding of how the behemoth operates; as a result, several pieces of legislation have been proposed to rein it in.

Now, the Facebook Papers have once again thrust the company into the center of public discourse, and the scale of the company’s problems have captured the attention of both news outlets and Congress. That’s good—it’s high time to turn public outrage into meaningful action that will rein in the company.

But it’s equally important that the solutions be tailored, carefully, to solve the actual issues that need to be addressed. No one would be happy with legislation that ends up benefitting Facebook while making it more difficult for competing platforms to coexist. For example, Facebook has been heavily promoting changes to Section 230 that would, by and large, harm small platforms while helping the behemoth.

Here’s where EFF believes Congress and the U.S. government could make a serious impact:

Break Down the Walls

Much of the damage Facebook does is a function of its size. Other social media sites that aren’t attempting to scale across the entire planet run into fewer localization problems, are able to be more thoughtful about content moderation, and have, frankly, a smaller impact on the world. We need more options. Interoperability will help us get there.

Interoperability is the simple idea that new services should be able to plug into dominant ones. An interoperable Facebook would mean that you wouldn’t have to choose between leaving Facebook and continuing to socialize with the friends, communities and customers you have there. Today, if you want to leave Facebook, you need to leave your social connections behind as well: that means no more DMs from your friends, no more access to your sibling’s photos, and no more event invitations from your co-workers. In order for a new social network to get traction, whole social groups have to decide to switch at the same time – a virtually insurmountable barrier. But if Facebook were to support rich interoperability, users on alternative services could communicate with users on Facebook. Leaving Facebook wouldn’t mean leaving your personal network. You could choose a service – run by a rival, a startup, a co-op, a nonprofit, or just some friends – and it would let you continue to connect with content and people on Facebook, while enforcing its own moderation and privacy policies.

Critics often argue that in an interoperable world, Facebook would have less power to deny bad actors access to our data, and thus defend us from creeps like Cambridge Analytica. But Facebook has already failed to defend us from them. When Facebook does take action against third-party spying on its platform, it’s only because that happens to be in its interests: either as a way to quell massive public outcry, or as a convenient excuse to undermine legitimate competition. Meanwhile, Facebook continues to make billions from its own exploitation of our data. Instead of putting our trust in corporate privacy policies, we’d need a democratically accountable privacy law, with a private right of action. And any new policies which promote interoperability should come with built-in safeguards against the abuse of user data.

Interoperability isn’t an alternative to demanding better of Facebook – better moderation, more transparency, better privacy rules – rather, it’s an immediate, tangible way of helping Facebook’s users escape from its walled garden right now. Not only does that make those users’ lives better – it also makes it more likely that Facebook will obey whatever rules come next, not just because those are the rules, but because when they break the rules, their users can easily leave Facebook.

Facebook knows this. It’s been waging a “secret war on switching costs” for years now. Legislation like the ACCESS Act, which would force platforms like Facebook to open up, is a positive step toward a more interoperable future. If a user wants to view Facebook through a third-party app that allows for better searching or more privacy, they ought to be able to do so. If they want to take their data to platforms that have better privacy protections, without leaving their friends and social connections behind, they ought to be able to do that too.

Pass a Strong Baseline Privacy Law

Users deserve meaningful controls over how the data they provide to companies is collected, used, and shared. Facebook and other tech companies too often choose their profits over your privacy, opting to collect as much as possible while denying users intuitive control over their data. In many ways this problem underlies the rest of Facebook’s harms. Facebook’s core business model depends on collecting as much information about users as possible, then using that data to target ads – and target competitors. Meanwhile, Facebook (and Google) have created an ecosystem where other companies – from competing advertisers to independent publishers – feel as if they have no choice but to spy on their own users, or help Facebook do so, in order to squeak out revenue in the shadow of the monopolists.

Stronger baseline federal privacy laws would help steer companies like Facebook away from collecting so much of our data. They would also level the playing field, so that Facebook and Google cannot use their unrivaled access to our information as a competitive advantage. A strong privacy law should require real opt-in consent to collect personal data and prevent companies from re-using that data for secondary purposes. To let users enforce their rights, it must include a private cause of action that allows users to take companies to court if they break the law. This would tip the balance of power away from the monopolists and back towards users. Ultimately, a well-structured baseline could put a big dent in the surveillance business model that not only powers Facebook, but enables so many of the worst harms of the tech ecosystem as well.

Break Up the Tech

Facebook’s broken system is fueled by a growth-at-any-cost model, as indicated by some of the testimony Haugen delivered to Congress. The number of Facebook users, and the increasing depth of the data it gathers about them, is Facebook’s biggest selling point. In other words, Facebook’s badness is inextricably tied to its bigness.

We’re pleased to see antitrust cases against Facebook. Requiring Facebook to divest Instagram, WhatsApp, and possibly other acquisitions, and limiting the company’s future mergers and acquisitions, would go a long way toward solving some of the problems with the company, and inject competition into a field where it’s been stifled for many years now. Legislation to facilitate a breakup has been approved by the House Judiciary Committee and awaits House floor action.

Shine a Light On the Problems

Some of the most detailed documents that have been released so far show research done by various teams at Facebook. And despite being conducted by Facebook itself, much of that research reaches conclusions critical of Facebook’s own services.

For example: a large percentage of users report seeing content on Facebook that they consider disturbing or hateful—a situation that the researcher notes “needs to change.” Research also showed that some young female Instagram users report that the platform makes them feel bad about themselves.

But one of the problems with documents like these is that it’s impossible to know what we don’t know—we’re getting reports piecemeal, and have no idea what practical responses might have been offered or tested. Also, some of the research might not always mean what first glances would indicate, due to reasonable limitations or the ubiquity of the platform itself.

EFF has been critical of Facebook’s lack of transparency for a very long time. When it comes to content moderation, for example, the company’s transparency reports lack many of the basics: how many human moderators are there, and how many cover each language? How are moderators trained? The company’s community standards enforcement report includes rough estimates of how many pieces of content of which categories get removed, but does not tell us why or how these decisions are taken.

Transparency about decisions has increased in some ways, such as through the Facebook Oversight Board’s public decisions. But revelations from the whistleblower documents about the company’s “cross-check” program, which gives some “VIP” users a near-blanket ability to ignore the community standards, make it clear that the company has a long way to go.  Facebook should start by embracing the Santa Clara Principles on Transparency and Accountability in Content Moderation, which are a starting point for companies to properly indicate the ways that they moderate user speech.

But content moderation is just the start. Facebook is constantly talking out of both sides of its depressingly large mouth—most recently by announcing it would delete the face recognition templates of Facebook users, then backing away from this commitment for its future ventures. Given how two-faced the company has, frankly, always been, transparency is an important step towards ensuring we have real insight into the platform. The company must make it easier for researchers both inside and outside to engage in independent analysis.

Look Outside the U.S. 

Facebook must do more to respect its global user base. Facebook—the platform—is available in over 100 languages, but the company has only translated its community standards into around 50 of those (as of this writing). How can a company expect to enforce its moderation rules properly when they are written in languages, or dialects, that its users can’t read?

The company also must ensure that its employees, and in particular its content moderators, have cultural competence and local expertise. Otherwise it is literally impossible for them to appropriately moderate content. But first, it has to actually employ people with that expertise. It’s no wonder that the company has tended to play catch-up when crises arrive outside of America (where it also isn’t exactly ahead of the game).

And by the way: it’s profoundly disappointing that the Facebook Papers were released only to Western media outlets. We know that many of the documents contain information about how Facebook conducts business globally—and particularly how the company fails to put appropriate resources behind its policymaking and content moderation practices in different parts of the world. Providing access to trusted, international media publications that have the experience and expertise to provide nuanced, accurate analysis and perspective is a vital step in the process—after all, the majority of Facebook’s users worldwide live outside of the United States and Europe.

Don’t Give In To Easy Answers

Facebook is big, but it’s not the internet. More than a billion websites exist; tens of thousands of platforms allow users to connect with one another. Any solutions Congress proposes must remember this. Though Zuckerberg may “want every other company in our industry to make the investments and achieve the results that [Facebook has],” forcing everyone else to play by their rules won’t get us to a workable online future. We can’t fix the internet with legislation that pulls the ladder up behind Facebook, leaving everyone else below.

For example: legislation that forces sites to limit recommended content could have disastrous consequences, given how commonly sites make (often helpful) choices about the information we see when we browse, from restaurant recommendations to driving directions to search results. And forcing companies to rethink their algorithms, or offer “no algorithm” versions, may seem like fast fixes for a site like Facebook. But the devil is in the details, and in how those details get applied to the entire online ecosystem.

Facebook, for its part, seems interested in easy fixes as well. Rebranding as “Meta” amounts to a drunk driver switching cars. Gimmicks designed to attract younger users to combat its aging user base are a poor substitute for thinking about why those users refuse to use the platform in the first place.

Zuckerberg has gotten very wealthy while wringing his hands every year or two and saying, “Sorry. I’m sorry. I’m trying to fix it.” Facebook’s terrible, no good, very bad news cycle is happening at the same time that the company reported a $9 billion profit for the quarter.

Zuckerberg insists this is not the Facebook he wanted to create. But he’s had nearly two decades of more-or-less absolute power to make the company into whatever he most desired, and this is where it’s ended up—despised, dangerous, and immensely profitable. Given that track record, it’s only reasonable that we heavily discount his suggestions in any serious consideration of how to get out of this place.

Nor should we expect policymakers to do much better unless and until they start listening to a wider array of voices. While the leaks have been directing the narrative about where the company is failing its users, there are plenty of other issues that aren’t grabbing headlines—like the fact that Facebook continues collecting data on deactivated accounts. A focused and thoughtful effort by Congress must include policy experts who have been studying the problems for years.

The Facebook leaks should be the starting point—not the end—of a sincere policy debate over concrete approaches that will make the internet—not just Facebook—better for everyone. 



Resources for Completing App Store Data Practice Questionnaires for Apps That Include the Facebook or Audience Network SDK

Updated July 18: Developers and advertising partners may be required to share information on their app’s privacy practices in third-party app stores, such as Google Play and the Apple App Store, including the functionality of SDKs provided by Meta. To make it easier for you to complete these requirements, we have consolidated information that explains our data collection practices for the Facebook and Audience Network SDKs.

Facebook SDK

To provide functionality within the Facebook SDK, we may receive and process certain contact, location, identifier, and device information associated with Facebook users and their use of your application. The information we receive depends on which SDK features third-party applications use, and we have structured the document below according to these features.

App Ads, Facebook Analytics, & App Events

Facebook App Events allow you to measure the performance of your app using Facebook Analytics, measure conversions associated with Facebook ads, and build audiences to acquire new users as well as re-engage existing users. There are a number of different ways your app can use app events to keep track of when people take specific actions such as installing your app or completing a purchase.

With Facebook SDK, there are app events that are automatically logged (app installs, app launches, and in-app purchases) and collected for Facebook Analytics unless you disable automatic event logging. Developers determine what events to send to Facebook from a list of standard events, or via a custom event.

When developers send Facebook custom events, these events could include data types outside of standard events. Developers control sending these events to Facebook either directly via application code or in Events Manager for codeless app events. Developers can review their code and Events Manager to determine which data types they are sending to Facebook. It’s the developer’s responsibility to ensure this is reflected in their application’s privacy policy.

Advanced Matching

Developers may also send us additional user contact information in code, or via the Events Manager. Advanced matching functionality may use the following data, if sent:

  • email address, name, phone number, physical address (city, state or province, zip or postal code and country), gender, and date of birth.

Facebook Login

There are two scenarios for applications that use Facebook Login via the Facebook SDK: Authenticated Sign Up or Sign In, and User Data Access via Permissions. For authentication, a unique, app-specific identifier tied to a user’s Facebook Account enables the user to sign in to your app. For Data Access, a user must explicitly grant your app permission to access data.

Note: Since Facebook Login is part of the Facebook SDK, we may collect other information referenced here when you use Facebook Login, depending on your settings.

Device Information

We may also receive and process the following information if your app is integrated with the Facebook SDK:

  • Device identifiers;
  • Device attributes, such as device model and screen dimensions, CPU core, storage size, SDK version, OS and app versions, and app package name; and
  • Networking information, such as the name of the mobile operator or ISP, language, time zone, and IP address.

Audience Network SDK

We may receive and process the following information when you use the Audience Network SDK to integrate Audience Network ads in your app:

  • Device identifiers;
  • Device attributes, such as device model and screen dimensions, operating system, mediation platform and SDK versions; and
  • Ad performance information, such as impressions, clicks, placement, and viewability.

First seen at developers.facebook.com

Enabling Faster Python Authoring With Wasabi

This article was written by Omer Dunay, Kun Jiang, Nachi Nagappan, Matt Bridges and Karim Nakad.


Motivation

At Meta, Python is one of the most used programming languages in terms of both lines of code and number of users. Every day, we have thousands of developers working with Python to launch new features, fix bugs and develop the most sophisticated machine learning models. As such, it is important to ensure that our Python developers are productive and efficient by giving them state-of-the-art tools.

Introducing Wasabi

Today we introduce Wasabi, a Python language service that implements the Language Server Protocol (LSP) and is designed to help our developers write Python more easily and quickly. Wasabi assists our developers with a series of advanced features, including:

  • Lints and diagnostics: These are available as the user types.
  • Auto import quick fix: This is available for undefined-variable lint.
  • Global symbols autocomplete: When a user types a prefix, all symbols (e.g. function names, class names) that are defined in other files and start with that prefix will appear in the autocomplete suggestion automatically.
  • Organize Imports + Remove Unused: A quick fix that removes all unused imports and reformats the import section according to PEP 8 rules. This feature is powered by other tools built inside Meta, such as LibCST, which helps with safe code refactoring.
  • Python snippets: Snippet suggestions are available as the user types for common code patterns.

Additionally, Wasabi is a surface-agnostic service that can be deployed into multiple code repositories and various development environments (e.g., VSCode, Bento Notebook). Since its debut, Wasabi has been adopted by tens of thousands of Python users at Meta across Facebook, Instagram, Infrastructure teams and many more.
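Because Wasabi speaks LSP, each of these features maps onto standard protocol messages. As a generic illustration of the protocol's JSON shape (not Wasabi's actual traffic; the file path and symbol names here are made up), a completion exchange looks roughly like this, with auto-import edits carried in a completion item's `additionalTextEdits` field:

```python
# A generic LSP "textDocument/completion" request, as an editor would send it.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/completion",
    "params": {
        "textDocument": {"uri": "file:///repo/app/main.py"},
        "position": {"line": 41, "character": 17},  # cursor just after a typed prefix
    },
}

# A matching response: one suggestion whose additionalTextEdits inserts the
# import line at the top of the file if the suggestion is accepted.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": [
        {
            "label": "utils",
            "kind": 9,  # CompletionItemKind.Module
            "detail": "from myproject.common import utils",
            "additionalTextEdits": [
                {
                    "range": {
                        "start": {"line": 0, "character": 0},
                        "end": {"line": 0, "character": 0},
                    },
                    "newText": "from myproject.common import utils\n",
                }
            ],
        }
    ],
}
```

Implementing the standard protocol, rather than an editor-specific API, is what lets a single service back both VSCode and notebook surfaces.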

Figure 1: Example for global symbols autocomplete, one of Wasabi’s features

Language Services at Meta Scale

A major design requirement for language services is low latency and user responsiveness. Autocomplete suggestions, lints, and quick fixes should appear to the developer immediately as they type.

At Meta, code is organized in a monorepo, meaning that developers have access to all Python files as they develop. This approach has major advantages for the developer workflow, including better discoverability, transparency, easier library sharing, and increased collaboration between teams. It also introduces unique challenges for building developer tools such as language services that need to handle hundreds of thousands of files.

The scaling problem is one of the reasons we avoided the off-the-shelf language services available in the industry (e.g., Pyright, Jedi). Most of those tools were built with a relatively small-to-medium workspace in mind, perhaps assuming thousands of files at most, which breaks down for operations that require O(repo) information.

For example, consider the “auto import” quick fix for undefined variables. In order to suggest all available symbols, the language server needs to read all source files, parse them, and keep an in-memory cache of all parsed symbols in order to respond to requests.

While this may work in a single process on the development machine for small and medium repositories, the approach doesn’t scale to the monorepo use case. Reading and parsing hundreds of thousands of files can take many minutes, which means slow startup times and frustrated developers. And even then, the resulting in-memory cache may not fit in a single machine’s memory.

For example, assume an average Python file takes roughly 10ms to parse and extract symbols with a standard error-recoverable parser. For 1,000 files, initialization takes about 10 seconds, a fairly reasonable startup time. For 1,000,000 files, it would take roughly 166 minutes, which is obviously far too long.
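That linear scaling is easy to sanity-check; a minimal sketch using the same assumed 10ms-per-file cost:

```python
PARSE_MS_PER_FILE = 10  # assumed average parse + symbol-extraction cost per file

def startup_minutes(num_files: int) -> float:
    """Sequential startup time, in minutes, for a naive parse of every file."""
    return num_files * PARSE_MS_PER_FILE / 1000.0 / 60.0

# ~0.17 minutes (about 10 seconds) for a small repo of 1,000 files,
# but ~166.7 minutes for a 1,000,000-file monorepo.
```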

How Wasabi Works

Offline + Online Processing:

In order to support low latency in Meta-scale repositories, Wasabi is powered by two phases of parsing: background processing (offline) done by an external indexer, and local processing of locally changed “dirty” files (online):

  1. A background process indexes all committed source files and maintains the parsed symbols in a special database (glean) that is designed for storing code symbol information.
  2. Wasabi, which is a local process running on the user machine, calculates the delta between the base revision, stack of diffs and uncommitted changes that the user currently has, and extracts symbols only out of those “dirty” files. Since this set of “dirty” files is relatively small, the operation is performed very fast.
  3. Upon an LSP request such as auto import, Wasabi parses the abstract syntax tree (AST) of the file, then based on the context of the cursor, creates a query for both glean and local changes symbols, merges the results and returns it to the user.
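The merge step above can be sketched as follows (hypothetical function and index shapes; the real system queries Glean and uses a full Python parser rather than these stand-ins):

```python
def find_symbol_candidates(prefix, glean_index, dirty_files, parse_symbols):
    """Merge symbols from the offline index with freshly parsed 'dirty' files.

    glean_index:   {symbol_name: module_path} built by the background indexer.
    dirty_files:   {path: source_text} for locally changed, uncommitted files.
    parse_symbols: callable extracting {symbol_name: module_path} from source.
    """
    # Online phase: only the small set of dirty files is parsed on demand.
    local = {}
    for path, source in dirty_files.items():
        local.update(parse_symbols(path, source))

    # Local (dirty) definitions shadow stale entries from the offline index.
    merged = {**glean_index, **local}
    return {name: mod for name, mod in merged.items() if name.startswith(prefix)}
```

Because only the dirty files are parsed per request, latency stays proportional to the size of the local change rather than the size of the repository.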

As a result, all Wasabi features are low latency and available to the user seamlessly as they type.

Note: Wasabi currently doesn’t handle the potential delta between the revision that Glean indexed (indexing happens once every few hours) and the local base revision that the user currently has. We plan on adding that in the future.

Figure 2: Wasabi’s high level architecture

Ranking the Results

In some cases, due to the scale of the repository, there may be many valid suggestions in the set of results. For example, consider “auto import” suggestions for the “utils” symbol. There may be many modules that define a class named “utils” across the repository, therefore we invest in ranking the results to ensure that users see the most relevant suggestions on the top.

For example, auto import ranking is done by taking into account:

  • Locality:
    • The distance of the suggested module directory path from the directory paths of modules that are already imported in this file.
    • The distance of the suggested module directory path from the current directory path of the local file.
    • Whether the file has been locally changed (“dirty” files are ranked higher).
  • Usage: The number of occurrences the import statement was used by other files in the repository.
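A scoring function combining those signals might look like the sketch below (illustrative weights and helper names, not Wasabi's actual ranking model):

```python
def path_distance(a: str, b: str) -> int:
    """Directory hops between two paths: a simple proxy for 'locality'."""
    pa, pb = a.split("/"), b.split("/")
    common = 0
    for x, y in zip(pa, pb):
        if x != y:
            break
        common += 1
    return (len(pa) - common) + (len(pb) - common)

def score_suggestion(candidate_dir, current_dir, imported_dirs, is_dirty, usage_count):
    """Higher is better: frequently used, nearby, locally edited modules rank first."""
    locality = min(
        [path_distance(candidate_dir, current_dir)]
        + [path_distance(candidate_dir, d) for d in imported_dirs]
    )
    return usage_count - 5 * locality + (100 if is_dirty else 0)
```

Candidates would then be sorted by this score, descending, before being returned in the suggestion list.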

To measure our success, we tracked the position of the accepted suggestion within the suggestion list, and found that in almost all cases the accepted suggestion was ranked among the top three.

Positive feedback from developers

After launching Wasabi in several pilot runs inside Meta, we have received a great deal of positive feedback from our developers. Here is one quote from a software engineer at Instagram:

“I’ve been using Wasabi for a couple months now, it’s been a boon to my productivity! Working in Instagram Server, especially on larger files, warnings from pyre are fairly slow. With Wasabi, they’re lightning fast 😃!”

“I use features like spelling errors and auto import several times an hour. This probably makes my development workflow 10% faster on average (rough guess, might be more, definitely not less), a pretty huge improvement!”

As noted above, Wasabi has made a meaningful difference in keeping our developers productive and happy.

The metric to measure authoring velocity

In order to quantitatively understand how much value Wasabi delivers to our Python developers, we considered a number of metrics to measure its impact. Ultimately, we landed on a metric we call ‘Authoring Velocity,’ which measures how fast developers write code. In essence, Authoring Velocity is the inverse of the time spent on a specific diff (a collection of code changes) during the authoring stage. The authoring stage starts at the timestamp when a developer checks out from the source control repo and ends at the timestamp when the diff is created. We also normalize against the number of lines of code changed in the diff, as a proxy for diff size, to offset possible variance. The greater the value of ‘Authoring Velocity,’ the faster we think developers write their code.
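Under that definition the metric reduces to roughly the following (a sketch of the formula as described; the exact normalization is internal to Meta):

```python
def authoring_velocity(checkout_ts: float, diff_created_ts: float, lines_changed: int) -> float:
    """Inverse of authoring time, normalized by diff size: lines of changed
    code per second between source-control checkout and diff creation."""
    authoring_seconds = diff_created_ts - checkout_ts
    return lines_changed / authoring_seconds
```

By this measure, a 200-line diff authored in two hours scores the same as a 100-line diff authored in one hour, which is the point of the size normalization.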

Figure 3: Authoring Velocity Metric Formula

The result

With the metric defined, we ran an experiment to measure the difference Wasabi makes for our developers. Specifically, we selected ~700 developers who had never used Wasabi before, and randomly put them into two independent groups at a 50:50 split ratio. Developers in the test group had Wasabi enabled when they wrote Python, whereas nothing changed for those in the control group. For both groups, we compared the relative metric values before and after the Wasabi enablement. From our results, we find that for developers in the test group, the median authoring velocity increased by 20% after they started using Wasabi. Meanwhile, we saw no significant change in the control group before and after, which is expected.

Figure 4: Authoring Velocity measurements for control and test groups, before and after Wasabi was rolled out to the test group.

Summary

With Python’s unprecedented growth, it is an exciting time to be working to make the language better and easier to use. Together with its advanced features, Wasabi has successfully improved developer productivity at Meta, allowing engineers to write Python faster and more easily, with a positive developer experience. We hope that our prototype and findings can benefit the broader Python community.

To learn more about Meta Open Source, visit our open source site, subscribe to our YouTube channel, or follow us on Twitter, Facebook and LinkedIn.

First seen at developers.facebook.com
