Levin slams Biden admin’s relationship with Facebook, calls Trump’s Big Tech lawsuit ‘slam dunk’

Conservative commentator and author Mark Levin slammed the Biden administration for its relationship with Silicon Valley in its bid to halt the spread of misinformation concerning vaccines. 

“[The Biden administration is] working with Facebook and other social media platforms, and telling them what they want them to do. They can call it disinformation, they can call it whatever they want, that’s quite beside the point,” Levin said Thursday on his radio show. Social media platforms are “doing the work of the government, it is doing the work specifically of the Biden administration and the Democrat Party,” he added. 

His comments came in response to White House Press Secretary Jen Psaki and Surgeon General Vivek Murthy detailing the White House’s relationship with Facebook on Thursday to stop misinformation on vaccine safety. 

Psaki doubled down on the administration’s relationship with Facebook on Friday and said it is “making sure social media platforms are aware of the latest narratives,” and even added that if a user is banned from one platform “for providing misinformation” that user should be banned from all others. 

“We don’t take anything down. We don’t block anything, Facebook, and any private sector company makes decisions about what information should be on their platform,” Psaki said in defense of the relationship, citing a report this week that 12 people are to blame for 65% of anti-vaccine misinformation on social media platforms. 


Levin, however, argued that Facebook is not a private company due to this relationship. 


He added that Psaki’s comments also make former President Donald Trump’s lawsuit against Big Tech a “slam dunk,” and said he hopes Trump’s lawyers were listening and will take note that the current administration is telling Silicon Valley “what they want them to do.”

“The Biden administration at multiple levels – senior staff, among others – are working with Facebook and other social media platforms to identify misinformation and thereby to sanction the misinformation and censor the social media posts, and to spread information. It can no longer be said that it is an independent business, conducting its own business,” Levin said.   

President Joe Biden argued, however, that misinformation on social media platforms was a matter of life and death, accusing the tech companies of “killing people” by allowing misinformation to remain on their platforms. 


“I mean they really, look, the only pandemic we have is among the unvaccinated, and that’s — they’re killing people,” Biden said Friday on the South Lawn of the White House.


During Levin’s show, he specifically took issue with Psaki saying the federal government has “increased disinformation research and tracking within the Surgeon General’s office. We’re flagging problematic posts for Facebook that spread disinformation.” 

Levin responded that the government is “tracking you if they don’t agree with you.”

“They’re collecting information if they don’t agree with you. They have adapted and they are out front now that they are using police state tactics against freedom of speech and speech generally,” he said. 


The conservative host also noted that members of the Biden administration, such as Vice President Kamala Harris, resisted the vaccine when the Trump administration first began rolling it out and are now working to censor people who are doing the same. 

“Biden and Harris, and their ilk, opposed the vaccines early on. They dismissed them. They said they were politicized. Remember? I remember.”

“Remember how many people in the media, and Big Tech rejected the idea that the virus may have been leaked from the lab in Wuhan?” he asked, referring to Facebook outright banning posts last year that discussed the theory that the virus came from a Chinese lab. 


The Wuhan lab theory was largely panned as a conspiracy before it regained traction in the mainstream this year, with Biden ordering a review of the theory and the World Health Organization acknowledging it was premature to discount the idea. 

“Politicians in government working with liberal Democrats who control these massive platforms censoring misinformation. Isn’t that amazing? The same ones that promoted Russia collusion, the same ones that silenced the New York Post when it came to Hunter Biden,” he added. 

Facebook did not immediately respond to Fox News’s request for comment on Levin’s remarks. 




Resources for Completing App Store Data Practice Questionnaires for Apps That Include the Facebook or Audience Network SDK


Updated July 18: Developers and advertising partners may be required to share information on their app’s privacy practices in third party app stores, such as Google Play and the Apple App Store, including the functionality of SDKs provided by Meta. To help make it easier for you to complete these requirements, we have consolidated information that explains our data collection practices for the Facebook and Audience Network SDKs.

Facebook SDK

To provide functionality within the Facebook SDK, we may receive and process certain contact, location, identifier, and device information associated with Facebook users and their use of your application. The information we receive depends on which SDK features third-party applications use, and we have structured the document below according to these features.

App Ads, Facebook Analytics, & App Events

Facebook App Events allow you to measure the performance of your app using Facebook Analytics, measure conversions associated with Facebook ads, and build audiences to acquire new users as well as re-engage existing users. There are a number of different ways your app can use app events to keep track of when people take specific actions such as installing your app or completing a purchase.

With Facebook SDK, there are app events that are automatically logged (app installs, app launches, and in-app purchases) and collected for Facebook Analytics unless you disable automatic event logging. Developers determine what events to send to Facebook from a list of standard events, or via a custom event.

When developers send Facebook custom events, these events could include data types outside of standard events. Developers control sending these events to Facebook either directly via application code or in Events Manager for codeless app events. Developers can review their code and Events Manager to determine which data types they are sending to Facebook. It’s the developer’s responsibility to ensure this is reflected in their application’s privacy policy.


Advanced Matching

Developers may also send us additional user contact information in code, or via the Events Manager. Advanced matching functionality may use the following data, if sent:

  • email address, name, phone number, physical address (city, state or province, zip or postal code and country), gender, and date of birth.

Facebook Login

There are two scenarios for applications that use Facebook Login via the Facebook SDK: Authenticated Sign Up or Sign In, and User Data Access via Permissions. For authentication, a unique, app-specific identifier tied to a user’s Facebook Account enables the user to sign in to your app. For Data Access, a user must explicitly grant your app permission to access data.

Note: Since Facebook Login is part of the Facebook SDK, we may collect other information referenced here when you use Facebook Login, depending on your settings.

Device Information

We may also receive and process the following information if your app is integrated with the Facebook SDK:

  • Device identifiers;
  • Device attributes, such as device model and screen dimensions, CPU core, storage size, SDK version, OS and app versions, and app package name; and
  • Networking information, such as the name of the mobile operator or ISP, language, time zone, and IP address.

Audience Network SDK

We may receive and process the following information when you use the Audience Network SDK to integrate Audience Network ads in your app:

  • Device identifiers;
  • Device attributes, such as device model and screen dimensions, operating system, mediation platform and SDK versions; and
  • Ad performance information, such as impressions, clicks, placement, and viewability.

First seen at developers.facebook.com



Enabling Faster Python Authoring With Wasabi


This article was written by Omer Dunay, Kun Jiang, Nachi Nagappan, Matt Bridges and Karim Nakad.


Motivation

At Meta, Python is one of the most used programming languages in terms of both lines of code and number of users. Every day, thousands of developers work with Python to launch new features, fix bugs and develop the most sophisticated machine learning models. As such, it is important to ensure that our Python developers are productive and efficient by giving them state-of-the-art tools.

Introducing Wasabi

Today we introduce Wasabi, a Python language service that implements the Language Server Protocol (LSP) and is designed to help our developers write Python more easily and quickly. Wasabi assists our developers in writing Python code with a series of advanced features, including:

  • Lints and diagnostics: These are available as the user types.
  • Auto import quick fix: This is available for undefined-variable lint.
  • Global symbols autocomplete: When a user types a prefix, all symbols (e.g. function names, class names) that are defined in other files and start with that prefix will appear in the autocomplete suggestion automatically.
  • Organize Imports + Remove unused: A quick fix that removes all unused imports and reformats the import section according to PEP 8 rules. This feature is powered by other tools built inside Meta, such as libCST, which helps with safe code refactoring.
  • Python snippets: Snippet suggestions are available as the user types for common code patterns.
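
To make the global symbols autocomplete concrete, here is a minimal, illustrative sketch of a prefix lookup over a prebuilt symbol index. The Symbol record, module names and lookup code are assumptions for the example, not Wasabi’s actual data model.

    from bisect import bisect_left
    from typing import List, NamedTuple

    class Symbol(NamedTuple):
        name: str    # e.g. a function or class name
        module: str  # module that defines it, e.g. "my_app.utils.dates"

    def prefix_matches(index: List[Symbol], prefix: str) -> List[Symbol]:
        """Return all indexed symbols whose names start with `prefix`.

        Assumes `index` is sorted by name, so a binary search finds the first
        candidate and we scan forward until the prefix no longer matches.
        """
        names = [s.name for s in index]
        start = bisect_left(names, prefix)
        matches = []
        for sym in index[start:]:
            if not sym.name.startswith(prefix):
                break
            matches.append(sym)
        return matches

    index = sorted(
        [Symbol("parse_config", "my_app.config"),
         Symbol("parse_module", "my_app.cst_utils"),
         Symbol("print_report", "my_app.report")],
        key=lambda s: s.name,
    )
    print(prefix_matches(index, "pars"))  # both parse_* symbols, defined in different files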

Additionally, Wasabi is a surface-agnostic service that can be deployed into multiple code repositories and various development environments (e.g., VSCode, Bento Notebook). Since its debut, Wasabi has been adopted by tens of thousands of Python users at Meta across Facebook, Instagram, Infrastructure teams and many more.

Figure 1: Example for global symbols autocomplete, one of Wasabi’s features

Language Services at Meta Scale

A major design requirement for language services is low latency and user responsiveness: autocomplete suggestions, lints and quick fixes should appear to the developer immediately as they type.


At Meta, code is organized in a monorepo, meaning that developers have access to all Python files as they develop. This approach has major advantages for the developer workflow, including better discoverability, transparency, easier library sharing and increased collaboration between teams. It also introduces unique challenges for building developer tools such as language services, which need to handle hundreds of thousands of files.


The scaling problem is one of the reasons we avoided off-the-shelf language services available in the industry (e.g., pyright, jedi) for these operations. Most of those tools were designed with a relatively small-to-medium workspace in mind, perhaps assuming thousands of files for large projects, for operations that require O(repo) information.

For example, consider the “auto import” quick fix for undefined variables. In order to suggest all available symbols, the language server needs to read all source files, parse them, and keep an in-memory cache of all parsed symbols in order to respond to requests. 
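
As a rough illustration of that naive strategy, the sketch below uses Python’s standard-library ast module to collect top-level function and class names from every file under a directory. This is an assumption-laden stand-in: Wasabi’s real indexing uses an error-recoverable parser and is considerably more involved.

    import ast
    from pathlib import Path
    from typing import Dict, List

    def extract_symbols(path: Path) -> List[str]:
        """Parse one Python file and return its top-level function and class names."""
        # A real indexer uses an error-recoverable parser and tolerates broken files.
        tree = ast.parse(path.read_text(encoding="utf-8"))
        return [
            node.name
            for node in tree.body
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
        ]

    def build_index(root: Path) -> Dict[str, List[str]]:
        """Naively parse every file up front and cache its symbols: O(repo) work."""
        return {str(p): extract_symbols(p) for p in root.rglob("*.py")}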

While this may scale to a single process on the development machine for small-to-medium repositories, the approach doesn’t scale to the monorepo use case. Reading and parsing hundreds of thousands of files can take many minutes, which means slow startup times and frustrated developers. Moving to an in-memory cache might help latency, but it also may not fit in a single machine’s memory.

For example, assume an average Python file takes roughly 10 ms to parse and extract symbols with a standard error-recoverable parser. On 1,000 files, initialization would take about 10 seconds, a fairly reasonable startup time. On 1 million files, it would take roughly 166 minutes, which is obviously far too long a startup time.
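
The arithmetic behind those estimates is straightforward:

    PARSE_MS_PER_FILE = 10  # assumed average time to parse a file and extract symbols

    for n_files in (1_000, 1_000_000):
        seconds = n_files * PARSE_MS_PER_FILE / 1000
        print(f"{n_files:>9,} files -> {seconds:,.0f} s (~{seconds / 60:.1f} min)")
    # 1,000 files -> 10 s; 1,000,000 files -> 10,000 s, i.e. roughly 166 minutes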


How Wasabi Works

Offline + Online Processing:

In order to support low latency in Meta-scale repositories, Wasabi is powered by two phases of parsing: background processing (offline) done by an external indexer, and local processing of locally changed “dirty” files (online):

  1. A background process indexes all committed source files and maintains the parsed symbols in a special database (glean) that is designed for storing code symbol information.
  2. Wasabi, a local process running on the user’s machine, calculates the delta between the base revision and the user’s current stack of diffs and uncommitted changes, and extracts symbols only from those “dirty” files. Since this set of “dirty” files is relatively small, the operation is very fast.
  3. Upon an LSP request such as auto import, Wasabi parses the abstract syntax tree (AST) of the file and then, based on the cursor’s context, queries both glean and the locally changed symbols, merges the results and returns them to the user (a simplified sketch of this merge follows the list).
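
A simplified sketch of that merge step is shown below; the glean_symbols and dirty_symbols inputs are hypothetical stand-ins for the real index query and local parse results.

    from typing import Dict, List, Set

    def auto_import_candidates(
        prefix: str,
        glean_symbols: Dict[str, List[str]],  # module -> committed symbols, from the background index
        dirty_symbols: Dict[str, List[str]],  # module -> symbols re-parsed from locally changed files
        dirty_modules: Set[str],
    ) -> Dict[str, List[str]]:
        """Merge indexed and local results, letting local parses win for dirty files."""
        merged: Dict[str, List[str]] = {}
        for module, names in glean_symbols.items():
            if module in dirty_modules:
                continue  # index entry is stale; the local parse below is authoritative
            merged[module] = [n for n in names if n.startswith(prefix)]
        for module, names in dirty_symbols.items():
            merged[module] = [n for n in names if n.startswith(prefix)]
        return {m: ns for m, ns in merged.items() if ns}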

As a result, all Wasabi features are low latency and available to the user seamlessly as they type.

Note: Wasabi currently doesn’t handle the potential delta between the revision that glean indexed (indexing happens once every few hours) and the local base revision that the user currently has. We plan to add that in the future.

Figure 2: Wasabi’s high level architecture

Ranking the Results

In some cases, due to the scale of the repository, there may be many valid suggestions in the set of results. For example, consider “auto import” suggestions for the “utils” symbol. There may be many modules that define a class named “utils” across the repository, therefore we invest in ranking the results to ensure that users see the most relevant suggestions on the top.


For example, auto import ranking is done by taking into account the following signals (a rough scoring sketch follows this list):

  • Locality:
    • The distance of the suggested module directory path from the directory paths of modules that are already imported in this file.
    • The distance of the suggested module directory path from the current directory path of the local file.
    • Whether the file has been locally changed (“dirty” files are ranked higher).
  • Usage: The number of occurrences the import statement was used by other files in the repository.
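
One way a ranking score could combine these signals is sketched below; the weights and helper functions are illustrative assumptions, not Wasabi’s actual ranking model.

    import os
    from typing import Iterable

    def path_distance(a: str, b: str) -> int:
        """Number of directory components by which two module paths differ."""
        pa, pb = a.split(os.sep), b.split(os.sep)
        common = 0
        for x, y in zip(pa, pb):
            if x != y:
                break
            common += 1
        return (len(pa) - common) + (len(pb) - common)

    def import_score(
        candidate_dir: str,
        current_dir: str,
        imported_dirs: Iterable[str],
        is_dirty: bool,
        usage_count: int,
    ) -> float:
        """Higher is better: prefer nearby, locally changed, and widely used modules."""
        locality = min(
            [path_distance(candidate_dir, current_dir)]
            + [path_distance(candidate_dir, d) for d in imported_dirs]
        )
        score = -float(locality)            # closer directories rank higher
        if is_dirty:
            score += 2.0                    # locally changed files get a boost
        score += 0.1 * usage_count          # imports used widely elsewhere rank higher
        return score

    # Candidates would then be sorted by this score, descending, before being shown.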

To measure our success, we measured the position of the accepted suggestion in the suggestion list and found that, in almost all cases, the accepted suggestion ranked among the top three suggestions.

Positive feedback from developers

After launching Wasabi in several pilot runs inside Meta, we have received a great deal of positive feedback from our developers. Here is one quote from a software engineer at Instagram:

“I’ve been using Wasabi for a couple months now, it’s been a boon to my productivity! Working in Instagram Server, especially on larger files, warnings from pyre are fairly slow. With Wasabi, they’re lightning fast 😃!”

“I use features like spelling errors and auto import several times an hour. This probably makes my development workflow 10% faster on average (rough guess, might be more, definitely not less), a pretty huge improvement!”

As noted above, Wasabi has made a meaningful difference in keeping our developers productive and delighted.


The metric to measure authoring velocity

In order to quantitatively understand how much value Wasabi has delivered to our Python developers, we have considered a number of metrics to measure its impact. Ultimately, we landed on a metric that we call ‘Authoring Velocity’ to measure how fast developers write code. In essence, Authoring Velocity is the inverse function of the time taken on a specific diff (a collection of code changes) during the authoring stage. The authoring stage starts from the timestamp when a developer checks out from the source control repo to the timestamp when the diff is created. We have also normalized it against the number of lines of code changed in the diff, as a proxy for diff size, to offset any possible variance. The greater the value for ‘Authoring Velocity,’ the faster we think developers write their code.


Figure 3: Authoring Velocity Metric Formula
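
Figure 3 gives the exact formula; the sketch below is only one plausible reading of the description above (lines changed divided by the time from checkout to diff creation), not the precise internal definition.

    def authoring_velocity(checkout_ts: float, diff_created_ts: float, lines_changed: int) -> float:
        """Inverse of authoring time, normalized by diff size (lines changed).

        Larger values mean the diff was authored faster relative to its size.
        """
        authoring_seconds = diff_created_ts - checkout_ts
        return lines_changed / authoring_seconds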

The result

With the metric defined, we ran an experiment to measure the difference Wasabi makes for our developers. Specifically, we selected ~700 developers who had never used Wasabi before and randomly split them into two independent groups at a 50:50 ratio. Developers in the test group had Wasabi enabled when they wrote Python, while nothing changed for those in the control group. For both groups, we compared the relative metric values before and after the Wasabi enablement. We found that for developers in the test group, the median authoring velocity increased by 20% after they started using Wasabi. Meanwhile, we saw no significant change in the control group before and after, which is expected.

Figure 4: Authoring Velocity measurements for control and test groups, before and after Wasabi was rolled out to the test group.

Summary

With Python’s unprecedented growth, it is an exciting time to be working on making the language better and easier to use. Together with its advanced features, Wasabi has successfully improved developers’ productivity at Meta, allowing them to write Python faster and more easily, with a positive developer experience. We hope that our prototype and findings can benefit more people in the broader Python community.


To learn more about Meta Open Source, visit our open source site, subscribe to our YouTube channel, or follow us on Twitter, Facebook and LinkedIn.

First seen at developers.facebook.com
