
Enabling Faster Python Authoring With Wasabi


This article was written by Omer Dunay, Kun Jiang, Nachi Nagappan, Matt Bridges and Karim Nakad.


Motivation

At Meta, Python is one of the most used programming languages in terms of both lines of code and number of users. Every day, thousands of developers work with Python to launch new features, fix bugs and develop some of our most sophisticated machine learning models. As such, it is important to keep our Python developers productive and efficient by giving them state-of-the-art tools.

Introducing Wasabi

Today we introduce Wasabi, a Python language service that implements the language server protocol (LSP) and is designed to help our developers write Python more easily and quickly. Wasabi assists developers with a series of advanced features, including:

  • Lints and diagnostics: These are available as the user types.
  • Auto import quick fix: This is available for the undefined-variable lint.
  • Global symbols autocomplete: When a user types a prefix, all symbols (e.g. function names, class names) that are defined in other files and start with that prefix will appear in the autocomplete suggestion automatically.
  • Organize imports + remove unused: A quick fix that removes all unused imports and reformats the import section according to PEP 8 rules. This feature is powered by other tools built inside Meta, such as LibCST, which helps with safe code refactoring.
  • Python snippets: Snippet suggestions are available as the user types for common code patterns.

Additionally, Wasabi is a surface-agnostic service that can be deployed to multiple code repositories and various development environments (e.g., VS Code, Bento notebooks). Since its debut, Wasabi has been adopted by tens of thousands of Python users at Meta across Facebook, Instagram, Infrastructure teams and many more.

Figure 1: Example of global symbols autocomplete, one of Wasabi’s features

Language Services at Meta Scale

A major design requirement for language services is low latency and responsiveness: autocomplete suggestions, lints and quick fixes should appear immediately as the developer types.


At Meta, code is organized in a monorepo, meaning that developers have access to all Python files as they develop. This approach has major advantages for the developer workflow, including better discoverability, transparency, easier library sharing and increased collaboration between teams. It also introduces unique challenges for building developer tools, such as language services that need to handle hundreds of thousands of files.


The scaling problem is one of the reasons we avoided the off-the-shelf language services available in the industry (e.g., Pyright, Jedi). Most of those tools were built with relatively small to medium workspaces in mind, perhaps assuming thousands of files at most for large projects, for operations that require O(repo) information.

For example, consider the “auto import” quick fix for undefined variables. In order to suggest all available symbols, the language server needs to read all source files, parse them and keep an in-memory cache of all parsed symbols so that it can respond to requests.
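To make the cost concrete, here is a minimal sketch of that naive approach in Python using the standard ast module. This is an illustration of the technique, not Wasabi’s actual implementation:

```python
import ast
from pathlib import Path


def index_symbols(root: str) -> dict[str, list[str]]:
    """Naive in-memory symbol index: maps a symbol name to the modules defining it."""
    index: dict[str, list[str]] = {}
    for path in Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except (SyntaxError, UnicodeDecodeError):
            continue  # a production server would use an error-recovering parser
        for node in tree.body:
            # Collect top-level functions and classes as importable symbols.
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
                index.setdefault(node.name, []).append(str(path))
    return index
```

Every cold start walks and parses the entire tree of files, which is exactly the cost that becomes prohibitive at monorepo scale.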

While this may work in a single process on the development machine for small to medium repositories, the approach doesn’t scale to the monorepo use case. Reading and parsing hundreds of thousands of files can take many minutes, which means slow startup times and frustrated developers. Moving to an in-memory cache might help latency, but the cache also may not fit in a single machine’s memory.

For example, assume an average Python file takes roughly 10 ms to parse and extract symbols with a standard error-recoverable parser. At that rate, 1,000 files take about 10 seconds to initialize, a fairly reasonable startup time. Running it on 1 million files would take roughly 167 minutes, which is obviously far too long a startup time.


How Wasabi Works

Offline + Online Processing:

In order to support low latency in Meta-scale repositories, Wasabi is powered by two phases of parsing: background processing of committed files by an external indexer (offline), and local processing of locally changed “dirty” files (online):

  1. A background process indexes all committed source files and maintains the parsed symbols in Glean, a database designed for storing code symbol information.
  2. Wasabi, a local process running on the user’s machine, calculates the delta between the base revision, the user’s stack of diffs and any uncommitted changes, and extracts symbols only from those “dirty” files. Since the set of “dirty” files is relatively small, this operation is very fast.
  3. Upon an LSP request such as auto import, Wasabi parses the file’s abstract syntax tree (AST), then, based on the cursor’s context, queries both Glean and the local symbols, merges the results and returns them to the user (see the sketch after this list).
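Here is the sketch referenced above, a hedged illustration of how the online/offline merge in step 3 might look. The glean_client interface and the index shapes are hypothetical names for illustration, not Wasabi’s real API:

```python
def global_symbol_suggestions(prefix: str, dirty_index: dict, glean_client) -> list:
    """Merge offline-indexed symbols with symbols from locally changed files."""
    # Online: symbols extracted from the small set of "dirty" files.
    local = {name: module for name, module in dirty_index.items()
             if name.startswith(prefix)}
    # Offline: query the pre-built index of all committed code.
    remote = glean_client.query_symbols(prefix=prefix)  # hypothetical call
    # Prefer local results: they reflect edits newer than the indexed revision.
    merged = {**remote, **local}
    return sorted(merged.items())
```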

As a result, all Wasabi features are low latency and available to the user seamlessly as they type.

Note: Wasabi currently doesn’t handle the potential delta between the revision that Glean indexed (refreshed once every few hours) and the base revision the user has locally. We plan to add support for that in the future.

Figure 2: Wasabi’s high level architecture

Ranking the Results

In some cases, due to the scale of the repository, there may be many valid suggestions in the result set. For example, consider “auto import” suggestions for the “utils” symbol: many modules across the repository may define a class named “utils,” so we invest in ranking the results to ensure that users see the most relevant suggestions at the top.


For example, auto import ranking takes the following into account (a toy scoring sketch follows the list):

  • Locality:
    • The distance of the suggested module directory path from the directory paths of modules that are already imported in this file.
    • The distance of the suggested module directory path from the current directory path of the local file.
    • Whether the file has been locally changed (“dirty” files are ranked higher).
  • Usage: The number of times the import statement is used by other files in the repository.
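As promised above, here is a toy scoring function that combines these signals. The weights and the directory-distance measure are illustrative assumptions, not Wasabi’s actual ranking model:

```python
def rank_score(candidate_dir: str, current_dir: str, imported_dirs: list,
               is_dirty: bool, usage_count: int) -> float:
    """Toy auto-import ranking: higher scores rank higher in the suggestion list."""

    def dir_distance(a: str, b: str) -> int:
        # Number of path components outside the shared prefix of the two paths.
        a_parts, b_parts = a.split("/"), b.split("/")
        common = 0
        for x, y in zip(a_parts, b_parts):
            if x != y:
                break
            common += 1
        return (len(a_parts) - common) + (len(b_parts) - common)

    # Locality: distance to the closest of the current dir and already-imported dirs.
    locality = min(dir_distance(candidate_dir, d)
                   for d in imported_dirs + [current_dir])
    score = usage_count - 5.0 * locality  # usage boosts, distance penalizes
    if is_dirty:
        score += 100.0  # locally changed files rank higher
    return score
```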

To measure our success, we tracked the position of each accepted suggestion in the suggestion list and found that in almost all cases the accepted suggestion ranked among the top three.

Positive feedback from developers

After launching Wasabi in several pilots inside Meta, we received a great deal of positive feedback from our developers. Here is one quote from a software engineer at Instagram:

“I’ve been using Wasabi for a couple months now, it’s been a boon to my productivity! Working in Instagram Server, especially on larger files, warnings from pyre are fairly slow. With Wasabi, they’re lightning fast 😃!”

“I use features like spelling errors and auto import several times an hour. This probably makes my development workflow 10% faster on average (rough guess, might be more, definitely not less), a pretty huge improvement!”

As noted above, Wasabi has made a meaningful difference in keeping our developers productive and delighted.


A metric to measure authoring velocity

In order to quantitatively understand how much value Wasabi delivers to our Python developers, we considered a number of metrics to measure its impact. Ultimately, we landed on a metric we call “Authoring Velocity,” which measures how fast developers write code. In essence, Authoring Velocity is the inverse of the time taken on a specific diff (a collection of code changes) during the authoring stage, which starts when a developer checks out from the source control repo and ends when the diff is created. We also normalize against the number of lines of code changed in the diff, as a proxy for diff size, to offset possible variance. The greater the value of Authoring Velocity, the faster we think developers write their code.
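As a hedged reading of that prose (the exact normalization in Figure 3 may differ), Authoring Velocity could be formalized as:

```python
def authoring_velocity(loc_changed: int, checkout_ts: float,
                       diff_created_ts: float) -> float:
    """Lines of code changed per unit of authoring time: the inverse of the
    time spent authoring, normalized by diff size as a proxy for its scope."""
    authoring_time = diff_created_ts - checkout_ts  # duration of the authoring stage
    return loc_changed / authoring_time
```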


Figure 3: Authoring Velocity Metric Formula

The result

With the metric defined, we ran an experiment to measure the difference Wasabi makes for our developers. Specifically, we selected ~700 developers who had never used Wasabi before and randomly split them into two independent groups at a 50:50 ratio. Developers in the test group had Wasabi enabled when they wrote Python, while nothing changed for developers in the control group. For both groups, we compared relative metric values before and after the Wasabi enablement. We found that the median authoring velocity in the test group increased by 20% after developers started using Wasabi, while the control group showed no significant change over the same period, as expected.

Figure 4: Authoring Velocity measurements for control and test groups, before and after Wasabi was rolled out to the test group.

Summary

With Python’s unprecedented growth, it is an exciting time to be working on making the language better and easier to use. Together with its advanced features, Wasabi has successfully improved developers’ productivity at Meta, allowing them to write Python faster and more easily, with a positive developer experience. We hope that our prototype and findings can benefit the broader Python community.


To learn more about Meta Open Source, visit our open source site, subscribe to our YouTube channel, or follow us on Twitter, Facebook and LinkedIn.


Understanding Authorization Tokens and Access for the WhatsApp Business Platform


The WhatsApp Business Platform makes it easy to send WhatsApp messages to your customers and automate replies. Here, we’ll explore authentication using the Cloud API, hosted by Meta.

We’ll start with generating and using a temporary access token and then replace it with a permanent access token. This tutorial assumes you’re building a server-side application and won’t need additional steps to keep your WhatsApp application secrets securely stored.

Managing Access and Authorization Tokens

First, let’s review how to manage authorization tokens and safely access the API.

Prerequisites

Start by making sure you have a developer account on Meta for Developers. You’ll also need WhatsApp installed on a mobile device to send test messages to.

Creating an App

Before you can authenticate, you’ll need an application to authenticate you.


Once you’re signed in, you see the Meta for Developers App Dashboard. Click Create App to get started.

Next, you’ll need to choose an app type. Choose Business.

After that, enter a display name for your application. If you have a business account to link to your app, select it. If not, don’t worry. The Meta for Developers platform creates a test business account you can use to experiment with the API. When done, click Create App.

Then, you’ll need to add products to your app. Scroll down until you see WhatsApp and click the Set up button:

Finally, choose an existing Meta Business Account or ask the platform to create a new one and click Continue:


And with that, your app is created and ready to use. You’re automatically directed to the app’s dashboard.

Note that you have a temporary access token. For security reasons, the token expires in less than 24 hours. However, you can use it for now to test accessing the API. Later, we’ll cover how to generate a permanent access token that your server applications can use. Also, note your app’s phone number ID because you’ll need it soon.


Click the dropdown under the To field, and then click Manage phone number list.

In the popup that appears, enter the phone number of a WhatsApp account to send test messages to.

Then, scroll further down the dashboard page and you’ll see an example curl call that looks similar to this:

curl -i -X POST https://graph.facebook.com/v13.0/<PHONE_NUMBER_ID>/messages \
  -H 'Authorization: Bearer <ACCESS_TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '{ "messaging_product": "whatsapp", "to": "<RECIPIENT_PHONE_NUMBER>", "type": "template", "template": { "name": "hello_world", "language": { "code": "en_US" } } }'

Note that the Meta for Developers platform inserts your app’s phone number ID and access token in place of the <PHONE_NUMBER_ID> and <ACCESS_TOKEN> placeholders shown above. If you have curl installed, paste the command into your terminal and run it. You should receive a “hello world” message in WhatsApp on your test device.

If you’d prefer, you can convert the curl request into an HTTP request in your programming language by creating a POST request that sets the Authorization and Content-Type headers as shown above and includes the JSON payload in the request body.
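For example, a minimal Python sketch using the third-party requests library might look like this; the environment variable names are assumptions, not part of the platform:

```python
import os

import requests

# Hypothetical variable names; supply your own phone number ID, token and recipient.
PHONE_NUMBER_ID = os.environ["WA_PHONE_NUMBER_ID"]
ACCESS_TOKEN = os.environ["WA_ACCESS_TOKEN"]
RECIPIENT = os.environ["WA_TEST_RECIPIENT"]

response = requests.post(
    f"https://graph.facebook.com/v13.0/{PHONE_NUMBER_ID}/messages",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}",
             "Content-Type": "application/json"},
    json={
        "messaging_product": "whatsapp",
        "to": RECIPIENT,
        "type": "template",
        "template": {"name": "hello_world", "language": {"code": "en_US"}},
    },
)
print(response.status_code, response.json())
```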

Since this post is about authentication, let’s focus on that. Notice that you’ve included your app’s access token in the Authorization header. For any request to the API, you must set the Authorization header to Bearer <ACCESS_TOKEN>.

Remember to use your actual token in place of the placeholder. Bearer tokens will be familiar if you’ve worked with JWT or OAuth2 tokens before. If you’ve never seen one, a bearer token is essentially a random secret string that you, as the bearer of the token, can present to an API to prove you’re allowed to access it.

Failure to include this header causes the API to return a 401 Unauthorized response code.


Creating a Permanent Access Token

Knowing that you need to use a bearer token in the Authorization header of an HTTP request is helpful, but it’s not enough. The only access token you’ve seen so far is temporary. Chances are that you want your app to access the API for more than 24 hours, so you need to generate a longer-lasting access token.

Fortunately, the Meta for Developers platform makes this easy. All you need to do is add a System User to your business account to obtain an access token you can use to continue accessing the API. To create a system user, do the following:

  • Go to Business Settings.

  • Select the business account your app is associated with.
  • Below Users, click System Users.
  • Click Add.
  • Name the system user, choose Admin as the user role, and click Create System User.
  • Select the whatsapp_business_messaging permission.
  • Click Generate New Token.
  • Copy and save your token.

Your access token is a random string of letters and numbers. Now, try re-running the earlier request using the token you just created instead of the temporary one:

curl -i -X POST https://graph.facebook.com/v13.0/<PHONE_NUMBER_ID>/messages \
  -H 'Authorization: Bearer <ACCESS_TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '{ "messaging_product": "whatsapp", "to": "<RECIPIENT_PHONE_NUMBER>", "type": "template", "template": { "name": "hello_world", "language": { "code": "en_US" } } }'

Your test device should receive a second hello message sent via the API.

Best Practices for Managing Access Tokens

It’s important to remember that you should never embed an App Access Token in a mobile or desktop application. These tokens are only for use in server-side applications that communicate with the API. Safeguard them the same way you would any other application secrets, like your database credentials, as anyone with your token has access to the API as your business.

If your application runs on a cloud services provider like AWS, Azure, GCP or others, those platforms have tools to securely store app secrets. Alternatively, there are freely available secret stores like Vault or Conjur. While any of these options may work for you, it’s important to evaluate them and choose what works best for your setup. At the very least, consider storing access tokens in environment variables rather than in a database or a file, where they’re easy to find during a data breach.
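As a small sketch of the environment-variable approach in Python (the variable name is an assumption; use whatever your deployment defines):

```python
import os


def load_whatsapp_token() -> str:
    """Read the access token from the environment instead of hard-coding it."""
    token = os.environ.get("WHATSAPP_ACCESS_TOKEN")
    if not token:
        # Fail fast rather than sending unauthenticated requests at runtime.
        raise RuntimeError("WHATSAPP_ACCESS_TOKEN is not set")
    return token
```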


Conclusion

In this post, you learned how to create a Meta for Developers app that leverages the WhatsApp Business Platform. You now know how the Cloud API’s bearer access tokens work, how to send an access token using an HTTP authorization header, and what happens if you send an invalid access token. You also understand the importance of keeping your access tokens safe since an access token allows an application to access a business’ WhatsApp messaging capabilities.

If you’re considering building an app for your business to manage WhatsApp messaging, why not try the Cloud API, hosted by Meta? Now that you know how to obtain and use access tokens, you can use them to access any endpoint in the API.


Now people can share directly to Instagram Reels from some of their favorite apps


More people are creating, sharing and watching Reels than ever before. We’ve seen the creator community dive deeply into video content and use it to connect with their communities. We’re running a limited alpha test that lets creators share video content directly from select integrated apps to Instagram Reels. Now creators won’t be interrupted in their workflow, making it easier for them to share and express themselves on Reels.

“With the shift to video happening across almost all online platforms, our innovative tools and services empower creativity and fuel the creator economy and we are proud to be able to offer a powerful editing tool like Videoleap that allows seamless content creation, while partnering with companies like Meta to make sharing content that much easier.” – Zeev Farbman, CEO and co-founder of Lightricks.

Starting this month, creators can share short videos directly to Instagram Reels from some of their favorite apps, including Videoleap, Reface, Smule, VivaVideo, SNOW, B612, VITA and Zoomerang, with more coming soon. These apps and others also allow direct sharing to Facebook, which is available for any business with a registered Facebook App to use.

We hope to expand this test to more partners in 2023. If you’re interested in being a part of that beta program, please fill out this form and we will keep track of your submission. We do not currently have information to share about general availability of this integration.

Learn more here about sharing Stories and Reels to Facebook and Instagram and start building today.


FAQs

Q. What is the difference between the Instagram Content Publishing API and Instagram Sharing to Reels?


A: Sharing to Reels is different from the Instagram Content Publishing API, which allows Instagram Business accounts to schedule and publish posts to Instagram from third-party platforms. Sharing to Reels is specifically for mobile apps to display a ‘Share to Reels’ widget. The target audience for the Share to Reels widget is consumers, whereas the Content Publishing API is targeted towards businesses, including third-party publishing platforms such as Hootsuite and Sprout Social that consolidate sharing to social media platforms within their third-party app.

Q: Why is Instagram partnering with other apps?

A: Creators already use a variety of apps to create and edit videos before uploading them to Instagram Reels – now we’re making that experience faster and easier. We are currently doing a small test of an integration with mobile apps that creators know and love, with more coming soon.

Q: How can I share my video from another app to Reels on Instagram?


A: How it works (Make sure to update the mobile app you’re using to see the new Share to Reels option):

  • Create and edit your video in one of our partner apps
  • Once your video is ready, tap share and then tap the Instagram Reels icon
  • You will enter the Instagram Camera, where you can customize your reel with audio, effects, Voiceover and stickers. Record any additional clips or swipe up to add an additional clip from your camera roll.
  • Tap ‘Next’ to add a caption, hashtag, location, tag others or use the paid partnerships label.
  • Tap ‘Share’. Your reel will be visible where you share reels today, depending on your privacy settings.

Q: How were partners selected?

A: We are currently working with a small group of developers that focus on video creation and editing as early partners. We’ll continue to expand to apps with other types of creation experiences.

Q: When will other developers be able to access Sharing to Reels on Instagram?

A: We do not currently have a date for general availability, but are planning to expand further in 2023.

Q: Can you share to Facebook Reels from other apps?


A: Yes, Facebook offers the ability for developers to integrate with Sharing to Reels. For more information on third-party sharing opportunities, check out our entire suite of sharing offerings.


What to know about Presto SQL query engine and PrestoCon


The open source Presto SQL query engine is used by a diverse set of companies to navigate increasingly large data workflows. These companies are using Presto in support of e-commerce, cloud, security and other areas. Not only do many companies use Presto, but individuals from those companies are also active contributors to the Presto open source community.

In support of that community, Presto holds meetups around the world and has an annual conference, PrestoCon, where experts and contributors gather to exchange knowledge. This year’s PrestoCon, hosted by the Linux Foundation, takes place December 7-8 in Mountain View, CA. This blog post will explore some foundational elements of Presto and what to expect at this year’s PrestoCon.

What is Presto?

Presto is a distributed SQL query engine for data platform teams. Presto users can perform interactive queries on data where it lives using ANSI SQL across federated and diverse sources. Query engines allow data scientists and analysts to focus on building dashboards and utilizing BI tools so that data engineers can focus on storage and management, all while communicating through a unified connection layer.
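As a quick illustration, here is how a query might be issued from Python using the open source presto-python-client; the connection details and table names are placeholders, not a specific deployment:

```python
import prestodb  # pip install presto-python-client

# Illustrative connection details; point these at your own coordinator.
conn = prestodb.dbapi.connect(
    host="presto-coordinator.example.com",
    port=8080,
    user="analyst",
    catalog="hive",     # the federated data source to query
    schema="default",
)
cur = conn.cursor()
# Standard ANSI SQL, regardless of where the underlying data lives.
cur.execute("SELECT region, count(*) AS n FROM orders GROUP BY region")
for row in cur.fetchall():
    print(row)
```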

In short, the scientist does not have to consider how or where data is stored, and the engineer does not have to optimize for every use case for the data sources they manage. You can learn more about Presto in a recent ELI5 video.



Presto was developed to solve the problem of petabyte-scale, multi-source data queries taking hours or days to return, resource and time constraints that make real-time analysis impossible. In most cases, Presto can return results for those same queries in less than a second, allowing for interactive data exploration.


Not only is Presto highly scalable, but it’s also extensible, allowing you to build your own connector for any data source Presto does not already support. At a lower level, Presto also supports a wide range of file types for query processing. Presto was open sourced by Meta and later donated to the Linux Foundation in September 2019.

Here are some Presto resources for those who are new to the community:

What is PrestoCon?

PrestoCon is held annually in the Bay Area and hosted by the Linux Foundation. This year, the event takes place December 7-8 at the Computer History Museum. You can register here. Each year at PrestoCon, you can hear about the latest major evolutions of the platform, how different organizations use Presto and what plans the Technical Steering Committee has for Presto in the coming year.

Presto’s scalability is especially apparent at PrestoCon: every year we hear from small startups as well as industry leaders like Meta and Uber about how they use the platform, for use cases both small and large. If you’re looking to contribute to open source, PrestoCon is also a great networking opportunity and a chance to hear the Technical Steering Committee’s vision for the project in the coming year.


Explore what’s happening at PrestoCon 2022:

Where is Presto used?

Since its release in November of 2013, Presto has been used as an integral part of big data pipelines within Meta and other massive-scale companies, including Uber and Twitter.

The most common use case is connecting business intelligence tools to vast data sets within an organization. This enables crucial questions to be answered faster and makes data-driven decision-making more efficient.

How does Presto work?

First, a coordinator takes your statement and parses it into a query. The internal planner generates an optimized plan as a series of stages, which are further separated into tasks. Tasks are then assigned to workers to process in parallel.

Workers then use the relevant connector to pull data from the source.


The workers return the output of each task until the stage is complete. The stage’s output is then passed to the next stage, where another series of tasks is executed.

The results of the stages are combined until the final result of the original statement is produced and returned to the coordinator, which returns it to the client.

How do I get involved?

To start using Presto, go to prestodb.io and click Get Started.

We would love for you to join the Presto Slack channel if you have any questions or need help. Visit the community page on the Presto website to see all the ways you can get involved and find other users and developers interested in Presto.

If you would like to contribute, go to the GitHub repository and read over the Contributors’ Guide.


Where can I learn more?

To learn more about Presto, check out its website for installation guides, user guides, conference talks and samples.

Make sure you check out previous Presto talks, and attend the annual PrestoCon event if you are able to do so.

To learn more about Meta Open Source, visit our open source site, subscribe to our YouTube channel, or follow us on Twitter, Facebook and LinkedIn.
