Connecting a web app to your PyTorch model using Amazon SageMaker

Author: Cami Williams

Hello fellow programmers, it is me, Cami Williams, your friendly neighborhood open source Developer Advocate, back at it again with a PyTorch blog post.

I have decided to take on the challenge of deploying my PyTorch neural network (a.k.a. model), with the goal of hooking it up to a REST API so I can access it via a web application.

If you have yet to read my other blog posts about PyTorch, take a look at them here:

In these posts, I dive into what PyTorch is, how it is organized, and how to get up and running with training and testing a model. Next step: actually do something with it.

Admittedly, I come to the table with AWS knowledge and bias, so my go-to here is to check out their resources. In doing so I stumbled upon Amazon SageMaker: a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. *Italian chef kiss* Exactly what I want.

In theory, I would have my model hosted on SageMaker, create a Lambda function to take in data from the web, send it to the SageMaker instance, and send back information from the model, and then a REST API hosted through API Gateway to access that Lambda function. Here’s a semi-helpful diagram depicting the communication flow:

Before moving forward, it would be a MAJOR disservice to you if I didn't have future Cami tell you how much this all costs.

Future Cami: Hello PyTorch enthusiasts. The project Cami is about to build came back with a very big AWS bill of $314.23, entirely due to the fact that she used an ml.m4.4xlarge running instance of SageMaker, versus medium or small. Don’t make the same mistake. If you do, AWS is really understanding about mistakes made while experimenting; you can talk to customer support for help if something happens. Once I adjusted the instance, the bill was just shy of $30 (after a lot of testing). You have been warned.

AWS can get PRICEY and so experimenting with it is not always the best option. If you have found other options for deploying your model and attaching it to a REST API, please let the fam know by commenting on this post or writing a blog post that we can share! Otherwise, check out 7 Ways to Reduce Your AWS Bill, and remember to turn off whatever you aren’t actively using on AWS.

SO! What are we building today? We will use our MNIST handwritten numbers model from the Intro to PyTorch blog post to create a web app that detects digits of pi. A user will enter the website, draw a number on a web canvas, and if it is a digit of pi (i.e. 3.14159) according to our model, the digit will appear on the screen in the proper position. Thanks to Future Cami, again, for the following gif of the functioning web app:

I will build this app in the following order:

  • Host the MNIST model on SageMaker
  • Deploy the model through a SageMaker endpoint
  • Connect that endpoint to a Lambda function
  • Create a REST endpoint for the Lambda through API Gateway
  • Build the web app

If this is your cup of tea, then let’s start building!

Host MNIST model on SageMaker

When I started building this, I went through the Get Started with Amazon SageMaker Notebook Instances and SDKs tutorial to understand how to deploy my model. This will be the abridged version, appealing to those who just want to plug and chug code and keep moving. If you are interested in the nitty gritty, definitely read that tutorial. Note: I am going to be working in us-east-1. Pick whichever AWS region you desire.

Navigate to Amazon SageMaker Studio and click on Notebook Instances in the left-hand menu.

Click on “Create notebook instance”, and enter in the following fields:

  • Notebook instance name: PyTorchPi (or something similar)
  • Notebook instance type: ml.t2.medium, not ml.m4.4xlarge (Future Cami: this is where she made the billing mistake. Use medium.)
  • Elastic inference: none
  • IAM Role: Create new role
  • S3 buckets you specify: Any S3 bucket (were you to push this app to production, you would probably want to change the permissions here, but for our example we will just make it completely open).
  • Create role
  • Root access: Enable
  • Encryption key: No Custom Encryption
  • Create notebook instance

Your notebook instance should show “Pending” for some time. While we wait, let’s put together the code for our model. There’s more documentation on how to train and deploy PyTorch models with SageMaker that you can also read. Rather than doing that, we can go straight to the source, a GitHub repo! This gives us an example on how we can train and use a PyTorch MNIST model on SageMaker, and get model results from it by drawing our own digits.
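
For context, here is roughly what the notebook will do with the SageMaker Python SDK. Treat this as a sketch rather than the notebook’s exact code: the framework version, training instance type, and hyperparameters are assumptions that may differ from what you see in “pytorch_mnist.ipynb“.

```python
# A rough sketch of the training flow in the notebook (SageMaker Python SDK, v1-era names)
import sagemaker
from sagemaker.pytorch import PyTorch

role = sagemaker.get_execution_role()

# mnist.py is the training script we upload alongside the notebook
estimator = PyTorch(entry_point='mnist.py',
                    role=role,
                    framework_version='1.2.0',           # assumption: the notebook may pin another version
                    train_instance_count=1,
                    train_instance_type='ml.c4.xlarge',  # training instance, billed separately from the notebook
                    hyperparameters={'epochs': 10, 'backend': 'gloo'})  # illustrative values

# train on the MNIST data the notebook stages in S3
estimator.fit({'training': 's3://<your-bucket>/sagemaker/pytorch-mnist'})
```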

Cami, how did you find such a perfect example that matches exactly what we need to build this web app?

Dear reader, do not question a wonderful thing. Let’s keep going.

In this folder you will see “input.html“, “mnist.py“ and “pytorch_mnist.ipynb“.

  • “input.html“ : Code for the canvas to draw digits
  • “mnist.py“ : Code to construct, train, and test your model with some data
  • “pytorch_mnist.ipynb“ : A runnable notebook to train and host the model with MNIST data, and test with the “input.html“ app.

Within SageMaker, we will host “input.html“ and “mnist.py“, and probably never touch them again. “pytorch_mnist.ipynb“ is where we will interact with this code, potentially make changes, and ultimately deploy the model.

By this point, your PyTorchPi SageMaker Notebook instance should show a status of “InService”. If it does, under “Actions” select “Open Jupyter”. You should be directed to a new url: <NOTEBOOK-NAME>.notebook.<REGION>.sagemaker.aws/tree

Upload “input.html“, “mnist.py“, and “pytorch_mnist.ipynb“ to the Jupyter directory. Once they are uploaded, click on “pytorch_mnist.ipynb“.

This will open a new window with the runnable notebook. A couple quick housekeeping items:

  • Find the code block that says “predictor = estimator.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')“. Change it to say “predictor = estimator.deploy(initial_instance_count=1, instance_type='ml.t2.medium')“
  • Delete the “Cleanup” code block that says “estimator.delete_endpoint()“ so we can eventually deploy the SageMaker endpoint created from the code (“Edit”, then “Delete Cells”).

In the top menu, select “Cell” then “Run All” and see magic happen before your eyes: your model building, training, and optimizing. Also maybe go get a snack or something because this takes a while to finish running.

Scroll through the notebook to see what exactly is going down, or directly to the bottom of the notebook to see the final output.

Pro tip/explanation: If a number appears next to the code block, that means it ran successfully and you can see the output. If it is a “*” then it has yet to run or is currently running.

Once the code is done running, you will see a frame where you can draw a number (as coded in “input.html“). Draw a number in the box and then run the last code block. It will output the model’s prediction of what number your drawing represents.
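
For the curious, that last cell boils down to something like this. Treat it as a sketch; the variable names are assumed from the notebook, and “data“ holds the pixels captured by “input.html“.

```python
# Rough sketch of the final notebook cell (names assumed from the notebook)
import numpy as np

image = np.array([data], dtype=np.float32)   # 'data' holds the canvas pixels from input.html
response = predictor.predict(image)          # calls the hosted model
prediction = response.argmax(axis=1)[0]      # label with the highest score
print(prediction)
```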

Play around with this a bit for funzies. Once you are done, we are ready to deploy our model through this prediction endpoint.

Deploy model

Fortunately, the code we needed to run to deploy our model was already in this notebook! You can read more about how this is done under “Host”.

If you ever want to update the endpoint, you will have to re-run this code.
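
Speaking of the endpoint: remember Future Cami’s bill. A running endpoint is billed by the hour whether or not you call it, so when you are done experimenting, tear it down. A minimal sketch with boto3 (the endpoint name is a placeholder for your own):

```python
# Delete the endpoint when you are done so it stops accruing charges
import boto3

sagemaker_client = boto3.client('sagemaker')
sagemaker_client.delete_endpoint(EndpointName='pytorch-training-XXXX')  # your endpoint name here
```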

This aside, open a new tab, or navigate to the AWS SageMaker Studio again. In the left-hand menu, click on “Endpoints”.

You should be able to see your pytorch-training endpoint. Click on the endpoint to ensure that the creation time lines up with the last time you ran the notebook. Scroll to “Endpoint runtime settings” to also ensure your instance type is ml.t2.medium.

Assuming everything looks correct, take note of the Name and URL in Endpoint settings. This will be what we reference in our Lambda function. Ideally, we could connect this URL directly to API Gateway. That said, I want to make sure that it is protected by a Lambda function: my Lambda can parse the incoming data to ensure that it is in the correct format. If the sent-over data is incorrect, rather than checking it in the model (where it would be more costly time- and dollar-wise), I can handle that error in the Lambda.
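
To make that concrete, here is the kind of guard I mean. This is just a sketch (the helper name is mine, not code from the notebook); the real Lambda code comes later:

```python
# Sketch: reject malformed payloads before they reach the (pricier) model
import json

def is_valid_payload(payload):
    try:
        arr = json.loads(payload)
        # the model expects a 4-D array shaped [1][1][28][28]
        return (len(arr) == 1 and len(arr[0]) == 1 and
                len(arr[0][0]) == 28 and
                all(len(row) == 28 for row in arr[0][0]))
    except (ValueError, TypeError):
        return False
```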

Connect to Lambda

Now go to AWS Lambda, and click “Create function”. Select “Use a blueprint” and search for “microservice-http-endpoint-python”. This sets up a Lambda function that is intended to be a REST API endpoint using API Gateway.

Click on “Configure”, and enter in the following fields:

  • Function name: PyTorchPiFunction (or something similar)
  • Execution role: Create a new role from AWS policy templates
  • Role name: PyTorchSageMakerLambdaAPIGateway (or something similar)
  • Policy templates: Ensure “Simple microservice permissions” is selected.
  • API Gateway trigger: Create an API
  • API Type: REST API
  • Security: Open (again, were you to push this app to production, you would probably want to change the permissions here, but for our example we will just make it completely open).

Then click “Create function”. In the window that appears, make note of your Lambda ARN in the top right corner. You should see a diagram in the Designer view of your function connected to API Gateway. If you click on API Gateway here, you will see information about the API endpoint (we will configure this later). For now, click on the PyTorchPiFunction.

In the Function Code section, you should see the code for your Lambda function. Here, we want to be able to intake data from our web app, send it to the SageMaker endpoint, and return the result of the model in SageMaker. To do so, we should create an environment variable for the SageMaker endpoint name.

Scroll down to the Environment variables section and click “Manage environment variables”. Click on “Add environment variable” and set the “Key” to ENDPOINT_NAME, and the value to your SageMaker endpoint name.

Save this environment variable. Now, let’s write some code. Back in the Function code section, delete everything present, and paste the following code:

```python
import os
import io
import boto3
import json
import csv

# grab environment variables
ENDPOINT_NAME = os.environ['ENDPOINT_NAME']
runtime = boto3.client('runtime.sagemaker')

def lambda_handler(event, context):
    # print("Received event: " + json.dumps(event, indent=2))
    # 1. Take in the image data
    data = json.loads(json.dumps(event))
    payload = data['data']
    print("PAYLOAD: ", payload)
    # 2. Invoke the SageMaker endpoint
    response = runtime.invoke_endpoint(EndpointName=ENDPOINT_NAME,
                                       ContentType='application/json',
                                       Body=payload)
    print(response)
    # 3. Load the response from the model
    result = json.loads(response['Body'].read().decode())
    # 4. Take the label with the highest probability
    pred = max(result[0])
    predicted_label = result[0].index(pred)
    # 5. Send the prediction back in the response
    return predicted_label
```

This code does the following:

  • Takes in image data
  • Invokes the SageMaker endpoint
  • Prints and loads the response from the model
  • Takes the maximum from the result (i.e. the label/number that has the highest probability according to the model output)
  • Sends this information back in the response.

This code is hard to test, because we would need to get the multi-dimensional interpretation of the image… so trust me for now and we will test momentarily. If you are curious what that interpretation looks like, here is an example of the data this model would intake for a drawing that resembles an “8”:

Neato, right? We will use this to test via API Gateway next, and I will show you the code that translates an image into this multidimensional array. For now, save your Lambda function.
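
In the meantime, if you want a correctly shaped dummy payload for testing, you can build an all-zero image like this. The prediction won’t mean anything, but it exercises the plumbing:

```python
# Sketch: build a blank 28x28 image in the 4-D shape the endpoint expects
import json

blank = [[[[0] * 28 for _ in range(28)]]]   # shape: [1][1][28][28]
event = {"data": json.dumps(blank)}
# lambda_handler(event, None) would now reach the endpoint (requires AWS credentials)
```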

Before moving on to the REST endpoint, we need to update the IAM permissions of this function to allow access to SageMaker. Scroll to the top and click on “Permissions”. Under “Execution role”, click on the Role name:

This should open a new tab where we can attach the SageMaker policy. Click on “Attach policies” and search for “SageMaker”. Add “AmazonSageMakerFullAccess” and click “Attach policy”.

Now, navigate to API Gateway and click on “PyTorchPiFunction-API”, or in your Lambda function under “Configuration”, click on the API Gateway tab in the Designer view, then click on the “PyTorchPiFunction-API”.

Create REST endpoint

Because we pre-configured this endpoint when we created our Lambda function, it has some things already defined for us. For the sake of sanity, rather than explain this setup, I am going to advise that we delete it and start from scratch.

Under Resources you should have the PyTorchPiFunction. Go ahead and delete this by clicking on “Actions”, then “Delete Resource”.

Once that is deleted, under “Actions” select “Create Method”. A dropdown menu should appear, select POST and click on the little gray checkmark. Doing so should bring up a pane where you can set up the endpoint.

In the “Lambda Function” section, paste in your Lambda ARN that we noted earlier. You can also input the name of the function (“PyTorchPiFunction”), but I always do the ARN just to ensure they are matched properly. Click Save.

Now we will see all the method executions of this endpoint. Click on the Actions dropdown and select “Enable CORS”. Update to the following:

  • Access-Control-Allow-Headers : ‘Content-Type,X-Amz-Date,Authorization,X-Api-Key,x-requested-with’
  • Access-Control-Allow-Origin : ‘*’

Click “Enable CORS” and replace existing CORS headers. If you get a dialog asking to confirm method changes, select Yes, replace existing values.

Click on the POST endpoint to see all of the Method executions. We can leave “Method Request”, “Integration Request” and “Integration Response” as is. Select “Method Response”. You should see a line item that has the HTTP status as 200. Click on the dropdown for this.

Under Response Headers for 200, add the following headers if they aren’t already present:

  • X-Requested-With
  • Access-Control-Allow-Headers
  • Access-Control-Allow-Origin
  • Access-Control-Allow-Methods

Click back to Method Execution. Now we can finally test our REST API. Click on the Test lightning bolt. You should now see the Method Test pane.

If you scroll to the bottom of this pane and click “Test”, you should get a Response Body that resembles the following:

```json
{
  "errorMessage": "'data'",
  "errorType": "KeyError",
  "stackTrace": [
    "  File \"/var/task/lambda_function.py\", line 16, in lambda_handler\n    payload = data['data']\n"
  ]
}
```

If you do, this shows that we can correctly access our Lambda function! Yay! It is coming back with a response saying that it is missing our input data. In the request body, enter:

 { "data": "[[[[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]]]" } 

Remember the image above of the 8 as a multidimensional array? This is it! Click “Test” and the response body should come back as “8”. To get more examples of multidimensional arrays, you can refer to the runnable notebook on SageMaker:

  • Run all cells of the notebook if you haven’t already.
  • Scroll to the bottom of the notebook
  • Draw a number
  • In the last cell block, add the line “print([data])“
  • Copy the output multidimensional array and set “data” to this array in your API Gateway test.

If you are getting errors with any of this, you should check out the logs on AWS CloudWatch. To do so, navigate to CloudWatch on AWS, click on Log groups on the left-hand menu, and select your Lambda function name. There should be a line item for every time your API Gateway test calls your Lambda function. Common errors I had:

  • Parameter validation failed: Make sure that “data” and the multidimensional array are in quotes.
  • Expected 4-dimensional input for 4-dimensional weight 10 1 5 5, but got 3-dimensional input of size [1, 28, 28] instead: Ensure your multidimensional array has 4 [[[[ and ]]]] at the beginning and end.

Our API is working and fully connected to our model. In the Actions menu, select Deploy API. You can use either “default” or “[New Stage]”. For my own entertainment, I am going to make a new stage and call it “prod”. Once you have made your choice, select “Deploy”. This will take you to the Stage Editor. You shouldn’t have to make any changes here, but for sanity click “Save Changes”. Make a note somewhere of the Invoke URL; this will be the REST endpoint we call in our web app!
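
As a quick sanity check outside the AWS console, you can also hit the Invoke URL from any HTTP client. Here is a hedged sketch in Python (the URL is a placeholder for your own, and the blank image is just the dummy payload from earlier):

```python
# Sketch: calling the deployed REST endpoint from Python
import json
import requests

INVOKE_URL = 'https://<API-ID>.execute-api.us-east-1.amazonaws.com/prod'

blank = [[[[0] * 28 for _ in range(28)]]]                     # all-zero test image
response = requests.post(INVOKE_URL, json={'data': json.dumps(blank)})
print(response.json())                                        # the predicted digit
```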

PHEW! We did it! All the crazy AWS stuff is DONE. Now we can begin our end… building the web application.

Build the web app

I don’t really want to get too in the weeds with this web app. I figure that you probably are reading this post more for the PyTorch stuff over the HTML/CSS madness. So, with that assumption, I am just going to explain the JavaScript side of things. Cool? Cool.

In the HTML, I am using a canvas, similar to what we used in the “input.html“ we uploaded to SageMaker. There are ways to style how a user draws on the canvas; this can take some time to play with.

Once we are able to intake the drawing on the canvas properly, we need to get its pixel information. This took some playing around. But essentially when a user clicks “down” in the canvas, I execute the following code.

```javascript
function getMousePos(e) {
  if (!e) var e = event;
  if (e.offsetX) {
    mouseX = e.offsetX;
    mouseY = e.offsetY;
  } else if (e.layerX) {
    mouseX = e.layerX;
    mouseY = e.layerY;
  }
  x = Math.floor(e.offsetY * 0.05);
  y = Math.floor(e.offsetX * 0.05) + 1;
  for (var dy = 0; dy < 2; dy++) {
    for (var dx = 0; dx < 2; dx++) {
      if ((x + dx < 28) && (y + dy < 28)) {
        pixels[(y + dy) + (x + dx) * 28] = 1;
      }
    }
  }
}
```

The “pixels“ array is then updated with 0s and 1s representing where the user had drawn and the negative space around the drawing. Once the user hits “Enter” or is ready to send their drawing, I use a function from the “input.html“ to convert “pixels“ to the proper multidimensional array format for the model.

```javascript
function set_value() {
  let result = "[[["
  for (var i = 0; i < 28; i++) {
    result += "["
    for (var j = 0; j < 28; j++) {
      result += pixels[i * 28 + j] || 0
      if (j < 27) {
        result += ", "
      }
    }
    result += "]"
    if (i < 27) {
      result += ", "
    }
  }
  result += "]]]"
  return result
}
```

With the “result“ from “set_value()“, I call my REST API.

```javascript
async function sendImageToModel(canvas) {
  let data = set_value();
  let payload = { "data": data };
  let response = await fetch('<REST API URL>', {
    method: 'POST',
    body: JSON.stringify(payload),
    headers: {
      'Content-Type': 'application/json'
    }
  });
  let myJson = await response.json();
  return myJson;
}
```

When I have the numerical response from calling my API, I check it against the next expected digit of pi; if it matches, I show that digit on the screen in the proper position and clear the canvas for the user to provide the next input.
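
The matching logic itself is tiny. Here is a sketch of the idea in Python for readability (in the app it lives in the JavaScript, and the names here are mine):

```python
# Sketch of the pi-matching logic (illustrative names; the real version is client-side)
PI_DIGITS = [3, 1, 4, 1, 5, 9]  # extend with as many digits as you want to support
position = 0

def handle_prediction(predicted_label):
    global position
    if position < len(PI_DIGITS) and predicted_label == PI_DIGITS[position]:
        print(f"Show {predicted_label} at position {position}")  # stand-in for the UI update
        position += 1
```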

Party

Huzzah! We did it! We have successfully created a web application that accesses our MNIST PyTorch model.

To those of you who have read this far, I salute you! If you just scrolled to the bottom of the post, hello! If you have any questions, please feel free to leave a comment here or to tweet at me on Twitter (@cwillycs).

Thanks for reading, and happy hacking!

To learn more about Facebook Open Source, visit our open source site, subscribe on YouTube, or follow us on Twitter and Facebook.
