

Building WhatsApp Message Templates for E-commerce




The WhatsApp Business Platform helps businesses communicate directly with their customers using APIs to send standardized messages, receive customer responses, and reply accordingly.

This walkthrough demonstrates how a fictitious grocery store that sells products online would keep customers up-to-date on their order status. Your app will use the Cloud API, hosted by Meta, to create WhatsApp message templates. Those templates can then be used to provide consistent and standardized information to e-commerce customers.

In the real world, order status changes in response to customer actions, such as signing up, paying with a credit card, or completing a purchase. You can accomplish this by integrating the WhatsApp Business Platform with your CRM or e-commerce systems, but that is outside the scope of this tutorial. Here, we'll build a manual update system so we can focus on creating and managing message templates.


First, we’ll go over creating a WhatsApp-powered Node.js web app. You can download the complete application code if you’d like a preview of where you’ll end up.

Follow the Set up Developer Assets and Platform Access tutorial to send and receive messages using a test phone number. Be sure to complete the steps for building the initial app on Meta for Developers, connecting it to WhatsApp, and associating it with a Business Manager.


You will also need an understanding of Node.js and all the required tools installed.

Building on a Minimal App with Node.js and Express

This article assumes you know how to build a basic app within Node.js. We’ll provide the sample app code and show how to implement WhatsApp message template functionality.

When you download the source code, you'll see this folder structure in your code editor. We'll work primarily in bin, public, routes, and views, as well as editing .env, app.js, and package.json:

  • bin folder: Contains executable binaries from your node modules.
  • public folder: Contains all the static files such as images, JavaScript, and CSS files.
  • routes folder: Contains routes registered in the app.js file. Routes are JavaScript modules that Express.js uses to associate an HTTP verb (GET, POST, PUT, and so on) and a URL path with a function that handles HTTP requests.
  • views folder: Contains the EJS/HTML files with markup logic that the EJS view engine uses to render web pages.
  • app.js file: Registers the routes and starts the Node.js web server.
  • .env file: Contains key and value pairs defining environment variables required by the project. This file allows configuring the application without modifying the project code.
  • package.json file: This is the heart of a Node.js project and contains the necessary metadata that the project needs to manage and install dependencies, run scripts, and identify the project’s entry point.

To get started, open the package.json file located in the project root folder:

{
  "name": "ecommerce-node",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "start": "node ./bin/www"
  },
  "dependencies": {
    "axios": "^0.27.2",
    "body-parser": "^1.20.0",
    "cookie-parser": "~1.4.4",
    "debug": "~2.6.9",
    "dotenv": "^16.0.1",
    "ejs": "~3.1.8",
    "express": "~4.16.1",
    "http-errors": "~1.6.3",
    "morgan": "~1.9.1"
  }
}

Note the packages listed in the dependencies section. Before running the application, open your console and run npm install to install them.

This example uses a minimal app written with Node.js, Express, and EJS. The app's homepage is built on Bootstrap and features a simple form you'll use later to create the message templates you send to customers.


Before we go any further, your Node.js application needs specific data from your developer account on Meta for Developers.

Edit the .env file in the project's root folder, replacing the bracketed values with your own data from your WhatsApp Business Account dashboard, as described in the tutorial linked in the Prerequisites section:
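The exact listing depends on your account, but based on the environment variables the code in this tutorial reads (process.env.VERSION, BUSINESS_ACCOUNT_ID, and so on), the .env file looks roughly like the sketch below. The bracketed placeholders are where your dashboard values go:

```
VERSION=[Meta-Graph-API-version, e.g. v15.0]
PHONE_NUMBER_ID=[your-WhatsApp-business-phone-number-ID]
BUSINESS_ACCOUNT_ID=[your-WhatsApp-Business-Account-ID]
ACCESS_TOKEN=[your-system-user-access-token]
RECIPIENT_WAID=[your-test-recipient-WhatsApp-ID]
TEMPLATE_NAME_PREFIX=[a-prefix-for-your-template-names]
```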


Creating Message Templates with Node.js and WhatsApp Business Platform

Now we’re ready to start building your first message template.

Your form action in the index.ejs file located in the views/ folder tells the app to POST to the /createTemplates route. Therefore, you need a new router to:

  • Handle the createTemplates HTTP POST request.
  • Verify if the templates already exist.
  • Obtain the configuration needed to create each message template.
  • Create the message templates using the Cloud API.
  • Redirect the app to the homepage after creating the templates.

Create a new file called createTemplates.js in the /routes folder with the code below. This file contains the POST endpoint required to handle the request from the homepage and calls another function to generate the message templates to work with:

const express = require('express');
const router = express.Router();
const bodyParser = require('body-parser');
require('dotenv').config();

const { messageTemplates } = require("../public/javascripts/messageTemplates");
const { createMessageTemplate, listTemplates } = require("../messageHelper");

router.use(bodyParser.json());'/', async function (req, res, next) {
  try {
    const templatesResponse = await listTemplates();
    const remoteApprovedTemplates = => t.status === 'APPROVED');

    const localTemplates = messageTemplates;

    for (let index = 0; index < localTemplates.length; index++) {
      const localTemplate = localTemplates[index];

      const queryResult = remoteApprovedTemplates
        .filter(t => === process.env.TEMPLATE_NAME_PREFIX + '_' +;

      console.log(`Creating template: ${}.`);

      if (queryResult.length > 0) {
        console.log(`Template ${queryResult[0].name} already exists.`);
        continue;
      }

      try {
        await createMessageTemplate(localTemplate);
        console.log(`Template ${} created successfully.`);
      } catch (error) {
        console.log(`Failed creating template ${}.`);
        console.log(;
      }
    }
  } catch (error) {
    console.log("Failed obtaining remote template list.");
    console.log(error);
  }

  console.log("Redirecting to the backoffice.");
  res.redirect('/backoffice');
});

module.exports = router;

Next, you need a function to encapsulate the Cloud API code to list and create message templates. Create a new file called messageHelper.js in the project’s root folder with the following code:

const axios = require('axios');

async function listTemplates() {
  return await axios({
    method: 'get',
    url: `${process.env.VERSION}/${process.env.BUSINESS_ACCOUNT_ID}/message_templates`
      + '?limit=1000'
      + `&access_token=${process.env.ACCESS_TOKEN}`
  });
}

The code above sends a GET request to the message_templates endpoint to determine which templates already exist. The reply tells you whether you must create a new template.

Open the messageHelper.js file and add the following code to it:

async function createMessageTemplate(template) {
  const config = {
    method: 'post',
    url: `${process.env.VERSION}/${process.env.BUSINESS_ACCOUNT_ID}/message_templates`,
    headers: {
      'Authorization': `Bearer ${process.env.ACCESS_TOKEN}`,
      'Content-Type': 'application/json'
    },
    data: {
      name: process.env.TEMPLATE_NAME_PREFIX + '_' +,
      category: "TRANSACTIONAL",
      components: template.components,
      language: template.language
    }
  };

  return await axios(config);
}

module.exports = {
  listTemplates: listTemplates,
  createMessageTemplate: createMessageTemplate
};

The code above accesses the Meta Graph API, making HTTP requests to the /message_templates endpoint to list (HTTP GET) and create (HTTP POST) templates. The requests pass:

  • The Meta Graph API version you’re using
  • Your WhatsApp Business account ID
  • The access token you generated for your system user

Note that the createMessageTemplate function used here creates a template with a specific data structure whose parameters fit the needs of the example e-commerce app. You can learn more and experiment with creating different parameters by visiting the Message Template Guidelines page.
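For reference, a local template entry compatible with the createMessageTemplate function above could look like the following sketch. The template name and body text here are illustrative (the real entries ship in the sample repository's messageTemplates.js), but the name, language, and components fields are exactly what createMessageTemplate reads:

```javascript
// Hypothetical local template entry; createMessageTemplate reads its
// .name, .language, and .components fields when calling the Cloud API.
const messageTemplates = [
  {
    name: "welcome",   // becomes TEMPLATE_NAME_PREFIX + "_welcome" remotely
    language: "en_US",
    components: [
      {
        type: "BODY",
        // {{1}} is a positional variable filled in when the message is sent
        text: "Hi {{1}}, welcome to our store! We'll send your order updates here."
      }
    ]
  }
];

module.exports = { messageTemplates };
```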

The app.js file defines the entry point for the Node.js app. In the sample application source code, the app.js file starts like this:

const createError = require('http-errors');
const express = require('express');
const path = require('path');
const cookieParser = require('cookie-parser');
const logger = require('morgan');

const app = express();

const indexRouter = require('./routes/index');
const usersRouter = require('./routes/users');

// view engine setup
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'ejs');

app.use(logger('dev'));
app.use(express.json());
app.use(express.urlencoded({ extended: false }));
app.use(cookieParser());
app.use(express.static(path.join(__dirname, 'public')));

app.use('/', indexRouter);
app.use('/users', usersRouter);

// catch 404 and forward to error handler
app.use(function(req, res, next) {
  next(createError(404));
});

// error handler
app.use(function(err, req, res, next) {
  // set locals, only providing error in development
  res.locals.message = err.message;
  res.locals.error = req.app.get('env') === 'development' ? err : {};

  // render the error page
  res.status(err.status || 500);
  res.render('error');
});

module.exports = app;

You can see how the app.js file above declares the required modules. Then it registers the routes necessary for the Express.js framework to decide which JavaScript functions handle each HTTP request the web application processes.


Modify the app.js file to create a router variable for the /createTemplates route:

 const createTemplatesRouter = require('./routes/createTemplates'); 

Then, allow the app to use the new createTemplatesRouter variable:

 app.use('/createTemplates', createTemplatesRouter); 

Now, to create the test message templates, create a new file called messageTemplates.js in the public/javascripts/ folder using the contents of the messageTemplates.js file in the sample repository.

Finally, rerun the app:

 > npm start 

Click Create Message Templates. You'll see the templates being created in the terminal window:


Now that you've created the templates, you can use them to send preformatted messages to WhatsApp users.

Creating the Back Office Page

The next step is creating mock-up data for the product catalog and the orders. You'll store this data in files kept separate from your application logic.

Create two new files called products.js and orders.js in the public/javascripts folder, using the contents of the products.js and orders.js files in the sample repository.
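If you'd like a sense of the data before fetching the repository files, the sketch below shows the general shape the route handlers expect. The field names (id, customer, statusId, items, deliveryDate, deadlineDays) match the properties the code in this tutorial reads, while the sample values themselves are made up:

```javascript
// Illustrative mock data; the real values live in the sample repository's
// products.js and orders.js. Field names match what the routes read.
const products = [
  { id: 1, name: "Bananas" },
  { id: 2, name: "Coffee Beans" }
];

// One status per stage of the order funnel (names are illustrative).
const statuses = ["New", "Payment Analysis", "Payment Approved",
  "Invoice Available", "Picked & Packed", "In Transit", "Delivered"];

// Labels for the Action buttons rendered on the Back Office page.
const orderActions = ["Approve", "Invoice", "Pack", "Ship", "Deliver"];

const orders = [
  {
    id: 1,
    customer: "Maria Silva",
    statusId: 1,                    // index into the message template list
    items: [{ productId: 1 }],
    deliveryDate: "2022-12-01",
    deadlineDays: 30
  }
];

module.exports = { products, orders, orderActions, statuses };
```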

You now need a new route for users to access the Back Office page:

Create a new file called backoffice.js in the /routes folder with the following content to render the Back Office page with the orders data:

const express = require('express');
const { products } = require("../public/javascripts/products");
const { orders, orderActions, statuses } = require("../public/javascripts/orders");
const { getMessageData, sendWhatsAppMessage } = require("../messageHelper");

const router = express.Router();

const viewModel = products;'/', async function (req, res, next) {
  const orderId = req.body.orderId;
  const order = orders.filter(o => == orderId)[0];
  order.statusId++;

  const data = getMessageData(process.env.RECIPIENT_WAID, order);
  try {
    const response = await sendWhatsAppMessage(data);
    console.log(response);
  } catch (error) {
    console.log(error);
  }

  renderBackoffice(res);
});

router.get('/', function (req, res, next) {
  renderBackoffice(res);
});

function renderBackoffice(res) {
  res.render('backoffice', {
    title: 'e-Commerce Demo for Node.js',
    products: viewModel,
    statuses: statuses,
    orders: orders,
    orderActions: orderActions
  });
}

module.exports = router;

Create a new file called backoffice.ejs in the /views folder with the contents of the backoffice.ejs file in the sample repository to render the order data using the EJS template syntax.

Next, open the app.js file and create a router variable for the /backoffice route:

 const backofficeRouter = require('./routes/backoffice'); 

Allow the app to use the new backofficeRouter variable:

 app.use('/backoffice', backofficeRouter); 

Finally, rerun the app:

 > npm start 

Now, click Create Message Templates to create all the message templates with the Cloud API; the app then redirects you to the Back Office view:


Sending Templated Messages

In production, a business-initiated conversation requires an approved message template; during development, the template must at least be marked active. These conversations can include customer care messages, appointment reminders, payment or shipping updates, and alerts.

You must send the message using the parameters required by each template.

Open the messageHelper.js file and delete the module.exports block. Next, add the sendWhatsAppMessage and getMessageData functions. Then, import the required JavaScript files for the products and message templates. Finally, add the new module.exports block using the code below:

const { messageTemplates } = require('./public/javascripts/messageTemplates');
const { products } = require('./public/javascripts/products');

async function sendWhatsAppMessage(data) {
  const config = {
    method: 'post',
    url: `${process.env.VERSION}/${process.env.PHONE_NUMBER_ID}/messages`,
    headers: {
      'Authorization': `Bearer ${process.env.ACCESS_TOKEN}`,
      'Content-Type': 'application/json'
    },
    data: data
  };

  return await axios(config);
}

function getMessageData(recipient, order) {
  const messageTemplate = messageTemplates[order.statusId - 1];

  let messageParameters;

  switch ( {
    case 'welcome':
      messageParameters = [
        { type: "text", text: order.customer.split(' ')[0] },
      ];
      break;
    case 'payment_analysis':
      messageParameters = [
        { type: "text", text: order.customer.split(' ')[0] },
        { type: "text", text: products[order.items[0].productId - 1].name },
      ];
      break;
    case 'payment_approved':
      messageParameters = [
        { type: "text", text: order.customer.split(' ')[0] },
        { type: "text", text: },
        { type: "text", text: order.deliveryDate },
      ];
      break;
    case 'invoice_available':
      messageParameters = [
        { type: "text", text: order.customer.split(' ')[0] },
        { type: "text", text: products[order.items[0].productId - 1].name },
        { type: "text", text: `${}` },
      ];
      break;
    case 'order_picked_packed':
      messageParameters = [
        { type: "text", text: order.customer.split(' ')[0] },
        { type: "text", text: },
        { type: "text", text: `${order.items.length}` },
      ];
      break;
    case 'order_in_transit':
      messageParameters = [
        { type: "text", text: order.customer.split(' ')[0] },
        { type: "text", text: },
        { type: "text", text: order.deliveryDate },
        { type: "text", text: `${order.items.length}` },
      ];
      break;
    case 'order_delivered':
      messageParameters = [
        { type: "text", text: order.customer.split(' ')[0] },
        { type: "text", text: },
        { type: "text", text: order.deadlineDays },
      ];
      break;
  }

  const messageData = {
    messaging_product: "whatsapp",
    to: recipient,
    type: "template",
    template: {
      name: process.env.TEMPLATE_NAME_PREFIX + '_' +,
      language: { "code": "en_US" },
      components: [{
        type: "body",
        parameters: messageParameters
      }]
    }
  };

  return JSON.stringify(messageData);
}

module.exports = {
  sendWhatsAppMessage: sendWhatsAppMessage,
  listTemplates: listTemplates,
  createMessageTemplate: createMessageTemplate,
  getMessageData: getMessageData
};

Note that the getMessageData function returns a data structure required for sending messages formatted for your back-office templates. You can experiment with other templates or create new ones by visiting the Message Templates page.
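To make the output concrete, here is roughly what getMessageData produces for a welcome-stage order. The recipient number and the "ecommerce" template name prefix are placeholders for this sketch:

```javascript
// Hypothetical /messages payload for the welcome-stage template.
const payload = {
  messaging_product: "whatsapp",
  to: "15550001111",               // placeholder recipient WhatsApp ID
  type: "template",
  template: {
    name: "ecommerce_welcome",     // TEMPLATE_NAME_PREFIX + '_' +
    language: { code: "en_US" },
    components: [
      {
        type: "body",
        parameters: [{ type: "text", text: "Maria" }]  // fills {{1}}
      }
    ]
  }
};

// getMessageData returns this object serialized with JSON.stringify.
const body = JSON.stringify(payload);
```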

Finally, run the app:

> npm start

On the Back Office page, click each Action button to move the orders along the sales funnel. Your app sends the appropriate templated WhatsApp message to the test phone number representing a customer. Each message corresponds to the new order status.

Pick the first order and click the Approve button. Your app makes a POST request to the /messages endpoint of the Cloud API. It then sends a message to the customer according to the payment_analysis template created by your app. The app then moves the order status to Payment Approved.

Now open WhatsApp to visualize the templated message:

Next, pick the second order, click Ship, and repeat the process above:

Finally, pick the fifth order and click Deliver:


The WhatsApp messages above have two essential elements: the message templates representing the current purchase order state in the e-commerce funnel and the raw data containing information about the customer or products.

In this exercise, the raw data came from static files (orders.js and products.js), but in a real-life app, you would need to pull that data from an enterprise data source such as a database, a CRM, or e-commerce APIs. This integration would also allow you to automate this process based on customer updates in those systems.

Creating and Managing Message Templates Using the Meta Business Manager UI

You have seen how the Cloud API allows your application to create message templates programmatically. In addition to this powerful feature, you can use the Meta Business Manager UI to create and modify templates directly through the web, without writing any code.

To reach this UI, log in to the Meta for Developers Platform, then navigate to your app. On the left navigation panel, under Products, click WhatsApp and then click Getting Started.

Under Step 2: Send messages with the API, you see this paragraph with a link:


“To send a test message, copy this command, paste it into Terminal, and press enter. To create your own message template, click here.”

Click the link. Alternatively, you can go directly to the Message Templates page.

To view one of the messages you created with the Node.js application, click the pencil icon, and you will see:

To create a new template, click the Create Message Template button and provide a category, a name, and one or more languages for your template:

Finally, type some text for your message template. The message body may contain variables, which are placeholders for data your app must provide later when it sends messages to WhatsApp users, just as in this article. To customize your templates further, you can include a message footer and a header with an image, video, or document.



The WhatsApp Business Platform is a simple tool to facilitate communication between companies and their customers. Using the Cloud API, hosted by Meta, you can easily connect your app, communicate efficiently with your customers, and promote your business.

This article demonstrated how to connect an e-commerce back-office app based on Node.js with the WhatsApp Business Platform to create message templates, provide information, and send messages to WhatsApp users using the Cloud API.

Ready for more? See our tutorial on building and sending media-rich messages with your app and the Cloud API.



Hermit: Deterministic Linux for Controlled Testing and Software Bug-finding





If you’ve used emulators for older platforms, you probably experienced a level of precise control over software execution that we lack on contemporary platforms. For example, if you play 8-bit video games emulated on your modern console, you are able to suspend and rewind gameplay, and when you resume, that incoming creature or projectile will predictably appear in the same spot because any randomness plays out deterministically within the emulator.

Yet, as a software engineer, when your multithreaded service crashes under transient conditions, or when your test is flaky, you don’t have these luxuries. Everything the processor and operating system contributes to your program’s execution—thread scheduling, random numbers, virtual memory addresses, unique identifiers—constitutes an unrepeatable, unique set of implicit inputs. Standard test-driven methodologies control for explicit program inputs, but they don’t attempt to control these implicit ones.

Since 2020, our team within DevInfra has worked to tackle this hard problem at its root: the pervasive nondeterminism in the interface between applications and the operating system. We’ve built the first practical deterministic operating system called Hermit (see endnote on prior art). Hermit is not a new kernel—instead it’s an emulation layer on top of the Linux kernel. In the same way that Wine translates Windows system calls to POSIX ones, Hermit intercepts system calls and translates them from the Deterministic Linux abstraction to the underlying vanilla Linux OS.

Details on sources of and solutions for nondeterminism can be found in our paper, “Reproducible Containers,” published in ASPLOS ’20, which showcased an earlier version of our system. We’ve open-sourced the new Hermit system and the underlying program-instrumentation framework named Reverie.

Example Applications

Now we explore some of the applications Hermit can be used for, and the role [non]determinism plays. In the next section, we go deeper into how Hermit works.


Flaky tests

First, flaky tests. They’re a problem for every company. Google, Apple, Microsoft and Meta have all published their experiences with flaky tests at scale. Fundamentally, the cause of flakiness is that test functions don’t really have the signatures that appear in the source code. An engineer might think they’re testing a function from a certain input type to output type, for example:

 test : Fn(Input) -> Output; 

Indeed, unless we’re doing property-based testing, then for unit tests it’s even simpler. (The input is empty, and the output is boolean.) Unfortunately, in reality, most tests may be affected by system conditions and even external network interactions, so test functions have a true signature more like the following:

 test : Fn(Input, ThreadSchedule, RNG, NetworkResponses) -> Output; 

The problem is that most of these parameters are outside of engineers' control. The test harness and test code, running on a host machine, are at the mercy of the operating system and any external services.

Caption: Irreproducible, implicit inputs from the operating system can affect test outcomes.

That’s where Hermit comes in. Hermit’s job is to create a containerized software environment where every one of the implicit inputs (pictured above) is a repeatable function of the container state or the container configuration, including command line flags. For example, when the application requests the time, we provide a deterministic time that is a function of program progress only. When an application thread blocks on I/O, it resumes at a deterministic point relative to other threads.


Hermit’s guarantee is that any program run by Hermit (without external networking) runs an identical execution—irrespective of the time and place it is run—yielding an identical stream of instructions and complete memory states at the time of each instruction. This means if you run your network-free regression test under Hermit, it is guaranteed not to be flaky:

 hermit run ./testprog 

Further, Hermit allows us to not merely explore a single repeatable execution of a program, but to systematically navigate the landscape of possible executions. Let’s look at how to control one specific feature: pseudo-random number generation (PRNG). Of course, for determinism, when the application requests random bytes from the operating system, we provide repeatable pseudo-random ones. To run a program with different PRNG seeds, we simply use a different --rng-seed parameter:

 hermit run --rng-seed=1 prog hermit run --rng-seed=2 prog 

In this case, it doesn't matter what language prog is written in or what random number generator library it uses; it must ultimately ask the operating system for entropy, at which point it receives repeatable pseudo-random inputs.


It is the same for thread scheduling: Hermit takes a command line seed to control thread interleaving. Hermit is unique in being able to reproducibly generate schedules for full Linux programs, not merely record ones that happen in nature. It generates schedules via established randomized strategies, designed to exercise concurrency bugs. Alternatively, full thread schedules can be passed explicitly in as input files, which can be derived by capturing and modifying a schedule from a previous run. We’ll return to this in the next section.

There is an upshot to making all these implicit influences explicit. Engineers dealing with flaky programs can steer execution as they desire, to achieve the following goals:

  • Regression testing: Use settings that keep the test passing.
  • Stress testing: Randomize settings to find bugs more effectively.
  • Diagnosis: Vary inputs systematically to find which implicit inputs cause flakiness.

Pinpointing a general class of concurrency bugs

As mentioned above, we can vary inputs to find what triggers a failure, and we can control schedules explicitly. Hermit builds on these basic capabilities to analyze a concurrency bug and pinpoint the root cause. First, Hermit searches through random schedules, similar to rr chaos mode. Once Hermit finds failing and passing schedules, it can analyze the schedules further to identify pairs of critical events that run in parallel and where flipping the order of those events causes the program to fail. These bugs are sometimes called ordering violations or race conditions (including but not limited to data races).

Many engineers use race detectors such as ThreadSanitizer or go run -race. Normally, however, a race detector requires compile-time support, is language-specific and works only to detect data races, specifically data races on memory (not files, pipes, etc.). What if, instead, we have a race between a client program written in Python, connecting to a server written in C++, where the client connects before the server has bound the socket? This nondeterministic failure is an instance of the “Async Await” flakiness category, as defined by an empirical analysis and classification of flaky tests.

By building on a deterministic operating system abstraction, we can explicitly vary the schedule to empirically find those critical events and print their stack traces. We start by using randomized scheduling approaches to generate a cloud of samples within the space of possible schedules:

Caption: A visualization of the (exponential) space of possible thread schedules, generated by different Hermit seeds. With the space organized by edit distance, the closest red and green dots correspond to the minimum edit distance observed between a passing and failing schedule.

We can organize this space by treating the thread schedules literally as strings, representing sequential scheduling histories. For example, with two threads A & B, “AABBA” could be an event history. The distance between points is the edit distance between strings (actually, a weighted edit distance preferring swaps over insertion or deletion). We can then take the closest pair of passing and failing schedules and then study it further. In particular, we can bisect between those schedules, following the minimum edit distance path between them, as visualized below.
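To make the distance metric concrete, here is a toy sketch (in JavaScript, not Hermit's actual Rust implementation) of a weighted edit distance over schedule strings, using the optimal string alignment variant of Damerau-Levenshtein. The 0.5 swap cost is an illustrative choice that makes transpositions cheaper than insertions or deletions, as described above:

```javascript
// Toy weighted edit distance over schedule strings such as "AABBA".
// Insertions, deletions, and substitutions cost 1; swapping two
// adjacent events costs 0.5, so the metric prefers transpositions.
function scheduleDistance(a, b) {
  const m = a.length, n = b.length;
  const d = Array.from({ length: m + 1 }, () => new Array(n + 1).fill(0));
  for (let i = 0; i <= m; i++) d[i][0] = i;
  for (let j = 0; j <= n; j++) d[0][j] = j;

  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      d[i][j] = Math.min(
        d[i - 1][j] + 1,        // delete an event from a
        d[i][j - 1] + 1,        // insert an event from b
        d[i - 1][j - 1] + cost  // substitute one event for another
      );
      // Cheap transposition of two adjacent events.
      if (i > 1 && j > 1 && a[i - 1] === b[j - 2] && a[i - 2] === b[j - 1]) {
        d[i][j] = Math.min(d[i][j], d[i - 2][j - 2] + 0.5);
      }
    }
  }
  return d[m][n];
}
```

With this metric, two schedules that differ only by flipping one pair of adjacent events, such as "AABBA" and "ABABA", sit at the minimum possible nonzero distance, which is exactly the kind of passing/failing pair the bisection looks for.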


Caption: A binary search between passing and failing schedules, probing points in between until it finds adjacent schedules that differ by a single transposition, a Damerau-Levenshtein distance of one.


At this point, we’ve reduced the bug to adjacent events in the thread schedule, where flipping their order makes the difference between passing and failing. We report the stack traces of these events as a cause of flakiness. (Indeed, it is only a single cause because there may be others if flakiness is overdetermined.)

Challenges and how it works

Here, we’ll cover a bit about how Hermit works, emphasizing pieces that are different from our prototype from ASPLOS ’20. The basic scenario is the same, which is that we set out to build a user space determinization layer, not allowing ourselves the liberty of modifying the Linux kernel or using any privileged instructions.

Challenge 1: Interposing between operating system and application

Unfortunately, on Linux there is not a standard, efficient, complete method to interpose between user-space applications and the operating system. So we’ve built a completely new program sandboxing framework in Rust, called Reverie, that abstracts away the backend—how program sandboxing is implemented. Reverie provides a high-level Rust API to register handlers: callbacks on the desired events. The user writes a Reverie tool that observes guest events and maintains its own state.

Reverie is not just for observing events. When you write a Reverie handler, you are writing a snippet of operating system code. You intercept a syscall, and you become the operating system, updating the guest and tool state as you like, injecting zero or more system calls to the underlying Linux operating system and finally returning control to the guest. These handlers are async, and they run on the popular Rust tokio framework, interleaving with each other as multiple guest threads block and continue.

The reference Reverie backend uses the ptrace system call for guest event interception, and a more advanced backend uses in-memory program instrumentation. In fact, Reverie is the only program instrumentation library that abstracts away whether instrumentation code is running in a central place (its own process), or inside the guest processes themselves via injected code.


Challenge 2: Inter-thread synchronization

Consider communication through sockets and pipes within the reproducible container. This is an area where our earlier prototype mainly used the strategy of converting blocking operations to non-blocking ones, and then polling them at deterministic points in time within a sequential execution. Because we run in user space, we don’t have a direct way to ask the kernel whether a blocking operation has completed, so attempting a non-blocking version of the syscall serves as our polling mechanism.

Hermit builds on this strategy and includes a sophisticated scheduler with thread priorities and multiple randomization points for exploring “chaos mode” paths through the code. This same scheduler implements a back-off strategy for polling operations to make it more efficient.

Hermit also goes beyond polling, implementing some inter-thread communication entirely inside Hermit. By including features like a built-in futex implementation, Hermit takes a small step closer to behaving like an operating system kernel. But Hermit is still vastly simpler than Linux and passes most of the heavy lifting on to Linux itself.

For specific features that Hermit implements directly, it never passes those system calls through to Linux. In the case of futexes, for example, it is hard or impossible to come up with a series of raw futex syscalls to issue to the kernel and achieve a deterministic outcome. Subtleties include spurious wake-ups (without the futex value changing), the nondeterministic selection of threads to wake, and the ineradicable moment in time after Hermit issues a blocking syscall to Linux, but before we know for sure if Linux has acted on it.

These issues are avoided entirely by intercepting each futex call and updating the Hermit scheduler’s own state precisely and deterministically. The underlying Linux scheduler still runs everything, physically, but Hermit’s own scheduler takes precedence, deciding which thread to unblock next.
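A toy model of this approach, with a FIFO wait queue in place of the kernel's nondeterministic waiter selection (illustrative Python, not Hermit's Rust implementation):

```python
from collections import deque

class DeterministicFutex:
    """Toy futex state kept by the tracer rather than the kernel."""
    def __init__(self, value=0):
        self.value = value
        self.waiters = deque()  # FIFO queue: wake order is fully deterministic

    def wait(self, tid, expected):
        # Never blocks in the kernel; the tracer tracks the waiter itself
        if self.value != expected:
            return "EAGAIN"  # value already changed: no spurious wakeups possible
        self.waiters.append(tid)
        return "BLOCKED"

    def wake(self, n=1):
        # Unlike Linux, which may pick any waiter, always wake the oldest first
        woken = []
        while self.waiters and len(woken) < n:
            woken.append(self.waiters.popleft())
        return woken

f = DeterministicFutex(value=7)
print(f.wait(1, 7))  # BLOCKED
print(f.wait(2, 7))  # BLOCKED
print(f.wait(3, 9))  # EAGAIN
print(f.wake(1))     # [1]
```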

Challenge 3: Large and complex binaries

Meta has no shortage of large and challenging binaries that use recent features of the Linux kernel. Nevertheless, after a couple of years of development, Hermit runs thousands of different programs deterministically today. This includes more than 90 percent of test binaries we run it on.

Other applications

While flaky tests and concurrency bugs are where we've invested most heavily, repeatable execution has many other applications, such as debugging and auditability.

There’s more than we can explore on our own! That is why we’re excited to open up these possibilities for the community at large.


Repeatable execution of software is an important capability for debugging, auditability and the other applications described above. Nevertheless, it’s been treated as an afterthought in the technology stacks we all depend on—left as the developer’s responsibility to create “good enough” repeatability for stable tests or a reproducible bug report.

Hermit, as a reproducible container, provides a glimpse of what it would be like if the system stack provided repeatability as an abstraction: a guarantee the developer could rely upon, just like memory isolation or type safety. We’ve only scratched the surface of what is possible with this foundational technology. We hope you’ll check out the open source GitHub repo and help us apply and evolve Hermit.

Note on prior art: Earlier academic research on Determinator and dOS explored this area over a decade ago. But these systems were, respectively, a small educational OS and an experimental fork of Linux. Neither was designed as a maintainable mechanism for running real software deterministically in practice.

To learn more about Meta Open Source, visit our open source site, subscribe to our YouTube channel, or follow us on Twitter, Facebook and LinkedIn.


Building Intuitive Interactions in VR: Interaction SDK, First Hand Showcase and Other Resources





Interactions are a crucial aspect of making immersive VR experiences. These interactions must be natural and intuitive to improve retention and enhance users’ overall experience. To make it easy for developers to build immersive experiences that create a realistic sense of presence, we released Presence Platform, which provides developers with a variety of capabilities, including Passthrough, Spatial Anchors, Scene understanding, Interaction, Voice and more.

Interaction SDK lets developers create high-quality hand and controller interactions by providing modular and flexible components to implement a large range of interactions. In this blog, we’ll discuss the capabilities, samples and other resources available to you to help you get started with Presence Platform’s Interaction SDK. It’s important to us at Meta that the Metaverse continues to be built in the open, so we created an open source sample featuring the Interaction SDK to inspire you to innovate in VR along with us.

If you’re interested in learning by watching or listening, check out our video on Building Intuitive Interactions in VR: Interaction SDK, First Hand Showcase and Other Resources on the Meta Open Source YouTube channel.

In this blog post, we'll discuss the capabilities of the Interaction SDK and walk through the Interaction SDK samples, the First Hand showcase app, our open source First Hand sample, best practices and other resources.

Introduction to Interaction SDK

What is Interaction SDK?

Interaction SDK is a library of components for adding controller and hand interactions to your experiences. Interaction SDK enables developers to incorporate best practices for user interactions and includes interaction models for Ray, Poke and Grab as well as hand-centric interaction models for HandGrab, HandGrabUse and Pose Detection. If you're looking to learn more about what these models and interactions entail, see the "available interactions" section below.

To get started with Interaction SDK, you’ll need to have a supported Unity version and the Oculus Integration package installed. To learn more about the prerequisites, supported devices, Unity versions, package layout and dependencies, check out our documentation on the Meta Quest for Developers website.

We strive to make your experience of using Interaction SDK as smooth as possible. New features are regularly added, along with bug fixes and improvements. To learn more about latest versions and keep up to date with the latest features and fixes, check out our documentation on upgrade notes where you can find the latest information on new features, deprecations and improvements.

How does Interaction SDK work?

All interaction models in the Interaction SDK are defined by an Interactor-Interactable pair of components. For example, a ray interaction is defined by the RayInteractor-RayInteractable pair of components. An Interactor is the component that acts on (hovers over, selects) Interactables. An Interactable is the component that gets acted on (can be hovered or selected) by Interactors. Together, the Interactor and Interactable components make up interactions.
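Conceptually, the pairing works like the following Python pseudocode; the real components are C# classes with a richer lifecycle, and these names are simplified stand-ins:

```python
class Interactable:
    """Toy stand-in for an Interactable: the thing that gets acted on."""
    def __init__(self, name):
        self.name = name
        self.state = "Normal"

class Interactor:
    """Toy stand-in for an Interactor: hovers over, then selects, Interactables."""
    def __init__(self):
        self.candidate = None

    def hover(self, interactable):
        self.candidate = interactable
        interactable.state = "Hover"

    def select(self):
        if self.candidate is not None:
            self.candidate.state = "Select"

cube = Interactable("cube")
ray = Interactor()
ray.hover(cube)
print(cube.state)  # Hover
ray.select()
print(cube.state)  # Select
```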

To learn more about the Interactor-Interactable lifecycle, check out the documentation on Interactables. You can also coordinate multiple Interactors based on state by using InteractorGroups. Our documentation on InteractorGroups goes over what an InteractorGroup is, how to use them and how to coordinate multiple Interactors with InteractorGroupMulti.

If you’re looking to make one interaction dependent on another ongoing interaction, you can link interactions with each other in a primary-secondary relationship. For example, if you’d first like to grab an object and then use it, the grab interaction would be the primary one, and the use interaction would be secondary to the grab. Check out our documentation to learn more about Secondary Interactions and how to use them.

What are the various Interactions available?

Interaction SDK allows developers to implement a range of robust and standardized interactions such as grab, poke, raycast and many more. Let’s dive into some of these interactions and how they work:

Hand Grab

Hand Grab Interactions provide a means of grabbing objects and snapping them to pre-authored hand grab poses. This interaction uses the HandGrabInteractor and HandGrabInteractable components.

Hand Grab interactions use per-finger information to inform when a selection should begin or end by using the HandGrabAPI, which indicates when the grabbing starts and stops, as well as the strength of the grabbing pose. While the HandGrabInteractor searches for the best interactable candidate and provides the necessary snapping information needed, the HandGrabInteractable indicates whether an object can be grabbed, how it moves, which fingers perform the grab, handedness information, finger constraints, how the hand should align and more. Check out the documentation on Hand Grab Interaction, where we discuss how this interaction works and how to customize the interactable movements and alignments.


Poke

Poke Interactions allow users to interact with surfaces via direct touch, such as pressing or hovering over buttons and interacting with curved and flat UI.

Poke uses PokeInteractor and PokeInteractable for this interaction. The PokeInteractor uses a collision surface called a Pointable Surface to compute hovering and selection. To learn more about how the Poke interaction works and how it can be used, check out our documentation.

PokeInteractables can also be combined with a PointableCanvas to enable direct touch Unity UI. Read about how to integrate Interaction SDK with Unity Canvas on our documentation about Unity Canvas Integration.

Hand Pose

Hand Pose Detection provides a way to recognize when tracked hands match particular shapes and wrist orientations.

The SDK provides six example poses as prefabs that show how pose detection works:

  • RockPose
  • PaperPose
  • ScissorsPose
  • ThumbsUpPose
  • ThumbsDownPose
  • StopPose

Based on the patterns defined in these poses, you can define your own custom poses. To learn more about what the Hand pose prefabs contain, check out our documentation on Pose Prefabs.

This interaction uses ShapeRecognition, TransformRecognition, VelocityRecognition and other methods to detect shapes, poses and orientations. The ShapeRecognizer allows you to specify the finger features that define a shape on a hand, and the Transform gives information about the orientation and position. Check out our documentation on Hand Pose Detection, where we discuss how Shape, Transform and Velocity Recognition works.
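As a rough illustration of shape matching, consider this Python sketch; the feature names are invented for the example and are not Interaction SDK identifiers:

```python
# A "thumbs up" described as a set of per-finger features (made-up names,
# not the SDK's finger-feature vocabulary).
THUMBS_UP = {"thumb": "open", "index": "curled", "middle": "curled",
             "ring": "curled", "pinky": "curled"}

def matches_shape(hand_features, shape):
    """A hand matches a shape when every listed finger feature agrees."""
    return all(hand_features.get(f) == v for f, v in shape.items())

hand = {"thumb": "open", "index": "curled", "middle": "curled",
        "ring": "curled", "pinky": "curled"}
print(matches_shape(hand, THUMBS_UP))  # True
```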


Gestures

Interaction SDK combines Sequences and active states to allow you to create gestures. Active states can be chained together using Sequences to create gestures such as a swipe.

A state becomes active when specified criteria, such as a shape, a transform feature or a velocity, are met. For example, a ShapeRecognizerActiveState component becomes active when all of the finger feature states on the hand match those in its listed shapes, that is, when the criteria of a specified shape are met.

Since Sequences can recognize a series of active states over time, they can be used to create complex gestures. These gestures can be used to interact with objects by hovering over them or even different interactions based on which direction the hand swipes in. To learn more about how Sequences work, check out our documentation.
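The idea of a Sequence advancing through active states can be sketched like this (illustrative Python; the SDK's actual Sequence and ActiveState components are C#):

```python
class Sequence:
    """Recognize an ordered series of active states (e.g., a swipe)."""
    def __init__(self, steps):
        self.steps = steps  # required active states, in order
        self.index = 0

    def feed(self, active_states):
        # Advance whenever the next required state is currently active
        if self.steps[self.index] in active_states:
            self.index += 1
            if self.index == len(self.steps):
                self.index = 0
                return True  # full gesture recognized
        return False

swipe_right = Sequence(["palm_left", "palm_center", "palm_right"])
frames = [{"palm_left"}, {"palm_left"}, {"palm_center"}, {"palm_right"}]
results = [swipe_right.feed(f) for f in frames]
print(results)  # [False, False, False, True]
```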

Distance Grab

The Distance Grab interaction allows a user to select and move objects that are usually beyond their hand reach. For example, this can mean attracting an object towards the hand and then grabbing it. Distance Grab supports other movements as well.

This interaction uses DistanceHandGrabInteractor and DistanceHandGrabInteractable for this interaction. Check out our documentation on Distance Hand Grab Interaction, where we discuss how this interaction works, how the best distant interactables are selected and how you can customize the behavior of the interactable.

Touch Hand Grab

Touch Hand Grab lets you grab objects with your hands based on their collider configuration, dynamically conforming the fingers to the object's surface. You can grab an interactable simply by touching its surface with your fingers, without needing to fully wrap your hand around the object.

It uses the TouchHandGrabInteractor and the TouchHandGrabInteractable for this interaction. The TouchHandGrabInteractor defines the touch interaction properties, while the TouchHandGrabInteractable defines the colliders that are used by the interactor to test overlap against during selection. To learn more about how these work, check out the documentation about Touch Hand Grab interaction.


Transformers

Transformers let you define different ways to manipulate Grabbable objects in 3D space, including how to translate, rotate or scale them, with optional constraints.

For example, this interaction could use the Grab, Transformers and Constraints on objects to achieve interactions such as moving an object on a plane, using two hands to change its scale, rotating an object and much more.

Ray Interactions

Ray interactions allow users to select objects via raycast and pinch. When using hands, the user aims with the hand's pointer pose and pinches to select.

It uses RayInteractor and RayInteractable for this interaction. The RayInteractor defines the origin of raycasts, their direction and a max distance for the interaction, whereas the RayInteractable defines the surface of the object being raycasted against. When using hands, the HandPointerPose component can be used to specify the origin and direction of the RayInteractor. Check out our documentation on Ray Interaction, where we discuss how RayInteractor and RayInteractable work as well as how Ray Interaction works with hands.
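The underlying selection test is an ordinary raycast; a generic ray-versus-sphere check in Python conveys the idea (the math is standard, and none of these names come from the SDK):

```python
# Toy ray-vs-sphere hit test standing in for ray-based selection.
def ray_hits_sphere(origin, direction, center, radius):
    # Vector from the ray origin to the sphere center
    oc = [c - o for o, c in zip(origin, center)]
    # Distance along the (normalized) ray to the point of closest approach
    t = sum(a * b for a, b in zip(oc, direction))
    if t < 0:
        return False  # sphere is behind the ray origin
    closest = [o + t * d for o, d in zip(origin, direction)]
    d2 = sum((p - c) ** 2 for p, c in zip(closest, center))
    return d2 <= radius ** 2

print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0, 0.5, 5), 1.0))  # True
print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (3, 0, 5), 1.0))    # False
```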

If you’d like to read more about the features mentioned above, check out our documentation on Interactions. There you’ll find information on Interactors, Interactables, Debugging interactions, Grabbables, the different types of interactions available and more.

Trying out the Interaction SDK Samples

To make it easy for developers to try out these interactions, our team has created the Interaction SDK Samples app, which showcases each of the features that we discussed earlier. The Samples app can be found on App Lab. It goes over each feature as a separate example to help you understand the interactions better.

Before you try the samples on your headset, make sure that you have hand tracking turned on for your headset. To do that, go to Settings on the headset, click on Movement Tracking and turn on Hand Tracking. You can leave the Auto Switch Between Hands And Controllers selected so that you can use hands if you put controllers down.

You will begin the sample in the welcome scene, which shows you all the sample interactions available for you to try. This sample contains examples for interactions, such as Hand Grab, Touch Grab, Distance Grab, Transformers, Hand Grab Use, Poke, Ray, Poses and Gesture interactions.

Let’s take a look at each of these samples:

The Hand Grab example: This example showcases the HandGrabInteractor. The Virtual Object demonstrates a simple pinch selection with no hand grab pose. The key demonstrates pinch-based grab with a single hand grab pose. The torch demonstrates curl-based grab with a single hand grab pose. The cup demonstrates pinch- and palm-capable grab interactables with associated hand grab poses.

The Touch Grab example: This example shows how you can grab an object by just touching it and not actually grabbing it. Touch Grab uses the physics shape of the object to create poses for the hand dynamically. In this example, you will find an example of kinematic as well as non-kinematic pieces to try this interaction with. Kinematic demonstrates the interaction working with non-physics objects, whereas Physical demonstrates the interaction of a non-kinematic object with gravity as well as the interplay between grabbing and non-grabbing with physical collisions.

The Distance Grab example: This example showcases multiple ways for signaling, attracting and grabbing distance objects. For example, Anchor At Hand anchors the item at the hand without attracting it, so that you can move it without actually grabbing it. Interactable To Hand shows an interaction where the item moves toward the hand in a rapid motion and then stays attached to it. And finally, Hand To Interactable shows an interaction where you can move the object as if the hand was there.

The Transformers example: This example showcases the GrabInteractor and HandGrabInteractable with the addition of Physics, Transformers and Constraints on objects. The map showcases translate-only grabbable objects with planar constraints. The stone gems demonstrate physics objects that can be picked up, thrown, transformed and scaled with two hands. The box demonstrates a one-handed rotational transformation with constraints. The figurine demonstrates hide-on-grab functionality for hands.

The Hand Grab Use example: This example demonstrates how a Use interaction can be performed on top of a HandGrab interaction. For this example, you can grab the water spray bottle and hold it with a finger on the trigger. You can then aim at the plant and press the bottle trigger to moisturize the leaves–it feels intuitive and natural.

The Poke example: This example showcases the PokeInteractor on various surfaces with touch limiting. It demonstrates poke interactions and pressable buttons with standalone buttons or with Unity canvas. Multiple visual affordances are demonstrated, including various button depths as well as Hover On Touch and Hover Above variants for button hover states. These affordances also include Big Button with multiple poke interactors like the side of the hand or palm, and Unity canvases that showcase Scrolling and pressing buttons.

The Ray example: This example showcases the RayInteractor interacting with a curved Unity canvas using the CanvasCylinder component. It demonstrates ray interactions with a Unity canvas and hand interactions that use the system pointer pose and pinch for selection. Ray interactions use the controller pose for ray direction and trigger for selection. It also showcases curved surfaces that allow for various material types: Alpha Cutout, Alpha Blended and Underlay.

The Poses example: This example showcases six different hand poses, with visual signaling of pose recognition. It has detection for Thumbs Up, Thumbs Down, Rock, Paper, Scissors and Stop. Poses can be triggered by either the left or right hand. It also triggers a particle effect when a pose starts and then hides the particle effect when a pose ends. In the past, hand poses needed to be manually authored for every item in a game, and it could take several iterations before the pose felt natural. Interaction SDK provides a Hand Pose Authoring Tool, which lets you launch the editor, reach out and hold an item in a way that feels natural, record the pose and use the pose immediately.

The Gestures example: This example demonstrates the use of the Sequence component combined with ActiveState logic to create simple swipe gestures. For example, the stone shows a random color change triggered by either hand. The picture frame cycles through pictures in a carousel view, with directional change depending on which hand is swiping. Also note that the gestures are only active when hovering over objects.

To learn more about how these samples work, and the various reference materials available to you to help you get started with these, check out our documentation on Example Scenes and Feature Scenes.

These examples showcase just a small subset of all the interactions that could be possible by using Interaction SDK. You can play around and build your own versions of these interactions by downloading the SDK in Unity. We’ll discuss how to do that later in the blog.

Trying out First Hand

Our team has also worked on a showcase experience called First Hand, which aims to demonstrate the capabilities and variety of interaction models possible with Hands. This is an official demo for Hand Tracking built with Interaction SDK. First Hand can be found on AppLab.

The experience starts with you in a clocktower. There’s a table in front of you and various objects that you can interact with. You can see your hands being tracked, and you will notice a light blinking on your left giving you an indication that it can be interacted with. It tells you that a delivery package is waiting for you to accept. You can grab the lift control and push the accept button. This demonstrates the HandGrab and Poke interaction.

The package is delivered, and it prompts you to pull the handle to provide power. Pulling the handle provides power, and it opens up. This demonstrates the HandGrab interaction with a one-handed rotate transform for rotating the lever. Next, you will need to type in the code written on the number pad to allow the box to unlock. This uses the Poke interaction.

A wheel is presented in front of you. You can use your hands to HandGrab and Turn. This will open up the box, and you will be presented with an interactable UI, which you can interact with to create your own robotic gloves.

You can use your hands to scroll through the UI and poke to select which section of your glove you want to create first. This demonstrates the Poke interaction. Select the first section, and it will present the first section of the glove in front of you. You are presented with three color options. Swipe to choose the color you'd like your glove section to be. This demonstrates the Swipe gesture detection feature. You can pick the piece up, scale it, rotate it or move it around. This uses the Touch Grab interaction and the Two Grab Free Transformer. Once you've selected a color, you can push the button to build it for you. You can move on to the next section and create the remaining parts of your glove in a similar manner.

Once your glove is ready, it presents you with three rocks that you can choose from to add to your glove. You can choose one of them by grabbing a rock, and then you’re able to crush the rock in your hand by squeezing it hard to reveal the crystal–a Use Grab interaction. Once the crystal is ready, you can place it on your glove to activate the crystal. You now have super power gloves!

An object starts flying around you and tries to attack you with lasers! You can use your super powers to shoot into the targets and to save yourself from getting hit. To shoot, open your palms and aim. To create a shield, fold your fingers and bring your hands together. This shows how Pose Detection can be used for even two-handed poses! Based on the right pose, it performs the appropriate actions.

Finally, you will be presented with a blue button on your glove, which you can Poke to select. It then teleports you to a world where you can see the clock tower that you were in, along with several potential objects you could interact with, leaving players to imagine what could be unlocked with these interactions at their disposal.

Building great hands-driven experiences requires optimizing across multiple constraints: technical constraints, like tracking capabilities; physiological constraints, like comfort and fatigue; and perceptual constraints, like how hand-to-virtual-object interactions are represented. This demo shows how the Interaction SDK unlocks several ways to make your VR experiences more intuitive and immersive, without you needing to set everything up from scratch. It showcases some of the hand interactions that we've found to be the most magical, robust and easy to learn, but that are also applicable to many categories of content.

To make it really easy for you to follow along and use these interactions in your own VR experiences, we’ve created an open source project that demonstrates the use of Interaction SDK to create the various interactions that you saw in the First Hand showcase app. In the next section, you will learn about the Interaction SDK package in Unity and how to set up a local copy of the First Hand sample.

Setting up a local copy of the First Hand sample


To set up a local copy of the sample, you will need a PC or Mac with Unity and the Oculus Integration package installed. We will be using Unity 2021.3.6f1 in this blog, but you can use any of the supported Unity versions. You can find more information on supported devices, Unity versions and other prerequisites in our documentation for Interaction SDK. Interaction SDK can be downloaded from the Unity Asset Store or from our Developers Page.

You should also make sure that you have turned on Development Mode on your headset. You can do that from your headset as well as from the Meta Quest Mobile App. To do it from the headset, go to Settings → System → Developer, and then turn ON the USB Connection Dialog option. Alternatively, if using the Meta Quest Mobile App, you can do it by going to Menu → Devices. Select the headset from the list, and then turn ON the Developer Mode option.

Once you have Unity installed, create a new 3D project. To install Interaction SDK, go to Window → Asset Store, search for the Oculus Integration package and click "Add to My Assets." To install the package, open the Package Manager, click Import and import the files into your project.

If Unity detects that a newer version of OVRPlugin is available, we recommend using the newest version. Go ahead and enable it.

If asked to use OpenXR for OVRPlugin, click on Use OpenXR. New features, such as the Passthrough API, are only supported through the OpenXR backend. You can switch between legacy and OpenXR backends at any time from Oculus → Tools → OVR Utilities Plugin.

It will confirm the selection, and it may offer to upgrade the Interaction SDK and perform a cleanup that removes obsolete and deprecated assets. Allow it to do so. If you'd rather do this at a later stage, you can do it anytime by going to Oculus → Interaction → Remove Deprecated Assets.

Once installed, Interaction SDK components can be found under the Interaction folder under Oculus.

Since you will be building for Quest, make sure to update your build platform to Android. To do that, go to File → Build Settings → Select Android and choose “Switch Platform.”

What does the Interaction SDK package contain?

The SDK contains three folders: Editor, Runtime and Samples. The Editor folder contains all the editor code for the SDK, the Runtime folder contains the core runtime components of Interaction SDK, and the Samples folder contains the scenes and assets used in the Interaction SDK Samples demo that we discussed earlier in the blog.

Under Samples, click on Scenes. Here you will find the Example Scenes, Feature Scenes and Tools. The Example Scenes directory includes all the scenes that you saw earlier in the Interaction SDK samples app from AppLab. The Feature directory includes scenes that are dedicated to the basics of any single feature, and the Tools directory includes helpers like a Hand Pose Authoring Tool. Let’s open one of these samples, the HandGrabExamples, and check out the scene setup, scripts and other components–as well as how these are used to create the interactions you saw in the samples. When opening any of the sample scenes, it might ask you to import TMP essentials. Go ahead and do that, and you will have your scene open.

Now, let’s look at some game objects and scripts to understand how they work. The OVRCameraRig prefab provides the transform object to represent the Oculus tracking space. It contains a TrackingSpace game object and the OVRInteraction game object under it. The TrackingSpace prefab allows users to fine-tune the relationship between the head tracking reference frame and your world. Under TrackingSpace, you will find a center-eye anchor (which is the main Unity camera), two anchor game objects for each eye, and left- and right-hand anchors for controllers.

The OVRInteraction prefab provides the base for attaching sources for Hands, Controllers or Hmd components that source data from OVRPlugin via OVRCameraRig. It contains the OVRControllerHands with left- and right-controller hands under it, along with the OVRHands with left and right hands under it.

To learn more about the OVRInteraction prefab and how it works with OVRHands, check out our documentation on Inputs.

There are two main scripts attached to the OVRCameraRig prefab: OVRManager and the OVRCameraRig. OVRManager is the main interface to the VR hardware and exposes the Oculus SDK to Unity. It should only be declared once. It contains settings for your target devices, performance, tracking and color gamut. Apart from these, it also has some Quest-specific settings:

Hand Tracking Support: This allows you to choose the type of input affordance you like for your app. You can choose to have Controllers Only, Controllers and Hands to switch between the two, or only Hands.

Hand Tracking Frequency: You can select the hand tracking frequency from the list. A higher frequency allows for better gesture detection and lower latencies but reserves some performance headroom from your application’s budget.

Hand Tracking Version list: This setting lets you choose the version of hand tracking you’d like to use. Select V2 to use the latest version of hand tracking. The latest update brings you closer to building immersive and natural interactions in VR without the use of controllers, and it delivers key improvements on Quest 2.

As discussed earlier, each interaction consists of a pair of Interactor and Interactable. Interactables are rigid body objects that the hands or controllers will interact with, and they should always have a grabbable component attached to them. To learn more about Grabbables, check out our documentation.

This sample demonstrates the HandGrabInteractor and HandGrabInteractable pair. The HandGrabInteractor script can be found under OVRInteraction → OVRHands → choose LeftHand or RightHand → HandInteractorsLeft/Right → HandGrabInteractor. The corresponding interactable can be found on the rigid body objects under Interactables → choose one of the SimpleGrabs → HandGrabInteractable, which contains the HandGrabInteractable script. A HandGrabInteractable indicates that an object can be Hand-Grab interacted with either hands or controllers. It requires both a Rigidbody component and a Grabbable component for 3D manipulation.

Now that you have a basic understanding of how a scene is set up using the OVRCameraRig, OVRManager, OVRInteraction and how to add Interactor-Interactable pairs, let’s learn how you can set up a local copy of the First Hand sample.

Cloning from GitHub and setting up the First Hand sample in Unity

The First Hand sample GitHub repo contains the Unity project that demonstrates the interactions presented in the First Hand showcase. It is designed to be used with hand tracking.

The first step to setting up the sample locally is to clone the repo.

First, you should ensure that you have Git LFS installed by running:

 git lfs install 

Next, clone the repo from GitHub by running:

 git clone 

Once the project has successfully cloned, open it in Unity. It might give a warning if you have a different Unity version. Accept the warning, and it will resolve the packages and open the project with your Unity version. It might also give a warning that Unity needs to update the URP materials for the project to open. Accept it, and the project will load.

The main scene is called the Clocktower scene. All of the actual project files are in Assets/Project. This folder includes all scripts and assets to run the sample, excluding those that are part of the Interaction SDK. The project already includes v41 of the Oculus SDK, including the Interaction SDK. You can find the main scene under Assets/Project/Scenes/Clocktower. Just like before, import TMP essentials if you don’t already have it, and the scene should load up successfully.

Before you build the scene, make sure that Hand Tracking Support is set to Controllers and Hands. To do this, go to Player → OVRCameraRig → OVRManager. Under Hand Tracking Support, choose "Hands and Controllers or Hands only." Set the Hand Tracking Version to "V2" to use the latest, and set the Tracking Origin Type to "Floor Level." Make sure you're building for the Android platform. To build, go to File → Build Settings and click on "Build." You can choose to click on "Build and Run" to directly load and run it on your headset, or you can install and run it from Meta Quest Developer Hub, which is a standalone companion development tool that positions Meta Quest and Meta Quest 2 headsets in the development workflow. To learn more about Meta Quest Developer Hub, visit our documentation.

If you choose to use Developer Hub, you can drag the APK into the application and click “Install” to install and run it on the headset.


When the sample loads, you will see the objects that you interacted with in the First Hand showcase app, along with options to enable Blast and Shield gestures and Distance Grab. You can interact with the UI in front of you using the Poke interaction, and with the glove pieces as you saw in the First Hand showcase app. Note that this sample showcases only the interactions, not the complete gameplay of the First Hand demo. Below is a list of the objects you can interact with in this sample and the kind of interaction each demonstrates:



  • Lift Control: “Hand Grab” and “Poke”
  • Glove Schematic Pieces: “Hand Grab” with “Two Grab Free Transformer”
  • Glove Pieces: “Touch Grab”
  • Schematic UI: “Poke”
  • Distance Grab Toggle: “Distance Grab”
  • Blast and Shield Toggle: “Pose Detection”

This was a quick walkthrough of the First Hand sample GitHub project that demonstrates the various interactions created using the Interaction SDK, which you saw in the First Hand showcase app. The SDK provides you with these interactions to make it really easy for you to integrate these into your own applications. Apart from this, our team has created documentation, tutorials and many other resources to help you get started with using Interaction SDK. Let’s discuss some resources and best practices that you can keep in mind when using the SDK to add interactions in your VR experiences.

Best practices and resources

Best practices

For hand tracking to be an effective means of interaction in VR, it needs to be intuitive, natural and effortless. However, hands don’t come with buttons or switches the way other input modalities do. This means there’s nothing hinting at how to interact with the system and no tactile feedback to confirm user actions. To solve this, we recommend communicating affordances through clear signifiers and continuous feedback on all interactions.

You can think of this in two ways: signifiers, which communicate what a user can do with a given object, and feedback, which confirms the user's state throughout the interaction. For example, in the First Hand sample, visual and auditory feedback plays an important role in prompting the user to interact with certain objects in the environment: the lift control beeps and glows to tell the player to press the button, a glow identifies objects that can be grabbed from a distance, and UI buttons change color when selected to confirm the user's choice. Prompts that guide the player through First Hand also worked especially well.


Hands are practically unlimited in how they move and the poses they can form. This opens up a world of opportunities, but it can also cause confusion. By limiting which hand motions the system interprets, we can achieve more accurate interactions. For example, in the First Hand sample, the player can grab certain objects, but not all of them. Some objects can also be moved and scaled: the player can grab and scale the parts of the glove, but cannot scale them along only one axis, which prevents deforming the object.

Snap Interactors are a great option for drop-zone interactions and for fixing items in place. You will see them used extensively in First Hand, especially during the glove-building sequence, where Snap Interactors trigger the sequence's progression. The Interaction SDK's Touch Hand Grab interaction comes in handy for small objects, especially ones without a natural orientation or grab point, without restricting the player to pre-authored poses. First Hand uses it for the glove parts, which the player can pick up in whatever way feels natural to them.

For objects out of a user's reach, raycasting comes in very handy for selecting objects. Distance Grab is another great way to let users interact with an object without walking up to it or reaching out for it. This not only makes your experience more intuitive, it also makes it more accessible. Distance Grab is easy to configure: a cone extends out from the hand and selects the best candidate item. When setting up an item for Distance Grab, it can save time to reuse the hand poses already set up for regular grab. In the First Hand sample, you can see how Distance Grab is used to grab objects from a distance, and how the visual ray from the hand confirms which object will be grabbed.

Tracking and ergonomics also play a huge role when designing experiences with hand tracking. The more of your hands the headset's sensors can see, the more stable tracking will be. Only objects within the headset's field of view can be detected, so avoid forcing users to reach for objects outside the tracking volume.

It’s also important to make sure the user can remain in a neutral body position as much as possible. This allows for a more comfortable ergonomic experience, while keeping the hand in an ideal position for the tracking sensors. For example, in the First Hand sample, the entire game can be experienced sitting or standing, with hands almost always in front of the headset. This makes the hand tracking more reliable while making the experience more accessible.


These were some of the best practices that we recommend keeping in mind when designing interactions using your hands in VR. Interaction SDK provides you with modular components that can make implementing these interactions easy so that you don’t have to start from scratch when building your experiences.


To get started with Interaction SDK, check out our documentation on the Interaction SDK overview, where we go over the setup, prerequisites and other settings for using Interaction SDK in Unity. We've added several sample scenes in the SDK to help you get started with developing interactions in your apps. To learn more, check out our documentation covering the example scenes, the feature scenes, and the interactions they showcase. Check out our blog to learn tips for getting started with Interaction SDK from the team that built First Hand.

To learn more about the OVRInteraction prefab, Hands and Controllers, and how to set them up, check out our documentation for Inputs. To learn how Interactors and Interactables work, and the various interactor-interactable pairs available, check out our documentation on Interactions, where we go over Interactors and their methods, Interactables and the lifecycle between the two, and Grabbable components. You can also learn more about the various kinds of interactions and how they work.

Our team has also created tutorials with examples and references to help you get started with incorporating Interaction SDK into your VR apps. To try these out, check out our documentation for Tutorials under Interaction SDK. There you'll find complete walkthroughs for how to Create Hand Grab Poses, Use Hands with OVR Controllers, Build Curved UI, Enable Direct Touch, and Build a Hand Pose Recognizer.

Check out the session “Building Natural Hands Experiences with Presence Platform” from Connect 2022 where we talk about the latest in natural inputs and interactions using the Presence Platform.


About Presence Platform

This blog supports the video “Building Intuitive Interactions in VR: Interaction SDK, the First Hand Sample and Other Resources.” In this blog, we discuss Presence Platform’s Interaction SDK and how it can drastically reduce development time if you’re looking to add interactions in your VR experiences. To help you create virtual environments that feel authentic and natural, we’ve created Presence Platform which consists of a broad range of machine perception and AI capabilities, including Passthrough, Spatial Anchors, Scene understanding, Interaction SDK, Voice SDK, Tracked keyboard and more.

To learn more about Meta Quest, visit our website, subscribe to our YouTube channel, or follow us on Twitter and Facebook. If you have any questions, suggestions or feedback, please let us know in the developer forums.

To learn more about Meta Open Source, visit our open source site, subscribe to our YouTube channel, or follow us on Twitter and Facebook.



How to Send Interactive Messages with the WhatsApp Business Platform





With the introduction of the Cloud API, hosted by Meta, on the WhatsApp Business Platform, companies are finding new ways to leverage the power of WhatsApp in their projects. One of those discoveries is the WhatsApp interactive messaging feature, which makes communication even richer and allows companies to engage with their customers beyond a standard text message.

This article highlights the WhatsApp interactive messaging features and explores how you can send interactive messages with the Cloud API. It demonstrates how to do this using a simple Node.js application.

Interactive Messages

Interactive messages are messages a user can interact with using prompts provided in the message. A user can, for example, select an item they would like from a list of product options.

Additionally, interactive messages enable you to provide tailored options for a user so they won’t have to go through an extensive list of products on your website. This type of messaging can help you achieve higher response rates.

The article looks at two interactive message features provided by the Cloud API — list messages and reply buttons — and how to use them.


List messages include a menu of up to ten options users can choose from. The menu offers an overview of options available to the user. These can include restaurant specials, delivery times, appointment times, t-shirt colors, and more.

Reply buttons offer a selection of up to three buttons in a message that a user can select from to reply to the message. Buttons let users respond to messages quickly with discrete answers, like yes or no.
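The Cloud API rejects payloads that exceed these limits server-side, but it can be handy to catch them before sending. The helper below is a hypothetical sketch (not part of the API) that checks the documented limits of ten list rows and three reply buttons:

```javascript
// Hypothetical pre-send guard for the documented limits:
// list messages allow at most 10 rows in total, reply buttons at most 3.
function validateInteractiveObject(interactive) {
  const errors = [];
  if (interactive.type === "list") {
    const rowCount = (interactive.action?.sections ?? []).reduce(
      (sum, section) => sum + (section.rows?.length ?? 0),
      0
    );
    if (rowCount === 0) errors.push("a list message needs at least one row");
    if (rowCount > 10) errors.push(`list has ${rowCount} rows; the maximum is 10`);
  } else if (interactive.type === "button") {
    const buttonCount = interactive.action?.buttons?.length ?? 0;
    if (buttonCount === 0) errors.push("a button message needs at least one button");
    if (buttonCount > 3) errors.push(`message has ${buttonCount} buttons; the maximum is 3`);
  }
  return errors;
}
```

An empty array means the object is within the limits and safe to send.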

You can chain these different message types in a conversation flow. For example, you can combine a list message and a reply button to allow the user to select an option and perform an action based on the selection.

You can send these message types only as replies to a user-initiated conversation, within the 24-hour window that opens when the user messages you.

Adding Interactive Messages to an App

Now that you’re familiar with interactive messages, here is how you can implement these interactive messages in your applications. Before jumping into this tutorial, register as a Meta Developer and create an app. You can find more information here.


Creating a Meta app gets you a temporary access token, test phone number, and phone number ID. You must add a recipient phone number to receive the example messages. You’ll need to keep note of these values for later.



You must have Node.js installed, along with an Integrated Development Environment (IDE) that can handle JavaScript and the project's other files; this article demonstrates development in Visual Studio Code. You'll also need to launch part of the application as a Heroku app, following the steps outlined in Heroku's docs but using the application code referenced here.

To send and receive WhatsApp messages using a test phone number, complete the Set up Developer Assets and Platform Access tutorials. You’ll use the URI for your Heroku app to set up the webhook.

The complete code for this project is available on GitHub, including the Node.js application code.

Set Up the Environment

To send an interactive message, you’ll make an HTTP request to:

 https://graph.facebook.com/{{API_VERSION}}/{{YOUR_PHONE_NUMBER_ID}}/messages 

Attach a message object, set type=interactive, and add the interactive object describing the message content.

To see this in action, you must create a Node.js application using the following steps.

Create a folder in which to store the application, and open that folder in VS Code. Open a terminal window and run the command npm init to set up a new npm package for the project. Accept all the defaults and reply yes to the prompts. You should now have a project folder containing a package.json file.

For convenience during development, keep all of your configuration in one place rather than scattered throughout the code.

Create a .env file at the project root with the following configurations:
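The original listing of the .env contents didn't survive extraction. Based on the variable names the index.js code reads, it would look something like this, with each placeholder replaced by the values you noted earlier (the Graph API version string shown is just an example):

```ini
ACCESS_TOKEN=your-temporary-access-token
VERSION=v15.0
PHONE_NUMBER_ID=your-test-phone-number-id
RECIPIENT_PHONE_NUMBER=your-recipient-phone-number
```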


Now create a new file called index.js at the same level as package.json. Install Axios to make asynchronous calls to the Graph API, along with dotenv to load the .env file: run the command npm install axios dotenv to install the latest versions.

Now that you’ve got the environment set up, copy and paste this code into the index.js file:

 const axios = require("axios");
 require("dotenv").config();

 const accessToken = process.env.ACCESS_TOKEN;
 const apiVersion = process.env.VERSION;
 const recipientNumber = process.env.RECIPIENT_PHONE_NUMBER;
 const myNumberId = process.env.PHONE_NUMBER_ID;

 // Update this variable to the interactive object to send
 const interactiveObject = {};

 const messageObject = {
   messaging_product: "whatsapp",
   recipient_type: "individual",
   to: `${recipientNumber}`,
   type: "interactive",
   interactive: interactiveObject,
 };

 axios
   .post(
     `https://graph.facebook.com/${apiVersion}/${myNumberId}/messages`,
     messageObject,
     {
       headers: {
         Authorization: `Bearer ${accessToken}`,
       },
     }
   )
   .catch((error) => console.error(error.response ? error.response.data : error.message));

Now, create an interactive object for each message type.

List Messages

Modify interactiveObject as follows:

 const interactiveObject = {
   type: "list",
   header: {
     type: "text",
     text: "Select the food item you would like.",
   },
   body: {
     text: "You will be presented with a list of options to choose from",
   },
   footer: {
     text: "All of them are freshly packed",
   },
   action: {
     button: "Order",
     sections: [
       {
         title: "Section 1 - Fruit",
         rows: [
           { id: "1", title: "Apple", description: "Dozen" },
           { id: "2", title: "Orange", description: "Dozen" },
         ],
       },
       {
         title: "Section 2 - Vegetables",
         rows: [
           { id: "3", title: "Spinach", description: "1kg" },
           { id: "4", title: "Broccoli", description: "1kg" },
         ],
       },
     ],
   },
 };

The object above defines a list message with items grouped into the two categories of fruits and vegetables. Each category has two items.


Now, you’re ready to send a list message. But remember, you can send lists and buttons only as a reply to a user-initiated message. So first, send a message from your test recipient’s phone number to your test business number. Once you’ve initiated the conversation, run the command node index.js in the terminal.

The recipient receives the list message and can respond by tapping an item and clicking the Send button.

With list messages, the user can select their preferred items from a provided list — here, one dozen oranges. However, current list capabilities only allow selecting one item at a time, and a customer who needs multiple selections will need to reuse the same list for each choice.

Reply Buttons

Next, let’s look at how to use reply buttons to reply quickly to messages or questions you send.

To send a message with reply buttons, assign the reply buttons object to interactiveObject as follows:

 const interactiveObject = {
   type: "button",
   header: {
     type: "text",
     text: "Dear valued customer.",
   },
   body: {
     text: "Would you like to receive marketing messages from us in the future?",
   },
   footer: {
     text: "Please select an option",
   },
   action: {
     buttons: [
       { type: "reply", reply: { id: "1", title: "Yes" } },
       { type: "reply", reply: { id: "2", title: "No" } },
       { type: "reply", reply: { id: "3", title: "Never" } },
     ],
   },
 };

This defines a message with three buttons that a user can click as a response.

Again, run the command node index.js. The recipient receives the message and can reply by clicking one of the buttons.

Combine List Messages and Reply Buttons

So far, you’ve seen how to send both lists and reply buttons in separate messages. To make this more exciting and engaging for your users, you can combine the list message and reply buttons into a single message flow. To do that, you have to be able to respond to messages coming in with the appropriate message.

For instance, based on the previous examples, when a user initiates a conversation, you send a list message with options for them to choose. When they respond with an option, you can then send a message with reply buttons so they can confirm their choice.

For this to work, you must know when a user sends a message. You’ll receive these notifications by setting up webhooks, a feature of the WhatsApp Business Platform.
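To make that branching concrete, here is one way to sketch the decision as a plain function. The function name and the two stubbed objects are illustrative, not part of the API; the payload shapes follow the Cloud API's webhook format, where a list selection arrives as an interactive message carrying a list_reply:

```javascript
// Illustrative routing logic: reply to an ordinary message with the
// list of options, and to a list selection with confirmation buttons.
// These stubs stand in for the interactive objects defined earlier.
const listInteractiveObject = { type: "list" /* ...header, body, sections... */ };
const buttonInteractiveObject = { type: "button" /* ...header, body, buttons... */ };

function chooseReply(incomingMessage) {
  // A list selection arrives as an interactive message with a list_reply.
  if (
    incomingMessage.type === "interactive" &&
    incomingMessage.interactive?.type === "list_reply"
  ) {
    return buttonInteractiveObject; // ask the user to confirm their choice
  }
  // Anything else (typically plain text) starts the flow with the list.
  return listInteractiveObject;
}
```

Your webhook handler would call a function like this for each incoming message and send the returned object in a new messages request.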


While the webhooks documentation goes into greater detail, here is a basic build to use in this tutorial:

  • Create an endpoint on a secure server that can process HTTPS requests. For this tutorial, you can create a Node.js application.
  • Add the code provided below and deploy it to Heroku. You can find more information here about how you can deploy a Node.js application to Heroku. You must create a free Heroku account before you can deploy.
  • Save the project in a GitHub repository and use the GitHub method to deploy it to Heroku.
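When you register the webhook, Meta verifies your endpoint with a GET request carrying hub.mode, hub.verify_token, and hub.challenge query parameters, and your server must echo the challenge back when the token matches. The handshake can be kept as a small pure function; the Express wiring in the comment is a sketch of how it would plug into the app:

```javascript
// Webhook verification handshake: Meta sends GET /webhook with
// hub.mode, hub.verify_token and hub.challenge; echo the challenge
// back when the token matches, otherwise refuse with 403.
function handleVerification(query, expectedToken) {
  if (
    query["hub.mode"] === "subscribe" &&
    query["hub.verify_token"] === expectedToken
  ) {
    return { status: 200, body: query["hub.challenge"] };
  }
  return { status: 403, body: "Forbidden" };
}

// In the Express app, this might be wired up as:
//   app.get("/webhook", (req, res) => {
//     const { status, body } = handleVerification(req.query, process.env.VERIFY_TOKEN);
//     res.status(status).send(body);
//   });
```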

Now, you’re ready to add the code to your application. Replace the index.js file with the index.js file on GitHub.

Run the command npm install body-parser dotenv express x-hub-signature axios to install the required dependencies. Now, commit this code and deploy it to Heroku. The flow then works like this: when the user sends a message, the webhook replies with the list message, and when the user picks an item from the list, the webhook replies with the confirmation buttons.

And there you have it! You’ve created an engaging, interactive message flow by combining list messages and button replies.


This article looked at two types of interactive messages, list messages and button replies, and showed how you could implement them in a Node.js application. It also looked at a more practical scenario where you combined the two message types into a single flow.

You can provide an engaging messaging experience to your customers with interactive messages. These engagement-boosting menu options, like list messages and button replies, help create a richer user experience.


With the WhatsApp Business Platform, you can fully automate customer interactions and achieve significantly higher response rates. Head to our Cloud API Documentation to learn more about building production-ready applications.
