How to Save Money With Facebook A/B Testing—No Matter Your Budget

Whether you have one dollar or thousands of dollars to spend per day on your Facebook ads, keeping costs low by serving great ads is the name of the game. But what you think is a great ad may not actually be the greatest ad. With A/B testing, you can systematically nail down what the data says is the perfect ad for your audience.

But while Facebook A/B testing can help you to lower costs, you may be looking to lower costs precisely because you have a low budget. That’s why we’re going to focus on how to run effective A/B tests in a cost-effective manner.

Read on to learn:

  • Why Facebook A/B testing is important.
  • How to carry out Facebook A/B testing the right way.
  • Types of Facebook A/B tests you can run.
  • Ways to save time and money with Facebook A/B testing.
  • Budgeting tips for Facebook A/B testing.

Why Facebook A/B testing is important

Competing in Facebook ads means you can’t just set it and forget it. Once you launch a campaign, it won’t perform consistently on its own—you will need to check in on it, analyze performance, test constantly, and continue optimizing to see success. Here are some other reasons to run Facebook A/B tests.

  1. Facebook is finicky. What works well for some time may suddenly shift due to algorithmic changes or, quite often, unknown or irrational reasons.
  2. Lower costs. With testing, we can learn, gain valuable insights about our audiences, find out what does and does not resonate with them, and lower our costs over time.
  3. Reach your whole audience. What may captivate one user in an audience may not captivate another, so it’s best practice to always run tests in your campaigns.

For example, our custom window insert client, Indow Windows, saw a dip in their lead volume and an increase in cost per lead for one lower-funnel ad set.

To see if we could improve performance, we launched a duplicate campaign optimized for awareness rather than conversions. Within two weeks, we raised their lead volume for this audience by 1,800% and reduced their cost per lead by 94%.

While warmer audiences tend to perform well with lower-funnel campaign objectives, that isn’t always the case, as we can see here. An added benefit of testing upper-funnel campaign objectives with lower-funnel audiences is that you can accomplish your goals at significantly lower costs (more on that later).

How to carry out Facebook A/B testing (the RIGHT way)

According to AdEspresso, a good split test can increase ROI by 10x. So let’s make sure you know all of the steps of a good split test.

1. Determine your goal

The basic concept of Facebook A/B testing is to set a goal and then see which ad variation performs best to accomplish said goal. Here are some examples of goals you might be looking to achieve with your testing:

  • If you are looking to drive more website traffic at a lower cost, test optimizing for link clicks in one ad set and landing page views in another ad set to see which can get you lower CPCs.
  • If you are looking to drive more leads at a lower cost, test out a lead generation campaign objective versus a website traffic campaign objective (where you are sending people to a dedicated lead generation landing page) to see which can get you more leads at a lower cost.
  • If your goal is to drive more video views, test several video variations using the same copy, or one video with several ad copy variations, to see which one audiences watch longest according to the percentage-watched metrics.
  • Looking to improve your CTR? Test 2-3 ad copy variations using one creative in the first round of testing. If the CTR is low (below 1%), this is typically indicative of your ads not resonating with your audience, so test new ad copy. If that doesn’t work, test a new audience. If your CTR is above 1%, test new creatives to see if you can raise it further to a healthy 2%+.

2. Determine your variable

There are dozens of variables to test with your Facebook ads. We’ll go over variables in more depth in the next section, but here is the shortlist:

  • Campaign objectives
  • Audiences
  • Optimization
  • Ad level elements (copy, headline, creative, CTA, etc.)

For this example, let’s say our goal is to increase conversions and our variable is ad copy.

3. Launch your first round of testing

In order for us to understand which ad copy variation is the winner, we will want to launch 2-3 ads that are identical except for the ad copy. If we test more than one variable, then we won’t know if it was truly the ad copy that made the difference.

4. Use Facebook’s data AND your own

As your test runs, check in to see how it’s going in the first few days. Make sure the ad set is out of the “Learning Period” before making any optimizations. What you may notice is that Facebook will somewhat quickly choose a favorite one or two of your active ads based on performance. But use your data as well to determine which ad(s) is/are the true winners.

If, for example, Facebook serves fewer impressions to an ad, but your data shows that it has a higher CTR or that it’s driving conversions at a lower cost, keep it running! Again, use the data.

5. Disable underperforming ads and move to the next variable

Once you run your ads long enough to see how performance is going, disable the ad(s) that were your underperformers and then determine what you want to test next. Oftentimes, after testing ad copy, we will then test creatives. So we’ll apply the winning ad copy to each of the ads, but change out the creatives to see which combination of copy and creative is driving the most conversions.

6. Rinse and repeat

Continue building with one single variable at a time. The more we test, find winners, and improve our performance, the more Facebook rewards us with lower ad costs.

What to A/B test in your Facebook ads

As mentioned above, there are several different variables you can test in your Facebook ads. Just make sure that you’re picking the best one for the metric you want to improve, and that you’re only testing one variable at a time.

Here is a closer look at variables you can test:

1. Campaign objectives

Currently, there are 11 campaign objectives to choose from.

Lead gen vs traffic campaign

Perhaps you want to see whether you can drive more leads using a lead generation campaign or a traffic campaign. We do this often for clients that want to increase sales leads or acquire more email subscribers.

What we’ve seen from testing this across accounts and verticals is that lead gen campaigns drive higher lead volumes at lower costs; however, they also tend to produce lower quality leads.

On the flip side, a traffic campaign sending users to a dedicated lead generation landing page to complete a form off of Facebook results in fewer leads at slightly higher costs. However, lead quality tends to be higher, as these people are interested enough to leave the social platform and complete your form.

While this has been our experience, see how it performs for you and whether quantity or quality is your goal.

Website traffic vs conversion campaigns

Another way to A/B test campaign objectives is to see if you can drive more purchases or other conversions through a website traffic campaign versus a conversions campaign. Both can accomplish your conversion goal, but in some instances we’ve seen traffic campaigns drive more purchases at a lower cost. Upper-funnel campaigns tend to be lower in cost, so it may be a worthwhile test for you and an effective way to make your budget go further.

2. Audiences

Test various native and custom audiences in your ad sets. If your goal is to drive people to purchase a water bottle, you might test one audience of people interested in water bottles, another of people interested in a competitor’s water bottle brand, and a third made up of your customer lookalikes.

3. Optimization

Each campaign type has optimization goal settings within its ad sets. For example, if you use the website traffic campaign, you can optimize for link clicks or landing page views. Test one ad set with each to see which performs better for you. Do you want more people clicking through to the site and getting tagged with the Pixel, or perhaps fewer but more interested people who allow the landing page to fully load? Which one is lower in cost and can still accomplish your goals?

4. Ad level elements

There are plenty of other options to test at the ad set level, but let’s move down to the ad level, as it too contains a number of variables that can make or break a campaign and impact costs. For example:

  • Ad copy length (short vs long)
  • Headlines
  • Creatives (single image/video or carousel)
  • Calls to action
  • Landing pages

Determine which you want to test at a given time with your various audiences and campaigns. Remember, don’t test more than one variable at a time, unless, of course, you’re running dynamic ads.

3 cost-effective methods of Facebook A/B testing

We know that testing our Facebook ads will ultimately save us money and increase revenue, but it can get expensive. Here are some tactics you can use to test your ads economically.

1. Run dynamic ad campaigns

Manual testing of ads is great, but if you want to move faster in serving ads, testing, learning, and optimizing on a limited budget, dynamic ads are an excellent way to do this.

Dynamic ads quickly and effectively test various ad-level assets while saving time and money. Letting the system assemble combinations of ad copy and creatives for you, especially in your prospecting campaigns, can give you quick insights through this system-led testing option.

If you have a product catalog, set up dynamic product ads in your retargeting campaigns so users who previously looked at those products but didn’t purchase can come back and complete their purchase.

The Facebook dynamic ads feature uses automation to test different combinations of ad copy and creatives for you and will then serve those combinations accordingly.

2. Test upper-funnel campaign objectives

If you have a small budget, say $10/day, but your product or service costs much more than that, you may want to test upper-funnel campaign objectives that are less expensive so your budget goes further.

As we mentioned earlier, if you can’t drive many conversions, particularly for a higher-priced product or service, a conversion campaign is not going to perform well, and Facebook will charge you quite a bit.

Instead, try an awareness, reach, or website traffic campaign to re-engage with your warm audiences. While the campaign is optimizing for other goals, people can still convert if you add a landing page URL to your ad.

We implement this tactic quite a bit to cut down on costs or to help clients find more success on smaller budgets, and we often see higher CTRs and lower CPAs compared to some conversion campaigns.

3. Use the ad set level budget

Advertisers can choose from two budget setting options in Ads Manager, one of them being at the campaign level using Campaign Budget Optimization (CBO) and the other at the ad set level.

CBO can work well for prospecting when the ad sets within the campaign use similar-sized audiences; however, it does not work very well for small audiences, as we often see when testing on smaller budgets or in retargeting campaigns. When working with smaller-budget accounts, we typically recommend the ad set level budget so you have more manual control over how much you spend, how, and where.

How much should I budget for Facebook A/B testing?

You can start with as little as $1/day. Yes, you read that right. There are some caveats, though: it depends on your campaign type.

With top-of-funnel campaigns such as reach, awareness, and engagement, we have seen success in running ads and boosted posts to various audiences.

However, if you want to test lower-funnel consideration or conversion campaigns, a budget that small will not do.

The smaller your budget, the slower the testing, learning, and optimizing will be. The more budget you can allocate, the quicker you can get out of the Learning Period, serve impressions, acquire performance data, learn, and take your next optimization steps.

If you can, try to spend at least $10/day to start. Then, as you begin seeing good results and maxing out your daily budget, increase it by 10-15% per day. (For example, $10/day compounded at 10% daily reaches roughly $26/day after ten days.)

Scaling too quickly can backfire: if Facebook isn’t able to spend your budget that day, it may lower the quality of your campaign and thus raise costs. Gradual increases let you learn and grow without wasting money as you scale.

How to test Facebook ads on a budget: recap

Facebook A/B testing is important not only because it affords us valuable insights and saves us money, but also because Facebook frequently undergoes algorithm changes. Let’s close off with a brief summary of how to run Facebook A/B tests on a budget:

The basic process of Facebook A/B testing is:

  • Determine your goal
  • Determine your variable
  • Launch your first set
  • Let data accumulate
  • Disable underperforming ads
  • Clone the winner and test a new variable
  • Repeat

Some of the best variables to test on a budget include:

  • Campaign objectives
  • Audiences
  • Optimization
  • Ad level variables

The best way to test your Facebook ads on a budget is to:

  • Run dynamic ad campaigns
  • Test upper-funnel campaign objectives
  • Use the ad set level budget

Finally, if you can, try to spend at least $10/day to start and as you begin seeing good results, increase your daily budget 10-15% per day.




Upcoming Restriction Period for US ads about social issues, elections, or politics

In recent years, Meta has developed a comprehensive approach to protecting elections on our technologies. These efforts continue in advance of the US 2022 Midterms, which you can read more about in our Newsroom.

Implementing a restriction period for ads about social issues, elections or politics in the US

Consistent with our approach during the US 2020 General Election, we are introducing a restriction period for ads about social issues, elections or politics in the US. The restriction period will run from 12:01 AM PT on Tuesday, November 1, 2022 through 11:59 PM PT on Tuesday, November 8, 2022.

We are putting this restriction period in place again because we found that the restriction period achieves the right balance of giving campaigns a voice while providing additional time for scrutiny of issue, electoral, and political ads in the Ad Library. We are sharing the requirements and key dates ahead of time, so advertisers are able to prepare their campaigns in the months and weeks ahead.

What to know about the ad restriction period in the US

We will not allow any new ads about social issues, elections or politics in the US from 12:01 AM PT on Tuesday, November 1, 2022 through 11:59 PM PT on Tuesday, November 8, 2022.

In order to run ads about social issues, elections or politics in the US during the restriction period, the ads must have been created with a valid disclaimer and have delivered an impression prior to 12:01 AM PT on Tuesday, November 1, 2022. Even then, editing capabilities will be limited.

What advertisers can do during the restriction period for eligible ads:

  • Edit bid amount, budget amount and scheduled end date
  • Pause and unpause eligible ads that have already served at least 1 impression with a valid disclaimer prior to the restriction period going into effect

What advertisers cannot do during the restriction period for eligible ads includes, but is not limited to:

  • Editing certain aspects of eligible ads, such as ad creative (including ad copy, image/video assets, website URL)
  • Editing targeting, placement, optimization or campaign objective
  • Removing or adding a disclaimer
  • Copying, duplicating or boosting ads

See the Help Center for detailed requirements of what is or isn’t allowed during the restriction period.

Planning ahead for key dates

Keep in mind the following dates as you plan your campaign to avoid delays or disapprovals that may prevent your ads from running during the restriction period:

  • By Tuesday, October 18, 2022: Complete the ad authorization process to get authorized to run ads about social issues, elections or politics, which includes setting up an approved disclaimer for your ads.

  • By Tuesday, October 25, 2022: Submit your issue, electoral or political ads in order to best ensure that your ads are live and have delivered at least 1 impression with a valid disclaimer before the restriction period begins.
    • Please ensure that you add your approved disclaimer to these ads by choosing ISSUES_ELECTIONS_POLITICS in the special_ad_categories field. You will not be able to add a disclaimer after 12:01 AM PT on Tuesday, November 1, 2022.

  • Between Tuesday, November 1, 2022 and Tuesday, November 8, 2022: The ad restriction period will be in effect. We will not allow any new ads to run about social issues, elections or politics in the US starting 12:01 AM PT on Tuesday, November 1 through 11:59 PM PT on Tuesday, November 8, 2022.
  • At 12:00 AM PT on Wednesday, November 9, 2022: We will allow new ads about social issues, elections or politics to be published.

As the restriction period approaches, we encourage you to review these ad restriction period best practices to properly prepare ahead of time.

We will continue to provide updates on our approach to elections integrity or on any changes regarding the restriction period via this blog.

Visit the Elections Hub or our FAQ for more advertising resources.

First seen at developers.facebook.com

Signals in prod: dangers and pitfalls

In this blog post, Chris Down, a Kernel Engineer at Meta, discusses the pitfalls of using Linux signals in Linux production environments and why developers should avoid using signals whenever possible.

What are Linux Signals?

A signal is an event that Linux systems generate in response to some condition. Signals can be sent by the kernel to a process, by a process to another process, or a process to itself. Upon receipt of a signal, a process may take action.
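
As a minimal sketch of that round trip, a process can install a handler, signal itself with kill(2), and then act on a flag the handler set:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* Async-signal-safe flag: the handler does nothing but set it. */
static volatile sig_atomic_t got_signal;

static void on_signal(int sig) {
    (void)sig;
    got_signal = 1;
}

int main(void) {
    struct sigaction sa = { .sa_handler = on_signal };
    sigaction(SIGUSR1, &sa, NULL); /* install a handler for SIGUSR1 */

    kill(getpid(), SIGUSR1); /* a process sending a signal to itself */

    if (got_signal)
        printf("took action upon receipt of SIGUSR1\n");
    return 0;
}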

Signals are a core part of Unix-like operating environments and have existed since more or less the dawn of time. They are the plumbing for many of the core components of the operating system—core dumping, process life cycle management, etc.—and in general, they’ve held up pretty well in the fifty or so years that we have been using them. As such, when somebody suggests that using them for interprocess communication (IPC) is potentially dangerous, one might think these are the ramblings of someone desperate to reinvent the wheel. However, this article is intended to demonstrate cases where signals have been the cause of production issues and offer some potential mitigations and alternatives.

Signals may appear attractive due to their standardization, wide availability and the fact that they don’t require any additional dependencies outside of what the operating system provides. However, they can be difficult to use safely. Signals rest on a vast number of assumptions, which one must be careful to validate against one’s requirements and, where they do not hold, configure around correctly. In reality, many applications, even widely known ones, do not do so, and may suffer hard-to-debug incidents as a result.

Let us look into a recent incident that occurred in the Meta production environment, reinforcing the pitfalls of using signals. We’ll go briefly over the history of some signals and how they led us to where we are today, and then we’ll contrast that with our current needs and issues that we’re seeing in production.

The Incident

First, let’s rewind a bit. The LogDevice team cleaned up their codebase, removing unused code and features. One of the features that was deprecated was a type of log that documents certain operations performed by the service. This feature eventually became redundant, had no consumers and as such was removed. You can see the change here on GitHub. So far, so good.

The next little while after the change passed without much to speak about: production continued ticking on steadily and serving traffic as usual. A few weeks later, we received a report that service nodes were being lost at a staggering rate. It had something to do with the rollout of the new release, but what exactly was wrong was unclear. What was different now that had caused things to fall over?

The team in question narrowed the problem to the code change we mentioned earlier, deprecating these logs. But why? What’s wrong with that code? If you don’t already know the answer, we invite you to look at that diff and try to work out what’s wrong because it’s not immediately obvious, and it’s a mistake anyone could make.

logrotate, Enter the Ring

logrotate is more or less the standard tool for log rotation when using Linux. It’s been around for almost thirty years now, and the concept is simple: manage the life cycle of logs by rotating and vacuuming them.

logrotate doesn’t send any signals by itself, so you won’t find much, if anything, about them in the logrotate man page or its documentation. However, logrotate can take arbitrary commands to execute before or after its rotations. As a basic example from the default logrotate configuration in CentOS, you can see this configuration:

/var/log/cron /var/log/maillog /var/log/messages /var/log/secure /var/log/spooler {
    sharedscripts
    postrotate
        /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
    endscript
}

A bit brittle, but we’ll forgive that and assume that this works as intended. This configuration says that after logrotate rotates any of the files listed, it should send SIGHUP to the pid contained in /var/run/syslogd.pid, which should be that of the running syslogd instance.

This is all well and good for something with a stable public API like syslog, but what about something internal where the implementation of SIGHUP is an internal implementation detail that could change at any time?

A History of Hangups

One of the problems here is that, except for signals which cannot be caught in user space and thus have only one meaning, like SIGKILL and SIGSTOP, the semantic meaning of signals is up to application developers and users to interpret and program. In some cases, the distinction is largely academic, like SIGTERM, which is pretty much universally understood to mean “terminate gracefully as soon as possible.” However, in the case of SIGHUP, the meaning is significantly less clear.

SIGHUP was invented for serial lines and was originally used to indicate that the other end of the connection had dropped the line. Nowadays, we still carry our lineage with us of course, so SIGHUP is still sent for its modern equivalent: where a pseudo or virtual terminal is closed (hence tools like nohup, which mask it).

In the early days of Unix, there was a need to implement daemon reloading. This usually consists at least of configuration/log file reopening without restarting, and signals seemed like a dependency-free way to achieve that. Of course, there was no signal for such a thing, but as these daemons have no controlling terminal, there should be no reason to receive SIGHUP, so it seemed like a convenient signal to piggyback onto without any obvious side effects.

There is a small hitch with this plan though. The default state for signals is not “ignored,” but signal-specific. So, for example, programs don’t have to configure SIGTERM manually to terminate their application. As long as they don’t set any other signal handler, the kernel just terminates their program for free, without any code needed in user space. Convenient!

What’s not so convenient though, is that SIGHUP also has the default behavior of terminating the program immediately. This works great for the original hangup case, where these applications likely aren’t needed anymore, but is not so great for this new meaning.
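
A small demo makes the hazard concrete: a child process that installs no handler is simply killed by an incoming SIGHUP, since the default disposition applies.

#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: no SIGHUP handler installed, so the default
         * disposition (terminate) applies. */
        pause();
        return 0; /* never reached */
    }

    sleep(1);          /* crude: give the child time to reach pause() */
    kill(pid, SIGHUP); /* the same signal a logrotate hook might send */

    int status;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status))
        printf("child was killed by signal %d\n", WTERMSIG(status));
    return 0;
}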

This would be fine, of course, if we removed all the places which could potentially send SIGHUP to the program. The problem is that in any large, mature codebase, that is difficult. SIGHUP is not like a tightly controlled IPC call that you can easily grep the codebase for. Signals can come from anywhere, at any time, and there are few checks on their operation (other than the most basic “are you this user or do you have CAP_KILL?”). The bottom line is that it’s hard to determine where signals could come from, but with more explicit IPC, we would know that this signal doesn’t mean anything to us and should be ignored.

From Hangup to Hazard

By now, I suppose you may have started to guess what happened. A LogDevice release started one fateful afternoon containing the aforementioned code change. At first, nothing had gone awry, but at midnight the next day, everything mysteriously started falling over. The reason is the following stanza in the machine’s logrotate configuration, which sends a now unhandled (and therefore fatal) SIGHUP to the logdevice daemon:

/var/log/logdevice/audit.log {
    daily
    # [...]
    postrotate
        pkill -HUP logdeviced
    endscript
}

Missing just one short stanza of a logrotate configuration is incredibly easy and common when removing a large feature. Unfortunately, it’s also hard to be certain that every last vestige of its existence was removed at once. Even in cases that are easier to validate than this, it’s common to mistakenly leave remnants when doing code cleanup. Still, usually, it’s without any destructive consequences, that is, the remaining detritus is just dead or no-op code.

Conceptually, the incident itself and its resolution are simple: don’t send SIGHUP, and spread LogDevice actions out more over time (that is, don’t run this at midnight on the dot). However, it’s not just this one incident’s nuances that we should focus on here. This incident, more than anything, has to serve as a platform to discourage the use of signals in production for anything other than the most basic, essential cases.

The Dangers of Signals

What Signals are Good For

First, using signals as a mechanism to effect changes in the process state of the operating system is well founded. This includes signals like SIGKILL, which are impossible to install a signal handler for and do exactly what you would expect, and the kernel-default behavior of SIGABRT, SIGTERM, SIGINT, SIGSEGV, SIGQUIT and the like, which are generally well understood by users and programmers.

What these signals all have in common is that once you’ve received them, they’re all progressing towards a terminal end state within the kernel itself. That is, no more user space instructions will be executed once you get a SIGKILL or SIGTERM with no user space signal handler.

A terminal end state is important because it usually means you’re working towards decreasing the complexity of the stack and code currently being executed. Other desired states often result in the complexity actually becoming higher and harder to reason about as concurrency and code flow become more muddled.

Dangerous Default Behavior

You may notice that we didn’t mention some other signals that also terminate by default. Here’s a list of all of the standard signals that terminate by default (excluding core dump signals like SIGABRT or SIGSEGV, since they’re all sensible):

  • SIGALRM
  • SIGEMT
  • SIGHUP
  • SIGINT
  • SIGIO
  • SIGKILL
  • SIGLOST
  • SIGPIPE
  • SIGPOLL
  • SIGPROF
  • SIGPWR
  • SIGSTKFLT
  • SIGTERM
  • SIGUSR1
  • SIGUSR2
  • SIGVTALRM

At first glance, these may seem reasonable, but here are a few outliers:

  • SIGHUP: If this was used only as it was originally intended, defaulting to terminate would be sensible. With the current mixed usage meaning “reopen files,” this is dangerous.
  • SIGPOLL and SIGPROF: These are in the bucket of “these should be handled internally by some standard function rather than your program.” However, while probably harmless, the default behavior to terminate still seems nonideal.
  • SIGUSR1 and SIGUSR2: These are “user-defined signals” that you can ostensibly use however you like. But because these are terminal by default, if you implement USR1 for some specific need and later don’t need that, you can’t just safely remove the code. You have to consciously think to explicitly ignore the signal. That’s really not going to be obvious even to every experienced programmer.

So that’s almost one-third of terminal signals, which are at best questionable and, at worst, actively dangerous as a program’s needs change. Worse still, even the supposedly “user-defined” signals are a disaster waiting to happen when someone forgets to explicitly SIG_IGN it. Even an innocuous SIGUSR1 or SIGPOLL may cause incidents.

This is not simply a question of familiarity. No matter how well you know how signals work, it’s still extremely hard to write signal-correct code the first time around because, despite their appearance, signals are far more complex than they seem.

Code flow, Concurrency, and the Myth of SA_RESTART

Programmers generally do not spend their entire day thinking about the inner workings of signals. This means that when it comes to actually implementing signal handling, they often subtly do the wrong thing.

I’m not even talking about the “trivial” cases, like safety in a signal handling function, which is mostly solved by only bumping a sig_atomic_t, or using C++’s atomic signal fence stuff. No, that’s mostly easily searchable and memorable as a pitfall by anyone after their first time through signal hell. What’s a lot harder is reasoning about the code flow of the nominal portions of a complex program when it receives a signal. Doing so requires either constantly and explicitly thinking about signals at every part of the application life cycle (hey, what about EINTR, is SA_RESTART enough here? What flow should we go into if this terminates prematurely? I now have a concurrent program, what are the implications of that?), or setting up a sigprocmask or pthread_setmask for some part of your application life cycle and praying that the code flow never changes (which is certainly not a good guess in an atmosphere of fast-paced development). signalfd or running sigwaitinfo in a dedicated thread can help somewhat here, but both of these have enough edge cases and usability concerns to make them hard to recommend.
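
For reference, a bare-bones signalfd sketch looks something like this (Linux-specific, error handling elided), though it inherits the caveats above:

#include <signal.h>
#include <stdio.h>
#include <sys/signalfd.h>
#include <unistd.h>

int main(void) {
    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGHUP);

    /* Block normal delivery so the signal is only consumed via the fd. */
    sigprocmask(SIG_BLOCK, &mask, NULL);

    int sfd = signalfd(-1, &mask, 0);

    /* read() blocks until a signal from 'mask' is pending. */
    struct signalfd_siginfo si;
    if (read(sfd, &si, sizeof(si)) == sizeof(si) && si.ssi_signo == SIGHUP)
        printf("got SIGHUP via signalfd; reloading\n");

    close(sfd);
    return 0;
}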

We like to believe that most experienced programmers know by now that even a facetious example of correctly writing thread-safe code is very hard. Well, if you thought correctly writing thread-safe code was hard, signals are significantly harder. Signal handlers must only rely on strictly lock-free code with atomic data structures, respectively, because the main flow of execution is suspended and we don’t know what locks it’s holding, and because the main flow of execution could be performing non-atomic operations. They must also be fully reentrant, that is, they must be able to nest within themselves since signal handlers can overlap if a signal is sent multiple times (or even with one signal, with SA_NODEFER). That’s one of the reasons why you can’t use functions like printf or malloc in a signal handler because they rely on global mutexes for synchronization. If you were holding that lock when the signal was received and then called a function requiring that lock again, your application would end up deadlocked. This is really, really hard to reason about. That’s why many people simply write something like the following as their signal handling:

static volatile sig_atomic_t received_sighup;

static void sighup(int sig __attribute__((unused))) { received_sighup = 1; }

static int configure_signal_handlers(void) {
  return sigaction(
    SIGHUP,
    &(const struct sigaction){.sa_handler = sighup, .sa_flags = SA_RESTART},
    NULL);
}

int main(int argc, char *argv[]) {
  if (configure_signal_handlers()) {
    /* failed to set handlers */
  }

  /* usual program flow */

  if (received_sighup) {
    /* reload */
    received_sighup = 0;
  }

  /* usual program flow */
}

The problem is that, while this, signalfd, or other attempts at async signal handling might look fairly simple and robust, it ignores the fact that the point of interruption is just as important as the actions performed after receiving the signal. For example, suppose your user space code is doing I/O or changing the metadata of objects that come from the kernel (like inodes or FDs). In this case, you’re probably actually in a kernel space stack at the time of interruption. For example, here’s how a thread might look when it’s trying to close a file descriptor:

# cat /proc/2965230/stack
 [<0>] schedule+0x43/0xd0
 [<0>] io_schedule+0x12/0x40
 [<0>] wait_on_page_bit+0x139/0x230
 [<0>] filemap_write_and_wait+0x5a/0x90
 [<0>] filp_close+0x32/0x70
 [<0>] __x64_sys_close+0x1e/0x50
 [<0>] do_syscall_64+0x4e/0x140
 [<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9

Here, __x64_sys_close is the x86_64 variant of the close system call, which closes a file descriptor. At this point in its execution, we’re waiting for the backing storage to be updated (that’s this wait_on_page_bit). Since I/O work is usually several orders of magnitude slower than other operations, schedule here is a way of voluntarily hinting to the kernel’s CPU scheduler that we are about to perform a high-latency operation (like disk or network I/O) and that it should consider finding another process to schedule instead of the current process for now. This is good, as it allows us to signal to the kernel that it is a good idea to go ahead and pick a process that will actually make use of the CPU instead of wasting time on one which can’t continue until it’s finished waiting for a response from something that may take a while.

Imagine that we send a signal to the process we were running. The signal that we have sent has a user space handler in the receiving thread, so we’ll resume in user space. One of the many ways this race can end up is that the kernel will try to come out of schedule, further unwind the stack and eventually return an errno of ESYSRESTART or EINTR to user space to indicate that we were interrupted. But how far did we get in closing it? What’s the state of the file descriptor now?

Now that we’ve returned to user space, we’ll run the signal handler. When the signal handler exits, we’ll propagate the error to the user space libc’s close wrapper, and then to the application, which, in theory, can do something about the situation encountered. We say “in theory” because it’s really hard to know what to do about many of these situations with signals, and many services in production do not handle the edge cases here very well. That might be fine in some applications where data integrity isn’t that important. However, in production applications that do care about data consistency and integrity, this presents a significant problem: the kernel doesn’t expose any granular way to understand how far it got, what it achieved and didn’t and what we should actually do about the situation. Even worse, if close returns with EINTR, the state of the file descriptor is now unspecified:

“If close() is interrupted by a signal [...] the state of [the file descriptor] is unspecified.”

Good luck trying to reason about how to handle that safely and securely in your application. In general, handling EINTR even for well-behaved syscalls is complicated. There are plenty of subtle issues forming a large part of the reason why SA_RESTART is not enough. Not all system calls are restartable, and expecting every single one of your application’s developers to understand and mitigate the deep nuances of getting a signal for every single syscall at every single call site is asking for outages. From man 7 signal:

“The following interfaces are never restarted after being interrupted by a signal handler, regardless of the use of SA_RESTART; they always fail with the error EINTR [...]”
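
A common partial mitigation is an explicit retry loop around interruptible syscalls, along these lines; per the man page excerpt above it is not sufficient for every interface, and as discussed below, close(2) in particular cannot be safely retried this way.

#include <errno.h>
#include <unistd.h>

/* Retry a read() that may be interrupted by a signal handler. */
static ssize_t read_retrying(int fd, void *buf, size_t len) {
    ssize_t n;
    do {
        n = read(fd, buf, len);
    } while (n == -1 && errno == EINTR);
    return n;
}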

Likewise, using a sigprocmask and expecting code flow to remain static is asking for trouble as developers do not typically spend their lives thinking about the bounds of signal handling or how to produce or preserve signal-correct code. The same goes for handling signals in a dedicated thread with sigwaitinfo, which can easily end up with GDB and similar tools being unable to debug the process. Subtly wrong code flows or error handling can result in bugs, crashes, difficult to debug corruptions, deadlocks and many more issues that will send you running straight into the warm embrace of your preferred incident management tool.

High Complexity in Multithreaded Environments

If you thought all this talk of concurrency, reentrancy and atomicity was bad enough, throwing multithreading into the mix makes things even more complicated. This is especially important when considering the fact that many complex applications run separate threads implicitly, for example, as part of jemalloc, GLib, or similar. Some of these libraries even install signal handlers themselves, opening a whole other can of worms.

Overall, man 7 signal has this to say on the matter:

“A signal may be generated (and thus pending) for a process as a whole (e.g., when sent using kill(2)) or for a specific thread [...] If more than one of the threads has the signal unblocked, then the kernel chooses an arbitrary thread to which to deliver the signal.”

More succinctly, “for most signals, the kernel sends the signal to any thread that doesn’t have that signal blocked with sigprocmask”. SIGSEGV, SIGILL and the like resemble traps, and have the signal explicitly directed at the offending thread. However, despite what one might think, most signals cannot be explicitly sent to a single thread in a thread group, even with tgkill or pthread_kill.

This means that you can’t trivially change overall signal handling characteristics as soon as you have a set of threads. If a service needs to do periodic signal blocking with sigprocmask in the main thread, you need to somehow communicate to other threads externally about how they should handle that. Otherwise, the signal may be swallowed by another thread, never to be seen again. Of course, you can block signals in child threads to avoid this, but if they need to do their own signal handling, even for primitive things like waitpid, it will end up making things complex.
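
The usual pattern is to block the signal before spawning any threads and give one dedicated thread the job of consuming it, roughly like this:

#include <pthread.h>
#include <signal.h>
#include <stdio.h>

/* Dedicated thread: synchronously consumes signals from the blocked set. */
static void *signal_thread(void *arg) {
    sigset_t *set = arg;
    int sig;
    if (sigwait(set, &sig) == 0)
        printf("signal thread received signal %d\n", sig);
    return NULL;
}

int main(void) {
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGHUP);

    /* Block SIGHUP before creating any threads: children inherit the
     * mask, so only the dedicated thread will ever consume the signal. */
    pthread_sigmask(SIG_BLOCK, &set, NULL);

    pthread_t tid;
    pthread_create(&tid, NULL, signal_thread, &set);

    /* ... usual program flow ... */

    pthread_join(tid, NULL);
    return 0;
}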

Just as with everything else here, these aren’t technically insurmountable problems. However, one would be negligent in ignoring the fact that the complexity of synchronization required to make this work correctly is burdensome and lays the groundwork for bugs, confusion and worse.

Lack of Definition and Communication of Success or Failure

Signals are propagated asynchronously in the kernel. The kill syscall returns as soon as the pending signal is recorded for the process or thread’s task_struct in question. Thus, there’s no guarantee of timely delivery, even if the signal isn’t blocked.

Even if there is timely delivery of the signal, there’s no way to communicate back to the signal issuer what the status of their request for action is. As such, any meaningful action should not be delivered by signals, since they only implement fire-and-forget with no real mechanism to report the success or failure of delivery and subsequent actions. As we’ve seen above, even seemingly innocuous signals can be dangerous when they are not configured in user space.

Anyone using Linux for long enough has undoubtedly run into a case where they want to kill some process but find that the process is unresponsive even to supposedly always fatal signals like SIGKILL. The problem is that misleadingly, kill(1)’s purpose isn’t to kill processes, but just to queue a request to the kernel (with no indication about when it will be serviced) that someone has requested some action to be taken.

The kill syscall’s job is to mark the signal as pending in the kernel’s task metadata, which it does successfully even when a SIGKILL task doesn’t die. In the case of SIGKILL in particular, the kernel guarantees that no more user mode instructions will be executed, but we may still have to execute instructions in kernel mode to complete actions that otherwise may result in data corruption or to release resources. For this reason, we still succeed even if the state is D (uninterruptible sleep). kill itself doesn’t fail unless you provide an invalid signal, you don’t have permission to send that signal, or the pid you request to signal does not exist. It is thus not useful for reliably propagating non-terminal states to applications.

In Conclusion

  • Signals are fine for terminal state handled purely in-kernel with no user space handler. For signals that you actually would like to immediately kill your program, leave those signals alone for the kernel to handle. This also means that the kernel may be able to exit early from its work, freeing up your program resources more quickly, whereas a user space IPC request would have to wait for the user space portion to start executing again.
  • A way to avoid getting into trouble handling signals is to not handle them at all. However, for applications handling state processing that must do something about cases like SIGTERM, ideally use a high-level API like folly::AsyncSignalHandler where a number of the warts have already been made more intuitive.

  • Avoid communicating application requests with signals. Use self-managed notifications (like inotify) or user space RPC with a dedicated part of the application life cycle to handle it instead of relying on interrupting the application.
  • Where possible, limit the scope of signals to a subsection of your program or threads with sigprocmask, reducing the amount of code that needs to be regularly scrutinized for signal-correctness. Bear in mind that if code flows or threading strategies change, the mask may not have the effect you intended.
  • At daemon start, mask terminal signals that are not uniformly understood and could be repurposed at some point in your program to avoid falling back to kernel default behavior. My suggestion is the following:
signal(SIGHUP, SIG_IGN);
signal(SIGQUIT, SIG_IGN);
signal(SIGUSR1, SIG_IGN);
signal(SIGUSR2, SIG_IGN);

Signal behavior is extremely complicated to reason about even in well-authored programs, and its use presents an unnecessary risk in applications where other alternatives are available. In general, do not use signals for communicating with the user space portion of your program. Instead, either have the program transparently handle events itself (for example, with inotify), or use user space communication that can report back errors to the issuer and is enumerable and demonstrable at compile time, like Thrift, gRPC or similar.
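
A bare-bones inotify sketch looks something like the following (the config path here is just a placeholder); the reload request becomes an ordinary readable event rather than an interruption of your code flow:

#include <limits.h>
#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

int main(void) {
    int fd = inotify_init1(0);

    /* Watch a (hypothetical) config file; rewrites and atomic renames
     * show up as readable events on the fd. */
    inotify_add_watch(fd, "/etc/myapp.conf", IN_CLOSE_WRITE | IN_MOVED_TO);

    char buf[sizeof(struct inotify_event) + NAME_MAX + 1];
    /* Blocks until the file changes; nothing is interrupted. */
    if (read(fd, buf, sizeof(buf)) > 0)
        printf("config changed; reloading\n");

    close(fd);
    return 0;
}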

I hope this article has shown you that signals, while they may ostensibly appear simple, are in reality anything but. The aesthetics of simplicity that promote their use as an API for user space software belie a series of implicit design decisions that do not fit most production use cases in the modern era.

Let’s be clear: there are valid use cases for signals. Signals are fine for basic communication with the kernel about a desired process state when there’s no user space component, for example, that a process should be killed. However, it is difficult to write signal-correct code the first time around when signals are expected to be trapped in user space.

Signals may seem attractive due to their standardization, wide availability and lack of dependencies, but they come with a significant number of pitfalls that will only increase concern as your project grows. Hopefully, this article has provided you with some mitigations and alternative strategies that will allow you to still achieve your goals, but in a safer, less subtly complex and more intuitive way.

To learn more about Meta Open Source, visit our open source site, subscribe to our YouTube channel, or follow us on Twitter, Facebook and LinkedIn.

First seen at developers.facebook.com

Meet the Developers – Linux Kernel Team (David Vernet)

Credit: Larry Ewing (lewing@isc.tamu.edu) and The GIMP for the original design of Tux the penguin.

Intro

For today’s interview, we have David Vernet, a core systems engineer on the Kernel team at Meta. He works on the BPF (Berkeley Packet Filter) and the Linux kernel scheduler. This series highlights Meta Software Engineers who contribute to the Linux kernel. The Meta Linux Kernel team works with the broader Linux community to add new features to the kernel and makes sure that the kernel works well in Meta production data centers. Engineers on the team work with peers in the industry to make the kernel better for Meta’s workloads and to make Linux better for everyone.

Tell us about yourself.

I’m a systems engineer who’s spent a good chunk of his career in the kernel space, and some time in user space as well, working on a microkernel. Right now, I’m focusing most of my time on BPF and the Linux kernel scheduler.

I started my career as a web developer after getting a degree in math. After going to grad school, I realized that I was happiest when hacking on low-level systems and figuring out how computers work.

As a kernel developer at Meta, what does your typical day look like?

I’m not a maintainer of any subsystems in the kernel, so my typical day is filled with almost exclusively coding and engineering. That being said, participating in the upstream Linux kernel community is one of the coolest parts of being on the kernel team, so I still spend some time reading over upstream discussions. A typical day goes something like this:

  1. Read over some of the discussions taking place on various upstream lists, such as BPF and mm. I usually spend about 30-60 minutes or so per day on this, though it depends on the day.

  2. Hack on the project that I’m working on. Lately, that’s adding a user-space ringbuffer map type to BPF.

  3. Work on drafting an article for lwn.net.

What have you been excited about or incredibly proud of lately?

I recently submitted a patch-set to enable a new map type in BPF. This allows user-space to publish messages to BPF programs in the kernel over the ringbuffer. This map type is exciting because it sets the stage to enable frameworks for user-space to drive logic in BPF programs in a performant way.

Is there something especially exciting about being a kernel developer at a company like Meta?

The Meta kernel team has a strong upstream-first culture. Bug fixes that we find in our Meta kernel, and features that we’d like to add, are almost always first submitted to the upstream kernel, and then they are backported to our internal kernel.

Do you have a favorite part of the kernel dev life cycle?

I enjoy architecting and designing APIs. Kernel code can never crash and needs to be able to run forever. I find it gratifying to architect systems in the kernel that make it easy to reason about correctness and robustness and provide intuitive APIs that make it easy for other parts of the kernel to use your code.

I also enjoy iterating with the upstream community. It’s great that your patches have a whole community of people looking at them to help you find bugs in your code and suggest improvements that you may never have considered on your own. A lot of people find this process to be cumbersome, but I find that it’s a small price to pay for what you get out of it.

Tell us a bit about the topic you presented at the Linux Plumbers Conference this year.

We presented the live patch feature in the Linux kernel, describing how we have utilized it at Meta and how our hyper-scale has shown some unique challenges with the feature.

What are some of the misconceptions about kernel or open source software development that you have encountered in your career?

The biggest misconception is that it’s an exclusive, invite-only club to contribute to the Linux kernel. You certainly must understand operating systems to be an effective contributor and be ready to receive constructive criticism when there is scope for improvement in your code. Still, the community always welcomes people who come in with an open mind and want to contribute.

What resources are helpful in getting started in kernel development?

There is a lot of information out there that people have written on how to get integrated into the Linux kernel community. I wrote a blog post on how to get plugged into Linux kernel upstream mailing list discussions, and another on how to submit your first patch. There is also a video on writing and submitting your first Linux kernel patch from Greg Kroah-Hartman.

In terms of learning about the kernel itself, there are many good resources and books available.

Where can people find you and follow your work?

I have a blog where I talk about my experiences as a systems engineer: https://www.bytelab.codes/. I publish articles that range from topics that are totally newcomer friendly to more advanced topics that discuss kernel code in more detail. Feel free to check it out and let me know if there’s anything you’d like me to discuss.

To learn more about Meta Open Source, visit our open source site, subscribe to our YouTube channel, or follow us on Twitter, Facebook and LinkedIn.

First seen at developers.facebook.com
