Salesforce, Google, Facebook. How Big Tech undermines California’s public health system

SACRAMENTO, Calif. — California Gov. Gavin Newsom has embraced Silicon Valley tech companies and health care industry titans in response to the covid-19 pandemic like no other governor in America — routinely outsourcing life-or-death public health duties to his allies in the private sector.

At least 30 tech and health care companies have received lucrative, no-bid government contracts, or helped fund and carry out critical public health activities during the state’s battle against the coronavirus, a KHN analysis has found. The vast majority are Newsom supporters and donors who have contributed more than $113 million to his political campaigns and charitable causes, or to fund his policy initiatives, since his first run for statewide office in 2010.

For instance, the San Francisco-based software company Salesforce — whose CEO, Marc Benioff, is a repeat donor and is so tight with the governor that Newsom named him the godfather of his first child — helped create My Turn, California’s centralized vaccine clearinghouse, which has been unpopular among Californians seeking shots and has so far cost the state $93 million.

Verily Life Sciences, a sister company of Google, another deep-pocketed Newsom donor, received a no-bid contract in March 2020 to expand covid testing — a $72 million venture that the state later retreated on. And after Newsom handed another no-bid testing contract — now valued at $600 million — to OptumServe, its parent company, national insurance giant UnitedHealth Group dropped $100,000 into a campaign account he can tap to fight the recall effort against him.

Newsom’s unprecedented reliance on private companies — including health and technology start-ups — has come at the expense of California’s overtaxed and underfunded public health system. Current and former public health officials say Newsom has entrusted the essential work of government to private-sector health and tech allies, hurting the ability of the state and local health departments to respond to the coronavirus pandemic and prepare for future threats.


“This outsourcing is weakening us. The lack of investment in our public health system is weakening us,” said Flojaune Cofer, a former state Department of Public Health epidemiologist and senior director of policy for Public Health Advocates, which has lobbied unsuccessfully for years for more state public health dollars.

“These are companies that are profit-driven, with shareholders. They’re not accountable to the public,” Cofer said. “We can’t rely on them helicoptering in. What if next time it’s not in the interest of the business or it’s not profitable?”

Kathleen Kelly Janus, Newsom’s senior adviser on social innovation, said the governor is “very proud of our innovative public-private partnerships,” which have provided “critical support for Californians in need during this pandemic.”

State Health and Human Services Secretary Dr. Mark Ghaly echoed the praise, saying private-sector companies have filled “important” roles during an unprecedented public health crisis.

The state’s contract with OptumServe has helped dramatically lower covid test turnaround times after a troubled start. Another subsidiary of UnitedHealth Group, OptumInsight, received $41 million to help California rescue its outdated infectious disease reporting and monitoring system last year after it crashed.


“Not only are we much better equipped on all of these things than we were at the beginning, but we are also seeing some success,” Ghaly said, “whether it’s on the vaccination front, which has really picked up and put us in a place of success, or just being able to do testing at a broad scale. So, I feel like we’re in a reasonable position to continue to deal with covid.”

The federal government finances most public health activities in California and significantly boosted funding during the pandemic, but local health departments also rely on state and local money to keep their communities safe.

In his first year as governor, the year before the pandemic, Newsom denied a budget request from California’s 61 local public health departments to provide $50 million in state money per year to help rebuild core public health infrastructure — which had been decimated by decades of budget cuts — despite warnings from his own public health agency that the state wasn’t prepared for what was coming.


After the pandemic struck, Newsom and state lawmakers turned away another budget request to support the local health departments driving California’s pandemic response, this time for $150 million in additional annual infrastructure funding. Facing deficits at the time, the state couldn’t afford it, Newsom said, and federal help was on the way.

Yet covid cases continued to mount, and resources dwindled. Bare-bones staffing meant that some local health departments had to abandon fundamental public health functions, such as contact tracing, communicable disease testing and enforcement of public health orders.


“As the pandemic rages on and without additional resources, some pandemic activities previously funded with federal CARES Act resources simply cannot be sustained,” a coalition of public health officials warned in a late December letter to Newsom and legislative leaders.

Newsom has long promoted tech and private companies as a way to improve government, and has leaned on the private sector throughout his political career, dating to his time as San Francisco mayor from 2004 to 2011, when he called on corporations to contribute to his homelessness initiatives.

And since becoming governor in January 2019, he has regularly held private meetings with health and tech executives, his calendars show, including Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai and Apple CEO Tim Cook.

“We’re right next door to Silicon Valley, of course, so technology is our friend,” Newsom wrote in his 2013 book, “Citizenville,” arguing that “government needs to adapt to this new technological age.”

With California’s core public health infrastructure already gutted, Newsom funneled taxpayer money to tech and health companies during the pandemic or allowed them to help design and fund certain public health activities.


Other industries have jumped into covid response, including telecommunications and entertainment, but not to the degree of the health and technology sectors.

“It’s not the ideal situation,” said Daniel Zingale, who has steered consequential health policy decisions under three California governors, including Newsom. “What is best for Google is not necessarily best for the people of California.”

Among the corporate titans that have received government contracts to conduct core public health functions is Google’s sister company Verily.

Google and its executives have given more than $10 million to Newsom’s gubernatorial campaigns and special causes since 2010, according to state records. It has infiltrated the state’s pandemic response: The company, along with Apple, helped build a smartphone alert system called CA Notify to assist state and local health officials with contact tracing, a venture Newsom hailed as an innovative, “data-driven” approach to reducing community spread. Google, Apple and Facebook are sharing tracking data with the state to help chart the spread of covid. Google — as well as Facebook, Snapchat, TikTok, Twitter and other platforms — also contributed millions of dollars in free advertising to California, in Newsom’s name, for public health messaging.

Other companies that have received lucrative contracts to help carry out the state’s covid plans include health insurance company Blue Shield of California, which received a $15 million no-bid contract to oversee vaccine allocation and distribution, and the private consulting firm McKinsey & Co., which has received $48 million in government contracts to boost vaccinations and testing and work on genomic sequencing to help track and monitor covid variants. Together, they have given Newsom more than $20 million in campaign and charitable donations since 2010.


Private companies have also helped finance government programs and core public health functions during the pandemic — at times bypassing local public health departments — under the guise of making charitable or governmental contributions, known as “behested payments,” in Newsom’s name. They have helped fund vaccination clinics, hosted public service announcements on their platforms, and paid for hotel rooms to safely shelter and quarantine homeless people.

Facebook and the Chan Zuckerberg Initiative, the philanthropic organization started by Facebook founder Mark Zuckerberg and his wife, Priscilla Chan, have been among the most generous, and have given $36.5 million to Newsom, either directly or to causes and policy initiatives on his behalf. Much of that money was spent on pandemic response efforts championed by Newsom, such as hotel rooms and child care for front-line health care workers; computers and internet access for kids learning at home; and social services for incarcerated people leaving prison because of covid outbreaks.


Facebook said it is also partnering with the state to deploy pop-up vaccination clinics in hard-hit areas like the Central Valley, Inland Empire and South Los Angeles.

In prepared statements, Google and Facebook said they threw themselves into the pandemic response because they wanted to help struggling workers and businesses in their home state, and to respond to the needs of vulnerable communities.

Venture capitalist Dr. Bob Kocher, a Newsom ally who was one of the governor’s earliest pandemic advisers, said private-sector involvement helped California tremendously.


“We’re doing really well. We got almost 20 million people vaccinated and our test positivity rate is at an all-time low,” Kocher said. “Our public health system was set up to handle small-scale outbreaks like E. coli or hepatitis. Things work better when you build coalitions that go beyond government.”

Public health leaders acknowledge that private-sector participation during an emergency can help the state respond quickly and on a large scale. But by outsourcing so much work to the private sector, they say, California has also undercut its already struggling public health system — and missed an opportunity to invest in it.

Take Verily. Newsom tapped the company to help expand testing to underserved populations, but the state chose to end its relationship with the company in January after county health departments rejected the partnership, in part because testing was not adequately reaching Black and Latino neighborhoods. In addition to requiring that residents have a car and Gmail account, Verily was seen by many local health officials as an outsider that didn’t understand the communities.

It takes years of shoe leather public health work to build trusted relationships within communities, said Dr. Noha Aboelata, founder and CEO of the Roots Community Health Center in the predominantly Black and Latino neighborhood of East Oakland.

“I think what’s not fine is when these corporations are claiming to be the center of equity, when in fact it can manifest as the opposite,” she said. “We’re in a neighborhood where people walk to our clinic, which is why when Verily testing first started and they were drive-up and you needed a Gmail account, most of our community wasn’t able to take advantage of it.”


To fill the gap, the clinic worked with Alameda County to offer old-fashioned walk-up appointments. “We’re very focused on disparities, and we’re definitely seeing the folks who are most at risk,” Aboelata said.

The state took a similar approach to vaccination. Instead of giving local health departments the funding and power to manage their own vaccination programs with community partners, it looked to the private sector again. Among the companies that received a vaccination contract is Color Health Inc., awarded $10 million to run 10 vaccine clinics across the state, among other covid-related work. Since partnering with California, Color has seen its valuation soar to $1.5 billion — helping it achieve “unicorn” start-up status.

As the state’s Silicon Valley partners rake in money, staffing at local health departments has suffered, in part because they don’t have enough funding to hire or replace workers. “It is our biggest commodity and it’s our No. 1 need,” said Kat DeBurgh, executive director of the Health Officers Association of California.

With inadequate staffing to address the pandemic, the state is falling further behind on other basic public health duties, such as updating data systems and technology — many county health departments still rely on fax machines to report lab results — and combating record-setting levels of sexually transmitted diseases such as syphilis.

“We’ve put so many resources into law enforcement and private tech companies instead of public health,” said Kiran Savage-Sangwan, executive director of the California Pan-Ethnic Health Network. “This is having a devastating impact.”


Dr. Karen Smith, former director of the state Department of Public Health, left the state in July 2019 and now is a consultant with Google Health, one of Big Tech’s forays into the business of health care.


She believes Silicon Valley can improve the state’s crumbling public health infrastructure, especially when it comes to collecting and sharing data, but it can’t be done without substantial investment from the state. “Who the heck still uses fax? Public health doesn’t have the kind of money that tech companies have,” said Smith, who said she wasn’t speaking on behalf of Google.

Without adequate funding to rebuild its infrastructure and hire permanent workers, Smith and others fear California isn’t prepared to ride out the remainder of this pandemic — let alone manage the next public health crisis.

Statewide public health advocacy groups have formed a coalition called “California Can’t Wait” to pressure state lawmakers and Newsom to put more money into the state budget for local public health departments. They’re asking for $200 million annually. Newsom will unveil his latest state budget proposal by mid-May.

“We’re in one of those change-or-die moments,” Capitol health care veteran Zingale said. “Newsom has been at the vanguard of the nation in marshaling the help of our robust technological private sector, and we’re thankful for their contributions, but change is better than charity. I don’t want to show ingratitude, but we should keep our eyes on building a better system.”


KHN data editor Elizabeth Lucas and California politics correspondent Samantha Young contributed to this report.

Methodology: How KHN compiled data about political spending and the role of technology and health care companies in California’s covid response.

Private-sector companies from Silicon Valley and the health care industry have participated in California’s public health response to covid-19 in a variety of ways, big and small. Some have received multimillion-dollar contracts from the state of California to perform testing, vaccination and other activities. Others have donated money and resources to the effort, such as free public health advertising time.

KHN identified the companies that received pandemic-related contracts or work from the state by filing Public Records Act requests with state agencies; searching other sources, including California’s “Released COVID-19 Response Contracts” page; and contacting state agencies and companies directly.

We then searched the California Fair Political Practices Commission website for tech and health care companies that didn’t receive contracts but played a role in the state’s pandemic response by donating money and resources. Through what are known as “behested payments,” these companies donated to charitable causes or Gov. Gavin Newsom’s policy initiatives on his behalf. These contributions included money to help fund and design state public health initiatives such as quarantine hotel rooms.

Based on those searches, we found at least 30 health or technology companies that have participated in the state’s pandemic response: Google and its sister company Verily Life Sciences; Salesforce; Facebook; Apple; McKinsey & Co.; OptumServe and OptumInsight — subsidiaries of national health care company UnitedHealth Group; Netflix; Pandora; Spotify; Zoom Video Communications Inc.; electric car manufacturer BYD; Bloom Energy; Color Health Inc.; DoorDash; Twitter; Amazon; Accenture; Skedulo; Primary.Health; Pfizer; HP Inc.; Microsoft; Snapchat; Blue Shield of California; Kaiser Permanente; Lenovo Inc.; YouTube; and TikTok. The Chan Zuckerberg Initiative, the philanthropic organization started by Facebook founder Mark Zuckerberg and his wife, Priscilla Chan, also participated.


We then searched the California secretary of state’s website to determine which of those companies, and their executives, gave direct political contributions to Newsom’s personal campaign accounts and a ballot measure account run by the governor called “Newsom’s Ballot Measure Committee” during his five campaigns for statewide office since 2010, plus the ongoing recall effort against him.

We found that at least 24 of the tech or health companies that participated in the state’s pandemic response, or their executives, gave direct political contributions to Newsom, made behested payments in his name or both.

This story was produced by KHN, which publishes California Healthline, an editorially independent service of the California Health Care Foundation.


Meet the Developers – Linux Kernel Team (Martin Lau)


For today’s interview, we have Martin Lau, a software engineer on the Kernel team at Meta. He works on BPF (Berkeley Packet Filter) and kernel networking development.

This series highlights Meta Software Engineers who contribute to the Linux kernel. The Meta Linux Kernel team works with the broader Linux community to add new features to the kernel and makes sure that the kernel works well in Meta production data centers. Engineers on the team work with peers in the industry to make the kernel better for Meta’s workloads and to make Linux better for everyone.

Tell us about yourself and what your typical day looks like as a kernel developer at Meta.

My name is Martin KaFai Lau. I have been with Meta for 9 and a half years. I joined the Meta Kernel team 6 years ago. I am focusing on BPF and networking development. Before that, I was on the Meta Traffic team doing HTTP, TLS and CDN work.

My typical day involves reviewing patches in the mailing list, supporting Meta use cases in production, exploring ideas with other networking teams and writing patches for upstream.

What have you been excited about or incredibly proud of lately?

The work that makes the kernel networking stack extensible through BPF. It also excites me to work on stackable workloads that need to impose per-task bandwidth limits, to explore One-Way-Delay measurement with BPF and to speed up a service’s start-up time from more than a minute down to 6 seconds.


Is there something especially exciting about being a Kernel developer at a company like Meta?

Scale. Being able to solve problems at Meta’s production scale is very exciting. The kernel is the core piece used by all services, and it is the core interface between a computer’s hardware and its processes. Improvements in the kernel have a positive effect on all of them.


Scale is also about the number of internal users who have a lot of production experience, each of whom may have divergent usage. It is so much faster to get feedback from different teams, and they can explain whether something will work well in the real world. Their feedback also leads to new kernel development.

My coworkers are maintainers of different kernel subsystems that I am not familiar with. That is a great learning experience and very valuable at Meta.

Tell us a bit about the topic you presented at the Linux Plumbers Conference (LPC) this year.

In the LPC presentation, I gave an overview of the BPF networking hooks in the kernel and how Meta uses them, the surprises I usually hear about from our internal users and what could be addressed in the future.

Over the years, BPF has grown considerably, so it can sometimes be difficult to navigate how BPF should be used in the networking stack. You can explore some of the resources from the talk on the event page “Overview of the BPF networking hooks and user experience in Meta.”


What are some of the misconceptions about kernel or OSS development that you have encountered in your career?

My previous life was user space only, where I could usually grasp the big picture of the whole piece I was working on fairly quickly. I thought that would be hard in the kernel because the code base is so large, and that it would also be tough to get help from upstream people.


I spent time reading and observing the mailing list, paying attention to how people work there. It is impossible to get a grip on everything in the kernel, so I shrink the scope, focus on one piece and fix something to gain credibility upstream. Then I repeat the process to expand my knowledge.

What resources are helpful in getting started in kernel development, and where can people follow your work?

The Linux kernel mailing list. Spend time reading the threads that you are interested in. Understand the concern and interests that the stakeholders usually have. Start with something small that solves a real production problem.

I am usually active in the BPF mailing list.

To learn more about Meta Open Source, visit our open source site, subscribe to our YouTube channel, or follow us on Twitter, Facebook and LinkedIn.


Image credit: Larry Ewing (lewing@isc.tamu.edu) and The GIMP for the original design of Tux the penguin.

First seen at developers.facebook.com


Upcoming Restriction Period for US ads about social issues, elections, or politics


In recent years, Meta has developed a comprehensive approach to protecting elections on our technologies. These efforts continue in advance of the US 2022 Midterms, which you can read more about in our Newsroom.

Implementing a restriction period for ads about social issues, elections or politics in the US

Consistent with our approach during the US 2020 General Election, we are introducing a restriction period for ads about social issues, elections or politics in the US. The restriction period will run from 12:01 AM PT on Tuesday, November 1, 2022 through 11:59 PM PT on Tuesday, November 8, 2022.

We are putting this restriction period in place again because we found that the restriction period achieves the right balance of giving campaigns a voice while providing additional time for scrutiny of issue, electoral, and political ads in the Ad Library. We are sharing the requirements and key dates ahead of time, so advertisers are able to prepare their campaigns in the months and weeks ahead.

What to know about the ad restriction period in the US

We will not allow any new ads about social issues, elections or politics in the US from 12:01 AM PT on Tuesday, November 1, 2022 through 11:59 PM PT on Tuesday, November 8, 2022.

In order to run ads about social issues, elections or politics in the US during the restriction period, the ads must have been created with a valid disclaimer and must have delivered an impression prior to 12:01 AM PT on Tuesday, November 1, 2022; even then, only limited editing capabilities will be available.


What advertisers can do during the restriction period for eligible ads:

  • Edit bid amount, budget amount and scheduled end date
  • Pause and unpause eligible ads that have already served at least 1 impression with a valid disclaimer prior to the restriction period going into effect

What advertisers cannot do during the restriction period for eligible ads includes, but is not limited to:

  • Editing certain aspects of eligible ads, such as ad creative (including ad copy, image/video assets, website URL)
  • Editing targeting, placement, optimization or campaign objective
  • Removing or adding a disclaimer
  • Copying, duplicating or boosting ads

See the Help Center for detailed requirements of what is or isn’t allowed during the restriction period.

Planning ahead for key dates

Keep in mind the following dates as you plan your campaign to avoid delays or disapprovals that may prevent your ads from running during the restriction period:

  • By Tuesday, October 18, 2022: Complete the ad authorization process to get authorized to run ads about social issues, elections or politics, which includes setting up an approved disclaimer for your ads.

  • By Tuesday, October 25, 2022: Submit your issue, electoral or political ads in order to best ensure that your ads are live and have delivered at least 1 impression with a valid disclaimer before the restriction period begins.
    • Please ensure that you add your approved disclaimer to these ads by choosing ISSUES_ELECTIONS_POLITICS in the special_ad_categories field. You will not be able to add a disclaimer after 12:01 AM PT on Tuesday, November 1, 2022.

  • Between Tuesday, November 1, 2022 and Tuesday, November 8, 2022: The ad restriction period will be in effect. We will not allow any new ads to run about social issues, elections or politics in the US starting 12:01 AM PT on Tuesday, November 1 through 11:59 PM PT on Tuesday, November 8, 2022.
  • At 12:00 AM PT on Wednesday, November 9, 2022: We will allow new ads about social issues, elections or politics to be published.

As the restriction period approaches, we encourage you to review these ad restriction period best practices to properly prepare ahead of time.

We will continue to provide updates on our approach to elections integrity or on any changes regarding the restriction period via this blog.

Visit the Elections Hub or our FAQ for more advertising resources.

First seen at developers.facebook.com


Signals in prod: dangers and pitfalls


In this blog post, Chris Down, a Kernel Engineer at Meta, discusses the pitfalls of using Linux signals in Linux production environments and why developers should avoid using signals whenever possible.

What are Linux Signals?

A signal is an event that Linux systems generate in response to some condition. Signals can be sent by the kernel to a process, by a process to another process, or a process to itself. Upon receipt of a signal, a process may take action.
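As a concrete illustration (a minimal sketch of my own, not from the original post), the following C program installs a handler for SIGUSR1 and then sends itself that signal with kill(2); the handler records the delivery and the main flow reports it:

 /* Minimal sketch (illustrative only): install a handler, signal ourselves. */
 #include <signal.h>
 #include <stdio.h>
 #include <unistd.h>

 static volatile sig_atomic_t got_signal;

 static void on_sigusr1(int sig __attribute__((unused))) {
     got_signal = 1;                 /* only async-signal-safe work in handlers */
 }

 int main(void) {
     struct sigaction sa = { .sa_handler = on_sigusr1 };
     sigemptyset(&sa.sa_mask);
     sigaction(SIGUSR1, &sa, NULL);  /* a process configuring its own handler */

     kill(getpid(), SIGUSR1);        /* a process sending a signal to itself */

     if (got_signal)
         printf("SIGUSR1 was delivered and handled\n");
     return 0;
 }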

Signals are a core part of Unix-like operating environments and have existed since more or less the dawn of time. They are the plumbing for many of the core components of the operating system—core dumping, process life cycle management, etc.—and in general, they’ve held up pretty well in the fifty or so years that we have been using them. As such, when somebody suggests that using them for interprocess communication (IPC) is potentially dangerous, one might think these are the ramblings of someone desperate to invent the wheel. However, this article is intended to demonstrate cases where signals have been the cause of production issues and offer some potential mitigations and alternatives.

Signals may appear attractive due to their standardization, wide availability and the fact that they don’t require any additional dependencies outside of what the operating system provides. However, they can be difficult to use safely. Signals rest on a vast number of assumptions which must be validated against your requirements and, where they don’t hold, configured around correctly. In reality, many applications, even widely known ones, do not do so, and may have hard-to-debug incidents in the future as a result.

Let us look into a recent incident that occurred in the Meta production environment, reinforcing the pitfalls of using signals. We’ll go briefly over the history of some signals and how they led us to where we are today, and then we’ll contrast that with our current needs and issues that we’re seeing in production.


The Incident

First, let’s rewind a bit. The LogDevice team cleaned up their codebase, removing unused code and features. One of the features that was deprecated was a type of log that documents certain operations performed by the service. This feature eventually became redundant, had no consumers and as such was removed. You can see the change here on GitHub. So far, so good.

The next little while after the change passed without much to speak about, production continued ticking on steadily and serving traffic as usual. A few weeks later, a report that service nodes were being lost at a staggering rate was received. It was something to do with the rollout of the new release, but what exactly was wrong was unclear. What was different now that had caused things to fall over?

The team in question narrowed the problem to the code change we mentioned earlier, deprecating these logs. But why? What’s wrong with that code? If you don’t already know the answer, we invite you to look at that diff and try to work out what’s wrong because it’s not immediately obvious, and it’s a mistake anyone could make.

logrotate, Enter the Ring

logrotate is more or less the standard tool for log rotation when using Linux. It’s been around for almost thirty years now, and the concept is simple: manage the life cycle of logs by rotating and vacuuming them.

logrotate doesn’t send any signals by itself, so you won’t find much, if anything, about them in the logrotate man page or its documentation. However, logrotate can take arbitrary commands to execute before or after its rotations. As a basic example from the default logrotate configuration in CentOS, you can see this configuration:

/var/log/cron /var/log/maillog /var/log/messages /var/log/secure /var/log/spooler {
    sharedscripts
    postrotate
        /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
    endscript
}

A bit brittle, but we’ll forgive that and assume that this works as intended. This configuration says that after logrotate rotates any of the files listed, it should send SIGHUP to the pid contained in /var/run/syslogd.pid, which should be that of the running syslogd instance.

This is all well and good for something with a stable public API like syslog, but what about something internal where the implementation of SIGHUP is an internal implementation detail that could change at any time?

A History of Hangups

One of the problems here is that, except for signals which cannot be caught in user space and thus have only one meaning, like SIGKILL and SIGSTOP, the semantic meaning of signals is up to application developers and users to interpret and program. In some cases, the distinction is largely academic, like SIGTERM, which is pretty much universally understood to mean “terminate gracefully as soon as possible.” However, in the case of SIGHUP, the meaning is significantly less clear.

SIGHUP was invented for serial lines and was originally used to indicate that the other end of the connection had dropped the line. Nowadays, we still carry our lineage with us of course, so SIGHUP is still sent for its modern equivalent: where a pseudo or virtual terminal is closed (hence tools like nohup, which mask it).

In the early days of Unix, there was a need to implement daemon reloading. This usually consists at least of configuration/log file reopening without restarting, and signals seemed like a dependency-free way to achieve that. Of course, there was no signal for such a thing, but as these daemons have no controlling terminal, there should be no reason to receive SIGHUP, so it seemed like a convenient signal to piggyback onto without any obvious side effects.


There is a small hitch with this plan though. The default state for signals is not “ignored,” but signal-specific. So, for example, programs don’t have to configure SIGTERM manually to terminate their application. As long as they don’t set any other signal handler, the kernel just terminates their program for free, without any code needed in user space. Convenient!

What’s not so convenient though, is that SIGHUP also has the default behavior of terminating the program immediately. This works great for the original hangup case, where these applications likely aren’t needed anymore, but is not so great for this new meaning.
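To make that default concrete, here is a minimal sketch of my own (not from the original post): with no handler installed, a single kill -HUP <pid> terminates this process, and uncommenting one line is all it takes to change that.

 /* Minimal sketch: SIGHUP falls back to its default disposition (terminate)
  * unless the program explicitly handles or ignores it. */
 #include <signal.h>
 #include <stdio.h>
 #include <unistd.h>

 int main(void) {
     /* signal(SIGHUP, SIG_IGN); */   /* uncomment to survive `kill -HUP <pid>` */
     printf("pid %d: SIGHUP will terminate me (default disposition)\n",
            (int)getpid());
     for (;;)
         pause();                     /* sleep until any signal is delivered */
 }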

This would be fine of course, if we removed all the places which could potentially send SIGHUP to the program. The problem is that in any large, mature codebase, that is difficult. SIGHUP is not like a tightly controlled IPC call that you can easily grep the codebase for. Signals can come from anywhere, at any time, and there are few checks on their operation (other than the most basic “are you this user or do you have CAP_KILL?”). The bottom line is that it’s hard to determine where signals could come from, but with more explicit IPC, we would know that this signal doesn’t mean anything to us and should be ignored.


From Hangup to Hazard

By now, I suppose you may have started to guess what happened. A LogDevice release started one fateful afternoon containing the aforementioned code change. At first, nothing had gone awry, but at midnight the next day, everything mysteriously started falling over. The reason is the following stanza in the machine’s logrotate configuration, which sends a now unhandled (and therefore fatal) SIGHUP to the logdevice daemon:

/var/log/logdevice/audit.log {
  daily
  # [...]
  postrotate
    pkill -HUP logdeviced
  endscript
}

Missing just one short stanza of a logrotate configuration is incredibly easy and common when removing a large feature. Unfortunately, it’s also hard to be certain that every last vestige of its existence was removed at once. Even in cases that are easier to validate than this, it’s common to mistakenly leave remnants when doing code cleanup. Still, usually, it’s without any destructive consequences, that is, the remaining detritus is just dead or no-op code.


Conceptually, the incident itself and its resolution are simple: don’t send SIGHUP, and spread LogDevice actions out more over time (that is, don’t run this at midnight on the dot). However, it’s not just this one incident’s nuances that we should focus on here. This incident, more than anything, has to serve as a platform to discourage the use of signals in production for anything other than the most basic, essential cases.

The Dangers of Signals

What Signals are Good For

First, using signals as a mechanism to effect changes in the process state of the operating system is well founded. This includes signals like SIGKILL, which is impossible to install a signal handler for and does exactly what you would expect, and the kernel-default behavior of SIGABRT, SIGTERM, SIGINT, SIGSEGV, SIGQUIT and the like, which is generally well understood by users and programmers.

What these signals all have in common is that once you’ve received them, they’re all progressing towards a terminal end state within the kernel itself. That is, no more user space instructions will be executed once you get a SIGKILL or SIGTERM with no user space signal handler.

A terminal end state is important because it usually means you’re working towards decreasing the complexity of the stack and code currently being executed. Other desired states often result in the complexity actually becoming higher and harder to reason about as concurrency and code flow become more muddled.

Dangerous Default Behavior

You may notice that we didn’t mention some other signals that also terminate by default. Here’s a list of all of the standard signals that terminate by default (excluding core dump signals like SIGABRT or SIGSEGV, since they’re all sensible):

  • SIGALRM
  • SIGEMT
  • SIGHUP
  • SIGINT
  • SIGIO
  • SIGKILL
  • SIGLOST
  • SIGPIPE
  • SIGPOLL
  • SIGPROF
  • SIGPWR
  • SIGSTKFLT
  • SIGTERM
  • SIGUSR1
  • SIGUSR2
  • SIGVTALRM

At first glance, these may seem reasonable, but here are a few outliers:

  • SIGHUP: If this was used only as it was originally intended, defaulting to terminate would be sensible. With the current mixed usage meaning “reopen files,” this is dangerous.
  • SIGPOLL and SIGPROF: These are in the bucket of “these should be handled internally by some standard function rather than your program.” However, while probably harmless, the default behavior to terminate still seems nonideal.
  • SIGUSR1 and SIGUSR2: These are “user-defined signals” that you can ostensibly use however you like. But because these are terminal by default, if you implement USR1 for some specific need and later don’t need that, you can’t just safely remove the code. You have to consciously think to explicitly ignore the signal. That’s really not going to be obvious even to every experienced programmer.

So that’s almost one-third of terminal signals, which are at best questionable and, at worst, actively dangerous as a program’s needs change. Worse still, even the supposedly “user-defined” signals are a disaster waiting to happen when someone forgets to explicitly SIG_IGN it. Even an innocuous SIGUSR1 or SIGPOLL may cause incidents.

This is not simply a question of familiarity. No matter how well you know how signals work, it’s still extremely hard to write signal-correct code the first time around because, despite their appearance, signals are far more complex than they seem.

Code flow, Concurrency, and the Myth of SA_RESTART

Programmers generally do not spend their entire day thinking about the inner workings of signals. This means that when it comes to actually implementing signal handling, they often subtly do the wrong thing.

I’m not even talking about the “trivial” cases, like safety in a signal handling function, which is mostly solved by only bumping a sig_atomic_t, or using C++’s atomic signal fence stuff. No, that’s mostly easily searchable and memorable as a pitfall by anyone after their first time through signal hell. What’s a lot harder is reasoning about the code flow of the nominal portions of a complex program when it receives a signal. Doing so requires either constantly and explicitly thinking about signals at every part of the application life cycle (hey, what about EINTR, is SA_RESTART enough here? What flow should we go into if this terminates prematurely? I now have a concurrent program, what are the implications of that?), or setting up a sigprocmask or pthread_sigmask for some part of your application life cycle and praying that the code flow never changes (which is certainly not a good guess in an atmosphere of fast-paced development). signalfd or running sigwaitinfo in a dedicated thread can help somewhat here, but both of these have enough edge cases and usability concerns to make them hard to recommend.
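For reference, here is a minimal sketch of my own of the signalfd approach just mentioned (Linux-specific, not taken from the post, and the caveats above still apply): SIGHUP is blocked with sigprocmask and then consumed as ordinary data from a file descriptor, which at least moves handling into the normal code flow.

 /* Minimal signalfd sketch: receive SIGHUP synchronously as file-descriptor
  * data instead of in an asynchronous handler. */
 #include <signal.h>
 #include <stdio.h>
 #include <sys/signalfd.h>
 #include <unistd.h>

 int main(void) {
     sigset_t mask;
     sigemptyset(&mask);
     sigaddset(&mask, SIGHUP);

     /* Block asynchronous delivery of SIGHUP... */
     sigprocmask(SIG_BLOCK, &mask, NULL);

     /* ...and get a descriptor that becomes readable when it is pending. */
     int sfd = signalfd(-1, &mask, 0);
     if (sfd < 0)
         return 1;

     struct signalfd_siginfo si;
     if (read(sfd, &si, sizeof(si)) == (ssize_t)sizeof(si) && si.ssi_signo == SIGHUP)
         printf("got SIGHUP from pid %u, reloading configuration\n",
                (unsigned)si.ssi_pid);

     close(sfd);
     return 0;
 }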

Most experienced programmers know by now that correctly writing thread-safe code is very hard. Well, if you thought writing thread-safe code was hard, signals are significantly harder. Signal handlers must rely only on strictly lock-free code and atomic data structures: the main flow of execution is suspended, we don’t know which locks it is holding, and it could be midway through non-atomic operations. Signal handlers must also be fully reentrant, that is, able to nest within themselves, since signal handlers can overlap if a signal is sent multiple times (or even the same signal, with SA_NODEFER). That’s one of the reasons you can’t use functions like printf or malloc in a signal handler: they rely on global mutexes for synchronization, and if you were holding such a lock when the signal was received and then called a function requiring the same lock, your application would deadlock. This is really, really hard to reason about. That’s why many people simply write something like the following as their signal handling:

 #include <signal.h>

 static volatile sig_atomic_t received_sighup;

 static void sighup(int sig __attribute__((unused))) { received_sighup = 1; }

 static int configure_signal_handlers(void) {
   return sigaction(
     SIGHUP,
     &(const struct sigaction){.sa_handler = sighup, .sa_flags = SA_RESTART},
     NULL);
 }

 int main(int argc, char *argv[]) {
   if (configure_signal_handlers()) {
     /* failed to set handlers */
   }

   /* usual program flow */

   if (received_sighup) {
     /* reload */
     received_sighup = 0;
   }

   /* usual program flow */
 }

The problem is that, while this, signalfd, or other attempts at async signal handling might look fairly simple and robust, it ignores the fact that the point of interruption is just as important as the actions performed after receiving the signal. For example, suppose your user space code is doing I/O or changing the metadata of objects that come from the kernel (like inodes or FDs). In this case, you’re probably actually in a kernel space stack at the time of interruption. For example, here’s how a thread might look when it’s trying to close a file descriptor:

 # cat /proc/2965230/stack
 [<0>] schedule+0x43/0xd0
 [<0>] io_schedule+0x12/0x40
 [<0>] wait_on_page_bit+0x139/0x230
 [<0>] filemap_write_and_wait+0x5a/0x90
 [<0>] filp_close+0x32/0x70
 [<0>] __x64_sys_close+0x1e/0x50
 [<0>] do_syscall_64+0x4e/0x140
 [<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9

Here, __x64_sys_close is the x86_64 variant of the close system call, which closes a file descriptor. At this point in its execution, we’re waiting for the backing storage to be updated (that’s this wait_on_page_bit). Since I/O work is usually several orders of magnitude slower than other operations, schedule here is a way of voluntarily hinting to the kernel’s CPU scheduler that we are about to perform a high-latency operation (like disk or network I/O) and that it should consider finding another process to schedule instead of the current process for now. This is good, as it allows us to signal to the kernel that it is a good idea to go ahead and pick a process that will actually make use of the CPU instead of wasting time on one which can’t continue until it’s finished waiting for a response from something that may take a while.

Imagine that we send a signal to the process we were running. The signal that we have sent has a user space handler in the receiving thread, so we’ll resume in user space. One of the many ways this race can end up is that the kernel will try to come out of schedule, further unwind the stack and eventually return an errno of ERESTARTSYS or EINTR to user space to indicate that we were interrupted. But how far did we get in closing the file? What’s the state of the file descriptor now?

Now that we’ve returned to user space, we’ll run the signal handler. When the signal handler exits, we’ll propagate the error to the user space libc’s close wrapper, and then to the application, which, in theory, can do something about the situation encountered. We say “in theory” because it’s really hard to know what to do about many of these situations with signals, and many services in production do not handle the edge cases here very well. That might be fine in some applications where data integrity isn’t that important. However, in production applications that do care about data consistency and integrity, this presents a significant problem: the kernel doesn’t expose any granular way to understand how far it got, what it achieved and didn’t and what we should actually do about the situation. Even worse, if close returns with EINTR, the state of the file descriptor is now unspecified:

“If close() is interrupted by a signal [...] the state of [the file descriptor] is unspecified.”

Good luck trying to reason about how to handle that safely and securely in your application. In general, handling EINTR even for well-behaved syscalls is complicated. There are plenty of subtle issues forming a large part of the reason why SA_RESTART is not enough. Not all system calls are restartable, and expecting every single one of your application’s developers to understand and mitigate the deep nuances of getting a signal for every single syscall at every single call site is asking for outages. From man 7 signal:


“The following interfaces are never restarted after being interrupted by a signal handler, regardless of the use of SA_RESTART; they always fail with the error EINTR [...]”
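In practice, this pushes explicit EINTR handling onto every call site. As a minimal sketch of my own (not from the post), here is the usual retry pattern for a restartable call such as read(2); note that it is not safe for close(2), whose descriptor state after EINTR is unspecified, as quoted earlier.

 /* Minimal sketch of manual EINTR handling for a restartable syscall. */
 #include <errno.h>
 #include <unistd.h>

 ssize_t read_retrying(int fd, void *buf, size_t count) {
     ssize_t n;
     do {
         n = read(fd, buf, count);      /* may be interrupted by a signal */
     } while (n < 0 && errno == EINTR); /* retry only when interrupted */
     return n;
 }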

Likewise, using a sigprocmask and expecting code flow to remain static is asking for trouble as developers do not typically spend their lives thinking about the bounds of signal handling or how to produce or preserve signal-correct code. The same goes for handling signals in a dedicated thread with sigwaitinfo, which can easily end up with GDB and similar tools being unable to debug the process. Subtly wrong code flows or error handling can result in bugs, crashes, difficult to debug corruptions, deadlocks and many more issues that will send you running straight into the warm embrace of your preferred incident management tool.

High Complexity in Multithreaded Environments

If you thought all this talk of concurrency, reentrancy and atomicity was bad enough, throwing multithreading into the mix makes things even more complicated. This is especially important when considering the fact that many complex applications run separate threads implicitly, for example, as part of jemalloc, GLib, or similar. Some of these libraries even install signal handlers themselves, opening a whole other can of worms.

Overall, man 7 signal has this to say on the matter:

“A signal may be generated (and thus pending) for a process as a whole (e.g., when sent using kill(2)) or for a specific thread [...] If more than one of the threads has the signal unblocked, then the kernel chooses an arbitrary thread to which to deliver the signal.”


More succinctly, “for most signals, the kernel sends the signal to any thread that doesn’t have that signal blocked with sigprocmask“. SIGSEGV, SIGILL and the like resemble traps, and have the signal explicitly directed at the offending thread. However, despite what one might think, most signals cannot be explicitly sent to a single thread in a thread group, even with tgkill or pthread_kill.

This means that you can’t trivially change overall signal handling characteristics as soon as you have a set of threads. If a service needs to do periodic signal blocking with sigprocmask in the main thread, you need to somehow communicate to other threads externally about how they should handle that. Otherwise, the signal may be swallowed by another thread, never to be seen again. Of course, you can block signals in child threads to avoid this, but if they need to do their own signal handling, even for primitive things like waitpid, it will end up making things complex.
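One common shape for this, sketched below under my own assumptions rather than taken from the post, is to block the signals of interest in main() before any threads are created (so every thread inherits the mask) and to consume them in one dedicated thread with sigwaitinfo; the external coordination this requires is exactly the part that tends to rot as the codebase changes.

 /* Sketch: block SIGHUP/SIGTERM process-wide, handle them in one thread.
  * Build with -pthread. */
 #include <pthread.h>
 #include <signal.h>
 #include <stdio.h>

 static void *signal_thread(void *arg) {
     const sigset_t *set = arg;
     for (;;) {
         siginfo_t si;
         if (sigwaitinfo(set, &si) < 0)
             continue;                          /* e.g., interrupted by a handler */
         printf("signal %d from pid %d\n", si.si_signo, (int)si.si_pid);
         if (si.si_signo == SIGTERM)
             break;                             /* begin orderly shutdown */
     }
     return NULL;
 }

 int main(void) {
     static sigset_t set;
     sigemptyset(&set);
     sigaddset(&set, SIGHUP);
     sigaddset(&set, SIGTERM);

     /* Must run before any other thread exists so they all inherit the mask. */
     pthread_sigmask(SIG_BLOCK, &set, NULL);

     pthread_t tid;
     pthread_create(&tid, NULL, signal_thread, &set);

     /* ... create worker threads and do the real work here ... */

     pthread_join(tid, NULL);
     return 0;
 }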

Just as with everything else here, these aren’t technically insurmountable problems. However, one would be negligent in ignoring the fact that the complexity of synchronization required to make this work correctly is burdensome and lays the groundwork for bugs, confusion and worse.

Lack of Definition and Communication of Success or Failure

Signals are propagated asynchronously in the kernel. The kill syscall returns as soon as the pending signal is recorded for the process or thread’s task_struct in question. Thus, there’s no guarantee of timely delivery, even if the signal isn’t blocked.

Even if there is timely delivery of the signal, there’s no way to communicate back to the signal issuer what the status of their request for action is. As such, any meaningful action should not be delivered by signals, since they only implement fire-and-forget with no real mechanism to report the success or failure of delivery and subsequent actions. As we’ve seen above, even seemingly innocuous signals can be dangerous when they are not configured in user space.


Anyone using Linux for long enough has undoubtedly run into a case where they want to kill some process but find that the process is unresponsive even to supposedly always fatal signals like SIGKILL. The problem is that misleadingly, kill(1)’s purpose isn’t to kill processes, but just to queue a request to the kernel (with no indication about when it will be serviced) that someone has requested some action to be taken.

The kill syscall’s job is to mark the signal as pending in the kernel’s task metadata, which it does successfully even when a SIGKILL task doesn’t die. In the case of SIGKILL in particular, the kernel guarantees that no more user mode instructions will be executed, but we may still have to execute instructions in kernel mode to complete actions that otherwise may result in data corruption or to release resources. For this reason, we still succeed even if the state is D (uninterruptible sleep). kill itself doesn’t fail unless you provide an invalid signal, you don’t have permission to send that signal, or the pid you requested to signal does not exist; it is therefore not useful for reliably propagating non-terminal states to applications.
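To make the fire-and-forget nature explicit, here is a small sketch of my own (not from the post) showing what kill(2)’s return value does and does not tell you; the classic “signal 0” probe only checks existence and permission, and a successful kill() says nothing about whether the target ever acts on the signal.

 /* Sketch: kill() reports that a request was queued (or why it wasn't),
  * never that the target actually did anything about it. */
 #include <errno.h>
 #include <signal.h>
 #include <stdio.h>
 #include <unistd.h>

 static void probe(pid_t pid) {
     if (kill(pid, 0) == 0)          /* signal 0: existence/permission check only */
         printf("pid %d exists and we may signal it\n", (int)pid);
     else if (errno == ESRCH)
         printf("pid %d does not exist\n", (int)pid);
     else if (errno == EPERM)
         printf("pid %d exists but we lack permission\n", (int)pid);

     if (kill(pid, SIGTERM) == 0)
         printf("SIGTERM queued for pid %d; delivery and handling are not guaranteed\n",
                (int)pid);
 }

 int main(void) {
     signal(SIGTERM, SIG_IGN);       /* so probing ourselves below is harmless */
     probe(getpid());                /* kill() "succeeds" even though we ignore it */
     return 0;
 }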

In Conclusion

  • Signals are fine for terminal state handled purely in-kernel with no user space handler. For signals that you actually would like to immediately kill your program, leave those signals alone for the kernel to handle. This also means that the kernel may be able to exit early from its work, freeing up your program resources more quickly, whereas a user space IPC request would have to wait for the user space portion to start executing again.
  • A way to avoid getting into trouble handling signals is to not handle them at all. However, for applications handling state processing that must do something about cases like SIGTERM, ideally use a high-level API like folly::AsyncSignalHandler where a number of the warts have already been made more intuitive.

  • Avoid communicating application requests with signals. Use self-managed notifications (like inotify) or user space RPC with a dedicated part of the application life cycle to handle it instead of relying on interrupting the application.
  • Where possible, limit the scope of signals to a subsection of your program or threads with sigprocmask, reducing the amount of code that needs to be regularly scrutinized for signal-correctness. Bear in mind that if code flows or threading strategies change, the mask may not have the effect you intended.
  • At daemon start, mask terminal signals that are not uniformly understood and could be repurposed at some point in your program to avoid falling back to kernel default behavior. My suggestion is the following:
 signal(SIGHUP, SIG_IGN);
 signal(SIGQUIT, SIG_IGN);
 signal(SIGUSR1, SIG_IGN);
 signal(SIGUSR2, SIG_IGN);

Signal behavior is extremely complicated to reason about even in well-authored programs, and its use presents an unnecessary risk in applications where other alternatives are available. In general, do not use signals for communicating with the user space portion of your program. Instead, either have the program transparently handle events itself (for example, with inotify), or use user space communication that can report back errors to the issuer and is enumerable and demonstrable at compile time, like Thrift, gRPC or similar.
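As a sketch of the inotify alternative mentioned above (Linux-specific; /etc/myapp.conf is a placeholder path of my own, not anything from the post), a daemon can watch its configuration file and reload in its normal code flow when the file changes, with no signals involved.

 /* Sketch: reload-on-change via inotify instead of SIGHUP.
  * A real daemon would also handle the file being replaced via rename. */
 #include <stdio.h>
 #include <sys/inotify.h>
 #include <unistd.h>

 int main(void) {
     int ifd = inotify_init1(IN_CLOEXEC);
     if (ifd < 0)
         return 1;

     if (inotify_add_watch(ifd, "/etc/myapp.conf", IN_CLOSE_WRITE) < 0)
         return 1;

     char buf[4096];
     for (;;) {
         ssize_t len = read(ifd, buf, sizeof(buf));   /* blocks until an event */
         if (len > 0)
             printf("configuration changed, reloading\n");
     }
 }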

I hope this article has shown you that signals, while they may ostensibly appear simple, are in reality anything but. The aesthetics of simplicity that promote their use as an API for user space software belie a series of implicit design decisions that do not fit most production use cases in the modern era.

Let’s be clear: there are valid use cases for signals. Signals are fine for basic communication with the kernel about a desired process state when there’s no user space component, for example, that a process should be killed. However, it is difficult to write signal-correct code the first time around when signals are expected to be trapped in user space.

Signals may seem attractive due to their standardization, wide availability and lack of dependencies, but they come with a significant number of pitfalls that will only increase concern as your project grows. Hopefully, this article has provided you with some mitigations and alternative strategies that will allow you to still achieve your goals, but in a safer, less subtly complex and more intuitive way.


To learn more about Meta Open Source, visit our open source site, subscribe to our YouTube channel, or follow us on Twitter, Facebook and LinkedIn.

First seen at developers.facebook.com
