FACEBOOK

Amid Modi visit, Facebook says services restricted in Bangladesh

Bloomberg

Refloat Efforts Suspended; U.S. Seeks to Help: Suez Update

(Bloomberg) — The blockage of the Suez Canal is wreaking havoc on global seaborne trade, raising the prospect of higher inflation as ships ferrying cargoes and commodities are forced to divert. A special dredger has been deployed to free the vessel that has been stuck in the key waterway for days. Natural gas prices have increased and food supply chains may be affected if the blockage persists. Mark Ma, owner of China-based Seabay International Freight Forwarding Ltd., which has 20 to 30 containers waiting to cross the blocked canal, said that if traffic doesn’t resume in a week, “it will be horrible.” Two additional tugs will arrive at the Suez Canal by Sunday to assist in the refloating of the ship, Bernhard Schulte Shipmanagement, technical manager for the vessel, said in a statement on Friday.

The pile-up of ships is creating another setback for global supply chains already strained by the e-commerce boom linked to the pandemic. About 12% of global trade transits the canal, which is so strategic that world powers have fought over it. On Friday, the Biden administration raised concerns about the impact on global energy markets.

Key highlights:

  • Two more tugs will arrive at the Ever Given by Sunday, the ship management company says in a statement
  • Oil tanker diverts; several ships in the Indian Ocean that were bound for Suez change course
  • Food supply chain faces risks
  • The containership could be carrying almost $1 billion of cargo, IHS says
  • Work to dislodge the ship will take until at least the middle of next week
  • Almost 300 vessels have queued up, compared with 238 on Thursday, according to Bloomberg data

Qatar Airways Gets Air Freight Queries (6 a.m. London)

Qatar Airways, one of the world’s largest cargo airlines, said shippers stuck in the canal were sending queries as a precautionary measure. The airline expects “to see firmer interest in the coming days if the situation remains the same,” a spokesperson for the company said in response to questions from Bloomberg.

Timing Couldn’t Be Worse, Moody’s Says

The canal’s temporary closure might affect 10%-15% of world container throughput, Moody’s Investors Service estimated earlier this week. Under normal circumstances, the temporary delays in global supply chains would not be a “big issue,” it said. However, a global shortage in container capacity and low service reliability have made supply chains highly vulnerable to external shocks despite high consumer demand, its analysts said. “The timing of this event could not have been worse,” analysts including Daniel Harlid wrote in a March 25 report.

Insurers May Be on Hook for Millions (12:42 a.m. London)

There were potentially thousands of insurance policies taken out on the steel boxes stacked high on the Ever Given, and they could result in millions of dollars in payouts. The blockage is set to unleash a flood of claims by everyone affected, from those in the shipping industry to those in the commodities business.

Refloat Efforts Suspended (11:25 p.m. London)

The salvage team suspended the refloating operation at midnight local time, according to Inchcape Shipping Services, a maritime services provider. The dredger will continue working, and the next refloating attempt will be made at 2 p.m. local time Saturday with the high tide.

Ship’s Rear Isn’t ‘Fully Stuck’ (10:52 p.m. London)

“We have done a full inspection, and the positive news is that the rear end of the ship isn’t fully stuck in the clay,” said Peter Berdowski, chief executive officer of Boskalis Westminster, the parent company of the elite salvage team. He spoke in an interview on the Nieuwsuur TV program in the Netherlands. “With the two big tugboats that are underway, combined with the dredging, we hope that will be sufficient to get the ship afloat somewhere next week,” he said. Tides are expected to swell Monday night and Tuesday night and into the early hours of Wednesday. If the ship isn’t refloated then, the salvage team will move to “plan B,” which will involve lifting containers off the vessel, he said. “We will start taking containers from the ship anyway this weekend.”

Biden Says U.S. Looking to Help (10:22 p.m. London)

The U.S. is looking into how it can help to unblock the canal, President Joe Biden said. “We have equipment and capacity that most countries don’t have. And we’re seeing what help we can be,” Biden said.

Sea-Doo Maker Pivots to Planes (9 p.m. London)

Sea-Doo maker BRP Inc. has parts from Asian suppliers stuck on vessels jammed in the blockage. The situation prompted the Canadian maker of recreational vehicles to shift to its backup plan: flying another batch of components from Asia to its North American plants. “It’s more expensive, but it’s better than stopping assembly lines,” BRP Chief Executive Officer Jose Boisjoli said Friday in a phone interview.

Ever Given Was Refloated From Stern (8:42 p.m. London)

The elite salvage team working with the Suez Canal Authority was able to float the vessel from its “stern/aft” and released the rudder at approximately 9 p.m. local time, according to Inchcape Shipping Services, a maritime services provider. Another effort will take place using the high tide, with the hope of refloating the vessel entirely.

Canal Authority Says Tug Operations Restart (7:54 p.m. London)

Pulling operations with tug boats to free the ship restarted after dredging operations were completed, the Suez Canal Authority said on its Facebook page.

‘Perfect Storm’ Brewing for Italy’s Ports (7:07 p.m. London)

Once the Suez blockage ends, the huge backlog of ships will create a traffic jam for ports on the Mediterranean. “When traffic will flow again, ships will flood Italian ports,” said Daniele Rossi, chief of the Italian ports association Assoporti. Operational difficulties will make that difficult to “cope” with, he said. “The perfect storm is coming.” About 40% of Italian imports and exports pass through the Suez Canal, according to Assoporti/SRM research on the Italian maritime economy.

Logjam Nears 300 Ships (6:16 p.m. London)

About 293 ships ranging from livestock carriers to liquefied natural gas tankers are waiting to transit the clogged waterway, compared with 238 on Thursday, shipping data compiled by Bloomberg show.

White House Sees Energy Impacts (5:17 p.m. London)

The White House is concerned about the impact on global energy, said Press Secretary Jen Psaki, who added that the Biden administration is monitoring market conditions. “We do see some potential impacts on energy markets,” she told reporters at a briefing on Friday. Earlier, a spokesman for the White House National Security Council said the U.S. government had offered Egypt assistance removing the grounded ship, the Ever Given.

Wind Turbine Projects Seen Delayed (4 p.m. London)

Germany’s Enercon expects delays in wind turbine components from Europe to projects in Asia, according to a company spokesperson. The wind turbine maker also sees risks of congestion at ports once the ships held up at the Suez Canal arrive at their destinations. Enercon is examining to what extent the problem will affect its supply chains.

North Sea Crude Loading Delays Likely (3:28 p.m. London)

At least seven supertankers are expected to load North Sea crude in April, with two or three of them likely to face delays due to the blockage in the Suez Canal, according to tanker fixture reports and ship tracking data compiled by Bloomberg.

Tanker Shares Surge (3:10 p.m. London)

With diversions starting to pop up, the shares of oil tanker companies surged. Frontline Ltd. rose as much as 11% in Oslo, the biggest intraday gain since September. Other owners were jumping too: Euronav NV climbed as much as 7%, DHT Holdings Inc. was also up 7%, and International Seaways Inc. added as much as 6.4%. The prospect of ships taking the longer route around the southern tip of Africa raises the chances of higher earnings for tankers.

British Retailers Say Impact Manageable (3:21 p.m. London)

Dixons Carphone Plc has a small number of containers on the grounded Ever Given but “we don’t believe this will cause any meaningful disruption to our stock levels,” the company said in an emailed statement. While some U.K. grocers are reporting small quantities of stock stuck both on the Ever Given and on some container ships behind it, the products are mostly general merchandise and clothing, which are less time-sensitive than perishable food items.

Tug Boats Get Ready to Try and Tow Ever Given (2:22 p.m. London)

Tug boats are tying themselves up to the Ever Given in order to attempt to tow the container ship, according to Inchcape Shipping Services, a maritime services provider. Suez Canal Authority dredgers were earlier being used to clear away sand, the firm said.

Multiple Ships in Indian Ocean Take Detour (1:58 p.m. London)

Several ships in the Indian Ocean, initially bound for the Suez Canal, have changed course away from the waterway after it became blocked, according to vessel-tracking data compiled by Bloomberg. The vessels include the container ships Ever Greet, HMM Stockholm and OOCL United Kingdom; the vehicle carrier Morning Calm; and the cargo ship Angelic.

U.S.-Asia Naphtha Arbitrage Opens on Suez Canal Blockage: BNEF (9:55 a.m. New York)

The U.S. Gulf Coast-to-East Asia naphtha arbitrage has opened as naphtha shipments from key exporters such as Russia and Algeria are delayed by the blockage in the Suez Canal.

Oil Tanker for North Sea Loading Delayed a Week (12:59 p.m. London)

The supertanker Olympic Lady is expected to reach the North Sea for planned loading around April 26-30, roughly a week’s delay, amid the blockage at the Suez Canal, according to a person familiar with the matter. The very large crude carrier was originally set to have used the canal to reach the North Sea for loading around April 20-25.

Oil Tanker Rates Rise (12:40 p.m. London)

Freight rates have jumped 20% for large oil-product tankers known as LR2s traveling from the Mediterranean in mid-April, according to Torm A/S, one of the largest owners of oil-product tankers in the world. The market is reacting to uncertainty over the duration of the Suez Canal jam, it said. “We are giving several pricing options to go via South Africa,” the company said. A handful of Torm vessels are scheduled to pass through the Suez Canal, and it is in talks with customers about whether to divert them.

Economists Predict Inflation Pressure (12:19 p.m. London)

The blockage adds to supply-chain disruptions that have already cost world trade more than $200 billion since the start of the year, according to Allianz SE calculations. Every week the Suez Canal remains closed adds as much as $10 billion to the bill. Economists predict higher prices as a result. “I’m relatively sanguine about the additional hit to trade,” said Joanna Konings, senior economist at ING. But “with everyone’s tolerance for absorbing higher shipping costs run down, we might see some pass-through from this episode. It’s an inflationary shock that could come right to the consumer.”

Oil Tanker Diverts, May Be First to Do So (11:50 a.m. London)

The oil tanker Marlin Santorini, a 1 million-barrel capacity Suezmax, switched destinations away from the Suez Canal, according to tanker tracking data compiled by Bloomberg. The vessel had been sailing east in the Atlantic Ocean toward the Mediterranean Sea, signaling Port Said at the northern end of the canal. It then turned south and looks to be heading around Africa. Two shipbrokers said they’d seen no other oil tanker diversions to avoid the Suez since the blockage, although multiple other vessel types, including LNG carriers and container ships, have diverted.

Food Supply Chain Faces Risk (11:44 a.m. London)

The Suez blockage may mean limited availability of food, supply delays and higher prices at a time when economies and households are already grappling with rising food inflation and disruptions from Covid-19. Wealthy but food-deficit Gulf states and food aid-dependent Horn of Africa nations are particularly vulnerable to disturbances to grain flows. The canal handles at least 15% of global rice and wheat exports, according to research from Chatham House. “If it’s a delay of a month or longer it will put on a significant price pressure and reduce availability in some places,” Tim Benton, research director in emerging risks at Chatham House in London and a food security expert, said in an interview. “There are lots of compounding issues. The global food system is already under pressure from Covid. And clearly anything that adds a further straw to the camel’s back makes things bad.”

Japan’s Oil Supply Won’t Be Affected (11:40 a.m. London)

The Suez Canal blockage won’t immediately impact Japan’s crude supplies, Finance Minister Taro Aso told reporters in Tokyo. “Unlike in the past, Japan currently has enough of an oil stockpile for around 200 days, so I don’t think this issue will immediately impact Japan’s oil supplies,” Aso said.

Russian Wheat Flows Largely Unaffected (11:35 a.m. London)

The blockage isn’t causing major problems for Russian grain exports because sales of wheat from the world’s top shipper are currently low, said Eduard Zernin, chairman of the Russian Union of Grain Exporters. There’s no sign yet of any significant Russian sales being caught up in the queue, he said. Elsewhere in the Black Sea region, Ukraine’s deputy economy minister and the head of the country’s grain group, which includes the top shippers, said they don’t see any threats to the nation’s exports if the situation is resolved soon.

Europe Natural Gas Prices Rise (11:15 a.m. London)

The prospect of the container vessel blocking the Suez Canal for up to a week boosted European natural gas prices as cargoes laden with fuel destined for the region face severe delays. The blockage may create a supply gap that could be filled by pipeline gas from Russia or U.S. LNG. Benchmark Dutch and U.K. gas for next month both jumped on Friday. Three tankers near the canal’s entry will struggle to deliver LNG from Qatar for scheduled arrivals in early April. Vessels that are already waiting are unlikely to turn around at this stage, said Fauziah Marzuki, an analyst at BloombergNEF.

Dredger Deployed in Effort to Refloat Ship (10:35 a.m. London)

A specialized dredger has been deployed in efforts to dislodge the stuck ship. The Mashhour has completed 87% of its targeted work of removing sand surrounding the vessel, displacing 17,000 cubic meters of material per hour. It started operations 100 meters from the stuck ship on Thursday and can get as close as 10 meters. The Ever Given will start to be pulled once the dredging operations are completed.

Ever Given Owner Plans to Float Vessel Saturday (10:19 a.m. London)

Japan’s Shoei Kisen Kaisha Ltd., which owns the stricken Ever Given, aims to refloat the ship Saturday night Tokyo time, according to a company spokeswoman. Attempts to free the ship with 10 tugboats have failed so far, and the company plans to use two additional ships to help with the effort, the Nikkei reported, citing company officials at a briefing. The company said earlier this week that it was working with local authorities and ship manager Bernhard Schulte Shipmanagement to refloat the vessel, but the situation is “extremely difficult.”

Ikea Supply Chain May Be Affected (10:13 a.m. London)

Swedish furniture giant Ikea has confirmed there are containers with its products on ships that are waiting to make passage via the Suez Canal. “Depending on how this work proceeds and how long it takes to finish the operation, it may create constraints on our supply chain,” a spokesman for Inter Ikea Systems, the franchisor of the Ikea brand, told Bloomberg. Ikea said it’s now considering all supply options to help secure the availability of its products.

Ever Given Could Have Almost $1 Billion Cargo, IHS Says (9:20 a.m. London)

The total cargo value of a containership the size of the Ever Given is almost $1 billion, based on an average value of about $40,000 for the products in an ocean container, according to IHS Markit. In the seven days since the Ever Given ran aground on Tuesday, 49 container ships carrying an estimated 400,000 TEU were set to pass through the Suez Canal in both directions, the consultant said. About 51 million container tons normally pass through the Suez every month, according to a Bernstein report Friday. With passage blocked, the volume of stuck containers would amount to over 200,000 18-ton trucks’ worth, the equivalent of a traffic jam from Chicago to El Paso. Adding in tankers and other ships, that jam would double, they wrote. Once the backlog starts clearing, it will overwhelm terminals in Europe, which are experiencing labor shortages because of Covid-19, said Greg Knowler, senior European editor at JOC by IHS Markit. “Rotterdam and Antwerp expect ship wait times to lengthen, and expect it will take longer to handle ships and clear containers from the yards, and businesses will have to wait longer for their imports,” Knowler said.

Refloat Efforts Resume, Inchcape Says (6:00 a.m. London)

Operations to refloat the Ever Given using tugs and dredgers resumed at 7 a.m. local time, according to Inchcape, a maritime services provider.

At Least 12 U.S. Grain Shipments Impacted (2:48 p.m. HK)

The congestion in the Suez Canal may delay nearly 7% of seaborne U.S. major grain shipments, according to USDA and vessel data analyzed by Bloomberg. Since the Bellatrix left Zen-Noh’s grain export elevator on the Mississippi River in late February, just 12 of 184 bulk carriers and general cargo ships have opted to take the Suez route, as many vessels take the Panama Canal or the route around South Africa to access Asia. More than 80% of the impacted grain shipments are corn, with close to 60% of them on six vessels headed to China. At least one ship, the Ledra, hauling corn to Vietnam, recently diverted toward the route around South Africa.

HMM-Chartered Ship Diverts Around Africa (2:47 p.m. HK)

The Hyundai Prestige container ship is detouring around the Cape of Good Hope to avoid gridlock in the Suez Canal, the vessel’s South Korean charterer HMM Co. said. The ship departed from Southampton, U.K., on Monday and has been told to go around Africa, a spokesman for HMM said. The vessel isn’t part of a scheduled service after its temporary deployment in January to help South Korean exporters, and is scheduled to reach Thailand’s Laem Chabang by late April.

Cheap LNG Shipping Rates Ease Detour Pain (12:37 p.m. HK)

Liquefied natural gas suppliers are beginning to send shipments around Africa, a journey that takes more time but — given current charter rates — isn’t that costly. Unlike oil tanker rates, prices for shipping LNG have remained subdued amid the crisis in Suez. A shift to milder temperatures in Europe and Asia has reduced gas demand, also curbing needs for tankers that ferry the fuel. At least seven LNG vessels have diverted away from their intended paths through the Suez Canal due to its continued blockage, according to Kpler analyst Rebecca Chia. At least two shipments from the U.S. headed to Asia have changed course in the Atlantic toward South Africa, according to Bloomberg ship-tracking data.

Seabay Owner Says Week’s Delay Will Be ‘Horrible’ (12:23 p.m. HK)

Mark Ma, owner of Seabay International Freight Forwarding Ltd., a company in Shenzhen that handles Chinese goods sold on platforms such as Amazon.com Inc., said his company has 20 to 30 containers on the ships waiting to cross the blocked canal. “If it can’t be resumed in a week, it will be horrible,” said Ma. “We will see freight fares spike again. The products are delayed, containers can’t return to China and we can’t deliver more goods.” Detouring ships doesn’t seem like a viable option at the moment, due to the risks of taking unfamiliar routes, limited supplies for the crew and an extended shipment time. “What if the canal got cleared in 8-10 days? You lose even more time,” said Ma.

Crisis Isn’t Deterring Orders for Mega Ships (12:08 p.m. HK)

The container ship blocking the Suez Canal has done little to deter shipping companies from ordering similarly mega-sized vessels. Korea Shipbuilding & Offshore Engineering Co. and Samsung Heavy Industries Co. — two of the world’s three biggest shipbuilders — announced on Friday that they’d won orders worth a combined 3.45 trillion won ($3 billion) to build 25 container vessels that are all longer than the Eiffel Tower. Orders for mega ships have been increasing this year after the lines saw their profits jump in 2020.

Backlog of Vessels Will Take Days to Clear (9:19 a.m. HK)

Even if the Ever Given sails away immediately, there’s a backlog of about 200 vessels of all types that will take days to clear, leading to an ever-increasing pile-up, according to Arthur Richier, a senior freight analyst at Vortexa. That’s assuming an average transit of 50 vessels a day via the canal. Egyptian authorities appear to want to wait until Monday for a higher tide to try to tow the vessel away, indicating that the most realistic return to normal for vessel traffic will only happen in a minimum of 10 days, Richier said.

Ships in Red Sea Seen Leaving if Crisis Lasts 2 Weeks (9:12 a.m. HK)

Ships in the Red Sea will be rerouted only if there is an extended delay in unblocking the Suez Canal, according to Randy Giveans, senior vice president of equity research for energy maritime at Jefferies LLC. So far, only ships outside the Red Sea that were hoping to use the canal are rerouting around the Cape of Good Hope. For vessels already in the area, it would only make a difference if the canal outage were certain to last over two weeks, since that’s how much additional time they would need to get around the Cape.

Heavy-Lift Helicopters May Be Needed to Unload Containers (8:50 a.m. HK)

The failed attempts to move the Ever Given are increasing the odds that heavy-lift helicopters may be needed to unburden it of at least part of its load of 500 containers, according to Nick Sloane, the salvage master responsible for refloating the Costa Concordia, which capsized off Italy in 2012. The so-called sky-crane helicopters, able to lift a load of 25,000 pounds, and Russian MI-26 helicopters would be the only ones able to perform the task. The challenge is to find these helicopters and transport them to the site. There aren’t many that are privately owned, said Keith Sailor, director of commercial operations at Aurora, Oregon-based Columbia Helicopters Inc., a company that operates a fleet of heavy-lift helicopters. “If you can’t find one in the region, you’d need to fly one over there in an Antonov cargo plane,” he said. That could take five to eight days.

Canal Traffic Jam Has Doubled to 238 Ships (5:37 p.m. London)

The number of ships waiting to enter the Suez Canal is growing as the waterway remains blocked. Data compiled by Bloomberg show there were 238 vessels queued up Thursday, compared with 186 counted on Wednesday and around 100 at the start of the blockage.

Not Much Room to Maneuver (3:39 p.m. London)

It’s no wonder the Ever Given stuck in the Suez Canal is creating such a headache. The key trade route is narrow — less than 675 feet wide (205 meters) in some places — and can be difficult to navigate. Work to refloat the giant container ship — about a quarter mile long (400 meters) — and allow passage for oceangoing carriers hauling almost $10 billion of everything from commodities to consumer goods continued without success on Thursday in Egypt. The blockage highlights a major risk faced by the shipping industry as more and more vessels, which are getting bigger and bigger, transit maritime choke points including the Suez, the Panama Canal and the Strait of Hormuz.

©2021 Bloomberg L.P.


FACEBOOK

Upcoming Restriction Period for US ads about social issues, elections, or politics


In recent years, Meta has developed a comprehensive approach to protecting elections on our technologies. These efforts continue in advance of the US 2022 Midterms, which you can read more about in our Newsroom.

Implementing a restriction period for ads about social issues, elections or politics in the US

Consistent with our approach during the US 2020 General Election, we are introducing a restriction period for ads about social issues, elections or politics in the US. The restriction period will run from 12:01 AM PT on Tuesday, November 1, 2022 through 11:59 PM PT on Tuesday, November 8, 2022.

We are putting this restriction period in place again because we found that the restriction period achieves the right balance of giving campaigns a voice while providing additional time for scrutiny of issue, electoral, and political ads in the Ad Library. We are sharing the requirements and key dates ahead of time, so advertisers are able to prepare their campaigns in the months and weeks ahead.

What to know about the ad restriction period in the US

We will not allow any new ads about social issues, elections or politics in the US from 12:01 AM PT on Tuesday, November 1, 2022 through 11:59 PM PT on Tuesday, November 8, 2022.

To run ads about social issues, elections or politics in the US during the restriction period, the ads must have been created with a valid disclaimer and have delivered an impression prior to 12:01 AM PT on Tuesday, November 1, 2022. Even then, only limited editing capabilities will be available during the restriction period.


What advertisers can do during the restriction period for eligible ads:

  • Edit bid amount, budget amount and scheduled end date
  • Pause and unpause eligible ads that have already served at least 1 impression with a valid disclaimer prior to the restriction period going into effect

What advertisers cannot do during the restriction period for eligible ads includes, but is not limited to:

  • Editing certain aspects of eligible ads, such as ad creative (including ad copy, image/video assets, website URL)
  • Editing targeting, placement, optimization or campaign objective
  • Removing or adding a disclaimer
  • Copying, duplicating or boosting ads

See the Help Center for detailed requirements of what is or isn’t allowed during the restriction period.

Planning ahead for key dates

Keep in mind the following dates as you plan your campaign to avoid delays or disapprovals that may prevent your ads from running during the restriction period:

  • By Tuesday, October 18, 2022: Complete the ad authorization process to get authorized to run ads about social issues, elections or politics, which includes setting up an approved disclaimer for your ads.

  • By Tuesday, October 25, 2022: Submit your issue, electoral or political ads in order to best ensure that your ads are live and have delivered at least 1 impression with a valid disclaimer before the restriction period begins.
    • Please ensure that you add your approved disclaimer to these ads by choosing ISSUES_ELECTIONS_POLITICS in the special_ad_categories field, as shown in the sketch after this list. You will not be able to add a disclaimer after 12:01 AM PT on Tuesday, November 1, 2022.

  • Between Tuesday, November 1, 2022 and Tuesday, November 8, 2022: The ad restriction period will be in effect. We will not allow any new ads to run about social issues, elections or politics in the US starting 12:01 AM PT on Tuesday, November 1 through 11:59 PM PT on Tuesday, November 8, 2022.
  • At 12:00 AM PT on Wednesday, November 9, 2022: We will allow new ads about social issues, elections or politics to be published.
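For illustration only, here is a hedged sketch (in C with libcurl) of creating a campaign tagged with the ISSUES_ELECTIONS_POLITICS special ad category through the Marketing API. The Graph API version, objective, account ID and access token below are placeholders, not values from this announcement; consult the Marketing API documentation for the exact fields your campaign needs.

 /* Hedged sketch only: API version, field values and credentials are placeholders. */
 #include <curl/curl.h>
 #include <stdio.h>

 int main(void) {
     CURL *curl = curl_easy_init();
     if (!curl)
         return 1;

     /* Substitute a real ad account ID. */
     curl_easy_setopt(curl, CURLOPT_URL,
                      "https://graph.facebook.com/v15.0/act_<AD_ACCOUNT_ID>/campaigns");

     /* Tag the campaign with the special ad category so the disclaimer
      * requirements described above apply to its ads. */
     curl_easy_setopt(curl, CURLOPT_POSTFIELDS,
                      "name=Example%20Issue%20Campaign"
                      "&objective=OUTCOME_AWARENESS"
                      "&status=PAUSED"
                      "&special_ad_categories=[\"ISSUES_ELECTIONS_POLITICS\"]"
                      "&access_token=<ACCESS_TOKEN>");

     CURLcode res = curl_easy_perform(curl);
     if (res != CURLE_OK)
         fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

     curl_easy_cleanup(curl);
     return res == CURLE_OK ? 0 : 1;
 }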

As the restriction period approaches, we encourage you to review these ad restriction period best practices to properly prepare ahead of time.

We will continue to provide updates on our approach to elections integrity or on any changes regarding the restriction period via this blog.

Visit the Elections Hub or our FAQ for more advertising resources.

First seen at developers.facebook.com


FACEBOOK

Signals in prod: dangers and pitfalls


In this blog post, Chris Down, a Kernel Engineer at Meta, discusses the pitfalls of using Linux signals in Linux production environments and why developers should avoid using signals whenever possible.

What are Linux Signals?

A signal is an event that Linux systems generate in response to some condition. Signals can be sent by the kernel to a process, by one process to another process, or by a process to itself. Upon receipt of a signal, a process may take action.
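As a concrete illustration, here is a minimal, hedged sketch in C of those delivery paths: a process installs a handler for SIGUSR1, sends the signal to itself, and could just as well be signaled externally. The handler only sets a flag, which is the safe pattern discussed later in this post.

 #include <signal.h>
 #include <stdio.h>

 static volatile sig_atomic_t got_usr1;

 static void on_usr1(int sig __attribute__((unused))) {
     got_usr1 = 1; /* only async-signal-safe work belongs in a handler */
 }

 int main(void) {
     struct sigaction sa = { .sa_handler = on_usr1 };
     sigaction(SIGUSR1, &sa, NULL);

     /* A process signaling itself... */
     raise(SIGUSR1);
     /* ...another process could do the same with kill(pid, SIGUSR1) or
      * `kill -USR1 <pid>` from a shell, and the kernel generates signals
      * on its own for conditions like a dropped terminal (SIGHUP) or an
      * invalid memory access (SIGSEGV). */

     if (got_usr1)
         printf("received SIGUSR1\n");
     return 0;
 }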

Signals are a core part of Unix-like operating environments and have existed since more or less the dawn of time. They are the plumbing for many of the core components of the operating system—core dumping, process life cycle management, etc.—and in general, they’ve held up pretty well in the fifty or so years that we have been using them. As such, when somebody suggests that using them for interprocess communication (IPC) is potentially dangerous, one might think these are the ramblings of someone desperate to reinvent the wheel. However, this article is intended to demonstrate cases where signals have been the cause of production issues and offer some potential mitigations and alternatives.

Signals may appear attractive due to their standardization, wide availability and the fact that they don’t require any additional dependencies outside of what the operating system provides. However, they can be difficult to use safely. Signals rest on a vast number of assumptions which one must be careful to validate against one's own requirements, and where they don't match, one must be careful to configure things correctly. In reality, many applications, even widely known ones, do not do so, and may have hard-to-debug incidents in the future as a result.

Let us look into a recent incident that occurred in the Meta production environment, reinforcing the pitfalls of using signals. We’ll go briefly over the history of some signals and how they led us to where we are today, and then we’ll contrast that with our current needs and issues that we’re seeing in production.


The Incident

First, let’s rewind a bit. The LogDevice team cleaned up their codebase, removing unused code and features. One of the features that was deprecated was a type of log that documents certain operations performed by the service. This feature eventually became redundant, had no consumers and as such was removed. You can see the change here on GitHub. So far, so good.

The next little while after the change passed without much to speak of: production continued ticking on steadily and serving traffic as usual. A few weeks later, a report came in that service nodes were being lost at a staggering rate. It was something to do with the rollout of the new release, but what exactly was wrong was unclear. What was different now that had caused things to fall over?

The team in question narrowed the problem to the code change we mentioned earlier, deprecating these logs. But why? What’s wrong with that code? If you don’t already know the answer, we invite you to look at that diff and try to work out what’s wrong because it’s not immediately obvious, and it’s a mistake anyone could make.

logrotate, Enter the Ring

logrotate is more or less the standard tool for log rotation when using Linux. It’s been around for almost thirty years now, and the concept is simple: manage the life cycle of logs by rotating and vacuuming them.

logrotate doesn’t send any signals by itself, so you won’t find much, if anything, about them in the logrotate man page or its documentation. However, logrotate can take arbitrary commands to execute before or after its rotations. As a basic example, the default logrotate configuration in CentOS contains this:

 /var/log/cron /var/log/maillog /var/log/messages /var/log/secure /var/log/spooler {
     sharedscripts
     postrotate
         /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
     endscript
 }

A bit brittle, but we’ll forgive that and assume that this works as intended. This configuration says that after logrotate rotates any of the files listed, it should send SIGHUP to the pid contained in /var/run/syslogd.pid, which should be that of the running syslogd instance.

This is all well and good for something with a stable public API like syslog, but what about something internal where the implementation of SIGHUP is an internal implementation detail that could change at any time?

A History of Hangups

One of the problems here is that, except for signals which cannot be caught in user space and thus have only one meaning, like SIGKILL and SIGSTOP, the semantic meaning of signals is up to application developers and users to interpret and program. In some cases, the distinction is largely academic, like SIGTERM, which is pretty much universally understood to mean “terminate gracefully as soon as possible.” However, in the case of SIGHUP, the meaning is significantly less clear.

SIGHUP was invented for serial lines and was originally used to indicate that the other end of the connection had dropped the line. Nowadays, we still carry our lineage with us of course, so SIGHUP is still sent for its modern equivalent: where a pseudo or virtual terminal is closed (hence tools like nohup, which mask it).

In the early days of Unix, there was a need to implement daemon reloading. This usually consists at least of configuration/log file reopening without restarting, and signals seemed like a dependency-free way to achieve that. Of course, there was no signal for such a thing, but as these daemons have no controlling terminal, there should be no reason to receive SIGHUP, so it seemed like a convenient signal to piggyback onto without any obvious side effects.


There is a small hitch with this plan though. The default state for signals is not “ignored,” but signal-specific. So, for example, programs don’t have to configure SIGTERM manually to terminate their application. As long as they don’t set any other signal handler, the kernel just terminates their program for free, without any code needed in user space. Convenient!

What’s not so convenient though, is that SIGHUP also has the default behavior of terminating the program immediately. This works great for the original hangup case, where these applications likely aren’t needed anymore, but is not so great for this new meaning.

This would be fine of course, if we removed all the places which could potentially send SIGHUP to the program. The problem is that in any large, mature codebase, that is difficult. SIGHUP is not like a tightly controlled IPC call which you can easily grep the codebase for. Signals can come from anywhere, at any time, and there are few checks on their operation (other than the most basic “are you this user or have CAP_KILL“). The bottom line is that it’s hard to determine where signals could come from, but with more explicit IPC, we would know that this signal doesn’t mean anything to us and should be ignored.


From Hangup to Hazard

By now, I suppose you may have started to guess what happened. A LogDevice release started one fateful afternoon containing the aforementioned code change. At first, nothing had gone awry, but at midnight the next day, everything mysteriously started falling over. The reason is the following stanza in the machine’s logrotate configuration, which sends a now unhandled (and therefore fatal) SIGHUP to the logdevice daemon:

 /var/log/logdevice/audit.log {
   daily
   # [...]
   postrotate
     pkill -HUP logdeviced
   endscript
 }

Missing just one short stanza of a logrotate configuration is incredibly easy and common when removing a large feature. Unfortunately, it’s also hard to be certain that every last vestige of its existence was removed at once. Even in cases that are easier to validate than this, it’s common to mistakenly leave remnants when doing code cleanup. Still, usually, it’s without any destructive consequences, that is, the remaining detritus is just dead or no-op code.


Conceptually, the incident itself and its resolution are simple: don’t send SIGHUP, and spread LogDevice actions out more over time (that is, don’t run this at midnight on the dot). However, it’s not just this one incident’s nuances that we should focus on here. This incident, more than anything, has to serve as a platform to discourage the use of signals in production for anything other than the most basic, essential cases.

The Dangers of Signals

What Signals are Good For

First, using signals as a mechanism to effect changes in the process state of the operating system is well founded. This includes signals like SIGKILL, which is impossible to install a signal handler for and does exactly what you would expect, and the kernel-default behavior of SIGABRT, SIGTERM, SIGINT, SIGSEGV, SIGQUIT and the like, which are generally well understood by users and programmers.

What these signals all have in common is that once you’ve received them, they’re all progressing towards a terminal end state within the kernel itself. That is, no more user space instructions will be executed once you get a SIGKILL or SIGTERM with no user space signal handler.

A terminal end state is important because it usually means you’re working towards decreasing the complexity of the stack and code currently being executed. Other desired states often result in the complexity actually becoming higher and harder to reason about as concurrency and code flow become more muddled.

Dangerous Default Behavior

You may notice that we didn’t mention some other signals that also terminate by default. Here’s a list of all of the standard signals that terminate by default (excluding core dump signals like SIGABRT or SIGSEGV, since they’re all sensible):

  • SIGALRM
  • SIGEMT
  • SIGHUP
  • SIGINT
  • SIGIO
  • SIGKILL
  • SIGLOST
  • SIGPIPE
  • SIGPOLL
  • SIGPROF
  • SIGPWR
  • SIGSTKFLT
  • SIGTERM
  • SIGUSR1
  • SIGUSR2
  • SIGVTALRM

At first glance, these may seem reasonable, but here are a few outliers:

  • SIGHUP: If this was used only as it was originally intended, defaulting to terminate would be sensible. With the current mixed usage meaning “reopen files,” this is dangerous.
  • SIGPOLL and SIGPROF: These are in the bucket of “these should be handled internally by some standard function rather than your program.” However, while probably harmless, the default behavior to terminate still seems nonideal.
  • SIGUSR1 and SIGUSR2: These are “user-defined signals” that you can ostensibly use however you like. But because these are terminal by default, if you implement USR1 for some specific need and later don’t need that, you can’t just safely remove the code. You have to consciously think to explicitly ignore the signal. That’s really not going to be obvious even to every experienced programmer.

So that’s almost one-third of terminal signals, which are at best questionable and, at worst, actively dangerous as a program’s needs change. Worse still, even the supposedly “user-defined” signals are a disaster waiting to happen when someone forgets to explicitly SIG_IGN them. Even an innocuous SIGUSR1 or SIGPOLL may cause incidents.
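To make that failure mode concrete, here is a small hedged sketch: a long-running program that no longer has any use for SIGUSR1. If an old deployment script or logrotate hook still sends `kill -USR1`, the default disposition terminates the process; a single explicit SIG_IGN removes the hazard.

 #include <signal.h>
 #include <unistd.h>

 int main(void) {
     /* Without this line, `kill -USR1 <pid>` kills the process outright,
      * because SIGUSR1 terminates by default. With it, the stray signal
      * is simply ignored. */
     signal(SIGUSR1, SIG_IGN);

     for (;;)
         pause(); /* stand-in for the real work loop */
 }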

This is not simply a question of familiarity. No matter how well you know how signals work, it’s still extremely hard to write signal-correct code the first time around because, despite their appearance, signals are far more complex than they seem.

Code flow, Concurrency, and the Myth of SA_RESTART

Programmers generally do not spend their entire day thinking about the inner workings of signals. This means that when it comes to actually implementing signal handling, they often subtly do the wrong thing.

I’m not even talking about the “trivial” cases, like safety in a signal handling function, which is mostly solved by only bumping a sig_atomic_t, or using C++’s atomic signal fence stuff. No, that’s mostly easily searchable and memorable as a pitfall by anyone after their first time through signal hell. What’s a lot harder is reasoning about the code flow of the nominal portions of a complex program when it receives a signal. Doing so requires either constantly and explicitly thinking about signals at every part of the application life cycle (hey, what about EINTR, is SA_RESTART enough here? What flow should we go into if this terminates prematurely? I now have a concurrent program, what are the implications of that?), or setting up a sigprocmask or pthread_sigmask for some part of your application life cycle and praying that the code flow never changes (which is certainly not a good guess in an atmosphere of fast-paced development). signalfd or running sigwaitinfo in a dedicated thread can help somewhat here, but both of these have enough edge cases and usability concerns to make them hard to recommend.
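For completeness, here is a hedged sketch of the signalfd approach mentioned above: block SIGHUP, then read its delivery as ordinary data from a file descriptor that can sit in a normal poll/epoll loop. The edge cases (other threads not blocking the signal, signals arriving before the mask is installed, and so on) are exactly the usability concerns described in this section.

 #include <sys/signalfd.h>
 #include <signal.h>
 #include <stdio.h>
 #include <unistd.h>

 int main(void) {
     sigset_t mask;
     sigemptyset(&mask);
     sigaddset(&mask, SIGHUP);

     /* Block normal delivery so the signal is only reported via the fd.
      * Every thread must block it, or another thread may still take it. */
     if (sigprocmask(SIG_BLOCK, &mask, NULL) < 0)
         return 1;

     int sfd = signalfd(-1, &mask, SFD_CLOEXEC);
     if (sfd < 0)
         return 1;

     for (;;) {
         struct signalfd_siginfo si;
         ssize_t n = read(sfd, &si, sizeof(si)); /* usually driven by poll/epoll */
         if (n == sizeof(si) && si.ssi_signo == SIGHUP) {
             printf("got SIGHUP from pid %d, reloading\n", (int)si.ssi_pid);
             /* reload configuration here, in ordinary (non-handler) context */
         }
     }
 }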

We like to believe that most experienced programmers know by now that even a facetious example of correctly writing thread-safe code is very hard. Well, if you thought correctly writing thread-safe code was hard, signals are significantly harder. Signal handlers must rely only on strictly lock-free code and atomic data structures, both because the main flow of execution is suspended and we don’t know what locks it’s holding, and because the main flow of execution could be performing non-atomic operations. They must also be fully reentrant, that is, they must be able to nest within themselves since signal handlers can overlap if a signal is sent multiple times (or even with one signal, with SA_NODEFER). That’s one of the reasons why you can’t use functions like printf or malloc in a signal handler: they rely on global mutexes for synchronization. If you were holding that lock when the signal was received and then called a function requiring that lock again, your application would end up deadlocked. This is really, really hard to reason about. That’s why many people simply write something like the following as their signal handling:

 static volatile sig_atomic_t received_sighup;

 static void sighup(int sig __attribute__((unused))) { received_sighup = 1; }

 static int configure_signal_handlers(void) {
   return sigaction(
     SIGHUP,
     &(const struct sigaction){.sa_handler = sighup, .sa_flags = SA_RESTART},
     NULL);
 }

 int main(int argc, char *argv[]) {
   if (configure_signal_handlers()) {
     /* failed to set handlers */
   }

   /* usual program flow */

   if (received_sighup) {
     /* reload */
     received_sighup = 0;
   }

   /* usual program flow */
 }

The problem is that, while this, signalfd, or other attempts at async signal handling might look fairly simple and robust, it ignores the fact that the point of interruption is just as important as the actions performed after receiving the signal. For example, suppose your user space code is doing I/O or changing the metadata of objects that come from the kernel (like inodes or FDs). In this case, you’re probably actually in a kernel space stack at the time of interruption. For example, here’s how a thread might look when it’s trying to close a file descriptor:

 # cat /proc/2965230/stack
  [<0>] schedule+0x43/0xd0
  [<0>] io_schedule+0x12/0x40
  [<0>] wait_on_page_bit+0x139/0x230
  [<0>] filemap_write_and_wait+0x5a/0x90
  [<0>] filp_close+0x32/0x70
  [<0>] __x64_sys_close+0x1e/0x50
  [<0>] do_syscall_64+0x4e/0x140
  [<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9

Here, __x64_sys_close is the x86_64 variant of the close system call, which closes a file descriptor. At this point in its execution, we’re waiting for the backing storage to be updated (that’s this wait_on_page_bit). Since I/O work is usually several orders of magnitude slower than other operations, schedule here is a way of voluntarily hinting to the kernel’s CPU scheduler that we are about to perform a high-latency operation (like disk or network I/O) and that it should consider finding another process to schedule instead of the current process for now. This is good, as it allows us to signal to the kernel that it is a good idea to go ahead and pick a process that will actually make use of the CPU instead of wasting time on one which can’t continue until it’s finished waiting for a response from something that may take a while.

Imagine that we send a signal to the process we were running. The signal that we have sent has a user space handler in the receiving thread, so we’ll resume in user space. One of the many ways this race can end up is that the kernel will try to come out of schedule, further unwind the stack and eventually return an errno of ERESTARTSYS or EINTR to user space to indicate that we were interrupted. But how far did we get in closing it? What’s the state of the file descriptor now?

Now that we’ve returned to user space, we’ll run the signal handler. When the signal handler exits, we’ll propagate the error to the user space libc’s close wrapper, and then to the application, which, in theory, can do something about the situation encountered. We say “in theory” because it’s really hard to know what to do about many of these situations with signals, and many services in production do not handle the edge cases here very well. That might be fine in some applications where data integrity isn’t that important. However, in production applications that do care about data consistency and integrity, this presents a significant problem: the kernel doesn’t expose any granular way to understand how far it got, what it achieved and didn’t and what we should actually do about the situation. Even worse, if close returns with EINTR, the state of the file descriptor is now unspecified:

“If close() is interrupted by a signal [...] the state of [the file descriptor] is unspecified.”

Good luck trying to reason about how to handle that safely and securely in your application. In general, handling EINTR even for well-behaved syscalls is complicated. There are plenty of subtle issues forming a large part of the reason why SA_RESTART is not enough. Not all system calls are restartable, and expecting every single one of your application’s developers to understand and mitigate the deep nuances of getting a signal for every single syscall at every single call site is asking for outages. From man 7 signal:


“The following interfaces are never restarted after being interrupted by a signal handler, regardless of the use of SA_RESTART; they always fail with the error EINTR [...]”
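As an illustration of why SA_RESTART is not a complete answer, here is a hedged sketch of the manual retry loop that every interruptible call site otherwise needs, along with a reminder that close() is the one place where the obvious retry is wrong, since the descriptor's state after EINTR is unspecified.

 #include <errno.h>
 #include <unistd.h>

 /* Retry a read that may be interrupted by a signal handler. */
 static ssize_t read_retry(int fd, void *buf, size_t len) {
     ssize_t ret;
     do {
         ret = read(fd, buf, len);
     } while (ret < 0 && errno == EINTR);
     return ret;
 }

 /* close() must NOT be retried the same way: on EINTR the state of the
  * descriptor is unspecified, and a second close() may race with another
  * thread that has already reused the same fd number. */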

Likewise, using a sigprocmask and expecting code flow to remain static is asking for trouble as developers do not typically spend their lives thinking about the bounds of signal handling or how to produce or preserve signal-correct code. The same goes for handling signals in a dedicated thread with sigwaitinfo, which can easily end up with GDB and similar tools being unable to debug the process. Subtly wrong code flows or error handling can result in bugs, crashes, difficult to debug corruptions, deadlocks and many more issues that will send you running straight into the warm embrace of your preferred incident management tool.

High Complexity in Multithreaded Environments

If you thought all this talk of concurrency, reentrancy and atomicity was bad enough, throwing multithreading into the mix makes things even more complicated. This is especially important when considering the fact that many complex applications run separate threads implicitly, for example, as part of jemalloc, GLib, or similar. Some of these libraries even install signal handlers themselves, opening a whole other can of worms.

Overall, man 7 signal has this to say on the matter:

“A signal may be generated (and thus pending) for a process as a whole (e.g., when sent using kill(2)) or for a specific thread [...] If more than one of the threads has the signal unblocked, then the kernel chooses an arbitrary thread to which to deliver the signal.”


More succinctly, “for most signals, the kernel sends the signal to any thread that doesn’t have that signal blocked with sigprocmask“. SIGSEGV, SIGILL and the like resemble traps, and have the signal explicitly directed at the offending thread. However, despite what one might think, most signals cannot be explicitly sent to a single thread in a thread group, even with tgkill or pthread_kill.

This means that you can’t trivially change overall signal handling characteristics as soon as you have a set of threads. If a service needs to do periodic signal blocking with sigprocmask in the main thread, you need to somehow communicate to other threads externally about how they should handle that. Otherwise, the signal may be swallowed by another thread, never to be seen again. Of course, you can block signals in child threads to avoid this, but if they need to do their own signal handling, even for primitive things like waitpid, it will end up making things complex.
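One commonly suggested pattern, sketched below under the usual caveats, is to block the signals of interest in main() before any threads are created (so every thread inherits the mask) and service them synchronously from one dedicated thread with sigwaitinfo. This avoids the arbitrary-thread delivery problem, but as noted above it complicates debugging and still relies on no library thread quietly unblocking the signal.

 #include <pthread.h>
 #include <signal.h>
 #include <stdio.h>

 static sigset_t handled;

 static void *signal_thread(void *arg __attribute__((unused))) {
     for (;;) {
         siginfo_t si;
         int sig = sigwaitinfo(&handled, &si); /* synchronous, no handler runs */
         if (sig == SIGTERM) {
             printf("shutting down (signal from pid %d)\n", (int)si.si_pid);
             break;
         }
         /* handle SIGHUP or others here, in ordinary thread context */
     }
     return NULL;
 }

 int main(void) {
     sigemptyset(&handled);
     sigaddset(&handled, SIGTERM);
     sigaddset(&handled, SIGHUP);

     /* Mask before creating threads so every thread inherits the mask. */
     pthread_sigmask(SIG_BLOCK, &handled, NULL);

     pthread_t tid;
     pthread_create(&tid, NULL, signal_thread, NULL);

     /* ... the rest of the application runs in other threads ... */

     pthread_join(tid, NULL);
     return 0;
 }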

Just as with everything else here, these aren’t technically insurmountable problems. However, one would be negligent in ignoring the fact that the complexity of synchronization required to make this work correctly is burdensome and lays the groundwork for bugs, confusion and worse.

Lack of Definition and Communication of Success or Failure

Signals are propagated asynchronously in the kernel. The kill syscall returns as soon as the pending signal is recorded for the process or thread’s task_struct in question. Thus, there’s no guarantee of timely delivery, even if the signal isn’t blocked.

Even if there is timely delivery of the signal, there’s no way to communicate back to the signal issuer what the status of their request for action is. As such, any meaningful action should not be delivered by signals, since they only implement fire-and-forget with no real mechanism to report the success or failure of delivery and subsequent actions. As we’ve seen above, even seemingly innocuous signals can be dangerous when they are not configured in user space.


Anyone using Linux for long enough has undoubtedly run into a case where they want to kill some process but find that the process is unresponsive even to supposedly always fatal signals like SIGKILL. The problem is that misleadingly, kill(1)’s purpose isn’t to kill processes, but just to queue a request to the kernel (with no indication about when it will be serviced) that someone has requested some action to be taken.

The kill syscall’s job is to mark the signal as pending in the kernel’s task metadata, which it does successfully even when a SIGKILL task doesn’t die. In the case of SIGKILL in particular, the kernel guarantees that no more user mode instructions will be executed, but we may still have to execute instructions in kernel mode to complete actions that otherwise may result in data corruption or to release resources. For this reason, kill still succeeds even if the target is in D (uninterruptible sleep) state. kill itself doesn’t fail unless you provided an invalid signal, you don’t have permission to send that signal, or the pid that you requested to send a signal to does not exist. It is thus not useful for reliably propagating non-terminal states to applications.
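A hedged sketch of what a successful kill() actually tells you: only that the request was queued, not that anything happened in the target. The common `kill(pid, 0)` probe is likewise only an existence and permission check.

 #include <errno.h>
 #include <signal.h>
 #include <stdio.h>
 #include <sys/types.h>

 /* Returns 0 if the signal was queued; that is all success means. */
 static int request_terminate(pid_t pid) {
     if (kill(pid, SIGTERM) == 0)
         return 0; /* queued, but the target may never act on it */

     switch (errno) {
     case EINVAL: fprintf(stderr, "invalid signal\n"); break;
     case EPERM:  fprintf(stderr, "no permission to signal %d\n", (int)pid); break;
     case ESRCH:  fprintf(stderr, "no such process %d\n", (int)pid); break;
     }
     return -1;
 }

 /* kill(pid, 0) performs the same checks without sending anything. */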

In Conclusion

  • Signals are fine for terminal state handled purely in-kernel with no user space handler. For signals that you actually would like to immediately kill your program, leave those signals alone for the kernel to handle. This also means that the kernel may be able to exit early from its work, freeing up your program resources more quickly, whereas a user space IPC request would have to wait for the user space portion to start executing again.
  • A way to avoid getting into trouble handling signals is to not handle them at all. However, for applications handling state processing that must do something about cases like SIGTERM, ideally use a high-level API like folly::AsyncSignalHandler where a number of the warts have already been made more intuitive.

  • Avoid communicating application requests with signals. Use self-managed notifications (like inotify) or user space RPC with a dedicated part of the application life cycle to handle it instead of relying on interrupting the application.
  • Where possible, limit the scope of signals to a subsection of your program or threads with sigprocmask, reducing the amount of code that needs to be regularly scrutinized for signal-correctness. Bear in mind that if code flows or threading strategies change, the mask may not have the effect you intended.
  • At daemon start, mask terminal signals that are not uniformly understood and could be repurposed at some point in your program to avoid falling back to kernel default behavior. My suggestion is the following:
 signal(SIGHUP, SIG_IGN);
 signal(SIGQUIT, SIG_IGN);
 signal(SIGUSR1, SIG_IGN);
 signal(SIGUSR2, SIG_IGN);

Signal behavior is extremely complicated to reason about even in well-authored programs, and its use presents an unnecessary risk in applications where other alternatives are available. In general, do not use signals for communicating with the user space portion of your program. Instead, either have the program transparently handle events itself (for example, with inotify), or use user space communication that can report back errors to the issuer and is enumerable and demonstrable at compile time, like Thrift, gRPC or similar.
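As an example of having the program transparently handle events itself, here is a hedged sketch of a configuration reload driven by inotify instead of SIGHUP. The file path and reload function are placeholders; a real service would also handle editors and config-management tools that replace the file rather than rewriting it in place.

 #include <sys/inotify.h>
 #include <limits.h>
 #include <stdio.h>
 #include <unistd.h>

 static void reload_config(void) { /* placeholder for the real reload logic */ }

 int main(void) {
     int fd = inotify_init1(IN_CLOEXEC);
     if (fd < 0)
         return 1;

     /* Watch the config file for writes; no signal needs to be sent at all. */
     if (inotify_add_watch(fd, "/etc/myservice.conf", IN_CLOSE_WRITE | IN_MODIFY) < 0)
         return 1;

     char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));
     for (;;) {
         ssize_t len = read(fd, buf, sizeof(buf)); /* typically multiplexed with epoll */
         if (len > 0) {
             reload_config();
             printf("configuration reloaded\n");
         }
     }
 }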

I hope this article has shown you that signals, while they may ostensibly appear simple, are in reality anything but. The aesthetics of simplicity that promote their use as an API for user space software belie a series of implicit design decisions that do not fit most production use cases in the modern era.

Let’s be clear: there are valid use cases for signals. Signals are fine for basic communication with the kernel about a desired process state when there’s no user space component, for example, that a process should be killed. However, it is difficult to write signal-correct code the first time around when signals are expected to be trapped in user space.

Signals may seem attractive due to their standardization, wide availability and lack of dependencies, but they come with a significant number of pitfalls that will only increase concern as your project grows. Hopefully, this article has provided you with some mitigations and alternative strategies that will allow you to still achieve your goals, but in a safer, less subtly complex and more intuitive way.


To learn more about Meta Open Source, visit our open source site, subscribe to our YouTube channel, or follow us on Twitter, Facebook and LinkedIn.

First seen at developers.facebook.com


FACEBOOK

Meet the Developers – Linux Kernel Team (David Vernet)


Credit: Larry Ewing (lewing@isc.tamu.edu) and The GIMP for the original design of Tux the penguin.

Intro

For today’s interview, we have David Vernet, a core systems engineer on the Kernel team at Meta. He works on the BPF (Berkeley Packet Filter) and the Linux kernel scheduler. This series highlights Meta Software Engineers who contribute to the Linux kernel. The Meta Linux Kernel team works with the broader Linux community to add new features to the kernel and makes sure that the kernel works well in Meta production data centers. Engineers on the team work with peers in the industry to make the kernel better for Meta’s workloads and to make Linux better for everyone.

Tell us about yourself.

I’m a systems engineer who’s spent a good chunk of his career in the kernel space, and some time in the user-space as well working on a microkernel. Right now, I’m focusing most of my time on BPF and the Linux kernel scheduler.

I started my career as a web developer after getting a degree in math. After going to grad school, I realized that I was happiest when hacking on low-level systems and figuring out how computers work.

As a kernel developer at Meta, what does your typical day look like?

I’m not a maintainer of any subsystems in the kernel, so my typical day is filled with almost exclusively coding and engineering. That being said, participating in the upstream Linux kernel community is one of the coolest parts of being on the kernel team, so I still spend some time reading over upstream discussions. A typical day goes something like this:

  1. Read over some of the discussions taking place on various upstream lists, such as BPF and mm. I usually spend about 30-60 minutes or so per day on this, though it depends on the day.

  2. Hack on the project that I’m working on. Lately, that’s adding a user-space ringbuffer map type to BPF.

  3. Work on drafting an article for lwn.net.

What have you been excited about or incredibly proud of lately?

I recently submitted a patch-set to enable a new map type in BPF. This allows user-space to publish messages to BPF programs in the kernel over the ringbuffer. This map type is exciting because it sets the stage to enable frameworks for user-space to drive logic in BPF programs in a performant way.

Is there something especially exciting about being a kernel developer at a company like Meta?

The Meta kernel team has a strong upstream-first culture. Bug fixes that we find in our Meta kernel, and features that we’d like to add, are almost always first submitted to the upstream kernel, and then they are backported to our internal kernel.

Do you have a favorite part of the kernel dev life cycle?

I enjoy architecting and designing APIs. Kernel code can never crash and needs to be able to run forever. I find it gratifying to architect systems in the kernel that make it easy to reason about correctness and robustness and provide intuitive APIs that make it easy for other parts of the kernel to use your code.

I also enjoy iterating with the upstream community. It’s great that your patches have a whole community of people looking at them to help you find bugs in your code and suggest improvements that you may never have considered on your own. A lot of people find this process to be cumbersome, but I find that it’s a small price to pay for what you get out of it.

Tell us a bit about the topic you presented at the Linux Plumbers Conference this year.

We presented the live patch feature in the Linux kernel, describing how we have utilized it at Meta and how our hyper-scale has shown some unique challenges with the feature.


What are some of the misconceptions about kernel or open source software development that you have encountered in your career?

The biggest misconception is that it’s an exclusive, invite-only club to contribute to the Linux kernel. You certainly must understand operating systems to be an effective contributor and be ready to receive constructive criticism when there is scope for improvement in your code. Still, the community always welcomes people who come in with an open mind and want to contribute.

What resources are helpful in getting started in kernel development?

There is a lot of information out there that people have written on how to get integrated into the Linux kernel community. I wrote a blog post on how to get plugged into Linux kernel upstream mailing list discussions, and another on how to submit your first patch. There is also a video on writing and submitting your first Linux kernel patch from Greg Kroah-Hartman.

In terms of resources to learn about the kernel itself, there are many resources and books available.

Where can people find you and follow your work?

I have a blog where I talk about my experiences as a systems engineer: https://www.bytelab.codes/. I publish articles that range from topics that are totally newcomer friendly to more advanced topics that discuss kernel code in more detail. Feel free to check it out and let me know if there’s anything you’d like me to discuss.

To learn more about Meta Open Source, visit our open source site, subscribe to our YouTube channel, or follow us on Twitter, Facebook and LinkedIn.

First seen at developers.facebook.com
