Declarative Data Pipelines with Hoptimator

For the last several years, internal infrastructure at LinkedIn has been built around a self-service model, enabling developers to onboard themselves with minimal support. We have various user experiences that let application teams provision their own resources and infrastructure, generally by filling out forms or using command-line tools. For example, developers can provision Kafka topics, Espresso tables, Venice stores and more via Nuage, our internal cloud-like infra management platform. These self-service integrations are typically owned by the teams that build and support the underlying infrastructure.
However, we’ve found that this vertical self-service model doesn’t work particularly well for data pipelines, which involve wiring together many different systems into end-to-end data flows. Data pipelines power foundational parts of LinkedIn’s infrastructure, including replication between data centers. Just as important, a growing number of use cases are driven by developers building applications.
To support these use-cases, we have built convenient onboarding experiences for a small subset of data pipelines, including Kafka-to-Kafka replication and Espresso CDC (Change Data Capture). However, despite running on the same infrastructure (Brooklin), these two examples have slightly different onboarding experiences, as they deal with completely different data sources. Moreover, developing each of these onboarding experiences requires considerable time and effort. This means that developers frequently encounter gaps in self-service, requiring them to build their own solutions.
To reduce onboarding friction across a growing number of use-cases, we’ve been working on a unified control plane for all data pipelines at LinkedIn. Instead of having unique user experiences for each specific use-case, we are building a unified experience which leverages our existing infrastructure under the hood. As part of that effort, we have developed a new end-to-end data pipeline orchestrator called Hoptimator.
Current Gaps in Self-Service
Depending on the systems involved, developers can often use Nuage, Azkaban, or a command-line tool to create a single “hop” from one system to another:
Destinations \ Sources | Kafka | Brooklin | Espresso | MySQL | HDFS | Venice |
Kafka | Nuage | N/A | Azkaban |
Brooklin | N/A | N/A | Nuage | Nuage | ||
Espresso | N/A | |||||
MySQL | N/A | |||||
HDFS | auto | Nuage | Nuage | N/A | Nuage | |
Blob | CLI | |||||
Venice | Nuage | Azkaban | N/A | |||
Pinot | Nuage | Azkaban |
Table 1: Partial listing of user onboarding experiences
The holes in the above chart – the majority of spaces – represent gaps where self-service does not exist yet. In those cases, creating an end-to-end data pipeline involves writing custom code to bridge the gaps. For streaming data pipelines, this involves writing stream processing jobs.
For example, to create an end-to-end data pipeline which brings data from Espresso into Pinot, we have self-service solutions for the Espresso→Brooklin hop and for the Kafka→Pinot hop, but not for the Brooklin→Kafka hop in between. A developer would need to write and operationalize a custom stream processing job to replicate their Brooklin datastream into a Kafka topic. A number of Samza and Beam jobs exist for such purposes.
The records streaming through these data pipelines often require transformation into a more convenient format. For example, Pinot ingestion by default expects records to be flat, with field names and types that are compatible with the Pinot table definition. It is unlikely that an Espresso table and a Pinot table happen to agree on these details. This sort of mismatch can occur between any pair of systems. Thus, data pipelines often involve some stream processing logic to transform records of one schema into another, filter out unnecessary records, or drop extra fields.
This means that data pipelines almost always require some form of stream processing in the middle. We have historically thought about these as two different technologies (e.g. Brooklin vs Samza), and have left it to developers to string them together. In order to provide an end-to-end data pipeline experience, we need a way to combine stream processing and data pipelines into a single concept.
Enter Flink
We’ve recently adopted Apache Flink at LinkedIn, and Flink SQL has changed the way we think about data pipelines and stream processing. Flink is often seen as a stream processing engine, and historically its APIs have reflected that. But since the introduction of the Table API and Flink SQL, Flink has evolved to support more general-purpose data pipelines.
This is in large part due to the Table API’s concept of Connectors, which are not unlike the connectors of Brooklin or Kafka Connect. Connectors are the glue between different systems, and thus are associated with data pipelines. To some extent, the Table API subsumes Brooklin’s use-cases by pulling Connectors into a converged stream processing platform.
This means we can express data pipelines and stream processing in the same language (SQL) and run them on the same runtime (Flink). End-to-end data pipelines that would normally span multiple systems and require custom code can be written as a bit of Flink SQL and deployed in one shot.
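To make this concrete, here is a minimal sketch using Flink’s Table API. The table names, topics, broker address, and fields are all hypothetical; the point is the shape of the pipeline: connectors handle the hops, and the query handles the stream processing, in a single SQL statement.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public final class PipelineAsSql {
  public static void main(String[] args) {
    TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

    // Source table backed by a (hypothetical) Kafka topic.
    tEnv.executeSql(
        "CREATE TABLE profiles (name STRING, age INT) WITH ("
            + " 'connector' = 'kafka',"
            + " 'topic' = 'profiles',"
            + " 'properties.bootstrap.servers' = 'localhost:9092',"
            + " 'scan.startup.mode' = 'earliest-offset',"
            + " 'format' = 'json')");

    // Sink table backed by another (hypothetical) Kafka topic.
    tEnv.executeSql(
        "CREATE TABLE adults (name STRING) WITH ("
            + " 'connector' = 'kafka',"
            + " 'topic' = 'adults',"
            + " 'properties.bootstrap.servers' = 'localhost:9092',"
            + " 'format' = 'json')");

    // The whole pipeline is one statement: the connectors move the data,
    // the query projects and filters it.
    tEnv.executeSql("INSERT INTO adults SELECT name FROM profiles WHERE age >= 18");
  }
}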
Toward Declarative Data Pipelines
From a user perspective, the ideal end-to-end experience is a single authoring language (e.g. Flink SQL) for a single runtime (e.g. Flink) on a single big cluster. Users want to deploy a data pipeline with kubectl apply -f my-pipeline.yaml. The reality, however, is considerably more complex. A single end-to-end data pipeline at LinkedIn may span multiple purpose-built data plane systems (e.g. Brooklin, Gobblin), run on multiple stream processing engines (e.g. Samza, Flink), and talk to multiple storage systems (e.g. Espresso, Venice). Each of these may require manual onboarding, custom code, imperative API calls, and so on.
Starting from the ideal user experience and working backwards, we can imagine a declarative model for end-to-end data pipelines. This would present data pipelines as a single construct, but would implement them by assembling the required components from the data plane and compute layer. If the data pipeline requires some stream processing, we could automatically provision a Flink job. If part of the data pipeline requires an approval or review process, we could automatically trigger the workflow.
Leaning into the Kubernetes ecosystem, it’s clear this would involve a sophisticated operator. This would take a custom resource spec (essentially, a YAML file) and turn it into various physical resources in the data plane. Ultimately, a single pipeline spec would result in new Flink jobs, Kafka topics, and so on.
However, it’s not hard to imagine the proliferation of complex configuration that may result from such a model. It may be nice to have a single YAML file, but only insofar as that YAML file is itself simple.
To solve this problem, we started looking into expressing end-to-end data pipelines in SQL. We use streaming SQL extensively at LinkedIn, but existing SQL only expresses one “hop” of a data pipeline, e.g. from one Kafka topic to another. This has resulted in data pipelines that span hundreds of SQL statements. Ideally, an entire data pipeline could be codified as a single, high-level construct. What if an entire end-to-end, multi-hop data pipeline were just a SQL query?
Hoptimator: SQL-based Multi-hop Data Pipeline Orchestrator
We’ve been building an experimental data pipeline orchestrator called Hoptimator. It’s essentially a sophisticated Kubernetes operator that constructs end-to-end, multi-hop data pipelines based on SQL queries. Hoptimator’s user experience is based on a high-level concept we call “subscriptions”, each of which represents a materialized view. Given a subscription, Hoptimator automatically creates a data pipeline to materialize the corresponding view. This enables developers to create complex data pipelines with shocking simplicity:
$ cat my-subscription.yaml
apiVersion: ...
kind: Subscription
metadata:
  name: sample-subscription-1
  namespace: sample
spec:
  sql: SELECT "name", "age" FROM ESPRESSO."SampleTable"
  database: KAFKA
$ kubectl apply -f my-subscription.yaml
In response, Hoptimator might create a new Kafka topic, provision a Brooklin CDC datastream, deploy an auto-generated Flink job, etc. The Flink job will include all the configuration, DDL, SQL, connectors, etc that it needs to run.
Notice that the SQL above makes no mention of Brooklin at all. A Brooklin CDC datastream is implied when accessing an online database (in this case, Espresso). The resulting Flink job will read from the datastream, not from the database directly. This is important at scale, because we never want stream or batch processing jobs to impact online database performance.
Flink on its own can read and write to external systems via connectors, but Hoptimator provides a mechanism to incorporate arbitrary infrastructure into a pipeline. This has the potential to yield the best of both worlds: highly performant, purpose-built infrastructure like Brooklin and Gobblin, but folded into a Flink SQL-like experience. Under the hood, a subscription may involve multiple hops through various systems, and may leverage multiple auto-configured connectors.
To do this, Hoptimator has a plugin model enabling custom integrations with external systems like Espresso. Unlike Flink Connectors, Hoptimator’s “adapters” do not deal with reading and writing to those systems directly. Instead, they express what external resources will be required by the pipeline. This may be something simple like an existing Kafka topic, or something complex like a new Brooklin CDC datastream, a new Couchbase cache, or some part of an existing data pipeline.
As you may have guessed, adapters are also declarative. Adapters do not need to do much work – they simply declare what resources are needed for part of a pipeline. For example, an adapter does not need to know how to create a Flink job – it just needs to generate a FlinkDeployment spec.
Yes, it’s open source!
We are just getting started with Hoptimator, and there are currently no production workloads using it directly. However, the project has attracted a lot of internal interest and excitement, and it’s being used to quickly prototype new data pipelines. We are focusing on specific use-cases with especially thorny onboarding processes, but we think the model is broadly appealing. That’s why we’ve recently open-sourced a big chunk of Hoptimator, including support for Kafka and Flink.
To get started, try using the RAWKAFKA adapter, which doesn’t require a Schema Registry or other existing infrastructure. You can spin up a test environment easily:
$ git clone git@github.com:linkedin/Hoptimator.git
$ cd Hoptimator
$ make quickstart
Once the cluster is initialized, you can generate test data using the built-in DATAGEN adapter:
$ cat random-names-subscription.yaml
apiVersion: hoptimator.linkedin.com/v1alpha1
kind: Subscription
metadata:
  name: random-names
spec:
  sql: SELECT NAME FROM DATAGEN.PERSON
  database: RAWKAFKA
$ kubectl apply -f random-names-subscription.yaml
For this simple subscription, Hoptimator will automatically provision a Kafka topic and a Flink job. Within a few seconds, you will see some random names appear on the new Kafka topic.
To further explore the tables and adapters available, you can launch the hoptimator SQL CLI, which is based on sqlline:
$ ./bin/hoptimator
> !intro
> !tables
> !q
Within the CLI, you can execute SQL statements without deploying anything:
$ ./bin/hoptimator
> SELECT NAME FROM DATAGEN.PERSON;
> SELECT * FROM RAWKAFKA."random-names" LIMIT 5;
> !q
To serve these queries, the CLI will generate Flink SQL, run it in-process, and tail the results. Run the !intro command to see additional examples, and !help for other commands.
Happy hopping!
Special thanks to Gerardo Viedma, Subbu Subramaniam, and Naveenkumar Selvaraj from the Pinot team, Abhishek Mendheka from the Flink team, Felix GV from the Venice team, Aditya Toomula, Vaibhav Maheshwari, Harshil Shukla, Eric Honer, Joseph Grogan, and intern Hui Wang from the Brooklin team for contributing to the design, review, and implementation of Hoptimator.
Career stories: Influencing engineering growth at LinkedIn

Since learning frontend and backend skills, Rishika’s passion for engineering has expanded beyond her team at LinkedIn to grow into her own digital community. As she develops as an engineer, giving back has become the most rewarding part of her role.
From intern to engineer—life at LinkedIn
My career with LinkedIn began with a college internship, where I got to dive into all things engineering. Even as a summer intern, I absorbed so much about frontend and backend engineering during my time here. When I considered joining LinkedIn full-time after graduation, I thought back to the work culture and how my manager treated me during my internship. Although I had a virtual experience during COVID-19, the LinkedIn team ensured I was involved in team meetings and discussions. That mentorship opportunity ultimately led me to accept an offer from LinkedIn over other offers.
Before joining LinkedIn full-time, I worked with Adobe as a Product Intern for six months, where my projects revolved around the core libraries in the C++ language. When I started my role here, I had to shift to using a different tech stack: Java for the backend and a JavaScript framework for the frontend. This was a new challenge for me, but the learning curve was beneficial since I got hands-on exposure to pick up new things by myself. Also, I have had the chance to work with some of the finest engineers; learning from the people around me has been such a fulfilling experience. I would like to thank Sandeep and Yash for their constant support throughout my journey and for mentoring me since the very beginning of my journey with LinkedIn.
Currently, I’m working with the Trust team on building moderation tools for all our LinkedIn content while guaranteeing that we remove spam on our platform, which can negatively affect the LinkedIn member experience. Depending on the project, I work on both the backend and the frontend, since my team handles the full-stack development. At LinkedIn, I have had the opportunity to work on a diverse set of projects and handle them from end to end.
Mentoring the next generation of engineering graduates
I didn’t have a mentor during college, which is why I’m so passionate about helping college juniors find their way in engineering. When I first started out, I came from a biology background, so I was not aware of programming languages or how to translate them into building a technical resume. I wish there had been someone to help me out with debugging and finding solutions, so it’s important to me to give back in that way.
I’m quite active in university communities, participating in student-led tech events like hackathons to help students get into tech and secure their first job in the industry. I also love virtual events, like those on X (formerly Twitter) and LinkedIn Live. Additionally, I’m part of LinkedIn’s CoachIn Program, where we help with resume building and offer scholarships for women in tech.
Influencing online and off at LinkedIn
I love creating engineering content on LinkedIn, X, and other social media platforms, where people often contact me about opportunities at LinkedIn Engineering. It brings me so much satisfaction to tell others about our amazing company culture and connect with future grads.
When I embarked on my role during COVID-19, building an online presence helped me stay connected with what’s happening in the tech world. I began posting on X first, and once that community grew, I launched my YouTube channel to share beginner-level content on data structures and algorithms. My managers and peers at LinkedIn were so supportive, so I broadened my content to cover aspects like soft skills, student hackathons, resume building, and more. While this is in addition to my regular engineering duties, I truly enjoy sharing my insights with my audience of 60,000+ followers. And the enthusiasm from my team inspires me to keep going! I’m excited to see what the future holds for me at LinkedIn as an engineer and a resource for my community on the LinkedIn platform.
About Rishika
Rishika holds a Bachelor of Technology from Indira Gandhi Delhi Technical University for Women. Before joining LinkedIn, she interned at Google as part of the SPS program and as a Product Intern at Adobe. She currently works as a software engineer on LinkedIn’s Trust Team. Outside of work, Rishika loves to travel all over India and create digital art.
Editor’s note: Considering an engineering/tech career at LinkedIn? In this Career Stories series, you’ll hear first-hand from our engineers and technologists about real life at LinkedIn — including our meaningful work, collaborative culture, and transformational growth. For more on tech careers at LinkedIn, visit: lnkd.in/EngCareers.
Career Stories: Learning and growing through mentorship and community

Lekshmy has always been interested in a role in a company that would allow her to use her people skills and engineering background to help others. Working as a software engineer at various companies led her to hear about the company culture at LinkedIn. After some focused networking, Lekshmy landed her position at LinkedIn and has been continuing to excel ever since.
How did I get my job at LinkedIn? Through LinkedIn.
Before my current role, I had heard great things about the company and its culture. After hearing about InDays (Investment Days) and how LinkedIn supports its employees, I knew I wanted to work there.
While at the College of Engineering, Trivandrum (CET), I knew I wanted to pursue a career in software engineering. Engineering is something that I’m good at and absolutely love, and my passion for the field has only grown since joining LinkedIn. When I graduated from CET, I began working at Groupon as a software developer, starting on databases, REST APIs, application deployment, and data structures. From that role, I was able to advance into the position of software development engineer 2, which enabled me to dive into other software languages, as well as the development of internal systems. That’s where I first began mentoring teammates and realized I loved teaching and helping others. It was around this time that I heard of LinkedIn through the grapevine.
Joining the LinkedIn community
Everything I heard about LinkedIn made me very interested in career opportunities there, but I didn’t have connections yet. I did some research and reached out to a talent acquisition manager on LinkedIn and created a connection which started a path to my first role at the company.
When I joined LinkedIn, I started on the LinkedIn Talent Solutions (LTS) team. It was a phenomenal way to start because not only did I enjoy the work, but the experience served as a proper introduction to the culture at LinkedIn. I started during the pandemic, which meant remote working, and eventually, as the world situation improved, we went hybrid. This is a great system for me; I have a wonderful blend of being in the office and working remotely. When I’m in the office, I like to catch up with my team by talking about movies or playing games, going beyond work topics, and getting to know each other. With LinkedIn’s culture, you really feel that sense of belonging and recognize that this is an environment where you can build lasting connections.
LinkedIn: a people-first company
If you haven’t been able to tell already, even though I mostly work with software, I truly am a people person. I just love being part of a community. At the height of the pandemic, I’ll admit I struggled with a bit of imposter syndrome and anxiety. But I wasn’t sure how to ask for help. I talked with my mentor at LinkedIn, and they recommended I use the Employee Assistance Program (EAP) that LinkedIn provides.
I was nervous about taking advantage of the program, but I am so happy that I did. The EAP helped me immensely when everything felt uncertain, and I truly felt that the company was on my side, giving me the space and resources to help relieve my stress. Now, when a colleague struggles with something similar, I recommend they consider the EAP, knowing firsthand how effective it is.
Building a path for others’ growth
With my mentor, I was also able to learn about and become a part of our Women in Technology (WIT) Invest Program. WIT Invest is a program that provides opportunities like networking, mentorship check-ins, and executive coaching sessions. WIT Invest helped me adopt a daily growth mindset and find my own path as a mentor for college students. When mentoring, I aim to build trust and be open, allowing an authentic connection to form. The students I work with come to me for all kinds of guidance; it’s just one way I give back to the next generation and the wider LinkedIn community. Providing the kind of support my mentor gave me early on was a full-circle moment for me.
Working at LinkedIn is everything I thought it would be and more. I honestly wake up excited to work every day. In my three years here, I have learned so much, met new people, and engaged with new ideas, all of which have advanced my career and helped me support the professional development of my peers. I am so happy I took a leap of faith and messaged that talent acquisition manager on LinkedIn. To anyone thinking about applying to LinkedIn, go for it. Apply, send a message, and network—you never know what one connection can bring!
About Lekshmy
Based in Bengaluru, Karnataka, India, Lekshmy is a Senior Software Engineer on LinkedIn’s Hiring Platform Engineering team, focused on the Internal Mobility Project. Before joining LinkedIn, Lekshmy held various software engineering positions at Groupon, including SDE 3. Lekshmy holds a degree in Computer Science from the College of Engineering, Trivandrum, and is a trained classical dancer. Outside of work, Lekshmy enjoys painting, gardening, and trying new hobbies that pique her interest.
Editor’s note: Considering an engineering/tech career at LinkedIn? In this Career Stories series, you’ll hear first-hand from our engineers and technologists about real life at LinkedIn — including our meaningful work, collaborative culture, and transformational growth. For more on tech careers at LinkedIn, visit: lnkd.in/EngCareers.
Solving Espresso’s scalability and performance challenges to support our member base

Espresso is the database that we designed to power our member profiles, feed, recommendations, and hundreds of other LinkedIn applications that handle large amounts of data and need both high performance and reliability. As Espresso continued to expand in support of our 950M+ member base, the number of network connections that it needed began to drive scalability and resiliency challenges. To address these challenges, we migrated to HTTP/2. With the initial Netty-based implementation, we observed a 45% degradation in throughput, which we needed to analyze and correct.
In this post, we will explain how we solved these challenges and improved system performance. We will also delve into the various optimization efforts we employed on Espresso’s online operation section, which ultimately resulted in a 75% performance boost.
Espresso Architecture
Figure 1. Espresso System Overview
Figure 1 is a high-level overview of the Espresso ecosystem, which includes the online operation section of Espresso (the main focus of this blog post). This section comprises two major components – the router and the storage node. The router is responsible for directing the request to the relevant storage node and the storage layer’s primary responsibility is to get data from the MySQL database and present the response in the desired format to the member. Espresso utilizes the open-source framework Netty for the transport layer, which has been heavily customized for Espresso’s needs.
Need for new transport layer architecture
In the communication between the router and storage layer, our earlier approach utilized HTTP/1.1, a protocol extensively employed for interactions between web servers and clients. However, HTTP/1.1 allows only one in-flight request per connection, so every concurrent request effectively needs its own connection. In the context of large clusters, this approach led to millions of concurrent connections between the routers and the storage nodes. This resulted in constraints on scalability and resiliency, and numerous performance-related hurdles.
Scalability: Scalability is a crucial aspect of any database system, and Espresso is no exception. In our recent cluster expansion, adding an additional 100 router nodes caused the memory usage to spike by around 2.5GB. The additional memory can be attributed to the new TCP network connections within the storage nodes. Consequently, we experienced a 15% latency increase due to an increase in garbage collection. The number of connections to storage nodes posed a significant challenge to scaling up the cluster, and we needed to address this to ensure seamless scalability.
Resiliency: In the event of network flaps and switch upgrades, the process of re-establishing thousands of connections from the router often breaches the connection limit on the storage node. This, in turn, causes errors and the router to fail to communicate with the storage nodes.
Performance: When using the HTTP/1.1 architecture, routers maintain a limited pool of connections to each storage node within the cluster. In some larger clusters, the wait time to acquire a connection can be as high as 15ms at the 95th percentile due to the limited pool. This delay can significantly affect the system’s response time.
We determined that all of the above limitations could be resolved by transitioning to HTTP/2, as it supports connection multiplexing and requires a significantly lower number of connections between the router and the storage node.
We explored various technologies for the HTTP/2 implementation, but due to the strong support from the open-source community and our familiarity with the framework, we went with Netty. When using Netty out of the box, the HTTP/2 implementation’s throughput was 45% lower than that of the original (HTTP/1.1) implementation. Because the out-of-the-box performance was so poor, we had to implement different optimizations to enhance performance.
The experiment was run on a production-like test cluster, with traffic that combined read and write access patterns. The results are as follows:
Protocol | QPS | Single Read Latency (P99) | Multi-Read Latency (P99) |
HTTP/1.1 | 9K | 7ms | 25ms |
HTTP/2 | 5K (-45%) | 11ms (+57%) | 42ms (+68%) |
On the routing layer, further analysis using flame graphs revealed the major differences between the two protocols, shown in the following table.
CPU overhead | HTTP/1.1 | HTTP/2 |
Acquiring a connection and processing the request | 20% | 32% (+60%) |
Encode/Decode HTTP request | 18% | 32% (+77%) |
Improvements to Request/Response Handling
Reusing the Stream Channel Pipeline
One of the core concepts of Netty is its ChannelPipeline. As seen in Figure 2, when data is received from the socket, it is passed through the pipeline, which processes it. A ChannelPipeline contains a list of handlers, each working on a specific task.
Figure 2. Netty Pipeline
In the original HTTP/1.1 Netty pipeline, a set of 15-20 handlers was established when a connection was made, and this pipeline was reused for all subsequent requests served on the same connection.
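For readers unfamiliar with Netty, the following is a minimal, illustrative initializer showing how a few handlers are composed into a ChannelPipeline when a connection is established; the handler set here is hypothetical and much smaller than Espresso’s actual pipeline.

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.handler.timeout.IdleStateHandler;

public final class Http1PipelineInitializer extends ChannelInitializer<SocketChannel> {

  @Override
  protected void initChannel(SocketChannel ch) {
    ch.pipeline()
        .addLast(new IdleStateHandler(0, 0, 60))     // close connections idle for 60s
        .addLast(new HttpServerCodec())              // bytes <-> HTTP/1.1 messages
        .addLast(new HttpObjectAggregator(1 << 20)); // aggregate chunks into full requests
    // ... application-specific handlers (routing, storage access, etc.) would follow here
  }
}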
However, in HTTP/2 Netty’s default implementation, a fresh pipeline is generated for each new stream or request. For instance, a multi-get request to a router with over 100 keys can often result in approximately 30 to 35 requests being sent to the storage node. Consequently, the router must initiate new pipelines for all 35 storage node requests. The process of creating and dismantling pipelines for each request involving a considerable number of handlers turned out to be notably resource-intensive in terms of memory utilization and garbage collection.
To address this concern, we developed a forked version of Netty’s Http2MultiplexHandler that maintains a queue of local stream channels. As illustrated in Figure 3, on receiving a new request, the multiplex handler no longer generates a new pipeline. Instead, it retrieves a local channel from the queue and employs it to process the request. Once the request completes, the channel is returned to the queue for future use. By reusing existing channels, the creation and destruction of pipelines are minimized, leading to a reduction in memory strain and garbage collection.
Figure 3. Sequence diagram of stream channel reuse
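The forked handler itself is internal to Espresso, but the reuse pattern can be sketched roughly as follows: a thread-safe queue of already-initialized stream channels, where acquiring reuses an idle channel (and its pipeline) and releasing returns it for the next request. The class and method names below are illustrative only, not the actual fork.

import io.netty.channel.Channel;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Supplier;

public final class StreamChannelPool {
  private final Queue<Channel> idle = new ConcurrentLinkedQueue<>();
  private final Supplier<Channel> factory; // creates a fully initialized local stream channel

  public StreamChannelPool(Supplier<Channel> factory) {
    this.factory = factory;
  }

  // Reuse an idle channel (and its existing pipeline) if one is available;
  // otherwise fall back to creating a new one.
  public Channel acquire() {
    Channel ch = idle.poll();
    return ch != null ? ch : factory.get();
  }

  // Return the channel to the pool once the request/response exchange completes.
  public void release(Channel ch) {
    if (ch.isActive()) {
      idle.offer(ch);
    }
  }
}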
Addressing uneven work distribution among Netty I/O threads
When a new connection is created, Netty assigns this connection to one of the 64 I/O threads. In Espresso, the number of I/O threads is equal to twice the number of cores present. The I/O thread associated with the connection is responsible for I/O and handling the request/response on the connection. Netty’s default implementation employs a rudimentary method for selecting an appropriate I/O thread out of the 64 available for a new channel. Our observation revealed that this approach leads to a significantly uneven distribution of workload among the I/O threads.
In a standard deployment, we observed that 20% of the I/O threads were managing 50% of the total connections/requests. To address this issue, we introduced a BalancedEventLoopGroup. This entity is designed to evenly distribute connections across all available worker threads. During channel registration, the BalancedEventLoopGroup iterates through the worker threads to ensure a more equitable allocation of workload.
After this change, when a channel is registered, an event loop whose connection count is below the average is selected:
private EventLoop selectLoop() {
  int average = averageChannelsPerEventLoop();
  EventLoop loop = next();
  if (_eventLoopCount > 1 && isUnbalanced(loop, average)) {
    // Collect all event loops, shuffle so we don't always probe in the same order,
    // then walk the list until we find one at or below the average connection count.
    ArrayList<EventLoop> list = new ArrayList<>(_eventLoopCount);
    _eventLoopGroup.forEach(eventExecutor -> list.add((EventLoop) eventExecutor));
    Collections.shuffle(list, ThreadLocalRandom.current());
    Iterator<EventLoop> it = list.iterator();
    do {
      loop = it.next();
    } while (it.hasNext() && isUnbalanced(loop, average));
  }
  return loop;
}
Reducing context switches when acquiring a connection
In the HTTP/2 implementation, each router maintains 10 connections to every storage node. These connections serve as communication pathways for the router I/O threads interfacing with the storage node. Previously, we utilized Netty’s FixedChannelPool implementation to oversee connection pools, handling tasks like acquiring, releasing, and establishing new connections.
However, the underlying queue within Netty’s implementation is not inherently thread-safe. To obtain a connection from the pool, the requesting worker thread must engage the I/O worker overseeing the pool. This process led to two context switches. To resolve this, we developed a derivative of the Netty pool implementation that employs a high-performance, thread-safe queue. Now, the task is executed by the requesting thread instead of a distinct I/O thread, effectively eliminating the need for context switches.
Improvements to SSL Performance
The following section describes various optimizations to improve the SSL performance.
Offloading DNS lookup and handshake to separate thread pool
During an SSL handshake, the DNS lookup procedure for resolving a hostname to an IP address functions as a blocking operation. Consequently, the I/O thread responsible for executing the handshake might be held up for the entirety of the DNS lookup process. This delay can result in request timeouts and other issues, especially when managing a substantial influx of incoming connections concurrently.
To tackle this concern, we developed an SSL initializer that conducts the DNS lookup on a different thread prior to initiating the handshake. This method involves passing the InetAddress, which contains both the IP address and the hostname, to the SSL handshake procedure, effectively circumventing the need for a DNS lookup during the handshake.
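A rough sketch of the idea follows; the class name and pool size are hypothetical. The blocking InetAddress.getByName call runs on a dedicated executor, and the resulting InetSocketAddress, which carries both the IP address and the original hostname, is what the connect/handshake path later uses, so the I/O thread never blocks on DNS.

import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public final class PreResolvingConnector {
  private final ExecutorService dnsExecutor = Executors.newFixedThreadPool(4);

  // Resolve host:port off the I/O thread; the returned address is already resolved,
  // so the SSL handshake (and SNI, which still sees the hostname) needs no DNS lookup.
  public CompletableFuture<InetSocketAddress> resolve(String host, int port) {
    return CompletableFuture.supplyAsync(() -> {
      try {
        InetAddress address = InetAddress.getByName(host); // blocking DNS call, off the I/O thread
        return new InetSocketAddress(address, port);
      } catch (UnknownHostException e) {
        throw new CompletionException(e);
      }
    }, dnsExecutor);
  }
}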
Enabling Native SSL encryption/decryption
Java’s default built-in SSL implementation carries a significant performance overhead. Netty offers a JNI-based SSL engine that demonstrates exceptional efficiency in both CPU and memory utilization. Upon enabling OpenSSL within the storage layer, we observed a notable 10% reduction in latency. (The router layer already utilizes OpenSSL.)
To employ Netty Native SSL, one must include the pertinent Netty Native dependencies, as it interfaces with OpenSSL through the JNI (Java Native Interface). For more detailed information, please refer to https://netty.io/wiki/forked-tomcat-native.html.
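As a minimal sketch, assuming the netty-tcnative dependency is on the classpath, an OpenSSL-backed SslContext can be built via Netty’s SslContextBuilder, falling back to the JDK provider when the native engine is unavailable:

import io.netty.handler.ssl.OpenSsl;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.SslProvider;
import java.io.File;
import javax.net.ssl.SSLException;

public final class NativeSslContextFactory {

  // Build a server-side SslContext that uses the JNI-based OpenSSL engine when the
  // native libraries are present, and the JDK engine otherwise.
  public static SslContext forServer(File certChain, File privateKey) throws SSLException {
    SslProvider provider = OpenSsl.isAvailable() ? SslProvider.OPENSSL : SslProvider.JDK;
    return SslContextBuilder.forServer(certChain, privateKey)
        .sslProvider(provider)
        .build();
  }
}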
Improvements to Encode/Decode performance
This section focuses on the performance improvements we made when converting bytes to HTTP objects and vice versa. Approximately 20% of our CPU cycles are spent encoding and decoding bytes. Unlike a typical service, Espresso has very rich headers. Our HTTP/2 implementation involves wrapping the existing HTTP/1.1 pipeline with HTTP/2 functionality. While the HTTP/2 layer handles network communication, the core business logic resides within the HTTP/1.1 layer. Due to this, each incoming request required the conversion of HTTP/2 requests to HTTP/1.1 and vice versa, which resulted in high CPU usage, memory consumption, and garbage creation.
To improve performance, we implemented a custom codec designed for efficient handling of HTTP headers. We introduced a new request class named Http1Request. This class encapsulates an HTTP/2 request as an HTTP/1.1 request by wrapping the Http2Headers rather than copying them. The primary objective behind this approach is to avoid the expensive task of converting HTTP/1.1 headers to HTTP/2 and vice versa.
For example:
public class Http1Headers extends HttpHeaders {
  private final Http2Headers _headers;
  ...
}
Operations such as get, set, and contains operate on the underlying Http2Headers:
@Override
public String get(String name) {
  return str(_headers.get(AsciiString.cached(name).toLowerCase()));
}
To make this possible, we developed a new codec that is essentially a clone of Netty’s Http2StreamFrameToHttpObjectCodec. This codec is designed to translate HTTP/2 StreamFrames to HTTP/1.1 requests/responses with minimal overhead. By using this new codec, we were able to significantly improve the performance of encode/decode operations and reduce the amount of garbage generated during the conversions.
Disabling HPACK Header Compression
HTTP/2 introduced a new header compression algorithm known as HPACK. It works by maintaining an index list or dictionaries on both the client and server. Instead of transmitting the complete string value, HPACK sends the associated index (integer) when transmitting a header. HPACK encompasses two key components:
- Static Table – A dictionary comprising 61 commonly used headers.
- Dynamic Table – This table retains the user-generated header information.
HPACK header compression is tailored to scenarios where header contents remain relatively constant. But Espresso has very rich headers with stateful information such as timestamps, SCN, and so on. Unfortunately, HPACK didn’t align well with Espresso’s requirements.
Upon examining flame graphs, we observed a substantial stack dedicated to encoding/decoding dynamic tables. Consequently, we opted to disable dynamic header compression, leading to an approximate 3% enhancement in performance.
In Netty, this can be disabled using the following:
Http2FrameCodecBuilder.forClient()
    .initialSettings(Http2Settings.defaultSettings().headerTableSize(0));
Results
Latency Improvements
P99.9 Latency | HTTP/1.1 | HTTP/2 |
Single Key Get | 20ms | 7ms (-66%) |
Multi Key Get | 80ms | 20ms (-75%) |
We observed a 75% reduction in the 99th and 99.9th percentile multi-read latencies, which decreased from 80ms to 20ms.
Figure 4. Latency reduction after HTTP/2
We observed similar latency reductions across the 90th percentile and higher.
Reduction in TCP connections
 | HTTP/1.1 | HTTP/2 |
No of TCP Connections | 32 million | 3.9 million (-88%) |
We observed an 88% reduction in the number of connections required between routers and storage nodes in some of our largest clusters.
Figure 5. Total number of connections after HTTP/2
Reduction in Garbage Collection time
We observed a 75% reduction in garbage collection times for both young and old gen.
GC | HTTP/1.1 | HTTP/2 |
Young Gen | 2000 ms | 500 ms (-75%) |
Old Gen | 80 ms | 15 ms (-81%) |
Figure 6. Reduction in time for GC after HTTP/2
Waiting time to acquire a Storage Node connection
HTTP/2 eliminates the need to wait for a storage node connection by enabling multiplexing on a single TCP connection, which is a significant factor in reducing latency compared to HTTP/1.1.
 | HTTP/1.1 | HTTP/2 |
Wait time in router to get a storage node connection | 11ms | 0.02ms (-99%) |
Figure 7. Reduction in wait time to get a connection after HTTP/2
Conclusion
Espresso has a large server fleet and is mission-critical to a number of LinkedIn applications. With the HTTP/2 migration, we successfully solved Espresso’s scalability problems caused by the huge number of TCP connections required between the routers and the storage nodes. The new architecture also reduced latencies by 75% and made Espresso more resilient.
Acknowledgments
I would like to thank my colleagues Antony Curtis, Yaoming Zhan, BinBing Hou, Wenqing Ding, Andy Mao, and Rahul Mehrotra who worked on this project. The project demanded a great deal of time and effort due to the complexity involved in optimizing the performance. I would like to thank Kamlakar Singh and Yun Sun for reviewing the blog and providing valuable feedback.
We would also like to thank our management Madhur Badal, Alok Dhariwal and Gayatri Penumetsa for their support and resources, which played a crucial role in the success of this project. Their encouragement and guidance helped the team overcome challenges and deliver the project on time.