REACH turns five – Celebrating the power of apprenticeships

Every tech company wants to hire great talent, but many have historically hired based on pedigree: where candidates went to school, where they worked, and who they know. However, there's a pool of qualified talent who have followed non-traditional paths but may have been overlooked in traditional recruiting pipelines. Five years ago, we piloted our REACH program to help solve this problem by giving people with non-traditional software engineering backgrounds the opportunity to get their foot in the door and develop their coding skills. What began as a small six-month program across three engineering focus areas (UI, Apps, Mobile) has grown into a multi-year apprenticeship model that has helped over 170 people transform their careers and has become a core part of our hiring rhythm. Today, we are celebrating REACH's five-year anniversary, with more than 90 active apprentices working across more than 10 focus areas throughout North America and plans to expand in the coming years.

REACH’s Skills-First Mission

REACH has created a new pipeline for hiring at LinkedIn that's rooted in our vision to provide economic opportunity for every member of the global workforce. We consider the program an effort in Investment Hiring, a committed approach to hiring and fostering talent centered around skill-building. REACH helps apprentices build their skill sets to create a stronger future for themselves, for LinkedIn, and for the industry at large.

We started with a small dream to challenge recruiting norms and find passionate talent with great potential. As the program has grown, its impact has extended far beyond skills learned and apprentices hired; it has had a profound effect on the lives of individuals and their families. Apprentices come from a variety of economic and social backgrounds, with previous roles including warehouse manager, small business owner, non-profit worker, teacher, dietician, and even professional poker player. Over the years, many have grown into software engineers, site reliability engineers, and technical leaders in their space, but more importantly, we have seen the impact that diversity, inclusion, and belonging have had on LinkedIn and the industry at large. This is a testament to the power of doing good in the world along with doing good for the business.


Helping Apprentices Transform Their Careers

Programs like REACH provide opportunities that are more important than ever as people rethink their jobs and reevaluate what they are genuinely passionate about. With the Great Reshuffle, we've seen the workforce shift and roles like machine learning engineer, back-end developer, and site reliability engineer rise in demand.


These career transformations are highlighted in some of the inspiring stories from current and past REACH apprentices:

  • Born in Uruguay, Javier Orman was a professional violinist and music teacher in Chicago, but when the COVID-19 pandemic upended the music industry, he began exploring new career paths. He started coding in 2020, developed an interest in AI, and has been an AI Apprentice in REACH since July 2021, where he continues to pursue his newfound passion for turning data into smart decisions.
  • Svitlana Anoshchenko, originally from Ukraine, worked as a Quality Assurance Engineer for six years. In 2018, her family moved to the United States, where she held an H-4 visa and was not permitted to work until May 2021. During those 3.5 years she gave birth, and when she could work again, she decided to switch career paths. She joined REACH as an Android Engineer Apprentice in February 2022 to kick off her journey to becoming a Software Engineer.
  • Judith Lung, LinkedIn's first blind engineer, went to school as an English major and, while getting her master's in rehab counseling, took programming courses on the side. These courses, and her appreciation for how assistive technology enabled her to work better in school, inspired her to apply to the REACH program. She's been an Accessibility Engineering Apprentice since December 2019 (learn more about her journey here).
  • Alex Harding moved to California from Sierra Leone in 2013. Within two months of his arrival, he began working at a warehouse, and while his logistics industry career was promising, Alex had a burning desire to be a Software Engineer. Being mono-visioned and legally blind, he opted for a "do it yourself" route, and in June 2019 he joined the REACH program as a Software Engineer Apprentice. After 2.5 years, he moved beyond REACH to become an Information Security Engineer at LinkedIn.

What’s Next?

Over the next five years, we envision expanding our areas of focus and bringing more apprentices into additional roles across our R&D team. One particular initiative that I'm excited about is our partnership with LinkedIn's long-standing partner, Year Up, a workforce development non-profit organization that supports underserved young adults in accessing the economic mainstream. This year, we are creating pathways for Year Up interns to move into REACH apprenticeships, with the goal of increasing opportunities for these talented and motivated young adults to continue their technical careers at LinkedIn.

Most importantly, I'm excited to continue creating real economic opportunities for those who have often been overlooked. At the core of REACH is the fundamental belief that top talent can come from anywhere. We'll continue to find those passionate individuals and help them learn the skills necessary to realize a successful career in engineering.

If you or someone you know is interested in the REACH program, visit https://careers.linkedin.com/reach.


    Super Tables: The road to building reliable and discoverable data products

Figure 1: Evolving the current data ecosystem to leverage Super Tables

The Super Tables initiative aims to mitigate these challenges, as illustrated in Figure 1. Our goal is to build only a small number of enterprise-grade Super Tables, each heavily leveraged by many downstream users; a given domain may have only a very few of them. By consolidating hundreds of source-of-truth datasets (SOTs) into these few Super Tables, a large number of SOTs will eventually be deprecated and retired.

    Increases Discoverability

    • Having one or two Super Tables for a business entity or event (i.e., in a domain) makes it easier to explore and consume data for data analytics. It minimizes data duplication and reduces time to find the right data.

    Strengthens Reliability & Usability

    • Super Tables are materialized with precomputed joins of related business entities or events to consolidate data that is commonly needed for downstream analytics. This obviates the need to perform such joins in many downstream use cases, leading to simpler and more performant downstream flows and better resource utilization.
    • Super Tables provide and publish SLA commitments on availability and supportability, with proactive data quality checks (structural, semantic, and variance) and a programmatic way of querying current and historical data quality check results.

    Improves Change Management

    • Super Tables have well-defined governance policies for change management and communication and committed change velocity (e.g., monthly deployment of changes). Upstream data producers and downstream consumers (cross teams/domains) are both involved in the governance of changes to the STs.

    Super Tables design principles

    In this section, we highlight several important design principles for Super Tables. We want to emphasize that most of these principles are applicable to any dataset, not just Super Tables.

    Data Architecture

    Building a data product requires a good understanding of the domain and the available data sources. It is important to establish the source of truth and document that information. Choosing the right sources promotes data consistency. For example, if a source is continuously refreshed via streaming, the same metric calculation may produce different results at different times; consumers may not realize this and can be surprised by the outcome. Once sources are identified, we need to look at upstream availability and business requirements so that the ST's SLA can be established. One may argue that we should consolidate as many datasets into a single ST as possible. However, adding a data source to the ST increases the resources needed to materialize the ST and can potentially jeopardize its SLA commitment, so a good understanding of how the extra data source will be leveraged downstream (e.g., it is needed to compute a critical metric) is warranted.


    Schema Design

    Field naming conventions and field groupings are established so that users can easily understand the meaning of each field, and frozen fields (immutable values) are identified. For example, a job poster may switch companies, but the hiring company for the job posting shouldn't change; it is therefore important to include the hiring company information (immutable) instead of the job poster's current company, and future flow executions should not change the field values. By default, any schema changes (both compatible and incompatible) in data sources do not affect the ST. For example, if a new column is added to a source, the column will not appear in the ST by default. Similarly, if an existing source column is deleted, its value is nullified and the ST team is notified. Schema changes in an ST are decoupled from its sources so that a change will not accidentally break the ST flow. Planned changes are documented and communicated to consumers through a distribution list.


    Retention Policy

    The retention policy of the ST is well established and published so that downstream consumers are fully aware of the policy and any future changes. Likewise, the retention policies of its data sources are tracked and monitored.

    Establishing Upstream SLA Commitment


    For the ST to meet its own SLA commitment, the SLA commitments of all of its data sources must be established and agreed upon. Any changes are communicated so that the proper actions can be taken.

    Data Quality

    Data quality checks on all data sources, as well as on the ST itself, must be put in place. For example, the ST's primary key column should not contain NULLs or duplicates. Data profiles on fields are computed to identify potential outliers. If a data source has severe data quality issues, the ST flow may be halted until the issue is resolved.
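
    As a concrete illustration of the kind of primary-key checks described above, the sketch below runs NULL and duplicate checks with Spark SQL. The table name (jobs_super_table) and key column (job_id) are hypothetical stand-ins, and this is only a minimal sketch of the idea, not LinkedIn's actual implementation.

import org.apache.spark.sql.SparkSession;

// Minimal sketch of primary-key quality checks against a hypothetical
// Super Table named "jobs_super_table" with primary key column "job_id".
public class SuperTableKeyChecks {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("st-primary-key-checks")
        .getOrCreate();

    // The primary key must never be NULL.
    long nullKeys = spark.sql(
        "SELECT COUNT(*) FROM jobs_super_table WHERE job_id IS NULL")
        .first().getLong(0);

    // The primary key must be unique.
    long duplicatedKeys = spark.sql(
        "SELECT COUNT(*) FROM (SELECT job_id FROM jobs_super_table "
        + "GROUP BY job_id HAVING COUNT(*) > 1) dup_keys")
        .first().getLong(0);

    if (nullKeys > 0 || duplicatedKeys > 0) {
      // In a real pipeline, failing this check would alert the owning team
      // and could halt the ST flow until the issue is resolved.
      throw new IllegalStateException("Primary key check failed: "
          + nullKeys + " NULL keys, " + duplicatedKeys + " duplicated keys");
    }
    spark.stop();
  }
}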

    Documentation

    It is very important to have both dataset-level and column-level documentation available so that users can determine whether the ST can be leveraged. Very often, users want to understand the sources and how the ST and its fields are derived.


    High Availability / Disaster Recovery

    An ST aims for 99+% availability; for a daily ST flow, this translates to approximately one SLA miss per quarter. To improve availability, STs can be materialized in multiple clusters. With an active-active configuration, the ST flows are executed independently and consistency is guaranteed across two clusters (the JOBS flows run on two production clusters). In case of an SLA miss or disaster recovery, data can be copied from the mirror dataset in the other cluster.

    Monitoring

    ST flows must be monitored closely for any deviation from the norm.

    • Flow performance: trends in the flow's runtime can help prevent SLA misses
    • Data quality: the sources must adhere to established quality standards, and data quality metrics are surfaced on dashboards
    • Cross-cluster consistency: the datasets in multiple clusters must be compared to detect deviations

    Governance Process

    A governance body (comprising teams from upstream and downstream) is established to ensure that the ST design principles are followed, that the dataflow operation meets its SLA and preserves data quality, and that changes are communicated to consumers. For example, if a downstream user wants to include a new field from a source, the user has to submit the request to the governance body for evaluation and recommendation. In the past, a very large field would have been included automatically, blowing up the final dataset size and jeopardizing the delivery SLA; the governance body now has to consider all the tradeoffs in its final recommendation. A minimum monthly release cadence is established for accepting change requests so that the ST has the agility to serve business needs.


    A brief introduction to two Super Tables

    The first Super Table is JOBS. Before the JOBS ST was built, there were a dozen tables and views built over the years; each joined one or more job posting datasets with different dimensional tables for various analytics, such as job poster information and standardized/normalized job attributes (locations, titles, industries, skills, etc.). These tables/views may be created using different formats (such as Apache Avro or Apache ORC) or on different clusters, and they may have different arrival frequencies and times (SLAs). Adding more complexity to the situation, there are many different kinds of jobs, some of which may even be deleted or archived, and various job listing and posting types. Choosing the right dataset for a particular use case requires the complicated task of understanding and analyzing various data sources, joining the right dimensional datasets, and in some cases repeating the same join redundantly. As such, the learning curve is steep.


    The JOBS ST combines data elements from 57+ critical data sources owned by multiple LinkedIn teams across different organizations. Totaling 158 columns, the JOBS ST precomputes and combines the information most needed for job-related analyses and insights. Our intent is that the JOBS ST can be leveraged extensively for the most critical use cases. The JOBS ST has a daily morning SLA, and the daily flow runs on two different production clusters to provide high availability. Data quality of all data sources, and of JOBS itself, is enforced and monitored continuously.

    Leveraging the JOBS ST is easier and more efficient than using its original source datasets. In many downstream flows, the logic is simplified to just scanning the JOBS ST or joining it with one other dimensional table, making it more efficient. With the availability of JOBS, the existing dozen job-related tables/views will be deprecated and their consumers migrated to JOBS.
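
    To illustrate the kind of simplification described above, here is a hedged before/after sketch in Spark SQL. The table and column names (job_postings, standardized_job_attributes, job_posters, jobs_super_table, and the individual fields) are hypothetical, not LinkedIn's actual schemas; the point is only that a multi-way join can often collapse into a single scan of the Super Table.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

// Hypothetical before/after of a downstream job-analytics query.
public class JobsQueryExample {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder().appName("jobs-st-example").getOrCreate();

    // Before: each downstream flow repeated joins like this one against
    // several job posting and dimensional tables.
    Dataset<Row> before = spark.sql(
        "SELECT a.standardized_title, d.poster_company, COUNT(*) AS num_jobs "
      + "FROM job_postings p "
      + "JOIN standardized_job_attributes a ON p.job_id = a.job_id "
      + "JOIN job_posters d ON p.poster_id = d.poster_id "
      + "GROUP BY a.standardized_title, d.poster_company");

    // After: the joins are precomputed in the JOBS Super Table, so the same
    // insight becomes a single scan of one table.
    Dataset<Row> after = spark.sql(
        "SELECT standardized_title, poster_company, COUNT(*) AS num_jobs "
      + "FROM jobs_super_table "
      + "GROUP BY standardized_title, poster_company");

    System.out.println("rows (before): " + before.count());
    System.out.println("rows (after):  " + after.count());
    spark.stop();
  }
}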

    The second Super Table is Ad Events. Before the Ad Events ST was built, there were seven different advertising (ads) related tables, including ad impressions, clicks, and video views. They have many fields in common, such as campaign and advertiser information. Downstream consumers frequently need to join multiple ads tables with the campaign and advertiser dimension tables to get insights on ads revenue, performance, and so on. The duplicated fields and frequent downstream joins added unnecessary storage and computation.

    After analyzing the commonly joined tables and downstream metrics, the Ad Events ST was created with 150+ columns to provide precomputed information ready for ads insights analysis and reporting.


    Lessons learned

    From designing and implementing the Super Tables, we have learned many valuable lessons. First, understanding the use cases and usage patterns is crucial in determining the main benefits of using a Super Table. Before building a new Super Table, it's important to look around and see what similar tables are already in place. Sometimes it's better to strengthen an existing table than to build one from scratch. When weighing these two options, factors to consider include the quality, coverage, support, and usage of the existing tables.

    Next, we learned that while building Super Tables, it's important to identify the semantic logic and its owners, both to ensure that it is correct and to understand how it will evolve over time. Data transformation includes structural logic (e.g., table joins, deduplication, and data type conversion) and semantic logic (e.g., a method of predicting the likelihood of a future purchase based on browsing history). Semantic logic is usually owned by a specific team with deep domain knowledge. Without proper communication and collaboration, semantic logic becomes poorly maintained and outdated. A better solution is to separate semantic logic into a different layer (such as Dali Views built on top of the ST) that is managed by the team with the domain knowledge.


    With Super Table SLAs, the stricter the SLA, the less tolerant you can be of issues. This means implementing mechanisms like defensive data loading, safe data transformation, ironclad alerting and escalation, and runtime data quality checks. With softer SLAs, you can tolerate a failure, then triage and resolve the issue; with strict SLAs, sometimes you cannot tolerate a single failure.

    Another learning was that, in an age with an emphasis on privacy, where data inaccuracy can have disastrous cascading effects, and lowering costs is at the forefront, reducing redundant datasets is the closest thing to a panacea there is. Often, it is better to align a Super Table to fulfill a distinct use case and ensure all requirements are met, instead of tolerating duplication for the sake of speed. Likewise, consolidating multiple similar datasets (which target different use cases) into a single Super Table is extremely critical.

    Lastly, after the release of an ST, one of the critical tasks is to migrate users of the legacy SOTs to the new ST. The sooner the migration is complete, the more resources can be freed up for other tasks. We have learned that it is imperative to provide awareness and support to the downstream users who need to perform the migrations. To that end, we have created a migration guide that outlines all the impacted SOTs, including detailed column-to-column mappings and suggested validations to perform to ensure data correctness and quality.

    The release of the JOBS ST has significantly improved the lives of both the owners of jobs data sources and the downstream consumers. Before the ST, the knowledge of jobs business logic was scattered across several data sources owned by different teams, and none of them had the full picture, making it extremely difficult for downstream consumers to figure out the right data source for their use cases. Time-consuming communication was usually required among different teams, and it could easily lead to misuse of data. Since the release of the ST, we have formed a governance body involving the various stakeholders, which manages the evolution and maintenance of the ST. For instance, when ST consumers requested a new field, the governance body discussed the best design and implementation approach, which involved gathering raw data from the sources and implementing the transformation logic in the ST. The monthly release cadence allows development agility, and JOBS data is now integrated with easier access and much less confusion.


    Conclusions and future work

    Democratization of data authoring brings agility to analytics throughout the LinkedIn community. However, it also introduces challenges such as discoverability, redundancy, and inconsistency. We have launched two Super Tables at LinkedIn that address these challenges and are in the process of identifying and building more. Both STs have simplified data discovery by providing the "go-to" tables. They have also simplified downstream logic and hence saved computation resources. The created value is further amplified by the high-leverage nature of these tables.

    The following table summarizes the benefits of building and leveraging Super Tables.

    Open Sourcing Venice – LinkedIn’s Derived Data Platform


    We are proud to announce the open sourcing of Venice, LinkedIn’s derived data platform that powers more than 1800 of our datasets and is leveraged by over 300 distinct applications. Venice is a high-throughput, low-latency, highly-available, horizontally-scalable, eventually-consistent storage system with first-class support for ingesting the output of batch and stream processing jobs.

    Venice entered production at the end of 2016 and has been scaling continuously ever since, gradually replacing many other systems, including Voldemort and several proprietary ones. It is used by the majority of our AI use cases, including People You May Know, Jobs You May Be Interested In, and many more.

    The very first commit in the Venice codebase happened a little more than eight years ago, but development picked up in earnest with a fully staffed team in the beginning of 2016, and continues to this day. In total, 42 people have contributed more than 3700 commits to Venice, which currently equates to 67 years of development work.

    Throughout all this, the project traversed many phases, increasing in both scope and scale. The team learned many lessons along the way, which led to a plethora of minor fixes and optimizations, along with several rounds of major architectural enhancements. The last such round brought active-active replication to Venice and served as a catalyst for clearing many layers of tech debt. Against this backdrop, we think Venice is in the best position it’s ever been in to propel the industry forward, beyond the boundaries of LinkedIn.

    This blog post provides an overview of Venice and some of its use cases, divided across three sections:

    • Writing to Venice
    • Reading from Venice
    • Operating Venice at scale

    Writing to Venice

    Writing data seems like a natural place to start, since anyone trying out a new database needs to write data first, before reading it.

    How we write into Venice also happens to be one of the significant architectural characteristics that makes it different from most traditional databases. The main thing to keep in mind with Venice is that, since it is a derived data store that is fed from offline and nearline sources, all writes are asynchronous. In other words, there is no support for performing strongly consistent online write requests, as one would get in MySQL or Apache HBase. Venice is not the only system designed this way—another example of a derived data store with the same constraints is Apache Pinot. This architecture enables Venice to achieve very high write throughputs.

    In this section, we will go over the various mechanisms provided by Venice to swap an entire dataset, as well as to insert and update rows within a dataset. Along the way, we will also mention some use cases that leverage Venice, and reference previously published information about them.

    Push Job

    The most popular way to write data into Venice is using the Push Job, which takes data from an Apache Hadoop grid and writes it into Venice. Currently, LinkedIn has more than 1400 push jobs running daily that are completely automated.

    Full Push

    By default, the Venice Push Job will swap the data, meaning that the new dataset version gets loaded in the background without affecting online reads, and when a given data center is done loading, then read requests within that data center get swapped over to the new version of the dataset. The previous dataset version is kept as a backup, and we can quickly roll back to it if necessary. Older backups get purged automatically according to configurable policies. Since this job performs a full swap, it doesn’t make sense to run multiple Full Pushes against the same dataset at the same time, and Venice guards against this, treating it as a user error. Full Pushes are ideal in cases where all or most of the data needs to change, whereas other write methods described in the next sections are better suited if changing only small portions of the data.
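
    The version lifecycle described above (load in the background, swap reads, keep a backup, roll back if needed) can be summarized with a small sketch. This is purely an illustration of the semantics under stated assumptions, not Venice's actual internal code; the class, method names, and retention value are hypothetical.

import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative model of Full Push semantics: new versions load in the
// background, reads swap atomically, and older versions are kept as backups.
public class DatasetVersions {
  private int currentVersion = -1;                  // version currently serving reads
  private final Deque<Integer> backups = new ArrayDeque<>();
  private final int maxBackups = 1;                 // retention policy (configurable)

  // Called once a Full Push has finished loading version v in this data center.
  public synchronized void swapTo(int v) {
    if (currentVersion >= 0) {
      backups.push(currentVersion);                 // keep the previous version as a backup
    }
    currentVersion = v;
    while (backups.size() > maxBackups) {
      purge(backups.removeLast());                  // purge old backups per policy
    }
  }

  // Quick rollback to the most recent backup if the new data turns out to be bad.
  public synchronized void rollback() {
    if (!backups.isEmpty()) {
      currentVersion = backups.pop();
    }
  }

  public synchronized int servingVersion() { return currentVersion; }

  private void purge(int v) { /* delete on-disk data for version v */ }
}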


    By 2018, the roughly 500 Voldemort Read-Only use cases running in production had been fully migrated to Venice, leveraging its Full Push capability as a drop-in replacement.

    A large portion of Full Push use cases are AI workflows that retrain daily or even several times a day. A prominent example of this is People You May Know. More generally, LinkedIn’s AI infrastructure has adopted a virtual feature store architecture where the underlying storage is pluggable, and Venice is the most popular choice of online storage to use with our feature store, Feathr, which was recently open sourced. Tight integration between Venice and Feathr is widely used within LinkedIn, and this integration will be available in a future release of Feathr.

    Incremental Push

    Another way to leverage the Venice Push Job is in incremental mode, in which case the new data gets added to the existing dataset, rather than swapping it wholesale. Data from incremental pushes get written both to the current version and backup version (if any) of the dataset. Contrary to the default (Full Push) mode, Venice supports having multiple Incremental Push jobs run in parallel, which is useful in cases where many different offline sources need to be aggregated into a single dataset.

    Stream processor

    Besides writing from Hadoop, Venice also supports writing from a stream processor. At LinkedIn, we use Apache Samza for stream processing, and that is what Venice is currently integrated with. We are looking forward to hearing from the community about which other stream processors they would like to integrate with Venice. Because Venice has a sufficiently flexible architecture, integrating additional stream processors is possible.


    Streaming write

    Similar to incremental pushes, data written from stream processors gets applied both to the current and backup versions of a dataset, and multiple stream processing jobs can all write concurrently into the same dataset.

    Since 2017, LinkedIn’s A/B testing experimentation platform has leveraged Venice with both full pushes and streaming writes for its member attribute stores. We will come back to this use case in the next section.

    Stream reprocessing

    There is an alternative way to leverage a stream processor that is similar to a Full Push. If the stream processor is leveraging an input source such as Brooklin that supports bootstrapping the whole upstream dataset, then the owner of the stream processing job can choose to reprocess a snapshot of its whole input. The stream processor’s output is written to Venice as a new dataset version, loaded in the background, and, finally, the read traffic gets swapped over to the new version. This is conceptually similar to the Kappa Architecture.


    We've described our data standardization work on the blog before; it helps us build taxonomies to power our Economic Graph. These standardization use cases started off using a different database, but have since fully migrated their more than 100 datasets to Venice. Having built-in support for reprocessing allows standardization engineers to run a reprocessing job in hours instead of days, and to run more of them at the same time, both of which greatly increase their productivity.


    Partial updates

    For both incremental pushes and streaming writes, it is not only possible to write entire rows, but also to update specific columns of the affected rows. This is especially useful in cases where multiple sources need to be merged together into the same dataset, with each source contributing different columns.

    Another supported flavor of partial updates is collection merging, which enables the user to declaratively add or remove certain elements from a set or a map.

    The goal of partial updates is for the user to mutate some of the data within a row without needing to know the rest of that row. This is important because all writes to Venice are asynchronous, and it is therefore not possible to perform “read-modify-write” workloads. Compared to systems which provide direct support for read-modify-write workloads (e.g., via optimistic locking), Venice’s approach of declaratively pushing down the update into the servers is more efficient and better at dealing with large numbers of concurrent writers. It is also the only practical way to make hybrid workloads work (see below).
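
    As an illustration of this "push the update down" idea, here is a sketch with a deliberately hypothetical writer interface (PartialUpdateWriter and obtainWriter are stand-ins, not the real Venice API): the writer only states which fields to set and which collection elements to add or remove, without ever reading the current row.

import java.util.List;
import java.util.Map;

// Hypothetical, simplified interface for declarative partial updates.
// It is NOT the real Venice client API; it only illustrates the concept.
interface PartialUpdateWriter {
  // Sends a declarative delta for the given key; servers merge it into the row.
  void update(String key, Map<String, Object> setFields,
              Map<String, List<Object>> addToCollections,
              Map<String, List<Object>> removeFromCollections);
}

public class PartialUpdateExample {
  public static void main(String[] args) {
    PartialUpdateWriter writer = obtainWriter();   // placeholder wiring for the sketch

    // Update two aspects of member 42's row and merge one collection, with no
    // read-modify-write: the current row is never fetched by the writer.
    writer.update(
        "member:42",
        Map.of("lastActiveEpochMs", System.currentTimeMillis()),   // set a scalar field
        Map.of("recentSearches", List.of("data engineer")),        // add to a set field
        Map.of("recentSearches", List.of("stale query")));         // remove from the same field
  }

  private static PartialUpdateWriter obtainWriter() {
    // A real producer would be configured here; this stub just logs the delta.
    return (key, set, add, remove) ->
        System.out.printf("update %s set=%s add=%s remove=%s%n", key, set, add, remove);
  }
}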

    Hybrid write workloads

    Venice supports hybrid write workloads that include both swapping the entire dataset (via Full Push jobs or reprocessing jobs) and nearline writes (via incremental pushes or streaming). All of these data sources get merged together seamlessly by Venice. The way this works is that after having loaded a new dataset version in the background, but before swapping the reads over to it, there is a replay phase where recent nearline writes get written on top of the new dataset version. When the replay is caught up, then reads get swapped over. How far back the replay should start from is configurable.


    This previous blog post on Venice Hybrid goes into more detail. Of note, the bookkeeping implementation details described in that post have since been redesigned.

    As of today, including both incremental pushes and streaming writes, there are more than 200 hybrid datasets in production.

    Putting it all together

    In summary, we can say that Venice supports two data sources: Hadoop and Samza (or eventually any stream processor, if the open source community wants it). In addition, Venice supports three writing patterns. This 2×3 matrix results in 6 possible combinations, all of which are supported:

    • Full dataset swap: Full Push job (from Hadoop) or Reprocessing job (from Samza)
    • Insertion of some rows into an existing dataset: Incremental Push job (from Hadoop) or Streaming writes (from Samza)
    • Updates to some columns of some rows: Incremental Push job doing partial updates (from Hadoop) or Streaming writes doing partial updates (from Samza)

    In addition to supporting these individual workloads, Venice also supports hybrid workloads, where any of the “full dataset swap” methods can be merged with any of the insertion and update methods, thanks to the built-in replay mechanism.

    In terms of scale, if we add up all of the write methods together, Venice at LinkedIn continuously ingests an average of 14 GB per second, or 39M rows per second. But the average doesn’t tell the full story, because Full Pushes are by nature spiky. At peak throughput, Venice ingests 50 GB per second, or 113M rows per second. These figures add up to 1.2 PB daily and a bit more than three trillion rows daily.

    On a per-server basis, we throttle the asynchronous ingestion of data by throughput of both bytes and rows. Currently, our servers with the most aggressive tunings are throttling at 200 MB per second, and 400K rows per second, all the while continuing to serve low latency online read requests without hiccups. Ingestion efficiency and performance is an area of the system that we continuously invest in, and we are committed to keep pushing the envelope on these limits.

    Reading from Venice

    Now that we know how to populate Venice from various sources, let’s explore how data can be read from it. We will go over the three main read APIs: Single Gets, Batch Gets, and Read Compute. We will then look at a special client mode called Da Vinci.

    Single Get

    The simplest way to interact with Venice is as if it were a key-value store: for a given key, retrieve the associated value.

    LinkedIn’s A/B testing platform, mentioned earlier, is one of the heavy users of this functionality. Including all use cases, Venice serves 6.25M Single Get queries per second at peak.

    Batch Get

    It is also possible to retrieve all values associated with a set of keys. This is a popular usage pattern across search and recommendation use cases, where we need to retrieve features associated with a large number of entities. We then score each entity by feeding its features into some ML model, and finally return the top K most relevant entities.
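
    The retrieve-score-rank pattern described above looks roughly like the sketch below. FeatureStoreClient and the scoring function are hypothetical stand-ins rather than the real Venice client API or an actual LinkedIn model; the sketch only shows the shape of a Batch Get followed by top-K selection.

import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical key-value client; the real Venice client exposes analogous
// single-get and batch-get style reads, but this interface is illustrative only.
interface FeatureStoreClient {
  float[] get(long entityId);                  // Single Get: features for one entity
  Map<Long, float[]> batchGet(Set<Long> ids);  // Batch Get: features for many entities
}

public class RankingExample {
  // Score candidates with some model and return the K most relevant entity IDs.
  static List<Long> topK(FeatureStoreClient client, Set<Long> candidateIds, int k) {
    Map<Long, float[]> features = client.batchGet(candidateIds);
    return features.entrySet().stream()
        .sorted(Comparator.comparingDouble(
            (Map.Entry<Long, float[]> e) -> -score(e.getValue())))
        .limit(k)
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());
  }

  // Placeholder model: a real system would feed the features to an ML model.
  static double score(float[] features) {
    double s = 0;
    for (float f : features) {
      s += f;
    }
    return s;
  }
}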


    A prominent example of this is the LinkedIn Feed, which makes heavy use of batch gets. Including all Batch Get use cases, Venice serves 45M key lookups per second at peak.

    Read Compute

    Another way to get data out of Venice is via its declarative read computation DSL, which supports operations such as projecting a subset of columns, as well as executing vector math functions (e.g., dot product, cosine similarity, etc.) on the embeddings (i.e., arrays of floats) stored in the datasets.
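
    A sketch of what a declarative read-compute request might look like follows. The ComputeRequest and ComputeClient shapes here are hypothetical (the real Venice DSL differs), but they capture the idea of pushing projection and vector math down to the servers so that only small scalar results travel back to the client.

import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical declarative read-compute request. Instead of fetching whole
// rows (including large embeddings), the client asks the servers to project a
// few columns and run vector math, returning only small scalar results.
public class ReadComputeExample {

  record ComputeRequest(List<String> projectedFields,
                        Map<String, float[]> dotProducts,         // stored field -> query embedding
                        Map<String, float[]> cosineSimilarities) {}

  interface ComputeClient {
    // Executes the request server-side for every key; the returned values hold
    // only the projected fields and the computed scalars.
    Map<Long, Map<String, Object>> compute(ComputeRequest request, Set<Long> keys);
  }

  static Map<Long, Map<String, Object>> scoreMembers(
      ComputeClient client, Set<Long> memberIds, float[] queryEmbedding) {
    ComputeRequest request = new ComputeRequest(
        List.of("memberId", "region"),                      // project small columns only
        Map.of("profileEmbedding", queryEmbedding),         // dot product on a stored embedding
        Map.of("activityEmbedding", queryEmbedding));       // cosine similarity on another one
    return client.compute(request, memberIds);
  }
}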

    Since 2019, these functions have been used by People You May Know (PYMK) to achieve horizontally scalable online deep learning. While it is common to see large scale deep learning get executed offline (e.g., via TonY), online deep learning tends to be constrained to a smaller scale (i.e., single instance). It is thus fairly novel, in our view, to achieve horizontal scalability in the context of online inference. The data stored in the PYMK dataset includes many large embeddings, and retrieving all of this data into the application is prohibitively expensive, given its latency budget. To solve this performance bottleneck, PYMK declares in its query what operations to perform on the data, and Venice performs it on PYMK’s behalf, returning only the very small scalar results. A previous talk given at QCon AI provides more details on this use case.

    At peak, read compute use cases perform 46M key lookups per second, and every single one of these lookups also performs an average of 10 distinct arithmetic operations on various embeddings stored inside the looked up value.

    Da Vinci

    So far, we have been exploring use cases that perform various kinds of remote queries against the Venice backend. These are the original and oldest use cases supported by Venice, and collectively we refer to them as Classical Venice. In 2020, we built an alternative client library called Da Vinci, which supports all of the same APIs we already looked at, but provides a very different cost-performance tradeoff, by serving all calls from local state rather than remote queries.


    Da Vinci is a portion of the Venice server code, packaged as a client library. Besides the APIs of the Classical Venice client, it also offers APIs for subscribing to all or some partitions of a dataset. Subscribed partitions are fully loaded into the host application's local state and eagerly updated by listening to Venice's internal changelog of updates. In this way, the state inside Da Vinci mirrors that of the Venice server, and the two are eventually consistent with one another. All of the write methods described earlier are fully compatible with both clients.
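
    A sketch of the eager-cache usage pattern: subscribe to partitions, wait for them to load locally, then serve reads from local state. The DaVinciLikeClient interface below is a hypothetical stand-in for illustration, not the actual Da Vinci API.

import java.util.Set;
import java.util.concurrent.CompletableFuture;

// Hypothetical, simplified shape of an eager-cache client. The real Da Vinci
// client offers subscription plus the same read APIs as the remote client;
// the names below are illustrative only.
interface DaVinciLikeClient<K, V> {
  CompletableFuture<Void> subscribeAll();                      // load every partition locally
  CompletableFuture<Void> subscribe(Set<Integer> partitions);  // or only selected partitions
  V get(K key);                                                // served from local state
}

public class EagerCacheExample {
  static void serve(DaVinciLikeClient<Long, float[]> client) {
    // Eagerly load the dataset into the host application's local state; the
    // state then stays fresh by following the dataset's changelog of updates.
    client.subscribeAll().join();

    // Reads never leave the process: 100% hit rate, no TTL to tune.
    float[] features = client.get(42L);
    System.out.println("embedding length: " + features.length);
  }
}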


    The same dataset can be used in both Classical and Da Vinci fashion at the same time. The application can choose which client to use, and switch between them easily, if cost-performance requirements change over time.

    Eager cache

    Da Vinci is what we call an “eager cache,” in opposition to the more traditional read-through cache, which needs to be configured with a TTL. The difference between these kinds of caches is summarized in the following table:

    Table: Eager cache vs. read-through cache

    As we can see, the main advantage of a read-through cache is that the size of the local state is configurable, but the drawback is the awkward tradeoff of tuning the TTL, which pits freshness against hit rate. An eager cache provides the opposite tradeoff: it does not let you configure the size of the local state (beyond controlling which partitions to subscribe to), but it provides a 100% hit rate and optimal freshness.

    Partitioning

    The easiest way to leverage Da Vinci is to eagerly load all partitions of the dataset of interest. This works well if the total dataset size is small—let’s say a few GB. We have some online applications in the ads space that do this.

    Alternatively, Da Vinci also supports custom partitioning. This enables an application to control which partition gets ingested in each instance, and thus reduces the size of the local state in each instance. This is a flexible mechanism, which is usable in a variety of ways, but also more advanced. It is appropriate for use within distributed systems that already have a built-in notion of partitioning. In this way, Da Vinci becomes a building block for other data systems.


    Galene

    Galene, our proprietary search stack, uses Da Vinci to load AI feature data inside its searchers, where it performs ranking on many documents at high throughput. It leverages Da Vinci with its all-in-RAM configuration*, since the state is not that large (e.g., 10 GB per partition) and the latency requirement is very tight. These use cases usually have few partitions (e.g., 16) but can have a very high replication factor (e.g., 60), adding up to a thousand instances all subscribing to the same Venice dataset.

    Samza

    Samza stream processing jobs also leverage Da Vinci to load AI feature data. In this case, the state can be very large, in the tens of TB in total, so many more partitions are used (e.g., one to two thousand) to spread it into smaller chunks. For stream processing, only one replica is needed, or two if a warm standby is desired, so it is the inverse of the Galene use cases. These use cases leverage the SSD-friendly configuration* to be more cost-effective.

    * N.B.: Both the all-in-RAM and SSD-friendly configurations are described in more detail in the storage section.

    Operating Venice

    Venice was designed from the ground up for massive scale and operability, and a large portion of development efforts is continuously invested in these important areas. As such, the system has built-in support for, among other things:

    • Multi-region
    • Multi-cluster
    • Multi-tenancy
    • Operator-driven configuration
    • Self-healing
    • Elastic & linear scalability

    In the subsequent sections, we’ll discuss each of these in turn.

    Multi-region

    It is a requirement to have our data systems deployed in all data centers. Not all data systems have the same level of integration between data centers, however. In the case of Venice, the system is globally integrated in such a way that keeping data in-sync across regions is easy, while also leveraging the isolation of each data center to better manage risks.

    Geo-replication


    The Push Job writes to all data centers, and we have Hadoop grids in more than one region, all of which can serve as the source of a push. Nearline writes are also replicated to all regions in an active-active manner, with deterministic conflict resolution in case of concurrent writes.

    These are important characteristics for our users, who want data to keep getting refreshed even if the data sources are having availability issues in some of the regions.

    Administrative operations, such as dataset creation or schema evolution, are performed asynchronously by each region, so that if a region has availability issues, it can catch up to what it missed when it becomes healthy again. This may seem like a trivial detail, but at LinkedIn we have datasets getting created and dropped every day, from users leveraging our self-service console and also from various systems doing these operations dynamically. As such, it is infeasible for Venice operators to supervise these procedures.

    Fault recovery

    Operators also have the ability to repair the data in an unhealthy region by copying it from a healthy region. In rare cases of correlated hardware failures, this last resort mechanism can save the day.


    Multi-cluster

    Each region contains many Venice clusters, some of which are dedicated to very sensitive use cases, while others are heavily multi-tenant. Currently, there are 17 production Venice clusters per region, spanning a total of 4000 physical hosts.

    Dataset-to-cluster assignment is fully operator-driven and invisible to users. Users writing to and reading from Venice need not know which cluster their dataset is hosted on. Moreover, operators can migrate datasets between clusters, also invisibly to users. Migration is used to optimize resource utilization, or to leverage special tunings that some clusters have.

    Multi-tenancy

    Each Venice cluster may contain many datasets; our most densely populated cluster currently holds more than 500 datasets. In such an environment, it is critical to have mechanisms for preventing noisy neighbors from affecting each other. This is implemented in the form of quotas, of which there are three kinds, all configurable on a per-dataset basis.

    There is a storage quota, which is the total amount of persistent storage a dataset is allowed to take. For Full Push jobs, this is verified prior to the push taking place, so that an oversized push doesn’t even begin to enter the system. For other write methods, there is an ongoing verification that can cause ingestion to stall if a dataset is going over its limit, in which case dataset owners are alerted and can request more quota.

    There are two kinds of read quotas: one for the maximum number of keys queryable in a Batch Get query, and the other for the maximum rate of keys looked up per second. In other words, N Single Get queries cost the same amount of quota as one Batch Get query of N keys.


    Operator-driven configuration

    One of the core principles of LinkedIn's data infrastructure teams is Invisible Infra, which means that users should be empowered to achieve their business goals without needing to know how the infrastructure helps them fulfill those goals. An example of this is given in the previous multi-cluster section, in the form of operator-driven cluster migration. There are many more such examples.


    Compression

    Venice supports end-to-end compression, which helps minimize storage and transmission costs. Various compression schemes are supported, and operators can choose to enable them on behalf of users. This configuration change takes effect on the next Full Push, which can be triggered by the user’s own schedule, or triggered by the operator. The latest compression scheme we integrated is Zstandard, which we use to build a shared dictionary as part of the Push Job. This dictionary is stored just once per dataset version, thus greatly reducing total storage space in cases where some bits of data are repeated across records. In the most extreme case, we’ve seen a dataset deflate by 40x!

    Repartitioning

    Venice also supports repartitioning datasets, which is another type of configuration change taking effect on the next Full Push. Partitions are the unit of scale in the system, and as datasets grow, it is sometimes useful to be able to change this parameter. For a given dataset size, having a greater number of smaller partitions enables faster push and rebalancing (more on this later).


    Storage

    Venice has a pluggable storage engine architecture. Previously, Venice leveraged BDB-JE as its storage engine, but since then, we have been able to fully swap this for RocksDB, without needing any user intervention. Currently, we use two different RocksDB formats: block-based (optimized for larger-than-RAM datasets, backed by SSD) and plain table (optimized for all-in-RAM datasets). As usual, changing between these formats is operator-driven. A previous blog post chronicles some of our recent RocksDB-related challenges and how we solved them.

    We are looking forward to seeing what the open source community will do with this abstraction, and whether new kinds of use cases may be achievable within the scope of Venice by plugging in new storage engines.

    Self-healing

    Venice leverages Apache Helix for partition-replica placements in each cluster, which enables the system to react readily to hardware failures. If servers crash, Helix automatically rebalances the partitions they hosted onto healthy servers. In addition, we leverage Helix’s support for rack-awareness, which ensures that at most one replica of a given partition is in the same rack. This is useful so that network switch maintenance or failures don’t cause us to lose many replicas of a partition at once. Rack-awareness is also useful in that it allows us to simultaneously bounce all Venice servers in a rack at once, thus greatly reducing the time required to perform a rolling restart of a full cluster. In these scenarios, Helix is smart enough to avoid triggering a surge of self-healing rebalance operations. A previous post provides more details on Venice’s usage of Helix, although the state model has been redesigned since the post was published.

    Elastic & linear scalability

    Again thanks to Helix, it is easy to inject extra hardware into a cluster and to rebalance the data into it. The exact same code path that is leveraged more than 1400 times a day to rewrite entire datasets is also leveraged during rebalance operations, thus making it very well-tested and robust. This important design choice also means that whenever we further optimize the write path (which we are more or less continuously doing), it improves both push and rebalance performance, thus giving more elasticity to the system.


    Beyond this, another important aspect of scalability is to ensure that the capacity to serve read traffic increases linearly, in proportion to the added hardware. For the more intense use cases described in this post—Feed and PYMK—the traffic is high enough that we benefit from over-replicating these clusters beyond the number of replicas needed for reliability purposes. Furthermore, we group replicas into various sets, so that a given Batch Get or Read Compute query can be guaranteed to hit all partitions within a bounded number of servers. This is important, because otherwise the query fanout size could increase to arbitrarily high amounts and introduce tail latency issues, which would mean the system’s capacity scales sublinearly as the hardware footprint increases. This and many other optimizations are described in more detail in a previous blog post dedicated to supporting large fanout use cases.

    Conclusion

    It would be difficult to exhaustively cover all aspects of a system as large as Venice in a single blog post. Hopefully, this overview of functionalities and use cases gives a glimpse of the range of possibilities it enables. Below is a quick recap to put things in a broader context.

    What is Venice for?

    We have found Venice to be a useful component in almost every user-facing AI use case at LinkedIn, from the most basic to cutting edge ones like PYMK. The user-facing aspect is significant, since AI can also be leveraged in offline-only ways, in which case there may be no need for a scalable storage system optimized for serving ML features for online inference workloads.


    Besides AI, there are other use cases where we have found it useful to have a derived data store with first-class support for ingesting lots of data in various ways. One example is our use of Venice as the stateful backbone of our A/B testing platform. There are other examples as well, such as caching a massaged view of source-of-truth storage systems, when cost and performance matter, but strong consistency does not.

    At the end of the day, Venice is a specialized solution, and there is no intent for it to be anyone’s one and only data system, nor even their first one. Most likely, all organizations need to have some source of truth database, like a RDBMS, a key-value store, or a document store. It is also likely that any organization would already have some form of batch or stream processing infrastructure (or both) before they feel the need to store the output of these processing jobs in a derived data store. Once these other components are in place, Venice may be a worthwhile addition to complement an organization’s data ecosystem.

    The road ahead

    In this post, we have focused on the larger scale and most mature capabilities of Venice, but we also have a packed roadmap of exciting projects that we will share more details about soon.

    We hope this upcoming chapter of the project’s life in the open source community will bring value to our peers and propel the project to the next level.


    Real-time analytics on network flow data with Apache Pinot


    The LinkedIn infrastructure has thousands of services serving millions of queries per second. At this scale, having tools that provide observability into this infrastructure is imperative to ensure that issues are quickly detected, diagnosed, and remediated. This level of visibility helps prevent outages so we can deliver the best experience for our members. To provide observability, various data points need to be collected, such as metrics, events, logs, and flows. Once collected, the data points can then be processed and made available, in real time, for engineers to use for alerting, troubleshooting, capacity planning, and other operations.

    At LinkedIn, we developed InFlow to provide observability into network flows. A network flow describes the movement of a packet through a network: it is the metadata of a packet, sampled at a network device, that characterizes the packet in terms of the 5-tuple of source IP, source port, destination IP, destination port, and protocol. It may also contain source and destination autonomous system numbers (ASNs), the IP address of the network device that captured the flow, the input and output interface indices of the network device where the traffic was sampled, and the number of bytes transferred.
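
    For concreteness, a flow record along the lines described above could be modeled like the sketch below; the field set and names are illustrative only, not InFlow's actual schema.

// Illustrative model of a sampled flow record; field names are hypothetical
// and only mirror the attributes described above.
public record FlowRecord(
    // The 5-tuple identifying the flow.
    String srcIp, int srcPort, String dstIp, int dstPort, int protocol,
    // Additional metadata carried by sFlow/IPFIX exports.
    long srcAsn, long dstAsn,
    String exporterIp,          // network device that sampled the flow
    int inputIfIndex, int outputIfIndex,
    long bytes) {}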

    Network devices can be configured to export this information to an external collector using various protocols. InFlow understands the industry standard sFlow and IPFIX protocols for collecting flows.

    How LinkedIn leverages flow data

    InFlow provides a rich set of time-series network data with over 50 dimensions, such as source and destination sites, security zones, ASNs, IP address type, and protocol. With this data, various types of analytical queries can be run to get meaningful insights about network health and characteristics.


    Figure 1.  A screenshot from InFlow UI’s Top Services tab which shows the 5 services consuming the most network bandwidth and the variation of this traffic over the last 2 hours

    Most commonly, InFlow is used for operational troubleshooting to get complete visibility into the traffic. For example, if there is an outage due to a network link's capacity being exhausted, InFlow can be used to find the top talkers for that link based on the hosts/services consuming the most bandwidth (Figure 1); based on the nature of the service, further steps can then be taken to remediate the issue.

    Flow data also provides source and destination ASN information, which can be used for optimizing cost, based on bandwidth consumption of different kinds of peering with external networks. It can also be used for analyzing data based on several dimensions for network operations. For example, finding the distribution of traffic by IPv4 or IPv6 flows or the distribution of traffic based on Type of Service (ToS) bits.

    InFlow architecture overview


    Figure 2. InFlow architecture

    Figure 2 shows the overall InFlow architecture. The platform is divided into 3 main components: flow collector, flow enricher, and InFlow API with Pinot as a storage system. Each component has been modeled as an independent microservice to provide the following benefits:

    1. It enforces the single responsibility principle and prevents the system from becoming a monolith.
    2. Each of the components has different requirements in terms of scaling. Separate microservices ensure that each can be scaled independently.
    3. This architecture creates loosely coupled pluggable services which can be reused for other scenarios.

    Flow collection

    InFlow receives 50k flows per second from over 100 different network devices on the LinkedIn backbone and edge. InFlow supports sFlow and IPFIX as protocols for collecting flows from network devices; the choice is based on each device vendor's support for the protocols and on keeping the impact of flow export on the device's performance minimal. The InFlow collector receives and parses these incoming flows, aggregates the data into unique flows for each minute, and pushes them to a Kafka topic for raw flows.
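
    A minimal sketch of the aggregate-then-publish step is shown below, assuming a raw-flows Kafka topic name, a string-encoded 5-tuple key, and a CSV-style value that are all hypothetical; the real collector also parses sFlow/IPFIX packets, which is omitted here.

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Sketch: aggregate sampled flows into unique per-minute flows keyed by the
// 5-tuple, then publish the aggregates to a raw-flows Kafka topic.
public class FlowAggregator {
  private final Map<String, Long> bytesByFlow = new HashMap<>();
  private final KafkaProducer<String, String> producer;

  public FlowAggregator() {
    Properties props = new Properties();
    props.put("bootstrap.servers", "kafka:9092");   // placeholder broker address
    props.put("key.serializer",
        "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer",
        "org.apache.kafka.common.serialization.StringSerializer");
    this.producer = new KafkaProducer<>(props);
  }

  // Called for every parsed sample within the current one-minute window.
  public void onSample(String srcIp, int srcPort, String dstIp, int dstPort,
                       int protocol, long bytes) {
    String key = srcIp + ":" + srcPort + ">" + dstIp + ":" + dstPort + "/" + protocol;
    bytesByFlow.merge(key, bytes, Long::sum);
  }

  // Called once per minute: emit one message per unique flow, then reset.
  public void flush(long windowStartEpochMs) {
    for (Map.Entry<String, Long> e : bytesByFlow.entrySet()) {
      String value = windowStartEpochMs + "," + e.getKey() + "," + e.getValue();
      producer.send(new ProducerRecord<>("inflow-raw-flows", e.getKey(), value));
    }
    bytesByFlow.clear();
    producer.flush();
  }
}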

    Flow enrichment

    The data processing pipeline for InFlow leverages Apache Kafka and Apache Samza for stream processing of incoming flow events. Our streaming pipeline processes 50k messages per second, enriching the data with 40 additional fields (like service, source and destination sites, security zones, ASNs, and IP address type), which are fetched from various internal services at LinkedIn. For example, our data center infrastructure management system, InOps, provides the site, security zone, and security domain of the source and destination IPs for a flow. The incoming raw flow messages are consumed by a stream processing job on Samza, and after the enrichment fields are added, the result is pushed to an enriched Kafka topic.
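
    A rough sketch of the enrichment step using Samza's low-level StreamTask API follows. The topic names, the lookup helpers, and the message format are hypothetical placeholders; the real job handles serialization, caching of InOps lookups, and many more fields.

import java.util.Map;
import org.apache.samza.system.IncomingMessageEnvelope;
import org.apache.samza.system.OutgoingMessageEnvelope;
import org.apache.samza.system.SystemStream;
import org.apache.samza.task.MessageCollector;
import org.apache.samza.task.StreamTask;
import org.apache.samza.task.TaskCoordinator;

// Sketch of a Samza task that consumes raw flows, adds enrichment fields
// (sites, security zones, etc.) and emits the result to an enriched topic.
public class FlowEnrichmentTask implements StreamTask {
  private static final SystemStream ENRICHED =
      new SystemStream("kafka", "inflow-enriched-flows");   // placeholder topic name

  @Override
  public void process(IncomingMessageEnvelope envelope, MessageCollector collector,
                      TaskCoordinator coordinator) {
    @SuppressWarnings("unchecked")
    Map<String, Object> flow = (Map<String, Object>) envelope.getMessage();

    // Hypothetical lookups against internal services such as InOps; in practice
    // these results are cached to keep up with 50k messages per second.
    flow.put("srcSite", lookupSite((String) flow.get("srcIp")));
    flow.put("dstSite", lookupSite((String) flow.get("dstIp")));
    flow.put("srcSecurityZone", lookupSecurityZone((String) flow.get("srcIp")));

    collector.send(new OutgoingMessageEnvelope(ENRICHED, envelope.getKey(), flow));
  }

  private String lookupSite(String ip) { return "unknown"; }          // placeholder
  private String lookupSecurityZone(String ip) { return "unknown"; }  // placeholder
}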

    Data storage

    InFlow requires storage of tens of TBs of data with a retention of 30 days. To support its real-time troubleshooting use case, the data must be queryable in real-time with sub-second latency so that engineers can query the data without any hassles during outages. For the storage layer, InFlow leverages Apache Pinot.


    InFlow UI


    Figure 3.  A screenshot from InFlow UI’s Explore tab which provides a self-service interface for users to visualize flow data by grouping and filtering on different dimensions

    The InFlow UI is a dashboard pre-populated with commonly used visualizations of flow data, providing a rich interface where the data can be filtered or grouped by any of the 40 dimension fields. The UI also has an Explore section, which allows users to build ad-hoc queries. The UI sits on top of the InFlow API, a middleware layer responsible for translating user input into Pinot queries and issuing them to the Pinot cluster.
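    A minimal sketch of what that translation might look like, using the Apache Pinot Java client to run a "top talkers in the last hour" query; the broker address, table name, and column names are assumptions rather than InFlow's actual schema.

```java
import org.apache.pinot.client.Connection;
import org.apache.pinot.client.ConnectionFactory;
import org.apache.pinot.client.ResultSet;
import org.apache.pinot.client.ResultSetGroup;

// Illustrative middleware step: turn a "top talkers for the last hour" request
// into a Pinot SQL query. Table and column names are assumptions.
public class InFlowApiSketch {
    public static void main(String[] args) {
        Connection pinot = ConnectionFactory.fromHostList("pinot-broker:8099");

        long oneHourAgoMs = System.currentTimeMillis() - 3_600_000L;
        String sql = "SELECT srcHost, SUM(bytes) AS totalBytes "
                + "FROM flows "
                + "WHERE timestampMs > " + oneHourAgoMs + " "
                + "GROUP BY srcHost "
                + "ORDER BY totalBytes DESC "
                + "LIMIT 10";

        ResultSetGroup results = pinot.execute(sql);
        ResultSet rs = results.getResultSet(0);
        for (int row = 0; row < rs.getRowCount(); row++) {
            System.out.println(rs.getString(row, 0) + " -> " + rs.getLong(row, 1) + " bytes");
        }
    }
}
```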

    Pinot as a storage layer

    In the first version of InFlow, data was ingested from the enriched Kafka topic into HDFS, and we leveraged Trino to facilitate user queries on the data in HDFS. However, the ETL and aggregation pipeline added a 15-20 minute delay, reducing the freshness of the data, and query latencies against HDFS via Trino were on the order of 15-30 seconds. This was acceptable for historical data analytics, but for real-time troubleshooting the data needs to be available with a maximum delay of one minute.


    Based on the query latency and data freshness requirements, we explored several storage solutions available at LinkedIn (such as Espresso, Kusto, and Pinot) and decided to onboard our data to Apache Pinot. We needed a reliable system providing real-time ingestion and sub-second query latencies, and Pinot's support for Lambda and Lambda-less architectures, real-time ingestion, and low latency at high throughput could help us achieve optimal results. Additionally, the Pinot team at LinkedIn is experimenting with a new use case called Real-time Operational Metrics Analysis (ROMA), which enables engineers to slice and dice metrics along different combinations of dimensions to monitor infrastructure in near real time, to analyze the last few weeks, months, or years of data for trends and patterns that inform capacity forecasting and planning, and to find the root cause of outages quickly, reducing time to recovery. These objectives aligned well with our problem statement of processing large volumes of metrics in real time.


    The Pinot ingestion pipeline consumes directly from the enriched Kafka topic and creates segments on the Pinot servers, which improves the freshness of the data in the system to less than a minute. User requests from the InFlow UI are converted to Pinot SQL queries and sent to the Pinot broker for processing. Since Pinot servers keep data and indices in cache-friendly data structures, query latencies are a huge improvement over the previous version, where data was queried from disk (HDFS).

    Several optimizations were needed to reach these query latencies and ingestion parameters. Because the data volume of the input Kafka topic is high, we experimented with the ingestion parameters to determine the optimal number of topic partitions that allows parallel consumption into Pinot segments. Most of our queries involved a regexp_like condition on the devicehostname column (the name of the network device that exported the flow), used to narrow results down to a specific plane of the network. regexp_like is inefficient because it cannot leverage any index, so to resolve this we set up an ingestion transformation in Pinot, i.e., a transformation function applied to the data before it is ingested. The transformation creates a derived column, flowType, which classifies a flow into a specific plane of the network based on the name of the exporting device. For example, if the exporting device sits at the edge of our network, the flow is classified as Internet-facing. The flowType column is indexed and used for equality comparisons instead of regexp_like, which improved query latency by 50%.
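    For illustration, the classification performed by the ingestion transformation might look like the following Java helper. The hostname prefixes and plane names are invented; in production the logic is expressed as a Pinot ingestion transformation that populates the derived flowType column, not as application code.

```java
// Illustrative classification of a flow into a network plane based on the
// exporting device's hostname. The hostname patterns here are made up.
public class FlowTypeClassifier {
    static String flowType(String deviceHostname) {
        if (deviceHostname.startsWith("edge-")) {
            return "INTERNET_FACING";   // exported at the edge of the network
        } else if (deviceHostname.startsWith("bb-")) {
            return "BACKBONE";          // exported on the backbone
        }
        return "DATACENTER";            // everything else: intra-DC traffic
    }

    public static void main(String[] args) {
        System.out.println(flowType("edge-router-01"));  // INTERNET_FACING
    }
}
```

    An equality filter on a low-cardinality derived column like this can be served from an index, whereas regexp_like forces a scan of the column values.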


    Queries from InFlow always request data from a specific time range. To improve query performance, timestamp-based segment pruning was enabled in Pinot, so that only the segments matching a query's timestamp filter are processed. Based on the Pinot team's input, indexes were also set up on the different dimension columns to further aid query performance. A sketch of the resulting query shape is shown below.
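    Putting the two optimizations together, here is a hedged sketch of the query shape before and after: the rewritten query filters on the indexed flowType column with an equality condition and constrains the timestamp column so that Pinot can prune irrelevant segments. Table and column names are assumptions.

```java
// Sketch of the query shape before and after the optimizations described above.
public class QueryShapes {
    // Before: regexp_like on devicehostname cannot use an index.
    static String before(long fromMs, long toMs) {
        return "SELECT srcHost, SUM(bytes) FROM flows "
                + "WHERE regexp_like(devicehostname, '^edge-.*') "
                + "AND timestampMs BETWEEN " + fromMs + " AND " + toMs + " "
                + "GROUP BY srcHost LIMIT 10";
    }

    // After: equality on the derived, indexed flowType column, same time window
    // so segment pruning on the timestamp column still applies.
    static String after(long fromMs, long toMs) {
        return "SELECT srcHost, SUM(bytes) FROM flows "
                + "WHERE flowType = 'INTERNET_FACING' "
                + "AND timestampMs BETWEEN " + fromMs + " AND " + toMs + " "
                + "GROUP BY srcHost LIMIT 10";
    }
}
```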

    Conclusion


    Figure 4.  Latency metric for InFlow API query for top flows in the last 12 hours before and after onboarding to Pinot

    Following the successful onboarding of flow data to a real-time table on Pinot, data freshness improved from 15 minutes to 1 minute and query latencies were reduced by as much as 95%. Some of the more expensive queries, which took as long as 6 minutes with Trino, now complete in 4 seconds with Pinot. This makes it much easier for network engineers at LinkedIn to get the data they need for troubleshooting or for running real-time analytics on network flow data.

    What’s next

    The current network flow data only provides sampled flows from the LinkedIn backbone and edge network. Skyfall is an eBPF-based agent, developed at LinkedIn, that collects flow data and network metrics from the host's kernel with minimal overhead. The agent captures all flows for a host without sampling and will be deployed across all servers in the LinkedIn fleet. This will give us 100% coverage of flows across our data centers and enable us to support use cases that require unsampled flow data, such as security auditing and validation. Because the agent collects more data from more devices, the scale of data collected by Skyfall is expected to be 100 times that of InFlow. We look forward to leveraging the InFlow architecture to support this scale and to provide real-time analytics on top of the rich set of metrics exported by the Skyfall agent. Another upcoming feature we are excited about is leveraging InFlow data for anomaly detection and richer traffic analytics.

    Acknowledgements

    Onboarding our data to Pinot was a collaborative effort, and we would like to express our gratitude to Subbu Subramaniam, Sajjad Moradi, Florence Zhang, and the Pinot team at LinkedIn for their patience and effort in understanding our requirements and working on the optimizations required to reach optimal performance.


    Thanks to Prashanth Kumar for the continuous dialogue in helping us understand the network engineering perspective on flow data. Thanks to Varoun P and Vishwa Mohan for their leadership and continued support.
