LinkedIn’s journey to Java 11


*EBWR, EBPR and EPR are all throughput metrics. Java 11 G1, ZGC and Shenandoah all perform extremely well in comparison to Java 8 G1 for Brooklin.

As a side note here, we tested some applications with ZGC and Shenandoah, which were experimental and not for production usage, and saw that some applications performed exceptionally well with these collectors. These results helped us confirm our desire to move to Java 11.

Automation

In addition to changing the build processes for over 2,000 repositories, it was also necessary to change the runtime for over 1,000 applications, a lofty goal for a very small working group. Thankfully, automation really saved us here! We were able to automate many of the changes needed to migrate to Java 11. Although this did not give us a 100% success rate, automating any percentage of the repositories’ migration to Java 11 made the workload much more palatable.

After some minor changes to our infrastructure, it was possible to switch repository build systems to Java 11. We were then able to trigger mass Java 11 builds in a test environment to find out what issues needed to be addressed. This was, without a doubt, one of the most important capabilities and learnings of the Java 11 migration. This testing allowed us to identify a plethora of edge cases as well as several major challenges. Here are some of the major challenges that we identified:

JDK cross-compatibility issues

The first and most pressing issue was the cross-compatibility between Java 8 and Java 11. We realized it would take multiple years to complete this upgrade for the company, which meant that we would be in a transition state where both JDK 8 and JDK 11 would be in use for a while. LinkedIn runs with multi-repo source control, so we needed to ensure every repository could work for both Java 8 and Java 11 upstreams. The reason we call this cross-compatibility rather than backwards-compatibility (which would only be a matter of bytecode level) is that we also found cases where code compiled on Java 8 but failed to run properly on Java 11. These cases include the removal of JavaEE libraries, the change in the default classloader type, and stricter class casting in Java 11.
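A well-known instance of the classloader change, shown here as a minimal hypothetical illustration (not code from our codebase): the cast below compiles under both JDK 8 and JDK 11, but throws a ClassCastException at runtime on Java 11 because the application class loader is no longer a URLClassLoader.

```java
import java.net.URL;
import java.net.URLClassLoader;

public class ClasspathScanner {

    // Compiles on JDK 8 and JDK 11; works on JDK 8, but fails at runtime on JDK 9+
    // because the system class loader is no longer a URLClassLoader.
    public static URL[] classpathUrls() {
        URLClassLoader loader = (URLClassLoader) ClassLoader.getSystemClassLoader();
        return loader.getURLs();
    }

    public static void main(String[] args) {
        for (URL url : classpathUrls()) {
            System.out.println(url);
        }
    }
}
```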

We found that there were too many of these issues to address individually, so we decided to use the “--release 8” flag to have the Java 11 compiler emit Java 8-level bytecode and to restrict usage of the newer APIs. The downside is that new APIs and language features, like Set.of() and the var keyword, cannot be used. The upside, however, is that we were able to maintain compatibility between Java 8 and 11 much more easily, a tradeoff that the team unanimously agreed on.
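On the command line, the flag looks like the following (a minimal illustration; in practice it is set through the build toolchain rather than invoked by hand):

```
# Compile with a JDK 11 toolchain while targeting Java 8 bytecode and the Java 8 API surface.
javac --release 8 -d build/classes src/main/java/com/example/Example.java

# With --release 8 in effect, Java 9+ APIs such as Set.of(...) and the var keyword
# are rejected at compile time instead of failing later at runtime.
```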

Removal of libraries

JavaEE libraries were removed from JDK 11, but they were widely used in our codebase. Many of these libraries have open source replacements.

We had to decide whether to have repo owners manually replace instances of these libraries or to add them to our build toolchain. We decided that the cost of manual removal was too high for our small working group, so we added a static final version of the JavaEE libraries to the build toolchain by default. These libraries are relatively lightweight, so it wasn’t a big deal to patch them in.
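For reference, these replacements are typically the standalone JavaEE artifacts published to Maven Central. The coordinates below are illustrative examples of that kind of dependency, not LinkedIn’s actual toolchain configuration, and the versions and exact set depend on usage:

```
javax.xml.bind:jaxb-api:2.3.1
com.sun.xml.bind:jaxb-impl:2.3.1
javax.annotation:javax.annotation-api:1.3.2
javax.activation:javax.activation-api:1.2.0
javax.xml.ws:jaxws-api:2.3.1
```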

JVM option changes

JVM options also changed quite a bit between Java 8 and 11. Several options were made obsolete and others were deprecated in favor of newer options. For most options, we used an open source service called JaCoLine to help identify and remove obsolete options. GC logging options are one set of options that received a major revamp, due to JEP 271. After realizing that the logging would look completely different and that there wasn’t always a good mapping between old and new GC logging options, we decided to create a default configuration and ask users to modify it if needed.

That being said, unified GC logging is another strong reason to move past Java 8. It makes reading GC logs significantly easier and it’s a feature that can be leveraged to streamline lots of tooling.
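As a rough illustration of the JEP 271 change (paths and decorators here are placeholders, and the exact default we shipped may differ):

```
# Java 8 style GC logging flags (no longer accepted after JEP 271):
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/path/to/gc.log

# Approximate Java 11 equivalent using unified logging:
-Xlog:gc*:file=/path/to/gc.log:time,uptime,level,tags:filecount=5,filesize=20M
```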

Internal dependencies

LinkedIn runs on a microservice architecture, which means there are many repositories linked to each other through a dependency graph. The challenge here is that if an upstream dependency has not finished migrating, it may block its downstream consumers, because the upstream repo may need changes to be compatible with Java 11. This is not an easy problem to solve. By running graph algorithms on the dependency graph, we found that the targeted applications had more than 25 levels of dependencies. We wanted the lower levels of dependencies to migrate first, but following a strict ordering would have restricted migration velocity.

In the end, we decided to use rough bucketing to split the migration into three parts. During each part, applications around the same level in the dependency graph would be migrated. This was the compromise we made between correctness and velocity, allowing most applications to not be blocked at all by dependencies while maintaining a decent migration throughput. Learning about our dependency graph was certainly key in making an informed decision about how to do this.
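As a toy sketch of the bucketing idea (hypothetical repository names and a simplified graph, not our actual tooling), each repository’s depth in the internal dependency graph can be computed and the depth range split into a fixed number of migration waves:

```java
import java.util.*;

/** Toy sketch: bucket repositories into migration waves by their depth in an
 *  (assumed acyclic) internal dependency graph. */
public class MigrationWaves {

    /** deps maps each repo to the internal repos it depends on. */
    static int level(String repo, Map<String, List<String>> deps, Map<String, Integer> memo) {
        Integer cached = memo.get(repo);
        if (cached != null) {
            return cached;
        }
        int lvl = 0; // repos with no internal dependencies sit at level 0
        for (String dep : deps.getOrDefault(repo, List.of())) {
            lvl = Math.max(lvl, level(dep, deps, memo) + 1);
        }
        memo.put(repo, lvl);
        return lvl;
    }

    /** Split repos into numWaves waves by level, lowest levels (fewest upstream deps) first. */
    static Map<Integer, List<String>> waves(Map<String, List<String>> deps, int numWaves) {
        Map<String, Integer> memo = new HashMap<>();
        int maxLevel = 0;
        for (String repo : deps.keySet()) {
            maxLevel = Math.max(maxLevel, level(repo, deps, memo));
        }
        Map<Integer, List<String>> buckets = new TreeMap<>();
        for (Map.Entry<String, Integer> entry : memo.entrySet()) {
            int wave = (numWaves * entry.getValue()) / (maxLevel + 1); // 0 .. numWaves-1
            buckets.computeIfAbsent(wave, w -> new ArrayList<>()).add(entry.getKey());
        }
        return buckets;
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = Map.of(
                "base-utils", List.of(),
                "rest-client", List.of("base-utils"),
                "feed-service", List.of("rest-client", "base-utils"));
        // Prints: {0=[base-utils], 1=[rest-client], 2=[feed-service]}
        System.out.println(waves(deps, 3));
    }
}
```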

After dealing with these roadblocks and more, the infrastructure changes and automation fixes were tested iteratively using our infrastructure’s dry-run testing mechanisms until we managed to automatically migrate about half of the library repositories (~500). We applied the automation to applications as well, but did not automatically commit those changes, as we still required owners to do runtime validation. This runtime validation included both functional and non-functional constraints.

There were more problems than we had previously anticipated and we realized that several of these changes would need to be addressed going forward with future major Java version upgrades. Therefore, it was imperative to spend some time building quality infrastructure that we could reuse and now that Java 17 has arrived, we couldn’t be happier that we did! 

All in all, preparation for the migration, including early adopter testing, infrastructure changes, building automation, and automatically upgrading 500 libraries, took three quarters.

Migration

The actual migration was planned for an additional three quarters in which 500 libraries and about 1,100 applications would be migrated to Java 11, led by a team of two engineers and one Technical Project Manager. 

Thanks to our thorough pre-migration testing and automation, we did not see too many issues throughout the migration. Preparation really does pay off! Most teams were able to finish their migration within a few hours. 

However, we did see a couple of common runtime issues:

One challenge we faced was some applications suffering degraded GC performance because the Java process ended up with fewer GC threads, since the Java 11 JVM respects cgroup limits when sizing them. Migrating to Java 11 exposed this issue in several applications that had effectively been taking advantage of LinkedIn’s soft limits (cpu.shares), where CPU cycles could be “borrowed” from idle “neighbor” applications on the same host. With cgroup limits being enforced, access to these cores was lost. In some cases, manually increasing the number of GC threads was required to maintain the same performance.
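Where needed, the thread counts can be pinned explicitly. The flags below are standard HotSpot options; the values are illustrative and should be sized per application:

```
# Explicitly size the GC thread pools:
-XX:ParallelGCThreads=8
-XX:ConcGCThreads=2

# Or tell the JVM to size its defaults as if more CPUs were available:
-XX:ActiveProcessorCount=8
```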

Another issue we saw across all Java 11 versions was a stark increase in off-heap memory usage. This did not seem to map to any specific operation and looked more like a fragmentation issue. Switching from the glibc memory allocator to either mimalloc or jemalloc helped tremendously with these issues.
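One common way to make such a switch without rebuilding anything is to preload the allocator library; the library paths below are examples and vary by distribution and build:

```
# Run the JVM with jemalloc instead of the glibc allocator:
LD_PRELOAD=/usr/lib64/libjemalloc.so.2 java -jar my-service.jar

# Or with mimalloc:
LD_PRELOAD=/usr/local/lib/libmimalloc.so java -jar my-service.jar
```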

Though these issues were a bit scary at first, it was nice to be able to dig down to the root causes, find proper resolutions, and share our findings in this blog post.

During and after the migration, we tried to measure performance as well as we could. We built automation that leveraged our metric collection system to get a rough measurement of performance before and after the Java 11 migration. In total, we collected data from 200+ applications and found that Java 11 decreased P99 latency by an average of 10% and increased maximum throughput by an average of 20%. It’s worth noting that we did not change the GC type in the migration, both to reduce the degree of disruption and to have a fairer comparison of performance. Hopefully, these numbers over a decent sample size can be helpful to readers.

In addition to the performance improvements, Java 11 also brings some runtime improvements, like Java Flight Recorder (JFR), which is now open source. Overall, this migration can be deemed very fruitful!

Future work

There’s still a lot of work to be done. While Jetty is done, we still need to migrate three remaining Java application tracks to Java 11. Afterwards, it will be possible to enable full Java 11 bytecode with minimal effort. In addition, new features like ZGC, Shenandoah, and Project Jigsaw can all be experimented with to see if there are benefits to be gained there. CMS is also deprecated in Java 11 and was removed in Java 14, which means that moving LinkedIn off of CMS will be another major initiative. Finally, Java 17 has arrived and needs to be considered going forward.

Acknowledgements

I’m very happy to be able to write this blog post and it goes without saying that this would not be possible without major support from many people at LinkedIn and Microsoft. First of all, thanks to Vivek Deshpande and Alex Dubrouski for participating heavily in the Java 11 working group. Thanks to my manager, Xialin Zhu, for being a guiding hand when needed. Thanks to our Technical Project Manager Andrew Ding for keeping everyone on track. And I’d also like to give a special thanks to LinkedIn’s Build Tools team, especially Kyle Moore and Yiming Wang, who helped consult on many of the build infrastructure changes that needed to be made for Java 11.

Our Approach to Research and A/B Testing

We are constantly striving to improve the experience on LinkedIn for our members and customers, with research and experimentation, such as A/B Testing, playing a key role in that work. 

Nearly a decade ago, I discussed the importance of these techniques in our journey to create economic opportunity for every member of the global workforce. Today we have a strong, principled approach to how we design and run A/B tests on everything from UI designs to AI algorithms, and from feature launches to bug fixes. As our platform continues to grow and evolve, these techniques have become even more essential for us to deliver on our vision of creating economic opportunity for every member of the global workforce. More specifically, we use these techniques to:

  • Deliver the best experiences to our community by leveraging innovation at scale. Through testing and measuring, we continuously evolve our products to add more value, making our platform safer, more engaging, and more enjoyable with every interaction. We use various methods to evolve our products and services, from member surveys to in-depth offline data analysis to online A/B tests when we have a new feature that we think will benefit our members. For example, we recently tested new ways to help members discover relevant news, conversations, and voices from people and organizations they might not otherwise know. 

  • Avoid guessing; instead test, measure and test again. We don’t assume that we inherently know what is best for our members, as their needs evolve over time. By testing and constantly measuring, we seek feedback and insights to help guide us in the right direction. If a product feature we build doesn’t deliver the impact we intended, we make adjustments.

  • Move quickly and thoughtfully. Our well-defined process includes design evaluations, committee reviews, and quality checks aimed at preventing unintended consequences. We also use observational causal studies to analyze historical data and discover causal patterns whenever applicable. The development of our T-REX platform has also standardized and improved our A/B testing processes. 

Throughout all of this, we believe in the importance of sharing knowledge. We regularly share insights from our tests with the broader engineering community through papers, open source, and academic partnerships.

We are proud to have developed a culture at LinkedIn where research and experimentation are celebrated. Not only does this work help us stay innovative, it reminds us that everything we do is in service of our community and of creating more economic opportunities for our members around the world.

Operating System Snapshot Automation

Co-authors: Rohit Jamuar, Tianxin Zhou

Introduction

LinkedIn has a large set of physical servers geographically spread across several locations. Every application is hosted on physical servers and is distributed and managed across these hosts. With a reasonably sizable footprint of servers in data centers, LinkedIn is responsible for ensuring that these hosts are always on an operating system (OS) version deemed the “latest and greatest” for all intents and purposes. The Production Systems Software Engineering (PSSE) organization within LinkedIn has taken responsibility for creating timely OS snapshots that are installed on these servers regularly. This blog discusses how this process was implemented and the impetus behind the OS Snapshot Automation (OSSA) project.

Historically, there were less rigid constraints around building snapshots and rolling them out across our server fleet. At LinkedIn, we started pursuing the creation and release of OS snapshots at a defined cadence, as it’s ideal for servers to upgrade to the latest snapshots regularly and for older snapshots (with potential security vulnerabilities) to be retired. With this vision in mind, we wanted newly built OS snapshots validated once per month with due process and released with a tightly managed tempo. The main incentive behind creating a dedicated product for conducting these steps in an automated manner is rooted in improving overall operational excellence: being able to build snapshots automatically at a regular cadence allows timely validation and release to the fleet, which is a necessity for giving customers confidence that their data and private information is not exploited due to potential OS-level vulnerabilities on servers.

Motivation

Pre-OSSA, the OS snapshot process was a manual one, closely tied to a handful of one-off shell scripts, and the entire ecosystem was tied to a single server. Moreover, metadata about snapshots and their respective lifecycles was stored in an internal wiki document, and there wasn’t a way to reference this data programmatically. The existing solution had challenges with maintainability, scalability, and high availability. Another big challenge for this ecosystem was how snapshotting was conducted: it required human effort to create, promote, and deprecate a release. The existing infrastructure and processes to get a snapshot created and boot-tested could, at best, be described as a stopgap solution, meaning that everything was conducted manually and needed dedicated full-time engineer (FTE) time. OSSA was envisioned with the requirement of making this ecosystem highly available, well-monitored, and programmatically configurable by its consumers. Aside from improving this ecosystem’s availability, we also wanted to coalesce the different one-off scripts into a multiproduct to improve the code base’s craftsmanship and maintainability. 

Improving data accessibility via RESTful API

Initially, processes and information were tied to asynchronous communications over Slack and Jira tickets, which lacked visibility and became cumbersome for tracking necessary information. One of the ways we hoped to solve the visibility problem was by disseminating this information via a RESTful API, which was the first important step for OSSA to take. This allowed us to bridge the gap between the information strewn across internal wiki pages and Jira tickets and expose OS snapshot data (release_name, kernel_version, base_release, and expiration_date) via an HTTP GET call. Another step we wanted to explore was enabling the partner teams that validate OS snapshots to interact with OSSA, so that they could relay their validation results without having to do so over mediums that aren’t easily queryable. With this support enabled, anyone interested can make a simple HTTP GET call and see which teams have validated an OS snapshot and the status of that validation. Additionally, we enabled support for building OS snapshots and managing their lifecycle via POST calls to this API. In the grand scheme of things, enabling these functionalities over an API made integration with external teams’ products feasible and allowed us to continue working towards a solution where all the disparate workflows can be triggered via a one-touch model.
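As a purely illustrative sketch (the field names come from the list above; the values, the validations block, and the endpoint shape are hypothetical), a GET response for a single snapshot might look something like this:

```json
{
  "release_name": "rhel7-test-2023-01",
  "kernel_version": "3.10.0-1160.el7",
  "base_release": "RHEL 7.9",
  "expiration_date": "2023-07-01",
  "validations": [
    { "team": "Maize",   "status": "nominated" },
    { "team": "InfoSec", "status": "nominated" },
    { "team": "HCE",     "status": "pending" }
  ]
}
```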

When talking internally and with partner teams, it was vital that we could authenticate and authorize incoming POST requests to OSSA. We decided to rely on DataVault’s token-based authorization service for this, as DataVault has a well-established ecosystem that drives the majority of authorization requests at LinkedIn and fit our expectations. We created custom ACLs, with access rights per validating team, and ensured that these ACLs are enforced by DataVault when an external user submits a POST request with a token. The individual or automation must retrieve this token after authenticating to the DataVault token service. 

High availability of API

With this design in place, the next step for us was to ensure that this API remains highly available, as we already had several stakeholders depending on the metadata provided by this API.

(Diagram: OSSA API deployment across data centers; DC = datacenter)

We decided to have two nodes per site and to put all services running on these nodes behind ATS. Our partner teams expected traffic to be routed from outside of the environment where these servers were present, and without ATS, interacting parties would have had to open network ACLs to interact. With the service spread geographically, we had to ensure that OSSA’s API reports the same dataset everywhere, so we decided to replicate data between the different data centers using GoldenGate replication.

Improved visibility into overall processing

While the API paved the way for managing the OS snapshot lifecycle, OSSA also enabled more granular visibility into the overall OS snapshot process by exposing related data via channels like Iris and an internal event bus. We use Iris-based notifications to learn about the state of an OS snapshot during its build, testing, and monitoring. We also emit events to the event bus for anyone to consume via an intuitive UI, so external teams are not tied to interacting with the API for this information.

Now that we have discussed OSSA’s API and HA design in detail, we will dive into what an OS snapshot consists of, how we have been building it, and the essential validation performed by OSSA before creating an event.

What is an OS snapshot?

Before we dive into how an OS snapshot is built, it’s good to understand what an OS snapshot is. An OS snapshot is a collection of boot files (initrd, vmlinuz), RPMs, and some extra metadata. The “snapshot” in “OS snapshot” comes from the fact that we take a proverbial snapshot of all of the latest locally available RPMs and bundle them together into an entity that is meant to be immutable by nature. This is deliberate because it helps us isolate issues if we can reliably install the same RPMs across different test environments.

How do we build an OS snapshot?

Our team inherited OS snapshot creation from a partner team internal to PSSE. At the time we took over this effort, nightly replication of RPMs from upstream sources was preconfigured using an open-source tool called mrepo. For RHEL packages, we’d interact with the RH7 CDN using certificate-based authorization; for CentOS packages, we’d point to a publicly open mirror (from kernel.org). At snapshot creation time, we’d rely on open-source tools like createrepo and repomanage to build an OS snapshot. Once an OS snapshot is built, it’s replicated over a highly distributed yum infrastructure, and our internal server lifecycle-management tooling refers to this distributed data when triggering in-place or full reimages of physical servers.
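A rough sketch of what that build step can look like with those tools (paths and snapshot names are placeholders, and the real tooling wraps several more steps):

```
# Keep only the newest version of each package from the nightly mirror
# and copy it into the new snapshot directory.
repomanage --new --keep=1 /mirror/rhel7 \
  | xargs -I{} cp {} /snapshots/rhel7-test-2023-01/Packages/

# Generate yum repodata for the snapshot so it can be served and replicated.
createrepo /snapshots/rhel7-test-2023-01
```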

The ability to build snapshots was also exposed via an ACL-enforced endpoint in the API. This endpoint accepts the necessary metadata from authorized users and relays that data to the backend logic, where it is referenced when creating a new test snapshot. This data flow is crucial because we build test snapshots for different distros. For example, to build test snapshots for RHEL7 and RHEL8, we use the RPMs fetched from upstream verbatim, while we use a separate methodology for creating CentOS7 snapshots. While it’s similar to how we perform these steps for the RH* distros, the stark difference comes from the kernel RPMs embedded into CentOS7 snapshots. 

Once a snapshot is built and replicated, the next important step for OSSA is to validate that we can install the newly minted test OS snapshot on a server. Aside from just the OS install, we validate that all of our internal tooling bootstraps the server as expected. Pre-OSSA, a dedicated engineer took on this responsibility, which was a time sink considering the frequency with which these validations had to be done. This is especially true under the current engagement model, where multiple experimental test snapshots can be built for internal validation. We saw an opportunity to include automated boot-testing of OS snapshots in OSSA and decided to leverage an existing product, MaaS (Metal as a Service), a self-service API that allows reimaging of servers, to trigger these reimages.

Boot-Testing workflow

OS snapshot validation workflow

OS snapshot creation workflow

Before diving into the overall workflow, it’s good to understand how different partner teams pitch in to validate an OS snapshot. The OSSA team creates a new test OS snapshot and boot-tests it. Then the Maize team performs application testing on the test snapshot created by OSSA, and InfoSec performs vulnerability scanning of the test snapshot. Next, the hardware and capacity engineering (HCE) team performs hardware and regression testing on the test snapshot across multiple hardware SKUs, and the PSSE team owns the promotion of a test snapshot and the deprecation of a release snapshot. Lastly, the OS Upgrade Automation team submits imaging requests with the test snapshot under validation.

The Maize, InfoSec, and HCE teams do their testing in parallel and report back to OSSA with the result. A successful validation is relayed back as a “nomination,” and a failure is reported as a “deprecation.” A few members of the PSSE organization have been given access to promote a test snapshot, since promotion makes the test snapshot generally available and builds a corresponding release snapshot that is open for everyone to install on their hosts (as all the necessary validation has been conducted). PSSE also holds the key for deprecating previously released OS snapshots. We could deprecate such OS snapshots if a new CVE is found or unforeseen behavior is observed during runtime.

The following figure describes the general workflow for OSSA’s interactions with different external teams for managing the lifecycle of test and release OS snapshots:

Monitoring

With OSSA, we saw an opportunity to improve the monitoring of snapshots and the RPM-fetch process. Up to this point, there wasn’t a reliable way to do this, as there was no source of truth that could disambiguate and/or spot issues. From the perspective of OS snapshot monitoring, we had to design changes for OSSA to track missing RPM(s), missing or modified metadata, or incorrect checksum(s).

Monitoring these items not only plays a crucial role in enforcing the immutability of OS snapshots, but also helps ensure that what was vetted by partner teams remains the same throughout a snapshot’s lifetime. To track any modifications, we started computing a digest at snapshot creation time. This digest (JSON) tracks the RPMs in a snapshot along with their SHA-256 checksums. The file is distributed with the OS snapshot and uploaded to an Ambry container, so that any local modification of these files can be verified against the copy in the Ambry blobstore.
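A minimal sketch of the digest idea (hypothetical class and paths; the real digest format and upload path are OSSA-specific): walk the snapshot directory, hash every RPM, and emit a file-name-to-SHA-256 map that can later be compared against the copy stored in the blobstore.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.Map;
import java.util.TreeMap;

/** Toy sketch: build a digest of RPM file name -> SHA-256 hex for one snapshot. */
public class SnapshotDigest {

    static String sha256(Path file) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buf = new byte[8192];
            int read;
            while ((read = in.read(buf)) != -1) {
                md.update(buf, 0, read);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    /** Walk a snapshot directory and hash every RPM it contains. */
    static Map<String, String> digest(Path snapshotDir) throws IOException {
        Map<String, String> result = new TreeMap<>();
        try (var files = Files.walk(snapshotDir)) {
            files.filter(p -> p.toString().endsWith(".rpm")).forEach(p -> {
                try {
                    result.put(p.getFileName().toString(), sha256(p));
                } catch (Exception e) {
                    throw new IllegalStateException("Failed to hash " + p, e);
                }
            });
        }
        return result; // serialized to JSON and shipped alongside the snapshot
    }
}
```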

During this effort, we ran into a few escalations because newly built snapshots were missing the latest versions of certain RPMs. This was yet another opportunity for improvement! We added logic to validate whether the last upstream fetch actually retrieved the newly available RPMs. If not, the relevant members are notified so the underlying issue can be triaged. A scheduled task drives this check daily and notifies engineers of discrepancies.

As the data contained in and reported by OSSA directly impacts various production services, we also implemented monitoring for possible tampering with items stored in the database. We added an extra column per table, which contains an HMAC-SHA256 of the other columns and is recomputed whenever any data in a row is modified. A scheduled task, at a regular cadence, iterates over these columns and matches the stored data against the value computed during execution. If there is a mismatch, it auto-disables those OS snapshots from the list of valid snapshots and notifies the developers about the data-integrity violation, so any data modification can be isolated. Because we use HMAC with the secret key persisted in a managed keystore, it’s highly improbable that valid values could be recomputed after tampering with the dataset.
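A minimal sketch of that row-integrity check, assuming a simple table of string columns (class and method names are hypothetical; the real secret key, of course, stays in the managed keystore):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

/** Toy sketch: compute and verify an HMAC-SHA256 over a row's other columns. */
public class RowIntegrity {

    private final SecretKeySpec key;

    public RowIntegrity(byte[] secret) {
        // Illustration only: the real secret is loaded from a managed keystore, not passed around.
        this.key = new SecretKeySpec(secret, "HmacSHA256");
    }

    /** HMAC of the row's columns, stored in the extra integrity column. */
    public String hmacOfRow(String... columns) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        for (String column : columns) {
            mac.update(column.getBytes(StandardCharsets.UTF_8));
            mac.update((byte) 0); // separator so column boundaries stay unambiguous
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : mac.doFinal()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    /** Recomputed on a schedule; a mismatch flags the row as tampered. */
    public boolean rowIsIntact(String storedHmac, String... columns) throws Exception {
        return storedHmac.equals(hmacOfRow(columns));
    }
}
```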

Purging redundant/expired snapshots

Before OSSA, we kept creating new OS snapshots, and over time their cumulative size grew to 3TB. Many of these snapshots continued to persist because there was no clear path for retiring OS snapshots. With OSSA in place and an OSSA-defined workflow for snapshot deprecation, we enabled the purging of older snapshots that are either past their expiration or have been deprecated for a while. In either of these cases, OSSA steps in and purges snapshots that are lingering around and adding no value. In the first iteration of this process, OSSA cleaned up ~500GB of redundant data, and we aim to remove more by fine-tuning our retention expectations. This is a nudge toward operational excellence: controlling the data we continue to own and the network cost of replicating it.

Conclusion and future work

Before OSSA, information about snapshots was tied to another source of truth, which limited the number of OS snapshots that could be concurrently supported per distribution. This limitation was particularly hindering because the number of concurrent snapshots for testing and general availability could be more than one per distro, considering that we were building snapshots at a much higher frequency. Removing the reliance on that SoT and depending solely on OSSA for retrieving OS snapshot metadata removed the hard dependency, and with OSSA, we can have as many snapshots of any type per distribution as we need.

OSSA has emerged as a source of truth for anything and everything related to OS snapshots within LinkedIn. A product that emerged from a need to improve visibility and operability has brought itself to a point where multiple critical services depend on data from OSSA being served on demand. It also enables (authorized) users to trigger OS snapshot builds without explicit intervention from our team and organization. A plethora of checks and guardrails were added to OSSA to ensure that internal processes leave audit trails and return actionable HTTP responses, which makes interaction with an inherently complex ecosystem reliable and further reduces dependency on tribal knowledge for driving this process end-to-end. While our significant deliverables are live, we are still seeking to improve the product to ensure that it continues scaling with requirements. Some of the near-term requirements are to add support for partially validating and conditionally releasing snapshots, explore templatization of the snapshot creation process for different distros, containerize the upstream sync, and improve the ecosystem overall.

Acknowledgements

OSSA has become the product it is today because of continuous feedback and guidance from many engineering leaders, technical program managers, and engineering managers who helped mold design considerations and deliverables. Shoutout to Steve Fantin for driving the work to enable repodB monitoring and interfacing OSSA with Ambry, and to Jayita Roy and Khushboo Kuchhal for scaling the service to a new data center and adding a dedicated staging environment. Many thanks to Cynthia Arriaga and Carlton Giles for keeping our deliverables under close watch and helping us unblock issues by effectively liaising with external teams. Thanks, Franck Martin, for supporting this initiative and helping us roadshow this product into a viable product at the heart of multiple design and development endeavors across LinkedIn. Thanks, Nishan Weragama, Adam Debus, and Sean Patrick, for providing valuable feedback during the initial design and helping us stay aligned with the Fleet Compliance initiative. Many thanks to Nitin Sonawane and Milind Talekar for supporting this effort.

Building LinkedIn’s Skills Graph to Power a Skills-First World

Co-authors: Sofus Macskássy, Yi Pan, Ji Yan, Yanen Li, Di Zhou, Shiyong Lin

As industries rapidly evolve, so do the skills necessary for success. Skill sets for jobs globally have changed by 25% since 2015 and this number is expected to double by 2027. Yet, we’ve long relied on insufficient and unequal signals when evaluating talent and predicting success – who you know, where you went to school, or who your last employer was. If we look at the labor market instead through the lens of skills – the skills you have and the skills a role or industry demands – we can create a transparent and fair job matching process that drives better outcomes for employers and employees. 

This new reality requires a common understanding of skills, backed by better data. For nearly a decade, our Economic Graph has helped leaders benchmark and compare labor markets and economies across the world. A critical element of this analysis is the insight provided by LinkedIn’s Skills Graph, which creates a common language around skills to help us all better understand the skills that power the global workforce. The Skills Graph does this by dynamically mapping the relationships between 39K skills, 875M people, 59M companies, and other organizations globally. 

It also drives relevance and matching across LinkedIn – helping learners find content more relevant to their career path; helping job seekers find jobs that are a good fit; and helping recruiters find the highest quality candidates. For example, these relationships between skills mean we can detect that “cost management” on a job seeker’s profile is relevant to a job posting that lists “project budgeting” as a required skill.

Building the LinkedIn Skills Graph

At the heart of our Skills Graph lies our skills taxonomy. The taxonomy is a curated list of unique skills and their intertwined relationships, each with detailed information about those skills. It’s built on a deep understanding of how skills power professional journeys, including what skills are required in a job, what skills a member has, and how members move from one position to the next. 

Today, our taxonomy consists of over 39,000 skills spanning 26 languages, over 374,000 aliases (different ways to refer to the same skill – e.g., “data analysis” and “data analytics”), and more than 200,000 links between skills. Even more important than the volume of data, the key to unlocking the power of skills lies in the structure and relationships between the skills. To create a stronger network of connected skills in our taxonomy, we utilize a framework we call “Structured Skills.” This framework increases our understanding of every skill in our database by mapping the relationships it has to other skills around it, and creates richer, more accurate skill-driven experiences for our members and customers. For example:

  • If a member knows about Artificial Neural Networks, the member knows something about Deep Learning, which means the member knows something about Machine Learning.

  • If a job requires Supply Chain Engineering, having a skill in Supply Chain Management or Industrial Engineering is definitely also relevant (see the sketch below).
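A toy sketch of how such “implies” relationships can be traversed (hypothetical data structures and skill names; the real taxonomy and inference are far richer): starting from a member’s explicit skills, walk the narrower-to-broader edges and collect every skill that is implied.

```java
import java.util.ArrayDeque;
import java.util.Collection;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/** Toy sketch: expand explicit skills along "narrower -> broader" taxonomy edges. */
public class SkillExpansion {

    // Each skill points to the broader skills it implies some knowledge of.
    private final Map<String, List<String>> broaderThan = Map.of(
            "Artificial Neural Networks", List.of("Deep Learning"),
            "Deep Learning", List.of("Machine Learning"),
            "Supply Chain Engineering", List.of("Supply Chain Management", "Industrial Engineering"));

    /** Returns the explicit skills plus every skill they transitively imply. */
    public Set<String> expand(Collection<String> explicitSkills) {
        Set<String> expanded = new LinkedHashSet<>(explicitSkills);
        Deque<String> queue = new ArrayDeque<>(explicitSkills);
        while (!queue.isEmpty()) {
            for (String broader : broaderThan.getOrDefault(queue.poll(), List.of())) {
                if (expanded.add(broader)) {
                    queue.add(broader);
                }
            }
        }
        return expanded;
    }

    public static void main(String[] args) {
        // Prints: [Artificial Neural Networks, Deep Learning, Machine Learning]
        System.out.println(new SkillExpansion().expand(List.of("Artificial Neural Networks")));
    }
}
```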

Creating meaningful and accurate relationships between skill sets is critical to getting the most out of our Structured Skills. To do this, our machine learning and artificial intelligence systems comb through massive amounts of data and suggest new skills and relations between them. As our Skills Graph continues to grow and learn with AI, we are committed to maintaining the high quality of the data and connections found in our taxonomy. We do this with the help of trained taxonomists on our team, who manually review our skills data and ensure that we can verify its integrity and relevancy.

Structured skills consists of meaningful relationships between skills that empower deep reasoning to match members to relevant content such as jobs, learning material, and feed posts

But, building the taxonomy and Structured Skills is meaningless without connecting these to the jobs and members on our platform. Together, the Structured Skills and mapping to our members and jobs make up our Skills Graph and both are needed to unlock the full potential of a skill-based job market.

Structured skills enrich the set of skills for both members and jobs to ensure we can find all the relevant jobs for a member. We show the skill overlap so that members can see which of their skills are a match and also potential skill gaps that they might want to address for their own career growth

Leveraging Machine Learning to map skills to members and jobs

Although millions of LinkedIn members have added skills to their profile, many have not added their most relevant skills to their skills sections or kept their skills section up to date. Instead, they list relevant skills in their summary sections, within the job experience descriptions in their profiles, or on the resumes they submit. On the other hand, many jobs on LinkedIn don’t comprehensively describe what skills are needed. Some listings come through an online job posting that a recruiter has submitted, while others are ingested from our customers’ websites. In these scenarios where skills are not explicitly provided, it’s critical to pull skills data from the job descriptions, summaries, and more, to create a tool that drives reliable insights.

As you can imagine, this process requires processing a lot of text. So, we have built machine learning models that leverage natural language understanding, deep learning, and information extraction technologies. To help train these models, our human labelers use AI to connect text found across jobs, profiles, and learning courses, to specific skills in our taxonomy. Our system then learns to recognize different ways to refer to the same type of skill. Combined with natural language processing, we extract skills from many different types of text – with a high degree of confidence – to make sure we have high coverage and high precision when we map skills to our members and job posts.

We also leverage various clustering and machine learning algorithms to identify the core skills related to a given job or function. We do this by applying these tools to all member histories and all job descriptions on our platform, which identifies the skills that are likely associated with a job post or a member’s job experience. These techniques, together with Structured Skills, create a holistic picture of the skills a member has and the skills needed to do a job.

When hirers create a job post on the LinkedIn platform, we use machine learning and Structured skills to suggest explicit skills that we can tag the post with to increase discoverability

These models are designed to continuously improve and learn over time based on engagement from members on the LinkedIn platform, job seekers, hirers, and learners. For example, when a hirer posts a new job on our platform and types in the job description, our machine learning model automatically suggests skills associated with that job posting. The hirer can then refine the selection of skills that best represent the qualifications for the job by manually removing or adding the suggested skills.

Looking forward

Beyond streamlining the hiring process, understanding members’ skills allows us to surface more relevant posts in their feed, suggest people they should connect with, and suggest companies to follow. It also helps sales and marketing professionals on LinkedIn be more effective by using skills for ads targeting, and provides insights to our sales and marketing customers by sharing details on the skill sets of those who engage with their content. As our Skills Graph continues to evolve in parallel with the global workforce, it will only become smarter and deliver better outcomes for hirers, learners, job seekers, customers, and members.

Realizing a more equitable and efficient future of work will rely on building a deeper understanding of peoples’ abilities and potential. To keep up, some companies are already utilizing skills to identify qualified candidates – more than 40% of hirers on LinkedIn explicitly use skills data to fill their roles. 

As our CEO Ryan Roslansky stated at LinkedIn’s Talent Connect event this year, “We can build a world where everyone has access to opportunity not because of where they were born, who they know, or where they went to school, but because of their actual skills and ability.” Our Skills Graph will continue to be a critical part of how we help make a skills-based labor market a reality. We’re excited to share updates as our work continues on this journey.
