Migration madness: How to navigate the chaos of large cross-team initiatives towards a common goal

Co-authors: Lily Wittle and Cathy Ji

Introduction

The Gateway-as-a-Platform (GaaP) team at LinkedIn builds infrastructure that supports secure third-party integrations for our customers. LinkedIn features that require sending or receiving data from external sources, such as importing email contacts and sharing LinkedIn posts as tweets, use our infrastructure.

In 2020, we sought to deprecate our legacy deployment backend, a change that impacted every third-party integration at the company and spanned use cases across dozens of engineering teams. Migrating 600+ use cases to our new deployment backend required technical effort shared among 100+ individual engineers. Driving a cross-functional initiative (called a Horizontal Initiative, or HI) at this scale was a huge planning and coordination effort. Here we share our learnings, tips, and tricks for successfully executing distributed-effort, cross-functional initiatives.

Chaos-navigating strategies

Large, cross-team projects can seem daunting. They require hours of work from dozens of different individuals, and there may be dependencies between different individuals’ work. Driving these projects is a huge feat of coordination, even when everything runs smoothly! Plus, it’s not unusual for unexpected issues to pop up along the way, requiring additional work or adjusted timelines. All of these factors often make large projects hectic, but planning, documentation, and communication can bring order to the chaos. These are some of the strategies we used in our migration to help it run as smoothly as possible. 

Get clients aligned with the benefits of the new system

Large migrations are notoriously a lot of work, and generally, no one likes doing a lot of work if they don’t see the need. One of our first steps was to get clients on board with the benefits of the new system, giving a purpose to the migration work. We framed these benefits in terms of reducing or eliminating pain points that clients had previously experienced with our platform. For example, the new system would give them the ability to search, check out, and deploy their integrations using standard LinkedIn tooling, instead of using custom mechanisms that required a learning curve. In a few instances, these pain points actually occurred during the migration process, which gave us an opportunity to highlight the benefits that clients would experience once the migration was complete. We found that once clients understood the motivation behind the migration, they were much more willing to work through the process. Plus, some even got excited about the benefits they would soon receive from the new system!

Write clear step-by-step instructions

After documenting all of the necessary steps to perform this migration, we used these instructions to migrate a pilot use case ourselves, following only the exact written instructions without considering any of our prior knowledge. This helped ensure that our instructions were sufficiently detailed and easy for anyone to follow. When working across multiple teams, your collaborators don’t have as much knowledge as you do about your specific domain. Documentation explaining the full context makes it easier for them to ramp up, understand what you’re asking of them, and complete the work on their own. As the first few clients performed their migrations, we asked them for feedback on how we could improve the instructions, and ended up adding even more details along with copy/pastable commands, examples, and an FAQ. Eventually, our instructions were clear enough so that many engineers were able to complete the migration without any help from us at all. Documenting every little detail can be painstaking, but enabling self-service pays off in the long run. 

Provide automated tooling wherever possible

For our migration, each client team had to set up a new directory with a specific structure, and then copy several files into a subdirectory for each of their individual use cases. These code changes were almost identical for all clients, and tedious to accomplish with “mkdir” and “cp.” So, we created Python scripts that clients could use to perform this setup/copying. For our larger clients who had 10+ use cases to migrate, this cut 1-2 hours of work down to < 5 minutes. 
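
As an illustration, a minimal sketch of that kind of setup script is shown below. The directory layout, template location, and command-line interface here are hypothetical stand-ins, not the scripts we actually shipped.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of a migration setup script: creates the new
directory layout and copies shared template files for each use case."""
import shutil
import sys
from pathlib import Path

TEMPLATE_DIR = Path("migration_templates")  # hypothetical source of shared files


def set_up_use_case(root: Path, use_case: str) -> None:
    """Create <root>/<use_case>/ and copy every template file into it."""
    target = root / use_case
    target.mkdir(parents=True, exist_ok=True)
    for template in TEMPLATE_DIR.iterdir():
        if template.is_file():
            shutil.copy2(template, target / template.name)
    print(f"Set up {target}")


if __name__ == "__main__":
    # Usage: setup_migration.py <new_root_dir> <use_case> [<use_case> ...]
    new_root, use_cases = Path(sys.argv[1]), sys.argv[2:]
    for name in use_cases:
        set_up_use_case(new_root, name)
```

A client with a dozen use cases could then run the script once per use case (or in a loop) instead of repeating the same mkdir/cp sequence by hand.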

In any distributed project, there are often similar tasks that need to be done by many different people and/or tasks that are trivial but time-consuming. Finding all possible opportunities to leverage automation can save a huge amount of cumulative time. For example, saving one hour per engineer in our migration reduced the overall project timeline by three working weeks. Minimizing time spent on monotonous tasks is also good for morale; most engineers would much rather spend their time solving interesting problems than drudging through repetitive labor! 

Make it easy for clients to find help

Having been on the other side of big projects like this, we know it can sometimes be difficult to chase down answers when encountering an issue. We aimed to minimize friction in our clients’ experience by making it easy for them to find support. One support method was a Slack channel specifically dedicated to questions and issues that came up during the migration process. As the project deadline neared, we also set up “workstation” days where clients could sign up for 1:1 support. These were especially useful for live debugging sessions, in which we could quickly work through any issues that came up and ensure the clients left the workstation with functioning code. Dedicated support time and prompt replies help issues get resolved faster and with less frustration. 

Utilize decision frameworks

At LinkedIn, organizationally complex, high-impact, and large-scale projects operate within the framework of HIs. The HI framework ensures such projects 1) have proven importance and urgency, 2) are built on a high-quality technical solution with a clear and specific execution plan, and 3) have avenues for building alignment with partner teams on roles, responsibilities, priorities, and timelines. By modeling our project within the HI framework during our planning phase, we were able to obtain commitment and buy-in from partner teams that increased confidence in our project’s successful completion.

Even if such a framework does not exist formally in your organization, prioritizing communication and alignment with partner teams at project onset sets cross-org projects up for success before execution even begins.

Track project progress

Effective tracking can make the difference between a project that finishes on time, and one that never quite makes it to the end. The upfront cost for setting up JIRA and experimenting with different project management methodologies paid off in the long run with increased focus and operating efficiency. Regular status meetings, clear categorization of issues, and utilization of workflows and dashboards on JIRA helped us monitor, visualize, and quantify progress across the 600+ use cases affected by the migration.

When running an initiative with many dependencies, proper tracking helps ensure nothing slips through the cracks. Issues and roadblocks are bound to appear in any project, but effective project management surfaces risks early on, reducing the probability of setbacks significantly impacting the overall project goal. Also, by maintaining consistent and transparent communication with partner teams and stakeholders throughout the process, we built trust and strengthened working relationships with colleagues that continued to pay off well after this particular project was wrapped. 

Our Approach to Research and A/B Testing

We are constantly striving to improve the experience on LinkedIn for our members and customers, with research and experimentation, such as A/B Testing, playing a key role in that work. 

Nearly a decade ago, I discussed the importance of these techniques in our journey to create economic opportunity for every member of the global workforce. Today we have a strong, principled approach to how we design and run A/B tests on everything from UI designs to AI algorithms, and from feature launches to bug fixes. As our platform continues to grow and evolve, these techniques have become even more essential for delivering on that vision. More specifically, we use these techniques to:

  • Deliver the best experiences to our community by leveraging innovation at scale. Through testing and measuring, we continuously evolve our products to add more value, making our platform safer, more engaging, and more enjoyable with every interaction. We use various methods to evolve our products and services, from member surveys to in-depth offline data analysis to online A/B tests when we have a new feature that we think will benefit our members. For example, we recently tested new ways to help members discover relevant news, conversations, and voices from people and organizations they might not otherwise know. 

  • Avoid guessing; instead test, measure and test again. We don’t assume that we inherently know what is best for our members, as their needs evolve over time. By testing and constantly measuring, we seek feedback and insights to help guide us in the right direction. If a product feature we build doesn’t deliver the impact we intended, we make adjustments.

  • Move quickly and thoughtfully. Our well-defined process includes design evaluations, committee reviews, and quality checks aimed at preventing unintended consequences. We also use observational causal studies to analyze historical data and discover causal patterns whenever applicable. The development of our T-REX platform has also standardized and improved our A/B testing processes. 

Throughout all of this, we believe in the importance of sharing knowledge. We regularly share insights from our tests with the broader engineering community through papers, open source, and academic partnerships.

We are proud to have developed a culture at LinkedIn where research and experimentation are celebrated. Not only does this work help us stay innovative, it reminds us that everything we do is in service of our community and aims to create more economic opportunities for our members around the world.

Operating System Snapshot Automation

Co-authors: Rohit Jamuar, Tianxin Zhou

Introduction

LinkedIn has a large set of physical servers geographically spread across several locations. Every application is hosted on physical servers and is distributed and managed across these hosts. With a sizable footprint of servers in its data centers, LinkedIn is responsible for ensuring that these hosts always run an operating system (OS) version deemed the “latest and greatest.” The Production Systems Software Engineering (PSSE) organization within LinkedIn has taken on the responsibility of creating timely OS snapshots that are installed on these servers regularly. This blog discusses how this process was implemented and the impetus behind the OS Snapshot Automation (OSSA) project.

Historically, the constraints around building snapshots and rolling them out across our server fleet were less rigid. At LinkedIn, we started pursuing the creation and release of OS snapshots at a defined cadence, since it’s ideal for servers to upgrade to the latest snapshots regularly and for older snapshots (with potential security vulnerabilities) to be retired. With this vision in mind, we wanted newly built OS snapshots to be validated once per month with due process and released at a tightly managed tempo. The main incentive behind creating a dedicated product to conduct these steps automatically is rooted in improving overall operational excellence: building snapshots automatically at a regular cadence allows timely validation and release to the fleet, which is necessary to give customers confidence that their data and private information cannot be exploited through OS-level vulnerabilities on our servers.

Motivation

Pre-OSSA, the OS snapshot process was a manual one, closely tied to a handful of one-off shell scripts, and the entire ecosystem lived on a single server. Moreover, metadata about snapshots and their respective lifecycles was stored in an internal wiki document, and there was no way to reference this data programmatically. The existing solution had challenges with maintainability, scalability, and high availability. Another big challenge for this ecosystem was how snapshotting itself was conducted: it required human effort to create, promote, and deprecate a release. The infrastructure and processes for getting a snapshot created and boot-tested could, at best, be described as a stopgap solution, because everything was manually conducted and needed dedicated full-time engineer (FTE) time. OSSA was envisioned with the requirement of making this ecosystem highly available, well-monitored, and programmatically configurable by its consumers. Aside from improving the ecosystem’s availability, we also wanted to coalesce the different one-off scripts into a multiproduct to improve the code base’s craftsmanship and maintainability.

Improving data accessibility via RESTful API

Initially, processes and information were tied to asynchronous communication over Slack and Jira tickets, which lacked visibility and made tracking necessary information cumbersome. One way we hoped to solve the visibility problem was by disseminating this information via a RESTful API, which was the first important capability OSSA delivered. This bridged the gap by pulling together information strewn across internal wiki pages and Jira tickets and exposing OS snapshot data (release_name, kernel_version, base_release, and expiration_date) via an HTTP GET call. We also wanted partner teams that validate OS snapshots to interact with OSSA directly, so they could relay their validation results without relying on mediums that aren’t easily queryable. With this support enabled, anyone interested can make a simple HTTP GET call to see which teams have validated an OS snapshot and the status of each validation. Additionally, we enabled support for building OS snapshots and managing their lifecycle via POST calls to the API. Enabling these functionalities over an API made integration with external teams’ products feasible and allowed us to keep working towards a solution where all the disparate workflows can be triggered via a one-touch model.
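
To illustrate the idea, fetching snapshot metadata from such an API could look like the sketch below; the endpoint URL, query parameter, and response shape are hypothetical stand-ins rather than OSSA's actual interface.

```python
import requests

# Hypothetical OSSA metadata endpoint; the real URL and schema are internal.
OSSA_API = "https://ossa.example.internal/api/v1/snapshots"

resp = requests.get(OSSA_API, params={"distro": "rhel7"}, timeout=10)
resp.raise_for_status()

for snapshot in resp.json():
    # Fields mirror the metadata mentioned above.
    print(snapshot["release_name"],
          snapshot["kernel_version"],
          snapshot["base_release"],
          snapshot["expiration_date"])
```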

When talking internally and with partner teams, it was vital that we could authenticate and authorize incoming POST requests to OSSA. We decided to rely on DataVault’s token-based authorization service, as DataVault has a well-established ecosystem that drives the majority of authorization requests at LinkedIn and fit our expectations. We created custom ACLs, with access rights per validating team, and ensured that these ACLs are enforced by DataVault when an external user submits a POST request with a token; the individual or automation retrieves that token after authenticating to the DataVault token service.
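
A validating team relaying a result might then issue an authorized POST along the lines of the sketch below; the endpoint path, header usage, and payload fields are hypothetical, and the token itself would come out of band from the DataVault token service.

```python
import os
import requests

# Token obtained out of band from the DataVault token service
# (here read from a hypothetical environment variable).
token = os.environ["DATAVAULT_TOKEN"]

resp = requests.post(
    "https://ossa.example.internal/api/v1/snapshots/rhel7-test/validations",
    headers={"Authorization": f"Bearer {token}"},
    json={"team": "hce", "result": "nomination"},  # "nomination" or "deprecation"
    timeout=10,
)
resp.raise_for_status()
```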

High availability of API

With this design in place, the next step for us was to ensure that this API remains highly available, as we already had several stakeholders depending on the metadata provided by this API.

(Figure: OSSA API high-availability deployment across data centers; DC = datacenter)

We decided to have two nodes per site and put all the services running on these nodes behind ATS. Our partner teams expected traffic to be routed from outside the environment where these servers sit, and without ATS, interacting parties would have had to open network ACLs for each interaction. With the service spread geographically, we had to ensure that OSSA’s API reports the same dataset from every site, so we decided to replicate data between the different data centers using GoldenGate replication.

Improved visibility into overall processing

While the API paved the way for managing the OS snapshot lifecycle, OSSA also enabled more granular visibility into the overall OS snapshot process by exposing related data via sources like Iris and an internal event bus. We use Iris-based notifications to learn more about the state of an OS snapshot during its build, testing, and monitoring. We also emit events to the event bus for anyone to consume via an intuitive UI, so external teams are not tied to interacting with the API for this information.

Now that we have discussed OSSA’s API and HA design in detail, we will dive into what an OS snapshot consists of, how we have been building it, and the essential validation OSSA performs before creating an event.

What is an OS snapshot?

Before we dive into how an OS snapshot is built, it’s good to understand what one is. An OS snapshot is a collection of boot files (initrd, vmlinuz), RPMs, and some extra metadata. The “snapshot” in “OS snapshot” comes from the fact that we take a proverbial snapshot of all of the latest locally available RPMs and bundle them together into an entity that is meant to be immutable by nature. This is deliberate, because being able to reliably install the same RPMs across different test environments helps us isolate issues.

How do we build an OS snapshot?

Our team inherited the creation of OS snapshots from a partner team within PSSE. When we took over this effort, nightly replication of RPMs from upstream sources was already configured using an open-source tool called mrepo. For RHEL packages, we’d interact with the RH7 CDN using certificate-based authorization; for CentOS packages, we’d point to a publicly open mirror (from kernel.org). At snapshot-creation time, we’d rely on open-source tools like createrepo and repomanage to build an OS snapshot. Once an OS snapshot is built, it’s replicated over a highly distributed yum infrastructure, and our internal server lifecycle-management tooling refers to this distributed data when triggering in-place or full reimages of physical servers.
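
At a very high level, the snapshot-assembly step can be pictured as the sketch below, which shells out to the same open-source tools; the paths, flat directory layout, and overall flow are simplified assumptions for illustration rather than OSSA's actual implementation.

```python
import shutil
import subprocess
from pathlib import Path

MIRROR = Path("/data/mirror/rhel7")               # nightly mrepo-synced RPMs (hypothetical path)
SNAPSHOT = Path("/data/snapshots/rhel7-snapshot")  # new immutable snapshot (hypothetical path)

# 1. Ask repomanage (from yum-utils) which RPMs in the mirror are outdated.
old = subprocess.run(
    ["repomanage", "--old", str(MIRROR)],
    check=True, capture_output=True, text=True,
).stdout.split()
outdated = {Path(p).name for p in old}

# 2. Copy only the latest RPMs into the snapshot directory.
SNAPSHOT.mkdir(parents=True, exist_ok=True)
for rpm in MIRROR.glob("*.rpm"):
    if rpm.name not in outdated:
        shutil.copy2(rpm, SNAPSHOT / rpm.name)

# 3. Generate yum repository metadata for the snapshot with createrepo.
subprocess.run(["createrepo", str(SNAPSHOT)], check=True)
```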

The ability to build snapshots was also exposed through an ACL-enforced endpoint in the API. This endpoint accepts the necessary metadata from authorized users and relays that data to the backend logic, which references it when creating a new test snapshot. This data flow is crucial because we build test snapshots for different distros. For example, to build test snapshots for RHEL7 and RHEL8, we use the RPMs fetched from upstream verbatim, while we use a separate methodology for creating CentOS7 snapshots. While it’s similar to how we perform these steps for the RH* distros, the stark difference comes from the kernel RPMs embedded into CentOS7 snapshots.

Once a snapshot is built and replicated, the next important step for OSSA is to validate whether we can install the newly minted test OS snapshot on a server. Aside from the OS install itself, we validate whether all of our internal tooling bootstraps the server as expected. Pre-OSSA, a dedicated engineer owned this responsibility, which was a time sink considering the frequency with which these validations had to be done. This is especially true under the current engagement model, where multiple experimental test snapshots can be built for internal validation. We saw an opportunity to include automated boot-testing of OS snapshots in OSSA and decided to leverage an existing product, MaaS (Metal as a Service), a self-service API for triggering server reimages.

(Figure: Boot-testing workflow)

(Figure: OS snapshot validation workflow)

(Figure: OS snapshot creation workflow)

Before diving into the overall workflow, it’d be good to understand how the different partner teams pitch in to validate an OS snapshot. The OSSA team creates a new test OS snapshot and boot-tests it. Then the Maize team performs application testing on the test snapshot created by OSSA, and InfoSec performs vulnerability scanning of the test snapshot. Next, the hardware and capacity engineering (HCE) team performs hardware and regression testing on the test snapshot across multiple hardware SKUs, and the PSSE team owns the promotion of a test snapshot and the deprecation of a release snapshot. Lastly, the OS Upgrade Automation team submits imaging requests with the test snapshot under validation.

The Maize, InfoSec, and HCE teams do their testing in parallel and report back to OSSA with the results. A successful validation is relayed back as a “nomination,” and a failure is reported as a “deprecation.” A few members of the PSSE organization have been given access to promote a test snapshot; promotion makes the test snapshot generally available and builds a corresponding release snapshot that anyone can install on their hosts (since all the necessary validation has been conducted). PSSE also holds the key for deprecating previously released OS snapshots; we may deprecate such snapshots if a new CVE is found or unforeseen behavior is observed at runtime.

The following figure describes the general workflow for OSSA’s interactions with different external teams for managing the lifecycle of test and release OS snapshots:

(Figure: OSSA’s interactions with external teams for managing the lifecycle of test and release OS snapshots)

Monitoring

With OSSA, we saw an opportunity to improve the monitoring of snapshots and of the RPM-fetch process. Until this point, there wasn’t a reliable way to do this, as there was no source of truth that could disambiguate and/or spot issues. From the perspective of OS snapshot monitoring, we had to design changes for OSSA to track missing RPM(s), missing or modified metadata, and incorrect checksum(s).

Monitoring these items not only plays a crucial role in enforcing the immutability of OS snapshots, but also helps ensure that what partner teams vetted remains the same throughout a snapshot’s lifetime. To track any modifications, we started computing a digest at snapshot-creation time. This digest (a JSON file) tracks the RPMs in a snapshot along with their SHA256 checksums. The file is distributed with the OS snapshot and uploaded to an Ambry container, so that a local modification of the file can be detected by verifying it against the copy in the Ambry blobstore.
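
A minimal version of that digest computation might look like the following sketch; the snapshot path, file name, and JSON layout are illustrative assumptions.

```python
import hashlib
import json
from pathlib import Path


def build_digest(snapshot_dir: str) -> dict:
    """Map each RPM in the snapshot to its SHA256 checksum."""
    digest = {}
    for rpm in sorted(Path(snapshot_dir).glob("*.rpm")):
        sha = hashlib.sha256()
        with rpm.open("rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):
                sha.update(chunk)
        digest[rpm.name] = sha.hexdigest()
    return digest


if __name__ == "__main__":
    snapshot = "/data/snapshots/rhel7-snapshot"  # hypothetical path
    # The resulting JSON ships with the snapshot and is also uploaded to the
    # blobstore, so a locally modified copy can be verified against that copy.
    Path(snapshot, "digest.json").write_text(
        json.dumps(build_digest(snapshot), indent=2))
```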

During this effort, we ran into a few escalations because newly built snapshots lacked the latest versions of certain RPMs. This was yet another opportunity for improvement! We added validation logic to check whether the last upstream fetch had retrieved the newly available RPMs. A scheduled task drives this check daily and notifies the relevant engineers of any discrepancies so the underlying issue can be triaged.

Because the data contained in and reported by OSSA directly impacts various production services, we also implemented monitoring for possible tampering with items stored in the database. We added an extra column per table that contains an HMAC-SHA256 of the other columns, recomputed whenever any data in a row is modified. A scheduled task, at a regular cadence, iterates over these columns and compares the stored digests with ones computed during execution. If there is a mismatch, it auto-disables those OS snapshots, removing them from the list of valid snapshots, and notifies the developers about the data-integrity violation. Any data modification can be isolated by this means. Because we use HMAC with the key persisted in a managed keystore, it’s highly improbable that a valid digest could be recomputed after tampering with the dataset.
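
Conceptually, the per-row integrity check works like the sketch below; the column names and key handling are simplified assumptions, and in production the key lives in a managed keystore rather than in code.

```python
import hashlib
import hmac


def row_hmac(key: bytes, row: dict) -> str:
    """HMAC-SHA256 over a row's columns in a fixed order (excluding the HMAC column)."""
    material = "|".join(f"{col}={row[col]}" for col in sorted(row) if col != "hmac")
    return hmac.new(key, material.encode(), hashlib.sha256).hexdigest()


def verify_row(key: bytes, row: dict) -> bool:
    """Recompute the digest and compare it with the stored one in constant time."""
    return hmac.compare_digest(row["hmac"], row_hmac(key, row))


# Example: a scheduled task would iterate over all rows and disable any
# snapshot whose stored HMAC no longer matches the recomputed value.
key = b"loaded-from-managed-keystore"  # placeholder, not a real key
row = {"release_name": "rhel7-snapshot", "expiration_date": "2021-01-31"}
row["hmac"] = row_hmac(key, row)
assert verify_row(key, row)
```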

Purging redundant/expired snapshots

Before OSSA, we kept creating new OS snapshots, and over time their cumulative size grew to 3TB. Many of these snapshots persisted because there was no clear path for retiring them. With OSSA in place and an OSSA-defined workflow for snapshot deprecation, we enabled purging of older snapshots that are either past their expiration or have been deprecated for a while. In either case, OSSA steps in and purges snapshots that are lingering around and adding no value. In the first iteration of this process, OSSA cleaned up ~500GB of redundant data, and we aim to remove more by fine-tuning our retention expectations. This is a nudge toward operational excellence: it controls the amount of data we continue to own and the network cost of transferring it for replication.
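
The purge step itself can be imagined as a small routine like the one below, keyed off the expiration and deprecation metadata the API already serves; the paths, field names, and grace period are illustrative assumptions.

```python
import shutil
from datetime import date, datetime
from pathlib import Path

SNAPSHOT_ROOT = Path("/data/snapshots")  # hypothetical replication root


def purge(snapshots: list, grace_days: int = 30) -> None:
    """Delete snapshots that are past expiration or long-deprecated."""
    today = date.today()
    for snap in snapshots:
        expired = datetime.strptime(snap["expiration_date"], "%Y-%m-%d").date() < today
        deprecated = (snap.get("status") == "deprecated"
                      and snap.get("deprecated_days", 0) > grace_days)
        if expired or deprecated:
            target = SNAPSHOT_ROOT / snap["release_name"]
            shutil.rmtree(target, ignore_errors=True)
            print(f"Purged {target}")
```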

Conclusion and future work

Before OSSA, information about snapshots was tied to another source of truth (SoT), which limited the number of OS snapshots that could be concurrently supported per distribution. This limitation was particularly hindering because the number of concurrent snapshots for testing and general availability could be more than one per distro, given that we were building snapshots at a much higher frequency. Removing the reliance on that SoT and depending solely on OSSA for retrieving OS snapshot metadata removed the hard dependency; with OSSA, we can have as many snapshots of any type per distribution as we need.

OSSA has emerged as the source of truth for anything and everything related to OS snapshots within LinkedIn. A product that started from a need to improve visibility and operability has reached a point where multiple critical services depend on data served by OSSA on demand. It also enables authorized users to trigger OS snapshot builds without explicit intervention from our team and organization. A plethora of checks and safeguards were added to OSSA to ensure that internal processes leave audit trails and actionable HTTP responses, which makes interacting with an inherently complex ecosystem reliable and further reduces the dependency on tribal knowledge for driving this process end-to-end. While our major deliverables are live, we are still working to improve the product to ensure that it continues scaling with requirements. Some of the near-term items are adding support for partially validating and conditionally releasing snapshots, exploring templatization of the snapshot-creation process for different distros, containerizing the upstream sync, and improving the ecosystem overall.

Acknowledgements

OSSA has become the product it is today because of continuous feedback and guidance from many engineering leaders, technical program managers, and engineering managers who helped mold its design considerations and deliverables. Shoutout to Steve Fantin for driving the work to enable repodB monitoring and interfacing OSSA with Ambry, and to Jayita Roy and Khushboo Kuchhal for scaling the service to a new data center and adding a dedicated staging environment. Many thanks to Cynthia Arriaga and Carlton Giles for keeping our deliverables under close watch and helping us unblock issues by effectively liaising with external teams. Thanks, Franck Martin, for supporting this initiative and helping us roadshow OSSA into a viable product at the heart of multiple design and development endeavors across LinkedIn. Thanks, Nishan Weragama, Adam Debus, and Sean Patrick, for providing valuable feedback during the initial design and helping us stay aligned with the Fleet Compliance initiative. Many thanks to Nitin Sonawane and Milind Talekar for supporting this effort.

Building LinkedIn’s Skills Graph to Power a Skills-First World

Co-authors: Sofus Macskássy, Yi Pan, Ji Yan, Yanen Li, Di Zhou, Shiyong Lin

As industries rapidly evolve, so do the skills necessary for success. Skill sets for jobs globally have changed by 25% since 2015 and this number is expected to double by 2027. Yet, we’ve long relied on insufficient and unequal signals when evaluating talent and predicting success – who you know, where you went to school, or who your last employer was. If we look at the labor market instead through the lens of skills – the skills you have and the skills a role or industry demands – we can create a transparent and fair job matching process that drives better outcomes for employers and employees. 

This new reality requires a common understanding of skills, backed by better data. For nearly a decade, our Economic Graph has helped leaders benchmark and compare labor markets and economies across the world. A critical element of this analysis is the insight provided by LinkedIn’s Skills Graph, which creates a common language around skills to help us all better understand the skills that power the global workforce. The Skills Graph does this by dynamically mapping the relationships between 39K skills, 875M people, 59M companies, and other organizations globally. 

It also drives relevance and matching across LinkedIn – helping learners find content more relevant to their career path; helping job seekers find jobs that are a good fit; and helping recruiters find the highest quality candidates. For example, these relationships between skills mean we can detect that “cost management” in a job seeker’s profile is relevant to a job posting that lists “project budgeting” as a required skill.

Building the LinkedIn Skills Graph

At the heart of our Skills Graph lies our skills taxonomy. The taxonomy is a curated list of unique skills and their intertwined relationships, each with detailed information about those skills. It’s built on a deep understanding of how skills power professional journeys, including what skills are required in a job, what skills a member has, and how members move from one position to the next. 

Today, our taxonomy consists of over 39,000 skills spanning 26 languages, over 374,000 aliases (different ways to refer to the same skill – e.g., “data analysis” and “data analytics”), and more than 200,000 links between skills. Even more important than the volume of data, the key to unlocking the power of skills lies in the structure and relationships between the skills. To create a stronger network of connected skills in our taxonomy, we utilize a framework we call “Structured Skills.” This framework increases our understanding of every skill in our database by mapping the relationships it has to other skills around it, and creates richer, more accurate skill-driven experiences for our members and customers. For example:

  • If a member knows about Artificial Neural Networks, the member knows something about Deep Learning, which means the member knows something about Machine Learning.

  • If a job requires Supply Chain Engineering, having a skill in Supply Chain Management or Industrial Engineering is definitely also relevant (a rough sketch of this kind of reasoning follows below).
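
To make the relationship-based reasoning above concrete, here is a toy sketch of how links between skills let a narrow skill imply relevance to related or broader ones. The skill names come from the examples above, but the data structure and traversal are purely illustrative, not LinkedIn's implementation.

```python
# Toy taxonomy: each skill maps to skills it implies knowledge of or relevance to.
IMPLIES = {
    "Artificial Neural Networks": {"Deep Learning"},
    "Deep Learning": {"Machine Learning"},
    "Supply Chain Management": {"Supply Chain Engineering"},
    "Industrial Engineering": {"Supply Chain Engineering"},
}


def implied_skills(skill: str) -> set:
    """Return the skill plus every skill reachable through the implication links."""
    implied, frontier = {skill}, [skill]
    while frontier:
        for related in IMPLIES.get(frontier.pop(), ()):
            if related not in implied:
                implied.add(related)
                frontier.append(related)
    return implied


# A member listing "Artificial Neural Networks" also matches jobs asking for
# "Deep Learning" or "Machine Learning".
print(implied_skills("Artificial Neural Networks"))
```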

Creating meaningful and accurate relationships between skill sets is critical to getting the most out of our Structured Skills. To do this, our machine learning and artificial intelligence systems comb through massive amounts of data and suggest new skills and relations between them. As our Skills Graph continues to grow and learn with AI, we are committed to maintaining the high quality of the data and connections found in our taxonomy. We do this with the help of trained taxonomists on our team, who manually review our skills data and ensure that we can verify its integrity and relevancy.

(Figure: Structured Skills consist of meaningful relationships between skills that empower deep reasoning to match members to relevant content such as jobs, learning material, and feed posts.)

But, building the taxonomy and Structured Skills is meaningless without connecting these to the jobs and members on our platform. Together, the Structured Skills and mapping to our members and jobs make up our Skills Graph and both are needed to unlock the full potential of a skill-based job market.

(Figure: Structured Skills enrich the set of skills for both members and jobs to ensure we can find all the relevant jobs for a member. We show the skill overlap so that members can see which of their skills are a match, as well as potential skill gaps they might want to address for their own career growth.)

Leveraging Machine Learning to map skills to members and jobs

Although millions of LinkedIn members have added skills to their profile, many have not added their most relevant skills to their skills sections or kept their skills section up to date. Instead, they list relevant skills in their summary sections, within the job experience descriptions in their profiles, or on the resumes they submit. On the other hand, many jobs on LinkedIn don’t comprehensively describe what skills are needed, and many listings don’t come through an online job posting that a recruiter has submitted but are instead ingested from our customers’ websites. In these scenarios where skills are not explicitly provided, it’s critical to pull skills data from the job descriptions, summaries, and more, to create a tool that drives reliable insights.

As you can imagine, this process requires processing a lot of text, so we have built machine learning models that leverage natural language understanding, deep learning, and information extraction technologies. To help train these models, our human labelers use AI to connect text found across jobs, profiles, and learning courses to specific skills in our taxonomy. Our system then learns to recognize different ways to refer to the same type of skill. Combined with natural language processing, we extract skills from many different types of text – with a high degree of confidence – to make sure we have high coverage and high precision when we map skills to our members and job posts.
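
As a greatly simplified illustration of the tagging step, alias matching alone might look like the sketch below; the real system layers deep NLP models and confidence scoring on top, and the alias table and sample text here are made up.

```python
import re

# Tiny alias table: different surface forms map to one canonical skill.
ALIASES = {
    "data analysis": "Data Analysis",
    "data analytics": "Data Analysis",
    "machine learning": "Machine Learning",
    "ml": "Machine Learning",
}


def extract_skills(text: str) -> set:
    """Return canonical skills whose aliases appear in the free text."""
    found = set()
    lowered = text.lower()
    for alias, canonical in ALIASES.items():
        if re.search(rf"\b{re.escape(alias)}\b", lowered):
            found.add(canonical)
    return found


print(extract_skills("Led data analytics and ML model development."))
# -> {'Data Analysis', 'Machine Learning'}
```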

We also leverage various clustering and machine learning algorithms to identify the core skills relating to a given job or function. We do this by applying these tools to all member histories and all job descriptions on our platform, which identify the skills that are likely associated with a job post or member job experience. These techniques, together with Structured Skills, create a holistic picture of skills a member has and skills needed to do a job. 

(Figure: When hirers create a job post on the LinkedIn platform, we use machine learning and Structured Skills to suggest explicit skills that we can tag the post with to increase discoverability.)

These models are designed to continuously improve and learn over time based on engagement from members on the LinkedIn platform, job seekers, hirers, and learners. For example, when a hirer posts a new job on our platform and types in the job description, our machine learning model automatically suggests the skills associated with that job posting. The hirer can then refine the selection of skills that best represent the qualifications for the job by manually removing or adding the suggested skills.

Looking forward

Beyond streamlining the hiring process, understanding members’ skills allows us to surface more relevant posts in their feed, suggest people they should connect with, and companies to follow. It also helps sales and marketing professionals on LinkedIn be more effective by using skills for ads targeting, and provides insights to our sales and marketing customers by sharing details on the skill sets of those who engage with their content. As our Skills Graph continues to evolve in parallel with the global workforce, it will only become smarter and deliver better outcomes for hirers, learners, job seekers, customers, and members.

Realizing a more equitable and efficient future of work will rely on building a deeper understanding of peoples’ abilities and potential. To keep up, some companies are already utilizing skills to identify qualified candidates – more than 40% of hirers on LinkedIn explicitly use skills data to fill their roles. 

As our CEO Ryan Roslansky stated at LinkedIn’s Talent Connect event this year, “We can build a world where everyone has access to opportunity not because of where they were born, who they know, or where they went to school, but because of their actual skills and ability.” Our Skills Graph will continue to be a critical part of how we help make a skills-based labor market a reality. We’re excited to share updates as our work continues on this journey.
