Career stories: Next plays, jungle gyms, and Python

Since she was a child, Deepti has been motivated to help people. This drive led her on a career journey with many pivots and moves — akin to navigating a children’s jungle gym — between industries and around the world. Based in Bangalore, this biomedical engineer turned data scientist shares how LinkedIn helped her gain new technical skills, dive into meaningful work, and grow. 

Image: Deepti walking in a jungle

Growing up in Mumbai, India, I always imagined myself in a career where I could give back. I once dreamed of becoming a neurosurgeon, but early in my career, I took a different path and earned a bachelor’s in electronics engineering. While studying engineering gave me the foundation for my future career, I quickly realized that my job options wouldn’t help me make the difference that I wanted to. So, I decided to complete a master’s program in biomedical engineering at Drexel University in Philadelphia. 

After graduating, I found an opportunity at the Toyota Technical Center in Boston, where I helped build driver safety systems that incorporated human physiological considerations into injury prevention. Toyota is where I first began to reconsider my perspective on what it means to help others, realizing that I could draw on my STEM background to build safer systems that would benefit everyone.

Image: Deepti and her daughter

Embracing a data-driven career change

Soon, however, home and family called me back to India, where biomedical research at the time was not as exciting as the work I had been doing in the U.S. While CT scans and MRIs are, of course, critical instruments, I increasingly felt that I wasn't giving back in the way I'd hoped. After two years, I knew it was time to push myself out of my comfort zone once again, which led me to data science.

When I first broke into the field, data science was more like informal analytics. Yet I was intrigued by this new discipline, where I could use the skills I had gained as an engineer, like problem-solving and logical thinking, while also building unique expertise. When I started, my mantra was to stay focused on learning and not worry about my experience (or lack thereof), even when surrounded by data scientists fresh out of school who had more data science experience than I did.


My instincts served me well, and I quickly grew from an analyst to a senior manager — this pace of career progression is the norm in startups, where fast growth is expected. In a short period, my time became less focused on getting my hands dirty with data, and more centered on managing clients and stakeholders and putting out fires. After seven years, I missed building things and solving problems, which is when the perfect opportunity opened up at LinkedIn. 

Image: Deepti sitting on a couch with plants behind her

Giving back to the global community at LinkedIn

With a desire to do more, I was recruited at the right time for a data scientist position on LinkedIn's Economic Graph, our digital representation of the global economy. The Economic Graph research team I joined was a global one, with people based in the U.S., Europe, and Singapore. What appealed to me most about the Economic Graph was that we worked and collaborated alongside governments and non-governmental organizations (NGOs) to deliver insights that enable our members to succeed and connect with the right opportunities for them.

The Economic Graph partners with public sector organizations to provide data insights that improve policy decisions. For example, if a government ministry is considering where to invest in education, it needs data on issues like labor market demand and skills gaps. Our team would deliver such power-packed insights using aggregated member (i.e., LinkedIn user) data from the platform. At LinkedIn, we ensure that member data is used safely, and we're proud that the trust we've built with our members enables us to deliver these insights.


Python, Scala, and people management

When the Economic Graph team consolidated, I knew it was time for the next stage of my career, or my Next Play as we call it here. My manager pushed me to consider taking on a tech lead role in data science within the Business Operations team at LinkedIn in India. I admit I was reluctant to go back to a position focused on business revenues, as I had grown attached to the research mission in my previous role. Soon, though, I realized that everything we do at LinkedIn helps advance our mission and vision for the community. 

Now, I’m managing a newsletter and leading a team of data scientists solving business-critical problems across the company. It’s precisely the kind of exposure I’m looking for at this point in my career, gaining horizontal expertise by engaging all these different domains. LinkedIn is all about learning. Here, managers encourage people to take charge of their careers, experiment, and move into other roles according to their interests and goals. 


We don’t shy away from challenges and learning curves. For example, I’ve had to upskill myself in coding. For nearly 14 years, I primarily used the R programming language. Now, we’ve moved on to Python and Scala, building on our everyday work with statistics and math. 

It’s not all about tech, though. We deal with unique questions, so problem-solving skills are critical. It’s also essential to think about the business contexts and ask the right questions. Then, we bring it all together with technology to solve a problem in a structured manner.

Image: Deepti and her family on the beach

Moving forward on the LinkedIn learning path

When thinking about career trajectories, I always return to this metaphor of a children’s jungle gym. I tell my team that it’s about moving through a matrix rather than climbing a ladder one step at a time. You’re still moving from one point to the next, but the next step isn’t necessarily upward. The reality is that, sometimes, you have to move down a level to reach a specific endpoint. 


I’ve moved from senior management roles into positions with no one reporting to me. At first, I thought, “Am I down-leveling?” Then I would remind myself that my final goal is always to do something meaningful. At that point, taking on a more technical role was a step in that direction. 

Then, with the knowledge and expertise I gained, I could return to leading teams and tackling bigger challenges. Growth looks different for different people, but as long as you have that fire inside you to keep learning and growing, changing domains, jobs, or even countries will only help you in your journey.


About Deepti 

Based in Bengaluru, India, Deepti is a senior data scientist at LinkedIn. Before LinkedIn, she spent nearly seven years as a senior analyst and senior manager for [24]7.ai working on customer engagement solutions. Born and raised in India, she holds a bachelor’s degree in electronics engineering from the University of Mumbai, and a master’s in biomedical engineering from Drexel University in the U.S. Outside of work, Deepti spends time with her two daughters, and shares her passions for interior design and gender equality issues on social media. 

Editor’s note: Considering an engineering/tech career at LinkedIn? In this Career Stories series, you’ll hear first-hand from our engineers and technologists about real life at LinkedIn — including our meaningful work, collaborative culture, and transformational growth. For more on tech careers at LinkedIn, visit: lnkd.in/EngCareers.


Our Approach to Research and A/B Testing

We are constantly striving to improve the experience on LinkedIn for our members and customers, with research and experimentation, such as A/B Testing, playing a key role in that work. 

Nearly a decade ago, I discussed the importance of these techniques in our journey to create economic opportunity for every member of the global workforce. Today we have a strong, principled approach to how we design and run A/B tests on everything from UI designs and AI algorithms to feature launches and bug fixes. As our platform continues to grow and evolve, these techniques have become even more essential to delivering on that vision. More specifically, we use these techniques to:

  • Deliver the best experiences to our community by leveraging innovation at scale. Through testing and measuring, we continuously evolve our products to add more value, making our platform safer, more engaging, and more enjoyable with every interaction. We use various methods to evolve our products and services, from member surveys to in-depth offline data analysis to online A/B tests when we have a new feature that we think will benefit our members. For example, we recently tested new ways to help members discover relevant news, conversations, and voices from people and organizations they might not otherwise know.

  • Avoid guessing; instead test, measure, and test again. We don't assume that we inherently know what is best for our members, as their needs evolve over time. By testing and constantly measuring, we seek feedback and insights to help guide us in the right direction. If a product feature we build doesn't deliver the impact we intended, we make adjustments (see the measurement sketch after this list).

  • Move quickly and thoughtfully. Our well-defined process includes design evaluations, committee reviews, and quality checks aimed at preventing unintended consequences. We also use observational causal studies to analyze historical data and discover causal patterns whenever applicable. The development of our T-REX platform has also standardized and improved our A/B testing processes. 
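At its core, each of these comparisons reduces to measuring a metric in a control group and a treatment group and asking whether the difference is larger than chance alone would explain. The snippet below is a minimal, illustrative sketch of that measurement step using a two-proportion z-test on invented engagement counts; it is not LinkedIn's T-REX platform or its actual methodology.

```python
import math

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    Returns the z statistic and p-value for H0: p_a == p_b.
    """
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    p_pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical engagement counts for control (A) and treatment (B).
z, p = two_proportion_ztest(successes_a=1_200, n_a=50_000,
                            successes_b=1_320, n_b=50_000)
print(f"absolute lift: {(1_320 / 50_000 - 1_200 / 50_000):.4%}, z={z:.2f}, p={p:.4f}")
```

In practice, the design evaluations, committee reviews, and quality checks described above sit on top of this kind of basic statistical comparison.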

Throughout all of this, we believe in the importance of sharing knowledge. We regularly share insights from our tests with the broader engineering community through papers, open source, and academic partnerships.

We are proud to have developed a culture at LinkedIn where research and experimentation are celebrated. Not only does this work help us stay innovative, it also reminds us that everything we do is in service of our community and of creating more economic opportunity for our members around the world.


Operating System Snapshot Automation

Co-authors: Rohit Jamuar, Tianxin Zhou

Introduction

LinkedIn has a large set of physical servers geographically spread across several locations. Every application is hosted on, and distributed and managed across, these physical servers. With a sizable footprint of servers in data centers, LinkedIn is responsible for ensuring that these hosts are always on an operating system (OS) version deemed the "latest and greatest" for all intents and purposes. The Production Systems Software Engineering (PSSE) organization within LinkedIn has taken on the responsibility of creating timely OS snapshots that are installed on these servers regularly. This blog discusses how this process was implemented and the impetus behind the OS Snapshot Automation (OSSA) project.

Historically, there were less rigid constraints around building snapshots and refreshing them across our server fleet. At LinkedIn, we started pursuing the creation and release of OS snapshots at a defined cadence, as it's ideal for servers to upgrade to the latest snapshots regularly and for older snapshots (with potential security vulnerabilities) to be retired. With this vision in mind, we wanted newly built OS snapshots validated once per month with due process and released at a tightly managed tempo. The main incentive behind creating a dedicated product for conducting these steps in an automated manner is rooted in improving overall operational excellence: being able to build snapshots automatically at a regular cadence allows timely validation and release to the fleet, which is a necessity for giving customers confidence that their data and private information cannot be exploited through OS-level vulnerabilities on our servers.

Motivation

Pre-OSSA, the OS snapshot process was a manual one closely tied to a handful of one-off shell scripts, and the entire ecosystem was tied to a single server. Moreover, metadata about snapshots and their respective lifecycles was stored in an internal wiki document, and there wasn't a way to reference this data programmatically. The existing solution had challenges with maintainability, scalability, and high availability. Another big challenge for this ecosystem was how snapshotting was conducted: it required human effort to create, promote, and deprecate a release. The existing infrastructure and the processes to get a snapshot created and boot-tested could, at best, be described as a stopgap solution, meaning that everything was conducted manually and needed dedicated full-time engineer (FTE) time. OSSA was envisioned with the requirement of making this ecosystem highly available, well-monitored, and programmatically configurable by its consumers. Aside from improving the ecosystem's availability, we also wanted to coalesce the different one-off scripts into a multiproduct to improve the code base's craftsmanship and maintainability.

Improving data accessibility via RESTful API


Initially, processes and information were tied to asynchronous communications over Slack and Jira tickets, which lacked visibility and made tracking necessary information cumbersome. One of the ways we hoped to solve the visibility problem was by disseminating this information via a RESTful API, which was the first important capability OSSA delivered. This allowed us to bridge the gap between the information strewn across internal wiki pages and Jira tickets and expose OS snapshot data (release_name, kernel_version, base_release, and expiration_date) via an HTTP GET call. Another step we wanted to explore was enabling the partner teams that validate OS snapshots to interact with OSSA, so that they could relay their validation results without having to do so over mediums that aren't easily queryable. With this support enabled, anyone interested can make a simple HTTP GET call and see which teams have validated an OS snapshot and the status of each validation. Additionally, we enabled support for building OS snapshots and managing their lifecycle via POST calls to this API. In the grand scheme of things, exposing these functionalities over an API made integration with external teams' products feasible and allowed us to keep working toward a solution where all the disparate workflows can be triggered via a one-touch model.
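As an illustration of the read path described above, here is a minimal sketch of querying such an endpoint for snapshot metadata. The host, route, and response shape are assumptions made for the example; only the field names (release_name, kernel_version, base_release, expiration_date) come from this post, and this is not OSSA's actual API.

```python
import requests

# Hypothetical OSSA endpoint; the real service, host, and routes are internal.
OSSA_API = "https://ossa.example.internal/api/v1"

def list_snapshots():
    """Fetch OS snapshot metadata via a plain HTTP GET."""
    resp = requests.get(f"{OSSA_API}/snapshots", timeout=10)
    resp.raise_for_status()
    return resp.json()

for snap in list_snapshots():
    print(snap["release_name"], snap["kernel_version"],
          snap["base_release"], snap["expiration_date"])
```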

When talking internally and with partner teams, it was vital that we could authenticate and authorize incoming POST requests to OSSA. We decided to rely on DataVault's token-based authorization service for this, as DataVault has a well-established ecosystem that drives the majority of authorization requests at LinkedIn and fits our expectations. We created custom ACLs, with access rights per validating team, and ensured that these ACLs are enforced by DataVault when an external user submits a POST request with a token. The individual or automation must retrieve this token after authenticating to the DataVault token service.
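A companion sketch for the authorized write path: retrieve a token, then submit a validation result with it, with ACL enforcement happening server-side. The token-service flow, endpoint, and payload below are illustrative assumptions rather than DataVault's or OSSA's real interfaces.

```python
import requests

OSSA_API = "https://ossa.example.internal/api/v1"            # hypothetical
TOKEN_SERVICE = "https://datavault.example.internal/token"   # hypothetical

def get_token(session: requests.Session) -> str:
    # Assumed flow: authenticate to the token service and receive a token.
    resp = session.post(TOKEN_SERVICE, json={"scope": "ossa-validation"}, timeout=10)
    resp.raise_for_status()
    return resp.json()["token"]

def report_validation(session, release_name: str, team: str, result: str) -> None:
    """POST a validation result ('nomination' or 'deprecation') for a snapshot."""
    token = get_token(session)
    resp = session.post(
        f"{OSSA_API}/snapshots/{release_name}/validations",
        headers={"Authorization": f"Bearer {token}"},
        json={"team": team, "result": result},
        timeout=10,
    )
    resp.raise_for_status()  # a 403 here would indicate the ACL check failed

with requests.Session() as s:
    report_validation(s, release_name="rhel7-2023-01", team="maize", result="nomination")
```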


High availability of API

With this design in place, the next step for us was to ensure that this API remains highly available, as we already had several stakeholders depending on the metadata provided by this API.


Figure: OSSA API high-availability deployment across data centers (DC = data center)

We decided to have two nodes per site and put all services running on these nodes behind ATS. Our partner teams expected traffic to be routed from outside the environment where these servers live, and without ATS, interacting parties would have had to open network ACLs. With the service spread geographically, we also had to ensure that OSSA's API reports the same dataset regardless of which data center serves the request, so we replicate data between data centers using GoldenGate replication.

Improved visibility into overall processing

While the API paved the way for managing the OS snapshot lifecycle, OSSA also enabled more granular visibility into the overall OS snapshot process by exposing related data via channels like Iris and an internal event bus. We use Iris-based notifications to learn about the state of an OS snapshot during its build, testing, and monitoring. We also emit events to the event bus for anyone to consume via an intuitive UI, so external teams are not tied to interacting with the API for this information.


Now that we have discussed OSSA's API and HA design in detail, we will dive into what constitutes an OS snapshot, how we have been building them, and the essential validation OSSA performs before creating an event.

What is an OS snapshot?

Before we dive into how an OS snapshot is built, it's good to understand what one is. An OS snapshot is a collection of boot files (initrd, vmlinuz), RPMs, and some additional metadata. The "snapshot" in "OS snapshot" comes from the fact that we take a proverbial snapshot of the latest locally available RPMs and bundle them together into an entity that is meant to be immutable. This immutability is deliberate: being able to reliably install the exact same RPMs across different test environments helps us isolate issues.
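To make that shape concrete, here is a small, illustrative data model based only on the description above; the field names and values are assumptions, and the frozen dataclass simply mirrors the intent that a snapshot is immutable once built.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)   # frozen mirrors the "immutable by nature" intent
class OSSnapshot:
    release_name: str
    base_release: str        # distro release the snapshot is based on (assumed field)
    kernel_version: str
    bootfiles: tuple         # e.g., ("initrd.img", "vmlinuz")
    rpms: tuple              # RPM filenames captured at snapshot time
    expiration_date: date

snapshot = OSSnapshot(
    release_name="rhel7-2023-01",                 # hypothetical name
    base_release="7.9",
    kernel_version="3.10.0-1160.el7.x86_64",
    bootfiles=("initrd.img", "vmlinuz"),
    rpms=("bash-4.2.46-35.el7.x86_64.rpm",
          "openssl-libs-1.0.2k-26.el7_9.x86_64.rpm"),
    expiration_date=date(2023, 7, 1),
)
print(snapshot.release_name, len(snapshot.rpms), "RPMs")
```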

How do we build an OS snapshot?


Our team inherited the creation of OS snapshots from a partner team within PSSE. At the time we took over this effort, nightly replication of RPMs from upstream sources was already configured using an open-source tool called mrepo. For RHEL packages, we'd interact with the RH7 CDN using certificate-based authorization; for CentOS packages, we'd point to a publicly open mirror (from kernel.org). At snapshot-creation time, we'd rely on open-source tools like createrepo and repomanage to build an OS snapshot. Once an OS snapshot is built, it's replicated over a highly distributed yum infrastructure, and our internal server lifecycle-management tooling refers to this distributed data when triggering in-place or full reimages of physical servers.
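Conceptually, the build step selects the newest copy of each RPM from the local mirror, copies it into a snapshot directory, and generates yum metadata for it. The sketch below strings together the open-source repomanage and createrepo tools mentioned above; the paths and selection logic are simplified assumptions, not OSSA's actual pipeline.

```python
import shutil
import subprocess
from pathlib import Path

MIRROR_DIR = Path("/data/mirror/rhel7")                 # hypothetical local mirror
SNAPSHOT_DIR = Path("/data/snapshots/rhel7-2023-01")    # hypothetical snapshot dir

def build_snapshot(mirror: Path, snapshot: Path) -> None:
    snapshot.mkdir(parents=True, exist_ok=True)

    # repomanage --old lists RPMs superseded by newer versions;
    # everything else is the "latest" set we want to snapshot.
    old = subprocess.run(
        ["repomanage", "--old", str(mirror)],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    superseded = {Path(p).name for p in old}

    for rpm in mirror.glob("*.rpm"):
        if rpm.name not in superseded:
            shutil.copy2(rpm, snapshot / rpm.name)

    # createrepo generates the repodata/ yum metadata for the snapshot.
    subprocess.run(["createrepo", str(snapshot)], check=True)

build_snapshot(MIRROR_DIR, SNAPSHOT_DIR)
```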

The ability to build snapshots was also exposed through an ACL-enforced endpoint in the API. This endpoint accepts the necessary metadata from authorized users and relays that data to the backend logic, which references it when creating a new test snapshot. This data flow is crucial because we build test snapshots for different distros. For example, to build test snapshots for RHEL7 and RHEL8, we use the RPMs fetched from upstream verbatim, whereas we use a separate methodology for creating CentOS7 snapshots. While that methodology is similar to how we perform these steps for RH* distros, the stark difference comes from the kernel RPMs embedded into CentOS7 snapshots.


Once a snapshot is built and replicated, the next important step for OSSA is to validate whether we can install the newly minted test OS snapshot on a server. Aside from the OS install itself, we validate whether all of our internal tooling bootstraps the server as expected. Pre-OSSA, a dedicated engineer owned this responsibility, which was a time sink given the frequency with which these tests had to be run. This is especially true under the current engagement model, where multiple experimental test snapshots can be built for internal validation. We saw an opportunity to fold automated boot-testing of OS snapshots into OSSA and decided to leverage an existing product, MaaS (Metal as a Service), a self-service API that allows servers to be reimaged on demand.
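A rough sketch of what driving a boot test through a self-service reimage API could look like: trigger a reimage with the test snapshot and poll until it completes. The endpoints, payloads, and status values here are invented for illustration and do not reflect MaaS's actual interface.

```python
import time
import requests

MAAS_API = "https://maas.example.internal/api/v1"   # hypothetical endpoint

def boot_test(hostname: str, release_name: str, timeout_s: int = 3600) -> bool:
    """Trigger a reimage with the test snapshot and poll until it finishes."""
    resp = requests.post(
        f"{MAAS_API}/reimage",
        json={"host": hostname, "snapshot": release_name},
        timeout=30,
    )
    resp.raise_for_status()
    job_id = resp.json()["job_id"]

    deadline = time.time() + timeout_s
    while time.time() < deadline:
        status = requests.get(f"{MAAS_API}/reimage/{job_id}", timeout=30).json()["status"]
        if status in ("succeeded", "failed"):
            return status == "succeeded"
        time.sleep(60)   # poll once a minute
    return False

print(boot_test("test-host-01", "rhel7-2023-01"))
```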


Figure: Boot-testing workflow

Figure: OS snapshot validation workflow


Figure: OS snapshot creation workflow

Before diving into the overall workflow, it's good to understand how the different partner teams pitch in to validate an OS snapshot:

  • The OSSA team creates a new test OS snapshot and boot-tests it.
  • The Maize team performs application testing on the test snapshot created by OSSA.
  • InfoSec performs vulnerability scanning of the test snapshot.
  • The hardware and capacity engineering (HCE) team performs hardware and regression testing on the test snapshot across multiple hardware SKUs.
  • The PSSE team owns the promotion of a test snapshot and the deprecation of a release snapshot.
  • The OS Upgrade Automation team submits imaging requests with the test snapshot under validation.

The Maize, InfoSec, and HCE teams do their testing in parallel and report the result back to OSSA. A successful validation is relayed back as a "nomination," and a failure is reported as a "deprecation." A few members of the PSSE organization have been given access to promote a test snapshot, since promotion makes the test snapshot generally available and builds a corresponding release snapshot that anyone can install on their hosts (as all the necessary validation has been conducted). PSSE also holds the key for deprecating previously released OS snapshots; we could deprecate such snapshots if a new CVE is found or unforeseen behavior is observed at runtime.

Figure: The general workflow for OSSA's interactions with different external teams for managing the lifecycle of test and release OS snapshots.


Monitoring

With OSSA, we saw an opportunity to improve the monitoring of snapshots and the RPM-fetch process. Until this point, there wasn't a reliable way to do this, as there was no source of truth that could disambiguate and/or spot issues. From the perspective of OS snapshot monitoring, we designed changes for OSSA to track missing RPM(s), missing or modified metadata, and incorrect checksum(s).

Monitoring these items not only plays a crucial role in enforcing the immutability of OS snapshots, but also helps ensure that what partner teams vetted remains the same throughout a snapshot's lifetime. To track any modifications, we started computing a digest at snapshot-creation time. This digest (a JSON file) tracks the RPMs in a snapshot along with their SHA256 checksums. The file is distributed with the OS snapshot and also uploaded to an Ambry container, so that a local modification of the file can be detected by verifying it against the copy in the Ambry blobstore.
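A minimal sketch of such a digest: a JSON map from RPM name to SHA256 that can later be compared against the trusted copy fetched from the blobstore. The paths and the verification helper are illustrative assumptions.

```python
import hashlib
import json
from pathlib import Path

def compute_digest(snapshot_dir: Path) -> dict:
    """Map each RPM in the snapshot to its SHA256 checksum."""
    digest = {}
    for rpm in sorted(snapshot_dir.glob("*.rpm")):
        h = hashlib.sha256()
        with open(rpm, "rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                h.update(chunk)
        digest[rpm.name] = h.hexdigest()
    return digest

def find_modifications(local_digest: dict, trusted_digest: dict) -> list:
    """Return RPMs that are missing, added, or modified relative to the
    trusted digest (e.g., the copy fetched from the blobstore)."""
    names = set(local_digest) | set(trusted_digest)
    return [n for n in names if local_digest.get(n) != trusted_digest.get(n)]

snapshot_dir = Path("/data/snapshots/rhel7-2023-01")   # hypothetical path
digest = compute_digest(snapshot_dir)
(snapshot_dir / "digest.json").write_text(json.dumps(digest, indent=2))
```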

During this effort, we ran into a few escalations caused by newly built snapshots lacking the latest versions of certain RPMs. This was yet another opportunity for improvement! We added logic to validate whether the last upstream fetch actually retrieved the newly available RPMs; if not, the relevant members are notified so the underlying issue can be triaged. A scheduled task drives this check daily and notifies engineers of discrepancies.

As the data contained in and reported by OSSA directly impacts various production services, we also implemented monitoring for possible tampering with the items stored in the database. We added an extra column per table that contains an HMAC-SHA256 of the other columns, recomputed whenever any data in a row is modified. A scheduled task iterates over these columns at a regular cadence and compares the stored values with ones computed during execution. If there is a mismatch, it auto-disables those OS snapshots from the list of valid snapshots and notifies the developers about the data-integrity violation. Any data modification can be isolated this way. Because we use an HMAC whose private key is persisted in a managed keystore, it's highly improbable that this data could be recomputed reliably after tampering with the dataset.
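The row-integrity idea can be shown in a few lines: compute an HMAC-SHA256 over a row's columns with a key held outside the database, store it in the extra column, and recompute it during the scheduled check. The column names and key handling below are simplified assumptions.

```python
import hmac
import hashlib

# In production, the key would live in a managed keystore, never in code.
SECRET_KEY = b"example-key-from-keystore"

def row_hmac(row: dict, columns: list) -> str:
    """HMAC-SHA256 over the row's columns, in a fixed order."""
    payload = "|".join(str(row[c]) for c in columns).encode("utf-8")
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

COLUMNS = ["release_name", "kernel_version", "base_release", "expiration_date"]

row = {
    "release_name": "rhel7-2023-01",
    "kernel_version": "3.10.0-1160.el7.x86_64",
    "base_release": "7.9",
    "expiration_date": "2023-07-01",
    "hmac": None,
}
row["hmac"] = row_hmac(row, COLUMNS)          # written alongside any row update

# Scheduled integrity check: recompute and compare in constant time.
if not hmac.compare_digest(row["hmac"], row_hmac(row, COLUMNS)):
    print("data-integrity violation: disabling snapshot and notifying developers")
```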

Purging redundant/expired snapshots


Before OSSA, we kept creating new OS snapshots, and over time their cumulative size grew to 3TB. Many of these snapshots continued to persist because there was no clear path for retiring them. With OSSA in place and an OSSA-defined workflow for snapshot deprecation, we enabled purging of older snapshots that are either past their expiration or have been deprecated for a while. In either of these cases, OSSA steps in and purges snapshots that are lingering around and adding no value. In the first iteration of this process, OSSA cleaned up ~500GB of redundant data, and we aim to remove more by fine-tuning our retention expectations. This is a nudge toward operational excellence: controlling the data we continue to own and the network cost of transferring it for replication.
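The purge decision itself reduces to a couple of date comparisons against snapshot metadata. Below is a hedged sketch under assumed field names and an assumed grace period; the real policy and storage layout may differ.

```python
import shutil
from datetime import date, timedelta
from pathlib import Path

GRACE_PERIOD = timedelta(days=30)   # assumed grace period after deprecation

def should_purge(meta: dict, today: date) -> bool:
    """Purge snapshots past expiration or deprecated for a while."""
    expired = date.fromisoformat(meta["expiration_date"]) < today
    deprecated_long_enough = (
        meta.get("deprecated_on") is not None
        and date.fromisoformat(meta["deprecated_on"]) + GRACE_PERIOD < today
    )
    return expired or deprecated_long_enough

snapshots = [   # invented metadata records
    {"release_name": "rhel7-2022-06", "expiration_date": "2022-12-01", "deprecated_on": "2022-11-01"},
    {"release_name": "rhel7-2023-01", "expiration_date": "2023-07-01", "deprecated_on": None},
]

for meta in snapshots:
    if should_purge(meta, date.today()):
        shutil.rmtree(Path("/data/snapshots") / meta["release_name"], ignore_errors=True)
```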

Conclusion and future work

Before OSSA, information about snapshots was tied to another source of truth, which limited the number of OS snapshots that could be supported concurrently per distribution. This limitation was particularly hindering because the number of concurrent snapshots for testing and general availability can be more than one per distro, especially now that we build snapshots at a much higher frequency. Removing the reliance on that source of truth and depending solely on OSSA for OS snapshot metadata removed the hard dependency; with OSSA, we can have as many snapshots of any type per distribution as we need.

OSSA has emerged as the source of truth for anything and everything related to OS snapshots within LinkedIn. A product that started with a need to improve visibility and operability has grown to a point where multiple critical services depend on data from OSSA being served on demand. It also enables (authorized) users to trigger OS snapshot builds without explicit intervention from our team and organization. A plethora of checks and safeguards were added to OSSA to ensure that internal processes leave audit trails and return actionable HTTP responses, making interaction with an inherently complex ecosystem reliable and further reducing dependency on tribal knowledge for driving this process end-to-end. While our major deliverables are live, we are still improving the product to ensure that it continues scaling with requirements. Some of the near-term goals are adding support for partially validating and conditionally releasing snapshots, exploring templatization of the snapshot-creation process for different distros, containerizing the upstream sync, and improving the ecosystem overall.

Acknowledgements

OSSA has become the product it is today because of continuous feedback and guidance from many engineering leaders, technical program managers, and engineering managers who helped mold design considerations and deliverables. Shoutout to Steve Fantin for driving the work to enable repodB monitoring and to interface OSSA with Ambry, and to Jayita Roy and Khushboo Kuchhal for scaling the service to a new data center and adding a dedicated staging environment. Many thanks to Cynthia Arriaga and Carlton Giles for keeping our deliverables under close watch and helping us unblock issues by effectively liaising with external teams. Thanks, Franck Martin, for supporting this initiative and helping us roadshow OSSA into a viable product at the heart of multiple design and development endeavors across LinkedIn. Thanks, Nishan Weragama, Adam Debus, and Sean Patrick, for providing valuable feedback during the initial design and helping us stay aligned with the Fleet Compliance initiative. Many thanks to Nitin Sonawane and Milind Talekar for supporting this effort.


Building LinkedIn’s Skills Graph to Power a Skills-First World

Co-authors: Sofus Macskássy, Yi Pan, Ji Yan, Yanen Li, Di Zhou, Shiyong Lin

As industries rapidly evolve, so do the skills necessary for success. Skill sets for jobs globally have changed by 25% since 2015 and this number is expected to double by 2027. Yet, we’ve long relied on insufficient and unequal signals when evaluating talent and predicting success – who you know, where you went to school, or who your last employer was. If we look at the labor market instead through the lens of skills – the skills you have and the skills a role or industry demands – we can create a transparent and fair job matching process that drives better outcomes for employers and employees. 

This new reality requires a common understanding of skills, backed by better data. For nearly a decade, our Economic Graph has helped leaders benchmark and compare labor markets and economies across the world. A critical element of this analysis is the insight provided by LinkedIn’s Skills Graph, which creates a common language around skills to help us all better understand the skills that power the global workforce. The Skills Graph does this by dynamically mapping the relationships between 39K skills, 875M people, 59M companies, and other organizations globally. 

It also drives relevance and matching across LinkedIn – helping learners find content more relevant to their career path; helping job seekers find jobs that are a good fit; and helping recruiters find the highest quality candidates. For example, these relationships between skills mean we can detect that "cost management" in a job seeker's profile is relevant to a job posting that lists "project budgeting" as a required skill.

Building the LinkedIn Skills Graph

At the heart of our Skills Graph lies our skills taxonomy. The taxonomy is a curated list of unique skills and their intertwined relationships, each with detailed information about those skills. It’s built on a deep understanding of how skills power professional journeys, including what skills are required in a job, what skills a member has, and how members move from one position to the next. 

Today, our taxonomy consists of over 39,000 skills spanning 26 languages, over 374,000 aliases (different ways to refer to the same skill – e.g., "data analysis" and "data analytics"), and more than 200,000 links between skills. Even more important than the volume of data, the key to unlocking the power of skills lies in the structure of and relationships between the skills. To create a stronger network of connected skills in our taxonomy, we utilize a framework we call "Structured Skills." This framework deepens our understanding of every skill in our database by mapping its relationships to the skills around it, and creates richer, more accurate skill-driven experiences for our members and customers. For example (see the sketch after this list):

  • If a member knows about Artificial Neural Networks, the member knows something about Deep Learning, which means the member knows something about Machine Learning.

  • If a job requires Supply Chain Engineering, having a skill in Supply Chain Management or Industrial Engineering is definitely also relevant.
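Here is the toy sketch referenced above: a handful of the skill relationships named in these examples, plus a transitive walk over them to decide what a skill implies and whether a member's skill is relevant to a job's required skill. It is a deliberate simplification of the 39,000-skill taxonomy, not its actual representation.

```python
# Toy skill relationships drawn from the examples above:
# each skill maps to more general or closely related skills.
RELATED_SKILLS = {
    "Artificial Neural Networks": {"Deep Learning"},
    "Deep Learning": {"Machine Learning"},
    "Supply Chain Engineering": {"Supply Chain Management", "Industrial Engineering"},
}

def implied_skills(skill: str) -> set:
    """Skills reachable from `skill` by following relationship links transitively."""
    reached, frontier = set(), [skill]
    while frontier:
        for related in RELATED_SKILLS.get(frontier.pop(), set()):
            if related not in reached:
                reached.add(related)
                frontier.append(related)
    return reached

def relevant_for_job(member_skills: set, required_skill: str) -> bool:
    """A member skill matches if it equals the required skill or is connected
    to it (in either direction) through the relationship links."""
    return any(
        s == required_skill
        or required_skill in implied_skills(s)
        or s in implied_skills(required_skill)
        for s in member_skills
    )

print(implied_skills("Artificial Neural Networks"))   # Deep Learning, Machine Learning
print(relevant_for_job({"Supply Chain Management"}, "Supply Chain Engineering"))  # True
```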

Creating meaningful and accurate relationships between skill sets is critical to getting the most out of our Structured Skills. To do this, our machine learning and artificial intelligence systems comb through massive amounts of data and suggest new skills and relations between them. As our Skills Graph continues to grow and learn with AI, we are committed to maintaining the high quality of the data and connections in our taxonomy. We do this with the help of trained taxonomists on our team, who manually review our skills data and ensure that we can verify its integrity and relevance.

Structured Skills consists of meaningful relationships between skills that empower deep reasoning to match members to relevant content such as jobs, learning material, and feed posts.

But building the taxonomy and Structured Skills is meaningless without connecting them to the jobs and members on our platform. Together, Structured Skills and the mapping to our members and jobs make up our Skills Graph, and both are needed to unlock the full potential of a skill-based job market.


Structured Skills enriches the set of skills for both members and jobs to ensure we can find all the relevant jobs for a member. We show the skill overlap so that members can see which of their skills are a match, as well as potential skill gaps that they might want to address for their own career growth.

Leveraging Machine Learning to map skills to members and jobs

Although millions of LinkedIn members have added skills to their profiles, many have not added their most relevant skills to their skills section or kept it up to date. Instead, they list relevant skills in their summary sections, within the job experience descriptions in their profiles, or on the resumes they submit. On the other hand, many jobs on LinkedIn don't comprehensively describe which skills are needed. Many listings also aren't job postings a recruiter submitted directly on LinkedIn but are ingested from our customers' websites. In these scenarios where skills are not explicitly provided, it's critical to pull skills data from job descriptions, summaries, and more, to create a tool that drives reliable insights.

As you can imagine, this involves processing a lot of text, so we have built machine learning models that leverage natural language understanding, deep learning, and information extraction technologies. To help train these models, our human labelers use AI to connect text found across jobs, profiles, and learning courses to specific skills in our taxonomy. Our system then learns to recognize different ways to refer to the same skill. Combined with natural language processing, we extract skills from many different types of text – with a high degree of confidence – to make sure we have high coverage and high precision when we map skills to our members and job posts.
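The production pipeline relies on the deep NLU models described above; purely for intuition, here is a deliberately simplified sketch of the alias step alone: normalizing different surface forms found in free text to a canonical skill in the taxonomy. The alias table is tiny and hypothetical.

```python
import re

# Tiny, hypothetical alias table: surface form -> canonical skill in the taxonomy.
SKILL_ALIASES = {
    "data analysis": "Data Analysis",
    "data analytics": "Data Analysis",
    "project budgeting": "Project Budgeting",
    "cost management": "Cost Management",
}

def extract_skills(text: str) -> set:
    """Return canonical skills whose aliases appear in the text."""
    found = set()
    lowered = text.lower()
    for alias, canonical in SKILL_ALIASES.items():
        # Word-boundary match so "data analytics platform" still counts.
        if re.search(rf"\b{re.escape(alias)}\b", lowered):
            found.add(canonical)
    return found

profile_summary = "Led cost management and data analytics for a 20-person team."
print(extract_skills(profile_summary))
# {'Cost Management', 'Data Analysis'}
```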


We also leverage various clustering and machine learning algorithms to identify the core skills related to a given job or function. We do this by applying these tools to all member histories and all job descriptions on our platform, identifying the skills that are likely associated with a job post or a member's job experience. These techniques, together with Structured Skills, create a holistic picture of the skills a member has and the skills needed to do a job.
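As a stand-in for that clustering step, the sketch below shows one simple signal such methods can build on: counting how often each skill co-occurs with a job title and keeping the most frequent ones as that title's core skills. The data is invented and the real algorithms are far richer.

```python
from collections import Counter, defaultdict

# Invented (job title, extracted skills) pairs standing in for job posts
# and member job experiences.
observations = [
    ("Data Scientist", {"Python", "Machine Learning", "SQL"}),
    ("Data Scientist", {"Python", "Statistics", "SQL"}),
    ("Data Scientist", {"Machine Learning", "Python"}),
    ("Supply Chain Analyst", {"Supply Chain Management", "Excel"}),
]

def core_skills(observations, top_k: int = 3) -> dict:
    """Most frequently co-occurring skills per job title."""
    counts = defaultdict(Counter)
    for title, skills in observations:
        counts[title].update(skills)
    return {title: [s for s, _ in c.most_common(top_k)] for title, c in counts.items()}

print(core_skills(observations))
# e.g. {'Data Scientist': ['Python', ...], 'Supply Chain Analyst': [...]}
```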

When hirers create a job post on the LinkedIn platform, we use machine learning and Structured Skills to suggest explicit skills that we can tag the post with to increase discoverability.

These models are designed to continuously improve and learn over time based on engagement from members, job seekers, hirers, and learners on the LinkedIn platform. For example, when a hirer posts a new job on our platform and types in the job description, our machine learning model automatically suggests skills associated with that job posting. The hirer can then refine the selection to the skills that best represent the qualifications for the job by manually removing or adding suggested skills.


Looking forward

Beyond streamlining the hiring process, understanding members' skills allows us to surface more relevant posts in their feeds, suggest people they should connect with, and recommend companies to follow. It also helps sales and marketing professionals on LinkedIn be more effective by using skills for ads targeting, and provides insights to our sales and marketing customers by sharing details on the skill sets of those who engage with their content. As our Skills Graph continues to evolve in parallel with the global workforce, it will only become smarter and deliver better outcomes for hirers, learners, job seekers, customers, and members.

Realizing a more equitable and efficient future of work will rely on building a deeper understanding of peoples’ abilities and potential. To keep up, some companies are already utilizing skills to identify qualified candidates – more than 40% of hirers on LinkedIn explicitly use skills data to fill their roles. 

As our CEO Ryan Roslansky stated at LinkedIn’s Talent Connect event this year, “We can build a world where everyone has access to opportunity not because of where they were born, who they know, or where they went to school, but because of their actual skills and ability.” Our Skills Graph will continue to be a critical part of how we help make a skills-based labor market a reality. We’re excited to share updates as our work continues on this journey.
