Career stories: Four engineering careers. One LinkedIn.

LinkedIn’s Next Play culture celebrates transformational growth and internal mobility, and engineering leader Shalini is its biggest champion. Based in Silicon Valley, this mom of three walks us through her impactful career journey from our LinkedIn consumer and data teams, to the LinkedIn Sales Solutions team, to her current role on the LinkedIn Talent Solutions team.

Shalini and her family

Before joining LinkedIn, I worked my way up in Silicon Valley’s consumer software engineering space as a backend (apps) engineer. As my career progressed and I transitioned to managing a team of backend, frontend (UI), and mobile engineers, I found real fulfillment in supporting the engineers around me as we built products together. I quickly realized that management could offer an exciting path of learning and helping engineers grow.

When I started at LinkedIn in 2015 as a senior engineering manager on the search experience team, I was impressed with how supportive the company culture was of my management style. In my first project, my team retooled the LinkedIn mobile site to give our global user base a better experience, as part of our Consumer Flagship product team.

Then, I was asked to pivot and spearhead the creation of a new data applications team that involved full stack development. Even though building internal tools for engineering was new territory for me, I was eager to learn, and — as I like to say — stumble forward with LinkedIn’s support. What initially started as a small team of about 10 has blossomed into 50+ engineers. 

Shalini and colleagues

My work on the Data team included a site-wide project to change the way we track events, making our data cleaner and more useful for both the data science and artificial intelligence (AI) teams. The video below explains how this multi-year project abstracted away that complexity so that a change made in one place flows through everywhere it is needed, how it brought together the product engineering, data science, site reliability (SRE), analyst, and marketing teams, and how it changed the way we think about and use data.


LinkedIn is unique in that we have both consumer and enterprise products, and I wanted to learn what it is like to build and manage enterprise software. I explored various roles and products and joined the Sales Navigator team in the LinkedIn Sales Solutions organization. At roughly 80 to 100 engineers, we operated as a close-knit organization that was simpler and faster to navigate, more like a start-up compared to my previous roles in much larger organizations.


Transformation and looking ahead are both integral to the collaborative culture here at LinkedIn. When I was planning out my next career move, my managers at LinkedIn were invaluable, giving me the individual attention I needed to progress into my ideal role. As I explored different departments, the mentors I spoke with were generous with their time, advice, and coaching in helping me find the best career fit for me. They truly invested in my career development, ensuring that I found roles I was passionate about and aligned with where I wanted to grow. 

Their mentorship was pivotal in landing my latest role as an engineering senior director (see video below) on the LinkedIn Talent Solutions team, where I focus on creating a more equitable and efficient talent marketplace. After the pandemic, skills-based hiring became increasingly valuable to the job search. This is where my role comes into play — my team works on products to build a world where people can be seen for their skills, not just their title or degree.

We build product features such as tools that help companies search for candidates based on skills and explicitly list skills in job postings, as well as tools that let job seekers clearly compare how their skills match up to a position’s requirements. At LinkedIn, we aim to transform skills matching in the job search, and I am excited about what lies ahead in my career with this team.


Through all these career transformations, and as a mom of three, I have found my team and managers incredibly supportive and flexible. There have been many instances of juggling work and kids, especially when school events happen during working hours, but I have always had the support of my team and management to balance my time.

One particular instance I remember was when I had a career conversation scheduled with a vice president, and it was right before the play audition for my 5th grader. I had to drive 30 minutes to get there. It was the only time available, so we decided to chat on the phone while I drove to school. Such flexibility and understanding have helped me to integrate my family and work in a balanced way.

Shalini and her family on the beach

My passion for skills-building led to one of my most fulfilling projects to date: founding LinkedIn’s engineering apprenticeship program, REACH. It’s a multi-year program that helps our engineers uplevel their technical skills in areas ranging from data science and artificial intelligence (AI) to backend (apps) engineering and user experience.

As the video below describes, REACH started as a grassroots initiative over five years ago, and the program has now helped over 100 apprentices become part of LinkedIn’s engineering teams. It has been so fulfilling to watch the participants channel their passions into building their future here, and I’m proud to have recently celebrated our five-year anniversary.


When engineers who are just starting their careers tell me they want to be managers, I always ask about their motivations. I believe a desire to support people should drive that decision, because at the end of the day you remember some of your projects as a manager, but you always remember the people you worked with. 


Ask yourself: where do you find the most joy? If helping others learn and grow brings you joy, such a role will come more naturally to you. Seek out what brings you joy at work, and your career will follow suit.


Based in Silicon Valley, Shalini is an engineering senior director on our LinkedIn Talent Solutions team. A great example of LinkedIn’s Next Play internal-mobility culture, she also served in consumer and enterprise engineering management roles on our Consumer, Data and LinkedIn Sales Solutions teams. Before joining LinkedIn, she worked for Universal Planet and Sonasoft as a senior software engineer before taking on new engineering leadership roles at eBay as a principal software engineer and product engineering director. 

Shalini holds a bachelor’s and master’s in mathematics and computer applications from Banasthali Vidyapith University, and a master’s in computer science from California State University, Hayward. Shalini enjoys spending her free time with her three kids, and helping them learn and grow. 

Editor’s note: Considering an engineering/tech career at LinkedIn? In this Career Stories series, you’ll hear first-hand from our engineers and technologists about real life at LinkedIn — including our meaningful work, collaborative culture, and transformational growth. For more on tech careers at LinkedIn, visit: lnkd.in/EngCareers.



Building LinkedIn’s Skills Graph to Power a Skills-First World


Co-authors: Sofus Macskássy, Yi Pan, Ji Yan, Yanen Li, Di Zhou, Shiyong Lin

As industries rapidly evolve, so do the skills necessary for success. Skill sets for jobs globally have changed by 25% since 2015 and this number is expected to double by 2027. Yet, we’ve long relied on insufficient and unequal signals when evaluating talent and predicting success – who you know, where you went to school, or who your last employer was. If we look at the labor market instead through the lens of skills – the skills you have and the skills a role or industry demands – we can create a transparent and fair job matching process that drives better outcomes for employers and employees. 

This new reality requires a common understanding of skills, backed by better data. For nearly a decade, our Economic Graph has helped leaders benchmark and compare labor markets and economies across the world. A critical element of this analysis is the insight provided by LinkedIn’s Skills Graph, which creates a common language around skills to help us all better understand the skills that power the global workforce. The Skills Graph does this by dynamically mapping the relationships between 39K skills, 875M people, 59M companies, and other organizations globally. 

It also drives relevance and matching across LinkedIn – helping learners find content more relevant to their career path; helping job seekers find jobs that are a good fit; and helping recruiters find the highest quality candidates. For example, these relationships between skills mean we can detect that “cost management” in a job seeker’s profile is relevant to a job posting that lists “project budgeting” as a required skill.

Building the LinkedIn Skills Graph

At the heart of our Skills Graph lies our skills taxonomy. The taxonomy is a curated list of unique skills, detailed information about each skill, and the intertwined relationships between them. It’s built on a deep understanding of how skills power professional journeys, including what skills are required for a job, what skills a member has, and how members move from one position to the next.

Today, our taxonomy consists of over 39,000 skills spanning 26 languages, over 374,000 aliases (different ways to refer to the same skill – e.g., “data analysis” and “data analytics”), and more than 200,000 links between skills. Even more important than the volume of data, the key to unlocking the power of skills lies in the structure and relationships between the skills. To create a stronger network of connected skills in our taxonomy, we utilize a framework we call “Structured Skills.” This framework increases our understanding of every skill in our database by mapping its relationships to the skills around it, and creates richer, more accurate skill-driven experiences for our members and customers. For example:

  • If a member knows about Artificial Neural Networks, the member knows something about Deep Learning, which means the member knows something about Machine Learning.

  • If a job requires Supply Chain Engineering, having a skill in Supply Chain Management or Industrial Engineering is definitely also relevant (a minimal sketch of this kind of reasoning follows).
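
To make that kind of inference concrete, here is a minimal sketch of reasoning over “implies” relationships between skills. The skill names, the IMPLIES map, and the function below are illustrative placeholders, not LinkedIn’s actual taxonomy data or APIs.

# Minimal sketch: transitive reasoning over "implies" edges in a skills taxonomy.
# The skills and edges below are illustrative placeholders only.
IMPLIES = {
    "artificial neural networks": {"deep learning"},
    "deep learning": {"machine learning"},
    "supply chain engineering": {"supply chain management", "industrial engineering"},
}

def expand_skills(skills):
    """Return the input skills plus every skill they transitively imply."""
    expanded, frontier = set(skills), list(skills)
    while frontier:
        for implied in IMPLIES.get(frontier.pop(), ()):
            if implied not in expanded:
                expanded.add(implied)
                frontier.append(implied)
    return expanded

# A member who lists "artificial neural networks" also matches content
# that calls for "deep learning" or "machine learning".
print(expand_skills({"artificial neural networks"}))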

Creating meaningful and accurate relationships between skills is critical to getting the most out of our Structured Skills. To do this, our machine learning and artificial intelligence systems comb through massive amounts of data and suggest new skills and relationships between them. As our Skills Graph continues to grow and learn with AI, we are committed to maintaining the high quality of the data and connections found in our taxonomy. We do this with the help of trained taxonomists on our team, who manually review our skills data to verify its integrity and relevance.

Structured skills consists of meaningful relationships between skills that empower deep reasoning to match members to relevant content such as jobs, learning material, and feed posts

But building the taxonomy and Structured Skills is meaningless without connecting them to the jobs and members on our platform. Together, the Structured Skills and the mapping to our members and jobs make up our Skills Graph, and both are needed to unlock the full potential of a skills-based job market.


Structured skills enrich the set of skills for both members and jobs to ensure we can find all the relevant jobs for a member. We show the skill overlap so that members can see which of their skills are a match and also potential skill gaps that they might want to address for their own career growth

Leveraging Machine Learning to map skills to members and jobs

Although millions of LinkedIn members have added skills to their profile, many have not added their most relevant skills to their skills section or kept it up to date. Instead, they list relevant skills in their summary, within the job experience descriptions in their profiles, or on the resumes they submit. On the other hand, many jobs on LinkedIn don’t comprehensively describe what skills are needed, and many listings are not submitted by a recruiter through an online job posting but are instead ingested from our customers’ websites. In these scenarios where skills are not explicitly provided, it’s critical to pull skills data from the job descriptions, summaries, and more, to create a tool that drives reliable insights.

As you can imagine, this requires processing a lot of text, so we have built machine learning models that leverage natural language understanding, deep learning, and information extraction technologies. To help train these models, our human labelers use AI to connect text found across jobs, profiles, and learning courses to specific skills in our taxonomy. Our system then learns to recognize different ways to refer to the same type of skill. Combined with natural language processing, we extract skills from many different types of text – with a high degree of confidence – to make sure we have high coverage and high precision when we map skills to our members and job posts.
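
As a highly simplified illustration of the extraction step, here is a sketch that maps free text to canonical skills through an alias table. The alias table and the keyword matching are hypothetical stand-ins; the production system relies on deep learning and information extraction rather than simple lookups.

import re

# Hypothetical alias table: surface forms -> canonical skill in the taxonomy.
ALIASES = {
    "data analysis": "Data Analysis",
    "data analytics": "Data Analysis",
    "project budgeting": "Project Budgeting",
    "cost management": "Cost Management",
}

def extract_skills(text):
    """Map free text (job description, profile summary, resume) to canonical skills."""
    found = set()
    lowered = text.lower()
    for alias, canonical in ALIASES.items():
        if re.search(r"\b" + re.escape(alias) + r"\b", lowered):
            found.add(canonical)
    return found

print(extract_skills("Seeking an analyst with experience in data analytics and project budgeting."))
# {'Data Analysis', 'Project Budgeting'}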


We also leverage various clustering and machine learning algorithms to identify the core skills for a given job or function. We do this by applying these tools to all member histories and all job descriptions on our platform, which identifies the skills most likely associated with a job post or a member’s job experience. These techniques, together with Structured Skills, create a holistic picture of the skills a member has and the skills needed to do a job.
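
A naive way to approximate the “core skills” idea is a simple frequency cut-off over the skills extracted for a given title, sketched below with made-up data; the production approach uses clustering over far richer signals than counting.

from collections import Counter

# Hypothetical input: skill sets extracted from members who held a given title.
member_skill_sets = [
    {"Python", "Machine Learning", "SQL"},
    {"Python", "Deep Learning", "SQL"},
    {"Python", "Machine Learning", "Data Visualization"},
]

def core_skills(skill_sets, min_share=0.5):
    """Return skills that appear in at least `min_share` of the skill sets."""
    counts = Counter(skill for skills in skill_sets for skill in skills)
    threshold = min_share * len(skill_sets)
    return {skill for skill, n in counts.items() if n >= threshold}

print(core_skills(member_skill_sets))  # {'Python', 'SQL', 'Machine Learning'}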

When hirers create a job post on the LinkedIn platform, we use machine learning and Structured skills to suggest explicit skills that we can tag the post with to increase discoverability

These models are designed to continuously improve and learn over time based on engagement from members, job seekers, hirers, and learners on the LinkedIn platform. For example, when a hirer posts a new job on our platform and types in the job description, our machine learning model automatically suggests the skills associated with that posting. The hirer can then refine the selection by manually adding or removing suggested skills so that they best represent the qualifications for the job.


Looking forward

Beyond streamlining the hiring process, understanding members’ skills allows us to surface more relevant posts in their feed, and to suggest people to connect with and companies to follow. It also helps sales and marketing professionals on LinkedIn be more effective by using skills for ads targeting, and provides insights to our sales and marketing customers by sharing details on the skill sets of those who engage with their content. As our Skills Graph continues to evolve in parallel with the global workforce, it will only become smarter and deliver better outcomes for hirers, learners, job seekers, customers, and members.

Realizing a more equitable and efficient future of work will rely on building a deeper understanding of people’s abilities and potential. To keep up, some companies are already utilizing skills to identify qualified candidates – more than 40% of hirers on LinkedIn explicitly use skills data to fill their roles.

As our CEO Ryan Roslansky stated at LinkedIn’s Talent Connect event this year, “We can build a world where everyone has access to opportunity not because of where they were born, who they know, or where they went to school, but because of their actual skills and ability.” Our Skills Graph will continue to be a critical part of how we help make a skills-based labor market a reality. We’re excited to share updates as our work continues on this journey.

TopicGC: How LinkedIn cleans up unused metadata for its Kafka clusters


Introduction

Apache Kafka is an open-source event streaming platform where users create Kafka topics as data transmission units, and then publish or subscribe to those topics with producers and consumers. While most Kafka topics are actively used, some are no longer needed because business needs changed or the topics themselves were ephemeral. Kafka itself doesn’t have a mechanism to automatically detect unused topics and delete them. This is usually not a big concern, since a Kafka cluster can hold a considerable number of topics, from hundreds to thousands. However, if the number of topics keeps growing, it will eventually hit a bottleneck and have disruptive effects on the entire Kafka cluster. The TopicGC service was born to solve this exact problem. It has been proven to reduce pressure on Kafka by deleting roughly 20% of topics, and to improve Kafka’s produce and consume performance by at least 30%.

Motivation

As the first step, we need to understand how unused topics can cause pressure on Kafka. Like many other storage systems, all Kafka topics have a retention period, meaning that for any unused topics, the data will be purged after a period of time and the topic will become empty. A common question here is, “How could empty topics affect Kafka?” 

Metadata pressure

For topic management purposes, Kafka stores the metadata of topics in multiple places, including Apache ZooKeeper and a metadata cache on every single broker. Topic metadata contains information about partition and replica assignments.

Let’s do a simple calculation: say topic A has 25 partitions with a replication factor of three, meaning each partition has three replicas. Even if topic A is no longer used, Kafka still needs to store the location info of all 75 replicas somewhere.


The effect of metadata pressure may not be that obvious for a single topic, but it can make a big difference if there are a lot of topics. The metadata can consume memory from Kafka brokers and ZooKeeper nodes, and can add payload to metadata requests. 

Fetch requests

In Kafka, the follower replicas periodically send fetch requests to the leader replicas to stay in sync with the leaders. Even for empty topics and partitions, the followers still try to sync with the leaders. Because Kafka does not know whether a topic is permanently unused, it always forces the followers to fetch from the leaders. These redundant fetch requests lead to more fetch threads being created, which can cause extra network, CPU, and memory utilization, and can dominate the request queues, causing other requests to be delayed or even dropped.


Controller initialization

The Kafka controller is a broker that coordinates and manages other brokers in a Kafka cluster. Many Kafka requests have to be handled by the controller, so controller availability is crucial to Kafka.


On controller failover, a new controller has to be elected and take over the role of managing the cluster. The new controller takes some time to load the metadata of the entire cluster from ZooKeeper before it can act as the controller, which is called the controller initialization time. As mentioned earlier in this post, unused topics generate extra metadata that makes controller initialization slower and threatens Kafka availability. Issues can arise when the ZooKeeper response is larger than 1MB. For one of our largest clusters, the ZooKeeper response has already reached 0.75MB, and we anticipate it will hit this bottleneck within two to three years.

Service design

While designing TopicGC, we kept in mind a number of requirements. Functionally, we determined that the system must set criteria to determine whether a topic should be deleted, constantly run the garbage collector (GC) process to remove the unused topics, and notify the user before topic deletion.

Additionally, we identified non-functional requirements for the system. The requirements include ensuring no data loss during topic deletion, removal of all dependencies from unused topics before deletion, and the ability to recover the topic states from service failures.

To satisfy those requirements, we designed TopicGC based on a state machine model, which we will discuss in more detail in the following sections.

Topic state machine


To achieve all of the functional requirements, TopicGC internally runs a state machine. Each topic instance is associated with a state, and several background jobs periodically run and transition the topic states as needed. Table 1 describes all possible states in TopicGC.

Table 1: Topic states and descriptions
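
Table 1 appears as an image in the original post. As a rough, hypothetical sketch of what such a state machine could look like in code, using the states named elsewhere in this post (USED, UNUSED, INCOMPLETE) plus illustrative placeholders for the intermediate workflow steps:

from enum import Enum, auto

class TopicState(Enum):
    # USED, UNUSED, and INCOMPLETE are mentioned in this post; the other
    # states are illustrative placeholders for intermediate workflow steps.
    USED = auto()
    UNUSED = auto()
    NOTIFIED = auto()        # hypothetical: owner notification email sent
    WRITE_BLOCKED = auto()   # hypothetical: write access blocked
    DELETED = auto()
    INCOMPLETE = auto()      # usage detected mid-deletion; recover back to USED

# Hypothetical happy-path transitions driven by periodic background jobs.
TRANSITIONS = {
    TopicState.UNUSED: TopicState.NOTIFIED,
    TopicState.NOTIFIED: TopicState.WRITE_BLOCKED,
    TopicState.WRITE_BLOCKED: TopicState.DELETED,
    TopicState.INCOMPLETE: TopicState.USED,
}

def next_state(state, unused, usage_detected):
    """One step of the background jobs: detection moves USED -> UNUSED; any
    usage seen mid-pipeline aborts to INCOMPLETE, which recovers to USED."""
    if state is TopicState.USED:
        return TopicState.UNUSED if unused else TopicState.USED
    if usage_detected and state is not TopicState.DELETED:
        return TopicState.INCOMPLETE
    return TRANSITIONS.get(state, state)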


TopicGC workflow

With the help of internal states, TopicGC follows a certain workflow to delete unused topics.


Figure 1: TopicGC state machine

Detect topic usage


TopicGC has a background job to find unused topics. Internally, we use the following criteria to determine whether a topic is unused:

  • The topic is empty
  • There is no BytesIn/BytesOut
  • There is no READ/WRITE access event in the past 60 days
  • The topic is not newly created in the past 60 days 

The TopicGC service fetches the above information from ZooKeeper and a variety of internal data sources, such as our metrics reporting system.
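
A minimal sketch of that check is below. The TopicInfo snapshot and its fields are hypothetical stand-ins for the data TopicGC assembles from ZooKeeper and internal metrics systems.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

RETENTION_WINDOW = timedelta(days=60)

@dataclass
class TopicInfo:
    """Hypothetical snapshot assembled from ZooKeeper and metrics sources."""
    name: str
    is_empty: bool                 # no data currently retained
    bytes_in_out_60d: int          # BytesIn + BytesOut over the past 60 days
    read_write_events_60d: int     # READ/WRITE access events over the past 60 days
    created_at: datetime

def is_unused(info, now=None):
    """Apply the four criteria listed above."""
    now = now or datetime.now(timezone.utc)
    return (
        info.is_empty
        and info.bytes_in_out_60d == 0
        and info.read_write_events_60d == 0
        and now - info.created_at > RETENTION_WINDOW
    )

print(is_unused(TopicInfo(
    name="tracking.tmp.backfill",
    is_empty=True,
    bytes_in_out_60d=0,
    read_write_events_60d=0,
    created_at=datetime(2022, 1, 1, tzinfo=timezone.utc),
)))  # True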


Send email notification

If a topic is in the UNUSED state, TopicGC will trigger the email sending service to find the LDAP user info of the topic owner and send email notifications. This is important because we don’t know whether the topic is temporarily idle or permanently unused. In the former case, once the topic owner receives the email, they can take actions to prevent the topic from being deleted.

Block write access

This is the most important step in the TopicGC workflow. Consider this case: if a user produces data at the last second before topic deletion, that data will be lost along with the topic. Avoiding data loss is therefore a crucial challenge for TopicGC. To ensure the service doesn’t delete topics that receive last-minute writes, we introduced a block-write-access step before topic deletion. Once write access is blocked on a topic, there is no chance that TopicGC can cause data loss.


Notice that Kafka doesn’t have a mechanism to “seal” a topic, so we leverage LinkedIn’s internal tooling to block topic access. At LinkedIn, we have access-control services that let us manage access to all data resources, including Kafka topics. To seal a topic, TopicGC sends a request to the access service to block any read and write access to the topic.

Disable mirroring

The data of a topic can be mirrored to other clusters via Brooklin, a framework open-sourced by LinkedIn for streaming data between heterogeneous source and destination systems with high reliability and throughput at scale. Before deleting a topic, we need to disable Brooklin mirroring for it. Brooklin can be regarded as a wildcard consumer of all Kafka topics: if a topic is deleted without informing Brooklin, Brooklin will throw exceptions about consuming from non-existent topics. For the same reason, any other service that consumes from all topics must also be told to stop consuming from the garbage topics before they are deleted.

Delete topics

Once all preparations are done, the TopicGC service triggers the topic deletion by calling the Kafka admin client. The deletion process can be customized; in our case, we delete topics in batches. Because topic deletion can introduce extra load on Kafka clusters, we set an upper limit of three concurrent topic deletions.
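
To illustrate that last step, here is a minimal sketch using the open-source kafka-python admin client. TopicGC itself is a LinkedIn-internal service, so the client library, bootstrap address, and batching below are illustrative assumptions rather than the actual implementation.

from kafka.admin import KafkaAdminClient  # pip install kafka-python

MAX_CONCURRENT_DELETIONS = 3  # cap deletions to limit extra load on the cluster

def delete_in_batches(bootstrap_servers, garbage_topics):
    """Delete confirmed-unused topics a few at a time via the Kafka admin API."""
    admin = KafkaAdminClient(bootstrap_servers=bootstrap_servers)
    try:
        for i in range(0, len(garbage_topics), MAX_CONCURRENT_DELETIONS):
            batch = garbage_topics[i:i + MAX_CONCURRENT_DELETIONS]
            admin.delete_topics(topics=batch, timeout_ms=30_000)
    finally:
        admin.close()

# delete_in_batches("localhost:9092", ["unused.topic.a", "unused.topic.b"])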


Last minute usage check


Before making any actual changes to a topic (including blocking write access, disabling mirroring, and deleting the topic), we run a last-minute usage check on it. This adds an extra layer of protection against data loss. If TopicGC detects usage at any point during the deletion process, it marks the topic as INCOMPLETE and starts recovering it back to the USED state.

Impact of TopicGC

We launched TopicGC in one of our largest data pipelines, and were able to reduce the topic count by nearly 20%. In the graph, each color represents a distinct Kafka cluster in the pipeline.


Figure 2: Total topic count during TopicGC

Improvement on CPU usage

The topic deletion helps to reduce the total number of fetch requests in the Kafka clusters and, as a result, CPU usage drops significantly after the unused topics are deleted. Total Kafka CPU usage saw about a 30% reduction.


Figure 3: CPU usage improvement by TopicGC

Improvement on client request performance

Due to the reduction in CPU usage, Kafka brokers are able to handle requests more efficiently. As a result, Kafka’s request handling performance improved and request latencies dropped by up to 40%. Figure 4 shows the decrease in latency for metadata requests.


Figure 4: Kafka request performance improvement by TopicGC

Conclusion

Since we launched TopicGC to delete unused Kafka topics, it has deleted nearly 20% of our topics and significantly reduced the metadata pressure on our Kafka clusters. From our metrics, client request performance improved by around 40% and CPU usage dropped by up to 30%.

Future plans

As TopicGC has shown its ability to clean up Kafka clusters and improve Kafka performance, we have decided to launch the service on all of our internal Kafka clusters. We hope TopicGC will help LinkedIn use its Kafka resources more effectively.

Acknowledgements

Many thanks to Joseph Lin and Lincong Li for coming up with the idea of TopicGC and implementing the original design. We are also grateful for our managers Rohit Rakshe and Adem Efe Gencer, who provided significant support for this project. Last but not least, we want to shout out to the Kafka SRE team and Brooklin SRE team, who acted as helpful partners. With their help, we smoothly launched TopicGC and were able to see these exciting results.

Render Models at LinkedIn


Co-authors: Mahesh Vishwanath, Eric Babyak, Sonali Bhadra, Umair Saeed

Introduction

We use render models to pass data to our client applications, describing both the content (text, images, buttons, etc.) and the layout to display on the screen. This means most of this logic is moved out of the clients and centralized on the server, which enables us to deliver new features faster to our members and customers while keeping the experience consistent and being responsive to change.

Overview

Traditionally, many of our API models tend to be centered around the raw data that’s needed for clients to render a view, which we refer to as data modeling. With this approach, clients own the business logic that transforms the data into a view model to display. Often this business logic layer can grow quite complex over time as more features and use cases need to be supported.

This is where render models come into the picture. A render model is an API modeling strategy where the server returns data that describes the view that will be rendered. Other commonly used terms that describe the same technique are Server Driven User Interface (SDUI), or View Models. With render models, the client business logic tends to be much thinner, because the logic that transforms raw data into view models now resides in the API layer. For any given render model, the client should have a single, shared function that is responsible for generating the UI representation of the render model.


Architectural comparison between data modeling and render modeling

Example

To highlight the core differences in modeling strategy between a render model and data model, let’s walk through a quick example of how we can model the same UI with these two strategies. In the following UI, we want to show a list of entities that contain some companies, groups, and profiles.


An example UI of an ‘interests’ card to display to members

Following the data model approach, we would look at the list as a mix of different entity types (members, companies, groups, etc.) and design a model so that each entity type would contain the necessary information for clients to be able to transform the data into the view shown in the design.

record FollowableEntity {
  /**
   * Each model in the union below contains data that is related
   * to the entity it represents.
   */
  entity: union[
    Profile,
    Company,
    Group
  ]
}

record Profile {
  // Details for a Profile.
  …
}

record Company {
  // Details for a Company.
  …
}

record Group {
  // Details for a Group.
  …
}

When applying a render model approach, rather than worry about the different entity types we want to support for this feature, we look at the different UI elements that are needed in the designs.


An ‘interests’ card categorized by UI elements

In this case, we have one image, one title text, and two other smaller subtexts. A render model represents these fields directly.

record FollowableEntity {
  /**
   * An image to represent the logo for each element
   * e.g. the Microsoft logo.
   */
  image: Image

  /**
   * Text to represent the main bold text
   * e.g. ‘Microsoft’
   */
  titleText: Text

  /**
   * Text to represent the small sub text that displays a statistic
   * about the entity this element represents.
   * e.g. ‘10,975,744 followers’
   */
  statisticText: Text

  /**
   * Optional text to provide more information about the entity.
   * Empty in the first element case, ‘CEO of Microsoft’ in the 2nd one.
   */
  caption: optional Text
}

With the above modeling, the client layer remains very thin as it simply displays each image/text returned from the API. The clients are unaware of which underlying entity each element represents, as the server is responsible for transforming the data into displayable content.

API design with render models

API modeling with render models can live on a spectrum between the two extremes of frontend modeling strategies: pure data models and pure view models. With pure data models, different types of content use different models, even if they look the same in the UI. Clients know exactly what entity they are displaying and most of the business logic is on the clients, so complex product UX can be implemented as needed. Pure view models are heavily templated and clients have no context on what they are actually displaying, with almost all business logic on the API. In practice, we have moved away from using pure view models because of the difficulty of supporting complex functionality, such as client animations and client-side consistency, given the lack of context on the clients’ end.


Typically, when we use render models, our models have both view model and data model aspects. We prefer to use view modeling most of the time to abstract away most of the view logic on the API and to keep the view layer on the client as thin as possible. We can mix in data models as needed, to support the cases where we need specific context about the data being displayed.


A spectrum of modeling strategies between pure view models and pure data models

To see this concretely, let’s continue our previous example of a FollowableEntity. The member can tap on an entity to begin following the profile, company, or group. As a slightly contrived example, imagine that we perform different client-side actions based on the type of the entity. In such a scenario, the clients need to know the type of the entity, and at first blush it might appear that the render model approach isn’t feasible. However, we can combine these approaches to get the best of both worlds. We can continue to use a render model to display all the client data but embed the data model inside the render model to provide context for making the follow request.

record FollowableEntity {
  /**
   * An image to represent the logo for each element
   * e.g. the Microsoft logo.
   */
  image: Image

  /**
   * Text to represent the main bold text
   * e.g. ‘Microsoft’
   */
  titleText: Text

  /**
   * Text to represent the small sub text that displays a statistic
   * about the entity this element represents.
   * e.g. ‘10,975,744 followers’
   */
  statisticText: Text

  /**
   * Optional text to provide more information about the entity.
   * Empty in the first element case, ‘CEO of Microsoft’ in the 2nd one.
   */
  caption: optional Text

  /**
   * An embedded data model that provides context for interacting
   * with this entity.
   */
  entity: union[
    Profile,
    Company,
    Group
  ]
}

Client theming, layout, and accessibility

Clients have the most context about how information will be displayed to users. Understanding the dynamics of client-side control over the UX is an important consideration when we build render models. This is particularly important because clients can alter display settings like theme, layout, screen size, and dynamic font size without requesting new render models from the server.

Properties like colors, local image references, borders, or corner radius are sent using semantic tokens (e.g., color-action instead of blue) from our render models. Our clients maintain a mapping from these semantic tokens to concrete values based on the design language for the specific feature on a given platform (e.g. iOS, Android, etc.). Referencing theme properties with semantic tokens enables our client applications to maintain dynamic control over the theme.
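
As a rough illustration of that token-to-value mapping, here is a sketch in Python with hypothetical token names and values; real clients resolve these tokens through their platform design systems (Swift, Kotlin, JavaScript) rather than anything like this.

# Hypothetical semantic tokens sent by the server inside render models.
# Each client maps them to concrete values for the active theme.
THEMES = {
    "light": {"color-action": "#0A66C2", "color-text": "#000000"},
    "dark":  {"color-action": "#70B5F9", "color-text": "#FFFFFF"},
}

def resolve_color(token, theme="light"):
    """Resolve a semantic color token to a concrete value for the current theme."""
    return THEMES[theme][token]

# The server only says "color-action"; the client decides which blue that is.
print(resolve_color("color-action", theme="dark"))  # "#70B5F9"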

For the layout, our render models are not intended to dictate the exact layout of the UI because they are not aware of the total available screen space. Instead, the models describe the order, context, and priorities for views, allowing client utilities to ultimately determine how the components should be placed based on available space (screen size and orientation). One way we accomplish this is by referring to the sizes of views by terms like “small” or “large” and allowing clients to apply what that sizing means based on the context and screen size.


It is critical that we maintain the same level of accessibility when our UIs are driven by render models. To do so, we provide accessibility text where necessary in our models, map our render models to components that have accessibility concerns baked in (minimum tap targets), and use semantics instead of specific values when describing sizes, layouts, etc.


Write use cases

One of the most challenging aspects of render models is dealing with write use cases, like filling forms and taking actions on the app (such as following a company, connecting with a person, sending a message, etc.). These use cases need specific data to be written to backends and cannot be modeled in a completely generic way, making it hard to use render models.

Actions are modeled by sending the current state of the action and its other possible states from the server to the clients. This tells the clients exactly what to display. In addition, it allows them to maintain any custom logic to implement a complex UI or perform state-changing follow-up actions.
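
A minimal sketch of what such an action payload might look like on the client follows; the field names, states, and labels are illustrative, not LinkedIn’s actual schema.

from dataclasses import dataclass, field

@dataclass
class FollowAction:
    """Hypothetical action render model: the server sends the current state
    and the other possible states the client may transition to."""
    current_state: str = "NOT_FOLLOWING"
    # state -> button label the client should display while in that state
    possible_states: dict = field(default_factory=lambda: {
        "NOT_FOLLOWING": "Follow",
        "FOLLOWING": "Following",
    })

    def label(self):
        return self.possible_states[self.current_state]

action = FollowAction()
print(action.label())               # "Follow"
action.current_state = "FOLLOWING"  # after a successful follow request
print(action.label())               # "Following"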

To support forms, we created a standardized library to read and write forms, with full client infrastructure support out of the box. Similar to how traditional read-based render models attempt to leverage generic fields and models to represent different forms of data, our standardized forms library leverages form components as its backbone to generically represent data in a form by the type of UI element it represents (such as a ‘single line component’ or a ‘toggle component’).


Render models in practice

As we have mentioned above, the consistency of your UI is an important factor when leveraging render models. LinkedIn is built on a semantics-based design system that includes foundations like color and text, as well as shared components such as buttons and labels. Similarly, we have created layers of common UX render models in our API that include foundational and component models, which are built on top of those foundations.

Our foundational models include rich representations of text and images and are backed by client infrastructure that renders these models consistently across LinkedIn. Representing rich text through a common model and render utilities enables us to provide a consistent member experience and maintain our accessibility standards (for instance, we can restrict the usage of underlining in text that is not a link). Our image model and processing ensures that we use the correct placeholders and failure images based on what the actual image being fetched presents (e.g., a member profile). These capabilities of the foundational models are available without any client consumer knowledge of what the actual text or image represents and this information is all encapsulated by the server-driven model and shared client render utilities.

The foundational models can be used on their own or through component models that are built on top of the foundations. They foster re-use and improve our development velocity by providing a common model and shared infrastructure that resolves the component. One example is our common insight model, which combines an image with some insightful text.


A commonly used ‘insight’ model used throughout the site

Over the years, many teams at LinkedIn have taken on large initiatives to re-architect their pages based on render model concepts built on top of these foundational models. No two use cases are exactly alike, but a few of the major use cases include:

  • The profile page, which is built using a set of render model-based components stitched together to compose the page. For more details on this architecture, see this blog post published earlier this year.

  • The search results page, built using multiple card render model templates to display different types of search results in a consistent manner. See this blog post for more details.

  • The main feed, built centered around the consistent rendering of one update with optional components to allow for variability based on different content types.


A feed component designed around several components

  • The notifications tab, which helped standardize 50+ notification types into one simple render model template.

A notifications card designed using a standardized UI template

All of these use cases have seen some of the key benefits highlighted in this post: simpler client-side logic, a consistent design feel, faster iteration, and greater development and experimentation velocity for new features and bug fixes.

Render model tradeoffs

Render models come with their pros and cons, so it is important to properly understand your product use case and vision before implementing them.


Benefits

With render models, teams are able to create leverage and control when a consistent visual experience, within a defined design boundary, is required across diverse use cases. This is enabled by centralizing logic on the server rather than duplicating logic across clients. It fosters generalized and simpler client-side implementation, with clients requiring less logic to render the user interface since most business logic lives on the server.

Render models also decrease repeated design decisions and client-side work to onboard use cases when the use case fits an existing visual experience. It fosters generalized API schemas, thereby encouraging reuse across different features if the UI is similar to an existing feature.

With more logic pushed to the API and a thin client-side layer, it enables faster experimentation and iteration as changes can be made by only modifying the server code without needing client-side changes on all platforms (iOS, Android, and Web). This is especially advantageous with mobile clients that might have older, but still supported versions in the wild for long periods of time.

Similarly, as most of the business logic is on the server, it is likely that any bugs will be on the server instead of clients. Render models enable faster turnaround time to get these issues fixed and into production, as server-side fixes apply to all clients without needing to wait for a new mobile app release and for users to upgrade.


Disadvantages

As mentioned previously, render models rely on consistent UIs. However, if the same data backs multiple, visually-distinct UIs, it reduces the reusability of your API because the render model needs more complexity to handle the various types of UIs. If the UI does need to change outside the framework, the client code and server code need to be updated, sometimes in invasive ways. By comparison, UI-only changes typically do not require changes to data models. For these reasons, the upfront costs to implement and design render models are often higher due to the need to define the platform and its boundaries, especially on the client.

Render models are un-opinionated about writes and occasionally require write-only models or additional work to write data. This is contrasted with data models where the same data models can be used in a CRUD format.

Client-side tracking with render models has to be conceived at the design phase, where tracking with data models is more composable from the client. It can be difficult to support use case-specific custom tracking in a generic render model.

Finally, there are some cases where client business logic is unavoidable, such as complex interactions between various user interface elements (for example, animations or client-data interactions). In such scenarios, render models are likely not the best approach because, without the specific context, it becomes difficult to implement any client-side business logic.


When to use render models?

Render models are most beneficial when building a platform that requires onboarding many use cases that have a similar UI layout. This is particularly useful when you have multiple types of backend data entities that will all render similarly on clients. Product and design teams must have stable, consistent requirements and they, along with engineering, need to have a common understanding of what kinds of flexibility they will need to support and how to do so.

Additionally, if there are complex product requirements that need involved client-side logic, this may be a good opportunity to push some of the logic to the API. For example, it is often easier to send a computed text from the API directly rather than sending multiple fields that the client then needs to handle in order to construct the text. Being able to consolidate/centralize logic on the server, and thus simplifying clients, makes their behavior more consistent and bug-free.

On the flip side, if there is a lack of stability or consistency in products and designs, any large product or design changes are more difficult to implement with render models due to needing schema changes.

Render models are effective when defining generic templates that clients can render. If the product experience does not need to display different variants of data with the same UI, it would be nearly impossible to define such a generic template, and would often be simpler to use models that are more use case-specific rather than over-generalizing the model designs.

Acknowledgments

Render models have been adapted through many projects and our best practices have evolved over several years. Many have contributed to the design and implementation behind this modeling approach and we want to give a special shoutout to Nathan Hibner, Zach Moore, Logan Carmody, and Gabriel Csapo for being key drivers in formulating these guidelines and principles formally for the larger LinkedIn community.
