Challenges and practical lessons from building a deep-learning-based ads CTR prediction model


Introduction

At LinkedIn, our ads business is powered by click-through rate (CTR) prediction, a core machine learning model. CTR prediction estimates the probability that a LinkedIn member will click on a candidate advertisement. That probability is then used in ads auctions, which decide the order in which ads are displayed to members. A better CTR model enhances the member and advertiser experience by surfacing more relevant ads and spending advertiser budgets more efficiently.

In the past, we predicted ads CTR with a GLMix model. A highly optimized framework coupled with years of feature engineering made it a baseline that was hard to surpass. We recently replaced this model with a deep-learning-based system, and in this blog post we will describe some of the challenges we tackled, share practical lessons we learned, and explain how the transition brought large relevance lifts (+8.5% CTR) for our ads business.

We would also like to highlight that this work was enabled by LinkedIn ML frameworks and infrastructure including GDMix, Lambda Learner, and other libraries.

Three towers, three challenges


Figure 1: The three-tower model architecture. The shallow and the deep towers take in generalization features and are trained at daily frequency while the wide tower takes in memorization features and is re-trained at hourly frequency.

Our deep CTR model has a three-tower architecture: the “deep tower,” the “wide tower,” and the “shallow tower.” The outputs of the three towers are summed, fed into a sigmoid layer, and trained with a regular cross-entropy loss. While at first glance this looks similar to the popular wide-and-deep model, the actual setup is quite different. In this section, we will use each tower to introduce one unique challenge we tackled: three towers, three challenges.
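The tower combination described above can be sketched in a few lines. This is a minimal illustration with scalar tower outputs; the function and variable names are ours, not the production model's:

```python
import math

def predict_ctr(deep_logit, wide_logit, shallow_logit):
    """Sum the three tower outputs and squash with a sigmoid to get pCTR."""
    z = deep_logit + wide_logit + shallow_logit
    return 1.0 / (1.0 + math.exp(-z))

def cross_entropy(p_ctr, clicked):
    """Regular binary cross-entropy loss for a single example."""
    eps = 1e-12  # guard against log(0)
    return -(clicked * math.log(p_ctr + eps)
             + (1 - clicked) * math.log(1.0 - p_ctr + eps))

p = predict_ctr(deep_logit=-1.2, wide_logit=0.3, shallow_logit=-0.5)
loss = cross_entropy(p, clicked=1)
```

Because the towers combine additively before the sigmoid, each tower can be trained or frozen independently, which the wide tower's hourly re-training (below) relies on.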

The deep tower: complete feature interaction

The deep tower is a vanilla multilayer perceptron (MLP) whose input features include member and advertiser profiles, activities, and context features. These features are first converted to dense embeddings through embedding layers, then concatenated and fed to fully connected layers. The challenge with the deep tower is achieving complete feature interaction across member, ad, and context features.

In general, there are two ways to productionize deep learning:

  • (a) Train a deep model to generate some type of offline embedding, such as ads embedding and member embedding, then inject the embedding as a feature to the baseline model. The embedding can be stored in a key-value store and fetched during online scoring time.
  • (b) Train a deep model and serve the entire deep model online.

When comparing the two approaches, (a) has a much lower engineering cost to achieve because approach (b) requires setting up the entire deep model online in the serving stack. The downside of (a), however, is that its deep model is only based on members or ads, and it cannot capture the complete feature interaction across members, ads, and context.

Earlier, we attempted to use deep learning for ads CTR through approach (a). Unfortunately, those attempts did not succeed, which led us to take approach (b) to get the complete feature interaction. While building a large-scale deep learning serving system under the strict latency requirements of ads auctions is out of the scope of this blog, we did some post-ramping analysis and found that the interaction between context features and other features was critical to the relevance lift we saw during A/B tests. This showed that the complete feature interaction enabled by end-to-end deep model serving is a key to success.


The wide tower: fast memorization

The wide tower is a linear layer that takes in sparse ID features such as ad ID and advertiser ID. Essentially, these features help the model memorize the historical performance of each entity. Freshness is important for this type of feature because the performance of ads can trend differently over time and new ads/advertisers keep entering our platform. To ensure the freshness of our model, we perform frequent partial re-training of the wide tower. For each model, we first perform cold-start training on the other two towers. Then, we freeze their coefficients and perform frequent warm-start training on the wide tower using the latest data. The generalization features of that latest data are instantly scored by the other two towers and stored on HDFS as a cold-start offset after they are collected from Kafka tracking events. Then the cold-start offset and sparse ID features are used to update the coefficients of the wide tower. Because the input features are lightweight, the warm-start retraining process is fast, and with GDMix and Lambda Learner as our backbone, we are able to perform this partial re-training on an hourly basis.
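The warm-start step can be illustrated as logistic regression with a fixed per-example offset: the frozen deep and shallow towers contribute a precomputed cold-start offset, and only the wide tower's sparse-ID coefficients are updated. This is a toy sketch of the idea, not the GDMix/Lambda Learner implementation:

```python
import math
from collections import defaultdict

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def warm_start_wide_tower(examples, lr=0.1, epochs=20):
    """Fit only the wide tower's sparse-ID weights via SGD on log loss.
    Each example is (cold_start_offset, sparse_ids, clicked), where the
    offset is the frozen deep + shallow tower score for that example."""
    w = defaultdict(float)  # one coefficient per sparse ID (e.g. ad ID)
    for _ in range(epochs):
        for offset, ids, clicked in examples:
            z = offset + sum(w[i] for i in ids)
            grad = sigmoid(z) - clicked  # dLoss/dz for log loss
            for i in ids:
                w[i] -= lr * grad
    return w

# Toy hourly batch: ad "ad:1" gets clicks, ad "ad:2" does not.
batch = [(-2.0, ["ad:1", "adv:9"], 1),
         (-2.0, ["ad:1", "adv:9"], 1),
         (-2.0, ["ad:2", "adv:9"], 0)]
weights = warm_start_wide_tower(batch)
```

Because only these lightweight coefficients are touched, the update over an hour's data is cheap compared with retraining the full network.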


Figure 2: The complete training process is decomposed into 3 steps. Step 1: Training a model with the deep and the shallow towers only using generalization features. Step 2: Whenever new data comes into our offline system, we take the generalization features of the new data and perform inference using the deep & shallow towers trained in Step 1 and get a “cold-start offset”. Step 3: Training the wide tower only with the cold-start offset plus memorization features. Step 1 happens on a daily basis. Step 2 happens whenever a new batch of tracking data becomes available (every few minutes). Step 3 happens on an hourly basis.

We did ablation studies and found that 1) the wide tower gave a significant boost to model performance during A/B tests, and 2) increasing the re-training frequency from daily to hourly made noticeable improvements to model performance.

The shallow tower: ease of calibration

Challenge

Unlike many verticals where better relevance is the only major goal, ads ranking also considers monetization value in its objective, through a quantity called Expected Cost Per Click (ECPI). For click-type ads, a simplified formula for ECPI is as follows:

ECPI = pCTR × biddingPrice

where pCTR is the prediction score from our CTR model and biddingPrice is the amount of money advertisers are willing to pay if the member clicks on the ad.

ECPI is not just used for ranking; in many cases it is also used to charge advertisers. Thus, apart from the relative order derived from ranking, the absolute values of ECPI and pCTR matter, because an inaccurate pCTR can lead to over- or under-charging advertisers. The process of getting pCTR to the right absolute value (i.e., oCTR, the observed ground-truth probability of a click) is called calibration. Deep models tend to produce a different distribution of pCTR, and calibrating it is rather challenging. For both the GLMix baseline model and the new deep model, we use isotonic regression as a post-training calibration module. However, it did not solve the problem for the deep model. When we tested the first version of the deep model, it produced pCTR that was on average 40% higher than our baseline model, which meant that it could overcharge advertisers and hurt their ROI if it were ramped to production. We call this issue over-prediction.
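For context, isotonic regression fits a monotonically non-decreasing mapping from raw scores to observed click rates. A minimal pool-adjacent-violators (PAV) sketch of the technique, not our production calibration module:

```python
def isotonic_fit(scores, labels):
    """Pool-adjacent-violators: returns one calibrated value per input,
    ordered by score, such that the outputs are non-decreasing."""
    pairs = sorted(zip(scores, labels))
    merged = []  # each block is [sum_of_labels, count]
    for _, y in pairs:
        merged.append([y, 1])
        # Merge blocks while the monotonicity constraint is violated.
        while (len(merged) > 1 and
               merged[-2][0] / merged[-2][1] > merged[-1][0] / merged[-1][1]):
            s, n = merged.pop()
            merged[-1][0] += s
            merged[-1][1] += n
    out = []
    for s, n in merged:
        out.extend([s / n] * n)  # block mean = calibrated probability
    return out

calibrated = isotonic_fit([0.1, 0.4, 0.2, 0.9], [0, 1, 0, 1])
```

The fitted step function maps any new pCTR to the mean observed click rate of its block, which corrects a consistent over- or under-shoot as long as the offline and online score distributions match.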


The shallow tower trick

We found that a simple trick that alleviates the over-prediction problem is inserting a shallow tower into the model. The shallow tower is a linear layer that takes in almost the same features as the deep tower. While the theoretical explanation needs more study in the future, we can offer a hypothesis on why the shallow tower trick works. It has been empirically observed that deep models tend to be more overconfident in their predictions than linear models. The shallow-plus-deep-tower architecture can be thought of as a special residual block that combines a linear model and a deep model. Instead of directly optimizing for the desired underlying mapping, the deep tower now optimizes for the residual between the desired mapping and the linear model's mapping. We hypothesize that this architecture can not only prevent model degradation, but also produce a mapping function that is closer to the linear model's and reduces calibration error. However, more case studies are needed to reach this conclusion.


In practice, we found that adding the shallow tower reduces over-prediction from 40% to about 10%. Note that while both the shallow tower and the wide tower are linear layers, we do not combine them because the shallow tower takes in heavy features that cannot be processed and trained at hourly frequency.


Figure 3: Comparing the distribution before and after inserting the shallow tower: The Deep+Wide Network generates over-confident predictions (e.g. pCTR>0.5) while the Deep+Wide+Shallow Network has less of the issue.

The position feature

Another twist we made is removing the position feature from the deep tower and only feeding it to the shallow tower. Position refers to the position of the ad on the LinkedIn home feed, e.g., the second feed position. The position feature is special in the sense that it is a de-biasing feature that is only available during training time but not available during online serving due to the nature of our ads system. We performed an ablation study and found that putting the position feature into the deep tower enables the model to learn unwanted interaction between position and other features, which makes calibration harder.


Beyond the shallow tower: calibration and exposure bias

Despite the shallow tower trick alleviating the issue, we still had about 10% over-prediction, which prevented us from ramping the model to production. So another question we wanted to answer was, why is our isotonic-regression-based calibration module not fixing over-prediction?

The short answer is that exposure bias in the system leads to different data distributions in the offline and online datasets, so calibration models trained on the offline dataset cannot generalize to online request data. This has been observed in other industry applications and called “selection bias,” but we think “exposure bias” is a more accurate term here.


For each request in our online system, the model scores a few hundred ad candidates but only the most competitive ads (based on their bids and predicted CTRs) can win the auction, get exposed to members, and then be collected into our offline dataset. In other words, our offline dataset is biased by the baseline linear model (that was used for predicting CTRs) and thus different from our true online test set.
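The effect can be illustrated with a toy simulation (illustrative numbers only): when only the top candidates per request are exposed and logged, the offline dataset over-represents ads that the serving model already scores highly.

```python
def log_exposed(requests, top_k=2):
    """Keep only the top_k candidates per request by (bid * pCTR),
    mimicking how only auction winners end up in the offline dataset."""
    logged = []
    for candidates in requests:
        winners = sorted(candidates, key=lambda c: c["bid"] * c["pctr"],
                         reverse=True)[:top_k]
        logged.extend(winners)
    return logged

requests = [[
    {"ad": "a", "bid": 1.0, "pctr": 0.30},
    {"ad": "b", "bid": 1.0, "pctr": 0.20},
    {"ad": "c", "bid": 1.0, "pctr": 0.01},
]]
logged = log_exposed(requests)
mean_logged = sum(c["pctr"] for c in logged) / len(logged)
mean_all = sum(c["pctr"] for c in requests[0]) / len(requests[0])
```

Here the mean pCTR of the logged ads exceeds the mean over all candidates, so a calibration model fitted on the logged data sees a score distribution shaped by whichever model ran the auction.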


Figure 4: A system with the baseline model and a system with the deep model cause different biases in which ads get exposed and collected into the offline dataset. Their data samples and distributions are denoted in yellow and blue, respectively. When we trained the first deep model, all our offline data came from the baseline model's scoring (yellow) and the deep model did not show over-prediction offline. However, when the deep model was ramped online, it suffered from over-prediction on its own distribution of exposure data (blue). Thus, the solution is to collect its data samples into the offline dataset and then train the calibration models on them.

One naive solution to the problem was ramping our deep model to 100% traffic, collecting data that has over-prediction issues into the offline dataset, and then training the calibration model based on that data. However, the solution was not practical because it could cause drastic business metrics shifts. So instead, we made a compromise. First, we ramped the deep model to a small percentage of traffic, then only used the tracking data generated by the deep model to train its calibration module. We gradually ramped up the deep model and found that the over-prediction eventually came down to 0 as we ramped it higher and collected more data for calibration training.

Conclusion

The new ads CTR model combines deep feature interaction, fast memorization, and ease of calibration. In this blog, we discussed some practical lessons from building a deep-learning-based CTR model, such as using embeddings as features vs. end-to-end deep model serving, and ways to achieve hourly model retraining frequency. In particular, we shared that solving the over-prediction issue caused by deep models is a unique challenge in the ads domain, and that we are doing more studies to confirm our solution.

This article is a slice of a larger project that spanned more than one year and involved multiple teams. In particular, we would like to thank our teammates and leadership from the Ads AI team: Renpeng Fang, Mark Yang, Zhenqi Hu, David Pardoe, Hiroto Udagawa, Arjun Kulothungun, Onkar Dalal, and our collaborators from the AI Foundations team and Machine Learning Infra team: Jun Shi, Sida Wang, Keerthi Selvaraj, Haichao Wei, Yun Dai, Pei-Lun Liao. We would also like to thank Rupesh Gupta, Kayla Guglielmo, Katherine H. Vaiente, and the LinkedIn Editorial team for your reviews and suggestions.



Real-time analytics on network flow data with Apache Pinot


The LinkedIn infrastructure has thousands of services serving millions of queries per second. At this scale, having tools that provide observability into the LinkedIn infrastructure is imperative to ensure that issues in our infrastructure are quickly detected, diagnosed, and remediated. This level of visibility helps prevent the occurrence of outages so we can deliver the best experience for our members. To provide observability, there are various data points that need to be collected, such as metrics, events, logs, and flows. Once collected, the data points can then be processed and made available, in real-time, for engineers to use for alerting, troubleshooting, capacity planning, and other operations.

At LinkedIn, we developed InFlow to provide observability into network flows. A network flow describes the movement of a packet through a network and is the metadata of a packet sampled at a network device that describes the packet in terms of the 5-tuple: source IP, source port, destination IP, destination port, and protocol. It may also contain source and destination autonomous system numbers (ASNs), the IP address of the network device that has captured this flow, input and output interface indices of the network device where the traffic was sampled, and the number of bytes transferred.

Network devices can be configured to export this information to an external collector using various protocols. InFlow understands the industry standard sFlow and IPFIX protocols for collecting flows.

How LinkedIn leverages flow data

InFlow provides a rich set of time-series network data with over 50 dimensions, such as source and destination sites, security zones, ASNs, IP address type, and protocol. With this data, various analytical queries can be run to get meaningful insights about network health and characteristics.


Figure 1.  A screenshot from InFlow UI’s Top Services tab which shows the 5 services consuming the most network bandwidth and the variation of this traffic over the last 2 hours

Most commonly, InFlow is used for operational troubleshooting to get complete visibility into the traffic. For example, if there is an outage due to a network link capacity exhaustion, InFlow can be used to find out the top talkers for that link based on hosts/services that are consuming the most bandwidth (Figure 1) and based on the nature of the service, further steps can be taken to remediate the issue.

Flow data also provides source and destination ASN information, which can be used for optimizing cost, based on bandwidth consumption of different kinds of peering with external networks. It can also be used for analyzing data based on several dimensions for network operations. For example, finding the distribution of traffic by IPv4 or IPv6 flows or the distribution of traffic based on Type of Service (ToS) bits.

InFlow architecture overview


Figure 2. InFlow architecture

Figure 2 shows the overall InFlow architecture. The platform is divided into 3 main components: flow collector, flow enricher, and InFlow API with Pinot as a storage system. Each component has been modeled as an independent microservice to provide the following benefits:

  1. It enforces the single responsibility principle and prevents the system from becoming a monolith.
  2. Each of the components has different requirements in terms of scaling. Separate microservices ensure that each can be scaled independently.
  3. This architecture creates loosely coupled pluggable services which can be reused for other scenarios.

Flow collection

InFlow receives 50k flows per second from over 100 different network devices on the LinkedIn backbone and edge devices. InFlow supports sFlow and IPFIX as protocols for collecting flows from network devices. This is based on the device’s vendor support for the protocols and minimal impact of flow export on the device’s performance. The InFlow collector receives and parses these incoming flows, aggregates the data into unique flows for a minute, and pushes them to a Kafka topic for raw flows.
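Conceptually, the collector's aggregation step keys each sample on the 5-tuple plus a minute bucket and sums the byte counts. A sketch with hypothetical field names, not InFlow's actual schema:

```python
from collections import defaultdict

def aggregate_flows(flows):
    """Collapse raw samples into one record per (5-tuple, minute),
    summing byte counts; the result is what gets pushed to Kafka."""
    agg = defaultdict(int)
    for f in flows:
        key = (f["src_ip"], f["src_port"], f["dst_ip"], f["dst_port"],
               f["protocol"], f["timestamp"] // 60)  # minute bucket
        agg[key] += f["bytes"]
    return agg

samples = [
    {"src_ip": "10.0.0.1", "src_port": 443, "dst_ip": "10.0.0.2",
     "dst_port": 55001, "protocol": "TCP", "timestamp": 120, "bytes": 1500},
    {"src_ip": "10.0.0.1", "src_port": 443, "dst_ip": "10.0.0.2",
     "dst_port": 55001, "protocol": "TCP", "timestamp": 150, "bytes": 500},
]
unique_flows = aggregate_flows(samples)
```

Aggregating to the minute both shrinks the downstream Kafka volume and matches the per-minute granularity users query.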

Flow enrichment

The data processing pipeline for InFlow leverages Apache Kafka and Apache Samza for stream processing of incoming flow events. Our streaming pipeline processes 50k messages per second, enriching the data with 40 additional fields (like service, source and destination sites, security zones, ASNs, and IP address type), which are fetched from various internal services at LinkedIn. For example, our data center infrastructure management system, InOps, provides information on the site, security zone, security domain of the source, and destination IPs for a flow. The incoming raw flow messages are consumed by a stream processing job on Samza and after adding the additional enriched fields, the result is pushed to an enriched Kafka topic.
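The enrichment step is conceptually a streaming lookup join: each raw flow is decorated with fields fetched from internal services such as InOps. A sketch with an in-memory lookup standing in for the service call (field names are hypothetical):

```python
# Stand-in for the InOps lookup; in production this is a service call.
IP_METADATA = {
    "10.0.0.1": {"site": "site-a", "security_zone": "prod"},
    "10.0.0.2": {"site": "site-b", "security_zone": "corp"},
}

def enrich_flow(raw_flow, metadata=IP_METADATA):
    """Add site/security-zone fields for the source and destination IPs,
    mirroring what the Samza job does before writing the enriched topic."""
    enriched = dict(raw_flow)
    for side in ("src", "dst"):
        meta = metadata.get(raw_flow[f"{side}_ip"], {})
        enriched[f"{side}_site"] = meta.get("site", "unknown")
        enriched[f"{side}_security_zone"] = meta.get("security_zone", "unknown")
    return enriched

flow = enrich_flow({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "bytes": 1500})
```

Keeping enrichment as its own stage means the raw topic stays a faithful record of what the devices exported, while consumers of the enriched topic get query-ready dimensions.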

Data storage

InFlow requires storage of tens of TBs of data with a retention of 30 days. To support its real-time troubleshooting use case, the data must be queryable in real-time with sub-second latency so that engineers can query the data without any hassles during outages. For the storage layer, InFlow leverages Apache Pinot.


InFlow UI


Figure 3.  A screenshot from InFlow UI’s Explore tab which provides a self-service interface for users to visualize flow data by grouping and filtering on different dimensions

The InFlow UI is a dashboard pre-populated with the most commonly used visualizations of flow data, and it provides a rich interface where the data can be filtered or grouped by any of the 40 different dimension fields. The UI also has an Explore section, which allows for the creation of ad-hoc queries. The UI is built on top of the InFlow API, a middleware responsible for translating user input into Pinot queries and issuing them to the Pinot cluster.

Pinot as a storage layer

In the first version of InFlow, data was ingested from the enriched Kafka topic to HDFS. We leveraged Trino to facilitate user queries on the data present in HDFS. However, the ETL and aggregation pipeline added a 15-20 minute delay, reducing the freshness of the data. Additionally, query latencies to HDFS using Trino were on the order of 15-30 seconds. This latency and delay were acceptable for historical data analytics; however, for real-time troubleshooting, the data needs to be available with a maximum delay of 1 minute.


Based on the query latency and data freshness requirements, we explored several storage solutions available at LinkedIn (like Espresso, Kusto, and Pinot) and decided on onboarding our data to Apache Pinot. When looking for solutions, we needed a reliable system providing real-time ingestion and sub-second query latencies. Pinot's support for Lambda and Lambda-less architectures, real-time ingestion, and low latency at high throughput could help us achieve optimal results. Additionally, the Pinot team at LinkedIn is experimenting with a new use case called Real-time Operational Metrics Analysis (ROMA), which enables engineers to slice and dice metrics along different combinations of dimensions to monitor infrastructure in near real-time, analyze the last few weeks/months/years of data to discover trends and patterns for capacity forecasting and planning, and find the root cause of outages quickly to reduce the time to recovery. These objectives aligned well with our problem statement of processing large numbers of metrics in real-time.


The Pinot ingestion pipeline consumes directly from the enriched Kafka topic and creates the segments on the Pinot servers, which improves the freshness of the data in the system to less than a minute. User requests from InFlow UI are converted to Pinot SQL queries and sent to the Pinot broker for processing. Since Pinot servers keep data and indices in cache-friendly data structures, the query latencies are a huge improvement from the previous version where data was queried from disk (HDFS).

Several optimizations were needed to reach these query latency and ingestion targets. Because the data volume of the input Kafka topic is high, we experimented with the ingestion parameters to determine the optimal number of partitions in the topic to allow for parallel consumption into segments in Pinot. Most of our queries involved a regexp_like condition on the devicehostname column, which is the name of the network device that exported the flow; this is used to narrow down on a specific plane of the network. regexp_like is inefficient because it cannot leverage any index, so to resolve this we set up an ingestion transformation in Pinot: a transformation function applied to the data before it is ingested. The transformation created a derived column, flowType, which classifies a flow into a specific plane of the network based on the name of the device that exported it. For example, if the exporting device is at the edge of our network, the flow can be classified as Internet-facing. The flowType column is now an indexed column used for equality comparisons instead of regexp_like, and this helped improve query latency by 50%.
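The derived-column idea can be sketched as a pure function applied at ingestion time; in Pinot this logic would live in the table's ingestion transform config. The device-naming prefixes below are hypothetical:

```python
def derive_flow_type(devicehostname):
    """Classify a flow into a network plane from the exporting device's
    name, so queries can use an indexed equality filter on flowType
    instead of regexp_like on devicehostname."""
    if devicehostname.startswith("edge-"):
        return "INTERNET_FACING"
    if devicehostname.startswith("bb-"):
        return "BACKBONE"
    return "DATACENTER"

# Query-time filter goes from something like
#   WHERE regexp_like(devicehostname, '^edge-.*')
# to an indexed equality comparison:
#   WHERE flowType = 'INTERNET_FACING'
flow_type = derive_flow_type("edge-router-01")
```

Precomputing the classification once per ingested row trades a tiny amount of storage for removing a regex evaluation from every query.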


Queries from InFlow always request data from a specific time range. To improve query performance, timestamp-based pruning was enabled in Pinot. This improved query latencies because only relevant segments are selected for processing, based on the filter conditions on the timestamp column. Based on the Pinot team's input, indexes on the different dimension columns were also set up to aid query performance.

Conclusion


Figure 4.  Latency metric for InFlow API query for top flows in the last 12 hours before and after onboarding to Pinot

Following the successful onboarding of flow data to a real-time table on Pinot, the freshness of data improved from 15 minutes to 1 minute and query latencies were reduced by as much as 95%. Some of the more expensive queries, which took as long as 6 minutes with Trino, now complete in 4 seconds with Pinot. This has made it much easier for network engineers at LinkedIn to get the data they need for troubleshooting or for running real-time analytics on network flow data.

What’s next

The current network flow data only provides us with sampled flows from the LinkedIn backbone and edge network. Skyfall is an eBPF-based agent, developed at LinkedIn, that collects flow data and network metrics from the host's kernel with minimal overhead. The agent captures all flows for the host without sampling and will be deployed across all servers in the LinkedIn fleet. This will provide 100% coverage of flows across our data centers and enable us to support more use cases that require unsampled flow data, such as security audit and validation. Because the agent collects more data from more devices, the scale of data collected by Skyfall is expected to be 100 times that of InFlow. We are looking forward to leveraging the InFlow architecture to support this scale and provide real-time analytics on top of the rich set of metrics exported by the Skyfall agent. Another upcoming feature that we are excited about is leveraging InFlow data for anomaly detection and more traffic analytics.

Acknowledgements

Onboarding our data to Pinot was a collaborative effort and we would like to express our gratitude to Subbu Subramaniam, Sajjad Moradi, Florence Zhang, and the Pinot team at LinkedIn for their patience and efforts in understanding our requirements and working on the optimizations required for getting us to the optimal performance.


Thanks to Prashanth Kumar for the continuous dialogue in helping us understand the network engineering perspective on flow data. Thanks to Varoun P and Vishwa Mohan for their leadership and continued support.


Feathr joins LF AI & Data Foundation


In April 2022, Feathr was released under the Apache 2.0 license and we announced, in close conjunction with our Microsoft Azure partners, native integration and support for Feathr on Azure. Since being open sourced, Feathr has achieved substantial popularity among the machine learning operations (MLOps) community. It has been adopted by companies of various sizes across multiple industries and the community continues to grow rapidly. Most excitingly, more and more open-source enthusiasts are contributing code to Feathr.

It’s clear that many others experience the same pain points that Feathr aims to address. That’s why we are excited to share it with a broader audience and for Feathr to be adopted by a broader open-source community with help from LF AI & Data.

Donating Feathr to LF AI & Data will help ensure that it continues to grow and evolve across various dimensions, including visibility, user base, and contributor base. The Feathr development team will also have more opportunities to collaborate with other member companies and projects, such as achieving richer online store support via integration with Milvus and JanusGraph, and adopting the open data lineage standard from OpenLineage. As a result, we hope Feathr helps AI engineers build and scale feature pipelines and feature applications in ways that push MLOps tech stacks and the industry forward for years to come.

The Feathr feature store provides an abstraction layer between raw data and ML models. This abstraction layer standardizes and simplifies feature definition, transformation, serving, storage, and access from within ML workflows or applications. Feathr empowers AI engineers to focus on feature engineering while it takes care of data serialization format, connecting to various databases, performance optimization, and credential management. More specifically, Feathr helps:

  • Define features once and use them in different scenarios, like model training and model serving
  • Create training datasets with point-in-time correct semantics
  • Connect to various offline data sources (data lakes, and data warehouses), and then transform source data into features
  • Deliver feature data from offline system to online store for faster online serving
  • Discover features or share features among colleagues or teams with ease

To learn more, please visit Feathr's GitHub page and our April 2022 blog post, Open sourcing Feathr – LinkedIn's feature store for productive machine learning.

Acceptance into LF AI & Data is an important recognition from the Linux Foundation. We believe a large, diverse, healthy, and self-sustaining Feathr open-source community is important. We're excited for this new chapter and to welcome more people into the Feathr community.


Career stories: Rejoining LinkedIn to scale our media infrastructure


Originally from Argentina, systems & infrastructure engineering leader Federico was a founding member of the Media Infrastructure team in 2015. Now based in Bellevue, Wash., Federico shares how his supportive mentor, LinkedIn’s “sweet spot” scale, and the distinctive engineering challenges here ultimately brought him back to LinkedIn in 2019.


My love for engineering started in my home country of Argentina. After working as an engineer in a corporate setting for a few years, I decided to start my own company focused on custom software development. I loved the interesting problems I could solve every day for my clients, but I was searching for greater economic opportunities in the U.S., where most of my clients were based. After working as a contractor for YouTube, I found my passion for media and video systems engineering.

Joining and rejoining LinkedIn

When LinkedIn reached out to me with an opportunity to build their video platform in 2015, I jumped at the chance. It was thrilling to join LinkedIn at a time when we were launching in-feed video. What originally started as a team of two grew to nine people, and that's when LinkedIn began training me to step into my first management role for the Media Infrastructure team.

After growing in my management position for a few years, I left LinkedIn for an opportunity working on larger scale systems. But I quickly became burned out and missed my original role as an individual contributor at LinkedIn. My previous manager at LinkedIn was so supportive. I was offered a role as a technical architect (i.e., Senior Staff) for media infrastructure, which allowed me to return to LinkedIn with new technical knowledge, and the same passion for my work.


Making the move to a new LinkedIn home base

Once our team had grown to almost 40 people, we reached the point at which it made sense to look for additional engineering talent outside the San Francisco Bay and New York City areas. It is challenging to find engineers in the media domain since very few companies are doing what LinkedIn does at scale. That’s when we started considering the next office location as an opportunity to bring in more talent.

Ultimately, we decided on Bellevue, Washington. After eight years in the Bay Area, I was ready for a move, and Bellevue was the right fit for my wife and me for many reasons. For example, many of the media companies we partnered with had a strong engineering presence in Seattle. Our driving motivation was to spearhead the company culture and to build an identity for a new LinkedIn office. The Bellevue office just turned one year old and we have been able to build a thriving engineering community here that’s growing quickly.


Taking ownership and giving back

In my current role as a Principal Staff Software Engineer, I love that I can mix the technical side of engineering with driving the strategic and product roadmap for my organization.

As an infrastructure engineer, there’s a sweet spot here between the scale of your work and the size of your engineering team at LinkedIn. We have relatively small teams tackling very large problems in complex technical domains. This creates great opportunities for individual ownership over a significant engineering problem on a large scale. We have space to get involved and truly make a difference instead of simply being a cog in a wheel.


Throughout my time in Silicon Valley, so many mentors were instrumental in shaping my career. As I’ve grown, I’ve tried to prioritize paying it forward by mentoring my team and other engineers at LinkedIn. Relationships matter, especially at LinkedIn. Building your network is a really core value here, because we thrive on connections.

More About Federico

Based in Bellevue, Washington, Federico is a Principal Staff Systems & Infrastructure Engineer on the LinkedIn Media Infrastructure team. Prior to his time at LinkedIn, Federico's engineering career led him from launching his own software development company, ESTUDIO42, to software engineering roles at YouTube and Instagram. Federico holds a degree in Computer Engineering from the Universidad Nacional de Tucuman in Argentina. Outside of work, Federico enjoys traveling with his wife, cooking, visiting shuttle expeditions, and mixing music.
