Metal as a Service (MaaS): DIY server-management at scale

Guaranteeing that our servers are continually upgraded to secure and vetted operating systems is one major step we take to ensure our members and customers can access LinkedIn to look for new roles, access new learning programs, or exchange knowledge with other professionals. LinkedIn has a large fleet of on-premise servers that depend on internal tooling to stay on the latest operating systems. This post introduces an internal tool that serves as an interface for managing servers’ lifecycles at LinkedIn scale. We will cover the rationale behind this tool’s existence, the path to making it available to our major consumers (i.e., site reliability engineers), and how we rearchitected and scaled this service from only being able to accept a maximum of 72,000 server submissions per day to having no limitations on acceptance rate.
The need for a solution
Before Metal as a Service (MaaS), all server-upgrade requests were relayed to engineers (from the Production Systems Software Engineering organization) via Jira tickets, and it became their sole responsibility to drive the effort manually. Although the process was well-defined, it required a transfer of control from host/pool owners to an engineer in a different organization. Unexpected delays often crept in from working through communications over a ticket, and some issues required a more hands-on approach, with data-center technicians interceding to move things forward. The biggest challenge was therefore delegating server lifecycle management to the corresponding SREs and pool owners; removing an extra layer of coordination would help isolate issues more quickly and give owners a sense of control over the servers they were responsible for.
Background
To set the context for the upcoming discussion, we will define the following terms:
Overlapping hosts
One of the core concerns raised during interactions with partner SRE teams was how this product would detect overlapping requests, i.e., how it could isolate and deter members from attempting different sets of operations on a common set of hosts. Before the alpha release, we added a check verifying whether any hosts in a newly submitted batch are already part of another active batch. If so, the current batch would be invalidated with appropriate messaging to the end user.
Reimage
An OS upgrade that purges existing data from the HDD and restores the server to a clean state, with host-specific configurations bootstrapped.
There was a version of “reimager” before MaaS. While it was an effective tool, it was not queryable and required specific prerequisites to be fulfilled manually. With the overarching theme of enabling site reliability engineers (SREs) to take ownership of this entire process, we had to think beyond the existing solution, which led to designing a tool that could give SREs direct access for managing server lifecycles. A good option for this was exposing the different functionalities via an API. Another shift we hoped to bring in was the concept of a batch; this was explicitly investigated to break the hard dependence on Jira tickets as the source of truth. Batching was a relevant concept because it was a common grouping semantic under which host owners could define a common set of configurations to apply, such as which OS release to upgrade all hosts to, which action to perform on all the hosts, etc. We also wanted to ensure that only the right audience had access rights to interact with MaaS, as the majority of actions performed by this product (like reimage, reboot, decommission, disk-wipe, etc.) can be destructive. The anticipated outcome was a thin layer that would perform data validation, update external sources to reflect that the submitted servers are about to be mutated, and submit these hosts to a downstream service for further action.
Aloha MaaS!
With a basic understanding of how the server-upgrade workflow needed to evolve, the PSSEBuild team interacted with various SRE teams to gather requirements that best fit their usability needs. With the design completed, we wrote an API that SRE teams could interact with directly. Metal as a Service (MaaS) is a self-service API that allows end users to upgrade (reimage), reboot, power on/off, wipe attached disks, and decommission servers in batches. At its heart, MaaS was designed as a CRUD Flask-based application managed by systemd (or something akin to it). This application exposes a RESTful API that authorized users can exercise. We chose user association with an internal Active Directory (AD) group to enforce authorization; all authentication requests were also AD-based. With a basic structure in place, we worked on exposing endpoints for:
- Submitting new requests to process: any of “reimage,” “reboot,” etc. On successful submission, MaaS returns a batch-id that users can use as a reference for future interactions (a sketch of such a submission endpoint appears after this list).
- Querying the status of a batch by batch-id and hostnames
- Querying batches submitted by AD username
- Querying statistics of batch runtimes
- Canceling batches that were accidentally submitted
- Querying the backend service to gauge the count of active server upgrades across data centers
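To make the shape of that submission endpoint concrete, here is a minimal Flask sketch. The route path, request fields, and the AD-group lookup helper are illustrative assumptions, not MaaS’ actual implementation.

```python
# Hypothetical sketch of a batch-submission endpoint; the route path, request
# fields, and AD-group check are illustrative assumptions, not MaaS' actual code.
import uuid

from flask import Flask, jsonify, request

app = Flask(__name__)

ALLOWED_ACTIONS = {"reimage", "reboot", "poweron", "poweroff", "diskwipe", "decommission"}

def user_in_ad_group(username: str) -> bool:
    """Placeholder for an Active Directory group-membership lookup."""
    return True  # stub: replace with a real AD query

@app.route("/v1/batches", methods=["POST"])
def submit_batch():
    auth = request.authorization
    if auth is None or not user_in_ad_group(auth.username):
        return jsonify({"error": "not authorized"}), 403

    payload = request.get_json(force=True)
    hosts = payload.get("hosts", [])
    action = payload.get("action")
    if not hosts or action not in ALLOWED_ACTIONS:
        return jsonify({"error": "invalid batch"}), 400

    # Persist the batch and hand the hosts off to the downstream service here.
    batch_id = str(uuid.uuid4())
    return jsonify({"batch_id": batch_id, "status": "accepted"}), 202
```

A client would then reference the returned batch-id when querying status, canceling a batch, or pulling runtime statistics, as described in the list above.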
Beyond exposing functionality over an API, we also aimed to improve visibility across the entire pipeline. We leveraged Iris-based alerting and an internally available event bus through which SREs could learn the state of their submissions without continually interacting with the API. Iris-based alerting is more granular and synchronous; submitters can be notified via diverse delivery methods (email, Slack, SMS, etc.). Iris would ping them once a batch was successfully accepted and again when a batch completed execution. There is often a need to triage a wedged submission, so we enabled MaaS to create tickets routed to an internal team that addresses one-off failures, tagging the submitters on those tickets so they are aware of the progress.
As MaaS’ adoption grew, its architecture and deployment scheme had to evolve to ensure high availability while reducing the human intervention required for product release. We will now describe the evolution of this tool’s architecture, the challenges faced with growth in its adoption, and share some results of its overall performance over time.
Path to a minimum viable product
Figure 1: Architecture layout of MaaS at GA (hosted out of a single host)
Any software goes through multiple iterations of improvements and releases before being deemed stable. For MaaS, the starting point was co-hosting the web service, relational database (Postgres), and Redis-based caching layer on a single server. At the time of the alpha release, we were mainly focused on getting a bare-minimum product out: one that clients could interact with and give us actionable feedback on, and one that could appropriately forward requests to a downstream service (AutoBuild). At this stage, all interactions with MaaS were over HTTP (admittedly, this was far from ideal, as clients’ credentials were exchanged in plaintext). The application was managed via systemd and required manual intervention for deployments. MaaS interacts with many external systems to validate the state of servers and to mutate properties in some of those systems. Wherever a state mutation is expected, MaaS needs to interact using credentials that have been authorized. Because we needed to get the ball rolling, we added the credentials to an internally distributed GPG keystore. When the service restarted, an engineer would be prompted to input their authentication credentials for the GPG keystore; on successful authentication, MaaS would be allowed access to the credentials necessary for interacting with authorized external services.
Another point to highlight is that the submission pipeline had specific components that could only process one request at a time. In Figure 2, steps 1 through 6 had to conclude before a new request could be processed. During the minimum viable product (MVP) phase, we empirically determined this window to be approximately two minutes, which meant that MaaS could only process one request every two minutes.
Figure 2: Request-processing workflow between client’s submission and MaaS
The primary rationale behind this design choice was data consistency. In that deployment model, multiple API workers functioned in tandem without sharing common memory or database connectors. The underlying database must be consistent for each worker thread to compute “overlapping_hosts” correctly; if multiple copies of the worker threads tried to calculate this result in parallel, an incorrect computation could occur from dirty or unflushed concurrent database sessions.
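To illustrate why this matters, a serialized overlapping-hosts check might look roughly like the sketch below. The schema and column names are assumptions; the key point is that this check and the insert of the new batch must not interleave across concurrent database sessions.

```python
# Hypothetical overlapping-hosts check; the schema (batches, batch_hosts) is an
# illustrative assumption. The check and the insert of the new batch must run
# within one transaction (or one serialized worker) so that two concurrent
# submissions sharing hosts cannot both pass.
def overlapping_hosts(cursor, submitted_hosts):
    placeholders = ", ".join(["%s"] * len(submitted_hosts))
    cursor.execute(
        f"""
        SELECT DISTINCT bh.hostname
        FROM batch_hosts AS bh
        JOIN batches AS b ON b.id = bh.batch_id
        WHERE b.state IN ('accepted', 'running')
          AND bh.hostname IN ({placeholders})
        """,
        tuple(submitted_hosts),
    )
    return {row[0] for row in cursor.fetchall()}
```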
Issues with MVP
- Unavailability of service redundancy and manual failover mechanism: Traffic was served from a single deployment. In case of an irrecoverable failure of the primary deployment node, failover would have to be initiated manually. We realized the importance of MaaS moving to an active-active deployment scheme, where losing one node does not result in a prolonged service interruption.
- Reliance on non-standard, aged, locally managed services (with limited redundancy) and special hardware: At the time of release, since the predominant intent was to have a minimum viable product available, we focused on locally available external dependencies: Redis for caching and returning API responses, and PostgreSQL for containing and managing the bulk of the data. This meant relying on an unmanaged data layer. Redis (for caching) and PostgreSQL (as the primary datastore) served as single points of failure for the product. There was no redundancy for data stored in Redis; any data corruption would halt the cache layer, causing API response times to spike. Data replication for PostgreSQL was similarly not robust.
- Dependence on clunky credential management: MaaS’ deployment had a strict dependence on the presence of a GPG-secured keystore. It required an engineer (with access to this keystore) to be logged in at the time of deployment.
- All interactions were over HTTP: While this was enough for MaaS to be usable, it inherently posed a risk due to the unencrypted transfer of credentials. MaaS’ API should enable all interactions over HTTPS.
- Globally enforced two-minute backoff per submission: As traffic to MaaS continued to ramp up, its rate-limiting behavior was leading to a bad user experience. We envisioned that MaaS should allow unlimited submissions without forcing end users to write fancy contraptions/wrappers to make MaaS accept their requests; this had become a common pain point and warranted remediation. MaaS needed better means of managing bandwidth while keeping its backend performant.
Decoupling overlapping-hosts computation from submissions
While the existing solution allowed us to compute and deliver overlapping-host isolation confidently, with rising traffic and adoption, throttling became a common pet peeve among our users. We oriented ourselves toward letting them submit to MaaS at whatever frequency they preferred rather than being hindered by a global backoff. We saw the potential of a design change that could break the dependence of data validation on request submission; in essence, we intended to decouple the client-facing piece from the more compute-intensive verification and processing piece. We proposed that MaaS be bifurcated into an API (which would accept requests without rate limits) and a backend that would periodically read from a distributed messaging queue and perform the necessary operations before processing a request in its entirety. We decided to leverage Kafka as the distributed messaging queue; the choice mainly stemmed from its widespread use within LinkedIn and its dedicated support SLA. After wiring our API and backend with Kafka-REST, MaaS could accept as many user requests as needed. The backend processed them sequentially while maintaining FIFO ordering, which was necessary for computing overlapping hosts for incoming requests. The overall workflow started to resemble the following figure:
Figure 3: Proposed changes to MaaS’ internals for removing global submission backoffs
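The resulting split can be sketched roughly as follows. This example uses the open source kafka-python client rather than LinkedIn’s internal Kafka-REST wiring, and the topic name and message shape are assumptions for illustration only.

```python
# Hypothetical sketch of the API/backend split around Kafka; the topic name,
# message shape, and use of kafka-python (instead of the internal Kafka-REST
# wiring) are illustrative assumptions.
import json

from kafka import KafkaConsumer, KafkaProducer

TOPIC = "maas-submissions"  # assumed topic name

# API side: accept and enqueue the submission immediately, with no backoff.
producer = KafkaProducer(
    bootstrap_servers=["kafka:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def enqueue_submission(batch_id, hosts, action):
    producer.send(TOPIC, {"batch_id": batch_id, "hosts": hosts, "action": action})
    producer.flush()

# Backend side: consume sequentially (FIFO within a partition) and only then
# run the expensive validation, such as the overlapping-hosts computation.
def run_backend(validate_and_process):
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=["kafka:9092"],
        group_id="maas-backend",
        enable_auto_commit=False,
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    for message in consumer:
        validate_and_process(message.value)
        consumer.commit()
```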
Architecture 2.0
Figure 4: Proposed architectural changes enabling high availability of MaaS (DC* = datacenter)
We ensured that the new architecture had multiple active deployments spread across data centers, each able to serve traffic. The catch was that multiple operational deployments could cause issues because the backend’s asynchronous tasks interact with external sources; many of those interactions are not idempotent and could cause race conditions or failures. To ensure correct behavior while maintaining multiple server footprints, we devised a mutex-inspired design that leverages the relational datastore as a source of truth and row-based locks for enforcing isolation. Async tasks (per deployment) would verify whether another “copy” was active before marching ahead. This safety net gave us confidence that the parallel execution of async tasks would be synchronized per task type and would not leave either MaaS or external sources in an incorrect state.
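A minimal sketch of that mutex pattern, assuming a hypothetical task_locks table in the shared MySQL datastore (the table, columns, and connection details are illustrative), could look like this:

```python
# Hypothetical row-lock "mutex" over a shared MySQL table; the table name,
# columns, and connection details are illustrative assumptions.
import mysql.connector

def run_exclusively(task_name, work):
    conn = mysql.connector.connect(
        host="maas-db", user="maas", password="<fetched-at-runtime>", database="maas"
    )
    try:
        cursor = conn.cursor()
        conn.start_transaction()
        # Lock the row for this task type; another deployment attempting the
        # same task blocks here until the lock holder commits.
        cursor.execute(
            "SELECT holder FROM task_locks WHERE task_name = %s FOR UPDATE",
            (task_name,),
        )
        cursor.fetchall()
        work()          # the async task runs while the row lock is held
        conn.commit()   # committing releases the lock
    except Exception:
        conn.rollback()
        raise
    finally:
        conn.close()
```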
We had to figure out how to distribute traffic evenly between active deployments. We considered a few alternatives, like:
- Hosting nodes with HAProxy and reverse-proxying incoming requests through them seemed plausible, but it would have required us to maintain and monitor another set of hosts to keep such a setup live. While some critical services within LinkedIn do depend heavily on HAProxy, adopting that pattern would have demanded more from us than this problem warranted.
- Creating a virtual IP and using ucarp to perform automated failover in case the service on one host went down. While a good alternative, the version of ucarp we experimented with posed a stringent restriction on its usage: all the hosts behind the virtual IP must be in the same network subnet, which we could not reliably enforce.
In light of our experimentation with these choices, we decided to defer traffic distribution to something commonly used within LinkedIn: DNSDisco. This internal DNS-based proxy service takes on the onus of performing periodic health checks of the active deployment(s) and making routing decisions based on the outcome.
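For DNS-based routing like this to work, each deployment needs something cheap to probe. A minimal health endpoint, sketched under the assumption of the Flask application described earlier (the path and the specific checks are illustrative assumptions), might look like:

```python
# Hypothetical health endpoint that a DNS-based proxy could probe; the path and
# the specific checks are illustrative assumptions.
from flask import Flask, jsonify

app = Flask(__name__)

def database_reachable() -> bool:
    return True  # stub: replace with a cheap "SELECT 1" against the datastore

def cache_reachable() -> bool:
    return True  # stub: replace with a cache-layer connectivity check

@app.route("/health", methods=["GET"])
def health():
    checks = {"database": database_reachable(), "cache": cache_reachable()}
    status_code = 200 if all(checks.values()) else 503
    return jsonify(checks), status_code
```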
We mentioned earlier that the locally hosted caching layer and relational database would not scale horizontally; we needed to move the data layer off the application host and ensure that it returned consistent data across multiple deployments. With the use case well understood, we leveraged the managed data services provided by the Couchbase-as-a-Service and MySQL-as-a-Service teams to provision managed data sinks. All interactions with MySQL and Couchbase were based on well-defined authentication, authorization, replication, and automated failover protocols.
We had moved away from the bulk of self-managed components and could comfortably host our service on a standard application node with 64GB of memory. The last bit to cover was our move away from the GPG keystore for managing service credentials. We moved all of our credentials to an internal service (KMS), which allows access to them via RESTful calls based on application certificates and the ACLs associated with those secrets. Once this move was complete, we could restart the service or deploy without worrying about the state of the local GPG keystore or manually managing GPG keys. This move was also a precursor to and enabler of one-click deployments for MaaS; such deployments became reasonable once the mandatory “human involvement” piece was out of the frame.
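In place of the GPG prompt, a credential fetch at startup could be sketched as below. The KMS URL, secret name, certificate paths, and response shape are all assumptions; the relevant idea is presenting an application certificate via standard TLS client-cert support instead of requiring an engineer to unlock a local keystore.

```python
# Hypothetical credential fetch from a KMS-like service using an application
# certificate; the URL, secret names, file paths, and response format are
# illustrative assumptions.
import requests

def fetch_secret(secret_name: str) -> str:
    response = requests.get(
        f"https://kms.example.internal/v1/secrets/{secret_name}",
        cert=("/etc/maas/app.crt", "/etc/maas/app.key"),  # application cert + key
        verify="/etc/ssl/certs/internal-ca.pem",          # internal CA bundle
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["value"]

# At startup, credentials for external services can be loaded this way, with no
# engineer present to unlock a local GPG keystore.
autobuild_token = fetch_secret("autobuild-api-token")
```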
Last but not least, we enabled HTTPS-based communication with MaaS. While we’ve maintained AD-based authentication, we also enabled mTLS, so clients can now present verified certificates to exercise authorized endpoints. MaaS was placed behind DataVault, which currently fields all authorization requests. This was the final touch in adopting existing tooling rather than perpetuating tech debt.
A few crucial integrations:
- MaaS leverages internal data sources to block requests for hosts that are deemed to be “in use”; doing so keeps host owners from accidentally paving over machine(s) that are actively hosting application(s) or are part of an active allocation.
- Hosts being submitted must have a functional IPMI console. MaaS enables users to run this check from the API before a submission is accepted, increasing the overall success rate of submissions (a sketch of such a check follows this list).
- MaaS interfaces with a few internal queryable services that give the most reliable information about the hosts. It aggregates these data points and uses them to create the necessary triage tickets with the associated teams, rather than accepting those hosts blindly and triaging after the associated batch inevitably fails.
- Server-reclaim workflow: MaaS plays a pivotal role in the server-reclaim workflow, which is automatically triggered for defunct and unallocated physical servers. An automated workflow isolates such hosts and submits them to MaaS to start an OS upgrade which, incidentally, reverts servers to a pristine state. Such newly upgraded servers are returned to a pool of hosts from which other users can allocate.
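As one illustration, a pre-submission IPMI reachability check could be approximated with ipmitool. The BMC naming convention, credentials, and success criteria below are assumptions rather than MaaS’ actual probe.

```python
# Hypothetical pre-submission IPMI console check via ipmitool; the BMC address
# pattern, credentials, and success criteria are illustrative assumptions.
import subprocess

def ipmi_console_reachable(hostname: str, username: str, password: str) -> bool:
    bmc = f"{hostname}-mgmt"  # assumed naming convention for the BMC interface
    try:
        result = subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", bmc,
             "-U", username, "-P", password, "chassis", "status"],
            capture_output=True, text=True, timeout=30,
        )
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0 and "System Power" in result.stdout
```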
Insights from telemetry
MaaS collects a wealth of data per submission, from which meaningful and actionable metrics can be derived when properly represented. This is in stark contrast to how the process was conducted previously, when there was no direct way to query and publish data that could be leveraged for building a metrics-based dashboard. To actualize this, we teamed up with members of another engineering team to define a mechanism for their team to pull database models from MaaS, apply formulas created by our team for measuring performance, and publish the requested Power BI dashboards.
A few metrics:
- The overall success rate of “reimage” requests in the last year:
Figure 5: Sample UI representing success percentage and throughput
There are many more metrics that further segregate each batch and its runtimes based on:
- Hardware SKUs
- Individual runtimes and failures of the different subactions required to complete an “action” (like reimage, etc.)
- Different lags between submission to MaaS and external components, etc.
If we were to sift through data persisted within MaaS, this is what the overall performance of MaaS looks like:
Figure 6: Overall YoY throughputs of different actions supported by MaaS
Near-term plans
Some of the work we’re planning is derived from the granular data we gathered from our dashboards.
- The main pain point for us is the high number of failures due to broken IPMI consoles. We’re exploring options to bypass broken consoles and to potentially speed up overall upgrade times by conditionally employing in-place upgrades instead of a complete upgrade (which can be time-consuming due to multiple factors outside MaaS’ control).
- We will focus on improving the API’s overall throughput; over time and multiple iterations of API changes, we’ve noticed a relative increase in overall batch-acceptance latency. While still within an acceptable range, we’d like to minimize this behavior in the API.
Conclusion
We’ve come a long way from when Jira ticket conversations drove server upgrades. MaaS has been catering to the diverse needs of SRE teams since its MVP days, from server upgrades to reboots and more. MaaS evolved from a point where it could not process more than 720 batches in a day to now, where it can accept and process that many submissions in a few seconds. That said, this is not the end of the road for improvements to MaaS. In the grand scheme, having a performant self-service interface through which SREs can manage their server fleet’s state is a boon that helps the respective teams maintain their applications, eventually leading to a good experience for the end users who benefit from a well-oiled internal mechanism.
Acknowledgments
This entire effort has been possible because of the people who have driven various initiatives, prioritized them, and committed to perpetuating MaaS’ stability and utility. A big shoutout to Jon Bringhurst for driving the development and release of the Python library and CLI for interacting with MaaS! Thanks to the engineers who have meaningfully contributed to MaaS and upheld craftsmanship: George Tony, Tianxin Zhou, Steve Fantin, and Jayita Roy. Many thanks to Vijay Rajendrarao for keeping our timelines in check while the project started to grow; it would have been tough to meet our commitments otherwise. Last but not least, thanks to Franck Martin for reviewing and providing valuable inputs, and to Brian Hart, Nitin Sonawane, Nisheed Meethal, and Milind Talekar for continually supporting us and providing guidance and feedback for various improvements.
Career stories: The math-music connection in data science

Making the leap from music to LinkedIn Engineering with REACH
My journey to LinkedIn and passion for coding came from an entirely different background than programming. After studying math and music in college, I performed as a professional violinist touring around the world and composing music for television and film for 15 years.
During the pandemic, I discovered data science after my friends suggested I take programming courses. I became super interested in machine learning and wanted to make a shift in my career, so I was excited to discover LinkedIn’s apprenticeship program for people with non-traditional tech backgrounds like me: REACH. While I was an apprentice, I was given the opportunity to learn and develop skills and also got to have a hand in LinkedIn projects.
I am fortunate that I found a second passion in life. My team and mentors were welcoming and flexible with me as I leaned into my role and adapted to how we work at LinkedIn. It’s been a smooth transition since I also worked remotely during my music career. There’s a great culture of work-life balance at LinkedIn. I can adapt my working hours to California or Chicago hours to accommodate my team’s workload, and the flexibility adds to the balance. Although I love working remotely, I think it’s equally important to further connections with your team in person. I visit the Mountain View office each quarter to share coffee, lunch, and thoughts about our projects at LinkedIn with my team members.
Refining the LinkedIn member experience
In my role at LinkedIn, I’m on one of the consumer-facing teams responsible for the algorithm recommending the feed to LinkedIn members. I program in Python, Scala, and Java as I toggle between analyzing data, running machine learning experiments, and evaluating business impact.
In my first big project, I experimented with sampling our training data for the algorithms. It was thrilling to work with data on a different scale than what I was used to in my personal projects; I went from working with tables of 10,000 rows to 500 million! Using big data technologies like Spark and Hadoop, I sampled different data to feed our algorithms, which turned into business metric gains that I also learned to interpret. I still remember the anticipation right before I pressed the button to share the benefits of my model with 10% of LinkedIn members.
I also love keeping tabs on the member experience through on-call shifts, when I’m responsible for LinkedIn’s feed worldwide. If something goes down on a data generation pipeline that will affect our members, I can immediately jump in to solve the issue. The decisions I make in those few minutes to direct traffic effectively and avoid impacting the experience of millions of members make the work even more rewarding.
Career stories: Influencing engineering growth at LinkedIn

Since learning frontend and backend skills, Rishika has seen her passion for engineering expand beyond her team at LinkedIn and into her own digital community. As she develops as an engineer, giving back has become the most rewarding part of her role.
From intern to engineer—life at LinkedIn
My career with LinkedIn began with a college internship, where I got to dive into all things engineering. Even as a summer intern, I absorbed so much about frontend and backend engineering during my time here. When I considered joining LinkedIn full-time after graduation, I thought back to the work culture and how my manager treated me during my internship. Although I had a virtual experience during COVID-19, the LinkedIn team ensured I was involved in team meetings and discussions. That mentorship opportunity ultimately led me to accept an offer from LinkedIn over other offers.
Before joining LinkedIn full-time, I worked with Adobe as a Product Intern for six months, where my projects revolved around the core libraries in the C++ language. When I started my role here, I had to shift to using a different tech stack: Java for the backend and JavaScript framework for the frontend. This was a new challenge for me, but the learning curve was beneficial since I got hands-on exposure to pick up new things by myself. Also, I have had the chance to work with some of the finest engineers; learning from the people around me has been such a fulfilling experience. I would like to thank Sandeep and Yash for their constant support throughout my journey and for mentoring me since the very beginning of my journey with LinkedIn.
Currently, I’m working with the Trust team on building moderation tools for all our LinkedIn content while guaranteeing that we remove spam on our platform, which can negatively affect the LinkedIn member experience. Depending on the project, I work on both the backend and the frontend, since my team handles the full-stack development. At LinkedIn, I have had the opportunity to work on a diverse set of projects and handle them from end to end.
Mentoring the next generation of engineering graduates
I didn’t have a mentor during college, so I’m so passionate about helping college juniors find their way in engineering. When I first started out, I came from a biology background, so I was not aware of programming languages and how to translate them into building a technical resume. I wish there would have been someone to help me out with debugging and finding solutions, so it’s important to me to give back in that way.
I’m quite active in university communities, participating in student-led tech events like hackathons to help students get into tech and secure their first job in the industry. I also love virtual events like X (formerly Twitter) and LinkedIn Live events. Additionally, I’m part of LinkedIn’s CoachIn Program, where we help with resume building and offer scholarships for women in tech.
Influencing online and off at LinkedIn
I love creating engineering content on LinkedIn, X, and other social media platforms, where people often contact me about opportunities at LinkedIn Engineering. It brings me so much satisfaction to tell others about our amazing company culture and connect with future grads.
When I embarked on my role during COVID-19, building an online presence helped me stay connected with what’s happening in the tech world. I began posting on X first, and once that community grew, I launched my YouTube channel to share beginner-level content on data structures and algorithms. My managers and peers at LinkedIn were so supportive, so I broadened my content to cover aspects like soft skills, student hackathons, resume building, and more. While this is in addition to my regular engineering duties, I truly enjoy sharing my insights with my audience of 60,000+ followers. And the enthusiasm from my team inspires me to keep going! I’m excited to see what the future holds for me at LinkedIn as an engineer and a resource for my community on the LinkedIn platform.
About Rishika
Rishika holds a Bachelor of Technology from Indira Gandhi Delhi Technical University for Women. Before joining LinkedIn, she interned at Google as part of the SPS program and as a Product Intern at Adobe. She currently works as a software engineer on LinkedIn’s Trust Team. Outside of work, Rishika loves to travel all over India and create digital art.
Editor’s note: Considering an engineering/tech career at LinkedIn? In this Career Stories series, you’ll hear first-hand from our engineers and technologists about real life at LinkedIn — including our meaningful work, collaborative culture, and transformational growth. For more on tech careers at LinkedIn, visit: lnkd.in/EngCareers.
Career Stories: Learning and growing through mentorship and community

Lekshmy has always been interested in a role in a company that would allow her to use her people skills and engineering background to help others. Working as a software engineer at various companies led her to hear about the company culture at LinkedIn. After some focused networking, Lekshmy landed her position at LinkedIn and has been continuing to excel ever since.
How did I get my job at LinkedIn? Through LinkedIn.
Before my current role, I had heard great things about the company and its culture. After hearing about InDays (Investment Days) and how LinkedIn supports its employees, I knew I wanted to work there.
While at the College of Engineering, Trivandrum (CET), I knew I wanted to pursue a career in software engineering. Engineering is something that I’m good at and absolutely love, and my passion for the field has only grown since joining LinkedIn. When I graduated from CET, I began working at Groupon as a software developer, starting on databases, REST APIs, application deployment, and data structures. From that role, I was able to advance into the position of software developer engineer 2, which enabled me to dive into other software languages, as well as the development of internal systems. That’s where I first began mentoring teammates and realized I loved teaching and helping others. It was around this time that I heard of LinkedIn through the grapevine.
Joining the LinkedIn community
Everything I heard about LinkedIn made me very interested in career opportunities there, but I didn’t have connections yet. I did some research and reached out to a talent acquisition manager on LinkedIn and created a connection which started a path to my first role at the company.
When I joined LinkedIn, I started on the LinkedIn Talent Solutions (LTS) team. It was a phenomenal way to start because not only did I enjoy the work, but the experience served as a proper introduction to the culture at LinkedIn. I started during the pandemic, which meant remote working, and eventually, as the world situation improved, we went hybrid. This is a great system for me; I have a wonderful blend of being in the office and working remotely. When I’m in the office, I like to catch up with my team by talking about movies or playing games, going beyond work topics, and getting to know each other. With LinkedIn’s culture, you really feel that sense of belonging and recognize that this is an environment where you can build lasting connections.
LinkedIn: a people-first company
If you haven’t been able to tell already, even though I mostly work with software, I truly am a people person. I just love being part of a community. At the height of the pandemic, I’ll admit I struggled with a bit of imposter syndrome and anxiety. But I wasn’t sure how to ask for help. I talked with my mentor at LinkedIn, and they recommended I use the Employee Assistance Program (EAP) that LinkedIn provides.
I was nervous about taking advantage of the program, but I am so happy that I did. The EAP helped me immensely when everything felt uncertain, and I truly felt that the company was on my side, giving me the space and resources to help relieve my stress. Now, when a colleague struggles with something similar, I recommend they consider the EAP, knowing firsthand how effective it is.
Building a path for others’ growth
With my mentor, I was also able to learn about and become a part of our Women in Technology (WIT) Invest Program. WIT Invest is a program that provides opportunities like networking, mentorship check-ins, and executive coaching sessions. WIT Invest helped me adopt a daily growth mindset and find my own path as a mentor for college students. When mentoring, I aim to build trust and be open, allowing an authentic connection to form. The students I work with come to me for all kinds of guidance; it’s just one way I give back to the next generation and the wider LinkedIn community. Providing the kind of support my mentor gave me early on was a full-circle moment for me.
Working at LinkedIn is everything I thought it would be and more. I honestly wake up excited to work every day. In my three years here, I have learned so much, met new people, and engaged with new ideas, all of which have advanced my career and helped me support the professional development of my peers. I am so happy I took a leap of faith and messaged that talent acquisition manager on LinkedIn. To anyone thinking about applying to LinkedIn, go for it. Apply, send a message, and network—you never know what one connection can bring!
About Lekshmy
Based in Bengaluru, Karnataka, India, Lekshmy is a Senior Software Engineer on LinkedIn’s Hiring Platform Engineering team, focused on the Internal Mobility Project. Before joining LinkedIn, Lekshmy held various software engineering positions at Groupon, including SDE 3. Lekshmy holds a degree in Computer Science from the College of Engineering, Trivandrum, and is a trained classical dancer. Outside of work, Lekshmy enjoys painting, gardening, and trying new hobbies that pique her interest.
Editor’s note: Considering an engineering/tech career at LinkedIn? In this Career Stories series, you’ll hear first-hand from our engineers and technologists about real life at LinkedIn — including our meaningful work, collaborative culture, and transformational growth. For more on tech careers at LinkedIn, visit: lnkd.in/EngCareers.