Disclaimer: The details in this post have been derived from the LinkedIn Engineering Blog. All credit for the technical details goes to the LinkedIn engineering team. The links to the original articles are present in the references section at the end of the post. We've attempted to analyze the details and provide our input about them. If you find any inaccuracies or omissions, please leave a comment, and we will do our best to fix them.

One of the primary goals of LinkedIn is to provide a safe and professional environment for its members. At the heart of this effort lies a system called CASAL, which stands for Community Abuse and Safety Application Layer. This platform is the first line of defense against bad actors and adversarial attacks, combining technology and human expertise to identify and prevent harmful activities. The system brings together several tools and safeguards to detect and act on abuse.
Together, these tools form a multi-layered shield, protecting LinkedIn's community from abuse while maintaining a professional and trusted space for networking. In this article, we'll look at the design and evolution of LinkedIn's enforcement infrastructure in detail.

Evolution of Enforcement Infrastructure

There have been three major generations of LinkedIn's restriction enforcement system. Let's look at each generation in detail.

First Generation

Initially, LinkedIn used a relational database (Oracle) to store and manage restriction data. Restrictions were stored in Oracle tables, with different types of restrictions isolated into separate tables for better organization and manageability. CRUD (Create, Read, Update, Delete) workflows were designed to handle the lifecycle of restriction records, ensuring proper updates and removal when necessary. See the diagram below.

However, this approach posed a few challenges:

- Every restriction check required a direct database query, which added latency to member-facing requests.
- As read traffic grew, the database struggled to scale with the sheer volume of restriction lookups.
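To make the latency problem concrete, here is a minimal, hypothetical sketch of the first-generation pattern, where every restriction check is a direct query against the database. The table, columns, and class names are illustrative, not LinkedIn's actual schema.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical first-generation lookup: every restriction check is a direct
// database query. Table and column names are illustrative only.
public class RestrictionDao {
    private final Connection connection;

    public RestrictionDao(Connection connection) {
        this.connection = connection;
    }

    // Returns true if the member currently has an active restriction of this type.
    public boolean isRestricted(long memberId, String restrictionType) throws SQLException {
        String sql = "SELECT 1 FROM member_restrictions "
                   + "WHERE member_id = ? AND restriction_type = ? AND expires_at > SYSTIMESTAMP";
        try (PreparedStatement stmt = connection.prepareStatement(sql)) {
            stmt.setLong(1, memberId);
            stmt.setString(2, restrictionType);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next(); // any matching row means the member is restricted
            }
        }
    }

    // Creates a new restriction record (the "C" in the CRUD lifecycle).
    public void addRestriction(long memberId, String restrictionType) throws SQLException {
        String sql = "INSERT INTO member_restrictions (member_id, restriction_type, created_at) "
                   + "VALUES (?, ?, SYSTIMESTAMP)";
        try (PreparedStatement stmt = connection.prepareStatement(sql)) {
            stmt.setLong(1, memberId);
            stmt.setString(2, restrictionType);
            stmt.executeUpdate();
        }
    }
}
```

Because every call to isRestricted hits the database directly, read load on Oracle grows in lockstep with site traffic, which is exactly the scaling pressure the next generation addresses.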
Server-Side Cache Implementation

To address the scaling challenges, the team introduced server-side caching, which significantly reduced latency by minimizing the need for frequent database queries. A cache-aside strategy was employed that worked as follows:

- The service first checks the cache for the requested restriction data.
- On a cache hit, the data is served directly from the cache.
- On a cache miss, the data is fetched from the database, written to the cache, and then returned to the caller.
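Here is a minimal sketch of this cache-aside read path. The cache and store interfaces, key format, and TTL value are assumptions for illustration, standing in for whatever cache client and data-access layer the real service uses.

```java
import java.time.Duration;
import java.util.Optional;

// Hypothetical interfaces standing in for the real cache and database clients.
interface RestrictionCache {
    Optional<Boolean> get(String key);
    void put(String key, boolean value, Duration ttl);
}

interface RestrictionStore {
    boolean isRestricted(long memberId, String restrictionType); // hits the database
}

// Cache-aside read path: check the cache first, fall back to the database on a
// miss, and populate the cache with a TTL so entries refresh periodically.
public class CachedRestrictionChecker {
    private static final Duration TTL = Duration.ofMinutes(5); // illustrative TTL

    private final RestrictionCache cache;
    private final RestrictionStore store;

    public CachedRestrictionChecker(RestrictionCache cache, RestrictionStore store) {
        this.cache = cache;
        this.store = store;
    }

    public boolean isRestricted(long memberId, String restrictionType) {
        String key = restrictionType + ":" + memberId;

        // 1. Try the cache first.
        Optional<Boolean> cached = cache.get(key);
        if (cached.isPresent()) {
            return cached.get();
        }

        // 2. On a miss, query the database.
        boolean restricted = store.isRestricted(memberId, restrictionType);

        // 3. Populate the cache with a TTL so the entry is refreshed periodically.
        cache.put(key, restricted, TTL);
        return restricted;
    }
}
```

The key property of cache-aside is that the database remains the source of truth; the cache only absorbs repeated reads, so hot restriction lookups stop hammering Oracle.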
See the diagram below that shows the server-side cache approach.

Restrictions were assigned predefined TTL (Time-to-Live) values, ensuring the cached data was refreshed periodically. However, there were also shortcomings with this approach:

- Every restriction check still required a network call from the upstream application to the enforcement service and its cache.
- TTL-based refresh meant a cached entry could serve slightly stale data until it expired.
Client-Side Cache Addition

Building on the server-side cache, LinkedIn introduced client-side caching to enhance performance further. This approach enabled upstream applications (like LinkedIn Feed and Talent Solutions) to maintain their own local caches. See the diagram below.

To facilitate this, a client-side library was developed to cache the restriction data directly on application hosts, reducing the dependency on server-side caches.
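As a rough illustration of what such a client-side library might look like, here is a sketch of an in-process cache embedded in the upstream application, sitting in front of the remote enforcement service. The interface, key format, and TTL are assumptions; the real library's API and eviction policy are not described in the source.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical remote client for the enforcement service (a network call).
interface EnforcementServiceClient {
    boolean isRestricted(long memberId, String restrictionType);
}

// In-process, client-side cache embedded in the upstream application. Entries
// expire after a TTL, so most checks are served from local memory and only
// misses (or expired entries) call the enforcement service over the network.
public class ClientSideRestrictionCache {
    private record Entry(boolean restricted, Instant expiresAt) {}

    private static final Duration TTL = Duration.ofMinutes(1); // illustrative TTL

    private final ConcurrentHashMap<String, Entry> cache = new ConcurrentHashMap<>();
    private final EnforcementServiceClient remote;

    public ClientSideRestrictionCache(EnforcementServiceClient remote) {
        this.remote = remote;
    }

    public boolean isRestricted(long memberId, String restrictionType) {
        String key = restrictionType + ":" + memberId;
        Entry entry = cache.get(key);

        // Serve from local memory if the entry exists and has not expired.
        if (entry != null && Instant.now().isBefore(entry.expiresAt())) {
            return entry.restricted();
        }

        // Otherwise make the remote call and refresh the local entry.
        boolean restricted = remote.isRestricted(memberId, restrictionType);
        cache.put(key, new Entry(restricted, Instant.now().plus(TTL)));
        return restricted;
    }
}
```

The design trade-off is the usual one: a local cache removes the network hop for hot lookups, at the cost of each application host holding its own (briefly stale) copy of the restriction data.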