High-Level Design — The Bigger Picture

In an era of e-commerce where every other company is trying to sell some service or goods online directly to customers, distributed systems have become the backbone of the industry. Scaling horizontally is more economical than scaling vertically, but it comes with its own set of challenges. In a single-server system, the business logic is just a function call away; in a distributed system, the network comes into the picture, along with authentication and many small moving parts. With the advent of these heavy, data-driven systems, there is a need to understand how to design them so that they are fault-tolerant, scalable and easy to maintain.

Most modern-day B2C applications follow a basic data flow like the one below, with some extra customisations tailored to their business needs. If you are new to the world of distributed systems, this illustration might feel a bit overwhelming, but I'm going to break it down so it doesn't daunt you ever again.

Client — The Entry Point

This is the general entry point and could be an app on a mobile device or direct web access from any browser. When the client submits a website URL (let's say https://medium.com), the browser first looks for cached IPs of recently visited websites. If one is available, it sends the request to that IP; otherwise, it queries the country-level or global DNS to obtain the IP for the given URL.

DNS — The Know-It-All

The Domain Name System (DNS) serves as a central repository for the IPs of all the URLs that exist on the internet. DNS servers are usually hierarchical and can be regional, country-wide or worldwide. When a request comes in to resolve the IP of a URL, the server checks its own records to see if it has the information. If it doesn't, the request is forwarded to a higher-level DNS server, and so on, until the IP is obtained.

Usually, the DNS returns a list of IPs for a given URL, and once the list is obtained, the client waterfalls through it (it tries the first IP; if that fails, it tries the second, and so on) to reach the actual host. The DNS shuffles the list differently for each client so that the load is spread evenly across all the IPs. So if two clients request the IPs for the same URL, they both get differently shuffled lists, and waterfalling through them doesn't create a cascading load increase and, in turn, cascading failures.
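To make the waterfall idea concrete, here is a minimal Python sketch of how a client might resolve a hostname and try the returned IPs in order. The hostname, port and timeout are just illustrative; real browsers do considerably more on top of this.

```python
import socket

def connect_with_waterfall(hostname, port=443, timeout=3):
    """Resolve a hostname and try each returned address in order until one connects."""
    # getaddrinfo returns the resolver's (already ordered) list of addresses
    addresses = socket.getaddrinfo(hostname, port, type=socket.SOCK_STREAM)
    last_error = None
    for family, socktype, proto, _, sockaddr in addresses:
        try:
            sock = socket.socket(family, socktype, proto)
            sock.settimeout(timeout)
            sock.connect(sockaddr)          # first address that answers wins
            return sock
        except OSError as err:              # this address failed, fall through to the next one
            last_error = err
    raise ConnectionError(f"all addresses for {hostname} failed") from last_error

# Usage (illustrative): sock = connect_with_waterfall("medium.com")
```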

Load Balancer — The Mother of All Servers

When the DNS sends an IP back to the client, it's usually pointing at a load balancer. The name is self-explanatory: in layman's terms, it takes in requests from millions of clients and distributes them evenly across all the app servers. Some might say it can be a single point of failure, but it is usually backed by a secondary load balancer that takes over immediately. The job of a load balancer is simple enough that, in case of failure, it can be replaced with a new one with very little or no downtime. It uses an algorithm to distribute traffic evenly, to add new servers and to redistribute traffic when a server fails. Consistent hashing is one of the more recent additions to these algorithms and has proven to be highly fault-tolerant. The load balancer also keeps track of which servers are up by checking them at regular intervals using a heartbeat mechanism.
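Since consistent hashing comes up here, below is a minimal sketch of the idea, assuming a simple hash ring with virtual nodes. Production load balancers use more elaborate variants, so treat this purely as an illustration.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Toy hash ring: each key maps to the first server clockwise from its hash."""
    def __init__(self, servers, vnodes=100):
        self.ring = []                       # sorted list of (hash, server)
        for server in servers:
            for i in range(vnodes):          # virtual nodes smooth out the distribution
                self.ring.append((self._hash(f"{server}#{i}"), server))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def get_server(self, key):
        h = self._hash(key)
        idx = bisect.bisect_left(self.ring, (h, "")) % len(self.ring)  # wrap around the ring
        return self.ring[idx][1]

# Usage (illustrative):
# ring = ConsistentHashRing(["app-1", "app-2", "app-3"])
# ring.get_server("client-42")   # -> one of the app servers
```

The appeal of this scheme is that when a server is added or removed, only the keys adjacent to it on the ring move, rather than nearly everything being reshuffled.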

App Server — The Brains

We’ve finally crossed all the hurdles and reached one of our app servers. This is where the business logic resides and decisions are made. Depending on the type of request, the app server calls various other components in the system. If the request is data-related, it checks the cache first to see whether there is a cache hit (the data is found in the cache); if not, it makes a call to the DB and fetches the data. If the request is for content that is common to everyone (e.g. questions in a quiz, an e-commerce catalogue, a live stream of a sporting event), the app server, after authentication if required, redirects the client to the CDN closest to them to serve the content. This piggybacks on the CDN’s bandwidth and therefore reduces the load on the actual servers.
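A common way to implement the read path described above is the cache-aside pattern. The sketch below assumes hypothetical cache and db clients with get/set and query methods; it is not tied to any specific library.

```python
CACHE_TTL_SECONDS = 300  # assumption: cached entries are considered fresh for 5 minutes

def get_product(product_id, cache, db):
    """Cache-aside read: try the cache first, fall back to the DB on a miss."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:                  # cache hit: skip the much slower DB round trip
        return cached
    product = db.query("SELECT * FROM products WHERE id = %s", (product_id,))
    cache.set(key, product, ttl=CACHE_TTL_SECONDS)   # populate the cache for future readers
    return product
```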

Cache — The Remembrall

Caches in an application vary widely depending on business requirements. They usually store content that is requested frequently, saving a trip to the DB and thereby reducing latency. As a rough rule of thumb, if cache access time is “x”, DB access time is around “100x”, so a cache check barely takes any time and can cut latency considerably. Some social media applications like Instagram also use a dedicated cache for verified accounts, since those are requested heavily across the application. It is important to note that cached data can become stale when the underlying data is modified, so periodic cache invalidation techniques are used to keep it fresh.
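One simple invalidation strategy is to attach a time-to-live (TTL) to every entry and treat expired entries as misses. The in-memory cache below is a minimal sketch of that idea, not how any particular production cache works.

```python
import time

class TTLCache:
    """Tiny in-memory cache where each entry expires after a fixed TTL."""
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}                     # key -> (value, expiry_timestamp)

    def set(self, key, value):
        self.store[key] = (value, time.time() + self.ttl)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None                     # miss: caller should fall back to the DB
        value, expires_at = entry
        if time.time() > expires_at:        # stale: invalidate and report a miss
            del self.store[key]
            return None
        return value
```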

Database — The Librarian

Databases need no introduction. They’ve been around for ages, and if you develop applications, chances are you’ve encountered at least one of them. Although there has been a heated debate in the community over SQL vs NoSQL DBs, the choice depends on the business use case of your application. In fact, many modern applications use multiple DBs for different microservices. If reliability and transactions are the demand, SQL DBs such as PostgreSQL or MySQL are a no-brainer; if the schema needs to stay flexible, NoSQL DBs such as MongoDB are worth considering.
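To illustrate the transactions point, here is a minimal sketch using Python’s built-in sqlite3 module: both updates either commit together or roll back together. The table and accounts are made up for the example.

```python
import sqlite3

def transfer(db_path, from_account, to_account, amount):
    """Move an amount between two accounts atomically: both updates happen or neither does."""
    conn = sqlite3.connect(db_path)
    try:
        with conn:                          # commits on success, rolls back if an exception is raised
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                         (amount, from_account))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                         (amount, to_account))
    finally:
        conn.close()
```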

CDN — The Butler

Content Delivery Networks (CDNs) are fairly popular these days due to the increasing concurrency that applications have to handle. Content that is common to all users (or to a subset of them) and needs no processing can be separated from the main application and served via a CDN. This prevents the app servers from choking under excess load and avoids redundant data transfers. A good example is Hotstar, a very popular app in India, which leverages CDNs to stream live sporting events to millions of concurrent users without choking its own network, thereby increasing availability and reliability.
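As a rough illustration of the offloading idea, an app server can simply hand the client a URL on a CDN domain instead of streaming the bytes itself. Everything in this sketch (the domains, the region-to-edge mapping, the asset path) is hypothetical.

```python
# Hypothetical mapping of client regions to CDN edge hostnames
EDGE_HOSTS = {
    "ap-south": "edge-ap-south.cdn.example.com",
    "eu-west": "edge-eu-west.cdn.example.com",
}
DEFAULT_EDGE = "edge-global.cdn.example.com"

def cdn_url_for(asset_path, client_region):
    """Build a CDN URL for a static asset so the app server never serves the bytes itself."""
    edge = EDGE_HOSTS.get(client_region, DEFAULT_EDGE)
    return f"https://{edge}/{asset_path.lstrip('/')}"

# Usage (illustrative):
# cdn_url_for("/catalogue/shoes.json", "ap-south")
# -> "https://edge-ap-south.cdn.example.com/catalogue/shoes.json"
```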

Conclusion

Once the initial connection and request are done, the same process keeps repeating for new requests (except for the DNS part) between the client and the server. There are plenty of other complications, such as encryption, authentication, failure handling and other components that will vary with the application’s business requirements, but this structure will serve as a backbone for them all.

Special thanks to Tarun Malhotra and the authors of Designing Data-Intensive Applications for the inspiration for this article.

Also, don’t forget to leave a clap and follow for more such articles in the future.
