Elastic Load Balancing distributes incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in one or more Availability Zones. It monitors the health of its registered targets and routes traffic only to the healthy ones. Elastic Load Balancing scales your load balancer as your incoming traffic changes over time, and it can scale automatically.
Load balancer benefits
- A load balancer distributes workloads across multiple compute resources, such as virtual machines. Using a load balancer increases the availability and fault tolerance of your applications.
- As your needs change, you can add and remove compute resources from your load balancer without disrupting the overall flow of requests to your applications.
- You can configure health checks to monitor the health of your compute resources so that the load balancer sends requests only to the healthy ones. You can also offload the work of encryption and decryption to the load balancer so that your compute resources can focus on their main tasks.
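As a rough illustration of health-check gating, here is a minimal Python sketch; the target IDs and static `healthy` flags are invented, and a real load balancer determines health via its configured health checks rather than a fixed flag:

```python
from itertools import cycle

# Hypothetical targets; IDs and health flags are made up for illustration.
targets = [
    {"id": "i-0a1", "healthy": True},
    {"id": "i-0b2", "healthy": False},  # failed its health check
    {"id": "i-0c3", "healthy": True},
]

def healthy_targets(targets):
    """Return only the targets that passed their last health check."""
    return [t for t in targets if t["healthy"]]

# Requests are routed only to the healthy subset (round robin here).
pool = cycle(healthy_targets(targets))
routed = [next(pool)["id"] for _ in range(4)]
print(routed)  # the unhealthy target i-0b2 never receives traffic
```

The key point is that the unhealthy target is excluded from the rotation entirely until it passes health checks again.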
Elastic Load Balancing supports the following load balancers:
Application Load Balancers
Network Load Balancers
Gateway Load Balancers
Classic Load Balancers
There is a key difference in how the load balancer types are configured: with Application Load Balancers, Network Load Balancers, and Gateway Load Balancers, you register targets in target groups and route traffic to the target groups. With Classic Load Balancers, you register instances with the load balancer directly.
Application Load Balancer Overview
An Application Load Balancer operates at the application layer, the seventh layer of the Open Systems Interconnection (OSI) model. After the load balancer receives a request, it evaluates the listener rules in priority order to determine which rule to apply, and then selects a target from the target group for the rule action.
Listener rules can be configured to route requests to different target groups based on the content of the application traffic. When a target is registered with multiple target groups, routing is performed independently for each target group. You can configure the routing algorithm at the target group level. Round robin is the default routing algorithm; you can also specify the least outstanding requests algorithm.
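The difference between the two routing algorithms can be sketched in a few lines of Python; the target names and outstanding-request counts below are hypothetical:

```python
# Hypothetical target group: target name -> outstanding (in-flight) requests.
targets = {"t1": 5, "t2": 0, "t3": 2}

def least_outstanding(targets):
    """Pick the target with the fewest in-flight requests."""
    return min(targets, key=targets.get)

def round_robin(order, i):
    """Pick the next target in a fixed rotation, ignoring load."""
    return order[i % len(order)]

order = list(targets)
print(round_robin(order, 0))       # "t1" -- rotation ignores load
print(least_outstanding(targets))  # "t2" -- the idlest target wins
```

Least outstanding requests tends to help when request durations vary widely, since a slow target stops accumulating new work while it is busy.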
Network Load Balancer Overview
A Network Load Balancer operates at the fourth layer (the transport layer) of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. When the load balancer receives a connection request, it selects a target from the target group for the default rule. It then attempts to open a TCP connection to the selected target on the port specified in the listener configuration.
For TCP traffic, the load balancer selects a target using a flow hash algorithm based on the protocol, source IP address, source port, destination IP address, destination port, and TCP sequence number. For UDP traffic, the load balancer selects a target using a flow hash algorithm based on the protocol, source IP address, source port, destination IP address, and destination port. Because a UDP flow has the same source and destination for its entire lifetime, it is consistently routed to the same target.
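A toy version of flow-hash selection can be sketched in Python. The target addresses and the choice of SHA-256 are illustrative assumptions, not the actual hash a Network Load Balancer uses (and for TCP flows the real hash also mixes in the sequence number):

```python
import hashlib

# Hypothetical registered targets.
targets = ["10.0.1.10", "10.0.2.20", "10.0.3.30"]

def pick_target(proto, src_ip, src_port, dst_ip, dst_port):
    """Hash the flow tuple and map it to one registered target."""
    key = f"{proto}|{src_ip}|{src_port}|{dst_ip}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return targets[int.from_bytes(digest[:4], "big") % len(targets)]

# The same UDP flow always hashes to the same target...
a = pick_target("udp", "198.51.100.7", 40000, "203.0.113.9", 53)
b = pick_target("udp", "198.51.100.7", 40000, "203.0.113.9", 53)
assert a == b
# ...while a different source port may land on a different target.
print(a, pick_target("udp", "198.51.100.7", 40001, "203.0.113.9", 53))
```

This captures the property the text describes: selection is deterministic per flow, so a given UDP flow sticks to one target for its lifetime.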
Gateway Load Balancer Overview
Gateway Load Balancers let you deploy, scale, and manage virtual appliances such as firewalls, intrusion detection and prevention systems, and deep packet inspection systems. A Gateway Load Balancer combines a transparent network gateway with load balancing, allowing you to scale your virtual appliances with demand.
Gateway Load Balancers use Gateway Load Balancer endpoints to securely exchange traffic across VPC boundaries. A Gateway Load Balancer endpoint is a VPC endpoint that provides private connectivity between virtual appliances in the service provider VPC and application servers in the service consumer VPC. You deploy the Gateway Load Balancer in the same virtual private cloud (VPC) as the virtual appliances, and you register the virtual appliances with a target group for the Gateway Load Balancer.
Classic Load Balancer Overview
A Classic Load Balancer distributes incoming application traffic across multiple EC2 instances in multiple Availability Zones, which increases the fault tolerance of your applications. Elastic Load Balancing detects unhealthy instances and routes traffic only to the healthy ones.
Clients communicate with the load balancer through a single point of contact, which increases the availability of your application. As your needs change, you can add and remove instances from your load balancer without disrupting the overall flow of requests to your application. By default, the load balancer distributes traffic evenly across the Availability Zones that you enable for it. Enable cross-zone load balancing to distribute traffic evenly across all registered instances in all enabled Availability Zones.
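The effect of cross-zone load balancing on each instance's traffic share can be worked through with a small sketch (the zone names and instance counts here are made up):

```python
# Hypothetical zones: zone name -> number of registered instances.
zones = {"us-east-1a": 2, "us-east-1b": 8}

def per_instance_share(zones, cross_zone):
    """Fraction of total traffic each instance in a zone receives."""
    total = sum(zones.values())
    shares = {}
    for zone, count in zones.items():
        if cross_zone:
            # Traffic spreads evenly over all instances in all zones.
            shares[zone] = 1 / total
        else:
            # Each zone gets an equal slice, split among its instances.
            shares[zone] = (1 / len(zones)) / count
    return shares

print(per_instance_share(zones, cross_zone=False))
# zone 1a instances each get 25%; zone 1b instances each get only 6.25%
print(per_instance_share(zones, cross_zone=True))
# every instance gets 10%, regardless of zone
```

With uneven instance counts per zone, cross-zone load balancing evens out the per-instance load, which is why it is usually recommended for such layouts.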
Please feel free to write to firstname.lastname@example.org with any queries on AWS Elastic Load Balancing, and stay tuned for the next article.
Thank you!