Overview

Network Load Balancer (NLB) is an L4 load balancer built on Hyperplane, which makes it more elastic than the other ELB products from AWS.

Client IP preservation

Unlike a typical load balancer, which terminates incoming connections and initiates outbound connections to the targets, Hyperplane operates in a transparent mode: the target sees the client's IP information directly in the packets it receives.

There are several considerations I want to highlight when client IP preservation is enabled:

  1. NAT loopback, also known as hairpinning, is not supported when client IP preservation is enabled. If an instance is a client of a load balancer that it’s registered with, and it has client IP preservation enabled, the connection succeeds only if the request is routed to a different instance.
  2. You might encounter TCP/IP connection limitations related to observed socket reuse on the targets. These connection limitations can occur when a client, or a NAT device in front of the client, uses the same source IP address and source port when connecting to multiple load balancer nodes (e.g. client connects to one IP in each AZ) simultaneously. If the load balancer routes these connections to the same target, the connections appear to the target as if they come from the same source socket, which results in connection errors.
  3. When client IP preservation is enabled, targets must be in the same VPC as the Network Load Balancer, and traffic must flow directly from the Network Load Balancer to the target.
  4. Client IP preservation is not supported when a target group contains AWS PrivateLink ENIs, or the ENI of another Network Load Balancer. This will cause loss of communication to those targets.
  5. Client IP preservation can’t be disabled for instance and IP type target groups with UDP and TCP_UDP protocols.
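
Client IP preservation maps to the preserve_client_ip.enabled target group attribute. Here is a minimal boto3 sketch for checking and toggling it; the target group ARN is a placeholder:

    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-2")
    # Placeholder target group ARN.
    tg_arn = "arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/my-targets/0123456789abcdef"

    # Read the current setting.
    attrs = elbv2.describe_target_group_attributes(TargetGroupArn=tg_arn)["Attributes"]
    current = {a["Key"]: a["Value"] for a in attrs}
    print("preserve_client_ip.enabled =", current.get("preserve_client_ip.enabled"))

    # Disable it; not possible for instance and IP target groups using UDP or TCP_UDP (point 5 above).
    elbv2.modify_target_group_attributes(
        TargetGroupArn=tg_arn,
        Attributes=[{"Key": "preserve_client_ip.enabled", "Value": "false"}],
    )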

Health Checks

NLB uses active and passive health checks to determine whether a target is available to handle requests. Other AWS load balancers do not support passive health checks, so this is a feature unique to NLB and Hyperplane: a passive health check can trigger a failover within tens of milliseconds.
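
Passive health checks have no configuration surface; what the active health checks currently report can be read from the API. A boto3 sketch, with a placeholder target group ARN:

    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-2")
    # Placeholder target group ARN.
    tg_arn = "arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/my-targets/0123456789abcdef"

    # Each entry reports a target and its state as seen by the active health checks,
    # e.g. "healthy", "unhealthy" or "initial", plus a reason code when unhealthy.
    for desc in elbv2.describe_target_health(TargetGroupArn=tg_arn)["TargetHealthDescriptions"]:
        target = desc["Target"]
        health = desc["TargetHealth"]
        print(target["Id"], target.get("Port"), health["State"], health.get("Reason", ""))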

Fail Open

If all targets fail health checks at the same time in all enabled Availability Zones, the load balancer fails open: it allows traffic to all targets in all enabled Availability Zones, regardless of their health status.

UDP

UDP health checks are not supported. For a UDP service, target availability can be tested using non-UDP health checks on your target group.
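
As a sketch of what that looks like, here is a boto3 call that creates a UDP target group with a TCP health check; the name, VPC ID and ports are placeholders:

    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-2")
    response = elbv2.create_target_group(
        Name="my-udp-targets",              # placeholder name
        Protocol="UDP",
        Port=53,
        VpcId="vpc-0123456789abcdef0",      # placeholder VPC
        TargetType="instance",
        HealthCheckProtocol="TCP",          # UDP health checks are not supported
        HealthCheckPort="8080",             # a TCP port the application also listens on
        HealthCheckIntervalSeconds=10,
        HealthyThresholdCount=3,
        UnhealthyThresholdCount=3,
    )
    print(response["TargetGroups"][0]["TargetGroupArn"])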

TCP RST packets

If no data is sent through a TCP connection by either the client or the target for longer than the idle timeout, the connection is closed. If a client or a target sends data after the idle timeout period elapses, it receives a TCP RST packet to indicate that the connection is no longer valid.

The idle timeout value is 350 seconds for TCP flows, and 120 seconds for UDP flows.

Additionally, if a target becomes unhealthy, the load balancer sends a TCP RST for packets received on the client connections associated with that target, unless the unhealthy target triggers the load balancer to fail open. This behavior can be switched off with the target_health_state.unhealthy.connection_termination.enabled target group attribute, but connections are still subject to the idle timeout.
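
Turning that attribute off could look like the following boto3 sketch, again with a placeholder ARN:

    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-2")
    # Placeholder target group ARN.
    tg_arn = "arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/my-targets/0123456789abcdef"

    # Keep existing connections open when a target turns unhealthy; they will
    # still be closed once the idle timeout elapses.
    elbv2.modify_target_group_attributes(
        TargetGroupArn=tg_arn,
        Attributes=[
            {"Key": "target_health_state.unhealthy.connection_termination.enabled", "Value": "false"},
        ],
    )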

The number of TCP RST packets generated by the load balancer can be tracked with the TCP_ELB_Reset_Count CloudWatch metric.
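
Pulling that metric out of CloudWatch (namespace AWS/NetworkELB, dimension LoadBalancer in the net/<name>/<id> form) could look like this sketch; the dimension value is a placeholder:

    from datetime import datetime, timedelta, timezone

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-2")
    now = datetime.now(timezone.utc)

    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/NetworkELB",
        MetricName="TCP_ELB_Reset_Count",
        # Placeholder load balancer dimension value.
        Dimensions=[{"Name": "LoadBalancer", "Value": "net/my-load-balancer/1234567890abcdef"}],
        StartTime=now - timedelta(hours=1),
        EndTime=now,
        Period=300,              # 5-minute buckets
        Statistics=["Sum"],
    )
    for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Sum"])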

Availability Zone Isolation

By default, each load balancer node routes requests only to the healthy targets in its own Availability Zone. If you enable cross-zone load balancing, each load balancer node routes requests to the healthy targets in all enabled Availability Zones. See Cross-zone load balancing for an example comparing the two traffic distributions.
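
Cross-zone load balancing is controlled by the load_balancing.cross_zone.enabled load balancer attribute; a boto3 sketch with a placeholder ARN:

    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-2")
    # Placeholder load balancer ARN.
    lb_arn = "arn:aws:elasticloadbalancing:us-east-2:123456789012:loadbalancer/net/my-load-balancer/1234567890abcdef"

    elbv2.modify_load_balancer_attributes(
        LoadBalancerArn=lb_arn,
        Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
    )

Cross-zone behavior can also be overridden per target group through a target group attribute with the same key.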

Availability Zone DNS affinity

When using the default client routing policy, client connections are distributed across the load balancer AZs.

If your clients use the Route 53 Resolver, you can configure a zonal affinity client routing policy so that DNS answers favor, partially or fully, load balancer IP addresses from the client's own AZ.
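
The setting behind this is the dns_record.client_routing_policy load balancer attribute; a boto3 sketch with a placeholder ARN:

    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-2")
    # Placeholder load balancer ARN.
    lb_arn = "arn:aws:elasticloadbalancing:us-east-2:123456789012:loadbalancer/net/my-load-balancer/1234567890abcdef"

    # availability_zone_affinity keeps DNS answers fully zonal,
    # partial_availability_zone_affinity mostly zonal, any_availability_zone is the default.
    elbv2.modify_load_balancer_attributes(
        LoadBalancerArn=lb_arn,
        Attributes=[{"Key": "dns_record.client_routing_policy", "Value": "availability_zone_affinity"}],
    )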

If you are not using the Route 53 resolver, you could still implement this in your application with AZ-specific DNS names [az].[name]-[id].elb.[region].amazonaws.com, for example:

us-east-2b.my-load-balancer-1234567890abcdef.elb.us-east-2.amazonaws.com

Note that when using Availability Zone DNS affinity, cross-zone load balancing should be turned off; otherwise traffic from the load balancer nodes to the targets can still cross Availability Zone boundaries.
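
To sketch the application-side approach: a client running on EC2 can look up its own AZ from instance metadata (IMDSv2) and resolve the matching zonal DNS name. The load balancer DNS name below is a placeholder.

    import socket
    import urllib.request

    def current_az() -> str:
        # IMDSv2: fetch a session token, then read the instance's Availability Zone.
        token_request = urllib.request.Request(
            "http://169.254.169.254/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
        )
        token = urllib.request.urlopen(token_request, timeout=2).read().decode()
        az_request = urllib.request.Request(
            "http://169.254.169.254/latest/meta-data/placement/availability-zone",
            headers={"X-aws-ec2-metadata-token": token},
        )
        return urllib.request.urlopen(az_request, timeout=2).read().decode()

    # Placeholder load balancer DNS name.
    nlb_dns = "my-load-balancer-1234567890abcdef.elb.us-east-2.amazonaws.com"
    zonal_dns = f"{current_az()}.{nlb_dns}"

    # Resolving the zonal name should only return IP addresses from this AZ.
    addresses = {info[4][0] for info in socket.getaddrinfo(zonal_dns, 443, proto=socket.IPPROTO_TCP)}
    print(zonal_dns, addresses)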

References