Traffic distribution

NebCompute provides tools that keep your workloads scalable, highly available, and responsive by routing network traffic efficiently across multiple resources.

Traffic distribution improves system reliability by redirecting traffic to healthy servers in case of failure, ensuring high availability and fault tolerance. It optimizes performance by balancing network loads, reducing response times, and directing users to the nearest or fastest resource.

Key concepts

Load balancers

Load balancers distribute incoming traffic across multiple servers to ensure high availability and optimal performance. This prevents individual server instances from being overloaded and provides automatic failover in the event of an outage.
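
The sketch below illustrates the round-robin idea behind a load balancer in plain Python. It is not part of the NebCompute API, and the backend addresses are placeholders.

    import itertools

    # Hypothetical backend pool; replace with your own server addresses.
    backends = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]

    # Round-robin rotation: each incoming request is handed to the next
    # backend in the cycle, so no single instance absorbs all of the load.
    rotation = itertools.cycle(backends)

    def pick_backend():
        return next(rotation)

    for request_id in range(6):
        print(f"request {request_id} -> {pick_backend()}")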

Server groups

Server groups allow administrators to manage a cluster of related servers that share workloads. These groups can be configured with specific policies to ensure traffic is distributed efficiently.
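
As an illustration only, a server group can be thought of as a named set of instances plus a distribution policy. The model below is a minimal sketch and does not reflect NebCompute's actual configuration format.

    from dataclasses import dataclass, field

    # Illustrative model: a named set of instances that share a workload,
    # plus the policy used to spread traffic across them.
    @dataclass
    class ServerGroup:
        name: str
        members: list = field(default_factory=list)
        policy: str = "round_robin"  # e.g. "round_robin" or "least_connections"

    web_group = ServerGroup(name="web", members=["10.0.0.11:8080", "10.0.0.12:8080"])
    print(web_group)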

Health checks

Health checks automatically monitor the status of your servers so that traffic is routed only to healthy instances. This improves fault tolerance and keeps requests away from servers that are down or unresponsive.
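
A minimal sketch of the idea, assuming your backends accept TCP connections on the listed ports; the addresses and timeout are placeholders, and real health checks may probe an HTTP endpoint instead.

    import socket

    backends = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]

    def is_healthy(address, timeout=2.0):
        """Treat a backend as healthy if it accepts a TCP connection in time."""
        host, port = address.split(":")
        try:
            with socket.create_connection((host, int(port)), timeout=timeout):
                return True
        except OSError:
            return False

    # Route new traffic only to backends that currently pass the check.
    healthy = [b for b in backends if is_healthy(b)]
    print("routable backends:", healthy)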

Listeners

Listeners are rules and configurations that define how incoming traffic is processed before being forwarded to servers.
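
The sketch below models a listener as a rule that maps a front-end protocol and port on the load balancer to a target server group. The rule format and names are illustrative, not NebCompute's configuration syntax.

    # Illustrative listener rules: each front-end port on the load balancer
    # forwards traffic to a named server group on a backend port.
    listeners = {
        ("HTTPS", 443): {"target_group": "web", "target_port": 8080},
        ("HTTP", 80): {"target_group": "web", "target_port": 8080},
        ("TCP", 5432): {"target_group": "db", "target_port": 5432},
    }

    def resolve(protocol, port):
        """Return the forwarding rule for traffic arriving on this listener."""
        return listeners.get((protocol, port))

    print(resolve("HTTPS", 443))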

To ensure traffic is distributed properly across your servers, we recommend the following steps (a combined sketch follows the list):

  1. Distribute workloads across multiple servers with load balancers.
  2. Use listeners to direct incoming requests from a load balancer port to a specific server group.
  3. Enable health checks to regularly monitor server uptime and responsiveness and automatically redirect traffic from unhealthy instances.
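
The sketch below ties the three steps together in one illustrative flow: a listener forwards port 443 to a server group, a TCP health check filters out unhealthy backends, and requests are rotated across what remains. All names, addresses, and ports are placeholders rather than NebCompute configuration.

    import itertools
    import socket

    # Step 2: a listener on port 443 forwards to the "web" server group.
    server_groups = {"web": ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]}
    listener = {"port": 443, "target_group": "web"}

    # Step 3: keep only backends that currently pass a TCP health check.
    def is_healthy(address, timeout=2.0):
        host, port = address.split(":")
        try:
            with socket.create_connection((host, int(port)), timeout=timeout):
                return True
        except OSError:
            return False

    # Step 1: distribute requests round-robin across the healthy backends.
    healthy = [b for b in server_groups[listener["target_group"]] if is_healthy(b)]
    rotation = itertools.cycle(healthy) if healthy else None

    def route(request_id):
        if rotation is None:
            return f"request {request_id}: no healthy backends"
        return f"request {request_id} -> {next(rotation)}"

    for i in range(4):
        print(route(i))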

Need help?

If you have technical questions or run into any problems with NebCompute, get in touch with our Support team. We are here to help!