CloudRaya Documentation

Load Balancer

A Load Balancer distributes incoming traffic across multiple Virtual Machines to improve availability, scalability, and fault tolerance.

In CloudRaya, a Load Balancer is a network service that sits in front of your application and forwards requests to backend VMs inside a VPC.

It helps prevent single-server overload and ensures your application remains accessible even when traffic increases.

What Is a Load Balancer in CloudRaya?

A CloudRaya Load Balancer:

  • Listens on a public IP address
  • Receives incoming traffic on a public port
  • Forwards requests to one or more Virtual Machines on a private port
  • Operates inside a VPC and subnet
  • Uses a configurable load balancing algorithm

The Load Balancer itself does not host applications.

It only manages traffic distribution to backend VMs.

Load Balancer Operating Layer

The CloudRaya Load Balancer operates at the transport layer (OSI Layer 4).

This means traffic is distributed based on IP address and port, without inspecting application-level data.

Key characteristics:

  • Works at Layer 4 (TCP)
  • Forwards traffic based on:
    • Public IP
    • Public Port → Private Port
  • Does not inspect:
    • HTTP headers
    • URL paths
    • Hostnames
    • Cookies or request content

Because of this design, the CloudRaya Load Balancer delivers high performance, low latency, and simple traffic distribution.

Advanced application-layer features such as path-based routing or header-based rules should be handled by application-level proxies (for example: NGINX) running inside Virtual Machines.
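
To make the Layer-4 behavior concrete, here is a toy sketch of a transport-layer forwarder (illustrative only, not the CloudRaya implementation). It accepts a TCP connection on a listen port and relays raw bytes to a backend; the payload is opaque to it, so HTTP headers, paths, and cookies never influence the forwarding.

```python
import socket

def run_l4_forwarder(listen_port: int, backend_host: str, backend_port: int) -> None:
    """Toy Layer-4 forwarder: accepts one TCP connection and relays raw
    bytes to a backend VM. It never parses the payload -- no HTTP headers,
    URL paths, or cookies are visible at this layer. (Sketch only.)"""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", listen_port))
        srv.listen(1)
        client, _addr = srv.accept()
        with client, socket.create_connection((backend_host, backend_port)) as backend:
            data = client.recv(4096)        # opaque bytes to the balancer
            backend.sendall(data)           # forwarded unmodified
            client.sendall(backend.recv(4096))  # backend reply, also unmodified
```

A real Layer-4 balancer additionally tracks many concurrent connections and applies a distribution algorithm; this sketch keeps only the defining property, payload-blind forwarding.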

How Load Balancer Works with VPC and Virtual Machines

A Load Balancer is tightly integrated with CloudRaya networking.

When you create a Load Balancer:

  • It is deployed in a specific Cloud Zone
  • It is associated with:
    • One VPC
    • One subnet
  • It uses a Public IP as the entry point
  • It forwards traffic to VM private IPs inside the selected subnet

Only Virtual Machines inside the same subnet can be assigned as backend targets.
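
The subnet restriction can be pictured with a small helper, assuming the subnet is known by its CIDR block (the function and field names here are illustrative, not the CloudRaya API):

```python
import ipaddress

def eligible_backends(subnet_cidr: str, vm_private_ips: list) -> list:
    """Return only the VM private IPs that fall inside the Load Balancer's
    subnet -- only these VMs can be assigned as backend targets.
    (Illustrative helper; not part of the CloudRaya API.)"""
    subnet = ipaddress.ip_network(subnet_cidr)
    return [ip for ip in vm_private_ips if ipaddress.ip_address(ip) in subnet]
```

A VM in a different subnet of the same VPC (for example `10.0.2.5` against subnet `10.0.1.0/24`) would be filtered out.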

Key Components of a Load Balancer

Public IP

  • Each Load Balancer uses one Public IP
  • The Public IP can be:
    • Newly acquired during creation, or
    • Selected from existing unused Public IPs
  • This IP is what clients connect to

Rules and Port Mapping

Traffic forwarding is defined using rules.

Each rule specifies:

  • Public Port
    The port exposed to the internet (for example: 80, 443)

  • Private Port
    The port used by backend applications on the VM (for example: 80, 8080)

  • Assigned Virtual Machines
    One or more VMs that receive traffic

Multiple rules can exist on a single Load Balancer.
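
As a sketch, a rule set for a Load Balancer serving both HTTP and HTTPS might be modeled like this (an illustrative data model, not the CloudRaya API schema; ports and IPs are example values):

```python
from dataclasses import dataclass

@dataclass
class ForwardingRule:
    """One Load Balancer rule: a public port mapped to a private port on
    the assigned backend VMs. (Illustrative model only.)"""
    public_port: int      # port exposed to the internet
    private_port: int     # port the backend application listens on
    backend_ips: list     # private IPs of the assigned VMs

# A single Load Balancer can hold multiple rules:
rules = [
    ForwardingRule(public_port=80,  private_port=8080, backend_ips=["10.0.1.10", "10.0.1.11"]),
    ForwardingRule(public_port=443, private_port=8443, backend_ips=["10.0.1.10", "10.0.1.11"]),
]
```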

Load Balancing Algorithms

CloudRaya supports multiple traffic distribution methods at the transport layer (Layer 4).

  • Round Robin
    Traffic is distributed evenly across all backend Virtual Machines.
    Best suited for general workloads where backend VMs have similar capacity and performance characteristics.

  • Least Connection
    Traffic is forwarded to the backend VM with the fewest active connections.
    Ideal for workloads where request duration or connection load may vary between VMs.

  • Source
    Traffic is routed based on the source IP address of the client.
    Requests from the same client IP are consistently forwarded to the same backend VM, as long as it remains available.
    This method is useful for basic session persistence scenarios where application-level session handling is not available.
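
The three algorithms above can be sketched in a few lines each (simplified illustrations of the distribution logic, not the CloudRaya implementation; backend IPs are example values):

```python
import itertools
import hashlib

backends = ["10.0.1.10", "10.0.1.11", "10.0.1.12"]

# Round Robin: cycle through the backends in a fixed order.
_rr = itertools.cycle(backends)
def round_robin() -> str:
    return next(_rr)

# Least Connection: pick the backend with the fewest active connections
# (connection counts would be tracked by the balancer in practice).
active = {b: 0 for b in backends}
def least_connection() -> str:
    return min(active, key=active.get)

# Source: hash the client IP so the same client consistently reaches
# the same backend while it remains available.
def source(client_ip: str) -> str:
    digest = hashlib.sha256(client_ip.encode()).digest()
    return backends[int.from_bytes(digest[:4], "big") % len(backends)]
```

Note how Source trades even distribution for stickiness: two requests from the same IP always land on the same VM, which is what makes it useful for basic session persistence.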

The load balancing algorithm can be updated after the Load Balancer is created.

See: Load Balancer Algorithms

Default Behavior and Requirements

Before creating a Load Balancer, note the following:

  • At least one Virtual Machine must exist in the selected VPC
  • Backend VMs must be:
    • Running
    • Located in the same subnet
  • Network access must be allowed by:
    • Subnet ACL rules
    • VM firewall configuration (if applicable)

If no eligible VM exists, Load Balancer creation will be blocked.
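
The preconditions above amount to a simple eligibility check, sketched here with hypothetical `status` and `private_ip` fields (the real validation is performed by the CloudRaya panel; subnet ACLs and VM firewalls are not modeled):

```python
import ipaddress

def can_create_load_balancer(vms: list, subnet_cidr: str) -> bool:
    """Return True if at least one VM is running and inside the selected
    subnet -- the documented minimum for Load Balancer creation.
    (Illustrative check; field names are assumptions, not the real API.)"""
    subnet = ipaddress.ip_network(subnet_cidr)
    eligible = [
        vm for vm in vms
        if vm["status"] == "running"
        and ipaddress.ip_address(vm["private_ip"]) in subnet
    ]
    return len(eligible) >= 1
```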

When Should You Use a Load Balancer?

A Load Balancer is recommended when you need to:

  • Distribute traffic across multiple application servers
  • Improve availability and reduce single points of failure
  • Scale applications horizontally
  • Expose services securely using a single Public IP
  • Prepare infrastructure for production workloads

For single-VM or low-traffic setups, a Load Balancer may not be required.

Summary

  • A Load Balancer manages traffic, not compute
  • It operates inside a VPC and subnet
  • Uses one Public IP as an entry point
  • Distributes traffic using configurable algorithms
  • Improves availability, scalability, and reliability

To continue working with Load Balancers, see the related guides (for example: Load Balancer Algorithms).

© 2026 CloudRaya Product Team. All rights reserved.
