
LoadBalancer Service [BETA]

In UTHPC managed Kubernetes, you can publish services either to the University network or to the public internet by using the LoadBalancer Kubernetes Service.

This Service uses the same format as the ClusterIP or NodePort Services, but establishes a network endpoint that gives direct access to the Pods selected by the Service.

Considerations for using LoadBalancer

Using LoadBalancer potentially allows direct access from the internet to your application. Depending on the configuration chosen, it can open traffic to the whole internet, and there are no UTHPC-managed security filters in between. Use this method only if you are willing to commit to the security standard that direct internet access requires.

On the other hand, this method is the only one that supports publishing non-HTTP protocol traffic. Do make sure your applications use encrypted connections.
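As a sketch, a non-HTTP application such as a syslog receiver could expose a UDP port using the same ports format as the manifests below (the port numbers here are only illustrative):

  ports:
    - protocol: UDP # non-HTTP traffic, which Ingress-based publishing does not support
      port: 514 # illustrative port that clients connect to
      targetPort: 514 # illustrative port the application listens on inside the container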

There are two networks LoadBalancer can claim IP addresses from: a University "campus" network and a public network.

| Type | Purpose | Access | Subnet |
| --- | --- | --- | --- |
| Campus | For University of Tartu internal applications and software. | Limited by the NetworkPolicy of the namespace; accessible only from University of Tartu internal IP addresses, WiFi, or via VPN. | 172.16.232.0/24 |
| Public | For public applications and software. | Limited only by the NetworkPolicy of the namespace. | 193.40.46.0/24 |

Using a LoadBalancer is a two-step process:

  1. First, set up the Service itself to configure the IP address and interfaces.
  2. Second, allow traffic through the default NetworkPolicy, which is set in every namespace.

Creating a LoadBalancer

A LoadBalancer needs a specific Kubernetes manifest. To claim a campus network address, set the cilium.io/public-ip label to "false" (or omit it, as campus is the default):

apiVersion: v1
kind: Service
metadata:
  name: <service_name> # (1)!
  namespace: <namespace> # (2)!
  labels:
    cilium.io/public-ip: "false" # (7)!
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP # (3)!
      port: 80 # (4)!
      targetPort: 8080 # (5)!
  selector:
    app: myapp # (6)!
  1. Give a name to your Service.
  2. Make sure the Service runs in your namespace.
  3. Choose a protocol - either TCP or UDP.
  4. This is the port clients connect to in order to access your application.
  5. This is the port your application listens on inside the container.
  6. Make sure the selector matches a Pod which responds to the incoming traffic.
  7. This option controls whether the LoadBalancer claims a public or campus network address. Defaults to false, which means campus.
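One way to deploy the manifest is with kubectl apply, assuming you saved it as service.yaml (the filename is just an example):

$ kubectl apply -f service.yaml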

After deploying this configuration to the cluster, you can check the output of kubectl get services -n <namespace> to see which IP address was claimed for your use:

$ kubectl get services -n default
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
echo-service   LoadBalancer   10.109.161.86   172.16.232.2   80:30152/TCP   1m

In this case, the Service received the IP address 172.16.232.2, which it keeps until the Service is deleted.
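As a quick connectivity check, assuming the echo-service example above and a Pod that answers HTTP on the configured port, you can test the campus address from a University machine. Keep in mind that the request only succeeds once the NetworkPolicy step described below allows the traffic:

$ curl http://172.16.232.2/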

To claim a public network address instead, use the same manifest but set the cilium.io/public-ip label to "true":

apiVersion: v1
kind: Service
metadata:
  name: <service_name>
  namespace: <namespace>
  labels:
    cilium.io/public-ip: "true" # claims an address from the public 193.40.46.0/24 subnet
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  selector:
    app: myapp
The fields carry the same meanings as in the campus example above.

After deploying this configuration, check kubectl get services -n <namespace> again to see which IP address was claimed:

$ kubectl get services -n default
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
echo-service   LoadBalancer   10.109.161.86   193.40.46.245   80:30152/TCP   1m

In this case, the Service received the IP address 193.40.46.245, which it keeps until the Service is deleted.
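If you need the claimed address in a script, one option is kubectl's jsonpath output, shown here with the echo-service example from above:

$ kubectl get service echo-service -n default -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
193.40.46.245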

Allowing traffic through the default NetworkPolicy

A default NetworkPolicy is enforced on all namespaces in Kubernetes, which disallows external traffic to the namespace. This includes both traffic from other namespaces and traffic from external clients, if a LoadBalancer is being used.

Allowing traffic through is as simple as creating an allow rule that specifies which traffic can go where.

Best practices

The principle of least privilege dictates that you should only allow traffic that is needed. Please do not open the whole namespace up to external traffic if only one Pod requires it.

Here are two examples of permissive NetworkPolicies. Do keep in mind that with NetworkPolicies, you target Pod objects.

This option is suitable when you need to allow one or a few specific endpoints access to your deployment.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: <name>
  namespace: <namespace>
spec:
  podSelector:
    matchLabels:
      app: myapp # (1)!
  policyTypes:
    - Ingress # (2)!
  ingress:
    - from:
        - ipBlock:
            cidr: 192.168.1.10/32 # (3)!
      ports:
        - protocol: TCP # (4)!
          port: 80 # (5)!
  1. Make sure to select the correct Pod.
  2. Apply the rule only to incoming (Ingress) traffic. If you include Egress here as well, the policy starts managing outgoing connections too.
  3. Specify the IP address you want to allow traffic from, in this case 192.168.1.10. The /32 part specifies that only a single host is allowed.
  4. Allow TCP protocol.
  5. Allow traffic to a single port.

After applying this manifest, you should be able to access your Pod from the allowed IP address (192.168.1.10 in this example), on the correct port. If not, there is a mistake in the configuration.
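To double-check that the policy is in place, you can list the NetworkPolicies in the namespace and inspect the rules:

$ kubectl get networkpolicy -n <namespace>
$ kubectl describe networkpolicy <name> -n <namespace>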

This option is suitable when you want to allow traffic from everywhere to a Pod on a certain port. This use case is typical for public websites or services.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: <name>
  namespace: <namespace>
spec:
  podSelector:
    matchLabels:
      app: myapp # (1)!
  policyTypes:
    - Ingress # (2)!
  ingress:
    - ports:
        - protocol: TCP # (3)!
          port: 80 # (4)!
  1. Make sure to select the correct Pod.
  2. Apply the rule only to incoming (Ingress) traffic. If you include Egress here, then it starts managing outgoing connections as well.
  3. Allow TCP protocol.
  4. Allow traffic to a single port.

As you can see, removing the ipBlock part allows traffic from everywhere. After applying this manifest, you should be able to access your Pod via its claimed IP address, on the correct port. If not, there is a mistake in the configuration.

Feel free to write to support@hpc.ut.ee if you have any issues.