
Kubernetes quick start

This page is for people who are familiar with Kubernetes in general, and want to start using it as quickly as possible.

Requesting access

To obtain access to Kubernetes, please submit a Kubernetes request. Alternatively, you can email your request to support@hpc.ut.ee.

Policies set access at the tenant/namespace level. This means you receive a namespace and are granted access to that specific namespace, usually with administrator permissions.

UTHPC grants access to Kubernetes through a kubeconfig file, which contains the cluster certificate and your authentication token. This means you need to have kubectl installed. Users usually get access via their ETAIS account.
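
A minimal sketch of pointing kubectl at a downloaded kubeconfig (the file name uthpc-kubeconfig.yaml is just a placeholder):

# Use the kubeconfig for the current shell session only:
export KUBECONFIG=$HOME/uthpc-kubeconfig.yaml

# ...or make it the default for all kubectl commands:
mkdir -p ~/.kube
cp uthpc-kubeconfig.yaml ~/.kube/config

# Verify that the cluster responds:
kubectl cluster-info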

Using ETAIS access

The cluster allows access via the MyAccessID authentication system. This is the easiest way to obtain access, as everyone shares the same KUBECONFIG file.

You still need to write to support@hpc.ut.ee to get the necessary permissions, as by default a user has no permissions inside the cluster.

Authenticating via MyAccessID requires a few additional steps:

  • Install the kubelogin kubectl plugin, which is required to authenticate with MyAccessID (see the installation sketch after this list).
  • Add the shared KUBECONFIG file (below) to your local computer. If you add it to ~/.kube/config, it's automatically used for all kubectl commands.
  • When using this configuration, the first kubectl command you run opens a browser window. You can also log in explicitly with the kubectl oidc-login login command.
  • Inside this browser window you can log in with your institution's credentials. Upon success, you'll have access to the cluster.
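
A minimal installation sketch for kubelogin, assuming you use the Krew plugin manager or Homebrew (other installation methods are listed on the kubelogin project page):

# Install via Krew (assumes Krew itself is already installed):
kubectl krew install oidc-login

# ...or via Homebrew:
brew install kubelogin
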
kubectl configuration
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01Ea3hOVEV3TlRFek1Gb1hEVE13TURreE16RXdOVEV6TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTkt5CnBtNy9BVmFPQStnT1BEQzBtekZ6d0pzRUw3ZkRMN0taR3R1Y3RycUJIR3JaL0MyZ2pJbEpwN2pCZ0FDU0E2eW4KNEhqNXk0UTdTN0s0R0JhbGNya3QrV2duMkwyckxKK0NUYXhiYmh4alczRDR6dEdtanhJTUFSeXRUV2xDL1ZtVAphTUtCZ3pmTFY5LzBPNUxtM1J4cEFMbm9MN1dUS3lyTmxGR29aSWUxbTVjK0JyenZmZjRKa2dmYWVucEw3Uk5CCjM5TDRvQ3NVdFNXeDZUVGNSN25JTHRiUXZZV0doYnE2UHRzS3BDcmxzMXlSazJDS1QwQUI5akFKMHhzakxkckgKZVZEOFROUFl1aEhBRVhLSVZUenVNUm92Q29DZVVnK002Nk9MNHpJem81aFZadFJJRWtkNi9wSTI1NmpsNVFDMQpJZW5KTDFpK2VwazJvQWpac1RNQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZNRTRpQldSS0ptRTFaZEFJOTZGbXYzdWdSdkZNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFERXdBem0zd1BIcDcwcFhObHdzNmhTV2ZIRWQ1b1prOWlTSzFMTVhFNm4vZHBCQkhiagpMOUVyVlBnWXlpeFFzZFIwZEtKUEZYQlh5dDlERllPVzJqTzRRMUFBVks1U3RTMjk5K3lZUDBIS1ZrZU5STE40Cm1wbDE0Zy9xNW1mR05pRlIzVm93cmFoR3ZQc1R6bVhScTNMd1pHbFZFSXNRR2w5elhYaVZoV29FTllVN2JTa1IKM0FxS0dQc2VDTmRmTTE3TzVZTno0cUw4VDA1Q21zZ1V3dlUrSU5CdFFIcmxXQVhQN2wyR3h5NzBDdmlxUXh2Qgp2d3NaVkpkcXdJMEg0c3ZWNW5FbElLM2dGY2hsTWoxS2k2RTJORGJNRmY4aWNQc2kxTFo1dllHUnVDVEN2QmgrCnd4eVQwekxRd1A4STBiNWZ5V1V3WnBzMmErcVR3V2xxRVpjdgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://kubernetes.hpc.ut.ee:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: etais-user
  name: etais-user@kubernetes
current-context: etais-user@kubernetes
kind: Config
preferences: {}
users:
- name: etais-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://keycloak.hpc.ut.ee/realms/ETAIS
      - --oidc-client-id=kubernetes.hpc.ut.ee
      command: kubectl
      env: null
      provideClusterInfo: false    
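
Once the configuration is in place, the first kubectl command triggers the browser-based login, for example (the namespace is a placeholder):

# Opens a browser window for MyAccessID login on first use:
kubectl get pods -n <namespace>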

Functionalities

UTHPC provides several capabilities via different applications running in the Kubernetes cluster; the sections below describe what's currently available.

In case you need…

Here's a quick reference on what to do in common situations.

Publish your app to the outside world

To control and enforce best practices, monitoring, and security, publishing to the outside world isn't possible on your own. The UTHPC team does it through an HTTP proxy cluster called web.cs.ut.ee. If you need to publish an app, please contact UTHPC support with the name and port of your service and the domain you would like to use.

UTHPC admins direct the domain to the HTTP proxy cluster web.cs.ut.ee, install HTTPS certificates on the proxy, and route the traffic through the proxies to your service. It's also possible to enable specific settings at the proxy level.

On the Kubernetes side, publish your app with an Ingress object handled by the NGINX ingress controller:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  namespace: <namespace>
  annotations:
    cert-manager.io/cluster-issuer: vault-hpc-issuer # (1)
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: "<domain>" # (2)
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: application-service
            port:
              number: 80
  tls: 
  - hosts:
    - <domain> # (2)
    secretName: <domain> # (2)
  1. Use this issuer to provide network-level security between our proxy servers and the Kubernetes ingress controller.
  2. The domain can be anything that doesn't already exist in the cluster, on a first-come-first-served basis. Using the domain name as the secretName is recommended, but in the case of multiple TLS hosts you can change it.
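
After applying the manifest, you can check that the ingress exists and that cert-manager has issued the TLS certificate (a quick sketch; the Certificate object is created automatically from the annotation):

kubectl get ingress -n <namespace>
kubectl get certificate -n <namespace>   # READY turns True once the certificate is issued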

A database

Danger

While you can run a database on Kubernetes yourself, even Google raises some considerations about doing so.

The UTHPC Kubernetes cluster has a PostgreSQL operator available, which any cluster user can use to request a database in their namespace.
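
As an illustrative sketch only: the manifest below assumes a Zalando-style Postgres Operator, but the exact CRD depends on which operator the cluster runs, so verify the details with UTHPC support before using it.

apiVersion: "acid.zalan.do/v1" # assumption: Zalando Postgres Operator; your cluster's operator may use a different CRD
kind: postgresql
metadata:
  name: myapp-db
  namespace: <namespace>
spec:
  teamId: myapp        # the Zalando operator expects cluster names prefixed with the teamId
  numberOfInstances: 1
  volume:
    size: 1Gi
  users:
    myapp: []          # database role to create
  databases:
    myapp: myapp       # database name: owner role
  postgresql:
    version: "15"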

For a managed database, please write to UTHPC support with your requirements.

Persistent storage

You can request persistent storage through the StorageClasses feature with a PersistentVolumeClaim (PVC). Please keep your requests modest, as the space and performance of a PVC are tightly coupled.

Example of using a PVC in Kubernetes: requesting a 2Gi volume and mounting it at /data inside an NGINX container.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-volv-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
  namespace: default
spec:
  containers:
  - name: volume-test
    image: nginx:stable-alpine
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: volv
      mountPath: /data
    ports:
    - containerPort: 80
  volumes:
  - name: volv
    persistentVolumeClaim:
      claimName: longhorn-volv-pvc

Longhorn also supports ReadWriteMany (RWX) volumes, but these use NFS to provide the filesystem to pods. If your software can't run on NFS, don't use the RWX accessMode.
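
A minimal sketch of an RWX claim; the StorageClass name longhorn is an assumption, so check kubectl get storageclass for the actual name:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data-pvc
spec:
  accessModes:
    - ReadWriteMany          # served over NFS by Longhorn, so multiple pods can mount it
  storageClassName: longhorn # assumption: the Longhorn StorageClass name in this cluster
  resources:
    requests:
      storage: 2Gi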

Continuous delivery pipelines to deploy to Kubernetes

Delivery should be automated via service accounts with the lowest permissions possible. Here is an example of ServiceAccount, Role, and RoleBinding objects that follow a sensible least-privilege setup:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: cicd-serviceaccount
  namespace: <namespace>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cicd-serviceaccount-role
  namespace: <namespace>
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log", "configmaps", "secrets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apps", "extensions"]
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cicd-serviceaccount-rolebinding
  namespace: <namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cicd-serviceaccount-role
subjects:
- kind: ServiceAccount
  name: cicd-serviceaccount
  namespace: <namespace>
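
To use this service account from a pipeline, one option (a sketch; kubectl create token requires Kubernetes v1.24+) is to mint a short-lived token for it:

# Create a short-lived token for the service account:
TOKEN=$(kubectl create token cicd-serviceaccount -n <namespace>)

# Use it from the CI/CD job (the CA certificate comes from the kubeconfig above):
kubectl --server=https://kubernetes.hpc.ut.ee:6443 \
        --certificate-authority=ca.crt \
        --token="$TOKEN" \
        -n <namespace> get pods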

Example GitLab templates for building images in CI/CD pipelines and deploying them to Kubernetes can be found in this public repository.

These examples were built for our needs; yours may vary.

GPUs

The UTHPC Kubernetes cluster has several NVIDIA P100 GPUs available for workloads. These P100s are time-sliced to better handle multiple concurrent workloads. Cluster users can request GPUs for their workloads like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cuda-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cuda-app
  template:
    metadata:
      labels:
        app: cuda-app
    spec:
      containers:
      - name: cuda-container
        image: "k8s.gcr.io/cuda-vector-add:v0.1"
        resources:
          limits: 
            nvidia.com/gpu: 1 # (1)
      tolerations: # (2)
      - key: nvidia.com/gpu
        operator: Exists
        effect: NoSchedule
  1. This is the important part: it makes the GPU available to the pod and ensures the workload runs on a node with an existing and free GPU.
  2. This is also important: the taint keeps non-GPU workloads off the GPU machines, so GPU workloads need to tolerate it.
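
Once the pod has run, a quick way to confirm the GPU was used is to check the logs (the cuda-vector-add example image prints Test PASSED on success):

kubectl logs deployment/cuda-example
# Expected output ends with: Test PASSED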

Last update: 2023-09-28
Created: 2022-04-28