
Working with Kubernetes clusters

All CI/CD jobs now include a KUBECONFIG file with a context for every shared agent connection. Select the appropriate context before running kubectl commands in your CI/CD scripts.
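For example, a job can inspect the injected configuration before using it. A minimal sketch (the job name is an assumption; bitnami/kubectl is the image used in the examples below):

check-contexts:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    # KUBECONFIG is provided to the job; list every available context
    - echo "$KUBECONFIG"
    - kubectl config get-contexts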

Shared agent

The shared agent runs in the cluster, and you can use it to:

- Communicate with a cluster that is behind a firewall or NAT.
- Access API endpoints in the cluster in real time.
- Push information about events happening in the cluster.
- Enable a cache of Kubernetes objects, which is kept up to date with very low latency.

Update your .gitlab-ci.yml file to run kubectl commands

In the project where you want to run Kubernetes commands, edit your project's .gitlab-ci.yml file.

Under the script keyword, set your agent's context with kubectl config use-context. Use the format path/to/agent/repository:agent-name. For example:

deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    - kubectl config get-contexts
    - kubectl config use-context path/to/agent/repository:agent-name
    - kubectl get pods

If you are not sure what your agent's context is, open a terminal and connect to your cluster. Run kubectl config get-contexts.
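The output lists one context per agent connection, similar to the following (illustrative values; the cluster and user names depend on your setup):

CURRENT   NAME                                   CLUSTER     AUTHINFO
*         path/to/agent/repository:agent-name    <cluster>   <user>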

Kubernetes contexts for multiple environments

If a pipeline deploys to more than one environment, you can store the agent context in a CI/CD variable such as KUBE_CONTEXT and select it in a before_script, so each environment can target its own cluster:

deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  before_script:
    - if [ -n "$KUBE_CONTEXT" ]; then kubectl config use-context "$KUBE_CONTEXT"; fi
  script:
    - kubectl get pods
  environment:
    name: production
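To target a different cluster per environment, set KUBE_CONTEXT as an environment-scoped CI/CD variable, or define one job per environment. A sketch of the per-job approach (the agent paths and environment names are assumptions):

.deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  before_script:
    - if [ -n "$KUBE_CONTEXT" ]; then kubectl config use-context "$KUBE_CONTEXT"; fi
  script:
    - kubectl get pods

deploy:staging:
  extends: .deploy
  variables:
    KUBE_CONTEXT: path/to/agent/repository:staging-agent   # assumed agent name
  environment:
    name: staging

deploy:production:
  extends: .deploy
  variables:
    KUBE_CONTEXT: path/to/agent/repository:production-agent   # assumed agent name
  environment:
    name: production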

Helm

Because Helm images like alpine/helm do not include kubectl, pass the context to Helm directly with the --kube-context argument, and always use -n to set the namespace explicitly.

Example job YAML

deploy:helm:
  image: alpine/helm
  script:
    - helm list --kube-context "$KUBE_CONTEXT" -n "$KUBE_NAMESPACE"
  environment:
    name: review
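The same flags apply when deploying a release. A sketch of an upgrade job, assuming a hypothetical release name (my-release) and chart path (./chart):

deploy:helm:upgrade:
  image: alpine/helm
  script:
    # --kube-context selects the agent connection; -n pins the namespace
    - helm upgrade --install my-release ./chart --kube-context "$KUBE_CONTEXT" -n "$KUBE_NAMESPACE"
  environment:
    name: review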