Compare commits

2 Commits
v2...main

SHA1 Message Date
fa5133ee88 Update README.md 2025-03-23 20:41:41 +00:00
72d570731d update README 2025-01-01 17:35:32 +02:00

@@ -5,6 +5,8 @@ using Docker, with the intention to efficiently deploy to a k3s or k8s cluster u
# How to Use
## How to Configure in .github/workflows/main.yaml
```yaml
jobs:
  deploy_staging:
@@ -15,11 +17,20 @@ jobs:
        with:
          kust_config: kustomize/overlays/testing
        env:
          K3S_YAML: ${{ secrets.K3S_YAML }} # K3S_YAML must be defined as a repository secret; see "How to Setup K3S_YAML" below
      - name: Check output of previous step (kinda dummy)
        run: echo "The start time was ${{ steps.deploy.outputs.time }}"
```
## How to Setup K3S_YAML
We assume you use k3s; otherwise, adapt these steps to your cluster's kubeconfig.
- On the master node of the k3s cluster, copy `k3s.yaml` (`/etc/rancher/k3s/k3s.yaml`) to /tmp/ and make it readable for your user, then copy it to your machine: `scp your-node-123.uber5.com:/tmp/k3s.yaml /tmp/`
- Change the `server` entry to use the node's public DNS name
- Insert `tls-server-name: kubernetes` underneath the `server` key (see the sketch after this list). The value (`kubernetes` in this case) needs to be one of the names in the server certificate. If you get it wrong, the error message in the pipeline will tell you.
- Encode the file with `base64 -i /tmp/k3s.yaml -o /tmp/encoded` and set the result as the value of a secret named `K3S_YAML` in Gitea for the repository, under "Settings > Actions > Secrets"
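For reference, a minimal sketch of what the relevant part of the edited `/tmp/k3s.yaml` might look like after these steps (the host name is a placeholder for your master node's public DNS name):

```yaml
# Sketch of the cluster section of /tmp/k3s.yaml after the edits above.
# your-node-123.uber5.com is a placeholder for the master node's public DNS name.
clusters:
  - name: default
    cluster:
      server: https://your-node-123.uber5.com:6443
      tls-server-name: kubernetes                  # must be one of the names in the server certificate
      certificate-authority-data: "<leave as generated by k3s>"
```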
# Open Questions
- We use [kustomize](https://kustomize.io/). Is this overkill? Since our deployments are usually not very complex, this may add more technical complexity than necessary. We could go back to plain Kubernetes manifests and simply keep separate ones for staging and prod.
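The actual overlay contents are not part of this diff, but a hypothetical, minimal `kustomize/overlays/testing/kustomization.yaml` along these lines illustrates what kustomize currently buys us over plain manifests:

```yaml
# Hypothetical sketch of kustomize/overlays/testing/kustomization.yaml;
# the real overlay is not shown in this diff.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base                 # shared base manifests for all environments
namespace: testing             # assumption: overlays mostly differ in namespace/labels
patches:
  - path: replica-count.yaml   # hypothetical patch, e.g. fewer replicas for staging
```

If the overlays never grow beyond a namespace and a patch or two, plain per-environment manifests would indeed be the simpler option.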