# demo-lab manifests

Every piece of YAML the live demo at https://cloudtaser.io/demo-lab applies
to its cluster is served from this directory as a static file. Each scenario
step that mutates cluster state references one of these files by URL,
so anyone watching can read the exact YAML being applied without having
to trust the narration.

## Files

| File | Applied at step | What it does |
|---|---|---|
| [`payments-api-deployment.yaml`](./payments-api-deployment.yaml) | Provision time (Ansible), shown at step 1 | The stand-in "workload with a secret". A busybox httpd pretending to be a payments API; deployed into namespace `demo-app`. |
| [`cloudtaser-helm-values.yaml`](./cloudtaser-helm-values.yaml) | Step 2 | Values passed to the CloudTaser Helm chart — beacon address + fingerprint verification on. |
| [`postgres-annotations.yaml`](./postgres-annotations.yaml) | Step 5 | Annotations that opt the postgres Deployment into CloudTaser injection. |
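
As an illustration of the shape of the step-5 file, a patch that opts a Deployment into injection might look like the sketch below. This is **not** the contents of `postgres-annotations.yaml` — the real annotation keys are defined by the CloudTaser webhook, and both keys shown here are assumptions made for the example:

```yaml
# Hypothetical sketch only; see postgres-annotations.yaml for the real file.
spec:
  template:
    metadata:
      annotations:
        cloudtaser.io/inject: "true"            # assumed opt-in key
        cloudtaser.io/secret-path: "demo/postgres"  # assumed secret reference
```

A pod-template annotation patch like this would typically be applied with `kubectl patch deployment <name> -n demo-app --patch-file <file>`, which is what step 5 below does.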

## Why this exists

The demo's whole pitch is "don't take our word for it, watch the real bytes
move". A narration that claims "the cluster registers with the secret store"
while hiding the YAML that does it undercuts that pitch. Every live-cluster
mutation in the demo comes from a file in this directory, and every file
carries comments explaining *why* each field is set the way it is.

## Reading the steps in terms of these files

1. `kubectl get pods -n demo-app` — the pod defined by
   [`payments-api-deployment.yaml`](./payments-api-deployment.yaml) is
   already running. Ansible applied it when VM2 was provisioned.
2. `helm install cloudtaser cloudtaser/cloudtaser -f <this values file>` —
   installs operator + webhook + eBPF agent + wrapper, wired for the
   demo's beacon + fingerprint configuration.
3. `cloudtaser target fingerprint` — read-only, produces a UUID derived
   from the cluster's identity (API server cert hash, CA hash,
   kube-system UID, API endpoint).
4. `cloudtaser source register --fingerprint <id>` — writes the UUID
   from step 3 into OpenBao on the secret-store VM. The wrapper fetch in
   step 6 checks this; without it, the fetch is rejected.
5. `kubectl patch` with
   [`postgres-annotations.yaml`](./postgres-annotations.yaml) —
   opts the Deployment into injection. One YAML change is all
   CloudTaser asks of the app team.
6. `kubectl rollout restart` — new pod is mutated by the webhook, gets
   an init container that fetches the secret via beacon → bridge →
   OpenBao into a `memfd_secret` page.
7. `tail /var/log/cloudtaser-beacon.log` — shows the ciphertext relay
   metadata for the fetch that just happened. Beacon never sees
   plaintext.
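
Step 3's derivation can be pictured as a deterministic hash of the cluster's identity material. The sketch below is an assumption, not CloudTaser's actual algorithm: it supposes a UUIDv5 over the concatenated inputs the step lists, with an arbitrary namespace UUID chosen for the example. All input values are made up.

```python
import uuid

# Arbitrary namespace UUID, chosen only for this sketch.
SKETCH_NAMESPACE = uuid.UUID("6ba7b810-9dad-11d1-80b4-00c04fd430c8")

def cluster_fingerprint(apiserver_cert_hash: str,
                        ca_hash: str,
                        kube_system_uid: str,
                        api_endpoint: str) -> str:
    """Derive a stable UUID from cluster-identity inputs (illustrative only)."""
    material = "|".join([apiserver_cert_hash, ca_hash,
                         kube_system_uid, api_endpoint])
    # uuid5 is deterministic: same namespace + same material -> same UUID.
    return str(uuid.uuid5(SKETCH_NAMESPACE, material))

# Identical inputs always yield the identical fingerprint,
# which is what lets step 4 register it and step 6 verify it.
fp1 = cluster_fingerprint("sha256:aa11", "sha256:bb22",
                          "c3d4e5f6", "https://10.0.0.1:6443")
fp2 = cluster_fingerprint("sha256:aa11", "sha256:bb22",
                          "c3d4e5f6", "https://10.0.0.1:6443")
assert fp1 == fp2
```

The property that matters for the demo is only the determinism: any change to the cluster's identity material produces a different fingerprint, so a copied cluster cannot impersonate the registered one.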

## Not in this directory

- The cloudtaser-operator + wrapper + eBPF chart manifests themselves
  live in [`cloudtaser-helm`](https://github.com/cloudtaser/cloudtaser-helm).
- The cloudtaser-beacon + cloudtaser-onprem (bridge) binaries are
  installed on the two EU VMs by Ansible. Playbooks live in
  [`cloudtaser-demo/demo-lab/ansible`](https://github.com/cloudtaser/cloudtaser-demo/tree/main/demo-lab/ansible).
- The orchestrator + probe whitelist + step whitelist also live in
  [`cloudtaser-demo`](https://github.com/cloudtaser/cloudtaser-demo/tree/main/demo-lab/whitelists).
