See CloudTaser running.
Demo options
Recorded walkthrough of the same three-VM setup the Drive-it tab launches. No signup, no live infrastructure in the path — good for a first look or sharing in a Slack channel. Captures: deploy postgres → register cluster with EU vault → install CloudTaser → annotate (4 lines of YAML) → cycle pod → run all 14 curated probes including eBPF blocking /proc/PID/environ.
Captured against the live cloudtaser.io/demo-lab environment via Playwright. Self-hosted (no YouTube embed, no third-party tracking, no ads). Captions are an embedded WebVTT track for accessibility.
Three terminals stream real output from three real VMs: a target cluster in the US running cloudtaser-operator + wrapper + eBPF, a beacon relay in Frankfurt brokering connections on TCP 443, and an OpenBao in the Netherlands serving as the secret store. When someone is running the demo, you watch their steps in real time. When they finish or step away, it is your turn.
Architecture sketch - how this actually works when it's live
Official endpoints

| Endpoint | What it serves |
|---|---|
| `wss://demo-cluster.cloudtaser.io` | ttyd, read-only |
| `wss://demo-beacon.cloudtaser.io` | ttyd, read-only |
| `wss://demo-secret-store.cloudtaser.io` | ttyd, read-only |
| `https://demo.cloudtaser.io/api/*` | orchestrator: state, next, probe |
| Role | What runs here | Region & cost |
|---|---|---|
| US GKE Cluster | zonal GKE (free control plane) · 1 × n2d-highcpu-2 · AMD SEV · 30 GB pd-balanced · cloudtaser operator + wrapper + eBPF + demo-app pod · reached from VM2 via kubectl (container.developer IAM) | us-west1-a ~$30–46/mo |
| EU VM2 — Beacon + Orchestrator | cloudtaser-beacon relay · orchestrator Go daemon · two ttyds on :7681 (cluster pane: tmux → GKE) and :7682 (beacon log tail) | europe-west3 (Frankfurt) e2-micro ~$9/mo |
| EU VM3 — Secret Store | OpenBao · cloudtaser-cli · ttyd on :7683 | europe-west4 (Netherlands) e2-micro ~$9/mo |
Total: real GKE + 2 EU VMs, ~$48–64/month. Public DNS via Cloudflare; nothing is directly reachable from the public internet except through the Cloudflare tunnels. Honest disclosure: all three VMs are on GCP, which is US-parented, and per our own sovereign-deployment-guide GCP EU regions do NOT establish sovereignty under CLOUD Act analysis. For production sovereignty the secret store would move to Hetzner / OVH / Scaleway / IONOS / Exoscale / UpCloud.
Real stack, honest scope. GKE confidential-compute node (AMD SEV / n2d) in the US + two small EU VMs for beacon + OpenBao. All three on GCP; GCP EU regions don't clear our sovereignty guide's first-leg test (secret store wants an EU-owned provider). Hetzner migration planned. Every cluster mutation the demo performs is a file at cloudtaser.io/demo-lab/manifests/ — read the YAML, don't trust the narration.
Watch / run state machine
orchestrator state {
driver_session_id : string | null
driver_claimed_at : timestamp
driver_last_seen : timestamp // updated on each /api/demo/next or /api/demo/probe
demo_step : 0..7
viewers : set<session_id>
}
transitions:
visitor arrives -> registered as viewer; receives current state snapshot
driver_session_id null -> first viewer to POST /api/demo/claim wins (atomic compare-and-swap)
session active -> only the driver session can POST /next or /probe
driver_last_seen > 5min -> server releases session; broadcasts state change
session hits hard cap -> server releases at 20min regardless of activity
all viewers disconnect -> demo idles; ttyd streams persist (cluster state intact)
next viewer arrives -> same session / viewer logic applies
broadcast via Server-Sent Events on /api/demo/events:
{type:"claim", viewer_id:"..."}
{type:"step", step:N, by:"..."}
{type:"probe", name:"...", exit_code:0|non-zero}
{type:"release", reason:"idle"|"hard_cap"|"explicit"}
{type:"reset", new_state:{...}}
Probe whitelist (curated command palette)
Frontend sends {probe: "ptrace_attach"} as an opaque key. Orchestrator does a dict lookup against a fixed YAML. No user-supplied string ever reaches a shell. Command-injection surface is zero by construction.
probes:
ptrace_attach:
label: "ATTACK - ptrace attach against the wrapped postgres"
target: node-probe # privileged DaemonSet; hostPID; strace installed
cmd: kubectl exec -n cloudtaser-demo ds/node-probe -- \
sh -c 'P=$(pgrep postgres | head -1); \
timeout 2 strace -p $P -e trace=none -c 2>&1; \
echo PID=$P'
expected: "exit 0 + PID=<N> in output (attach succeeds today;
eBPF sys_enter_ptrace tracepoint records PTRACE_DENIED
audit event. Future LSM hook will return EPERM before
attach. Run ebpf_audit probe after to see the event.)"
read_environ:
label: "ATTACK - cat /proc/1/environ inside the postgres pod"
target: pod # kubectl exec into {{NAMESPACE}} deploy/postgres
cmd: kubectl exec -n {{NAMESPACE}} deploy/postgres -- \
sh -c "cat /proc/1/environ 2>&1; echo ---END---"
expected: "exit 0; K8s-injected envvars visible but DB_PASSWORD
absent - CloudTaser delivers the secret via memfd_secret
memory, not via the kernel environ array"
read_mem:
label: "ATTACK - dd if=/proc/1/mem inside the postgres pod"
target: pod
cmd: kubectl exec -n {{NAMESPACE}} deploy/postgres -- \
sh -c "dd if=/proc/1/mem bs=1 count=1 2>&1"
expected: "non-zero exit + 'Permission denied' - eBPF kprobe on
sys_openat blocks the open before any seekable FD exists"
show_secretmem:
label: "memfd_secret active (FD visible OR wrapper-attested)"
target: pod
strategy: |
Two-stage approach (5 x 1 s retry budget + log fallback):
Stage 1: iterate /proc/[PID]/fd inside the postgres container
looking for a symlink target containing "secretmem".
The forked postgres process inherits the wrapper's
memfd_secret FD (CLOEXEC explicitly cleared). It
surfaces as an anon-inode entry: "/secretmem (deleted)".
Wrapper PID 1 is skipped (dumpable=0 makes its fd
table unreadable to uid 65532).
Stage 2 (fallback): if no FD symlink found, kubectl-logs the
wrapper container and greps for the structured line
"memfd_secret":true - the canonical wrapper-emitted
proof that the syscall succeeded even when the FD is
not visible to the probe (musl/static, dumpable quirks,
or kernels without CONFIG_SECRETMEM falling back to
anon-mmap).
Failure of both stages = genuine protection regression.
expected: "exit 0; either 'PID N: ... /secretmem (deleted)' (FD
path) or 'wrapper-attested via log: ...' (log path)"
ebpf_audit:
label: "eBPF audit log (last 10 enforcement events)"
target: vm2-host # kubectl logs from orchestrator host
cmd: kubectl logs -n cloudtaser-system \
-l app=cloudtaser-ebpf --tail=500 2>&1 | \
{ grep -iE 'block|deny|EPERM|enforce|kprobe' || \
echo '(no enforcement events yet - run adversarial probes first)'; } | \
tail -10
expected: "exit 0; grep for block/deny/EPERM/enforce/kprobe events
matching the ptrace_attach and read_mem probes run earlier"
ls_fds:
label: "ls -la /proc/<app-pid>/fd (memfd_secret FDs visible; not readable)"
target: pod
cmd: kubectl exec -n {{NAMESPACE}} deploy/postgres -- \
sh -c 'P=$(pgrep postgres | head -1); \
echo PID=$P; ls -la /proc/$P/fd | head -15'
expected: "exit 0; fd table of the wrapped postgres process (not
PID 1). Entries backed by memfd_secret appear as
anon-inode symlinks; reading them is blocked because
the backing pages are not in the kernel direct map"
kubectl_pods:
label: "kubectl get pods -o wide (informational)"
target: vm2-host
cmd: kubectl get pods -n {{NAMESPACE}} -o wide
expected: "exit 0; live pod list for the per-session demo-XXXX-XXXX
namespace; 'postgres' pod must be Running"
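For completeness, here is what a probe request looks like from the driver's side, sketched with curl. The path follows the state-machine section above; the real frontend also carries the CSRF token, and the orchestrator refuses the call unless your session currently holds the driver role:
# The body carries only the whitelist key; the actual command never leaves
# the orchestrator, which resolves the key against the fixed YAML above.
$ curl -X POST https://demo.cloudtaser.io/api/demo/probe \
-H 'Content-Type: application/json' \
-d '{"probe": "ptrace_attach"}'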
Security analysis for the probe palette
- Whitelist, not freeform. The frontend sends `{probe: "ptrace_attach"}`. The orchestrator looks the key up; user-supplied text never reaches a shell, an `eval`, or command assembly.
- Commands target the demo pod only. All `kubectl exec` calls are hardcoded to a per-session `demo-XXXX-XXXX` namespace, targeting `deploy/postgres` or the privileged `cloudtaser-demo/node-probe` DaemonSet. The orchestrator's K8s service account has RBAC limited to `create` on `pods/exec` in namespaces `demo-app` and `cloudtaser-demo`, `get` on `pods/log` in `cloudtaser-system`, and `get` on nodes (a verification sketch follows this list). It cannot reach other namespaces, the host, or anything outside the cluster.
- Demo pod is sacrificial. `postgres` runs with no host-network access and mounts only synthetic secrets injected via CloudTaser. No customer material ever touches this stack.
- All probes are read-only with bounded output. Each command has a `timeout` wrapper; output is `head`'d to ~8 KB. No writes, no `curl`, no network egress from the pod.
- Rate-limited. 1 probe / 2 seconds per active session; hard cap of 30 probes per session; the session itself is hard-capped at 20 minutes. Abuse doesn't compound; reset returns to baseline.
- Probes can demonstrate bugs, not exploit them. If a probe that should block somehow succeeds, that IS a finding: valuable signal that feeds directly into the pentest engagement, and every watcher sees it in real time.
- Probe output doesn't leak. Output streams into the cluster terminal pane (same as the Next-button output). All viewers see the same stream. Nothing private is produced because nothing private exists on these VMs.
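The RBAC scope is checkable rather than taken on faith. A sketch using impersonation; the service-account name below is hypothetical, so substitute whatever the deployment actually creates:
# Hypothetical SA name; replace with the orchestrator's real service account.
$ SA=system:serviceaccount:cloudtaser-demo:demo-orchestrator
# Inside the allowed scope this should answer "yes"...
$ kubectl auth can-i create pods --subresource=exec -n demo-app --as "$SA"
# ...and outside it, "no".
$ kubectl auth can-i create pods --subresource=exec -n kube-system --as "$SA"
$ kubectl auth can-i get secrets -n cloudtaser-system --as "$SA"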
Why a real GKE cluster (not docker-compose on one box)
- Architectural authenticity. Procurement teams ask "what am I actually seeing here?" A demo on one box invites every conceivable doubt. A real zonal GKE cluster with a confidential-compute node pool + two separate EU VMs + real beacon protocol over the Atlantic = exactly the topology a customer would run in production.
- Confidential-compute target is a real one. The GKE node runs n2d-highcpu-2 with `confidential_nodes.enabled = true`. AMD SEV is active: `/proc/cpuinfo` shows `sev` and `dmesg` shows "AMD Memory Encryption Features active" (a node-side check sketch follows this list). The `cc_*` probes in the palette surface this live for all viewers. (SEV, not SEV-SNP: GKE's confidential_nodes API defaults to SEV on n2d; SEV-SNP on GKE requires preview flags not worth the complexity for an MVP.)
- Honest EU framing. The two EU VMs are in `europe-west3` (Frankfurt) and `europe-west4` (Netherlands), real EU data-centre regions. Still GCP, which is US-parented, and that does NOT clear the sovereign deployment decision guide's first-leg test. For real sovereignty: Hetzner / OVH / Scaleway / IONOS / Exoscale / UpCloud. Migration to a real EU-owned provider for the secret store is planned as a follow-up.
- Beacon authenticity. GKE node in the US, VM3 (secret store) in the EU, VM2 (beacon) between them in the EU. Bridge<>broker traffic actually crosses the Atlantic; latency is real, not simulated. TLS terminates at the bridge (VM3) and the broker (the GKE-side wrapper), not at the relay.
- Cost reality. ~$48-64/month. GKE zonal control plane is free inside GCP's monthly credit; only the 1 CC node + 2 small EU VMs cost anything.
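The SEV evidence can be pulled straight from the node. A sketch via the same privileged node-probe DaemonSet the probe palette uses, assuming its image ships grep and that its privileges allow reading the kernel log:
# Expect one "sev" flag plus the "AMD Memory Encryption Features active" line.
$ kubectl exec -n cloudtaser-demo ds/node-probe -- \
sh -c "grep -m1 -wo sev /proc/cpuinfo; dmesg | grep -i 'memory encryption'"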
How Claude (or anyone) runs it manually
- Open `https://cloudtaser.io/demo-lab` in a browser: rogbox.local Chromium, any headless browser, any evaluator's laptop. No client to install.
- If the demo is idle: press "My turn" in the role banner to start the session.
- If the demo is in use: viewer mode by default. See the active session's steps and probes in real time. Wait for their 5-min idle timeout or their 20-min hard cap, then press "My turn" when it becomes available.
- For automated regression in CI: `GET /api/demo/state` returns a JSON snapshot (current step, last-probe exit code, session identity hash, viewer count); see the sketch after this list. A Playwright harness on rogbox or in GitHub Actions can drive the scenario end-to-end and assert the expected exit code per probe. That harness is the canonical "is the demo alive" CI check.
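A minimal liveness sketch against that endpoint; jq and the asserted field are illustrative (the field name comes from the state-machine sketch above), and the Playwright harness remains the real end-to-end check:
# Fails the CI step if the orchestrator is unreachable, returns an HTTP error,
# or serves a snapshot without a demo_step field.
$ curl -fsS https://demo.cloudtaser.io/api/demo/state | jq -e '.demo_step != null' \
&& echo "demo orchestrator answering"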
Shipped
- Infrastructure: Terraform in `cloudtaser-terraform`, merged and applied. GKE zonal in `us-west1-a` with 1 × n2d-highcpu-2 confidential-compute node. VM2 beacon + orchestrator in `europe-west3`, VM3 OpenBao + cloudtaser-bridge in `europe-west4`. Two Cloudflare Tunnels, per-VM SAs, IAP-only admin SSH, no imperative `gcloud create`.
- Ansible: three idempotent playbooks (no cloud-init). `vm2-beacon.yml` installs the beacon binary + orchestrator + cluster/beacon tmux panes; `vm3-secret-store.yml` installs OpenBao dev-mode + cloudtaser-cli + cloudtaser-bridge + a demo-secret tmux pane tailing the OpenBao journal. systemd everywhere; inventory lives alongside the playbooks in `cloudtaser-demo`.
- Orchestrator: Go daemon on VM2. Watch/run state machine (one active session at a time, 5 min idle, 20 min hard cap), SSE broadcast to all watchers, CSRF-gated API on `demo.cloudtaser.io`, step & probe whitelists in `cloudtaser-demo`, fingerprint template substitution so the register step (step 3) can use the session UUID from step 2.
- Scenario + probes: 7 steps in `step-whitelist.yaml`, 12 probes in `probe-whitelist.yaml`, manifests published at /demo-lab/manifests/ so every cluster mutation is readable YAML.
- Frontend wiring: live `<iframe>`s to ttyd on all three panes, Next / End+Reset / probe controls wired to `fetch('/api/demo/*')`, SSE auto-reconnect, auto-expand layout on session start, tab-close release via `navigator.sendBeacon`, action-named step buttons.
Not yet shipped
- Playwright CI: `demo-lab/test/drive.py` exists and drives all 7 steps locally; needs a scheduled GHA job.
- Hardening: drop the bridge's `--dev-mode` (which currently auto-approves cluster fingerprints) and replace the shared RPC token with a Secret-Manager-sourced value.
--dev-mode(currently auto-approves cluster fingerprints), replace shared RPC token with Secret-Manager-sourced value. - EU sovereignty migration: move VM3 OpenBao to Hetzner / OVH / Scaleway / IONOS / Exoscale / UpCloud once the non-GCP account is set up. See the maturity roadmap.
Want even more? Behind this demo: how it actually works ↗
You bring the cluster. We bring the Helm chart. Works on any CNCF-conformant Kubernetes. Honest version: without the three preconditions in the sovereign deployment decision guide, you are running the "better-than-K8s-Secrets" tier, not the "ciphertext-under-CLOUD-Act" tier — and we'd rather you know that before you run helm install.
Prereqs
- `kubectl` 1.28+, `helm` 3.12+
- Kubernetes 1.28+ (GKE / EKS / AKS / k3s / kubeadm)
- Node kernel Linux 5.14+ with `CONFIG_SECRETMEM=y` (COS, Bottlerocket, AL2023, Ubuntu 22.04+ are fine out of the box)
- For eBPF synchronous enforcement: `CONFIG_BPF_KPROBE_OVERRIDE=y` on nodes (see the check sketch after this list)
- An OpenBao / HashiCorp Vault endpoint reachable from the cluster (see note below on sovereign hosting)
- For the sovereignty claim to hold end-to-end: OpenBao on an EU-owned provider and confidential-compute node SKUs and the kprobe-override kernel. Any two of the three gets you the posture improvement but not the headline guarantee; see scope.
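A quick node-side sketch for both kernel options. Where the config lives depends on the image: many distros ship /boot/config-$(uname -r), others (COS-style images) expose /proc/config.gz, so try whichever exists:
# Run on a node or in a privileged debug pod; "=y" means built in.
$ grep -E 'CONFIG_SECRETMEM=|CONFIG_BPF_KPROBE_OVERRIDE=' /boot/config-$(uname -r) \
|| zcat /proc/config.gz | grep -E 'CONFIG_SECRETMEM=|CONFIG_BPF_KPROBE_OVERRIDE='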
TL;DR — install
# Adds the chart repo + installs operator, webhook, eBPF agent, wrapper
$ helm repo add cloudtaser https://charts.cloudtaser.io
$ helm repo update
$ helm install cloudtaser cloudtaser/cloudtaser \
--namespace cloudtaser --create-namespace \
--set secretStore.address=https://openbao.example.eu:8200
TL;DR — annotate a pod
metadata:
annotations:
cloudtaser.io/inject: "true"
cloudtaser.io/secret-paths: "secret/data/db/credentials"
cloudtaser.io/env-map: "password=DB_PASSWORD"
cloudtaser.io/vault-auth-method: "token"
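If you would rather patch a running Deployment than edit its manifest, here is a hedged equivalent. It assumes the annotations belong on the pod template, which is what the injection webhook sees when the pod is recreated, and `my-app` stands in for your workload:
# Add the four CloudTaser annotations to the pod template in one patch.
$ kubectl patch deployment my-app --type merge -p '
{"spec":{"template":{"metadata":{"annotations":{
"cloudtaser.io/inject":"true",
"cloudtaser.io/secret-paths":"secret/data/db/credentials",
"cloudtaser.io/env-map":"password=DB_PASSWORD",
"cloudtaser.io/vault-auth-method":"token"}}}}}'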
TL;DR — verify
$ kubectl rollout restart deployment my-app
$ kubectl logs -l app=my-app -c cloudtaser-init
# secrets fetched, memfd handoff, eBPF registered
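Optionally, mirror the demo's read_environ probe against your own pod. A sketch assuming a POSIX shell and `tr` in the app image (add `-n` for your namespace as needed):
# DB_PASSWORD must NOT appear: CloudTaser hands the value over via
# memfd_secret-backed memory, not the kernel environ array.
$ kubectl exec deploy/my-app -- \
sh -c 'tr "\0" "\n" < /proc/1/environ | grep DB_PASSWORD || echo "DB_PASSWORD not in environ (good)"'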
Want to go deeper?
- Full getting-started guide — OpenBao setup, node kernel verification, annotation reference, troubleshooting.
- Sovereign Deployment Decision Guide — preconditions, decision trees, silent-failure anti-patterns. Read this before anything else if the audit story matters.
- Preview status & roadmap — honest picture of where we are (preview, pentest scheduling post-stabilization end-May 2026, SOC 2 Type II Q3 2027).
- Source on GitHub — operator, wrapper, eBPF agent, proxies, CLI. Cosign-signed images, SBOM per release.