DEMO LAB · PRIVATE · UNLISTED

See CloudTaser running.

Demo options

LIVE DEMO · WATCH OR DRIVE · ONE DRIVER AT A TIME

Three terminals stream real output from three real VMs: a target cluster in the US running cloudtaser-operator + wrapper + eBPF, a beacon relay in Frankfurt brokering connections on TCP 443, and an OpenBao instance in the Netherlands serving as the secret store. When someone is driving the demo, you watch their steps in real time. When they finish or step away, it is your turn.

TARGET CLUSTER · US · GKE n2d · AMD SEV · no workload deployed
  Demo idle — press “My turn” to start the 7-step live demo.
BEACON RELAY · EU·FRA · TCP 443 · mTLS idle · awaiting bridge
SECRET STORE · EU·NL · OpenBao · no cluster bound
  OpenBao at demo-secret-store.cloudtaser.io
  Step 3 will register your cluster here (“Session is registering this cluster…” appears while it does).
STEP 1 / 7 · postgres install · click “Next” to apply
Architecture sketch - how this actually works when it's live

Three official terminal endpoints plus the orchestrator API, all reached from the browser (xterm.js via ttyd WebSocket):

  wss://demo-cluster.cloudtaser.io        ttyd, read-only
  wss://demo-beacon.cloudtaser.io         ttyd, read-only
  wss://demo-secret-store.cloudtaser.io   ttyd, read-only
  https://demo.cloudtaser.io/api/*        orchestrator: state, next, probe

All fronted by Cloudflare: TLS · Turnstile · rate-limit · Access policy.
Role, what runs here, region & cost:

  US GKE Cluster
    zonal GKE (free control plane) · 1 × n2d-highcpu-2 · AMD SEV · 30 GB pd-balanced · cloudtaser operator + wrapper + eBPF + demo-app pod · reached from VM2 via kubectl (container.developer IAM)
    us-west1-a · ~$30–46/mo
  EU VM2 — Beacon + Orchestrator
    cloudtaser-beacon relay · orchestrator Go daemon · two ttyds on :7681 (cluster pane: tmux → GKE) and :7682 (beacon log tail)
    europe-west3 (Frankfurt) · e2-micro ~$9/mo
  EU VM3 — Secret Store
    OpenBao · cloudtaser-cli · ttyd on :7683
    europe-west4 (Netherlands) · e2-micro ~$9/mo
Total: real GKE + 2 EU VMs, ~$48–64/month. Public DNS via Cloudflare; nothing is directly reachable from the public internet except through the Cloudflare tunnels.
Honest disclosure: all three VMs are on GCP. GCP is US-parented — per our own sovereign-deployment-guide, GCP EU regions do NOT establish sovereignty under CLOUD Act analysis. For production sovereignty the secret store would move to Hetzner / OVH / Scaleway / IONOS / Exoscale / UpCloud.

Real stack, honest scope. GKE confidential-compute node (AMD SEV / n2d) in the US + two small EU VMs for beacon + OpenBao. All three on GCP; GCP EU regions don't clear our sovereignty guide's first-leg test (secret store wants an EU-owned provider). Hetzner migration planned. Every cluster mutation the demo performs is a file at cloudtaser.io/demo-lab/manifests/ — read the YAML, don't trust the narration.

Watch / run state machine

orchestrator state {
    driver_session_id : string | null
    driver_claimed_at : timestamp
    driver_last_seen  : timestamp   // updated on each /api/demo/next or /api/demo/probe
    demo_step         : 0..7
    viewers           : set<session_id>
}

transitions:
  visitor arrives         -> registered as viewer; receives current state snapshot
  driver_session_id null  -> first viewer to POST /api/demo/claim wins (atomic compare-and-swap)
  session active          -> only the driver session can POST /next or /probe
  driver_last_seen > 5min -> server releases session; broadcasts state change
  session hits hard cap   -> server releases at 20min regardless of activity
  all viewers disconnect  -> demo idles; ttyd streams persist (cluster state intact)
  next viewer arrives     -> same session / viewer logic applies
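
The claim transition above is the only contended one; a minimal Go sketch of how the orchestrator can make it an atomic compare-and-swap (type and function names here are illustrative, not the real daemon's API):

package orchestrator

import (
    "errors"
    "sync"
    "time"
)

// state mirrors the orchestrator state block above; a single mutex makes the
// claim a true compare-and-swap.
type state struct {
    mu              sync.Mutex
    driverSessionID string // "" means no driver
    driverClaimedAt time.Time
    driverLastSeen  time.Time
    demoStep        int // 0..7
}

const (
    idleTimeout = 5 * time.Minute  // driver_last_seen > 5min -> release
    hardCap     = 20 * time.Minute // release at 20min regardless of activity
)

var errAlreadyClaimed = errors.New("another session is driving")

// claim: first viewer to POST /api/demo/claim wins; everyone else stays a viewer.
func (s *state) claim(sessionID string, now time.Time) error {
    s.mu.Lock()
    defer s.mu.Unlock()
    s.expireLocked(now)
    if s.driverSessionID != "" && s.driverSessionID != sessionID {
        return errAlreadyClaimed
    }
    s.driverSessionID = sessionID
    s.driverClaimedAt = now
    s.driverLastSeen = now
    return nil
}

// expireLocked implements the idle and hard-cap releases; cluster state itself
// is left intact, exactly as the transitions above describe.
func (s *state) expireLocked(now time.Time) {
    if s.driverSessionID == "" {
        return
    }
    if now.Sub(s.driverLastSeen) > idleTimeout || now.Sub(s.driverClaimedAt) > hardCap {
        s.driverSessionID = ""
    }
}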

broadcast via Server-Sent Events on /api/demo/events:
  {type:"claim",   viewer_id:"..."}
  {type:"step",    step:N, by:"..."}
  {type:"probe",   name:"...", exit_code:0|non-zero}
  {type:"release", reason:"idle"|"hard_cap"|"explicit"}
  {type:"reset",   new_state:{...}}
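
A hedged sketch of how those payloads could be typed and framed on the Go side (field names are inferred from the JSON shapes above; the real daemon may differ):

package orchestrator

import (
    "encoding/json"
    "fmt"
    "net/http"
)

// event covers every payload shape listed above; unused fields are omitted
// from the JSON when empty.
type event struct {
    Type     string          `json:"type"`                // claim | step | probe | release | reset
    ViewerID string          `json:"viewer_id,omitempty"`
    Step     int             `json:"step,omitempty"`
    By       string          `json:"by,omitempty"`
    Name     string          `json:"name,omitempty"`
    ExitCode *int            `json:"exit_code,omitempty"` // pointer so exit code 0 still serializes
    Reason   string          `json:"reason,omitempty"`    // idle | hard_cap | explicit
    NewState json.RawMessage `json:"new_state,omitempty"`
}

// writeSSE frames one event for /api/demo/events as "data: <json>\n\n" and
// flushes immediately so every watcher sees it in real time.
func writeSSE(w http.ResponseWriter, ev event) error {
    b, err := json.Marshal(ev)
    if err != nil {
        return err
    }
    if _, err := fmt.Fprintf(w, "data: %s\n\n", b); err != nil {
        return err
    }
    if f, ok := w.(http.Flusher); ok {
        f.Flush()
    }
    return nil
}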

Probe whitelist (curated command palette)

Frontend sends {probe: "ptrace_attach"} as an opaque key. Orchestrator does a dict lookup against a fixed YAML. No user-supplied string ever reaches a shell. Command-injection surface is zero by construction.
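In practice that lookup is a few lines of Go; a sketch under the assumption that the whitelist is the probe-whitelist.yaml mentioned under Shipped, with the struct trimmed to the fields shown below:

package orchestrator

import (
    "fmt"
    "os"

    "gopkg.in/yaml.v3"
)

// probe is one whitelist entry; cmd is authored by us, never by the visitor.
type probe struct {
    Label    string `yaml:"label"`
    Target   string `yaml:"target"`
    Cmd      string `yaml:"cmd"`
    Expected string `yaml:"expected"`
}

type probeFile struct {
    Probes map[string]probe `yaml:"probes"`
}

// loadProbes parses the fixed YAML once at startup.
func loadProbes(path string) (map[string]probe, error) {
    raw, err := os.ReadFile(path)
    if err != nil {
        return nil, err
    }
    var pf probeFile
    if err := yaml.Unmarshal(raw, &pf); err != nil {
        return nil, err
    }
    return pf.Probes, nil
}

// lookupProbe treats the visitor-supplied key as opaque: it either matches a
// whitelist entry exactly or the request is rejected. The key is never
// concatenated into a command line.
func lookupProbe(probes map[string]probe, key string) (probe, error) {
    p, ok := probes[key]
    if !ok {
        return probe{}, fmt.Errorf("unknown probe %q", key)
    }
    return p, nil
}

The whitelist entries themselves: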

probes:
  ptrace_attach:
    label: "ATTACK - ptrace attach against the wrapped postgres"
    target: node-probe  # privileged DaemonSet; hostPID; strace installed
    cmd:   kubectl exec -n cloudtaser-demo ds/node-probe -- \
             sh -c "P=$(pgrep postgres | head -1); \
                    timeout 2 strace -p $P -e trace=none -c 2>&1; \
                    echo PID=$P"
    expected: "exit 0 + PID=<N> in output (attach succeeds today;
               eBPF sys_enter_ptrace tracepoint records PTRACE_DENIED
               audit event. Future LSM hook will return EPERM before
               attach. Run ebpf_audit probe after to see the event.)"

  read_environ:
    label: "ATTACK - cat /proc/1/environ inside the postgres pod"
    target: pod  # kubectl exec into {{NAMESPACE}} deploy/postgres
    cmd:   kubectl exec -n {{NAMESPACE}} deploy/postgres -- \
             sh -c "cat /proc/1/environ 2>&1; echo ---END---"
    expected: "exit 0; K8s-injected envvars visible but DB_PASSWORD
               absent - CloudTaser delivers the secret via memfd_secret
               memory, not via the kernel environ array"

  read_mem:
    label: "ATTACK - dd if=/proc/1/mem inside the postgres pod"
    target: pod
    cmd:   kubectl exec -n {{NAMESPACE}} deploy/postgres -- \
             sh -c "dd if=/proc/1/mem bs=1 count=1 2>&1"
    expected: "non-zero exit + 'Permission denied' - eBPF kprobe on
               sys_openat blocks the open before any seekable FD exists"

  show_secretmem:
    label: "memfd_secret active (FD visible OR wrapper-attested)"
    target: pod
    strategy: |
      Two-stage approach (5 x 1 s retry budget + log fallback):
        Stage 1: iterate /proc/[PID]/fd inside the postgres container
                 looking for a symlink target containing "secretmem".
                 The forked postgres process inherits the wrapper's
                 memfd_secret FD (CLOEXEC explicitly cleared). It
                 surfaces as an anon-inode entry: "/secretmem (deleted)".
                 Wrapper PID 1 is skipped (dumpable=0 makes its fd
                 table unreadable to uid 65532).
        Stage 2 (fallback): if no FD symlink found, kubectl-logs the
                 wrapper container and greps for the structured line
                 "memfd_secret":true - the canonical wrapper-emitted
                 proof that the syscall succeeded even when the FD is
                 not visible to the probe (musl/static, dumpable quirks,
                 or kernels without CONFIG_SECRETMEM falling back to
                 anon-mmap).
      Failure of both stages = genuine protection regression.
    expected: "exit 0; either 'PID N: ... /secretmem (deleted)' (FD
               path) or 'wrapper-attested via log: ...' (log path)"

  ebpf_audit:
    label: "eBPF audit log (last 10 enforcement events)"
    target: vm2-host  # kubectl logs from orchestrator host
    cmd:   kubectl logs -n cloudtaser-system \
             -l app=cloudtaser-ebpf --tail=500 2>&1 | \
             { grep -iE 'block|deny|EPERM|enforce|kprobe' || \
               echo '(no enforcement events yet - run adversarial probes first)'; } | \
             tail -10
    expected: "exit 0; grep for block/deny/EPERM/enforce/kprobe events
               matching the ptrace_attach and read_mem probes run earlier"

  ls_fds:
    label: "ls -la /proc/<app-pid>/fd (memfd_secret FDs visible; not readable)"
    target: pod
    cmd:   kubectl exec -n {{NAMESPACE}} deploy/postgres -- \
             sh -c "P=$(pgrep postgres | head -1); \
                    echo PID=$P; ls -la /proc/$P/fd | head -15"
    expected: "exit 0; fd table of the wrapped postgres process (not
               PID 1). Entries backed by memfd_secret appear as
               anon-inode symlinks; reading them is blocked because
               the backing pages are not in the kernel direct map"

  kubectl_pods:
    label: "kubectl get pods -o wide (informational)"
    target: vm2-host
    cmd:   kubectl get pods -n {{NAMESPACE}} -o wide
    expected: "exit 0; live pod list for the per-session demo-XXXX-XXXX
               namespace; 'postgres' pod must be Running"
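
The {{NAMESPACE}} placeholders in the entries above are filled in by the orchestrator before anything runs. A hedged sketch of that step: the helper names are invented here, the 30-second timeout is a placeholder (the security notes below only say every command has a timeout wrapper), and the 8 KB cap mirrors the bounded-output rule:

package orchestrator

import (
    "context"
    "os/exec"
    "strings"
    "time"
)

// substituteNamespace expands {{NAMESPACE}} with the per-session namespace
// (demo-XXXX-XXXX). Only orchestrator-generated values are spliced in;
// visitor-typed text never is.
func substituteNamespace(cmdTemplate, namespace string) string {
    return strings.ReplaceAll(cmdTemplate, "{{NAMESPACE}}", namespace)
}

// runProbe executes an author-controlled whitelist command with a hard
// timeout and a bounded output buffer.
func runProbe(ctx context.Context, cmdTemplate, namespace string) ([]byte, error) {
    ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
    defer cancel()
    out, err := exec.CommandContext(ctx, "sh", "-c",
        substituteNamespace(cmdTemplate, namespace)).CombinedOutput()
    const maxOut = 8 << 10 // ~8 KB, matching the bounded-output rule below
    if len(out) > maxOut {
        out = out[:maxOut]
    }
    return out, err
}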

Security analysis for the probe palette

  • Whitelist, not freeform. Frontend sends {probe: "ptrace_attach"}. Orchestrator looks up by key. User-supplied text never reaches a shell, eval, or command assembly.
  • Commands target the demo pod only. All kubectl exec calls are hardcoded to a per-session demo-XXXX-XXXX namespace targeting deploy/postgres or the privileged cloudtaser-demo/node-probe DaemonSet. The orchestrator's K8s service account has RBAC limited to create on pods/exec in namespaces demo-app and cloudtaser-demo, get on pods/log in cloudtaser-system, and get on nodes. It cannot reach other namespaces, the host, or outside the cluster.
  • Demo pod is sacrificial. postgres runs with no host-network access and mounts only synthetic secrets injected via CloudTaser. No customer material ever touches this stack.
  • All probes are read-only with bounded output. Each command has a timeout wrapper; output is head'd to ~8 KB. No writes, no curl, no network egress from the pod.
  • Rate-limited. 1 probe / 2 seconds per active session; hard cap 30 probes / session; the session itself is hard-capped at 20 minutes (see the sketch after this list). Abuse doesn't compound; reset returns to baseline.
  • Probes can demonstrate bugs, not exploit them. If a probe that should block somehow succeeds, that IS a finding - valuable signal that feeds directly into the pentest engagement. Every watcher sees it in real time.
  • Probe output doesn't leak. Output streams into the cluster terminal pane (same as Next-button output). All viewers see the same stream. Nothing private is produced because nothing private exists on these VMs.
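
For the rate-limit bullet above, a minimal per-session budget sketch (the 2-second gap and 30-probe cap mirror the numbers stated; the implementation itself is illustrative, not the real daemon's):

package orchestrator

import (
    "sync"
    "time"
)

const (
    minProbeGap         = 2 * time.Second // 1 probe / 2 seconds per active session
    maxProbesPerSession = 30              // hard cap per session
)

// probeBudget tracks one session's probe allowance.
type probeBudget struct {
    mu        sync.Mutex
    lastProbe time.Time
    used      int
}

// allow reports whether another probe may run now and, if so, records it.
// A released or reset session simply gets a fresh probeBudget.
func (b *probeBudget) allow(now time.Time) bool {
    b.mu.Lock()
    defer b.mu.Unlock()
    if b.used >= maxProbesPerSession {
        return false
    }
    if !b.lastProbe.IsZero() && now.Sub(b.lastProbe) < minProbeGap {
        return false
    }
    b.used++
    b.lastProbe = now
    return true
}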

Why a real GKE cluster (not docker-compose on one box)

  • Architectural authenticity. Procurement teams ask "what am I actually seeing here?" A demo on one box invites every conceivable doubt. A real zonal GKE cluster with a confidential-compute node pool + two separate EU VMs + real beacon protocol over the Atlantic = exactly the topology a customer would run in production.
  • Confidential-compute target is a real one. The GKE node runs n2d-highcpu-2 with confidential_nodes.enabled = true. AMD SEV is active; /proc/cpuinfo shows sev; dmesg shows "AMD Memory Encryption Features active". The cc_* probes in the palette surface this live for all viewers. (SEV, not SEV-SNP: GKE's confidential_nodes API defaults to SEV on n2d; SEV-SNP on GKE requires preview flags not worth the complexity for MVP.)
  • Honest EU framing. The two EU VMs are in europe-west3 (Frankfurt) and europe-west4 (Netherlands). Real EU data-centre regions. Still GCP - US-parented - which does NOT clear the sovereign deployment decision guide's first-leg test. For real sovereignty: Hetzner / OVH / Scaleway / IONOS / Exoscale / UpCloud. Migration to a real EU-owned provider for the secret store is planned as a follow-up.
  • Beacon authenticity. GKE node in US, VM3 (secret store) in EU, VM2 (beacon) between them in EU. Bridge<>broker traffic actually crosses the Atlantic. Latency is real, not simulated. TLS terminates at the bridge (VM3) and the broker (GKE-side wrapper), not the relay.
  • Cost reality. ~$48-64/month. GKE zonal control plane is free inside GCP's monthly credit; only the 1 CC node + 2 small EU VMs cost anything.

How Claude (or anyone) runs it manually

  • Open https://cloudtaser.io/demo-lab in a browser - rogbox.local Chromium, any headless browser, any evaluator's laptop. No client to install.
  • If the demo is idle: press "My turn" in the role banner to start the session.
  • If the demo is in use: viewer mode by default. See the active session's steps and probes in real time. Wait for their 5-min idle timeout or their 20-min hard cap, then press "My turn" when it becomes available.
  • For automated regression in CI: GET /api/demo/state returns a JSON snapshot (current step, last-probe-exit-code, session identity hash, viewer count). Playwright harness on rogbox or in GitHub Actions can drive the scenario end-to-end and assert expected exit codes per probe. This harness is the canonical "is the demo alive" CI check.
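
For that CI check, a minimal Go sketch of polling GET /api/demo/state (the JSON field names are guesses at the snapshot keys described above; the shipped harness is the Playwright drive.py):

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "os"
    "time"
)

// demoState mirrors the documented snapshot; adjust the field tags to whatever
// /api/demo/state actually returns.
type demoState struct {
    Step              int    `json:"step"`
    LastProbeExitCode int    `json:"last_probe_exit_code"`
    SessionHash       string `json:"session_hash"`
    Viewers           int    `json:"viewers"`
}

func main() {
    client := &http.Client{Timeout: 10 * time.Second}
    resp, err := client.Get("https://demo.cloudtaser.io/api/demo/state")
    if err != nil {
        fmt.Fprintln(os.Stderr, "demo unreachable:", err)
        os.Exit(1)
    }
    defer resp.Body.Close()

    var st demoState
    if resp.StatusCode != http.StatusOK || json.NewDecoder(resp.Body).Decode(&st) != nil {
        fmt.Fprintln(os.Stderr, "bad /api/demo/state response:", resp.Status)
        os.Exit(1)
    }
    fmt.Printf("demo alive: step=%d viewers=%d last_probe_exit=%d\n",
        st.Step, st.Viewers, st.LastProbeExitCode)
}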

Shipped

  • Infrastructure: Terraform in cloudtaser-terraform — merged + applied. GKE zonal in us-west1-a with 1 × n2d-highcpu-2 confidential-compute node. VM2 beacon + orchestrator in europe-west3, VM3 OpenBao + cloudtaser-bridge in europe-west4. Two Cloudflare Tunnels, per-VM SAs, IAP-only admin SSH, no imperative gcloud create.
  • Ansible: three idempotent playbooks (no cloud-init). vm2-beacon.yml installs beacon binary + orchestrator + cluster/beacon tmux panes; vm3-secret-store.yml installs OpenBao dev-mode + cloudtaser-cli + cloudtaser-bridge + demo-secret tmux pane tailing OpenBao journal. systemd everywhere; inventory lives alongside the playbooks in cloudtaser-demo.
  • Orchestrator: Go daemon on VM2. Watch/run state machine (one active session at a time, 5 min idle, 20 min hard cap), SSE broadcast to all watchers, CSRF-gated API on demo.cloudtaser.io, step & probe whitelists in cloudtaser-demo, fingerprint template substitution so the register step (step 3) can use the session UUID from step 2.
  • Scenario + probes: 7 steps in step-whitelist.yaml, 12 probes in probe-whitelist.yaml, manifests published at /demo-lab/manifests/ so every cluster mutation is readable YAML.
  • Frontend wiring: live <iframe>s to ttyd on all three panes, Next / End+Reset / probe controls wired to fetch('/api/demo/*'), SSE auto-reconnect, auto-expand layout on session start, tab-close release via navigator.sendBeacon, action-named step buttons.

Not yet shipped

  • Playwright CI: demo-lab/test/drive.py exists and drives all 7 steps locally; needs a scheduled GHA job.
  • Hardening: drop bridge --dev-mode (currently auto-approves cluster fingerprints), replace shared RPC token with Secret-Manager-sourced value.
  • EU sovereignty migration: move VM3 OpenBao to Hetzner / OVH / Scaleway / IONOS / Exoscale / UpCloud once the non-GCP account is set up. See the maturity roadmap.