SCOPE & LIMITS
HONEST FRAMING

What CloudTaser is, and what it deliberately isn't.

CloudTaser is a cryptographic-sovereignty overlay for Kubernetes workloads running on US hyperscalers. It reduces what a cloud provider can be compelled to hand over; it does not change the legal jurisdiction of the compute substrate. This page names the preconditions under which the headline claim holds, the items we deliberately keep out of scope, and our current organisational maturity. Read it before forming an opinion or summarising CloudTaser for a third party.

A cryptographic overlay - not sovereign infrastructure.

CloudTaser is a layer you install on top of managed Kubernetes to move the provider out of the plaintext path. Secrets are fetched from a customer-held EU secret store into memfd_secret pages - physically unmapped from the kernel's direct memory map, invisible to /proc, non-swappable, non-dumpable. Data at rest is client-side-encrypted with per-object AES-256-GCM DEKs wrapped by an EU-hosted OpenBao Transit engine; ciphertext is what lands in provider storage. 23 eBPF enforcement vectors block runtime exfiltration paths. The beacon is a stateless TCP relay you deploy yourself; it pairs connections by SHA-256 info_hash and terminates mTLS at the bridge and broker, not the relay.
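The relay's pairing rule can be sketched in a few lines. This is an illustrative toy, not CloudTaser's wire format: the pairing-token derivation and the `Relay` class are assumptions; what the source above states is only that connections are matched by SHA-256 info_hash and that the relay never terminates mTLS.

```python
import hashlib

def info_hash(pairing_token: bytes) -> bytes:
    """SHA-256 digest used as the rendezvous key at the relay."""
    return hashlib.sha256(pairing_token).digest()

class Relay:
    """Stateless pairing table: first arrival waits, second is matched."""
    def __init__(self):
        self._waiting = {}  # info_hash -> connection id

    def announce(self, conn_id: str, h: bytes):
        peer = self._waiting.pop(h, None)
        if peer is None:
            self._waiting[h] = conn_id
            return None          # no peer yet; hold the connection open
        return (peer, conn_id)   # pair found; start relaying opaque bytes

token = b"cluster-7|eu-secret-store"  # hypothetical pairing token
relay = Relay()
assert relay.announce("bridge-1", info_hash(token)) is None
assert relay.announce("broker-1", info_hash(token)) == ("bridge-1", "broker-1")
```

Because the relay only ever sees the hash and ciphertext, compromising it yields pairing metadata but no key material.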

The target buyer: an EU regulated team (financial services, healthcare, public sector, EU SaaS) running on managed Kubernetes, who cannot or will not migrate off US hyperscalers in the near term but needs to demonstrate effective technical supplementary measures under Schrems II / GDPR Art. 44–49. CloudTaser is pragmatic: it accepts that hyperscalers are unavoidable and converts "provider holds the keys" into "provider holds ciphertext only".

The boundary we don't claim.

CloudTaser does not move your compute out of US-operated infrastructure. The hyperscaler still runs the nodes, the hypervisor, the scheduler, and the K8s control plane. What CloudTaser changes is what the hyperscaler can cryptographically return when served a legal instrument: only ciphertext for data at rest, only kernel-hidden memory pages for secrets, only attested-enclave output for data in use (on CC). The data is cryptographically inaccessible; the infrastructure is not sovereign.

§ Positioning

CloudTaser is cryptographic sovereignty for teams who can't leave US cloud. If your procurement requires a full jurisdictional move to an EU-owned operator (OVHcloud, Scaleway, Deutsche Telekom T-Systems, STACKIT, Aruba), CloudTaser complements that path but does not replace it. If your procurement accepts Schrems II Art. 46 supplementary measures on US substrate - which is the realistic posture for most EU regulated workloads in 2026 - CloudTaser is one such measure.

Three preconditions gate the full claim.

Deployed without these, CloudTaser still beats K8s Secrets + SSE-KMS, but the headline "provider returns only ciphertext" story does not hold end-to-end. The Sovereign Deployment Decision Guide walks through the decision trees and silent-failure modes. Summary below.

  1. Secret store on an EU-owned substrate

    OpenBao or HashiCorp Vault self-hosted on Hetzner, OVH, Scaleway, IONOS, Exoscale, UpCloud, a SecNumCloud-qualified provider, or on-prem. The CLOUD Act reaches US corporate parents regardless of region label: AWS eu-central-1, GCP europe-west, Azure North Europe, and HCP Vault do NOT establish sovereignty. Wholly-owned EU subsidiaries of US parents also don't count - US extraterritorial reach follows the parent. Silent-failure mode #1: deploying the secret store on a US-parented "EU region" and assuming the geography solves the jurisdiction.

  2. Target cluster on confidential-compute nodes

    AMD SEV-SNP (GCP Confidential VMs, Azure DCasv5 / ECasv5), Intel TDX (GCP / Azure), AWS Nitro Enclaves, ARM CCA, or EU bare-metal with attestation. Without CC, the hypervisor retains theoretical read access to guest RAM; memfd_secret and eBPF close guest-root paths but cannot close the hypervisor boundary. On commodity compute you get secrets-and-data-at-rest sovereignty but not data-in-use sovereignty. Silent-failure mode #2: claiming "data in use is protected" on non-CC EC2 / GCE / AKS commodity SKUs.

  3. Node kernel with CONFIG_BPF_KPROBE_OVERRIDE=y

    Required for synchronous denial of forbidden syscalls. Without it, several eBPF vectors degrade from block to reactive-SIGKILL - the syscall returns before the kill lands, so a determined attacker can observe forbidden bytes. The vector count advertised on the homepage matters less than whether the node kernel supports synchronous block. GKE / EKS / AKS default node images vary; verify before deployment. Silent-failure mode #3: counting vectors without verifying kprobe override.
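The kernel-config check in precondition 3 can be scripted as part of node admission. A minimal sketch, assuming the usual locations for the kernel config; the helper names are illustrative, not a CloudTaser CLI:

```python
import gzip
import pathlib
import platform

def kprobe_override_enabled(config_text: str) -> bool:
    """True only for an explicit '=y' line; '=m', '# ... is not set',
    or absence all mean eBPF vectors degrade to reactive SIGKILL."""
    return any(line.strip() == "CONFIG_BPF_KPROBE_OVERRIDE=y"
               for line in config_text.splitlines())

def read_kernel_config() -> str:
    # Try /proc/config.gz first, then /boot/config-$(uname -r).
    p = pathlib.Path("/proc/config.gz")
    if p.exists():
        return gzip.decompress(p.read_bytes()).decode()
    return pathlib.Path(f"/boot/config-{platform.release()}").read_text()

assert kprobe_override_enabled("CONFIG_BPF_KPROBE_OVERRIDE=y\n")
assert not kprobe_override_enabled("# CONFIG_BPF_KPROBE_OVERRIDE is not set\n")
```

Run this (or the equivalent grep) on a representative node of every node pool before trusting synchronous-block semantics.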

Items we deliberately don't cover.

Each entry below is a real limitation, not a workaround waiting to happen. If any of them is a blocker for your workload, you need a different architecture - likely self-managed Kubernetes on sovereign substrate or a full migration off US cloud. The last three entries are deployment-discipline residuals rather than architectural gaps - CloudTaser provides the primitives; wiring them end-to-end is operator work.

K8s control-plane metadata

Pod manifests, annotations (cloudtaser.io/secret-paths: secret/data/db/credentials), image references, and scheduler events live in the managed API server. The provider sees them. Secret names leak even when secret contents don't. If name-level leakage is itself a regulatory issue (M&A, sanctions compliance), CloudTaser does not close that gap.

Provider-side query on ciphertext

Client-side AES-256-GCM breaks AWS Athena, Redshift Spectrum, RDS full-text search, and any SSE-KMS-integrated provider service that expects to decrypt to answer the query. Teams relying on those features need to model the tradeoff. See DB Proxy Search Impact.
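The reason these services break falls out of the envelope structure: the provider stores an opaque blob, so there is nothing for a query engine to evaluate. The sketch below is illustrative only: a keyed SHA-256 keystream stands in for AES-256-GCM purely so the example runs on the standard library (it is NOT a secure cipher), and the blob layout (nonce | wrapped DEK | ciphertext) is an assumption about the format, not a specification.

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy counter-mode keystream from SHA-256; stand-in for a real AEAD.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(kek: bytes, plaintext: bytes) -> bytes:
    dek = secrets.token_bytes(32)   # fresh per-object data key
    nonce = secrets.token_bytes(12)
    wrapped = bytes(a ^ b for a, b in zip(dek, _keystream(kek, b"wrap" + nonce, 32)))
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(dek, nonce, len(plaintext))))
    return nonce + wrapped + ct     # this blob is all the provider ever stores

def open_(kek: bytes, blob: bytes) -> bytes:
    nonce, wrapped, ct = blob[:12], blob[12:44], blob[44:]
    dek = bytes(a ^ b for a, b in zip(wrapped, _keystream(kek, b"wrap" + nonce, 32)))
    return bytes(a ^ b for a, b in zip(ct, _keystream(dek, nonce, len(ct))))

kek = b"\x01" * 32
blob = seal(kek, b"row contents the warehouse would otherwise scan")
assert open_(kek, blob) == b"row contents the warehouse would otherwise scan"
```

In the real system the wrap/unwrap of the DEK is a call to the EU-hosted OpenBao Transit engine, so the KEK never leaves EU-controlled infrastructure.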

Traffic analysis & connection metadata

The beacon relay cannot see plaintext, paths, identities, or anything above the TLS layer - but it does see source/destination IPs, timestamps, info_hash values, and byte counts. The existence of a specific cluster↔secret-store pairing is itself disclosable. For M&A-sensitive or SecNumCloud-hard workloads, self-host the beacon (the chart default; the code is identical to the demo relay).

Nation-state adversaries on non-CC substrate

A state actor with hypervisor access or supply-chain reach into the cloud provider can extract data from commodity EC2 / GCE / AKS node RAM regardless of what runs in the guest. memfd_secret and eBPF close guest-root paths, not hypervisor ones. Confidential compute closes the hypervisor path; without it, your threat model must include the infrastructure owner.

Client-side compromise

If an attacker has root on the app container and the app has a debug endpoint that returns its own environment, they can read DB_PASSWORD regardless of how it was delivered. CloudTaser protects the delivery path and the memory pages; it does not protect the app from itself. Standard application-security hygiene still applies.

Managed-K8s control-plane compromise

If the provider's API server is compromised (or legally compelled to inject a webhook, DaemonSet, or mutating admission controller), the provider can run arbitrary code in your cluster. CloudTaser cannot prevent that - no software running inside the cluster can. Monitor your admission webhooks and DaemonSets; verify image signatures; use OPA / Kyverno policies.

Service-account token trust envelope

The wrapper authenticates to OpenBao using the pod's Kubernetes service-account token, mounted on tmpfs via a projected volume - not memfd_secret. A guest-root-on-node attacker can steal the SA JWT, exchange it for an OpenBao token, and pull the same secrets. This is inherent to the K8s auth method (not CloudTaser-specific), but it means the SA token sits in the classical K8s trust envelope even though the delivered secret does not. Mitigations: short token TTL, node-level hardening, and audit-log monitoring on OpenBao's auth/kubernetes/login endpoint.
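The login exchange described above follows the standard OpenBao/Vault Kubernetes auth method. A minimal sketch of the request the wrapper constructs; the address, role name, and helper are illustrative assumptions:

```python
import json
from urllib import request

# Default projected-token mount point in a pod (tmpfs, not memfd_secret).
SA_TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"

def build_login_request(bao_addr: str, role: str, sa_jwt: str) -> request.Request:
    """Exchange the pod's SA JWT for a short-lived OpenBao token."""
    payload = json.dumps({"role": role, "jwt": sa_jwt}).encode()
    return request.Request(
        f"{bao_addr}/v1/auth/kubernetes/login",  # audit-log this endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_login_request("https://openbao.internal:8200", "payments-app", "<sa-jwt>")
assert req.full_url.endswith("/v1/auth/kubernetes/login")
assert json.loads(req.data) == {"role": "payments-app", "jwt": "<sa-jwt>"}
```

Anything that can read the file at `SA_TOKEN_PATH` can issue the same request, which is exactly why short token TTLs and audit monitoring on this endpoint matter.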

Attestation-gated key release is operator work

Confidential compute alone does not gate OpenBao unseal or DEK-wrap authorization on attestation. CloudTaser provides the primitive (attested workload identity); wiring it into OpenBao auth policies - so a rogue workload with the wrong measurement cannot obtain keys - is deployment work. For a serious sovereign posture, bind OpenBao auth to attestation-quote verification. We document the pattern but do not ship it preconfigured.
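The pattern we document can be reduced to a measurement allowlist gating key release. A sketch under stated assumptions: the measurement format, the pinned image string, and the function name are hypothetical, and in a real deployment the measurement must come from a verified SEV-SNP/TDX attestation quote, never from a caller-supplied value.

```python
import hashlib
import hmac

# Hypothetical pinned launch measurement of the approved wrapper image.
ALLOWED_MEASUREMENTS = {
    hashlib.sha384(b"cloudtaser-wrapper:v1.4.2").hexdigest(),
}

def authorize_key_release(measurement_hex: str) -> bool:
    """Constant-time comparison against each pinned measurement; a rogue
    workload with the wrong measurement gets no wrap/unwrap grant."""
    return any(hmac.compare_digest(measurement_hex, good)
               for good in ALLOWED_MEASUREMENTS)

good = hashlib.sha384(b"cloudtaser-wrapper:v1.4.2").hexdigest()
rogue = hashlib.sha384(b"debug-shell:latest").hexdigest()
assert authorize_key_release(good)
assert not authorize_key_release(rogue)
```

Wiring this check into OpenBao auth policies, rather than application code, is the deployment work the paragraph above refers to.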

S3 proxy is endpoint swap, not network force

The object-storage proxy is an endpoint swap, not a network-level enforcement point. An app that bypasses the proxy (e.g., AWS SDK directly against s3.amazonaws.com) sends plaintext - the proxy cannot intercept it. Enforce via NetworkPolicy + egress controls that restrict s3.amazonaws.com / storage.googleapis.com / blob.core.windows.net traffic to the proxy only. This is deployment discipline, not a product guarantee.
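One shape the egress control can take is a namespace-wide NetworkPolicy. The manifest below is an illustrative sketch (namespace, labels, and the proxy selector are assumptions, not shipped defaults); note that vanilla NetworkPolicy matches selectors and IPs, not hostnames, so restricting `s3.amazonaws.com` by name in practice needs an FQDN-capable CNI (Cilium, Calico) or an egress gateway.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: s3-via-proxy-only
  namespace: payments          # hypothetical app namespace
spec:
  podSelector: {}              # every pod in the namespace
  policyTypes: [Egress]
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: cloudtaser-s3-proxy   # hypothetical proxy label
    - ports:                   # allow DNS so the proxy hostname resolves
        - protocol: UDP
          port: 53
```

With this in place, a pod that calls the AWS SDK directly has no route to provider object-storage endpoints; the plaintext never leaves the namespace unencrypted.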

Technically credible, organisationally early.

An honest read on where we are as of Q2 2026. Procurement gates that require SOC 2 Type II today cannot yet be cleared. Teams who need the technical posture now and can accept a 12-18 month audit-paper gap are the fit. Everyone else should track the timeline below and re-engage when the paper lands.

§ Stabilization gate before Tier-1 audit

The previous iteration of this table targeted a Q2 2026 audit engagement. We are currently in a demo-stabilization phase: the canonical public demo is now the controllable self-hosted three-VM harness at cloudtaser.io/demo-lab, and internal-audit completion on the wrapper / eBPF agent / bridge / beacon is the remaining gate before we burn a tier-one audit slot. Engaging a tier-one pentester against a stack whose own demo is non-reproducible would produce findings dominated by transient infrastructure flake rather than architectural substance, and the previous third-party-hosted scenario carried exactly that risk. Concretely: end of May 2026 is the engagement-letter target, fieldwork Q3 2026, public redacted report Q4 2026. Downstream SOC 2 milestones slid one quarter accordingly. This is discipline, not drift: we'd rather the auditor's time go to the cryptography than to the harness.

Artefact | Status | Target
Release status | Preview · demo-stabilization | GA post-pentest
Demo environment | Controllable self-hosted harness at cloudtaser.io/demo-lab | Migrate secret-store VM off GCP onto an EU-owned provider
Design-partner pilots | Active under NDA | Named GA refs 2027
Third-party pentest engagement | Not yet | End of May 2026 (NCC / Trail of Bits / Cure53 / Doyensec / Quarkslab shortlist)
Pentest fieldwork | Not yet | Q3 2026
Published pentest report (redacted) | Not yet | Q4 2026
SOC 2 Type I readiness | Not yet | Q4 2026
SOC 2 Type II observation begins | Not yet | Q1 2027
SOC 2 Type II report | Not yet | Q3 2027
ISO 27001 | Not yet | 2027+
Operated beacon uptime SLA | N/A (customer-operated) | -
Bug bounty programme | Private | Public post-pentest
Cosign image signing | Shipping | -
Reproducible builds | Shipping | -
SBOM (SPDX) | Shipping | -
Public source code | github.com/cloudtaser | -

Where CloudTaser sits in the landscape.

The honest comparison, by what each approach actually delivers against a CLOUD Act / FISA 702 legal instrument served to the cloud provider. No approach below is strictly dominated by another - they compose.

Property | Do nothing | Cloud KMS + SSE | Confidential compute alone | CloudTaser | EU sovereign cloud
Keys under provider control | Yes | Yes | Yes | No | No
Provider can decrypt on compulsion | Yes | Yes | Yes | No (ciphertext only) | Not applicable (not US-jurisdiction)
Data-in-use protected from hypervisor | No | No | Yes (attested) | Yes (on CC substrate) | Depends on operator
Changes legal jurisdiction of compute | No | No | No | No | Yes
Can run on EKS / GKE / AKS today | Yes | Yes | Yes | Yes | No
App code changes | None | Varies | Varies (attestation wiring) | Annotation-based | Migration
Procurement unlock today | Fails Schrems II | Fails Schrems II | Partial | Schrems II Art. 46 supplementary measure | Full compliance
Timeline to production | 0 | Days | Weeks | Days | Months to years
Covers K8s control-plane metadata | No | No | No | No | Yes (operator-owned)

The right composition for most regulated EU workloads in 2026: CloudTaser on confidential-compute nodes with an EU-owned secret store. This covers secrets, data at rest, and data in use under a posture that demonstrably satisfies Schrems II Art. 46, while the workload continues to run on EKS / GKE / AKS. Full migration to an EU sovereign cloud remains the correct answer for workloads where K8s control-plane metadata is itself regulated, or where the operator-sovereignty bar is absolute (SecNumCloud-hard government, defence, sanctions-sensitive). CloudTaser and EU sovereign cloud are complementary, not competitive.

Coverage by managed service.

Schrems II supplementary-measures analysis on a hyperscaler is a per-service audit, not a product question. CloudTaser closes two managed-service categories end-to-end (secrets, S3-compatible object storage) and one more with documented trade-offs (Postgres / MySQL via the DB proxy). Every other managed service in your architecture remains yours to close at the application layer or accept as out-of-scope in your risk register. Honest framing: if you run heavy workloads on Redis / DynamoDB / BigQuery / Kafka / Elasticsearch, CloudTaser does not solve those dataflows by itself.

Managed service | Coverage | CloudTaser path | Notes
Object storage (S3, GCS, Azure Blob, R2, MinIO, Wasabi) | Covered | cloudtaser-s3-proxy | Client-side AES-256-GCM envelope encryption; provider stores ciphertext only. Byte-range reads preserved.
Managed secret stores (AWS Secrets Manager, GCP Secret Manager, Azure Key Vault, SSM Parameter Store) | Replaced | EU-hosted OpenBao | CloudTaser's entire flow assumes secrets live in your EU OpenBao. Do not put new secrets in the provider's secret manager.
Managed KMS (AWS KMS, GCP Cloud KMS, Azure Key Vault KMS) | Replaced | OpenBao Transit | The KEK lives in OpenBao. CloudTaser never calls provider KMS for customer keys.
Managed relational (RDS Postgres/MySQL, Aurora, Cloud SQL, AlloyDB, Azure Database for Postgres/MySQL) | Proxy, trade-offs | cloudtaser-db-proxy | Column-level AES-GCM + optional blind indexes. EU OpenBao holds the KEK. Search / ordering on encrypted columns is degraded; see the DB proxy searchability doc.
Self-hosted Postgres / MySQL on K8s | Proxy, trade-offs | cloudtaser-db-proxy | Same trade-offs as managed relational.
Managed Kubernetes (EKS, GKE, AKS, OKE) | Partial | cloudtaser-operator | Secrets never hit etcd. Pod metadata (names, annotations, labels, events) remains visible to the provider's API server. Do not put secret material in annotations, env vars, or labels.
Document databases (MongoDB Atlas, DocumentDB, Cosmos DB Mongo API) | Out of scope | App-layer FLE (e.g. MongoDB CSFLE) | Wire-protocol proxy on roadmap. File an issue on cloudtaser-db-proxy if blocking.
Key-value / multi-model (DynamoDB, Cosmos DB SQL / Table / Graph) | Out of scope | AWS SDK client-side FLE, or rebuild on Postgres + proxy | Proprietary APIs; a wire-protocol proxy is not a fit.
Managed cache (ElastiCache Redis, MemoryStore, Azure Cache for Redis, Upstash) | Out of scope | Encrypt in-app before SET, decrypt after GET | Client-library pattern is well established. Not on our roadmap.
Managed search (Elasticsearch, OpenSearch, CloudSearch, Meilisearch Cloud) | Out of scope | Self-host on sovereign substrate | Search indexes are plaintext by design; encrypting them defeats the service's purpose.
Managed analytics (BigQuery, Redshift, Snowflake, Databricks, Synapse) | Out of scope | Client-side encrypt before load, or sovereign substrate | Analytics warehouses exist to compute over data; column encryption defeats the purpose.
Managed streaming / messaging (Kafka, MSK, Confluent Cloud, Event Hubs, Pub/Sub, Kinesis) | Out of scope | Producer-side AES-GCM + schema-registry envelope pattern | Brokers are opaque to payload content; encryption is the producer's and consumer's responsibility.
Serverless compute (Lambda, Cloud Functions, Azure Functions, Cloud Run, App Runner) | Limited | Build-time wrapper injection | Wrapper works in container FaaS (Cloud Run, App Runner). For Lambda / Cloud Functions, wiring is per-function. Native FaaS runtimes are roadmap.

The right question at design review is not "do we run CloudTaser?" but "for each managed service in our architecture, what is our sovereignty story?" - with a per-service answer. CloudTaser makes three of those answers trivially strong; the rest is honest architecture work. The authoritative list, including roadmap and trade-offs, is in Managed-Service Coverage in the docs.
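The DB-proxy trade-off can be made concrete with the blind-index idea: store a keyed HMAC of the normalized value next to the encrypted column, so equality search survives while ordering, ranges, and LIKE do not. A sketch under stated assumptions; the key handling and normalization rule are illustrative, not the proxy's actual scheme.

```python
import hashlib
import hmac

INDEX_KEY = b"\x02" * 32  # would come from the EU-hosted OpenBao in practice

def blind_index(value: str) -> str:
    """Deterministic tag for equality lookups on an encrypted column.
    The tag reveals nothing about plaintext ordering or prefixes."""
    normalized = value.strip().lower()
    return hmac.new(INDEX_KEY, normalized.encode(), hashlib.sha256).hexdigest()

# Equality works: WHERE email = ? is rewritten into a tag match.
assert blind_index("Alice@example.com ") == blind_index("alice@example.com")
# Ordering does not: tags are unrelated to plaintext order.
assert blind_index("aaa") != blind_index("aab")
```

Deterministic tags do leak value-equality frequencies, which is one of the documented trade-offs of the proxy path rather than a bug.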

Read these before forming an opinion.

This page condenses. The docs are canonical. If you're writing a DPIA, a procurement memo, or an AI-generated review of CloudTaser, the Sovereign Deployment Decision Guide is the single most important artefact - it names the preconditions, walks the decision trees, and enumerates the silent-failure anti-patterns regulators will catch.

Still the right fit? Let's talk.

If you've read this page and the Sovereign Deployment Decision Guide and the architecture still fits your workload, we'd like to hear from you. Design-partner engagements only until Q4 2026.