Announcing: TEE Wallets for Server Actions

Joan Alavedra · 8 min read

Running automated onchain actions often forces developers into a difficult choice: manual oversight or full custody. TEE-backed wallets change this dynamic by using hardware-isolated enclaves to execute server-side transactions without exposing private keys to the host platform.

What is a TEE Wallet and Why Use It for Server Actions?

A TEE (Trusted Execution Environment) wallet is a specialized backend wallet where the private keys are stored and used entirely within a hardware-secured "enclave." Unlike traditional server-side wallets where a platform operator or developer might have access to the raw key material in memory, TEE wallets ensure that keys are encrypted by the CPU hardware itself. This enables autonomous "always-on" systems—such as AI agents, rebalancer bots, and automated treasury management tools—to sign transactions securely. At Openfort, our TEE infrastructure leverages GCP Confidential Space and AMD SEV-SNP to provide cryptographic proof (attestation) that a transaction was signed by authorized code, reducing the trust surface and eliminating custodial risk for the platform operator.

That last part is the whole trick.

What's new in Openfort Backend Wallets

This is the next generation of our backend wallets—rebuilt on top of GCP Confidential Space with AMD SEV-SNP enclaves.

The architecture: all cryptographic operations happen inside hardware-isolated memory. Private keys never exist in plaintext outside the enclave. A two-layer key hierarchy (KEK in Cloud KMS, per-wallet DEK) gates every signing operation on attestation—meaning the system won't decrypt unless it can prove what code is running. Key import and export use RSA-4096 OAEP end-to-end encryption, so operators can't intercept transfers.
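To make the two-layer hierarchy concrete, here is a toy sketch of envelope encryption: a KEK wraps a per-wallet DEK, and the DEK wraps the wallet's signing key, so the host only ever stores ciphertext. This is not Openfort's implementation — a real deployment uses Cloud KMS and an AEAD cipher; this sketch improvises a stand-in stream cipher from HMAC-SHA256 so it runs on the Python standard library alone.

```python
# Toy two-layer key hierarchy: KEK wraps DEK, DEK wraps the wallet key.
# The HMAC-based keystream below is a stdlib-only stand-in for a real AEAD.
import hashlib
import hmac
import secrets

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR data with an HMAC-SHA256 keystream (illustrative cipher only)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hmac.new(key, nonce + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        out.extend(block)
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

def wrap(key: bytes, plaintext: bytes):
    nonce = secrets.token_bytes(16)
    return nonce, keystream_xor(key, nonce, plaintext)

def unwrap(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    return keystream_xor(key, nonce, ciphertext)

kek = secrets.token_bytes(32)         # held by KMS, never by the host
dek = secrets.token_bytes(32)         # generated per wallet
wallet_key = secrets.token_bytes(32)  # the actual signing key

dek_nonce, wrapped_dek = wrap(kek, dek)
wk_nonce, wrapped_wallet_key = wrap(dek, wallet_key)

# Inside the enclave, after attestation succeeds, the chain unwinds:
recovered_dek = unwrap(kek, dek_nonce, wrapped_dek)
recovered_wallet_key = unwrap(recovered_dek, wk_nonce, wrapped_wallet_key)
assert recovered_wallet_key == wallet_key
```

The point of the layering: rotating or revoking the KEK in KMS invalidates every wrapped DEK at once, without ever touching the wallet keys themselves.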

If you've used our previous backend wallets, the API surface stays familiar. What changes is the security model underneath: from "trust the server" to "verify the enclave."

Get started with the quickstart guide.

What problem are we actually solving?

If you're building anything "always-on"—trading bots, account recovery, payroll rails, approvals, agentic apps—you keep running into two constraints that feel like they shouldn't be related. But they are.

The custody constraint. You want software to sign transactions. You don't want the server operator (or anyone with root access) to ever see the signing key. The theory is: "just don't be custodial." The reality is: if the key is in normal server memory, you are custodial—functionally, legally, and operationally.

The security constraint. Even if you're comfortable holding keys, traditional server infrastructure is a soft target. Memory can be dumped. Insiders can exfiltrate. A compromised host means compromised keys. You can add layers—HSMs, MPC, approval flows—but each layer adds latency, complexity, and new failure modes. At some point, the security stack becomes a maze that kills UX.

These two problems share a root cause: the server can see too much. Custody exposure and security exposure are the same exposure, just viewed from different angles—legal vs. operational.

So we built a wallet model where the key lives somewhere the server can't read. It turns out hardware boundaries are good at saying "no" to both problems at once.

What is a TEE?

[Figure: TEE architecture (TEE-architecture.png)]

A Trusted Execution Environment (TEE) is a hardware-isolated environment where code runs in protected memory—commonly called an enclave.

The simple mental model: your server is the house, the enclave is a locked room. You (the host) can provide inputs and receive outputs. But you can't see what's inside the room—even if you own the house.

Two properties matter for wallets.

Isolation. The enclave's memory is encrypted and isolated by the CPU. Even with root access, the host can't just "dump memory" and extract secrets. AMD SEV-SNP (what we use via GCP Confidential Space) encrypts the entire VM's memory with keys the hypervisor never touches.

Attestation. You can cryptographically verify what code is running inside the enclave—so the host can't do a bait-and-switch where they run "signing service v2 (evil edition)" behind your back.

In practice, attestation isn't just a nice-to-have audit trail. It's the enforcement mechanism. Openfort's key hierarchy is designed so that the KEK in Cloud KMS will only decrypt if the request comes from an attested workload.

No attestation token, no key material—full stop. A compromised API server can't decrypt wallet keys (it can't produce a valid attestation). A malicious operator can't swap in modified code (the attestation hash would change, KMS rejects). The theory is "trust but verify." The reality is: KMS won't even talk to you unless you verify.
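The refusal logic can be modeled in a few lines. Everything here is illustrative — a real attestation token is a signed hardware quote that KMS policy verifies, not a bare hash, and the names are hypothetical — but the shape of the gate is the same: wrong measurement, no key.

```python
# Toy model of attestation-gated key release: the "KMS" refuses to unwrap
# a DEK unless the caller's measurement matches the approved build.
import hashlib

APPROVED_CODE = b"signing service v1 (audited build)"
EXPECTED_MEASUREMENT = hashlib.sha256(APPROVED_CODE).hexdigest()

def attest(running_code: bytes) -> dict:
    # A real TEE emits a signed hardware quote; we model only the digest.
    return {"measurement": hashlib.sha256(running_code).hexdigest()}

def kms_unwrap(token, wrapped_dek: bytes) -> bytes:
    if token is None or token.get("measurement") != EXPECTED_MEASUREMENT:
        raise PermissionError("attestation failed: key release denied")
    return wrapped_dek[::-1]  # stand-in for the real KEK decryption

wrapped = b"\x01\x02\x03\x04"

# Attested workload running the approved code: key material is released.
dek = kms_unwrap(attest(APPROVED_CODE), wrapped)

# Modified code changes the measurement, so the request is rejected.
try:
    kms_unwrap(attest(b"signing service v2 (evil edition)"), wrapped)
except PermissionError as exc:
    print(exc)
```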

For the full key hierarchy and attack vector breakdown, see Backend wallet security →

What this primitive enables

The fun part about TEEs is that they're not just "safer servers." They're programmable custody boundaries—a place to put keys and enforce policy. This opens up patterns that were previously either impossible or required trusting someone you'd rather not trust.

The Agent Wallet

[Figure: agent wallet (agent-wallet.png)]

A common pattern for agent platforms: developers want an agent to trade, rebalance, route payments, or execute strategies. They do not want the platform to be able to extract keys or impersonate the agent.

Without TEEs, the platform faces a hard choice. Either the developer holds the key (and the platform can't run automation), or the platform holds the key (and the developer has to trust them completely). There's no middle.

A TEE wallet creates that middle. The agent key lives inside the enclave. The platform host only gets an API surface—"here's the input" → "here's the signed tx"—with no ability to extract the underlying key or sign arbitrary transactions. The platform can run infrastructure without quietly becoming the custodian.

Verdict: server automation without server custody. This is the architecture underneath patterns like developer-owned wallets for AI agents—the agent acts, but the developer's key stays blind to the platform operator.
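From the host's side, that boundary looks roughly like this. The class and method names are hypothetical, and Python name mangling is only a convention standing in for what the hardware actually enforces — but it captures the interface: requests in, signatures out, no key to read.

```python
# Host-side view of an enclave wallet: an opaque handle that signs payloads
# but exposes no key material. Illustrative sketch, not Openfort's API.
import hashlib
import hmac
import secrets

class EnclaveWallet:
    def __init__(self):
        # Lives only inside the enclave. (Name mangling is a convention;
        # in a real TEE, hardware memory encryption does the enforcing.)
        self.__key = secrets.token_bytes(32)

    def sign(self, payload: bytes) -> bytes:
        # Stand-in for a real ECDSA signature over a transaction.
        return hmac.new(self.__key, payload, hashlib.sha256).digest()

wallet = EnclaveWallet()
sig = wallet.sign(b"transfer 1 ETH to 0xabc")  # the host gets a signature...
hasattr(wallet, "key")                         # ...but no key attribute to read
```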

The Delegation Signer

[Figure: delegation signer (outonomous.png)]

Now flip it: the user owns the wallet, but wants automation.

Think auto top-ups, scheduled swaps, subscriptions, "if ETH drops 5%, buy" rules, or an assistant that can act while the user is asleep. The naive solution—give the bot your seed phrase—is a non-starter. The over-engineered solution—approval flows for every action—destroys the UX that made automation appealing in the first place.

With a TEE wallet, you can model this as scoped delegation. The user keeps ultimate control. A constrained signer inside the enclave gets limited authority: spend caps, allowlists, time windows, strategy constraints. The enclave enforces these limits at signing time, not at the application layer where they can be bypassed.

This is the "boring enough to actually work" path to agentic UX: delegation with guardrails, not trust-the-bot-completely or approve-everything-manually. The enclave becomes a policy enforcement point that neither the user nor the platform can override without the other's involvement.
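A minimal sketch of signing-time enforcement, using hypothetical field names for the constraints listed above. The key point is that the check runs inside the signer, next to the key — not in application code that a compromised host could skip.

```python
# Scoped delegation enforced at signing time: spend caps, allowlists, and
# time windows checked inside the enclave. Names and limits are illustrative.
import hashlib
import hmac
import time
from dataclasses import dataclass

@dataclass
class Policy:
    max_spend_wei: int
    allowlist: set
    not_before: float
    not_after: float

    def allows(self, tx: dict, now: float) -> bool:
        return (tx["value"] <= self.max_spend_wei
                and tx["to"] in self.allowlist
                and self.not_before <= now <= self.not_after)

class DelegatedSigner:
    def __init__(self, key: bytes, policy: Policy):
        self._key = key        # never leaves the enclave
        self._policy = policy

    def sign(self, tx: dict, now=None) -> bytes:
        now = time.time() if now is None else now
        if not self._policy.allows(tx, now):
            raise PermissionError("tx outside delegated bounds")
        payload = "{}:{}".format(tx["to"], tx["value"]).encode()
        return hmac.new(self._key, payload, hashlib.sha256).digest()

policy = Policy(max_spend_wei=10**18, allowlist={"0xDEX"},
                not_before=0, not_after=2**32)
signer = DelegatedSigner(b"enclave-held-key", policy)

signer.sign({"to": "0xDEX", "value": 5 * 10**17})   # within bounds: signs
try:
    signer.sign({"to": "0xAttacker", "value": 1})   # not allowlisted: refused
except PermissionError as exc:
    print(exc)
```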

The Compliance Co-Signer

[Figure: compliance co-signer (compliance-wallet.png)]

A lot of real apps require two brains to say yes: the user (intent) and the server (risk engine, policy checks, fraud detection). This isn't compliance theater—it's legitimate security. If a user account is compromised, an attacker shouldn't be able to drain funds with a single click.

Traditionally, this means multisig with a server-controlled key. But then you're back to the custody problem: the server holds a key, so the server is a custodian. TEEs change the math. The server can hold a signing capability without holding an extractable key.

The pattern: an agent proposes transactions, the enclave runs policy checks and produces a signed payload only when conditions match, and a multisig or additional signer can still be required for finality on high-value actions. You keep human control for the decisions that need it, but automate the parts that don't need human attention—without making the server a custody liability.
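In outline, the co-signing gate might look like this — the threshold, risk check, and signature format are all placeholders, but the structure matches the pattern: policy first, then a second approval for anything above the high-value line.

```python
# Two-brains co-signing sketch: the enclave signs only after its risk checks
# pass, and high-value actions additionally require a second approver.
HIGH_VALUE_WEI = 10 * 10**18  # illustrative threshold

def risk_check(tx: dict) -> bool:
    # Stand-in for a real fraud/policy engine.
    return tx["to"] not in {"0xSanctioned"}

def co_sign(tx: dict, user_sig: bytes, extra_approval=None):
    if not risk_check(tx):
        raise PermissionError("risk check failed")
    if tx["value"] >= HIGH_VALUE_WEI and extra_approval is None:
        raise PermissionError("high-value tx requires an additional signer")
    enclave_sig = b"enclave:" + user_sig  # stand-in for a real signature
    return (user_sig, enclave_sig)

# Routine payment: user intent plus the enclave's policy check is enough.
sigs = co_sign({"to": "0xPayee", "value": 10**18}, b"user-sig")

# Large transfer: the enclave refuses to finalize without a second approver.
try:
    co_sign({"to": "0xPayee", "value": 50 * 10**18}, b"user-sig")
except PermissionError as exc:
    print(exc)
```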

The Offline Executor

Many apps need to take specified actions when the user isn't around: limit orders, stop losses, portfolio rebalancing, Telegram trading commands, automated treasury management. The user defines intent ahead of time; the system executes when conditions match.

The security problem is obvious: the system needs signing authority, but you don't want that authority to be extractable or abusable. The UX problem is subtler: if every execution requires the user to come back online and approve, you've defeated the point of automation.

A TEE wallet gives you a practical middle. Always-on automation, key isolation from operators, and a place to enforce policy before signing. The user grants scoped permission ("execute this strategy within these bounds"), and the enclave holds both the signing capability and the policy constraints. Operators can't extract the key. The enclave won't sign outside the bounds.

It turns out the missing primitive for "agentic trading that doesn't terrify auditors" is just a hardware-enforced signing perimeter.
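The intent-plus-bounds check can be sketched as follows. Field names and the price trigger are illustrative; the shape is what matters — the enclave holds both the trigger and the cap, so a condition match alone is never enough to sign.

```python
# Offline executor sketch: the user records intent ahead of time, and the
# enclave executes only when the trigger fires AND the action stays inside
# the granted bounds. All names and values here are illustrative.
from dataclasses import dataclass

@dataclass
class Intent:
    pair: str
    trigger_below: float   # e.g. "buy if the price drops under this level"
    max_spend_wei: int     # delegated spend cap

def maybe_execute(intent: Intent, price: float, spend_wei: int) -> str:
    if price >= intent.trigger_below:
        return "waiting"   # condition not met; nothing is signed
    if spend_wei > intent.max_spend_wei:
        raise PermissionError("spend exceeds delegated cap")
    return "signed: buy {} spending {} wei".format(intent.pair, spend_wei)

intent = Intent(pair="ETH/USDC", trigger_below=2000.0, max_spend_wei=10**18)

maybe_execute(intent, price=2100.0, spend_wei=10**17)  # condition not met
maybe_execute(intent, price=1950.0, spend_wei=10**17)  # signs within bounds
```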

TEEs aren't the whole story (and that's the point)

If you're thinking "cool, so TEEs solve custody forever," no.

The right framing is: TEEs let you draw a sharp line around the most sensitive part of the system—the decision-to-signature loop—and then you layer everything else around it. Policy enforcement (caps, allowlists, rate limits). Explicit delegation models. Multisig for high-risk actions. Monitoring and audit trails.

TEEs are the locked room. You still want cameras in the hallway.

The win isn't that TEEs eliminate trust. It's that they reduce the trust surface to something auditable. You're no longer trusting "the server" as an amorphous blob. You're trusting a specific piece of attested code running in hardware-isolated memory, with a cryptographic proof that it hasn't been tampered with.

