Atlas / Foundations

Foundations

Cross-cutting explorations of the mechanics that underpin AI agent systems — security, trust, systemic risk. These are not industry-specific. They apply everywhere agents operate.

The industry series ask "what changes?" Foundations asks "what breaks?"

The Insurance and Retail series explore how AI reshapes specific industries. Foundations explores the underlying mechanics — the security vulnerabilities, trust architectures, and systemic risks that apply across every industry where agents operate. These simulations are grounded in published research and translate academic frameworks into interactive experiences.

System

The Players

Enterprise

Deploys agents for business operations

Agent System

Autonomous agents performing delegated tasks

Web Environment

The open web that agents consume

Adversary

Actor engineering traps in the environment

Human Overseer

Reviews and approves agent outputs

Defence Layer

Technical and ecosystem-level mitigations

Entries
Entry 01 · Active

Agent Security

What happens when the digital environment becomes adversarial — when websites, documents, and data sources are engineered to manipulate the AI agents that consume them. Grounded in Google DeepMind's taxonomy of AI Agent Traps.

Entry 02 · Coming Soon

Systemic Risk

When multiple AI agents share an environment and one is compromised, how does contamination cascade through the system? An exploration of multi-agent failure modes.

Experiment 01 · Coming Soon

The Cascade

Entry 03 · Coming Soon

Trust Architecture

How do you build trust in a system where agents make decisions humans cannot fully verify? An exploration of verification, provenance, and accountability.

Experiment 01 · Coming Soon

The Trust Economy