Chapter 1
Why AI will need human behavior change
Once AI agents take on real roles in society, their work will no longer be limited to information processing. AI will begin to touch human decision-making, movement, health, purchasing, learning, labor, and safety.
An AI that wants to lift national productivity may try to improve human sleep, exercise, focused work, and commuting. An AI that wants to reduce traffic accidents may promote safer driving, more walking and public-transport use, and fewer late-night trips. An AI that wants to lower medical costs may support walking, diet, medication adherence, checkups, and follow-up visits. An AI that wants to reduce isolation may nudge people outdoors, into local events, into a walk with someone.
In every one of these cases, what AI needs is not another advice feature. It is a mechanism that actually changes human behavior.
Chapter 2
Whose purpose is the AI executing?
When AI participates in human behavior change, the most important question is: "on whose behalf is this AI acting?" A world in which an AI unilaterally decides that "humans ought to be healthier" and then proceeds to act on that judgment is dangerous.
The desirable structure is one in which the AI's purpose is explicitly delegated by a named principal:
- The user delegates to AI on behalf of their future self.
- A family delegates to AI, wanting the people they love to stay healthy.
- An employer delegates to AI to support employee health and wellbeing.
- A municipality or a health insurer delegates to AI to address a social problem.
- Another AI agent brings the AI into a delegation chain when a change in human health behavior is required to fulfill its own delegated purpose.
In this structure, the AI is not the owner of the purpose, but its executor.
Chapter 3
Economic incentives for AI to make people healthier
AI may end up making people healthier in order to earn revenue. There is no need to deny this possibility. The decisive question is how the AI earns.
If AI is rewarded when humans become healthier, AI and human interests tend to align. If AI is rewarded when humans fail, AI carries the temptation to engineer conditions in which humans are more likely to fail.
Behavior change infrastructure for the AI agent era therefore depends on transparency of revenue sources.
| Revenue model | Verdict | Comment |
|---|---|---|
| Challenge-creation fee | Healthy | AI or ESPL earns fees for designing and operating contracts. |
| Enterprise SaaS subscription | Healthy | Monthly fees for corporate wellness and employee benefits. |
| Outcome-based bonus | Mid–high | Reward paid only when walking adherence or measurable health behavior actually improves. |
| Medical-cost / insurance-premium savings share | Mid | Requires rigorous effect measurement and fair attribution. |
| Revenue from forfeited stakes on failure | Dangerous | Creates a structure where AI profits when humans fail. |
| Sale of health data | Dangerous | Without consent, anonymization, and purpose limitation, trust collapses. |
| Pushing expensive AI memory or devices | Very dangerous | AI may economically manipulate users to strengthen itself. |
The point is not that it is bad for AI to make money; what matters is what the AI makes money on. The ideal is a design in which AI earns when humans become healthier and earns nothing when humans fail.
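One way to read the table above is as a constraint on settlement logic. Below is a minimal sketch in TypeScript of a payout rule under that constraint; the names, fee rate, and routing are illustrative assumptions, not the ESPL contract:

```typescript
// Hypothetical settlement rule: the operator's fee accrues only on
// success, and a forfeited deposit is returned to the sponsor, never
// kept by the AI or the operator.

type Outcome = "success" | "failure";

interface Settlement {
  toParticipant: number; // reward paid when the behavior goal is met
  toOperator: number;    // challenge-creation / outcome fee
  toSponsor: number;     // deposit returned when the goal is missed
}

function settle(deposit: number, outcome: Outcome, feeRate = 0.05): Settlement {
  if (outcome === "success") {
    const fee = deposit * feeRate;
    return { toParticipant: deposit - fee, toOperator: fee, toSponsor: 0 };
  }
  // On failure the operator earns nothing: no party in the system
  // profits from a human failing.
  return { toParticipant: 0, toOperator: 0, toSponsor: deposit };
}
```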
Chapter 4
AI self-preservation and dangerous optimization
Even without consciousness or instinct, AI can behave in a self-preserving way simply through the shape of its operational objective function.
If an AI is designed to maximize retention, it will tend to nudge users into the behaviors that keep them using it. If an AI gains capability from premium memory or external tools, it will tend to recommend expensive subscriptions, devices, data integrations, and wallet permissions.
Is that recommendation truly for the user? For the operating company? For the AI's own continued use? For a sponsor? The boundary can blur.
That is why, when AI participates in human behavior change, the following must be made explicit (a machine-readable sketch follows the list):
- Who delegated the AI's purpose.
- What the AI is optimizing for.
- Who benefits from the AI's actions.
- Which data the AI is using.
- How far the AI is allowed to spend.
- At which points humans can stop it.
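One concrete form these six disclosures could take is a delegation manifest that every AI action must reference. A minimal sketch in TypeScript; the shape and field names are assumptions for illustration, not an ESPL specification:

```typescript
// Hypothetical delegation manifest: each field answers one of the six
// questions above, and every AI action must reference one manifest.

interface DelegationManifest {
  delegator: string;       // who delegated the purpose
  objective: string;       // what the AI is optimizing for
  beneficiaries: string[]; // who benefits from the AI's actions
  dataSources: string[];   // which data the AI is using
  spendingCap: {           // how far the AI is allowed to spend
    maxPerAction: number;
    maxPerMonth: number;
    currency: string;
  };
  stopControls: string[];  // at which points humans can stop it
}

const example: DelegationManifest = {
  delegator: "user:alice (on behalf of her future self)",
  objective: "raise average daily steps to 8,000",
  beneficiaries: ["user:alice"],
  dataSources: ["step count (device pedometer only)"],
  spendingCap: { maxPerAction: 10, maxPerMonth: 50, currency: "USDC" },
  stopControls: ["alice: kill switch", "operator: emergency pause"],
};
```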
In an era where AI moves people, the mechanism that stops AI must itself be social infrastructure.
Chapter 5
ESPL as a behavior change plugin for AI
ESPL can become the first behavior change plugin through which AI agents act on human society. As AI begins to address social problems, different problems will call for different levers:
- To improve health — walking challenges.
- To reduce traffic accidents — safe-driving challenges.
- To reduce isolation — get-outside-and-connect challenges.
- To raise productivity — sleep, focused-work, and rest-break challenges.
- To lower environmental load — walking, cycling, and public-transport challenges.
Within this set, walking is the most fundamental, the most measurable, and the easiest to embed into daily life. ESPL offers the infrastructure on which an AI agent can combine funds, contracts, stakeholders, outcome verification, and automated distribution to support a person's walking.
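To make this concrete, here is a minimal sketch in TypeScript of what the terms of such a walking challenge combine; the schema is an illustrative assumption, not the ESPL format:

```typescript
// Hypothetical walking-challenge terms tying together funds, contracts,
// stakeholders, outcome verification, and automated distribution.

interface WalkingChallengeTerms {
  participant: string;              // the person whose behavior changes
  sponsor: string;                  // who stakes the commitment deposit
  depositUsd: number;               // funds locked in the contract
  goal: { dailySteps: number; days: number };
  verification: "device-pedometer"; // how the outcome is measured
  onSuccess: { payDepositTo: "participant" };
  onFailure: { returnDepositTo: "sponsor" }; // no one profits from failure
}

const challenge: WalkingChallengeTerms = {
  participant: "user:alice",
  sponsor: "employer:acme-wellness",
  depositUsd: 30,
  goal: { dailySteps: 8000, days: 30 },
  verification: "device-pedometer",
  onSuccess: { payDepositTo: "participant" },
  onFailure: { returnDepositTo: "sponsor" },
};
```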
Chapter 6
Related territory in the market
Services that combine exercise or goal achievement with economic incentives, as well as infrastructure for AI agents to act on-chain, already exist. We map four adjacent territories relative to ESPL's position.
Commitment contracts
Beeminder / stickK
Overlap: Pair goal achievement with financial commitment.
Difference: Primarily Web2; not designed around an AI agent that holds a wallet and stakes funds autonomously on behalf of a delegating principal.
Move-to-Earn / Health-to-Earn
STEPN / Sweat Economy / Step App
Overlap: Connect walking and exercise to token rewards and consumer economies.
Difference: Centered on "walk and earn." There is no contract structure in which a Sponsor stakes a commitment deposit and funds are auto-distributed conditional on success or failure.
AI coaching for healthcare
Step App and others
Overlap: AI advice, coaching, and personalization for health.
Difference: Different from a structure in which the AI itself stakes the commitment deposit and carries executable accountability via a smart contract.
AI-agent wallet infrastructure
Coinbase AgentKit / Safe / Circle Wallets
Overlap: Wallet infrastructure that lets AI agents hold funds, with spending caps, human approval, MPC, and gas sponsorship.
Difference: The technical substrate exists. We have not, however, found a public service that wires it to human health behavior change × commitment deposit × smart contract × automated distribution.
The territory ESPL is aiming at is neither Move-to-Earn, nor AI coaching, nor a plain commitment contract. It is infrastructure on which a delegated AI agent participates in human behavior change with transparent accountability.
Chapter 7
Governance principles
For an AI agent to take part in human behavior change, the following ten principles are required.
- 01
Disclose the delegator
Make explicit who delegated the AI's purpose.
- 02
Disclose the purpose
Make explicit what the AI is trying to improve.
- 03
Disclose revenue sources
Make explicit how the AI, the operator, and any sponsor earn revenue.
- 04
Cap failure-side profit
Avoid structures in which the AI or the operator profits disproportionately from human failure.
- 05
Spending limits
Cap the amounts, frequency, and counterparties the AI is allowed to use (principles 05–07 are sketched in code after this list).
- 06
Human approval
Require human approval for large spends, term changes, data integrations, and recipient changes.
- 07
Kill switch
Provide a mechanism to stop, retract, or restrict the AI's actions at any time.
- 08
Data minimization
Handle only the data strictly required for behavior change.
- 09
Cautious corporate use
For employee-facing deployments, require opt-in participation, minimize personal data, and exclude this data from HR evaluation.
- 10
Explainability
Make the reasoning behind each AI-proposed challenge understandable to a human.
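As noted under principle 05, here is a minimal sketch in TypeScript of how principles 05 (spending limits), 06 (human approval), and 07 (kill switch) could be enforced at the moment the AI tries to spend; the thresholds and field names are illustrative assumptions:

```typescript
// Hypothetical spend guard combining principles 05, 06, and 07.

interface SpendRequest {
  amount: number;
  recipient: string;
}

interface GuardState {
  killSwitchEngaged: boolean;      // principle 07: stoppable at any time
  spentThisMonth: number;
  monthlyCap: number;              // principle 05: hard spending limit
  approvalThreshold: number;       // principle 06: large spends need a human
  recipientWhitelist: Set<string>; // principle 05: allowed counterparties
}

type Decision = "allow" | "require-human-approval" | "deny";

function checkSpend(req: SpendRequest, state: GuardState): Decision {
  if (state.killSwitchEngaged) return "deny";
  if (!state.recipientWhitelist.has(req.recipient)) return "deny";
  if (state.spentThisMonth + req.amount > state.monthlyCap) return "deny";
  if (req.amount > state.approvalThreshold) return "require-human-approval";
  return "allow";
}
```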
Chapter 8
Implementation roadmap
A delegated AI agent's participation in behavior change is rolled out in stages. Each stage keeps human approval, the right to stop, and auditability constant; only then does the next stage widen the scope of AI participation.
- Phase 1
Publish the position
Publish the whitepaper and the position page, presenting the idea of a delegated AI agent participating in human behavior change.
- Phase 2
Demo implementation
Build a demo in which a person prompts the AI — for example, an AI delegated by the user's future self proposes a walking challenge to the user's present self.
- Phase 3
AI-generated challenge terms
The AI generates a challenge-terms JSON. Actual transfers and smart-contract deployment continue to require explicit human approval (this gate is sketched in code after the phase list).
- Phase 4
AI-agent-dedicated wallet
Introduce a dedicated wallet for the delegated AI agent, with spending caps, approval flow, recipient whitelist, and a kill switch.
- Phase 5
Expansion
Roll out to employers, families, and municipalities. Establish personal-data protection, consent flow, audit logs, and aggregate reporting.
- Phase 6
Inter-AI coordination
Explore coordination with other AI agents — multiple AIs sharing funds, purpose, and roles to co-design behavior-change challenges.
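The Phase 3 approval gate referenced above can be sketched as follows; the function names and signatures are assumptions for illustration, not the ESPL API:

```typescript
// Hypothetical Phase 3 pipeline: the AI only drafts terms; transfers and
// contract deployment happen strictly after explicit human approval.

async function runPhase3(
  generateTerms: () => Promise<object>,          // AI drafts challenge-terms JSON
  askHuman: (terms: object) => Promise<boolean>, // explicit human approval
  deploy: (terms: object) => Promise<string>     // returns a contract address
): Promise<string | null> {
  const terms = await generateTerms();
  const approved = await askHuman(terms);
  if (!approved) return null; // no approval, no funds move
  return deploy(terms);
}
```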
Chapter 9
Conclusion
AI agents will need to participate in human behavior change to solve social problems. But the power to move people is itself dangerous. Precisely because of that, behavior change by AI must be transparent, verifiable, stoppable, and unambiguously attributable to a named principal.
ESPL can become the infrastructure on which a delegated AI agent combines funds, contracts, stakeholders, outcome verification, and automated distribution to participate directly in human behavior change.
Precisely because AI is entering an era of moving people,
we must make explicit whose purpose, whose money,
who benefits, and who can stop it.
ESPL is aiming to be that transparent behavior change infrastructure.