AI Agents × Behavior Change
The age when AI agents,
delegated by humans, directly participate in human behavior change.
From AI advice to AI sponsorship.
An "AI agent" is, in the original sense of the word agent, a delegate that acts on behalf of a person or an organization.
Until now, AI has mostly been an advisor — answering when asked. An AI agent takes the next step: within the scope it has been delegated, it uses tools like ESPL itself to directly participate in human behavior change.
AI can advise. But advice alone does not change behavior.
Changing behavior requires a purpose, funds, stakeholders, a promise, outcome verification, and a distribution rule. ESPL wires those together on a smart contract — the substrate on which a delegated AI agent can participate in human behavior change under transparent rules.
Whitepaper currently in Japanese. English summary on request.
01 / Who AI Works For
Whom does AI work for?
An AI agent does not participate in human behavior change on its own initiative.
It receives an explicit purpose, budget, and rules from a person, a family, an employer, a municipality, a health insurer, or another AI agent — and acts only inside that scope.
- Self: AI for yourself
  Your future self prompts your present self toward healthier behavior. You delegate your own budget and goals to your AI, and receive its support.
- Family: AI for family
  Express the wish that someone you love stays healthy for longer. A family-delegated AI designs challenges backed by a support stake.
- Company: AI for employers
  Support employee health, wellbeing, and productivity. A corporate wellness AI designs opt-in challenges for staff.
- Public: AI for municipalities and health insurers
  Address rising medical costs, long-term care prevention, isolation, and community participation. A public-purpose AI designs social-prescription-style challenges for residents.
- Multi-AI: coordination with other AI agents
  Multiple AIs share funds, goals, data, and roles to support a single person's behavior change. AI-to-AI coordination still happens strictly within the scope humans have delegated.
02 / What AI Can Do
What an AI agent can do
On top of ESPL's mechanism, an AI agent can execute the following actions.
Each action stays within the scope delegated by its principal.
- Design a walking challenge
- Set success and failure conditions
- Set the Recipient
- Send the commitment deposit to the smart contract
- Monitor challenge progress
- Execute conditional distribution based on the outcome
- Propose the next challenge when appropriate
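The action list above can be compressed into a minimal lifecycle sketch. Nothing below is ESPL's real interface: `Challenge`, `design_challenge`, and `settle` are hypothetical names, and the on-chain deposit and conditional distribution are simulated as plain Python.

```python
from dataclasses import dataclass

@dataclass
class Challenge:
    """One walking challenge, as a delegated AI agent might design it (illustrative)."""
    challenger: str          # who walks
    daily_step_target: int   # success condition
    days: int                # challenge length
    deposit: float           # commitment deposit staked on the contract
    recipient_on_success: str
    recipient_on_failure: str

def design_challenge(principal_budget: float) -> Challenge:
    # The agent designs only within the budget its principal delegated.
    return Challenge(
        challenger="alice",
        daily_step_target=8000,
        days=7,
        deposit=min(50.0, principal_budget),
        recipient_on_success="alice",          # e.g. the challenger herself
        recipient_on_failure="local_charity",  # e.g. a recipient the principal chose
    )

def settle(challenge: Challenge, daily_steps: list[int]) -> tuple[str, float]:
    """Conditional distribution: the outcome selects which recipient gets the deposit."""
    succeeded = all(s >= challenge.daily_step_target
                    for s in daily_steps[:challenge.days])
    recipient = (challenge.recipient_on_success if succeeded
                 else challenge.recipient_on_failure)
    return recipient, challenge.deposit

c = design_challenge(principal_budget=100.0)
print(settle(c, [9000] * 7))            # every day met the target
print(settle(c, [9000] * 6 + [4000]))   # one day missed
```

The split between `design_challenge` (agent discretion, bounded by the delegation) and `settle` (mechanical, outcome-driven) mirrors the division of labor the section describes.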
The original ESPL also runs on a smart contract. The only structural difference from the AI agent version is one step at the top — whether the Sponsor role is taken by a person directly, or by an AI agent acting on someone's behalf.
Diagram 1 — Original ESPL
A human acts as Sponsor and designs the challenge directly
- Sponsor (human)
- Smart contract
- Challenger (walks)
- Conditional distribution (Recipient on success / on failure)
Diagram 2 — AI agent version
A delegated AI agent designs and acts on someone's behalf
- Person · family · employer · municipality · other AI
- AI agent
- Smart contract
- Challenger (walks)
- Conditional distribution (Recipient on success / on failure)
The smart contract, the walking, and the conditional distribution are identical. What changes is who takes the Sponsor role.
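The claim that only the Sponsor role changes can be sketched as a role interface: a human and a delegated AI agent can both fill the role, and the contract side stays identical. All names here (`Sponsor`, `fund`, `lock_deposit`) are illustrative assumptions, not ESPL's actual API.

```python
from typing import Protocol

class Sponsor(Protocol):
    """Whoever fills the Sponsor role funds the challenge."""
    def fund(self) -> float: ...

class HumanSponsor:
    """Diagram 1: a person takes the Sponsor role directly."""
    def __init__(self, deposit: float):
        self.deposit = deposit
    def fund(self) -> float:
        return self.deposit

class AgentSponsor:
    """Diagram 2: a delegated AI agent sponsors within its budget cap."""
    def __init__(self, delegated_budget: float, cap: float):
        self.delegated_budget = delegated_budget
        self.cap = cap
    def fund(self) -> float:
        return min(self.delegated_budget, self.cap)

def lock_deposit(sponsor: Sponsor) -> float:
    # The smart-contract side is identical regardless of who sponsors.
    return sponsor.fund()

print(lock_deposit(HumanSponsor(30.0)))         # human sponsor
print(lock_deposit(AgentSponsor(100.0, 50.0)))  # AI agent sponsor, capped
```

`lock_deposit` never inspects which kind of sponsor it received, which is exactly the structural point the two diagrams make.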
Human intent, AI participation, and smart-contract execution — when these three align on a single layer, behavior change becomes implementable.
03 / Coach vs Sponsor
AI advisor vs. AI sponsor
Most AI in healthcare today offers advice, notifications, reminders, and coaching.
ESPL aims for something more: an AI that stakes funds, locks the promise on a smart contract, and executes distribution based on success or failure.
In other words, AI evolves from "advisor" into an actor with executable accountability.
Traditional AI healthcare → ESPL-style
- Gives advice → Designs a challenge
- Sends notifications → Stakes funds on-chain
- Encourages → Locks in contract terms
- Observes outcomes → Distributes funds based on outcomes
- Advising AI → Acting AI
Not notifications. Promises. Not advice. Commitment. Not encouragement. Executable accountability.
04 / Why Walking
Why walking is the first behavior
Walking is the most fundamental, the most daily, and the most measurable health behavior.
Productivity, medical costs, long-term care prevention, isolation, mental health, community participation — many social problems begin to improve when a person steps outside, walks, and reconnects with someone.
- The most fundamental: walking is a health behavior that nearly anyone can do as part of daily life.
- The most measurable: step counts are captured by standard sensors and remain comparable across phones, watches, and operating systems.
- The most social: walking gets people outside, into their neighborhoods, and into face-to-face contact, directly connecting to isolation prevention and mental health.
ESPL can become the first concrete behavior change infrastructure through which AI agents act on real-world social problems.
05 / Trust & Governance
Trust and governance
Precisely because AI participates in human behavior, transparency is required.
Every AI agent acting on ESPL should follow these seven principles.
1. Make explicit who delegated the AI's purpose
2. Make explicit the budget the AI is allowed to use
3. Make explicit the AI's own compensation
4. Make explicit the recipients on success and on failure
5. Avoid structures where the AI profits disproportionately from human failure
6. Ensure humans can stop, retract, or refuse the AI at any time
7. Limit the purposes for which health data is used
Diagram 3 — Trust guardrails
Six guardrails are installed around every AI agent at all times. Lose any one of them and trust collapses.
- Spending cap
- Human approval
- Recipient whitelist
- Data minimization
- Kill switch
- Audit log
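A minimal sketch of how these guardrails could gate an agent's spending. Every name here (`guardrails_ok` and its parameters) is a hypothetical illustration, not ESPL's implementation; data minimization sits on the data path rather than the payment path, so it is not modeled in this check.

```python
def guardrails_ok(
    amount: float,
    recipient: str,
    *,
    spending_cap: float,       # guardrail 1: hard budget limit
    whitelist: set[str],       # guardrail 3: allowed recipients only
    human_approved: bool,      # guardrail 2: explicit human sign-off
    kill_switch_engaged: bool, # guardrail 5: humans can stop the agent
    audit_log: list[str],      # guardrail 6: every decision is recorded
) -> bool:
    """Return True only if every guardrail allows this fund movement."""
    if kill_switch_engaged:
        audit_log.append(f"BLOCKED kill-switch: {amount} -> {recipient}")
        return False
    if amount > spending_cap:
        audit_log.append(f"BLOCKED over-cap: {amount} -> {recipient}")
        return False
    if recipient not in whitelist:
        audit_log.append(f"BLOCKED recipient: {amount} -> {recipient}")
        return False
    if not human_approved:
        audit_log.append(f"BLOCKED no-approval: {amount} -> {recipient}")
        return False
    audit_log.append(f"ALLOWED: {amount} -> {recipient}")
    return True

log: list[str] = []
ok = guardrails_ok(40.0, "alice", spending_cap=50.0, whitelist={"alice"},
                   human_approved=True, kill_switch_engaged=False, audit_log=log)
print(ok, log[-1])
```

Note that the kill switch is checked first and the audit log is written on every path, blocked or allowed, which matches the text's point that losing any one guardrail collapses trust.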
The WHO has stated that AI in health should place ethics and human rights at the center of design, deployment, and use. NIST's AI Risk Management Framework similarly frames AI risk as something to be managed at the individual, organizational, and societal levels.
Where We're Going
Human intent.
AI participation. Smart-contract execution.
ESPL is the behavior change infrastructure through which AI agents — delegated by people or organizations — participate in human behavior change via smart contracts and incentives.
Precisely because AI is entering human behavior change, we must be explicit about whose purpose, whose money, who benefits, and who can stop it.
Whitepaper
Read the whitepaper
Behavior change infrastructure for the AI agent era — a position paper covering self-preservation, revenue incentives, and governance design.
App
Try the app
Experience the Sponsor / Challenger / Recipient mechanism in the existing ESPL app today.