What makes ESPL advanced isn't that it's already AI — it's that the behavior-change execution structure that AI agents can plug into is already built
ESPL is a SaaS for behavior-change operations. It is not, today, an AI agent that autonomously designs challenges, moves funds, or executes distributions — humans do the configuration. But that configuration is itself the execution structure a future AI agent can be delegated to. What's advanced isn't 'AI inside,' it's 'a place for AI to operate.'
By Hiroshi Tanimoto
When people hear “healthcare AI,” they usually picture an AI that answers questions about health, or an AI that encourages someone to exercise.
It advises on diet. It comments on sleep. It pings when step counts are low. It says, “let’s improve your lifestyle.”
This is genuinely useful.
But the real difficulty of behavior change isn’t solved by information or encouragement alone.
Most people know they should walk more, exercise more, improve the numbers on their checkup.
They still don’t keep it up.
Because behavior change isn’t only a knowledge problem. It is an execution-structure problem.
This is where ESPL’s advantage sits.
Where ESPL is today, and where it is going
To prevent a common misread upfront: ESPL’s advantage is not that AI is executing things autonomously today. It is that the operational structure of behavior change — the very work an AI agent will eventually do — is already implemented, on a human-operated basis.
Today
- Humans configure the challenges.
- Humans decide the sponsor, the challenger, the recipient, and the distribution conditions.
- The app runs the walking challenge and the monetary commitment under those human-set conditions.
Where it is going
- An AI agent obtains the user’s permission.
- The AI agent proposes challenges within a wallet, a budget cap, approval rules, and recipient restrictions.
- After the user approves, the AI agent automates challenge creation, stake custody, outcome judgment, and distribution.
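To make that future shape concrete, here is a minimal sketch of what a user-declared delegation scope could look like as data. Everything here is an assumption for illustration — the interface name, the field names, and the sample values are not ESPL's actual schema (the yen figures echo the guardrail examples later in this piece).

```typescript
// Hypothetical shape of the authority a user grants an AI agent.
// None of these names come from ESPL; they only illustrate the idea.
interface DelegationScope {
  walletAddress: string;             // the wallet the agent may operate
  monthlyBudgetCapJpyc: number;      // hard ceiling on automatic spending
  perStakeApprovalAboveJpyc: number; // single stakes above this need human approval
  allowedRecipients: string[];       // recipients the agent may propose
}

const scope: DelegationScope = {
  walletAddress: "0xUserWallet",
  monthlyBudgetCapJpyc: 5_000,
  perStakeApprovalAboveJpyc: 1_000,
  allowedRecipients: ["0xFamilyWallet", "0xCharityWallet"],
};

console.log(scope);
```

The point of declaring the scope as data, rather than burying it in settings screens, is that both a human operator today and an AI agent tomorrow can be handed exactly the same object.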
Put another way — a SaaS for behavior-change operations
If you had to position ESPL in one phrase: a SaaS for behavior-change operations.
Just as SaaS captured business processes into executable software, ESPL captures “the mechanism that sustains the reason to walk” into something runnable inside the app.
The single distinction: every operation in this app is shaped so that it can later be delegated to an AI agent.
AI is moving from “answering” to “executing”
The major shift in recent AI is not just that text generation has improved.
It is that AI is beginning to hold execution authority over real working environments.
In software, this is easy to see. Earlier chat AIs explained how to write code, guessed at error causes. They were “AIs that advise.”
In agentic tools like Claude Code, however, the AI reads codebases, edits files, runs commands, returns results. The user says “fix this bug,” “add this feature” — and the AI works inside the actual development environment.
This is the role of AI expanding from advice to execution.
The same shift is plausible in healthcare and behavior change.
Today’s healthcare AI answers questions about health. Tomorrow’s healthcare AI converses and encourages. The one after that designs and executes the behavior-change mechanism itself — within the scope the user has approved.
ESPL is built ahead of this third tier.
That said, again: none of this means ESPL executes these operations via AI today.
What ESPL already has is the behavior-change protocol that a future AI will be able to execute.
What behavior change needs is structure, not encouragement
Most health apps tell the user:
“You haven’t hit your step count today.” / “Try harder tomorrow.” / “Exercise for your health.”
These notifications have value, of course.
But human behavior doesn’t change much from notifications alone. We see the ping and put it off when we’re busy. We skip the walk when we’re tired. “Just today is fine,” we think.
What matters here is commitment design.
People struggle to keep going on their own intent. But they keep going when they’ve made a promise to someone.
They take it seriously when their own money is on the line. They push harder if someone they love benefits from success. They resist quitting if failure means an unwanted distribution.
Of these, “money on the line” and “unwanted distribution on failure” drive loss aversion — the behavioral-economics bias by which the pain of losing pulls about twice as strongly as the joy of an equivalent gain.
And “someone you love benefits from success” / “family or a friend would have received this and won’t if you don’t walk” drives altruism — the motivation to act for someone other than yourself.
ESPL turns this combination — loss aversion × altruism — into something an app can execute.
Designing “unwanted distribution” carefully — the line between manipulation and support
A careful note: choosing the failure-case recipient is an ethical design question.
It is true that picking someone the user “would mildly prefer to avoid” amplifies the loss-aversion pull.
But this sits exactly on the line between manipulating behavior and supporting it.
For example, sending a forfeited stake to a political party the user dislikes, or to a specific individual the user wants to spite, would be effective as a behavior-change device — and it would also encroach on the user’s autonomy.
ESPL’s design guideline is to restrict failure-recipient choices to a socially defensible range for the user: family, friends, a charity they want to support — choices the user can articulate in advance and that look defensible to a reasonable observer.
The negative incentive should not be made too sharp. “Slightly disappointing but socially fine” is the right intensity.
The design of behavior change always requires a working ethic on the line between manipulation and support.
Today, ESPL is “a behavior-change engine that humans configure”
The most accurate description of ESPL today is this:
It is not an AI that operates everything autonomously. It is a behavior-change engine that humans configure.
The user or sponsor sets the challenge conditions: who supports, who challenges, how much is staked, who receives on success, who receives on failure, how many steps, over what window, and at what threshold.
Today, humans make all of those decisions.
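One way to see how well-shaped that decision set already is: it fits naturally into a single data structure. The sketch below is an illustration, not ESPL's actual model — every field name is an assumption, and the sample values echo the figures used later in this article (¥3,000 stake, 7,000 steps, five days out of seven).

```typescript
// Illustrative data shape for one human-configured challenge.
// Every field name here is an assumption for the sketch, not ESPL's schema.
interface ChallengeConfig {
  sponsor: string;           // who supports
  challenger: string;        // who challenges
  stakeJpyc: number;         // how much is staked
  successRecipient: string;  // who receives on success
  failureRecipient: string;  // who receives on failure
  stepsPerDay: number;       // how many steps
  windowDays: number;        // over what window
  requiredDays: number;      // at what threshold (days the target must be hit)
}

const challenge: ChallengeConfig = {
  sponsor: "sponsor-user",
  challenger: "challenger-user",
  stakeJpyc: 3_000,
  successRecipient: "challenger-user",
  failureRecipient: "charity-the-user-supports",
  stepsPerDay: 7_000,
  windowDays: 7,
  requiredDays: 5, // e.g. five days out of seven
};

console.log(challenge);
```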
What’s interesting is not that the operation is currently manual.
It is that the set of operations itself is shaped so that it can be delegated to a future AI agent.
It is easy to imagine, in a near future, an AI agent saying:
“Looking at your step history, 10,000 steps a day right away would be tough. Let’s start at 7,000 steps, five days a week.”
“Stakes that are too high feel punishing; too low and the behavioral effect weakens. Let’s cap this at ¥3,000.”
“Adding a family member as a success-case recipient creates a motivation to push beyond your own benefit.”
“For the failure-case recipient, pick someone socially defensible whom you’d mildly prefer to avoid.”
“I’ll create the challenge with these conditions. Proceed?”
If the user approves, the AI creates the challenge, operates the wallet within scope, and executes custody and distribution.
In principle, this is the same thing today’s ESPL does — only the operation that is currently manual gets delegated to AI with user permission.
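A minimal sketch of that propose → approve → execute loop, under loudly stated assumptions: `proposeFromHistory`, `askUserApproval`, and `createChallenge` are hypothetical stand-ins, not any real ESPL API, and the proposal heuristic is invented for illustration.

```typescript
// Hypothetical propose -> approve -> execute loop for a delegated agent.
// askUserApproval and createChallenge are stand-ins, not a real ESPL API.
type Proposal = { stepsPerDay: number; daysPerWeek: number; stakeJpyc: number };

function proposeFromHistory(avgDailySteps: number): Proposal {
  // Start below an immediate 10,000-step target when history says it's too much.
  const stepsPerDay = avgDailySteps < 8_000 ? 7_000 : 10_000;
  return { stepsPerDay, daysPerWeek: 5, stakeJpyc: 3_000 };
}

async function askUserApproval(p: Proposal): Promise<boolean> {
  console.log("Proposed conditions:", p); // a real UI would ask the user here
  return true;
}

async function createChallenge(p: Proposal): Promise<void> {
  console.log("Challenge created within the approved scope:", p);
}

async function runAgentOnce(avgDailySteps: number): Promise<void> {
  const proposal = proposeFromHistory(avgDailySteps);
  const approved = await askUserApproval(proposal); // the human stays in the loop
  if (!approved) return;                            // nothing happens without consent
  await createChallenge(proposal);                  // execution only after approval
}

runAgentOnce(6_200);
```

Note what the loop does not contain: any path on which execution happens without an explicit approval step.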
ESPL’s advantage isn’t “AI inside” — it’s “easy to delegate to AI”
This is the point not to misread.
ESPL’s advantage is not that a sophisticated AI is implemented today.
The advantage is that the execution layer where AI belongs has already been designed.
Most healthcare AIs sit in the conversational layer. They talk to the user, encourage, advise, answer.
What ESPL works on is the execution structure of behavior change.
How is the goal set? Who supports? How much is staked? Who receives? What happens on success? What happens on failure? How is the outcome judged?
This is not conversation. It is institutional design for changing behavior.
For AI to genuinely engage in behavior change, it has to operate at this design layer.
The likeness to Claude Code is “delegable work structure,” not “already AI”
When comparing ESPL to Claude Code, the wording has to be precise.
Claude Code is, today, a tool in which AI writes code, edits files, runs commands.
ESPL today is not a tool in which AI creates challenges and routes funds autonomously.
So the two can’t be put on the same level as “both are AI that executes.”
The accurate version is:
Claude Code is a case where, in software development, AI has been given execution authority.
ESPL is a case where, in behavior change, the units of work over which a future AI can be given execution authority have already been structured.
What is comparable is not “current AI implementation level.” It is the structure of delegable work.
In Claude Code, what’s delegable to AI is code editing, file editing, test execution, command execution.
In ESPL-style designs, what will be delegable to AI is challenge design, stake-amount proposal, recipient design, success/failure conditions, outcome judgment, and distribution execution.
Interface between code SaaS and AI × interface between behavior-change SaaS and AI agents
Put another way:
Just as Claude Code stands at the interface between code SaaS and AI, what ESPL aims at is the interface between behavior-change SaaS and AI agents.
SaaS captured business processes into executable software; on top of that history, the era arrives in which AI agents operate as “principals delegated by the user, within a user-declared scope.” Claude Code is the early example in software. ESPL is building the same structure ahead of time, in another domain — behavior change.
Why a bank API or a traditional escrow isn’t enough — the necessity of JPYC
A fair question shows up here.
Why bring an on-chain currency into this at all? Could we not just have the AI call bank APIs, or rely on a third-party escrow service?
For an AI agent to actually execute behavior change, bank APIs and traditional escrow fall structurally short on three requirements.
1. 24/7 real-time settlement
Step-count goal hits and misses get judged the moment behavior occurs. For daily and weekly challenges — “goal hit at midnight Friday, distribution executes 00:00 Saturday” — you need execution that doesn’t depend on bank business hours.
Bank APIs are, in practice, business-hour bound. Real-time settlement at night, on weekends, on holidays is either unavailable or conditional across most providers.
An on-chain JPY stablecoin runs whenever the network is up.
2. Programmable conditional branching
“Goal met → full stake to the challenger; missed → X% to the recipient, Y% as platform fee” — the branching can be written in a few lines of smart-contract code and runs automatically.
In a traditional escrow, the judgment and the split lean on a human operator or an outsourced manual process.
For the distribution to fire the instant the AI commits a judgment, the contract’s conditional logic itself has to be machine-readable and self-executing.
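To show how small that branch really is, here is the same rule sketched in TypeScript rather than an actual contract language. On-chain, this logic would live in a smart contract; the fee percentage below is purely illustrative.

```typescript
// Conditional distribution: the entire "judgment -> split" rule in one function.
interface Transfer { to: string; amountJpyc: number }

function settle(
  stakeJpyc: number,
  goalMet: boolean,
  challenger: string,
  failureRecipient: string,
  platform: string,
  feePct = 10, // illustrative Y%; the failure recipient gets the remainder
): Transfer[] {
  if (goalMet) {
    // Goal met: the full stake returns to the challenger.
    return [{ to: challenger, amountJpyc: stakeJpyc }];
  }
  const fee = Math.floor((stakeJpyc * feePct) / 100);
  return [
    { to: failureRecipient, amountJpyc: stakeJpyc - fee }, // X% to the recipient
    { to: platform, amountJpyc: fee },                     // Y% platform fee
  ];
}

console.log(settle(3_000, false, "challenger", "charity", "platform"));
// -> [{ to: "charity", amountJpyc: 2700 }, { to: "platform", amountJpyc: 300 }]
```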
3. A verifiable API boundary the AI agent can sign on directly
This is the biggest one.
Bank APIs are designed around human-user authentication. They are not built for an AI “signing on behalf of the user.” OAuth and similar delegated-access schemes, likewise, were never designed to let the delegate act as the settlement principal for outbound payments.
By contrast, smart contracts and on-chain currency are built around precisely this case:
- The AI agent signs with its own key (within a scope the user pre-declares).
- The signing event is recorded verifiably on a public ledger.
- The state before and after signing is auditable by observation.
If you try to drive an AI through a bank API, the AI has to “impersonate the user.” That breaks safety, accountability, and auditability all at once.
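To make the contrast tangible, here is a minimal Node.js sketch of an agent signing an action with its own key, and an observer verifying both the signature and the declared scope. The scope and action shapes are hypothetical; only the `node:crypto` calls are real.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// The agent holds its own key pair; it never impersonates the user.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Hypothetical user-declared scope and a proposed action (illustrative shapes).
const scope = { maxStakeJpyc: 1_000, allowedRecipients: ["0xFamilyWallet"] };
const action = { kind: "distribute", amountJpyc: 800, to: "0xFamilyWallet" };

// The agent signs the action with its own key (Ed25519 takes a null algorithm).
const payload = Buffer.from(JSON.stringify(action));
const signature = sign(null, payload, privateKey);

// Any observer can check who signed, and that the action fit the declared scope.
const signedByAgent = verify(null, payload, publicKey, signature);
const withinScope =
  action.amountJpyc <= scope.maxStakeJpyc &&
  scope.allowedRecipients.includes(action.to);

console.log({ signedByAgent, withinScope }); // { signedByAgent: true, withinScope: true }
```

The structural point: the agent's identity, the action, and the scope check are all separate, inspectable objects — none of them requires the agent to hold the user's own credentials.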
A JPY-pegged on-chain stablecoin like JPYC is the only settlement substrate, today, on which an AI agent can operate as a delegated separate principal within a user-defined scope.
This isn’t “let’s use Web3 for the sake of it.” It is a design necessity: the economic substrate on which an AI agent can execute behavior change exists, today, only on top of an on-chain JPY stablecoin.
What matters when AI starts holding wallets
When AI agents start holding wallets and, with user permission, paying, custodying, and distributing, healthcare AI will look quite different.
If AI is going to operate wallets, the design must be careful.
Spending caps, human approvals, recipient restrictions, audit logs, an emergency stop, data minimization — the requirement list is long.
Translating abstract requirements into a concrete UI
This may read like an abstract list, but it lands as something concrete in the UI.
When configuring the AI’s delegated authority, the user can declare rules such as:
- “Cap automatic stake spending at ¥5,000 per month.”
- “If a single stake exceeds ¥1,000, halt and ask me to approve.”
- “If the AI proposes a new recipient, require my confirmation.”
- “If the past 7 days of spending exceed the projection, emergency-stop and hand control back to me.”
The AI agent operates only within the outline the user has declared. Going outside the outline requires human approval.
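A sketch of how those declared rules could be evaluated before any action executes. The values mirror the example rules above; the type and function names are assumptions for illustration, and the emergency-stop condition is simplified to a monthly-cap overrun.

```typescript
// The user-declared outline, mirroring the example rules above (illustrative).
const rules = {
  monthlyCapJpyc: 5_000,    // cap automatic stake spending per month
  approvalAboveJpyc: 1_000, // single stakes above this need approval
  knownRecipients: new Set(["family", "charity"]),
};

type Decision = "allow" | "ask-user" | "emergency-stop";

function evaluate(
  stakeJpyc: number,
  recipient: string,
  spentThisMonthJpyc: number,
): Decision {
  // Beyond the monthly cap: stop and hand control back to the human.
  if (spentThisMonthJpyc + stakeJpyc > rules.monthlyCapJpyc) return "emergency-stop";
  // A new recipient or a large single stake: pause for human approval.
  if (!rules.knownRecipients.has(recipient)) return "ask-user";
  if (stakeJpyc > rules.approvalAboveJpyc) return "ask-user";
  return "allow";
}

console.log(evaluate(800, "family", 2_000));   // "allow"
console.log(evaluate(1_500, "family", 2_000)); // "ask-user"
console.log(evaluate(800, "stranger", 2_000)); // "ask-user"
console.log(evaluate(800, "family", 4_800));   // "emergency-stop"
```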
This is not a binary choice between trusting the AI and not trusting it. It is “the user declares how much trust they delegate, and where that trust ends.”
Only with that guardrail can AI enter the execution layer of behavior change.
This holds for ESPL too, if and when its operations are delegated to an AI agent.
AI does not create challenges on its own. AI does not move money on its own.
Within the scope the user has pre-authorized, AI helps with challenge design and execution.
Making this design principle explicit is what earns social trust.
Summary — what it means to hold the “AI-ready” structure first
The evolution of healthcare AI can be summarized in three stages.
Stage 1: AI that answers. Explains health information, answers questions.
Stage 2: AI that supports. Converses, encourages, logs activity, gives feedback.
Stage 3: AI that participates in execution structure. Engages with goal setting, commitment, monetary incentives, recipient design, outcome judgment, distribution execution.
ESPL today is not a stage-3 AI implementation.
But ESPL already has, as a behavior-change operations SaaS, the structure that a stage-3 AI would have to execute on.
Most AI health coaches say “let’s walk” to the user.
ESPL builds the promise, the stake, the recipient, the distribution rules that let someone keep walking. And in the future, that design can be delegated to an AI agent under the user’s authorization.
The substrate that future AI agent will need is 24/7 real-time settlement, programmable conditional branching, and a verifiable API boundary the AI can sign on directly — that is, an on-chain JPY stablecoin like JPYC.
On top of that substrate, two more things are needed: guardrails the user themselves declares, and a working ethic on the line between manipulation and support.
It isn’t advanced because it’s already AI.
It is advanced because the shape of behavior change that AI ought to execute has been built ahead of time, as a behavior-change SaaS.