AI agents × behavior change — paper design
Designing the sponsor-side PoC with behavioral economics and blockchain
Last updated: May 16, 2026
"PoC" on this page means a paper-design phase where we draft together the blueprint for a sponsorship-slot campaign deployed from your AI agent, not a tool you receive and trial-run. The output is a design summary that lets you decide: "with this AI agent, this budget, we'll go after this KPI." Because the engagement does not assume smart-contract implementation or AI-agent development on our side, you can cleanly separate what's internal vs. outsourced vs. delegated.
1. Who this page is for
The primary reader is a marketing or product lead at a brand / service / product company that wants to deploy a "walking-challenge sponsorship slot" from their own AI agent. Concretely: D2C brands, healthcare / beverage / food manufacturers, insurers, fitness operators, health-food e-commerce — anyone who wants to rebuild existing-customer engagement or new-user awareness with something other than "discount coupons or one-off campaigns".
The same frame applies to the following readers, but the center of gravity and the handling shift:
- AI-agent platform companies: the mirrored case, where you sit at the hub and handle sponsorship slots on behalf of other companies. The paper-design scope shifts toward "the architecture of your agent and the integration points with the ESPL SDK / protocol".
- Web3 / DAO operators: making the community's "offline behavior" visible and measurable. The center of gravity is token economics, oracle design, and community governance.
- Health & Productivity Management companies: the pattern where the sponsor of an employee walking challenge is the company itself. See also Health & Productivity Management × behavior change paper design.
The general thinking on behavior change in the AI-agent era — the B2A2A2E model, smart-contract execution that cuts self-preservation out of the loop, governance of profit incentives — lives on the sister pages AI agents and behavior change and the whitepaper. This page goes one step further: it shows, in the form of a paper-design phase, what would actually come out of your AI agent.
2. Scenario (a D2C brand)
The illustrative example is a D2C brand (protein, supplements, functional foods), but in the actual paper-design phase the prerequisites and pain points below are replaced with yours, based on what discovery surfaces.
Prerequisites
- Type: a D2C brand in the healthcare space; recurring subscription is the main revenue line
- Scale: 50,000–100,000 lifetime customers, 10,000–20,000 monthly active members
- Marketing budget: ¥5M–20M/month, more than half on paid ads and discount coupons
- Existing AI usage: a chat LLM for customer support, product recommendations, and automated email / push notifications
- Goal: reduce churn (raise retention) and lift unaided brand recall. Wants an angle that doesn't lean on discounting
Example paper-design output
- One or two sponsorship-slot challenge definitions (success conditions, period, JPYC allocation, degree of freedom in choosing recipients)
- One or two intervention levers — out of the three (loss aversion, altruism, peer effect) — that align with the brand's worldview
- AI-agent integration points: ownership split and API boundaries across joining, progress notifications, and outcome distribution
- Operating-budget estimate: JPYC funding, gas, AI-agent invocation cost, operating headcount — allocated across components
- KPIs: a four-metric set (continuation rate, touchpoint count, unaided brand recall, churn rate), plus channel attribution against existing marketing
- Legal / compliance issues list: crypto-asset treatment, Act against Unjustifiable Premiums and Misleading Representations, Act on Specified Commercial Transactions, personal data, governance
With this design summary in hand, you can weigh "build internally", "combine with an external partner", and "borrow ESPL's sponsorship API for the PoC window" on an even footing. The paper-design phase is the work of lining up those decision inputs.
3. What we decide together in the paper-design phase
(1) Discovery
We inventory the channel-by-channel KPIs of existing marketing programs, the current state of the AI agent (stack, vendor, operating setup), and the qualitative data around churn / retention. Signs of "discount-coupon fatigue", the current position of unaided recall, conversion attribution from AI-channel paths — pulling the numbers scattered across the organization onto a single sheet is the first piece of work.
(2) Defining the sponsorship slot's role
We write down "what condition triggers how many JPYC to whom" in a way that's consistent with the brand's worldview and the existing marketing KPIs. Whether the recipient is the user themselves, or family / a charity / the local community, or a brand-community public good — the pitch and the retention curve change with each.
(3) Incentive design (mapping the three levers to a marketing context)
We pick the levers — loss aversion and commitment (miss-penalty / pre-committed sponsorship), altruism (distribution to family, a charity, the local community), social proof and peer effect (in-community ranking, team battles) — that match the brand's voice. Going deep on one or two levers is easier to read against marketing KPIs than touching all three. The fuller principles are on the mechanism page (three levers).
(4) AI-agent integration points
We map the ownership split between your existing AI-agent stack (LLM, recommendation, notification delivery, CRM, customer-support integration) and ESPL's sponsorship slot / smart contracts / step oracle. The trick that keeps long-run operating cost down is drawing a clean line between the AI-side responsibilities (conversation, recommendation, summarization, permission capture) and the smart-contract-side responsibilities (outcome judgment, distribution, audit log).
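That dividing line can be sketched as two interfaces: the agent side never judges outcomes or moves funds, and the contract side never talks to the user. All names below (`AgentSide`, `ContractSide`, `run_settlement`) are illustrative assumptions, not the actual ESPL API.

```python
from typing import Protocol


class AgentSide(Protocol):
    """Stays in your stack: conversation, recommendation, permission capture."""
    def capture_permission(self, user_id: str) -> bool: ...
    def notify(self, user_id: str, message: str) -> None: ...


class ContractSide(Protocol):
    """Delegated to smart contracts + the step oracle: judgment, payout, audit log."""
    def judge(self, user_id: str, daily_steps: list[int]) -> bool: ...
    def distribute(self, user_id: str, amount_jpyc: int) -> str: ...  # returns a tx hash


def run_settlement(agent: AgentSide, contract: ContractSide,
                   user_id: str, daily_steps: list[int], payout_jpyc: int) -> None:
    # The agent only relays; only the contract side judges and moves funds.
    if contract.judge(user_id, daily_steps):
        tx = contract.distribute(user_id, payout_jpyc)
        agent.notify(user_id, f"Challenge met. Payout tx: {tx}")
    else:
        agent.notify(user_id, "Challenge not met this window.")
```

Keeping the payout path entirely on the contract side is what preserves the audit log and keeps the agent vendor swappable.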
(5) Evaluation design and next-phase options
We design the evaluation windows during the PoC (biweekly / monthly / quarterly), the comparison set against existing marketing, and how the results feed into your internal approval process. Based on the output, we list which of the following is realistic: (a) full internal implementation, (b) borrowing ESPL's sponsorship slot for a fixed window, or (c) running a separate PoC to test an adjacent hypothesis first.
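Pinning the review cadence down early keeps the comparison honest. A minimal sketch of generating checkpoint dates for a PoC window (the function name and cadence values are assumptions for illustration):

```python
from datetime import date, timedelta


def evaluation_checkpoints(start: date, poc_days: int, cadence_days: int) -> list[date]:
    """Dates on which KPI snapshots are compared against the baseline."""
    return [start + timedelta(days=d)
            for d in range(cadence_days, poc_days + 1, cadence_days)]


# A 90-day PoC reviewed biweekly yields six checkpoints:
checkpoints = evaluation_checkpoints(date(2026, 6, 1), 90, 14)
print(len(checkpoints))  # 6
```

Fixing these dates in the design summary, before launch, is what lets "stop the PoC" remain a legitimate outcome rather than a moving target.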
4. What's included / not included
Included
- Discovery and inventory of marketing programs and AI agents
- Sponsorship-slot campaign design (success conditions, period, distribution)
- Mapping the three levers to a marketing context
- Ownership split and API boundaries between the AI agent and ESPL
- Operating-budget estimate (JPYC funding, gas, operating headcount)
- KPI design and selection of the comparison set against existing marketing
- Legal / compliance issues list (crypto-assets, Act against Unjustifiable Premiums and Misleading Representations, Act on Specified Commercial Transactions, etc.)
Not included
- AI agent development / customization
- Smart contract implementation / audit
- JPYC / crypto-asset procurement / custody
- Creative — ad assets, LP, email copy
- Final legal / tax / audit judgment (assumes outside counsel)
- Hosting / custody of user data
The "not included" items are expected to be carried forward in-house based on the paper-design results, or via a separate engagement with development partners, a law firm, or the JPYC issuer. We'll also share our read on who would be a fitting partner during the design phase.
5. Suggested timeline
Below is a rough guide. The numbers shift with scale, stakeholder count, and the complexity of your existing stack.
- First call (30–60 min): an online conversation about the current state and the goal. This gives you the inputs needed to decide whether to proceed into the paper-design phase.
- Design phase (3–6 weeks): two or three discovery sessions, draft, alignment, final delivery. The duration shifts with the AI-agent stack and the number of legal stakeholders.
- After delivery: you move into internal decision-making — full internal implementation, working with an external partner, or borrowing ESPL's sponsorship API for a fixed window. Ongoing support is a separate conversation.
Fees vary with scope and stakeholder count, so we give a specific quote in the first call. We can quote the paper design and any follow-on accompaniment as a single package, or split the decisions across two stages.
6. FAQ
Q. We don't have an AI agent yet (building or outsourcing). Can we still talk?
Yes. Within the paper-design phase we work out together "the minimum AI-agent configuration you'd need to handle an ESPL sponsorship slot if you build now". We don't run stack selection or vendor comparison for you, but having the requirements and boundaries lined up makes the conversation with outside counsel / vendors easier.
Q. We don't hold JPYC. The internal bar for crypto-asset handling is high.
Holding JPYC isn't required at the paper-design phase. Within the design we line up the scenarios — "where would the JPYC funding come from", "how is it treated for accounting and tax", "how is it tabled for internal sign-off" — as an issues list you can take to your outside counsel. The final judgment must always sit with your counsel / auditor.
Q. Does the PoC hold up if our target users haven't installed the ESPL app yet?
Some designs hold up; others don't. We weave "the AI agent naturally introduces the ESPL app and captures permission and sync" into your existing channels (email, push, the chat LLM) during the paper-design phase. The assumed audience size and continuation rate also get estimated as part of the design.
Q. Can we publish the PoC results or the design externally?
By default, the paper-design deliverable itself is for internal use. We agree publication scope (press release, joint conference, co-authored whitepaper) separately once the design is complete. On the BD side, "joint announcement as a leading case" is often a mutual win and stays as a realistic option.
Q. What happens if the PoC fails?
If we see "this won't hold up" at the paper-design stage, we deliver the design with the alternate scenarios written in alongside the option to stop the PoC. For failures that happen after the implementation phase (missed KPIs, the assumed audience didn't materialize, operating cost overruns), the evaluation design is built so the information needed for the redesign is captured. Not glossing over where the failure sits is the precondition for a long-term relationship.
Contact
Start with a first call (free, online, 30–60 min)
"Whether a paper-design phase even makes sense for our situation" — that's something the first call can decide. Internal vs. outsourced AI agent doesn't matter. Reach out any time.
Contact us · AI-agent partnership inquiry form
Mentioning "AI agents × behavior change paper design" in the contact form's free-text field helps us reply faster. The AI-agent partnership form has the BD discovery items laid out in advance.