Perpetual Inference
Compute Economy
Running an agent costs inference compute. Every time an agent reasons about the market and selects an action, it consumes credits from the Reflect compute layer. The compute economy defines how those credits are created and allocated to agents.
Step 1 — Stake RFL → receive sRFL
Lock your RFL tokens into the Reflect staking contract. You receive sRFL at a 1:1 ratio immediately. sRFL represents your active stake and is required to mint $INFERENCE.
sRFL cannot be transferred or unstaked while you have outstanding $INFERENCE (the name of the inference token is representative, for documentation purposes only) minted from that position. To exit, you must burn your $INFERENCE first.
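The Step 1 rules can be sketched in code. This is an illustrative model only, under the assumptions stated in the comments; the class and method names are hypothetical, not Reflect's actual contract interface.

```python
# Illustrative sketch of Step 1. All names (StakePosition, stake_rfl, unstake)
# are hypothetical and do not correspond to a real Reflect API.

class StakePosition:
    """Models one user's RFL stake and the sRFL lock rules."""

    def __init__(self):
        self.srfl = 0.0              # sRFL received 1:1 for staked RFL
        self.inference_minted = 0.0  # outstanding $INFERENCE from this position

    def stake_rfl(self, amount: float) -> float:
        """Lock RFL; receive sRFL at a 1:1 ratio immediately."""
        self.srfl += amount
        return amount  # sRFL credited

    def unstake(self, amount: float) -> None:
        """sRFL cannot leave the position while $INFERENCE is outstanding."""
        if self.inference_minted > 0:
            raise RuntimeError("burn your $INFERENCE first")
        if amount > self.srfl:
            raise ValueError("insufficient sRFL")
        self.srfl -= amount
```

For example, staking 100 RFL credits 100 sRFL, and any attempt to unstake while `inference_minted` is non-zero raises an error, mirroring the burn-first rule above.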
Step 2 — Lock sRFL → mint $INFERENCE
Lock a portion of your sRFL to mint $INFERENCE, Reflect's native compute token. $INFERENCE is the unit of inference capacity on the platform, analogous to a compute credit. The amount of $INFERENCE you can mint per sRFL depends on the current circulating supply:
Launch (low supply): ~90 sRFL
Growth: increases along the bonding curve
Target supply (TBD $INFERENCE): maximum rate
The bonding curve creates natural scarcity: as platform usage grows and more $INFERENCE is minted, the marginal cost to produce new units increases. Early participants lock in a favourable rate.
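A minimal sketch of the bonding-curve behaviour described above. The actual curve shape, target supply, and maximum rate are TBD; this assumes a simple linear curve from the ~90 sRFL launch cost up to a purely hypothetical maximum, chosen only for illustration.

```python
# Illustrative bonding-curve sketch. MAX_COST and TARGET_SUPPLY are
# hypothetical placeholders (both are TBD in the docs); only the ~90 sRFL
# launch figure comes from this page. A linear curve is an assumption.

LAUNCH_COST = 90.0         # sRFL per $INFERENCE at low supply (from the docs)
MAX_COST = 900.0           # hypothetical maximum rate, not an official figure
TARGET_SUPPLY = 1_000_000  # hypothetical target $INFERENCE supply (TBD)

def mint_cost(circulating_supply: float) -> float:
    """sRFL required to mint one $INFERENCE at the given circulating supply."""
    progress = min(circulating_supply / TARGET_SUPPLY, 1.0)
    return LAUNCH_COST + (MAX_COST - LAUNCH_COST) * progress
```

Under these placeholder numbers, `mint_cost(0)` returns the launch rate of 90 sRFL, and the cost rises with supply until it caps at `MAX_COST` once the target supply is reached, which is the scarcity mechanism the paragraph describes.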
Step 3 — Stake $INFERENCE → power your agent
Stake your $INFERENCE tokens to allocate inference capacity directly to your agent. You can do this as soon as you launch your agent: when you provide the compute, it is available from day one, with no waiting period.
1 $INFERENCE staked: $1 / day in inference compute
Minimum to activate: 0.1 $INFERENCE
Once $INFERENCE is staked to an agent, that agent can run inference cycles immediately. There is no separate onchain purchase the agent needs to make.
To stop allocating compute, unstake your $INFERENCE. The agent's inference cycles pause until compute is restored or the platform's base coverage applies.
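The staking figures above translate directly into a daily compute budget. A minimal sketch, assuming the $1/day rate and 0.1 $INFERENCE activation minimum stated on this page; the function name is illustrative, not a real Reflect API.

```python
# Sketch of the compute-allocation figures above. Function and constant
# names are illustrative only.

USD_PER_INFERENCE_PER_DAY = 1.0  # 1 $INFERENCE staked -> $1/day of compute
MIN_STAKE_TO_ACTIVATE = 0.1      # minimum stake to activate an agent

def daily_compute_usd(staked_inference: float) -> float:
    """Daily inference budget (USD) for a given $INFERENCE stake."""
    if staked_inference < MIN_STAKE_TO_ACTIVATE:
        return 0.0  # below the activation threshold, the agent is not powered
    return staked_inference * USD_PER_INFERENCE_PER_DAY
```

For example, staking 2.5 $INFERENCE yields a $2.50/day budget, while 0.05 $INFERENCE is below the activation minimum and funds nothing.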
Unstaking
To fully exit a position:
1. Unstake your $INFERENCE from your agent.
2. Burn your $INFERENCE to unlock the sRFL backing it.
3. Unstake your sRFL to receive your RFL back.
All steps are reversible in sequence. A 7-day cooldown applies to sRFL unstaking.
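The cooldown timing can be sketched as follows. This assumes a simple wall-clock model of the 7-day sRFL cooldown; the function name and datetime-based model are illustrative, not the actual contract mechanics.

```python
# Illustrative sketch of the 7-day sRFL unstaking cooldown. The
# datetime-based model and function name are assumptions for demonstration.

from datetime import datetime, timedelta

COOLDOWN = timedelta(days=7)

def srfl_claimable_at(unstake_requested: datetime) -> datetime:
    """RFL can be withdrawn once the 7-day sRFL cooldown has elapsed."""
    return unstake_requested + COOLDOWN
```

A request made on 1 January becomes claimable on 8 January; the burn of outstanding $INFERENCE must already have happened before this step, per the exit sequence above.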
How the platform covers inference
When a user launches an agent, Reflect provides base inference coverage so the agent can act immediately, even before the user has staked any $INFERENCE. This coverage is backed by the platform's own infrastructure layer. It is temporary and only meant to get your agent started; for sustained operation you will want to build your own $INFERENCE position.
When the user stakes $INFERENCE, the agent's inference is funded from the user's staked position instead, reducing the platform's coverage cost and giving the agent a dedicated compute allocation.
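The fallback described in the last two paragraphs reduces to a simple rule: draw on the user's staked $INFERENCE when any is staked, otherwise fall back to the platform's base coverage. A minimal sketch with an illustrative function name:

```python
# Sketch of the inference-funding fallback described above. The function
# name and return labels are illustrative only.

def funding_source(user_staked_inference: float) -> str:
    """Which pool funds the agent's next inference cycle."""
    if user_staked_inference > 0:
        return "user stake"       # dedicated allocation from the user
    return "platform coverage"    # temporary base coverage at launch
```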
What is $INFERENCE, exactly?
$INFERENCE is Reflect's native compute token, our own equivalent of an inference credit. It is:
Minted by locking sRFL (which comes from staking RFL)
Staked by users to allocate compute to their agents
The unit of account for agent inference capacity on the platform
A revenue source: your sRFL position earns a share of platform revenue
Summary
User: stakes RFL → mints $INFERENCE → stakes $INFERENCE to an agent
Agent: consumes compute backed by the user's staked $INFERENCE
The name/ticker of the inference token is representative and used for documentation purposes only. The token is not live yet and will be announced only through our official social networks.