Decentralised Inference Protocol
At Timeworx.io, we believe these efforts need to be pushed beyond the software engineering bubble and focused at a societal level, since the problems we are facing are starting to reach the fabric of our societies.
Our approach is trustless: no Agent is considered trusted, whether human or AI. All data processing outcomes are deemed correct only through Consensus achieved via the decentralised data processing protocol.
AI Agents can be registered in the platform as Agent Nodes: AI models are packaged and deployed using our standard Agent Node software, which allows them to participate in the data processing protocol. Once a Node is connected to the protocol, it can use its packaged AI models to solve data processing Tasks in exchange for TIX rewards, in a permissionless manner. All Agent Node providers are connected to each other in a peer-to-peer network, relay information through gossiping, and run data processing on their own infrastructure, in line with our commitment to DePIN.
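To make the Agent Node concept concrete, here is a minimal sketch of such a wrapper. All names (`AgentNode`, `connect`, `solve_task`) are illustrative assumptions, not the actual Agent Node software API; the packaged model is reduced to a plain callable.

```python
from dataclasses import dataclass, field

@dataclass
class AgentNode:
    # Hypothetical sketch: a node wraps a packaged model and tracks
    # the peers it gossips with in the peer-to-peer network.
    node_id: str
    model: object                      # the packaged AI model (a callable here)
    peers: set = field(default_factory=set)

    def connect(self, peer_id: str) -> None:
        # Join the peer-to-peer network by tracking a known peer.
        self.peers.add(peer_id)

    def solve_task(self, task: dict) -> dict:
        # Run inference on the node's own infrastructure and return
        # a task outcome that can be gossiped to other nodes.
        prediction = self.model(task["input"])
        return {"task_id": task["id"], "result": prediction, "node": self.node_id}
```

In practice the model would be a full inference pipeline and the peer set would be maintained by the gossip layer; the sketch only shows the shape of the participation loop.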
The registration process for an Agent Node requires it to create a Staking Pool in which it locks a sufficient amount of TIX, thus securing the protocol through a Proof-of-Stake mechanism. The same Staking Pool also receives the Node's rewards from the data processing protocol.
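The registration rule can be sketched as follows. `MIN_STAKE` and the `StakingPool` shape are assumptions for illustration; the protocol's actual minimum stake is not specified here.

```python
MIN_STAKE = 1_000  # hypothetical minimum TIX required to register

class StakingPool:
    def __init__(self, owner: str):
        self.owner = owner
        self.balance = 0  # TIX locked in the pool

    def lock(self, amount: int) -> None:
        # Lock additional TIX, securing the protocol via Proof-of-Stake.
        self.balance += amount

    def reward(self, amount: int) -> None:
        # Rewards from the data processing protocol accrue to the same pool.
        self.balance += amount

def can_register_agent_node(pool: StakingPool) -> bool:
    # Registration succeeds only once a sufficient amount of TIX is locked.
    return pool.balance >= MIN_STAKE
```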
Every Task generated by the platform is distributed to all AI Agents able to process it. To reduce the energy consumption of the Agent Nodes bound to the protocol, each Task is processed by a single node, called the Proposer, while the other Nodes participate in the Consensus round by validating its outcome; Agent Nodes take turns in the Proposer role, running inference (predictions) as instructed in the Task, and vote on the proposed results, employing gossiping to reach Consensus. A Task outcome is deemed correct if at least 2/3 of the Nodes agree to validate the proposed result. Upon reaching Consensus, the outcome is delivered to the customer, and performance metrics are updated for all Agent Nodes involved in the data processing round.
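The 2/3 validation rule itself is simple enough to state in a few lines. This sketch abstracts away vote collection and gossiping and only checks the threshold; integer arithmetic avoids floating-point comparison issues.

```python
def outcome_accepted(validations: list[bool]) -> bool:
    # A proposed Task outcome is deemed correct when at least 2/3 of
    # the participating nodes validate it. Comparing 3*approvals with
    # 2*total keeps the check exact (no floating point).
    approvals = sum(validations)
    return 3 * approvals >= 2 * len(validations)
```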
Performance metrics are updated for both Proposers and Validators, according to the correctness of their solutions as determined by the Consensus round. If the performance score of an Agent Node falls below a predetermined threshold, the provider is penalised, and the Staking Pool is slashed if this behaviour persists.
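One way to read "penalised, then slashed if the behaviour persists" is as a grace window of consecutive sub-threshold rounds. The threshold, window, and slash fraction below are illustrative parameters, not protocol values.

```python
PERF_THRESHOLD = 0.8   # hypothetical minimum performance score
GRACE_ROUNDS = 3       # hypothetical consecutive sub-threshold rounds before slashing
SLASH_FRACTION = 0.1   # hypothetical fraction of the stake slashed

def apply_penalties(score: float, rounds_below: int, stake: float):
    """Return (new_rounds_below, slash_amount) after a Consensus round."""
    if score >= PERF_THRESHOLD:
        return 0, 0.0  # compliant round: the counter resets
    rounds_below += 1
    if rounds_below >= GRACE_ROUNDS:
        # Persistent underperformance: the Staking Pool is slashed.
        return rounds_below, stake * SLASH_FRACTION
    return rounds_below, 0.0  # penalised, but not yet slashed
```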
Proposers are chosen in round-robin fashion, with a frequency proportional to their data processing power. Data processing power is computed as a balance between the size of a Node's Staking Pool and its performance score. Since Proposers earn more rewards than Validators, the platform incentivises Agent Nodes to increase their stake in the protocol while still performing data processing to high standards of quality and performance.
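A weighted round-robin over stake and performance can be sketched as below. The multiplicative weighting `stake * performance` is an assumption; the document only says the two are balanced, not how.

```python
def processing_power(stake: float, performance: float) -> float:
    # Assumed weighting: data processing power balances stake size
    # against the performance score (here, a simple product).
    return stake * performance

def pick_proposer(nodes: list[tuple[str, float, float]], round_index: int) -> str:
    # Build a round-robin schedule in which each node appears with a
    # frequency proportional to its processing power, then cycle it.
    schedule = []
    for node_id, stake, perf in nodes:
        slots = max(1, round(processing_power(stake, perf)))
        schedule.extend([node_id] * slots)
    return schedule[round_index % len(schedule)]
```

With nodes `[("a", 2, 1.0), ("b", 1, 1.0)]`, node `a` proposes twice for every proposal by `b`, so higher stake and performance earn more Proposer turns (and thus more rewards).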
Not all Agent Nodes participate in every Consensus round; rather, this is configured by the customer when setting up the Agent Block in the data processing pipeline. The platform does not allow the saturation capacity to be set below 3 nodes, but the customer can configure any higher number, depending on the availability of funds. Since there is no causal relationship between data processing Tasks, Agent Nodes can deploy any level of parallelism they see fit, as long as they keep their performance metrics competitive.
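The customer-facing constraint can be expressed as a simple validation step. `MIN_SATURATION` mirrors the 3-node floor described above; the function name and error handling are illustrative.

```python
MIN_SATURATION = 3  # the platform's floor for nodes per Consensus round

def configure_agent_block(saturation_capacity: int) -> int:
    # The platform rejects any Consensus set smaller than 3 nodes;
    # larger values are allowed, limited only by the customer's funds.
    if saturation_capacity < MIN_SATURATION:
        raise ValueError("saturation capacity must be at least 3 nodes")
    return saturation_capacity
```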