Quality Assurance
Ensuring the consistent quality of processed data is paramount to our platform. Inaccuracies or inconsistencies in data processing can lead to incorrect decisions and to biased or incorrect model predictions, which can have significant repercussions, especially in critical applications such as healthcare or autonomous driving. For instance, an AI-powered diagnostic tool trained on poorly labelled medical images can lead to misdiagnosis and even malpractice, with ramifications ranging from falsely pronouncing a healthy person sick to incorrectly classifying an existing tumour as normal.
The performance of each Human Agent is continuously monitored and assessed to ensure the quality of data processing. We have designed four Key Performance Indicators (KPIs), each focusing on a specific question:
Speed: How quickly does a Human Agent solve tasks? The speed in solving data processing tasks is measured from the moment a Human Agent starts working on a Task until it is submitted for Consensus.
Proficiency: How accurately does a Human Agent solve tasks? The accuracy of a Human Agent is determined based on the outcome of the Consensus round upon completion of data processing tasks.
Reliability: How often does a Human Agent solve tasks? The reliability of each Human Agent boils down to the discipline of processing Tasks on a daily basis, thus ensuring a constant throughput of the platform.
Versatility: How many different types of tasks does a Human Agent solve? Human Agents that solve a variety of Tasks bring added value to the platform, helping ensure that the overall speed of data processing is driven by fair compensation rather than by subjective attachment to certain tasks.
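As an illustration only, the four KPIs above could be derived from an agent's task history roughly as sketched below. The `TaskRecord` fields and the `kpi_profile` function are hypothetical names, not part of the platform's actual API, and the exact scoring formulas used in production may differ (e.g. normalisation per task type):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TaskRecord:
    task_type: str          # kind of data processing task
    started: float          # Unix timestamp when work on the Task began
    submitted: float        # Unix timestamp when it was sent to Consensus
    consensus_passed: bool  # outcome of the Consensus round
    day: date               # calendar day the Task was solved

def kpi_profile(records: list[TaskRecord]) -> dict:
    """Compute an illustrative KPI profile for one Human Agent."""
    if not records:
        return {"speed": 0.0, "proficiency": 0.0, "reliability": 0, "versatility": 0}
    # Speed: average seconds from starting a Task to submitting it for Consensus
    speed = sum(r.submitted - r.started for r in records) / len(records)
    # Proficiency: share of Tasks validated by the Consensus round
    proficiency = sum(r.consensus_passed for r in records) / len(records)
    # Reliability: number of distinct days with at least one solved Task
    reliability = len({r.day for r in records})
    # Versatility: number of distinct Task types solved
    versatility = len({r.task_type for r in records})
    return {"speed": speed, "proficiency": proficiency,
            "reliability": reliability, "versatility": versatility}
```

Note that Speed is the only KPI where lower is better; a real scoring scheme would likely invert or normalise it before comparing agents.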
The platform monitors these KPIs not only to assess the performance of each individual, but also to identify paths for improvement custom-tailored to each Human Agent. With transparency in mind, we designed a visual representation of the performance of every Human Agent in the form of the Punch Card NFT, an on-chain proof of skills and accomplishments:
From their Punch Cards, Human Agents gain insights into their current standing, as well as the targets they need to reach across every KPI in order to level up. From the example above, we can deduce that the current user solves a variety of tasks with above-average speed. To achieve Level 5, the user needs to pay more attention to instructions and to set daily goals for solving tasks.
Levels, along with the corresponding KPI targets, are computed on a Fibonacci scale (see Figure 9), which implies that each newly unlocked level sets the bar even higher. There is no maximum level defined by the platform; instead, progression depends entirely on human ingenuity, skill and diligence in ruling the leaderboard.
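A Fibonacci scale for level targets can be sketched as follows. This is a minimal illustration of the growth pattern only; the `fibonacci_targets` name, the base value, and how each KPI maps onto the scale are assumptions, not the platform's actual parameters:

```python
def fibonacci_targets(base: float, levels: int) -> list[float]:
    """Illustrative KPI targets on a Fibonacci scale: each level's increase
    over the previous one grows like the Fibonacci sequence, so every newly
    unlocked level sets the bar higher by a larger margin."""
    a, b = 1, 2  # consecutive Fibonacci multipliers
    targets = []
    for _ in range(levels):
        targets.append(base * a)
        a, b = b, a + b
    return targets
```

For example, with a base target of 10 tasks, the first five levels would require 10, 20, 30, 50 and 80 tasks respectively, so the gap between consecutive levels keeps widening.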
Since the Punch Card NFT acts as an on-chain proof of performance for each Human Agent, its value does not reside in trading, but in holding. Without a Punch Card, no transactions can be issued for claiming or staking rewards after solving tasks. As such, all users that are onboarded into Timeworx.io need to first earn their Punch Cards by solving tasks. This serves both as proof of the users' intelligence and humanity, and as a requirement for the platform to be able to create an initial performance profile for each Human Agent, as a baseline for measuring future improvements.
Upon minting a Punch Card, all of the corresponding task rewards that had been locked until then are distributed into three equal parts: one third is awarded to the user, one third is allocated to global staking rewards, and one third goes into a pool of tokens that will be used in our gamification mechanisms. In this way, every Human Agent that mints a Punch Card brings added value to the entire community.
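The three-way split can be sketched as below. The function name is hypothetical, amounts are assumed to be in the token's smallest indivisible unit, and the handling of any rounding remainder is our own assumption (here it stays with the user), not something specified in this section:

```python
def split_locked_rewards(locked_amount: int) -> dict:
    """Illustrative split of rewards locked before a Punch Card is minted
    into three equal parts. `locked_amount` is in the token's smallest unit;
    the indivisible remainder (at most 2 units) is credited to the user,
    which is an assumption for this sketch."""
    third = locked_amount // 3
    remainder = locked_amount - 3 * third
    return {
        "user": third + remainder,    # awarded to the Human Agent
        "staking_pool": third,        # global staking rewards
        "gamification_pool": third,   # tokens for gamification mechanisms
    }
```

Whatever the remainder policy, the three parts must always sum back to the originally locked amount so that no tokens are created or lost in the split.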