Timeworx.io: Whitepaper
AI that is privacy-enhancing

Federated Learning Protocol

At Timeworx.io, we take data privacy, ownership and sovereignty very seriously. Federated learning is an integral part of the decentralised data processing protocol, with the goal of running AI training directly on Human Agents’ smartphones.

The platform supports federated learning directly as one of the data processing types. Since the data processing is actually carried out on the Human Agents’ smartphones, the data type that can be configured for such Tasks is an AI model. This means that whenever a Human Agent starts solving such a Task, they are actually downloading the AI model itself. By following the data processing instructions, they are training the AI model locally. Finally, the Task outcome is nothing other than the updated AI model.
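
The sketch below illustrates this flow from the Human Agent's side under simplifying assumptions: the model is a plain linear model trained with NumPy, and the function names are invented for illustration rather than taken from the platform. What matters is the shape of the exchange: parameters come in, local data stays on the device, and updated parameters go out as the Task outcome.

```python
import numpy as np

# Illustrative sketch only: a linear model trained with plain gradient descent
# stands in for whatever AI model a Task would actually ship to the device.

def local_training_step(global_weights: np.ndarray,
                        features: np.ndarray,
                        labels: np.ndarray,
                        epochs: int = 5,
                        lr: float = 0.01) -> np.ndarray:
    """Train the downloaded model locally and return the updated weights.

    `features` and `labels` represent the Human Agent's on-device data;
    only the updated weights ever leave the smartphone.
    """
    weights = global_weights.copy()
    for _ in range(epochs):
        predictions = features @ weights                        # forward pass
        gradient = features.T @ (predictions - labels) / len(labels)
        weights -= lr * gradient                                # local update
    return weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    global_model = np.zeros(3)                      # model downloaded with the Task
    local_x = rng.normal(size=(32, 3))              # data that stays on the phone
    local_y = local_x @ np.array([1.0, -2.0, 0.5])
    task_outcome = local_training_step(global_model, local_x, local_y)
    print(task_outcome)                             # submitted as the Task outcome
```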

As with all other Tasks, when configuring the Agent block, a business is able to choose the Consensus algorithm used for aggregating the global AI model. The platform supports a wide range of Consensus algorithms that translate to federated learning model aggregation techniques:

  1. FedAvg: one of the most commonly used methods of aggregating models in FL. During the Consensus phase, the parameters of each AI model trained by a Human Agent are weighted and averaged to produce the global AI model (see the sketch after this list).

  2. FedProx: an enhanced version of FedAvg focused on addressing the issue of local optimisation. Running too many iterations on a Human Agent’s device can lead to overfitting the AI model, so FedProx uses a different approach to regulate the influence of local AI models over the global model.

  3. Scaffold: an aggregation algorithm that improves the handling of data heterogeneity. Some Human Agents might contribute data that differs substantially from that of others, and Scaffold focuses on reducing the variance of these local outcomes on the global AI model.
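
To make item 1 concrete, here is a minimal sketch of FedAvg-style weighted averaging. The function signature and the use of NumPy arrays as model parameters are illustrative assumptions, not the platform's actual Consensus interface.

```python
import numpy as np

def fedavg_aggregate(local_models: list[np.ndarray],
                     sample_counts: list[int]) -> np.ndarray:
    """Weighted average of locally trained models (FedAvg-style Consensus).

    Each model's contribution is proportional to the amount of local data
    the Human Agent trained it on.
    """
    stacked = np.stack(local_models)                       # (agents, params)
    weights = np.array(sample_counts) / sum(sample_counts) # per-agent weight
    return (weights[:, None] * stacked).sum(axis=0)        # global model

# Example: three Human Agents submit updated models trained on
# different amounts of local data.
global_model = fedavg_aggregate(
    [np.array([0.9, -1.8]), np.array([1.1, -2.1]), np.array([1.0, -2.0])],
    sample_counts=[20, 50, 30],
)
```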

Aside from the built-in Consensus algorithms, customers can provide their own custom implementations written in Python. The platform generates code stubs that can be implemented and deployed by the customer through a basic coding interface exposed by the UI.
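
As a purely hypothetical illustration of such a stub, the snippet below assumes the platform passes the locally trained models to a single aggregation function and expects the global model back; the real interface is whatever the generated code stubs define.

```python
import numpy as np

def aggregate(local_models: list[np.ndarray]) -> np.ndarray:
    """Hypothetical custom Consensus: a trimmed mean over model parameters.

    Dropping the highest and lowest value of every parameter before averaging
    limits the influence of anomalous local updates.
    """
    stacked = np.sort(np.stack(local_models), axis=0)       # (agents, params)
    trimmed = stacked[1:-1] if len(local_models) > 2 else stacked
    return trimmed.mean(axis=0)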

We believe that federated learning is not only a privacy-enhancing technology, but also a key to reducing the overall environmental impact of training machine learning models. AI has become increasingly resource-hungry, with reports of training processes for a single model emitting 25 times more carbon than a single air traveller flying from New York to San Francisco. In line with the Decentralised Physical Infrastructure Network (DePIN) movement, we are pushing the decentralisation bar even higher by distributing machine learning across the smartphones participating in our decentralised data processing protocol, in exchange for fair compensation.

Fast Track

Go directly to the Decentralised Inference Protocol if you are already familiar with the basic concepts for Trust in AI.
