Timeworx.io: Whitepaper
AI that is privacy-enhancing

Data Privacy in AI

“Digital evolution must no longer be offered to a customer in trade-off between privacy and security. Privacy is not for sale, it's a valuable asset to protect.” Stephane Nappo, Global Chief Information Security Officer at Groupe SEB

Data, especially when sourced from users or sensitive sectors, needs to be treated with the utmost confidentiality, so ensuring data privacy during the labelling process is imperative. To illustrate the gravity, consider a data labelling project for a healthcare firm in which patient medical records, scans, and histories are being annotated. A breach of this data not only compromises the personal information of the patients but could also expose the healthcare provider to severe legal and financial repercussions, not to mention a tarnished reputation.

Collecting, handling and processing data comes with its fair share of laws and regulations, the most important being the General Data Protection Regulation (GDPR) in the EU and the California Consumer Privacy Act of 2018 (CCPA) in the US. Through these acts, personal data is safeguarded by legislation, and business practices are required to reflect data sovereignty and ethics not just inside the national territory in which they operate, but even outside of its borders.

With both data privacy and AI in the limelight, the United Nations' specialised agency for ICT, the International Telecommunication Union (ITU), has established the AI for Good programme: an ongoing series of webinars in which the AI community can connect, identify issues, and discuss solutions in AI towards establishing global sustainable development goals.

A key initiative for AI for Good has been the constitution of the Trustworthy AI programme, created with the sole purpose of standardising Privacy-Enhancing Technologies (PETs). These technologies are focused on empowering people and ensuring the protection of Personally Identifiable Information (PII) by minimising the use of personal data and maximising security measures. The overall scope is to create technologies that can limit access to personal information while providing the same excellence in service delivery, with examples ranging from homomorphic encryption and zero-knowledge proofs to federated learning.
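
As a small illustration of the data-minimisation principle behind PETs, the sketch below pseudonymises a direct identifier with a salted one-way hash and drops every field a hypothetical labelling task does not need. The field names, salt handling and record structure are assumptions made for this example only, not part of any PET standard or of the Timeworx.io platform.

```python
# Illustrative data-minimisation sketch: before records reach a labelling task,
# direct identifiers are pseudonymised and unneeded PII fields are dropped.
# Field names and the salted-hash scheme are assumptions for this example only.
import hashlib

SALT = "replace-with-a-secret-salt"              # kept by the data holder, never shared
FIELDS_NEEDED_FOR_LABELLING = {"scan_notes", "age_band"}

def pseudonymise(value: str) -> str:
    """One-way, salted hash so labellers can link records without seeing raw PII."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:16]

def minimise(record: dict) -> dict:
    """Keep only what the labelling task needs, plus a pseudonymous reference."""
    out = {"patient_ref": pseudonymise(record["patient_id"])}
    out.update({k: v for k, v in record.items() if k in FIELDS_NEEDED_FOR_LABELLING})
    return out

raw = {"patient_id": "P-10492", "name": "Jane Doe",
       "scan_notes": "opacity in left lung", "age_band": "40-49"}
print(minimise(raw))   # no name and no raw identifier leaves the data holder
```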

In an effort to address the challenges raised by emerging data privacy acts, as well as the recent increase in data silos, federated learning (FL) has been introduced as a novel approach to decentralised machine learning model training. Data is no longer centrally stored; rather, it remains distributed across multiple data nodes. The AI training is carried out on each of these nodes and then aggregated into a global machine learning model based on every node's contribution. Essentially, your data never has to leave your side; it is the AI training process that gets distributed, in an effort to promote data privacy and minimisation. Let's take a deeper look!
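
As a preview of the idea, the sketch below shows a minimal federated averaging loop for a simple linear model: each node trains on its own private samples, and only the resulting weights are sent back and combined in proportion to each node's data size. The model, the synthetic data and the hyper-parameters are illustrative assumptions, not the Timeworx.io protocol itself.

```python
# Minimal federated averaging (FedAvg) sketch for a linear regression model.
# Node data, model shape and hyper-parameters are illustrative assumptions only.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train locally on one node's private data; only the weights leave the node."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_w, nodes):
    """One FL round: every node trains on its own data, and the coordinator
    aggregates the returned weights in proportion to each node's sample count."""
    updates, sizes = [], []
    for X, y in nodes:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    # Three data nodes, each keeping its samples private.
    nodes = []
    for n in (50, 80, 120):
        X = rng.normal(size=(n, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=n)
        nodes.append((X, y))
    w = np.zeros(2)
    for _ in range(20):                      # federated training rounds
        w = federated_round(w, nodes)
    print("global model weights:", w)        # approaches true_w without pooling data
```

Only model parameters cross the network in this sketch; the raw samples stay on the node that produced them, which is the property the following sections build on.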
