Detect, quantify and correct the sources of discrimination in your algorithms. Comply with international law and multiply your opportunities by displaying a label attesting to the ethical treatment of personal data.
60% of users do not trust Artificial Intelligence (IFOP, 2017).
Algorithms, often assumed to be "neutral" by definition, reproduce a sometimes biased reality and create risks of discrimination and non-compliance. The FDU label - Fair Data Use - protects your company from these situations.
Be proactive! With this label you will inspire trust and win new market share.
The label covers the following points of the European General Data Protection Regulation (GDPR):
Ensuring that automated processing does not reproduce biases is a major societal challenge for Artificial Intelligence.
In this context, the FDU label enables companies to ensure that they have a positive impact on society and meet their Corporate Social Responsibility obligations.
No time-consuming audits, no heavy mobilization of resources, no endless questionnaires. At Maathics, an algorithm audits your algorithms!
Your team defines the process to be audited. The tool then searches for breaks in equity, i.e. disparities in outcomes between groups. If a break is tied to a sensitive variable, there is a risk of discrimination. Otherwise, you have mathematical assurance that no discrimination has been implemented!
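As an illustrative sketch only (not Maathics's actual audit tool, whose method is not described here), this kind of equity check can be approximated by comparing positive-outcome rates across the groups of a sensitive variable; a large gap flags a potential break. All names, data and the 0.2 threshold below are assumptions for the example:

```python
# Illustrative sketch: a simple demographic-parity check.
# "decisions" are a model's binary outcomes; "groups" holds the value
# of a sensitive variable (e.g. gender) for each decision.

def selection_rates(decisions, groups):
    """Positive-outcome rate per group."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, gr in zip(decisions, groups) if gr == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return rates

def equity_break(decisions, groups, threshold=0.2):
    """Flag a potential discrimination risk if the gap between the
    best- and worst-treated groups exceeds the threshold."""
    rates = selection_rates(decisions, groups)
    gap = max(rates.values()) - min(rates.values())
    return gap > threshold, gap

# Hypothetical data: group "B" is approved far less often than group "A".
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
risk, gap = equity_break(decisions, groups)
print(risk, round(gap, 2))  # a 0.6 gap well above the 0.2 threshold
```

A real audit would of course use statistically grounded metrics and significance tests rather than a fixed gap threshold; the sketch only shows the shape of the check.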
New features are under development. The next one, currently being implemented, is a correction module: when an audit reveals a source of discrimination, we can remove it without touching the core of the algorithm.
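One common way to correct discrimination without modifying the model itself is post-processing: re-deciding from the model's scores with group-specific thresholds so that every group ends up with roughly the same selection rate. This is a hedged sketch of that general idea, not Maathics's correction module; the functions, data and target rate are all assumptions:

```python
# Illustrative sketch: post-processing correction via per-group thresholds.
# The underlying model is untouched; only its scores are re-thresholded.

def group_thresholds(scores, groups, target_rate):
    """Per group, pick the score threshold that selects roughly
    target_rate of that group (a crude quantile-based choice)."""
    thresholds = {}
    for g in set(groups):
        gs = sorted((s for s, gr in zip(scores, groups) if gr == g),
                    reverse=True)
        k = max(1, round(target_rate * len(gs)))
        thresholds[g] = gs[k - 1]
    return thresholds

def corrected_decisions(scores, groups, target_rate=0.5):
    """Re-decide using per-group thresholds so all groups end up
    with approximately the same selection rate."""
    th = group_thresholds(scores, groups, target_rate)
    return [1 if s >= th[g] else 0 for s, g in zip(scores, groups)]

# Hypothetical scores: group "B" scores systematically lower than "A",
# yet after correction both groups have the same selection rate.
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.5, 0.1, 0.35]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(corrected_decisions(scores, groups))
```

Equalizing selection rates is only one possible fairness criterion; a production module would let the auditor choose the criterion and trade it off against accuracy.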
Once compliance is established, the label is issued for a period of one year. For processing deemed to be at risk, an audit must be carried out every six months.