FAI3

We certify AI models using Web3 technology, standardizing metrics for fairness, accuracy, toxicity, and data quality to ensure transparency, trust, and continuous improvement.


Categories

  • AI Track


Description

FAI3 (Fair AI in Web3) is committed to building AI models that are both trustworthy and equitable. We believe AI should work for everyone, and we are working to make that a reality. Join us on our mission to build a better future powered by AI.


Introduction

During this hackathon, we focused on addressing a critical issue in AI: fairness. AI models, when improperly designed, can perpetuate biases that negatively impact society, especially in areas like credit scoring, hiring, and healthcare. To help solve this issue, we developed a canister that allows users to submit their AI model data for fairness evaluation and built a public leaderboard that displays these fairness metrics for each evaluated model.


Solution

Our solution consists of two key components:


    1.    Canister for Fairness Metrics Evaluation: We developed a decentralized smart contract (canister) that lets users submit their AI data and predictions for fairness evaluation. Once submitted, the canister scores the predictions against several fairness metrics so that users can assess how equitably their models behave (a sketch of what such a submission might contain follows this list).

    2.    Leaderboard for Public Metric Display: We built a leaderboard that connects to the canister, listing the metrics for each model evaluated. This creates a transparent environment where developers, companies, and users can see how different models perform regarding fairness, fostering accountability in AI.
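To make the data flow concrete, here is a minimal sketch of what a submission record might contain, assuming binary predictions and labels and a single binary protected attribute. The struct and field names are illustrative assumptions, not the canister's actual schema.

```rust
// Hypothetical shape of a fairness-evaluation submission; the field names
// are illustrative assumptions, not the canister's actual schema.
struct Submission {
    model_name: String,     // label shown on the public leaderboard
    predictions: Vec<bool>, // model outputs, true = favorable outcome
    labels: Vec<bool>,      // ground-truth outcomes for the same records
    privileged: Vec<bool>,  // per-record flag for the protected attribute
}
```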


Key Metrics

We implemented the following fairness metrics to evaluate AI models:

    •    Average Odds Difference: Averages the gaps in true positive and false positive rates between the unprivileged and privileged groups, capturing discrepancies in both the benefits a model grants and the errors it makes.

    •    Disparate Impact: The ratio of favorable-outcome rates between the unprivileged and privileged groups; a value close to 1 indicates that no group is disproportionately affected.

    •    Equal Opportunity Difference: The difference in true positive rates between groups, checking that the model identifies positive cases equally well for each.

    •    Statistical Parity Difference: The difference in the probability of a favorable outcome between groups, checking that no group is unfairly advantaged.

These metrics provide a comprehensive view of how fair and unbiased an AI model is.
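As a concrete illustration of how these four metrics can be computed, here is a minimal Rust sketch. It assumes binary predictions and labels and a single binary protected attribute, ignores edge cases such as empty groups, and uses function names of our own choosing rather than the canister's actual API.

```rust
// Per-group rates derived from the confusion matrix.
struct GroupRates {
    selection_rate: f64, // P(prediction = favorable)
    tpr: f64,            // true positive rate
    fpr: f64,            // false positive rate
}

// Tally the confusion matrix for one group and convert counts to rates.
fn group_rates(preds: &[bool], labels: &[bool]) -> GroupRates {
    let (mut tp, mut fp, mut fneg, mut tn) = (0.0, 0.0, 0.0, 0.0);
    for (&p, &y) in preds.iter().zip(labels) {
        match (p, y) {
            (true, true) => tp += 1.0,
            (true, false) => fp += 1.0,
            (false, true) => fneg += 1.0,
            (false, false) => tn += 1.0,
        }
    }
    GroupRates {
        selection_rate: (tp + fp) / (tp + fp + fneg + tn),
        tpr: tp / (tp + fneg),
        fpr: fp / (fp + tn),
    }
}

/// Returns (statistical parity difference, disparate impact,
/// equal opportunity difference, average odds difference), each comparing
/// the unprivileged group against the privileged one.
fn fairness_metrics(
    unpriv: (&[bool], &[bool]), // (predictions, labels), unprivileged group
    privd: (&[bool], &[bool]),  // (predictions, labels), privileged group
) -> (f64, f64, f64, f64) {
    let u = group_rates(unpriv.0, unpriv.1);
    let p = group_rates(privd.0, privd.1);
    let spd = u.selection_rate - p.selection_rate;       // ideal: 0
    let di = u.selection_rate / p.selection_rate;        // ideal: 1
    let eod = u.tpr - p.tpr;                             // ideal: 0
    let aod = 0.5 * ((u.fpr - p.fpr) + (u.tpr - p.tpr)); // ideal: 0
    (spd, di, eod, aod)
}
```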


Impact and Benefits

Our solution contributes to the transparency and fairness of AI systems in several ways:

    •    Transparency: The leaderboard publicly displays the fairness metrics for each evaluated model, allowing developers and stakeholders to compare the fairness performance of different models.

    •    Accountability: By making fairness evaluations public, companies and developers are incentivized to optimize their models to reduce bias and enhance fairness.

    •    Trust: By allowing users to evaluate and view fairness metrics in an open manner, we foster trust between AI developers and their end-users, ensuring that models used in critical applications are held to higher fairness standards.


Technical Implementation

We used the following technologies to build our solution:

    •    ICP for Blockchain: The decentralized nature of the Internet Computer (ICP) blockchain ensures that model data and evaluations are securely processed. By utilizing ICP, we guarantee that all fairness evaluations are tamper-proof and transparent.

    •    Rust for Backend Code: The canister’s code was written in Rust, ensuring high performance and security for the evaluation process.

    •    TypeScript and React for Frontend: The leaderboard and frontend interface were built using TypeScript and React, ensuring an intuitive and responsive user experience.
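Putting the pieces together, a minimal sketch of the canister's public interface might look like the following, assuming the ic-cdk and candid crates. The endpoint names, the in-canister state layout, and the Submission schema (repeated from the earlier sketch) are our assumptions, not the actual FAI3 code.

```rust
use candid::{CandidType, Deserialize};
use std::cell::RefCell;

// Submission shape repeated from the earlier sketch, with Candid derives so
// it can cross the canister boundary.
#[derive(CandidType, Deserialize, Clone)]
struct Submission {
    model_name: String,
    predictions: Vec<bool>,
    labels: Vec<bool>,
    privileged: Vec<bool>,
}

#[derive(CandidType, Deserialize, Clone)]
struct Metrics {
    statistical_parity_difference: f64,
    disparate_impact: f64,
    equal_opportunity_difference: f64,
    average_odds_difference: f64,
}

thread_local! {
    // Simple in-canister leaderboard state: one entry per evaluated model.
    static LEADERBOARD: RefCell<Vec<(String, Metrics)>> = RefCell::new(Vec::new());
}

fn compute_metrics(s: &Submission) -> Metrics {
    // Split the records by `s.privileged` and apply the formulas from the
    // Key Metrics sketch above; elided here to keep the example short.
    todo!()
}

/// Accepts a submission, scores it, and records the result publicly.
#[ic_cdk::update]
fn submit_evaluation(s: Submission) -> Metrics {
    let metrics = compute_metrics(&s);
    LEADERBOARD.with(|lb| lb.borrow_mut().push((s.model_name, metrics.clone())));
    metrics
}

/// Read-only endpoint the leaderboard frontend can poll.
#[ic_cdk::query]
fn leaderboard() -> Vec<(String, Metrics)> {
    LEADERBOARD.with(|lb| lb.borrow().clone())
}
```

On the frontend side, the React app would call the query endpoint through an agent library such as @dfinity/agent to render the public table.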


Conclusion

In this hackathon project, we developed a solution to evaluate AI models’ fairness and promote transparency through a public leaderboard. Our canister evaluates models based on key fairness metrics, and the leaderboard ensures that the results are visible to all, fostering a more accountable and trustworthy AI ecosystem. This approach contributes to building fairer AI systems that can be trusted in high-stakes environments.

