Open-Source Toolkits

humancompatible.org develops several toolkits within the humancompatible organization on GitHub and contributes to two flagship toolkits hosted by the Linux Foundation. Our own toolkits include:

humancompatible.detect is an open-source toolkit for detecting bias in AI models and their training data.
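To illustrate the kind of measurement a bias-detection toolkit reports, here is a minimal sketch of statistical parity difference, a common group-fairness metric: the gap in favorable-outcome rates between unprivileged and privileged groups. This is a hypothetical illustration, not the humancompatible.detect API.

```python
def statistical_parity_difference(labels, groups, favorable=1, privileged=1):
    """P(y = favorable | unprivileged) - P(y = favorable | privileged).

    A value of 0 means the two groups receive the favorable outcome at
    the same rate; negative values indicate the unprivileged group is
    disadvantaged.
    """
    priv = [y for y, g in zip(labels, groups) if g == privileged]
    unpriv = [y for y, g in zip(labels, groups) if g != privileged]
    rate = lambda ys: sum(y == favorable for y in ys) / len(ys)
    return rate(unpriv) - rate(priv)

# Toy data: 4 privileged (g=1) and 4 unprivileged (g=0) individuals.
labels = [1, 1, 1, 0, 1, 0, 0, 0]
groups = [1, 1, 1, 1, 0, 0, 0, 0]
print(statistical_parity_difference(labels, groups))  # -0.5
```

Here the privileged group's favorable rate is 0.75 and the unprivileged group's is 0.25, so the metric flags a disparity of -0.5.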

humancompatible.explain is an open-source toolkit for counterfactual explanations with a variety of desiderata and focus on fairness.
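The idea behind a counterfactual explanation can be sketched in a few lines: find a small change to an input that flips a model's decision. The toy example below does this for one feature of a linear classifier by solving for the decision boundary; it is a hypothetical illustration, not the humancompatible.explain API (a real method would also add a margin and optimize over which features may change).

```python
def linear_score(x, w, b):
    """Score of a linear classifier; decision flips at score = 0."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def counterfactual_single_feature(x, w, b, i):
    """Shift feature i onto the decision boundary (score = 0)."""
    if w[i] == 0:
        return None  # changing feature i alone cannot flip the decision
    x_cf = list(x)
    score = linear_score(x, w, b)
    x_cf[i] = x[i] - score / w[i]  # solves score + w[i] * delta = 0
    return x_cf

w, b = [2.0, -1.0], 0.5
x = [1.0, 3.0]                    # score = 2 - 3 + 0.5 = -0.5 -> rejected
cf = counterfactual_single_feature(x, w, b, 0)
print(cf)                         # [1.25, 3.0]
```

The counterfactual reads as actionable advice: raising feature 0 from 1.0 to 1.25, all else equal, moves the individual to the decision boundary.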

humancompatible.interconnect is an open-source toolkit for modelling, simulation, and theorem proving concerning the ergodicity of multi-agent systems.

humancompatible.repair is an open-source toolkit for post-hoc verification of fairness and for the repair of models that fail it.

humancompatible.train is a library of algorithms for stochastic optimization under stochastic constraints, used to train AI systems with fairness guarantees. It serves as a plug-and-play replacement for PyTorch optimizers.
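The core idea of constrained stochastic training can be sketched without any deep-learning framework: each update uses a noisy gradient estimate, and the constraint is enforced at every step. The toy example below minimizes f(w) = (w - 3)^2 subject to w <= 2 via projected stochastic gradient descent. It is a hypothetical, dependency-free illustration, not the humancompatible.train API.

```python
import random

def grad_f(w):
    """Gradient of f(w) = (w - 3)^2; unconstrained minimum at w = 3."""
    return 2.0 * (w - 3.0)

def project(w, upper=2.0):
    """Project onto the feasible set {w : w <= upper}."""
    return min(w, upper)

random.seed(0)
w, lr = 0.0, 0.1
for _ in range(200):
    noisy_grad = grad_f(w) + random.gauss(0.0, 0.1)  # stochastic gradient
    w = project(w - lr * noisy_grad)                 # SGD step + projection

print(round(w, 1))  # 2.0, the constrained optimum
```

Because the unconstrained minimum (w = 3) is infeasible, the iterates settle at the boundary w = 2; a plain PyTorch optimizer would ignore the constraint, which is exactly the gap a constrained drop-in optimizer fills.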

The Linux Foundation toolkits include:

AI Fairness 360 is an open-source toolkit developed by a wider research community to help detect and mitigate bias in machine learning models throughout the AI application lifecycle.

AI Explainability 360 is an open-source toolkit developed by a wider research community that supports interpretability and explainability of datasets and machine learning models.
