humancompatible.org develops several toolkits within the humancompatible organization on GitHub and contributes to two flagship toolkits of the Linux Foundation. Our own toolkits include:
humancompatible.detect is an open-source toolkit for detecting bias in AI models and their training data.
humancompatible.explain is an open-source toolkit for counterfactual explanations satisfying a variety of desiderata, with a focus on fairness.
humancompatible.interconnect is an open-source toolkit for modelling, simulating, and proving theorems about the ergodicity of multi-agent systems.
humancompatible.repair is an open-source toolkit for post-hoc verification of fairness and for repairing models that violate it.
humancompatible.train is a library of stochastically constrained stochastic optimization algorithms for training AI systems with fairness guarantees. It serves as a plug-and-play replacement for PyTorch optimizers, as sketched below.
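To illustrate the plug-and-play claim, here is a minimal sketch of how a drop-in constrained optimizer would slot into an ordinary PyTorch training loop. The class name FairConstrainedSGD and its constraint argument are placeholders for illustration only, not the actual humancompatible.train API; the runnable code below uses the standard torch.optim.SGD to keep the example self-contained.

import torch
import torch.nn as nn

model = nn.Linear(10, 1)
loss_fn = nn.BCEWithLogitsLoss()

# Standard training uses a stock PyTorch optimizer:
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# A drop-in constrained optimizer would keep the same interface,
# optionally taking the fairness constraint as an extra argument
# (hypothetical name and signature):
# optimizer = FairConstrainedSGD(model.parameters(), lr=0.1, constraint=...)

x = torch.randn(32, 10)                       # dummy features
y = torch.randint(0, 2, (32, 1)).float()      # dummy binary labels

# The surrounding loop stays untouched; that is what "plug-and-play" means.
for _ in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()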
Linux Foundation toolkits include:
AI Fairness 360 is an open-source toolkit developed by a wider research community to help detect and mitigate bias in machine learning models throughout the AI application lifecycle (see the example after this list).
AI Explainability 360 is an open-source toolkit developed by a wider research community that supports interpretability and explainability of datasets and machine learning models.
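The following sketch shows the detect-then-mitigate workflow in AI Fairness 360: measuring disparate impact on a small toy dataset, then applying the Reweighing pre-processing algorithm. The toy data and group definitions are invented for illustration; the imports and calls are AIF360's documented API.

import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute, 'label' the binary outcome.
df = pd.DataFrame({
    "sex":   [0, 0, 0, 1, 1, 1, 1, 1],
    "score": [0.2, 0.5, 0.9, 0.4, 0.6, 0.7, 0.8, 0.9],
    "label": [0, 0, 1, 0, 1, 1, 1, 1],
})
dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

groups = dict(unprivileged_groups=[{"sex": 0}],
              privileged_groups=[{"sex": 1}])

# Detect: disparate impact is the ratio of favorable-outcome rates
# between the groups (1.0 means parity).
print("before:", BinaryLabelDatasetMetric(dataset, **groups).disparate_impact())

# Mitigate: reweigh instances so the weighted rates match across groups.
reweighed = Reweighing(**groups).fit_transform(dataset)
print("after: ", BinaryLabelDatasetMetric(reweighed, **groups).disparate_impact())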