Stars
The agent harness performance optimization system. Skills, instincts, memory, security, and research-first development for Claude Code, Codex, Opencode, Cursor and beyond.
The Fraud AI Assistant Chatbot helps fraud analysts make faster, fairer, and more transparent decisions by combining interpretable machine learning models with natural-language summaries. Built usi…
Julia implementation of Explainable Boosting Machine
Sparse & Higher-Order Explainable Boosting Machines
Deterministic multivariate sample reduction and creation
Python tool for converting files and office documents to Markdown.
Magentic-Marketplace: Simulate Agentic Markets and See How They Evolve
A framework for building, orchestrating and deploying AI agents and multi-agent workflows with support for Python and .NET.
This repository contains the relevant materials (e.g., paper, code) for Actuarial NAM—an interpretable deep learning model for actuarial analysis.
Visualizing and interacting with outputs from EBM models
This repository, `project_learn_TalkToEBM`, explores and implements Explainable Boosting Machines (EBMs) with a focus on creating interactive explanations and allowing users to "talk" to the model.…
Python implementation of an Explainable Boosting Machine
This repository belongs to our paper "Challenging the Performance-Interpretability Trade-off: An Evaluation of Interpretable Machine Learning Models". In the paper we benchmark and assess the assume…
APLR builds predictive, interpretable regression and classification models using Automatic Piecewise Linear Regression. It often rivals tree-based methods in predictive accuracy while offering smoo…
TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. Tensor…
A guidance language for controlling large language models.
Interpreting Visual Clusters in Dimensionality Reduction With Explainable Boosting Machine
A conda-smithy repository for interpret.
A Natural Language Interface to Explainable Boosting Machines
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and l…
Code for the CCS'22 paper "Federated Boosted Decision Trees with Differential Privacy"
XLabel: An Explainable Data Labeling Assistant
Automating machine learning training and saving an SQL version of the model