επαγωγή · Independent Research · Stuttgart

Honest reasoning,
by construction.

The tools could have warned you. They didn't.

Structural correctness for machine learning. Python & R.

648 papers. Leakage.

Thirty scientific fields. Published and cited before anyone noticed. Not sloppy code. Structural errors the tools made invisible.
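A minimal sketch of what such an invisible structural error looks like (a constructed toy example, not drawn from any of the surveyed papers): normalization statistics computed on the full dataset before the split exists. Every tool runs this without complaint.

```python
import statistics

# Toy series where the extreme value sits in the held-out "future" slice.
data = [1.0, 2.0, 3.0, 100.0]

# Structural error: the normalization statistic is computed on the FULL
# dataset, before any split exists. Nothing warns; the code runs fine.
mean_all = statistics.mean(data)

# The split happens afterwards -- the training features already "know"
# about the test slice through mean_all.
train, test = data[:3], data[3:]

# Correct order: split first, then fit the statistic on train only.
mean_train = statistics.mean(train)

print(mean_all, mean_train)  # 26.5 vs 2.0 -- an order of magnitude apart
```

The bug is not in the data or the model. It is in the order of operations, which is exactly what ordinary testing never inspects.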

Exists

Build models

Training infrastructure, feature engineering, hyperparameter optimization.

Exists

Test models

Benchmarks, evals, hold-out metrics, backtesting, fairness, monitoring.

Was Missing

Structural correctness

Is the workflow itself valid? Not the data. Not the model. The epistemic structure.

Split. Fit. Assess. The rest follows.

Eight typed primitives. Use them in the wrong order
and the API rejects you before you get a result.

split()
Partition data. Temporal, grouped, or random.
cv()
Cross-validation with per-fold isolation.
prepare()
Fit on train, apply to all. Per fold.
fit()
Train a model. Any algorithm.
predict()
Generate outputs from a fitted model.
evaluate()
Measure on validation. Repeatable.
explain()
Feature importance, partial dependence.
assess()
Measure on test. Once. Terminal.

Four hard constraints

Assess once per holdout test set — a second call raises
Prepare after split — never on the full dataset
Type-safe transitions — fit on test data has no derivation
No label access before split — the guard rejects it
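The fourth constraint, no label access before split, can be illustrated with a small guard object. This is a hypothetical sketch of the idea, not the library's implementation: labels stay sealed until a split exists, and only the split primitive may unseal them.

```python
class GuardedLabels:
    """Labels sealed until split() exists; any earlier read is rejected."""

    def __init__(self, y):
        self._y = list(y)
        self._sealed = True

    def _unseal(self):
        # Called by split() only -- the single legitimate transition.
        self._sealed = False

    def __getitem__(self, i):
        if self._sealed:
            raise PermissionError("label access before split() is forbidden")
        return self._y[i]

    def __len__(self):
        return len(self._y)

def split(X, y, test_size):
    """The only primitive allowed to unseal labels."""
    y._unseal()
    k = len(X) - test_size
    return (X[:k], [y[i] for i in range(k)]), \
           (X[k:], [y[i] for i in range(k, len(y))])

y = GuardedLabels([0, 1, 0, 1])
# Reading y[0] here would raise PermissionError: no peeking before the split.
train, test = split([1, 2, 3, 4], y, test_size=1)
# After split(), label access is legitimate: train == ([1, 2, 3], [0, 1, 0])
```

Target selection, threshold tuning, and "quick look" exploratory label use are all forms of pre-split access; a guard like this turns them from habits into errors.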

When to stop using ml: when your framework of choice enforces all four constraints natively.

From research, not from marketing.

Built on independent research into data leakage, causal inference, and ML methodology. Preprint, falsifiable, and open to critique.

Releases only.

Major versions and research updates. No noise.

No spam. Unsubscribe any time.