Overview
Reusable experiment scaffolding for time-series ML: standardized configs, runs, metrics, and plots so model comparisons are fair and repeatable.
The same problem recurred across projects: experiments become hard to reproduce once many datasets, models, and hyperparameters are involved. This library is a practical solution: a consistent way to launch experiments, capture configuration, compute metrics, and export results.
What I built
- A consistent "run" layout (configs, outputs, plots) so results are auditable.
- Utilities for sweeping model variants and hyperparameters in a controlled way.
- Standardized metrics and plotting helpers for time-series tasks.
- Performance-focused utilities (including GPU acceleration where appropriate) to keep iteration cycles short.
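The run layout and sweep utilities above can be sketched roughly as follows. This is a minimal illustration, not the library's actual API: the names `start_run` and `sweep`, and the directory layout, are assumptions for the example.

```python
import json
import tempfile
from itertools import product
from pathlib import Path

def start_run(root: Path, name: str, config: dict) -> Path:
    """Create a run directory with the config captured on disk,
    so every result can be traced back to its exact settings.
    (Hypothetical helper, for illustration only.)"""
    run_dir = root / name
    (run_dir / "plots").mkdir(parents=True, exist_ok=True)
    (run_dir / "outputs").mkdir(exist_ok=True)
    # sort_keys makes configs diff-able across runs
    (run_dir / "config.json").write_text(json.dumps(config, indent=2, sort_keys=True))
    return run_dir

def sweep(grid: dict):
    """Expand a hyperparameter grid into one config dict per combination,
    so every variant is launched the same controlled way."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

root = Path(tempfile.mkdtemp())
run = start_run(root, "lstm_h64", {"model": "lstm", "hidden": 64, "lr": 1e-3})
print(sorted(p.name for p in run.iterdir()))  # → ['config.json', 'outputs', 'plots']

configs = list(sweep({"model": ["lstm", "tcn"], "hidden": [32, 64]}))
print(len(configs))  # → 4
```

Keeping the config file inside the run directory (rather than in a separate registry) is what makes a run self-describing and auditable after the fact.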
Deliverables
- One-command experiment runs that produce comparable outputs.
- Reusable plotting and reporting utilities for papers and internal reviews.
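Comparable outputs depend on every model being scored with the same metric implementation. As one example of a standardized time-series metric, here is a sketch of symmetric MAPE; the function name and zero-handling convention are assumptions for illustration, not the library's actual code.

```python
def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error, in percent.
    Pairs where both values are zero are skipped to avoid 0/0.
    (Illustrative implementation; conventions vary across libraries.)"""
    terms = [
        2.0 * abs(p - t) / (abs(t) + abs(p))
        for t, p in zip(y_true, y_pred)
        if (abs(t) + abs(p)) > 0
    ]
    return 100.0 * sum(terms) / len(terms)

print(round(smape([100, 200, 300], [110, 190, 330]), 2))  # → 8.06
```

Pinning one implementation like this, shared by every experiment run, is what keeps cross-model comparisons fair.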