CNEL

Time-series ML experiment library

Reusable infrastructure for running, tracking, and comparing time-series machine learning experiments with consistent reporting.

[Figure: plot representing adaptive filtering / time-series modeling.]

Overview

Reusable experiment scaffolding for time-series ML: standardized configs, runs, metrics, and plots so model comparisons are fair and repeatable.

Across projects, the same problem kept recurring: experiments become hard to reproduce once many datasets, models, and hyperparameters are involved. This library is a practical answer: a consistent way to launch experiments, capture configuration, compute metrics, and export results.

What I built

  • A consistent ‘run’ layout (configs, outputs, plots) so results are auditable.
  • Utilities for sweeping model variants and hyperparameters in a controlled way.
  • Standardized metrics and plotting helpers for time-series tasks.
  • Performance-focused utilities (including GPU acceleration where appropriate) to keep iteration cycles short.
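To make the "run" layout concrete, here is a minimal sketch of how a run directory might capture configuration and metrics side by side. The function names (`start_run`, `finish_run`) and the file layout are illustrative assumptions, not this library's actual API:

```python
import json
import time
from pathlib import Path

def start_run(root: str, config: dict) -> Path:
    """Create a timestamped run directory and snapshot the config.
    (Hypothetical helper; names and layout are assumptions.)"""
    run_dir = Path(root) / time.strftime("%Y%m%d-%H%M%S")
    run_dir.mkdir(parents=True, exist_ok=True)
    # Freeze the exact configuration so the run is auditable later.
    (run_dir / "config.json").write_text(json.dumps(config, indent=2))
    (run_dir / "plots").mkdir(exist_ok=True)
    return run_dir

def finish_run(run_dir: Path, metrics: dict) -> None:
    """Write metrics next to the config so runs are directly comparable."""
    (run_dir / "metrics.json").write_text(json.dumps(metrics, indent=2))
```

Keeping config, metrics, and plots under one timestamped directory is what makes later comparisons fair: every result carries the settings that produced it.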

Deliverables

  • One-command experiment runs that produce comparable outputs.
  • Reusable plotting and reporting utilities for papers and internal reviews.
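A controlled hyperparameter sweep, as described above, can be sketched as expanding a grid into one config per combination, each of which would then become its own run. The `expand_grid` helper is a hypothetical illustration, not the library's interface:

```python
import itertools

def expand_grid(grid: dict) -> list[dict]:
    """Expand a hyperparameter grid into one config dict per combination.
    (Illustrative sketch; not this library's actual API.)"""
    keys = list(grid)
    return [dict(zip(keys, values)) for values in itertools.product(*grid.values())]

# A learning-rate x window-size sweep yields one config per combination.
configs = expand_grid({"lr": [1e-3, 1e-4], "window": [64, 128]})
```

Enumerating the full grid up front keeps the sweep controlled: every variant is recorded explicitly, rather than emerging from ad hoc edits between runs.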

Snapshot

Track
Research tools · ML infra · reproducibility
Status
Ongoing; shared across multiple research threads.
Focus
Time-series modeling · Experiment tracking · GPU acceleration

Stack

  • Python
  • PyTorch
  • CuPy/RAPIDS (when GPU acceleration is appropriate)
  • Matplotlib

Glossary

Time-series
Data collected over time (signals, sensor streams, physiological measurements).
Hyperparameters
User-chosen settings that affect how a model trains (e.g., learning rate, window size).