
Multi-Scale Finetuning for Encoder-based Time Series Foundation Models (NeurIPS 2025)


MSFT is a multi-scale fine-tuning approach designed to better leverage time series foundation models for forecasting by disentangling scale effects. This repository provides code for multi-scale fine-tuning on LSF tasks, built on top of Moirai (from the Uni2TS codebase). The fine-tuning setup follows the same configuration as described in the Moirai finetune_lsf page, where further experimental details can be found.

⚙️ Installation

The environment can be set up in the same way as in the original Moirai repository using venv, or alternatively using conda as shown below:

  1. Clone repository:
git clone https://github.com/anonymousauthors1818/MSFT.git
cd MSFT
  2. Create a conda environment:
conda create -n MSFT python=3.10.12
conda activate MSFT
pip install uni2ts
  3. Set the PYTHONPATH for the conda environment by writing the export line below into the activation script (a non-interactive sketch of this step follows the list):
mkdir -p $CONDA_PREFIX/etc/conda/activate.d
nano $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
export PYTHONPATH="YOUR_PATH/MSFT/src:$PYTHONPATH"
  4. Reactivate the environment and verify:
conda deactivate
conda activate MSFT
echo $PYTHONPATH
  5. Create a .env file (it will store the dataset paths set in the next section):
touch .env
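
As a reference for steps 3–5, the activation hook can also be written without opening an editor. The snippet below is a minimal sketch, assuming the repository was cloned to YOUR_PATH/MSFT (replace it with your actual path):

# Write the activation hook non-interactively (equivalent to step 3).
cat > $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh << 'EOF'
export PYTHONPATH="YOUR_PATH/MSFT/src:$PYTHONPATH"
EOF

# Reactivate (step 4); the printed path should now contain YOUR_PATH/MSFT/src.
conda deactivate && conda activate MSFT
echo $PYTHONPATH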

📊 Data Preparation

  1. Download the pre-processed LSF benchmark datasets by setting up the TSLib repository and following its instructions, and place them in a suitable directory.
  2. Assign that dataset directory to the LSF_PATH environment variable:
echo "LSF_PATH=PATH_TO_TSLIB/dataset" >> .env
  3. Add the path of the directory where the processed datasets should be saved to the .env file (an example .env is shown after this list):
echo "CUSTOM_DATA_PATH=PATH_TO_SAVE" >> .env
  4. Run the following script to process the datasets into the format required for the LSF tasks:
bash project/lsf/build_lsf_ft_datasets.sh
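
After the two echo commands above, the .env file should contain one line per variable. The paths below are purely illustrative placeholders, not defaults of this repository; point them to your own TSLib dataset directory and to the directory for the processed datasets:

# Example .env contents (illustrative paths only)
LSF_PATH=/data/tslib/dataset
CUSTOM_DATA_PATH=/data/msft/lsf_processed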

💻 Reproduce

We provide several shell scripts to facilitate fine-tuning and evaluation, making it easy to reproduce our results. Configurations are managed with the Hydra framework.

Fine-tuning

Run the fine-tuning script for a specific LSF task, e.g., ETTm1:

bash project/lsf/multi_scale/finetune/small/ettm1.sh

The checkpoints of fine-tuned models will be saved in the output directory.
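
To locate the relative checkpoint path needed for evaluation below, you can list the saved checkpoints. This is only a sketch: it assumes the usual .ckpt extension and the output directory mentioned above; adjust the directory names if your runs are written elsewhere.

# List fine-tuned checkpoints and their relative paths (assumed .ckpt extension).
find outputs .outputs -name "*.ckpt" 2>/dev/null | sort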

Evaluation

After fine-tuning, add the relative checkpoint paths (starting with .outputs/...) to the corresponding evaluation shell script, then run it:

bash project/lsf/multi_scale/eval/small/ettm1.sh

🔧 Tip: We recommend running the scripts in the background using nohup and redirecting output to a log file for better monitoring. Evaluation results will be printed to the log and can be directly checked there.

nohup bash project/lsf/multi_scale/finetune/small/ettm1.sh > finetune_ettm1.out 2>&1 &
nohup bash project/lsf/multi_scale/eval/small/ettm1.sh > eval_ettm1.out 2>&1 &
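
The running jobs can then be monitored by following the log files, for example:

tail -f finetune_ettm1.out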

🔗 Citation

@article{qiao2025multi,
  title={Multi-Scale Finetuning for Encoder-based Time Series Foundation Models},
  author={Qiao, Zhongzheng and Liu, Chenghao and Zhang, Yiming and Jin, Ming and Pham, Quang and Wen, Qingsong and Suganthan, PN and Jiang, Xudong and Ramasamy, Savitha},
  journal={arXiv preprint arXiv:2506.14087},
  year={2025}
}
