MSFT is a multi-scale fine-tuning approach designed to better leverage time series foundation models for forecasting by disentangling scale effects. This repository provides code for multi-scale fine-tuning on LSF tasks, built on top of Moirai (from the Uni2TS codebase). The fine-tuning setup follows the same configuration as described in the Moirai finetune_lsf page, where further experimental details can be found.
The environment can be set up in the same way as in the original Moirai repository using venv, or alternatively using conda as shown below:
- Clone the repository:
```bash
git clone https://github.com/anonymousauthors1818/MSFT.git
cd MSFT
```
- Create the conda environment:
```bash
conda create -n MSFT python=3.10.12
conda activate MSFT
pip install uni2ts
```
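To confirm that the installation succeeded, you can check that the package is visible in the environment (a quick sanity check; the import name `uni2ts` is assumed to match the package name):
```bash
# Quick sanity check of the installation (assumes the import name is uni2ts).
pip show uni2ts
python -c "import uni2ts; print('uni2ts imported OK')"
```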
- Set the `PYTHONPATH` for the conda environment:
```bash
mkdir -p $CONDA_PREFIX/etc/conda/activate.d
nano $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
```
Add the following line to `env_vars.sh`, replacing `YOUR_PATH` with the absolute path to the cloned repository:
```bash
export PYTHONPATH="YOUR_PATH/MSFT/src:$PYTHONPATH"
```
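If you prefer not to edit the file interactively, the same activation hook can be written non-interactively; a minimal sketch, again assuming `YOUR_PATH` is replaced with the absolute path of the cloned repository:
```bash
# Write the activation hook in one step; the quoted heredoc keeps $PYTHONPATH literal
# so it is expanded at activation time, not now.
cat > $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh << 'EOF'
export PYTHONPATH="YOUR_PATH/MSFT/src:$PYTHONPATH"
EOF
```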
- Reactivate the environment and verify:
```bash
conda deactivate
conda activate MSFT
echo $PYTHONPATH
```
- Create a `.env` file:
```bash
touch .env
```
- Download the pre-processed LSF benchmark datasets by setting up the TSLib repository and following its instructions, and place them in a suitable directory (see the sketch below).
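A rough sketch of this step, assuming that TSLib refers to the Time-Series-Library repository and that the downloaded datasets go into its `dataset/` folder (check its README for the actual download links):
```bash
# Assumption: "TSLib" is the Time-Series-Library repository; the dataset download
# itself follows the instructions in its README.
git clone https://github.com/thuml/Time-Series-Library.git
# After downloading, the pre-processed LSF datasets should sit under
# Time-Series-Library/dataset/ (this directory becomes LSF_PATH below).
```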
- Assign the dataset directory to the `LSF_PATH` environment variable:
```bash
echo "LSF_PATH=PATH_TO_TSLIB/dataset" >> .env
```
- Add the path to the directory where you want to save the processed datasets to the `.env` file:
```bash
echo "CUSTOM_DATA_PATH=PATH_TO_SAVE" >> .env
```
- Run the following script to process the datasets into the required format for the LSF tasks:
```bash
bash project/lsf/build_lsf_ft_datasets.sh
```
We provide several shell scripts to facilitate fine-tuning and evaluation, making it easy to reproduce our results. Configurations are managed with the Hydra framework.
Run the fine-tuning script for a specific LSF task, for example:
```bash
bash project/lsf/multi_scale/finetune/small/ettm1.sh
```
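Since configurations are managed with Hydra, hyperparameters can typically be adjusted by editing the `key=value` overrides inside these shell scripts rather than the config files themselves. The sketch below is purely illustrative, with a hypothetical entry point and override keys (see the provided scripts for the actual values):
```bash
# Purely illustrative sketch of a Hydra-style launch; the module path, config
# location, and override keys here are hypothetical, not this repo's actual values.
python -m cli.train \
  -cp conf/lsf/finetune \
  run_name=msft_ettm1_example \
  trainer.max_epochs=10 \
  train_dataloader.batch_size=64
```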
The checkpoints of the fine-tuned models will be saved in the output directory.
After fine-tuning, add the relative checkpoint paths (starting with `.outputs/...`) to the corresponding evaluation shell script, then run the evaluation script:
```bash
bash project/lsf/multi_scale/eval/small/ettm1.sh
```
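As a minimal sketch of that edit, assuming the evaluation script takes the checkpoint location through a shell variable (the variable name and path below are placeholders, not the script's actual contents):
```bash
# Inside project/lsf/multi_scale/eval/small/ettm1.sh (illustrative placeholder only):
# point the checkpoint variable at the fine-tuned model saved during training.
ckpt_path=".outputs/.../checkpoints/epoch=XX-step=YYYY.ckpt"
```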
🔧 Tip: We recommend running the scripts in the background with nohup and redirecting the output to a log file for easier monitoring. Evaluation results are printed to the log and can be checked there directly.
```bash
nohup bash project/lsf/multi_scale/finetune/small/ettm1.sh > finetune_ettm1.out 2>&1 &
nohup bash project/lsf/multi_scale/eval/small/ettm1.sh > eval_ettm1.out 2>&1 &
```
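To follow a run while it is in progress, the log file can simply be tailed:
```bash
tail -f finetune_ettm1.out
```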
If you find this work helpful, please cite our paper:
```bibtex
@article{qiao2025multi,
  title={Multi-Scale Finetuning for Encoder-based Time Series Foundation Models},
  author={Qiao, Zhongzheng and Liu, Chenghao and Zhang, Yiming and Jin, Ming and Pham, Quang and Wen, Qingsong and Suganthan, PN and Jiang, Xudong and Ramasamy, Savitha},
  journal={arXiv preprint arXiv:2506.14087},
  year={2025}
}
```