D2Vformer: A Flexible Time Series Prediction Model Based on Time Position Embedding
/ Authors
/ Abstract
Existing time-series forecasting methods often struggle to adapt to dynamic scenarios and lack flexibility in prediction. They typically require retraining the model when the prediction length or position changes. Moreover, these methods still face challenges in effectively capturing and utilizing time-position embeddings (PEs). To address these limitations, this article proposes a novel model called D2Vformer. Unlike conventional prediction methods that rely on fixed-length predictors, D2Vformer can directly handle scenarios with arbitrary prediction lengths. In addition, it significantly reduces training resource consumption and proves highly effective in real-world dynamic environments. In D2Vformer, the Date2Vec (D2V) module is devised to leverage timestamp information and feature sequences to generate time PEs. Subsequently, D2Vformer introduces an innovative fusion module that leverages an attention mechanism to capture the mapping between input and target time PEs, thereby enabling flexible prediction. Extensive experiments on six datasets demonstrate that D2V outperforms other time-PE methods, while D2Vformer surpasses state-of-the-art approaches in both fixed-length and arbitrary-length prediction tasks. The code for D2Vformer is available at: https://github.com/TeamofHaoWang/D2Vformer.
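The mechanism the abstract describes, timestamp-derived position embeddings plus an attention map from input-time PEs to target-time PEs, can be sketched in a minimal form. This is not the authors' implementation: `date2vec` here is a hypothetical Time2Vec-style embedding (one linear plus several periodic components), and `attention_forecast` is a bare cross-attention in NumPy; all function names, shapes, and frequencies are illustrative assumptions. The point it shows is that once predictions are driven by target-time PEs, the forecast horizon can change without retraining.

```python
import numpy as np

def date2vec(timestamps, k=8, seed=0):
    """Hypothetical Date2Vec-style embedding (Time2Vec-like):
    one linear term plus k-1 periodic terms per timestamp."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=k)                              # frequencies
    b = rng.normal(size=k)                              # phases
    t = np.asarray(timestamps, dtype=float)[:, None]    # (T, 1)
    pe = w * t + b                                      # (T, k)
    pe[:, 1:] = np.sin(pe[:, 1:])                       # periodic components
    return pe

def attention_forecast(x, pe_in, pe_out):
    """Cross-attention: target-time PEs act as queries over
    input-time PEs (keys); values are the input features x (T_in, d)."""
    scores = pe_out @ pe_in.T / np.sqrt(pe_in.shape[1])   # (T_out, T_in)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)         # row-wise softmax
    return weights @ x                                    # (T_out, d)

# 24 observed steps of a 2-feature series; forecast arbitrary horizons
# from the same fitted components, with no retraining.
t_in = np.arange(24)
x = np.stack([np.sin(0.3 * t_in), np.cos(0.3 * t_in)], axis=1)
for horizon in (12, 48):
    t_out = np.arange(24, 24 + horizon)
    y = attention_forecast(x, date2vec(t_in), date2vec(t_out))
    print(y.shape)   # (horizon, 2)
```

In D2Vformer the embeddings and the fusion module are learned jointly; this sketch uses fixed random frequencies only to make the arbitrary-length property concrete.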
Journal: IEEE Transactions on Neural Networks and Learning Systems