In domains where there is inherent randomness in the process, simple (ensemble) models tend to outperform complex ones. Nonlinear models can capture nonlinear patterns, but they also tend to fit noise.
Neural network models have been shown to work really well on text, images, sound, etc., but these types of data have no inherent randomness in them. A piece of text is a piece of text.
Most time series forecasting, by contrast, targets quantities with complex, unmeasured causality, like natural gas prices. Past behavior is no guarantee of future behavior, so capturing the nonlinear behavior of the past more faithfully can actually degrade future performance. Simple models tend to be more robust because they don't overly bias towards any one trend.
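A minimal sketch of that failure mode, using only numpy (the random-walk data, polynomial degree, and naive baseline are illustrative assumptions, not from any paper): a flexible model fits the training history closely, then extrapolates badly, while a trivial baseline holds up.

```python
import numpy as np

# Pure random walk: the past contains no signal about the future.
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=120))
train, test = y[:100], y[100:]
t_train, t_test = np.arange(100), np.arange(100, 120)

# Flexible nonlinear model: degree-10 polynomial fit to the history.
coeffs = np.polyfit(t_train, train, deg=10)
poly_in = np.polyval(coeffs, t_train)
poly_out = np.polyval(coeffs, t_test)

# Simple model: naive forecast, just repeat the last observed value.
naive_out = np.full_like(test, train[-1])

mse = lambda a, b: float(np.mean((a - b) ** 2))
print("poly  in-sample MSE :", mse(poly_in, train))   # small: it fit the noise
print("poly  out-sample MSE:", mse(poly_out, test))   # typically explodes
print("naive out-sample MSE:", mse(naive_out, test))  # usually far smaller
```

Nothing here is specific to polynomials; any sufficiently flexible model can memorize in-sample noise and pay for it out-of-sample.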
About nine months ago (things may have changed since) someone showed simple time series models outperforming Chronos:
https://github.com/Nixtla/nixtla/tree/main/experiments/amazo...
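For a sense of what those simple baselines look like in practice, here is a hedged sketch using Nixtla's statsforecast library (the toy data, series id, and exact model choices are my assumptions, not taken from the linked experiment):

```python
import numpy as np
import pandas as pd
from statsforecast import StatsForecast
from statsforecast.models import AutoETS, SeasonalNaive

# Toy daily series with weekly seasonality plus noise (illustrative only).
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "unique_id": "series_1",   # statsforecast's expected long-format schema
    "ds": pd.date_range("2023-01-01", periods=n, freq="D"),
    "y": 10 + np.sin(np.arange(n) * 2 * np.pi / 7) + rng.normal(0, 0.5, n),
})

# Classical baselines of the kind benchmarked against Chronos.
sf = StatsForecast(
    models=[SeasonalNaive(season_length=7), AutoETS(season_length=7)],
    freq="D",
)
fc = sf.forecast(df=df, h=14)  # 14-step-ahead forecasts, one column per model
print(fc.head())
```

Models like these are cheap to fit and serve as a strong floor for any comparison with pretrained forecasters.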
I've been skeptical about the earlier "time series LLM" papers, so this is interesting to see. Curious if others disagree with this paper!
You should use the original title when posting these.
Thanks for the feedback! :) I included “No” because that’s the paper’s conclusion. Hoping to save people the effort of reading.
That's not how HN works, though, and the site docs ask you to use original titles.
Gotcha, thanks! Updated.
We all know Betteridge’s law of headlines ;)