Explore the architecture of Lag-Llama and learn how to use it in a forecasting project with Python
In October 2023, I published an article on one of the first foundation models for time series forecasting, capable of zero-shot inference, anomaly detection, and conformal prediction.
However, TimeGPT is a proprietary model that can only be accessed through an API token. Still, it sparked more research into foundation models for time series, as this area has been lagging behind natural language processing (NLP) and computer vision.
Fast-forward to February 2024, and we now have an open-source foundation model for time series forecasting: Lag-Llama.
In the original paper, the model is presented as a general-purpose foundation model for univariate probabilistic forecasting. It was developed by a large team from different institutions, including Morgan Stanley, ServiceNow, Université de Montréal, Mila-Quebec, and McGill University.
In this article, we explore the architecture of Lag-Llama, its capabilities, and how it was trained. Then we actually use Lag-Llama in a forecasting project and compare its performance to other deep learning methods like the Temporal Fusion Transformer (TFT) and DeepAR.
Of course, for more details, you can read the original paper.
Let’s get started!
As mentioned earlier, Lag-Llama is built for univariate probabilistic forecasting.
It uses a general method for tokenizing time series data that does not rely on frequency. That way, the model can generalize well to frequencies it has not seen during training.
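To make this concrete, here is a minimal sketch of how lagged values can be assembled into feature vectors for a univariate series. The lag set below is an assumption for illustration only; Lag-Llama's actual lag indices are derived from a range of frequencies described in the paper.

```python
import numpy as np

def make_lag_features(series: np.ndarray, lags: list[int]) -> np.ndarray:
    """Build a (num_windows, num_lags) matrix where row t holds
    series[t - lag] for each lag, for every t where all lags are valid."""
    max_lag = max(lags)
    rows = []
    for t in range(max_lag, len(series)):
        rows.append([series[t - lag] for lag in lags])
    return np.array(rows)

# Toy series 0..9 with a hypothetical lag set {1, 2, 7}
series = np.arange(10, dtype=float)
features = make_lag_features(series, lags=[1, 2, 7])
print(features.shape)  # (3, 3): valid time steps are t = 7, 8, 9
print(features[0])     # [6. 5. 0.] -> values at lags 1, 2, 7 for t = 7
```

Because each token is described by its lagged values rather than by a fixed sampling frequency, the same tokenization applies to hourly, daily, or weekly data alike.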
It leverages the Transformer architecture along with a distribution head to parse the input tokens and map them to future forecasts with confidence intervals.
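The idea of a distribution head can be sketched as follows. This is not Lag-Llama's actual code: the model's default head parameterizes a Student's t-distribution, whereas this toy example projects a token embedding to the mean and scale of a Gaussian and derives an empirical 90% interval from samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def distribution_head(embedding: np.ndarray) -> tuple[float, float]:
    """Hypothetical head: linear projections of a token embedding to
    distribution parameters (mean, positive scale)."""
    w_mu = rng.normal(size=embedding.shape)
    w_sigma = rng.normal(size=embedding.shape)
    mu = float(embedding @ w_mu)
    sigma = float(np.log1p(np.exp(embedding @ w_sigma)))  # softplus keeps scale > 0
    return mu, sigma

embedding = rng.normal(size=16)  # stand-in for a Transformer token output
mu, sigma = distribution_head(embedding)

# Sample forecasts and form an empirical 90% confidence interval
samples = rng.normal(mu, sigma, size=1000)
lo, hi = np.quantile(samples, [0.05, 0.95])
print(f"point forecast: {mu:.2f}, 90% interval: [{lo:.2f}, {hi:.2f}]")
```

Sampling from the predicted distribution is what makes the forecast probabilistic: instead of a single point estimate, the model returns a range of plausible future values.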
Since there is a lot to cover, let’s explore each main component in more detail.
Tokenization with lag features