bug fix
app.py
CHANGED
@@ -114,7 +114,26 @@ def main():
     image = Image.open('data/image.png')
     st.image(image, caption='Coding.Waterkant Festival for AI')
 
+
     st.markdown(body = """
+### Abstract
+Multi-horizon forecasting often contains a complex mix of inputs – including
+static (i.e. time-invariant) covariates, known future inputs, and other exogenous
+time series that are only observed in the past – without any prior information
+on how they interact with the target. Several deep learning methods have been
+proposed, but they are typically ‘black-box’ models which do not shed light on
+how they use the full range of inputs present in practical scenarios. In this pa-
+per, we introduce the Temporal Fusion Transformer (TFT) – a novel attention-
+based architecture which combines high-performance multi-horizon forecasting
+with interpretable insights into temporal dynamics. To learn temporal rela-
+tionships at different scales, TFT uses recurrent layers for local processing and
+interpretable self-attention layers for long-term dependencies. TFT utilizes spe-
+cialized components to select relevant features and a series of gating layers to
+suppress unnecessary components, enabling high performance in a wide range of
+scenarios. On a variety of real-world datasets, we demonstrate significant per-
+formance improvements over existing benchmarks, and showcase three practical
+interpretability use cases of TFT.
+
 ### Experiments
 We implemented TFT for sales multi-horizon sales forecast during Coding.Waterkant.
 Please try our implementation and adjust some of the training data.
@@ -143,29 +162,8 @@ def main():
     st.pyplot(fig)
 
     st.markdown(body = """
-### Abstract
-Multi-horizon forecasting often contains a complex mix of inputs – including
-static (i.e. time-invariant) covariates, known future inputs, and other exogenous
-time series that are only observed in the past – without any prior information
-on how they interact with the target. Several deep learning methods have been
-proposed, but they are typically ‘black-box’ models which do not shed light on
-how they use the full range of inputs present in practical scenarios. In this pa-
-per, we introduce the Temporal Fusion Transformer (TFT) – a novel attention-
-based architecture which combines high-performance multi-horizon forecasting
-with interpretable insights into temporal dynamics. To learn temporal rela-
-tionships at different scales, TFT uses recurrent layers for local processing and
-interpretable self-attention layers for long-term dependencies. TFT utilizes spe-
-cialized components to select relevant features and a series of gating layers to
-suppress unnecessary components, enabling high performance in a wide range of
-scenarios. On a variety of real-world datasets, we demonstrate significant per-
-formance improvements over existing benchmarks, and showcase three practical
-interpretability use cases of TFT.
-
-### Experiments
-We implemented TFT for sales multi-horizon sales forecast during Coding.Waterkant.
-Please try our implementation and adjust some of the training data.
-
-Adjustments to the model and extention with Quantile forecast are coming soon ;)
+Sources: Bryan Lim et al. in Temporal Fusion Transformers (TFT) for Interpretable Multi-horizon Time Series Forecasting
+Demo created by: <a href=https://github.com/MalteLeuschner</a>
 """)
 
 if __name__ == '__main__':
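Net effect of the two hunks: the paper abstract moves from the markdown block after the forecast plot up into the block under the header image, and the lower block is reduced to a source citation and author credit. Below is a minimal sketch of how the affected part of `main()` might read after the commit; the data loading, model, and figure construction are not in the diff, so the `fig` placeholder is an assumption. The committed attribution line also embeds a raw HTML anchor that is missing its closing `>` and link text; the sketch writes it as well-formed HTML and passes `unsafe_allow_html=True`, which `st.markdown` requires to render raw HTML.

```python
# Sketch (not the committed file) of how main() is laid out after this commit.
# Only the calls visible in the diff are kept; data loading and the forecast
# figure are placeholders.
import matplotlib.pyplot as plt
import streamlit as st
from PIL import Image


def main():
    image = Image.open('data/image.png')
    st.image(image, caption='Coding.Waterkant Festival for AI')

    # Hunk 1: abstract and experiment notes now sit directly under the header image.
    st.markdown(body="""
### Abstract
Multi-horizon forecasting often contains a complex mix of inputs ...

### Experiments
We implemented TFT for multi-horizon sales forecasting during Coding.Waterkant.
Please try our implementation and adjust some of the training data.
""")

    # ... TFT training / forecasting code not shown in the diff ...
    fig, _ax = plt.subplots()  # placeholder for the forecast figure (assumption)
    st.pyplot(fig)

    # Hunk 2: the lower markdown block now only carries the citation and credit.
    # The committed anchor tag is malformed; a corrected form is shown here.
    st.markdown(
        body="""
Sources: Bryan Lim et al., Temporal Fusion Transformers (TFT) for Interpretable
Multi-horizon Time Series Forecasting
Demo created by: <a href="https://github.com/MalteLeuschner">MalteLeuschner</a>
""",
        unsafe_allow_html=True,
    )


if __name__ == '__main__':
    main()
```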
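The abstract quoted above attributes TFT's robustness partly to gating layers that suppress components which do not help a given dataset. Purely as an illustration of that gating idea (this is not the Space's code, and it simplifies the paper's Gated Residual Network by dropping the ELU activation and layer normalisation, with random weights standing in for learned ones), a sigmoid gate can scale a candidate transformation before it is added back onto a residual path:

```python
# Toy illustration of sigmoid gating on a residual path, loosely following the
# gating idea described in the TFT abstract. Random weights, no training.
import numpy as np

rng = np.random.default_rng(0)


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def gated_residual(x, hidden=8):
    """Candidate transform, elementwise sigmoid gate, then a skip connection."""
    d = x.shape[-1]

    # candidate transformation of the input (two small dense layers)
    w1, b1 = rng.normal(size=(d, hidden)), np.zeros(hidden)
    w2, b2 = rng.normal(size=(hidden, d)), np.zeros(d)
    eta = np.maximum(x @ w1 + b1, 0.0) @ w2 + b2

    # gate in (0, 1) decides how much of the candidate passes through
    wg, bg = rng.normal(size=(d, d)), np.zeros(d)
    gate = sigmoid(eta @ wg + bg)

    # residual connection: a closed gate falls back to the unchanged input
    return x + gate * eta


x = rng.normal(size=(4, 16))        # toy (time steps, features) input
print(gated_residual(x).shape)      # (4, 16)
```

If the gate saturates near zero, the block reduces to an identity skip connection, which is the suppression behaviour the abstract refers to.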