Don't you effectively mask your training data to exclude the linear example? It would be interesting to see how the single encoder looks if you run the same masking on the input before training it.
The reason to apply multiple encoders to different shifted windows of the training data is that even for the same repeating pattern, if you look at different starting points, you will see different patterns. If you apply a single encoder instead, that one model needs much more representational capacity, and it becomes hard to trade off complexity against generalization capability.
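To make the idea concrete, here is a minimal sketch of the per-shift scheme described above. It is my own illustration, not the speaker's implementation: each "autoencoder" is just a linear PCA basis fit to windows at one phase offset, and at test time the window is assigned to whichever basis reconstructs it with the lowest error. The signal, offsets, and dimensions are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a noisy periodic signal, windowed at several phase offsets.
period, win, code_dim = 20, 20, 4
t = np.arange(2000)
signal = np.sin(2 * np.pi * t / period) + 0.05 * rng.standard_normal(t.size)

def windows(x, offset, size, step):
    starts = np.arange(offset, x.size - size, step)
    return np.stack([x[s:s + size] for s in starts])

# One linear "autoencoder" per shift: a PCA basis fit to that phase's windows.
shifts = [0, 5, 10, 15]
bases = []
for off in shifts:
    W = windows(signal, off, win, period)   # same phase in every period
    mu = W.mean(axis=0)
    _, _, Vt = np.linalg.svd(W - mu, full_matrices=False)
    bases.append((mu, Vt[:code_dim]))       # encoder/decoder share one basis

def recon_error(x, basis):
    mu, V = basis
    code = (x - mu) @ V.T                    # encode
    return np.sum((mu + code @ V - x) ** 2)  # decode + squared error

# At test time, the encoder with the lowest reconstruction error claims the window.
test = windows(signal, 10, win, period)[0]   # a window at phase shift 10
errors = [recon_error(test, b) for b in bases]
best = int(np.argmin(errors))
print("selected shift:", shifts[best])
```

With a single encoder you would need one model that reconstructs every phase well; splitting by shift lets each small model specialize, which is the trade-off described above.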
Great idea to cluster on time!
Yep, agreed. Just watched it and I can't wait to test that!
Hello, are there any publications about this method?
How is the specific autoencoder selected in the end, when the test data is passed in?
Excellent talk. I wondered how you label your data in the first place.
Great talk
Excellent. Is the code available to see?
Very good talk