Autoencoder Forest for Anomaly Detection from IoT Time Series | SP Group

  • Published: 28 Jul 2024
  • Get the slides: www.datacouncil.ai/talks/time...
    ABOUT THE TALK
    In the energy/utility context, condition monitoring is one of the most important processes in the daily operation and maintenance of equipment. With more and more IoT sensors being deployed on equipment, there is increasing demand for machine-learning-based anomaly detection for condition monitoring. In this talk, I will discuss a method we designed for anomaly detection based on a collection of autoencoders learned from time-related information. The talk covers the whole end-to-end flow of how this method is designed, and some energy-specific use cases are used to demonstrate its performance.
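    The talk itself does not include code, but the core idea — a "forest" of autoencoders, one per time window, scoring a reading by reconstruction error — can be sketched as follows. This is a minimal illustration, not the speaker's implementation: the linear (PCA-style) autoencoder, the window layout, and all names are assumptions.

```python
import numpy as np

def fit_linear_autoencoder(X, k=2):
    """Fit a minimal linear autoencoder (PCA) with a k-dimensional code."""
    mu = X.mean(axis=0)
    # Rows of Vt are the principal directions of the centered data.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def reconstruction_error(x, model):
    """Anomaly score: how badly the window's autoencoder reconstructs x."""
    mu, W = model
    x_hat = mu + (x - mu) @ W.T @ W   # encode, then decode
    return float(np.linalg.norm(x - x_hat))

# Train one autoencoder per time-of-day window (synthetic data).
rng = np.random.default_rng(0)
n_windows, dim = 4, 8            # e.g. four six-hour windows, 8-step profiles
forest = {}
for w in range(n_windows):
    pattern = np.sin(np.linspace(0, np.pi, dim) + w)   # window-specific shape
    X = pattern + 0.05 * rng.standard_normal((200, dim))
    forest[w] = fit_linear_autoencoder(X, k=2)

# A new reading is scored by the autoencoder trained for its own window;
# a high reconstruction error flags it as anomalous.
normal = np.sin(np.linspace(0, np.pi, dim) + 1)   # matches window 1's pattern
drift = np.linspace(-1, 1, dim)                   # a shape the model never saw
print(reconstruction_error(normal, forest[1]) < reconstruction_error(drift, forest[1]))
```

    In a real deployment the per-window models would be nonlinear autoencoders and the threshold on the reconstruction error would be calibrated on held-out normal data; the sketch only shows the per-window training and scoring structure.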
    ABOUT THE SPEAKER
    Yiqun Hu is currently the Director, Data & AI at SP Digital and is responsible for driving data & AI initiatives for the whole SP Group. His team has built and manages the group's big data infrastructure and has deployed production-ready AI solutions to transform the utility industry.
    Before joining SP Group, Yiqun led data/AI teams in several industries, applying data science and machine learning to bring real impact to organizations including a global payment company (PayPal), an e-commerce company (eBay), and a leading financial institution in Asia (DBS).
    Besides his industry experience, Yiqun also spent close to a decade in academic R&D as an AI researcher. He has published over 40 scientific papers in flagship international AI conferences and journals, e.g. TPAMI/TIP/TM, CVPR/ICCV/ECCV/ACMMM, as well as one book chapter. His publications have been cited over 1,700 times in other scientific publications.
    ABOUT DATA COUNCIL:
    Data Council (www.datacouncil.ai/) is a community and conference series that provides data professionals with the learning and networking opportunities they need to grow their careers. Make sure to subscribe to our channel for more videos, including DC_THURS, our series of live online interviews with leading data professionals from top open source projects and startups.
    FOLLOW DATA COUNCIL:
    Twitter: / datacouncilai
    LinkedIn: / datacouncil-ai
    Facebook: / datacouncilai
    Eventbrite: www.eventbrite.com/o/data-cou...
  • Science

Comments • 10

  • @jmvanlith
    @jmvanlith 4 years ago +3

    Great idea to cluster on time!

    • @MrProzaki
      @MrProzaki 4 years ago

      Yep, agree — just watched it and I can't wait to test it!

  • @abhalla
    @abhalla 4 years ago +1

    Very good talk

  • @markussagen3778
    @markussagen3778 4 years ago

    Great talk

  • @MichalMonday
    @MichalMonday 2 years ago +1

    Hello, are there any publications about this method?

  • @tthaz
    @tthaz 3 years ago

    Excellent talk. I wondered how you label your data in the first place.

  • @erminkevric4921
    @erminkevric4921 2 years ago

    How is the specific autoencoder selected in the end, when the testing data is passed?

  •  4 years ago +1

    Don't you effectively mask your training data to exclude the linear example? It would be interesting to see how the single encoder looks if you run the same masking on the input before training it.

    • @YiqunHu
      @YiqunHu 2 years ago

      The reason to apply multiple encoders to different shifted windows of the training data is that even for the same repeating pattern, different starting points produce different patterns. A single encoder would need a lot more representational ability, and it is hard to trade off complexity against generalization capability.
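      To make the reply concrete, here is a hypothetical numeric illustration (not from the talk): slicing the same periodic signal at different offsets yields different-looking windows, while the same offset on different cycles repeats exactly — which is why one small model per offset can be simpler than a single model covering every shift.

```python
import numpy as np

# A repeating daily pattern sampled hourly over 30 days.
t = np.arange(24 * 30)
signal = np.sin(2 * np.pi * t / 24)

# Fixed-length windows starting at different offsets within the cycle.
w0 = signal[0:8]             # window starting at hour 0
w6 = signal[6:14]            # window starting at hour 6
w0_next_day = signal[24:32]  # same offset as w0, one day later

# Same underlying pattern, but as vectors the offset windows differ,
# while the same offset on another cycle matches exactly.
print(np.allclose(w0, w6))          # False: different starting points
print(np.allclose(w0, w0_next_day)) # True: same starting point, next cycle
```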

  • @najmesouri2088
    @najmesouri2088 4 years ago +2

    Excellent. Can we see the code?