K-Fold Cross Validation: Explanation + Tutorial in Python, Scikit-Learn & NumPy

  • Published: 24 Jan 2025

Comments • 15

  • @GregHogg
    @GregHogg  1 year ago

    Take my courses at mlnow.ai/!

  • @Sufyansgoals
    @Sufyansgoals 1 year ago +3

    I don't get why your videos don't have millions of views yet; you explain everything so clearly! Awesome work, man, keep it up!

    • @GregHogg
      @GregHogg  1 year ago +1

      Haha thank you I appreciate that!!

  • @arsheyajain7055
    @arsheyajain7055 3 years ago +1

    LOVE THE THUMBNAIL

  • @jacobdavies3761
    @jacobdavies3761 10 months ago

    Cheers for the help mate 👍

  • @prachishah2151
    @prachishah2151 3 years ago

    Thank you so much for the amazing video

    • @GregHogg
      @GregHogg  3 years ago

      Glad it was helpful and you're very welcome, Prachi!

  • @panagiotisgoulas8539
    @panagiotisgoulas8539 2 years ago

    For anyone wondering why he flattens the images, e.g. X_train.reshape(X_train.shape[0], -1): it's because sklearn's predict wants a 2D array as its first argument (see the sketch after this thread).

    • @GregHogg
      @GregHogg  2 years ago

      Thanks Panagiotis, I appreciate that!!
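
A minimal sketch of the flattening step Panagiotis describes. The array shapes, labels, and the LogisticRegression model here are placeholder assumptions, not taken from the video; the point is just that sklearn's fit/predict expect X as a 2D array of shape (n_samples, n_features).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder image data (assumed MNIST-style): 100 grayscale 28x28 images
X_train = np.random.rand(100, 28, 28)
y_train = np.random.randint(0, 10, size=100)

# sklearn estimators want X as (n_samples, n_features), so each
# 28x28 image is flattened into a single 784-element row
X_train_flat = X_train.reshape(X_train.shape[0], -1)  # shape: (100, 784)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train_flat, y_train)

# predict likewise expects 2D input, hence the same reshape
preds = clf.predict(X_train_flat[:5])
```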

  • @sukanyaacharya5202
    @sukanyaacharya5202 2 years ago

    Hi Greg, thank you for the wonderful video. I have been working on time series evaluation, specifically cross validation. My question is: is there a way to generalize the parameter tuning once the cross validation is complete, or is it just done on a trial-and-error basis?

    • @GregHogg
      @GregHogg  2 years ago +1

      You can always tune values to minimize cross val loss :)
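
As a systematic alternative to trial and error, one option in the spirit of Greg's reply is a grid search that keeps whichever hyperparameters minimize the cross-validated loss. A minimal sketch follows; the data, the RandomForestRegressor model, and the parameter grid are placeholder assumptions, and TimeSeriesSplit is used (rather than plain K-Fold) so the folds respect chronological order for Sukanya's time series case.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

# Placeholder time series data: 200 samples, 5 features
rng = np.random.default_rng(0)
X = rng.random((200, 5))
y = rng.random(200)

# TimeSeriesSplit keeps folds in chronological order, so the model
# is always validated on data that comes after its training window
cv = TimeSeriesSplit(n_splits=5)

# Grid search over placeholder hyperparameters; the best combination
# is the one that minimizes mean squared error across the CV folds
param_grid = {"n_estimators": [50, 100], "max_depth": [3, 5, None]}
search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid,
    cv=cv,
    scoring="neg_mean_squared_error",
)
search.fit(X, y)
print(search.best_params_, -search.best_score_)
```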

  • @pogiribhanuteja4650
    @pogiribhanuteja4650 3 years ago

    Wanted a 'statistics for data science' video please, covering distributions etc.

  • @beypazariofficial
    @beypazariofficial 7 months ago

    🤓