Machine Learning Project - Iris Flower Classification | @dsbrain

  • Published: 19 Oct 2024
  • 🔍 Explore Further:
    Channel: Subscribe for more engaging content on data science, machine learning, and Python (full course ongoing).
    GitHub Repository: Access the code and resources from this project here.
    Embark on this exciting journey with Data Science Brain, where we unravel the beauty of data and the power it holds to transform our understanding of the world.
    github.com/Dee...
    🌐 Connect with Data Science Brain:
    deepakjosecode...
    / datasciencebrain
    🌟 Like, Share, Subscribe, and let the learning adventure begin! 🚀
    #DataScience #MachineLearning #IrisClassification #DataScienceBrain #MLProjects

Comments • 16

  • @nallachi2913
    @nallachi2913 1 year ago +1

    Nice content 👍

  • @ghostlyverse2722
    @ghostlyverse2722 1 year ago +1

    ❤❤❤❤ awesome bro 😊😊

    • @dsbrain
      @dsbrain  1 year ago

      Thank you bro 🙏. Comment down suggestions to improve

    • @ghostlyverse2722
      @ghostlyverse2722 1 year ago +1

      Already doing best my brother

    • @dsbrain
      @dsbrain  1 year ago

      @@ghostlyverse2722 I believe there's room for improvement in everything. And I'm new to this form of teaching. Support us 💪🙏

    • @ghostlyverse2722
      @ghostlyverse2722 1 year ago +1

      Always will bro

  • @sanjanahalli1174
    @sanjanahalli1174 8 months ago +2

    Can I run this code in VS Code?

    • @Saiii69
      @Saiii69 8 months ago

      Yes, you can!!

  • @QAYNATSHAMA
    @QAYNATSHAMA 1 year ago +1

    ❤❤❤❤❤

  • @sin3divcx
    @sin3divcx 2 months ago +1

    But accuracy = 1 may be due to overfitting..

    • @dsbrain
      @dsbrain  2 months ago

      Absolutely. This was just a simple project. I'd be glad to hear from you after you explore this further.
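      As a quick sanity check on that point, here is a minimal sketch (assuming scikit-learn's bundled Iris dataset; the n_neighbors=5, test_size, and random_state values are illustrative choices, not taken from the video) comparing a single test-set score against 5-fold cross-validation, which gives a less optimistic and more stable estimate:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load the 150-sample Iris dataset (3 classes, 4 features)
X, y = load_iris(return_X_y=True)

# Hold out 20% as a test set, stratified so class ratios are preserved
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
test_acc = knn.score(X_test, y_test)

# 5-fold cross-validation on the training set: averages over several
# splits instead of trusting one lucky (or unlucky) hold-out
cv_scores = cross_val_score(knn, X_train, y_train, cv=5)
print(f'Single test-set accuracy: {test_acc:.3f}')
print(f'5-fold CV accuracy: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}')
```

      On a dataset as small and clean as Iris, a perfect test-set score is not unusual in itself; the spread across the CV folds is a better signal of whether the model is genuinely overfitting.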

  • @tradingtigers6134
    @tradingtigers6134 11 months ago

    Bro I want this project bro, I have to submit a mini project in two days, please help bro

  • @dsbrain
    @dsbrain  1 year ago

    I'm correcting a mistake I made in the video here! The n_neighbors value is not selected based on the number of classes.
    Here are a few considerations:
    Odd vs. Even:
    For binary classification problems, it's often recommended to use an odd number for n_neighbors to avoid ties.
    For multiclass classification, you might want to choose a value that doesn't result in ties as well.
    Rule of Thumb:
    A common rule of thumb is to start with sqrt(N), where N is the number of data points. This can provide a good balance between overfitting and underfitting.
    Cross-Validation:
    Use cross-validation to evaluate different values of n_neighbors. This helps you assess how well the model generalizes to new, unseen data.
    Plotting the performance metrics (e.g., accuracy, F1-score) against different values of n_neighbors can help you visualize the optimal choice.
    Domain Knowledge:
    Consider the nature of your data. If there are clear patterns or structures, you might choose a smaller n_neighbors. If the data is noisy or has a lot of outliers, a larger n_neighbors might be more robust.
    Experimentation:
    Try different values and see how they perform. You can use a loop to iterate over a range of values and evaluate the model's performance on a validation set.
    For example, in Python (with the imports the snippet needs):

        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score

        for n in range(1, 21):  # Try n_neighbors from 1 to 20
            knn = KNeighborsClassifier(n_neighbors=n)
            scores = cross_val_score(knn, X_train, y_train, cv=5, scoring='accuracy')
            print(f'n_neighbors={n}, Mean Accuracy: {scores.mean()}')
    Remember, there's no one-size-fits-all answer. It's often a balance, and the best value may depend on the specific characteristics of your dataset.
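    The considerations above can be sketched end to end on the Iris data itself (a hedged example using scikit-learn's bundled dataset; the odd-k range 1 to 21 is an illustrative choice, not from the video): start from the sqrt(N) rule of thumb, then let cross-validation pick the actual value.

```python
import math

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Rule-of-thumb starting point: sqrt(N) neighbors (~12 for 150 samples)
start_k = round(math.sqrt(len(X)))
print(f'sqrt(N) starting point: {start_k}')

# Evaluate a range of odd k values (odd to reduce ties) with 5-fold CV
results = {}
for k in range(1, 22, 2):
    knn = KNeighborsClassifier(n_neighbors=k)
    results[k] = cross_val_score(knn, X, y, cv=5).mean()

# Pick the k with the best mean cross-validated accuracy
best_k = max(results, key=results.get)
print(f'Best k: {best_k}, CV accuracy: {results[best_k]:.3f}')
```

    Plotting the `results` dict (k on the x-axis, mean accuracy on the y-axis) gives the visualization mentioned above; in practice the sqrt(N) value and the CV winner often land in the same neighborhood on a dataset this small.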

  • @sravanihoney-b4f
    @sravanihoney-b4f 1 month ago

    Url