YOLOv8 | Object Detection on a Custom Dataset using YOLOv8

  • Published: 9 Jan 2023
  • YOLOv8
    Ultralytics YOLOv8, developed by Ultralytics, is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection, image segmentation and image classification tasks.
    Official YOLOv8 github repo: github.com/ultralytics/ultral...
    #######################################################
    For queries: Comment in comment section or you can mail me at aarohisingla1987@gmail.com
    #######################################################
    #YOLOv8
    #yolo
    #objectdetection
  • Science

Comments • 302

  • @shravanacharya4376
    @shravanacharya4376 1 year ago +12

    I have gone through your various tutorials, and I can guarantee 100% that everyone will understand this concept. Thank you so much, you're doing an amazing job.

    • @CodeWithAarohi
      @CodeWithAarohi  1 year ago

      Glad to hear that!

    • @FREEFIREGAMER-iv8dx
      @FREEFIREGAMER-iv8dx 7 months ago

      Ma'am, please give me code for how to extract the detected objects in an image and save them as separate image files. For example, if we detect 5 objects in an image using YOLOv8 and that image is saved to a file, I need the detected objects separated. Please help, ma'am. @@CodeWithAarohi

  • @PRIYAINTOUCH
    @PRIYAINTOUCH 1 year ago +3

    Excellent, elite and eloquent content and crystal-clear explanations. If possible, provide object detection with voice or sound output, i.e. the object name.

  • @sushantparajuli611
    @sushantparajuli611 1 year ago +3

    Wow!! You are brilliant, ma'am. Up to date every time. Thank you so much for the invaluable pieces of information.

  • @neeraj.kumar.1
    @neeraj.kumar.1 1 year ago +1

    Thanks Aarohi
    It was as simple as your previous videos.

  • @hamidddshekoohiii8267
    @hamidddshekoohiii8267 1 year ago

    You explained it plainly. Thank you so much 👍👍

  • @Rishu_Dakshin
    @Rishu_Dakshin 7 months ago

    I'm new to YOLO, and with the help of your video I was able to run the code. Thank you for your efforts.

  • @JasonsFun17
    @JasonsFun17 1 year ago

    It's a great video. I have tried it out and it's working like a charm. Thank you

  • @oykuecekoken3350
    @oykuecekoken3350 5 months ago

    Thank you for the video. I want to show the timer value in the bounding box for each introduced object, how should I do it?

  • @hamidraza1584
    @hamidraza1584 3 months ago

    Your videos are very informative for understanding deep learning models and neural networks. Lots of love from Lahore, Pakistan.

  • @jonatapaulino
    @jonatapaulino 1 year ago

    Hey, thanks for the tips. Where is this images/1.jpg file? I downloaded an image and ran the code, but I couldn't get the image recognized.

  • @kosttavmalhotra5899
    @kosttavmalhotra5899 4 months ago

    Ma'am, is there any video on your channel which shows how to install Ultralytics, which folders to use, and how to deal with the Ultralytics environment?

  • @angelospapadopoulos7679
    @angelospapadopoulos7679 1 year ago

    Amazing and up to date as always!

  • @UR3C00L
    @UR3C00L 1 year ago

    Great video! Do you know how to choose a specific model to train (i.e. yolov8s or yolov8m) without passing pretrained weights?

    • @CodeWithAarohi
      @CodeWithAarohi  1 year ago

      Yes, you can do that. Get the yaml file of that model; it is available in the YOLOv8 GitHub repo. Then change the number of classes and leave everything else as it is.
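
      A minimal sketch of that approach with the ultralytics Python API (the yaml name and custom_data.yaml are assumptions for illustration):

      from ultralytics import YOLO

      # Build the yolov8s architecture from its yaml (random weights, no pretrained checkpoint)
      model = YOLO("yolov8s.yaml")

      # Train from scratch on a custom dataset described by custom_data.yaml
      model.train(data="custom_data.yaml", epochs=100, imgsz=640)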

  • @cyberhard
    @cyberhard 1 year ago

    As usual, great job!

  • @DhavalShukla-mc4os
    @DhavalShukla-mc4os 2 months ago +2

    I am trying to do a traffic light detection project on Google Colab, and during training I am encountering path-related errors. In my .yaml file I have the configuration of the train, val, and test folders, each containing images and labels folders, with paths like /content/drive/MyDrive/dataset/train/images and the same for /content/..../train/labels. What exactly should the paths be for it to run without errors? Would you have an idea? Any suggestions from anyone?

  • @user-lz2lu7yj4x
    @user-lz2lu7yj4x 1 year ago

    Can you please explain why "background" appears in the confusion matrix even though I don't have a class called background? Was it trained with other classes?

  • @NA-cw4pj
    @NA-cw4pj 1 year ago +2

    Hi, incredible explanation and video. I have some questions if you don't mind:
    1. Does this work for a video, for example, and in that case, should I train the model on small parts of videos?
    2. If I want to detect a lot of things in a video of a city, should I train the model several times, once for each class (object)?
    Thank you so much!

    • @CodeWithAarohi
      @CodeWithAarohi  1 year ago +3

      1- YOLOv8 is trained on images only. Train your model on images and then you can detect objects in videos as well. 2- Create a dataset for all the objects at once, and then train your model on that dataset. There is no need to train a model for each object separately.
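
      A minimal sketch of that workflow with the ultralytics API (the dataset yaml and video file name are placeholders):

      from ultralytics import YOLO

      # Train once on an image dataset whose yaml lists all object classes
      model = YOLO("yolov8n.pt")
      model.train(data="custom_data.yaml", epochs=50, imgsz=640)

      # The same trained model then runs frame-by-frame inference on a video
      model.predict("city_traffic.mp4", save=True)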

  • @aymeneboucha4974
    @aymeneboucha4974 6 months ago

    Amazing video, everything is clear.

  • @gareven
    @gareven 13 days ago

    Thank you for such a great video! Can you please also make a video to show how to implement focal loss function to YOLOv8?

  • @Sandykrang
    @Sandykrang 1 year ago +1

    Hi, great video. Can we get test metrics on the test data here, like the classification reports we use for CNNs?

  • @user-is5fz3jw2c
    @user-is5fz3jw2c 9 months ago

    Aarohi ma'am, I have a dataset of chest X-rays. I want to predict active TB and latent TB, but the other 2 classes don't have annotations. How should I approach this?

  • @dheerajvasudevaraovelaga6006
    @dheerajvasudevaraovelaga6006 6 months ago +1

    I really liked the content and the way you explained it. Is it always required to have a text file for every single image? Doesn't that make it very hard, considering people often only have the images? Can you share more about how you created that custom dataset so that it could be trained with YOLOv8?

    • @CodeWithAarohi
      @CodeWithAarohi  6 months ago +1

      Yes, you always need annotations for each image in txt format. This is a mandatory step for using this algorithm.

  • @jassimelaouni8940
    @jassimelaouni8940 1 year ago

    Great explanation! Thank you!

  • @shinwarikhan4677
    @shinwarikhan4677 1 year ago

    Thank you so much, ma'am. I learn a lot from your videos. Thanks a lot! 💗💗💗

  • @rakeshbullet7363
    @rakeshbullet7363 3 months ago

    Can you please share the link for the GitHub repo shown in the video? I could not locate it in the description of the video, unless I missed it.

  • @aasheesh6001
    @aasheesh6001 5 months ago

    Thanks for this video

  • @sahilkadu9679
    @sahilkadu9679 1 year ago

    Great video & great explanation, ma'am! 🙌 Does it require a GPU to run? Because I tried running it in a Jupyter notebook & the kernel kept restarting 🙁

  • @Developer_Lop_Lop
    @Developer_Lop_Lop 7 months ago

    Thanks for your video, madam. Hope you have a great day.

  • @ismailidowu7746
    @ismailidowu7746 1 year ago

    Never mind my previous question; I found the solution. Thanks.

  • @mayaltaee2963
    @mayaltaee2963 3 months ago

    @CodeWithAarohi Hello, I trained YOLOv8 (detect) on a custom dataset. Now how can I assess the model on a test dataset to get recall, precision, mAP, the confusion matrix, curves, and accuracy?

  • @goksel9908
    @goksel9908 1 year ago +1

    I'm going to do multi-label classification (9 classes). Do I have to create text files for the labels, or add CSV files to each of the train and val folders? I'm confused because I put all the images into folders named after the classes they belong to, and I thought classification models don't need CSV or text files because of this.
    Thank you in advance.

  • @boazmwubahimana4702
    @boazmwubahimana4702 1 year ago

    Actually, I started watching this video 23 minutes after it was uploaded; this is amazing. When are you going to release the seg, det, and cls training videos you mentioned at the end of this video? We'll buy you a coffee one day!

    • @CodeWithAarohi
      @CodeWithAarohi  1 year ago +1

      Hi, glad you liked my video. I will release the image segmentation on custom dataset video in the next 3-4 hours :)

    • @boazmwubahimana4702
      @boazmwubahimana4702 1 year ago

      @@CodeWithAarohi I can't express how thankful I am; best of luck overall! Rise and prosper in this year 2023. I need to tell others about this work!

    • @CodeWithAarohi
      @CodeWithAarohi  1 year ago +2

      @@boazmwubahimana4702 ruclips.net/video/myle_dNJjqg/видео.html

    • @boazmwubahimana4702
      @boazmwubahimana4702 1 year ago

      @@CodeWithAarohi thanks

  • @createfun1106
    @createfun1106 3 months ago

    Very nice tutorial. Thanks 😊

  • @jamesroy9027
    @jamesroy9027 6 months ago

    Thank You so much 🤗🤗🤗

  • @LolLoloilol
    @LolLoloilol 4 months ago

    Hello ma'am,
    the video was very helpful; thank you for making such good content.
    Can you explain how layer freezing works and how we can do it on our pretrained YOLO model? It would be quite helpful for us to understand. Thank you.

  • @zaidbilakhia6312
    @zaidbilakhia6312 4 months ago

    Hello, with GPU (with CUDA) it doesn't work, but with CPU it works.
    NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build)

  • @ShivamRana-wv6uo
    @ShivamRana-wv6uo 1 year ago

    After using the task=predict command it shows "448x640 10 humans, 16.0ms", but the output image is not shown. I am running it in PyCharm.

  • @dcdales
    @dcdales 9 months ago

    Gonna lay out a problem I encountered and its solution. Great tutorial by the way, thanks so much!
    Problem: "illegal hardware instruction"
    Solution: Update macOS. (Nope, that's just a temporary fix. The actual solution: type 'deactivate' to leave the environment, then open the environment again with 'source myenv/bin/activate'. Works for me now.)

  • @Marketblank
    @Marketblank 4 months ago

    Hi, thank you for the valuable information. My question: which tools did you use for labeling? Thank you.

    • @CodeWithAarohi
      @CodeWithAarohi  4 months ago +1

      Sometimes I use the labelImg tool and sometimes I work with Roboflow.

  • @user-bw4mj8uk4r
    @user-bw4mj8uk4r 4 months ago

    Hi, I'm using a YOLOv8 model for object detection on Colab with a public Roboflow dataset, detecting defects. Please tell me how to split the dataset: train/valid, or train/valid and test too? What is the minimum number of epochs for a good mAP? Please reply. Also, how do I change the hyperparameters?

  • @hiteshsingh1039
    @hiteshsingh1039 3 months ago

    Hi Aarohi, could you please also make videos on productionizing models, covering the different ways to do it?

  • @Rishu_Dakshin
    @Rishu_Dakshin 7 months ago

    Hello, thank you very much for your reply. I also want to know how to capture the accuracy of the model. Information like
    (how many images were tested,
    how many were clear,
    how many were not clear,
    model | accuracy % | images tested) needs to be captured. Can you please help me with this?

  • @aguspray
    @aguspray 8 months ago

    What annotation tool did you use?

    • @CodeWithAarohi
      @CodeWithAarohi  8 months ago

      I took this dataset from Roboflow Universe. But if you want to annotate, then you can use tools like labelImg. You can also use Roboflow to annotate your dataset.

  • @sanathspai3210
    @sanathspai3210 4 days ago

    Hi Aarohi. It is a very good video, and one request: could you create a playlist going through the papers from YOLOv1 to v9 (the present version)? It would be very, very beneficial.

  • @ismailidowu7746
    @ismailidowu7746 1 year ago +4

    Thank you so much, Aarohi, for your excellent explanation. You have saved me a lot of time. Quick question: I need to get the exact pixel location (say, 300x200) of the objects detected by YOLO. Do you know how to go about it, please? Any code example would be highly appreciated.

  • @aviralkatiyar2538
    @aviralkatiyar2538 1 year ago

    Again, a really nice video. But from next time, could you please share the Jupyter notebook as well? That would be a big help.

  • @victorarayal
    @victorarayal 1 year ago

    Thanks!
    Is it possible to change the size of the images at training and set a custom one?

  • @vishalpahuja2967
    @vishalpahuja2967 1 year ago

    Hi Aarohi, can you make a video on detecting objects in floor plan images using YOLOv8? I cannot find any resource for that. It would be really helpful if you made one.

  • @arka6501
    @arka6501 1 year ago

    "yolo is not recognized as an internal or external command, operable program or batch file" - I am facing this problem when I try to execute the yolo command. Please give me some advice on how to solve it.

  • @oxynofiring
    @oxynofiring 1 month ago +2

    How can I reuse this trained model by saving it?

  • @user-lv9lj4mm5p
    @user-lv9lj4mm5p 1 year ago

    Hi, great video! I have a question: in object detection, I only have one training class, for example "dog". The training results are not good and the label boxes are all the same size. How should I improve it?

    • @CodeWithAarohi
      @CodeWithAarohi  1 year ago

      I need more details about your dataset to suggest anything.

    • @user-lv9lj4mm5p
      @user-lv9lj4mm5p 1 year ago

      @@CodeWithAarohi Object penct 1.5%

  • @hbrt10
    @hbrt10 1 year ago +1

    Hi, thank you for the documentation.
    I have a problem with predicting images. I trained my model and tried to predict on a grayscale image, but I got this error: ValueError: axes don't match array.
    What should I do? I must predict on grayscale images.

    • @CodeWithAarohi
      @CodeWithAarohi  1 year ago

      Your model should be trained on grayscale images if you want to make predictions on grayscale images, because color images have 3 channels and grayscale images have 1 channel.
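
      A minimal sketch of one common workaround, replicating the single grayscale channel into three channels before prediction (the weights path and image name are placeholders):

      import cv2
      from ultralytics import YOLO

      model = YOLO("runs/detect/train/weights/best.pt")  # assumed path to the trained weights

      gray = cv2.imread("images/sample_gray.jpg", cv2.IMREAD_GRAYSCALE)
      bgr = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)  # stack the single channel into 3 channels

      results = model(bgr, save=True)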

  • @yeongnamtan
    @yeongnamtan 1 year ago

    Thank you. How can we incorporate a counter into detect so it shows us how many images there are? In YOLOv5 we could edit the detect.py script, but for YOLOv8 it is just a one-liner.

    • @CodeWithAarohi
      @CodeWithAarohi  1 year ago

      You can still clone the YOLOv8 GitHub repo and run the commands like before. I will explain in upcoming videos.

    • @yeongnamtan
      @yeongnamtan 1 year ago

      @@CodeWithAarohi thank you very much

  • @mayurirakhonde7615
    @mayurirakhonde7615 1 year ago

    Thanks for the excellent explanation. I have one doubt: what are mAP50 and mAP50-95? Which should we measure for accuracy purposes?

    • @CodeWithAarohi
      @CodeWithAarohi  1 year ago

      mAP50 is the mean Average Precision at a 50% IoU (Intersection over Union) threshold. IoU is a measure of the overlap between the predicted bounding box and the ground truth bounding box. A threshold of 50% means that the predicted bounding box is considered a correct detection if it overlaps with the ground truth bounding box by at least 50%.
      mAP50-95 is the mean Average Precision averaged over the range of IoU thresholds from 50% to 95%, with a step size of 5%.
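
      A minimal sketch of reading both values after validation with the ultralytics API (the weights path and data yaml are assumptions):

      from ultralytics import YOLO

      model = YOLO("runs/detect/train/weights/best.pt")  # assumed path to the trained weights
      metrics = model.val(data="custom_data.yaml")       # runs validation on the val split

      print(metrics.box.map50)  # mAP at IoU threshold 0.50
      print(metrics.box.map)    # mAP averaged over IoU thresholds 0.50-0.95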

  • @nuhmanpk3082
    @nuhmanpk3082 1 year ago

    Great video.
    What format do I need to follow for dataset preparation? The same as YOLOv5/v7: images -> train, val; labels -> train, val?

    • @CodeWithAarohi
      @CodeWithAarohi  1 year ago

      Yes, exactly
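
      A minimal sketch of that layout and the data yaml that points at it (folder and class names are illustrative):

      dataset/
        images/
          train/   # .jpg files
          val/
        labels/
          train/   # one .txt per image, same base name
          val/

      # custom_data.yaml
      path: dataset
      train: images/train
      val: images/val
      names:
        0: person
        1: car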

    • @nuhmanpk3082
      @nuhmanpk3082 1 year ago

      @@CodeWithAarohi Can you make a video on how to use an ONNX format file or a Core ML file on mobile devices for live detection with the v8 version?

  • @soravsingla6574
    @soravsingla6574 7 months ago

    Code with Aarohi is the best YouTube channel for Artificial Intelligence #CodeWithAarohi

  • @shivangichaudhary2262
    @shivangichaudhary2262 7 months ago

    Ma'am, how can we optimize the hyperparameters of YOLOv8?

    • @CodeWithAarohi
      @CodeWithAarohi  7 months ago

      docs.ultralytics.com/modes/train/#arguments

  • @thenextlevelclub5120
    @thenextlevelclub5120 8 months ago

    Ma'am, can you do the same video with video data rather than image data? Just changing the dataset to video would be helpful.

  • @sofgril246
    @sofgril246 5 months ago

    Thank you so much, ma'am. Your instruction is very clear and easy to understand.
    But I don't know how to interpret the results: the confusion matrix, the charts (box loss etc.).
    Is there any standard documentation to follow?
    Thank you

    • @CodeWithAarohi
      @CodeWithAarohi  5 months ago +1

      Confusion Matrix:
      True Positive (TP): Correctly predicted positive instances.
      True Negative (TN): Correctly predicted negative instances.
      False Positive (FP): Incorrectly predicted positive instances.
      False Negative (FN): Incorrectly predicted negative instances.
      Accuracy: (TP + TN) / (TP + TN + FP + FN)
      Precision: TP / (TP + FP)
      Recall (Sensitivity): TP / (TP + FN)
      F1 Score: 2 * (Precision * Recall) / (Precision + Recall)
      Use these metrics to gauge the model's performance, considering the balance between precision and recall.
      Loss Charts (e.g., Box Loss):
      Training Loss: Measures how well the model is learning during training. A decrease indicates learning.
      Validation/Test Loss: Indicates how well the model generalizes to new data. Monitor for overfitting (training loss significantly lower than validation loss).
      Understanding these metrics helps you assess the model's accuracy, ability to identify positives/negatives, and potential overfitting.
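
      A minimal sketch of these formulas in Python (the counts are placeholder values read off a confusion matrix):

      # Placeholder counts
      tp, tn, fp, fn = 80, 40, 10, 20

      accuracy  = (tp + tn) / (tp + tn + fp + fn)
      precision = tp / (tp + fp)
      recall    = tp / (tp + fn)          # sensitivity
      f1        = 2 * precision * recall / (precision + recall)

      print(f"accuracy={accuracy:.3f} precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")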

    • @sofgril246
      @sofgril246 5 months ago

      @@CodeWithAarohi Thank you ma'am!!

  • @Sunil-ez1hx
    @Sunil-ez1hx 2 months ago

    Nice

  • @nayantharabs9850
    @nayantharabs9850 1 month ago

    Could you please do a tutorial on how to use YOLO for object detection in cases where the objects you want to detect are not in the pretrained dataset? As in, using the pretrained model for feature extraction and detecting the custom objects.

  • @srinidhijala3688
    @srinidhijala3688 6 months ago

    Thanks a lot, ma'am.

  • @afriquemodel2375
    @afriquemodel2375 7 months ago

    Is it possible to convert YOLOv8 to a TensorFlow TF2 .pb file?

  • @hemanthsrivathsav
    @hemanthsrivathsav 7 months ago

    I trained the YOLOv8 model, but is there any way I can download the trained model?

    • @CodeWithAarohi
      @CodeWithAarohi  7 months ago +1

      After training, your model is stored in the runs folder. And if you want to use pretrained models, you can get them from the YOLOv8 GitHub repo.

  • @adnanemehdaoui5487
    @adnanemehdaoui5487 2 months ago

    How can we use multiple yaml files to train YOLO?

  • @imenselmi9230
    @imenselmi9230 1 year ago

    Can you make a video about model inference using NVIDIA Triton Server with YOLO, and how to optimize the model using Triton?

  • @fabiodagostino7529
    @fabiodagostino7529 3 months ago

    Can you explain the label annotations in detail? Is it x, y of the top-left corner plus normalized width and height?

    • @CodeWithAarohi
      @CodeWithAarohi  2 months ago

      x-coordinate of the center: This is the x-coordinate of the center of the bounding box relative to the width of the image.
      y-coordinate of the center: This is the y-coordinate of the center of the bounding box relative to the height of the image.
      Width of the bounding box: This is the width of the bounding box relative to the width of the image.
      Height of the bounding box: This is the height of the bounding box relative to the height of the image.
      These coordinates are often normalized to the range [0, 1], where (0, 0) represents the top-left corner of the image, and (1, 1) represents the bottom-right corner.
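
      A minimal sketch of converting a pixel-space box (x_min, y_min, box_w, box_h) into one YOLO label line (all numbers are placeholders):

      img_w, img_h = 1280, 720                           # image size in pixels
      x_min, y_min, box_w, box_h = 100, 200, 300, 150    # pixel-space bounding box
      class_id = 0

      x_center = (x_min + box_w / 2) / img_w
      y_center = (y_min + box_h / 2) / img_h
      w_norm = box_w / img_w
      h_norm = box_h / img_h

      # One line of the .txt label file for this image
      print(f"{class_id} {x_center:.6f} {y_center:.6f} {w_norm:.6f} {h_norm:.6f}")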

  • @siddharthabhat2825
    @siddharthabhat2825 9 months ago +1

    How do I get other metrics like accuracy, specificity and sensitivity?
    Will they be stored anywhere, or if not, how do I get the TP, FP values so that any metric can be calculated?

    • @CodeWithAarohi
      @CodeWithAarohi  9 months ago

      To calculate metrics such as accuracy, specificity, and sensitivity, you'll need a confusion matrix which we already have.
      Accuracy = (True Positives + True Negatives) / Total Predictions
      Sensitivity = True Positives / (True Positives + False Negatives)
      Specificity = True Negatives / (True Negatives + False Positives)
      Precision = True Positives / (True Positives + False Positives)

    • @siddharthabhat2825
      @siddharthabhat2825 9 months ago

      @@CodeWithAarohi Thanks a lot for replying!!
      Yes, I can calculate the final values manually. But if I want to draw a graph/curve, how can I extract the values at every epoch? Also, how do I plot the ROC curve?

    • @CodeWithAarohi
      @CodeWithAarohi  9 months ago

      @@siddharthabhat2825 Use the results.txt file. All the results are saved there, and using that file you can plot the graphs for all the epochs.
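
      A minimal sketch of plotting per-epoch curves from the results file that recent ultralytics versions write into the run folder as results.csv (the path and column names are assumptions; some versions pad the header names with spaces):

      import pandas as pd
      import matplotlib.pyplot as plt

      df = pd.read_csv("runs/detect/train/results.csv")
      df.columns = df.columns.str.strip()   # remove any padding in the header names

      plt.plot(df["epoch"], df["train/box_loss"], label="train box loss")
      plt.plot(df["epoch"], df["val/box_loss"], label="val box loss")
      plt.xlabel("epoch")
      plt.ylabel("loss")
      plt.legend()
      plt.show()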

    • @siddharthabhat2825
      @siddharthabhat2825 9 months ago

      @@CodeWithAarohi results.csv gives precision, recall, mAP & other loss values. Is it possible to get the TP, FP, TN, FN values?

  • @mwtest-ty4ro
    @mwtest-ty4ro 1 year ago

    Great video. One question: where/how did you get the labels.cache file?

    • @CodeWithAarohi
      @CodeWithAarohi  1 year ago

      When you start training, it generates labels.cache within the first few seconds.

    • @mwtest-ty4ro
      @mwtest-ty4ro 1 year ago

      @@CodeWithAarohi Thank you for your response. Unfortunately I am getting the following error:
      [Errno 2] No such file or directory: 'C:\\Users\\USER\\PycharmProjects\\Yolo8\\Training\\pistol\\train\\labels.cache'
      Any clue on how I can solve this? Thanks.

  • @jeffreyeiyike122
    @jeffreyeiyike122 10 months ago +1

    @CodeWithAarohi, please can you help with directions on how to get features from YOLOv8, from a layer before the output layer where I can find image features, bounding box features, and confidence/probability before they reach the output layer? I need it for training another model. Thank you.

    • @CodeWithAarohi
      @CodeWithAarohi  10 months ago

      Using this code, you can get boxes, class labels and probability scores:

      from ultralytics import YOLO

      # Load a model
      model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

      # results = model("images/person.jpg", save=True)
      results = model("images/1.jpg", save=True)

      class_names = ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush']

      for result in results:
          boxes = result.boxes           # Boxes object for bbox outputs
          probs = result.probs           # Class probabilities for classification outputs
          cls = boxes.cls.tolist()       # Convert the class-index tensor to a list
          xyxy = boxes.xyxy              # boxes in (x1, y1, x2, y2) format, (N, 4)
          xywh = boxes.xywh              # boxes in (x, y, w, h) format, (N, 4)
          conf = boxes.conf              # confidence scores
          print(cls)
          for class_index in cls:
              class_name = class_names[int(class_index)]
              print("Class:", class_name)

  • @JustShorts-7
    @JustShorts-7 1 year ago

    Ma'am, what if the source is a webcam or external camera? Can it detect objects in real time, and can we use this model separately to detect from a webcam with bounding boxes in real time?
    Please answer this.

    • @CodeWithAarohi
      @CodeWithAarohi  1 year ago

      Yes, you can use a webcam as input to your object detection model.
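
      A minimal sketch of live webcam inference with the ultralytics API (the weights path is an assumption; source=0 selects the default webcam):

      from ultralytics import YOLO

      model = YOLO("runs/detect/train/weights/best.pt")  # assumed path to the trained weights

      # source=0 reads from the default webcam; show=True draws the bounding boxes live
      model.predict(source=0, show=True)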

  • @allea-zb7kl
    @allea-zb7kl 3 days ago

    Hi, I have a question. If I divide the dataset into training and testing only, is it necessary to run the validation part? And if not, how do I find out the mAP during inference?

    • @CodeWithAarohi
      @CodeWithAarohi  3 days ago

      Skipping validation means you won't be monitoring the model's performance during training.
      You should only skip this part if you're confident in your training procedure and the dataset quality.
      Regarding inference and calculating mAP without a validation set:
      1- Run inference on your test dataset and get the predicted bounding boxes.
      2- Calculate Intersection over Union (IoU) between predicted bounding boxes and ground truth bounding boxes.
      3- Use the IoU values to compute Precision-Recall curves for each class.
      4- Compute Average Precision (AP) for each class.
      5- Calculate mAP by taking the mean of AP across all classes.
      Here is the code for validation; you can try with it: github.com/ultralytics/ultralytics/blob/main/ultralytics/models/yolo/detect/val.py
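
      A minimal sketch of evaluating directly on the test split with the ultralytics val mode (assuming the data yaml defines a test: entry; the weights path is a placeholder):

      from ultralytics import YOLO

      model = YOLO("runs/detect/train/weights/best.pt")  # assumed path to the trained weights

      # split="test" evaluates against the test: entry of the data yaml instead of val:
      metrics = model.val(data="custom_data.yaml", split="test")
      print(metrics.box.map50, metrics.box.map)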

    • @allea-zb7kl
      @allea-zb7kl 2 days ago

      @@CodeWithAarohi If I split the data into train, val, and test sets, should I use the mAP from the validation set as the benchmark, or do I also need to calculate the mAP from the test set? By the way, I am conducting research on object detection and I am still confused about which mAP should be used as the benchmark: the mAP from the validation set or the mAP from the test set? Please help me.

  • @nehavora5146
    @nehavora5146 1 year ago

    Nice tutorial, thank you so much. Can you suggest how to save the names of the objects detected in an image or video to a txt file using YOLOv8?

    • @CodeWithAarohi
      @CodeWithAarohi  1 year ago +1

      You can try something like this:

      import cv2
      import numpy as np

      # Load the model via OpenCV's DNN module (assumes Darknet-style .weights/.cfg files)
      net = cv2.dnn.readNet("path/to/yolov8.weights", "path/to/yolov8.cfg")

      class_labels = ["class0", "class1"]  # your class names, in training order

      # Load the input image or video
      cap = cv2.VideoCapture("path/to/input.mp4")

      # Loop over each frame
      while True:
          ret, frame = cap.read()
          if not ret:
              break
          height, width = frame.shape[:2]

          # Prepare the frame for inference
          blob = cv2.dnn.blobFromImage(frame, 1/255.0, (416, 416), swapRB=True, crop=False)
          net.setInput(blob)
          outs = net.forward(net.getUnconnectedOutLayersNames())

          # Loop over each detection
          for out in outs:
              for detection in out:
                  scores = detection[5:]
                  class_id = np.argmax(scores)
                  confidence = scores[class_id]
                  if confidence > 0.5:
                      # Extract the bounding box for the detection
                      x, y, w, h = (detection[0:4] * np.array([width, height, width, height])).astype("int")
                      # Open the text file and append the class label and confidence score
                      with open("detections.txt", "a") as f:
                          f.write("{} {:.2f}\n".format(class_labels[class_id], confidence))

          # Display the frame with detections
          cv2.imshow("frame", frame)
          if cv2.waitKey(1) & 0xFF == ord("q"):
              break

      cap.release()
      cv2.destroyAllWindows()

  • @ishakag262
    @ishakag262 3 months ago

    I am getting Box(P = 0, R = 0, mAP50 = 0, mAP50-95 = 0), all zeros, at every epoch while training on my dataset. Any solution for this issue?

  • @connectrRomania
    @connectrRomania 1 year ago

    The kernel dies when training on custom data. Any advice on how to reduce the model parameters to avoid CUDA problems?

    • @CodeWithAarohi
      @CodeWithAarohi  1 year ago +1

      results = model.train(data="custom_data.yaml", epochs=20, workers=1, batch=8, imgsz=640)

  • @adamgilbert6802
    @adamgilbert6802 1 year ago

    Hi, great video! I have one problem where when I run the predict command, no results are saved to a runs\detect\predict folder. If you have any suggestions as to how to fix this issue I would much appreciate your help.

    • @CodeWithAarohi
      @CodeWithAarohi  1 year ago

      Please share the command you are using to make predictions

    • @adamgilbert6802
      @adamgilbert6802 1 year ago

      @@CodeWithAarohi the command I am using is: !yolo task=detect mode=predict model=runs/detect/train2/weights/best.pt conf=0.25 source=test_images

    • @aliel7770
      @aliel7770 1 year ago

      You should add save=True at the end of the command line.

    • @adamgilbert6802
      @adamgilbert6802 1 year ago

      @@aliel7770 thank you very much, it now saves👍

  • @thoufeekbaber8597
    @thoufeekbaber8597 7 months ago

    nice video

  • @user-mx8rf2uw6v
    @user-mx8rf2uw6v 4 months ago

    Can we run live inference with YOLOv8 without using the ultralytics library, like we used to with previous YOLO versions? I want to set up the codebase instead of running it directly through the ultralytics library.

    • @CodeWithAarohi
      @CodeWithAarohi  4 months ago

      Yes, you can run YOLOv8 for live inference without relying on the Ultralytics library, but it requires setting up the environment, handling the model loading, inference, and post-processing manually.

  • @raehanfelda8956
    @raehanfelda8956 11 months ago

    I tried YOLOv8 for some time, and when I tried tuning hyperparameters using Ray Tune it showed an error, even though I followed the steps provided by Ultralytics. Can you make a video about tuning YOLOv8 hyperparameters using Ray Tune?

  • @nurnajiha6013
    @nurnajiha6013 5 months ago

    Can I know where to get the yolov8_pretrained script that you used in the video?

    • @CodeWithAarohi
      @CodeWithAarohi  5 months ago

      I am sorry, I am not sure which folder you are talking about. Can you share the timestamp where I discussed it?

  • @AJ-wf3wp
    @AJ-wf3wp 5 months ago

    Ma'am, how do you have the predict and other folders?
    Please tell me how to set up the environment from the start.

    • @CodeWithAarohi
      @CodeWithAarohi  5 months ago +1

      Run these commands one by one:
      # In the command below, 3.9 is my Python version, with which I want to create a separate environment
      py -3.9 -m venv myvenv
      myvenv\Scripts\activate
      pip install ultralytics
      pip install jupyter notebook
      # To open jupyter notebook type:
      jupyter notebook

    • @AJ-wf3wp
      @AJ-wf3wp 5 months ago

      @@CodeWithAarohi Ma'am, while installing labelImg I am facing the issue "ERROR: Failed building wheel for PyQt5-sip".

  • @MRDAM-zn4qf
    @MRDAM-zn4qf 6 months ago

    Hello ma'am, I would like to know if we can add custom logic to it, and if yes, then how? I'm currently working on a project where I want to detect traffic violations such as without_helmet, and then, if a rider falls into that category, I want to capture the image of the number plate corresponding to that rider and apply OCR. Please help us!

    • @CodeWithAarohi
      @CodeWithAarohi  6 months ago

      Yes, you can implement this. You need 2 models: the first model detects traffic violations, and if a violation is detected, you run a second model that is trained to detect license plates. If a license plate is detected, use EasyOCR to read the details of that plate. I will try to do a video on this.
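
      A minimal sketch of that two-model-plus-OCR pipeline (both weight files, the image name, and the class setup are assumptions for illustration):

      import cv2
      import easyocr
      from ultralytics import YOLO

      violation_model = YOLO("violation_best.pt")   # assumed model trained on e.g. without_helmet
      plate_model = YOLO("plate_best.pt")           # assumed model trained on license plates
      reader = easyocr.Reader(["en"])

      frame = cv2.imread("rider.jpg")
      violations = violation_model(frame)[0]

      if len(violations.boxes) > 0:                 # a violation was detected in this frame
          plates = plate_model(frame)[0]
          for box in plates.boxes.xyxy.tolist():
              x1, y1, x2, y2 = map(int, box)
              crop = frame[y1:y2, x1:x2]            # crop the detected plate
              for _, text, conf in reader.readtext(crop):
                  print("Plate text:", text, "confidence:", conf)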

    • @MRDAM-zn4qf
      @MRDAM-zn4qf 6 months ago

      @@CodeWithAarohi Thank you, ma'am. There's another violation, a crosswalk violation: if a vehicle is standing on a crosswalk it should be detected as a violation. Can I implement something like that with Mask R-CNN or YOLOv8?

  • @vincegallardo1432
    @vincegallardo1432 7 months ago

    Hello, good morning. I just wanted to ask how I can compile YOLOv8 to use it offline. Is it possible to compile it with TensorFlow?

    • @CodeWithAarohi
      @CodeWithAarohi  7 months ago

      Using YOLOv8 offline doesn't involve a traditional compilation process as you might see with some programming languages. Instead, you need to ensure you have all the necessary dependencies installed and the pretrained weights downloaded; YOLOv8 will then run inference on your local machine. I never tried it with TensorFlow, as the official repo uses PyTorch.

  • @abdelhamidazanzal4403
    @abdelhamidazanzal4403 1 year ago

    Thanks for the tutorial... when are you planning to make a video on custom image segmentation?

  • @RAZZKIRAN
    @RAZZKIRAN 1 year ago

    Handwritten text from images using CNN or transfer learning? Please upload.

  • @pragneshkumar2850
    @pragneshkumar2850 5 months ago

    I keep getting this error when I try to import ultralytics in the Python notebook, although the CLI commands are working and I am able to see the predictions: AttributeError: 'OutStream' object has no attribute 'reconfigure'.
    Any solutions?

    • @CodeWithAarohi
      @CodeWithAarohi  5 months ago

      I am not sure about the error, but you can try upgrading Jupyter Notebook or reinstalling it.

  • @hamidullahturkmen1782
    @hamidullahturkmen1782 1 year ago

    Thank you, Aarohi. Could you make a video on integrating YOLOv8 into a Flask app?

  • @harshsonawane578
    @harshsonawane578 1 year ago

    I am trying this in Google Colab. I just want to use this YOLOv8 model and show the results,
    but the runs folder is not getting created and I can't find a way to store and display the results.
    If I set "save_conf=True" then the runs folder gets created, and cropped images get stored in the prediction folder.
    Please help.

    • @harshsonawane578
      @harshsonawane578 1 year ago

      So with the CLI I don't know how to save it,
      but in Python it saves after passing save=True to model.predict().

  • @balajipetchetti0106
    @balajipetchetti0106 8 months ago

    How do I create bounding boxes for trees and label them?

    • @CodeWithAarohi
      @CodeWithAarohi  8 months ago

      You can use data annotation tools like labelImg or Labelbox, etc., to annotate your images.

  • @muhammadsabrimas2016
    @muhammadsabrimas2016 11 months ago

    How do I evolve YOLOv8 hyperparameters with Ray Tune?

  • @nouamanesouadi7187
    @nouamanesouadi7187 11 months ago

    How can I run detection from a Python script? The console shows me that the image is processed, but I can't find where it's saved.

    • @CodeWithAarohi
      @CodeWithAarohi  11 months ago

      By default it gets stored in the "runs" folder. If your image is not there, then use save=True in your command.

  • @TugceKeskin-px5wh
    @TugceKeskin-px5wh 1 year ago

    Hi, I am at the first step and trying to run predict from the CLI, but I cannot get the "results saved to runs\detect\..." message. It takes the image, model, etc., but does not give the result. Why?

  • @irani9957
    @irani9957 1 year ago

    Great video, thanks. Is it possible to learn detection and counting for YOLOv6/7/8?

  • @ameer-alahmadi
    @ameer-alahmadi 1 year ago

    Thanks a lot for your great tutorial. But can you please explain how to convert YOLOv8 to a TFLite model and use it with a Raspberry Pi + Coral USB Accelerator?

    • @CodeWithAarohi
      @CodeWithAarohi  1 year ago +1

      Will cover in upcoming videos

    • @ameer-alahmadi
      @ameer-alahmadi 1 year ago

      @@CodeWithAarohi I'll be so grateful, and I'll appreciate it.

  • @germancruz6618
    @germancruz6618 9 months ago

    Dear Aarohi, how can I activate NVIDIA CUDA for YOLOv8?

    • @CodeWithAarohi
      @CodeWithAarohi  9 months ago +1

      # Train the model with 2 GPUs
      results = model.train(data='coco128.yaml', epochs=100, imgsz=640, device=[0, 1])
      If you are using a single GPU, then write device=0.

  • @shinwarikhan4677
    @shinwarikhan4677 1 year ago

    Hello ma'am! I have an issue: when I run it on 2 classes after training, the class names and confidence levels show correctly, but when I run it on 5 classes it shows the class indexes instead of the class names. I checked my yaml file and everything is perfect. I will be thankful.

    • @CodeWithAarohi
      @CodeWithAarohi  1 year ago +1

      I am not sure what the issue is. I would need to see the related files to find out the problem.

  • @FREEFIREGAMER-iv8dx
    @FREEFIREGAMER-iv8dx 7 months ago +1

    Ma'am, please give me code for how to extract the detected objects in an image and save them as separate image files. For example, if we detect 5 objects in an image using YOLOv8 and that image is saved to a file, I need the detected objects separated. Please help, ma'am.

  • @sumanpahari4220
    @sumanpahari4220 1 year ago

    Very, very useful. Madam, can you share the notebook and custom dataset? It would be helpful.

    • @CodeWithAarohi
      @CodeWithAarohi  1 year ago

      The dataset was downloaded from Roboflow, and the commands are here: github.com/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb

  • @karthikvb1293
    @karthikvb1293 1 year ago

    Great content. Can you do a video on YOLOX?

  • @thilakcm1527
    @thilakcm1527 1 year ago

    Can I use .json files instead of .txt as the annotation files when training YOLOv8 on a custom dataset?

    • @CodeWithAarohi
      @CodeWithAarohi  1 year ago

      No, YOLOv8 does not support JSON files as annotation files for training on a custom dataset.

    • @thilakcm1527
      @thilakcm1527 1 year ago

      @@CodeWithAarohi Okay. Can you show a sample of the text file, so I can try to create txt files like that for my training?

    • @CodeWithAarohi
      @CodeWithAarohi  1 year ago

      @@thilakcm1527 For every image, create a separate txt file, and inside that txt file the data will look like this: 0 0.5202702702702703 0.5524861878453039 0.9594594594594594 0.7403314917127072
      class_id x_center y_center width height
      You can create a dataset in this format using the labelImg tool, and you can install it with pip install labelImg.