How To Export and Optimize an Ultralytics YOLOv8 Model for Inference with OpenVINO | Episode 9

  • Published: Nov 7, 2024

Comments • 100

  • @blueblow7762
    @blueblow7762 6 months ago +2

    Worked perfectly, thank you so much!
    Inference time was around 4500 ms and the new one is around 500 ms.
    My presentation is in about 4 hours hahahah
    this just saved my life

  • @Ingroth-automation
    @Ingroth-automation 1 year ago +3

    Worked perfectly. The old inference time was about 2400 ms, the new time is around 900 ms.

    • @Ultralytics
      @Ultralytics  1 year ago +2

      Happy Learning! Glad your issue is resolved :)

    • @ArumugaTamilSelvan
      @ArumugaTamilSelvan 7 months ago +1

      Happy to hear a solution that worked... it makes me more confident, thank you... 😊

    • @Ultralytics
      @Ultralytics  3 months ago

      That's awesome to hear! 😊 If you have any more questions or need further assistance, feel free to ask. Happy coding! 🚀

  • @TheodoreBC
    @TheodoreBC 3 months ago

    Sounds cool, bro! But what about optimizing YOLOv8 for non-Intel hardware? Are there alternatives that don't tie ya down to specific chips?

    • @Ultralytics
      @Ultralytics  3 months ago

      Absolutely! YOLOv8 can be optimized for various hardware platforms beyond Intel. You can export your model to formats like TensorRT for NVIDIA GPUs, ONNX for general CPU optimization, and CoreML for Apple devices. Each format has its own advantages depending on your deployment needs. Check out the full list of supported export formats and their benefits here: docs.ultralytics.com/modes/benchmark/. 🚀
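      For reference, here's a minimal sketch of those exports with the Python API (assuming the standard `yolov8n.pt` checkpoint; the TensorRT export additionally requires an NVIDIA GPU):
      ```python
      from ultralytics import YOLO

      model = YOLO("yolov8n.pt")
      model.export(format="engine")  # TensorRT, for NVIDIA GPUs
      model.export(format="onnx")    # ONNX, for broad CPU deployment
      model.export(format="coreml")  # CoreML, for Apple devices
      ```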

  • @DoomsdayDatabase
    @DoomsdayDatabase 11 months ago +2

    Hi, can we use it to run inference on a live webcam? If so, can you provide a tutorial for that?
    Thanks in advance!

    • @Ultralytics
      @Ultralytics  11 months ago +1

      Yes, you can perform inference with a live webcam. Simply use `source=0` for the webcam. The sample code is provided below.
      ```
      yolo task=detect mode=predict source=0 model='yolov8n_openvino_model/'
      ```
      Thanks

  • @Smitthy-k9d
    @Smitthy-k9d 3 months ago

    Loving the series so far! So, once you optimize the YOLOv8 model with OpenVINO, how much of a speed and performance boost can we realistically expect? Trying to figure out if it's worth all the hassle or just hype!?

    • @Ultralytics
      @Ultralytics  3 months ago

      Thank you for your kind words! 😊 Optimizing YOLOv8 with OpenVINO can indeed provide significant performance boosts. Typically, you can expect up to a 3x speedup on CPUs and up to a 5x speedup on GPUs, depending on your hardware and specific use case. For more details, check out our OpenVINO Optimization Guide docs.ultralytics.com/guides/optimizing-openvino-latency-vs-throughput-modes/. It's definitely worth exploring if you need faster inference times! 🚀 If you have any specific questions or run into issues, feel free to share more details.
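      If you want to measure the gain on your own machine, a rough timing sketch could look like this (it assumes you've already run the export from the video, so `yolov8n_openvino_model/` exists, and that `bus.jpg` is a local test image):
      ```python
      import time

      from ultralytics import YOLO

      for weights in ("yolov8n.pt", "yolov8n_openvino_model/"):  # PyTorch vs. OpenVINO export
          model = YOLO(weights)
          model("bus.jpg")  # warm-up run, excluded from timing
          t0 = time.perf_counter()
          for _ in range(20):
              model("bus.jpg")
          print(weights, f"{(time.perf_counter() - t0) / 20 * 1000:.1f} ms/frame")
      ```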

  • @Philips-i2x
    @Philips-i2x 1 year ago +1

    Does exporting to a lower precision format in OpenVINO equal quantization, in the sense that the activations and weights are all updated?

    • @Ultralytics
      @Ultralytics  1 year ago

      Quantization is a separate step, so it might not be the same. What do you mean by lower format? Do you mean exporting the model with a small imgsz?

  • @BuseYaren
    @BuseYaren 6 months ago +1

    Hello, I get the following error when converting PyTorch format to ONNX. Is the problem related to the versions I am using?
    ImportError: DLL load failed while importing _pyopenvino: The specified module could not be found

    • @Ultralytics
      @Ultralytics  6 months ago +1

      Seems like the issue is related to the OpenVINO modules. Can you please upgrade the Ultralytics package? If the issue still exists, you can ask your questions at: github.com/ultralytics/ultralytics/issues/new

    • @BuseYaren
      @BuseYaren 6 months ago +1

      @@Ultralytics oh okay! Thanks a lot, I will try right now

    • @Ultralytics
      @Ultralytics  6 months ago

      @@BuseYaren 😃

  • @warrior_1309
    @warrior_1309 1 year ago +3

    Could you please help or provide info about how to use it on a GPU?

    • @Ultralytics
      @Ultralytics  1 year ago +1

      You can use the `device="cuda"` argument to enable GPU inference with Ultralytics YOLOv8.
      For more information, you can check our Docs: docs.ultralytics.com/modes/predict/#inference-arguments

  • @AlirezaFazli-z5o
    @AlirezaFazli-z5o 2 months ago

    Is OpenVINO only available for the Ultralytics library and YOLO, or can I use it for any custom model I have?
    As you said yourself, this method can speed up the model even on CPUs, and we don't always have a GPU at hand.

    • @Ultralytics
      @Ultralytics  2 months ago

      OpenVINO is a versatile toolkit that can optimize and deploy models from various deep learning frameworks, not just Ultralytics YOLO. You can use it with custom models from frameworks like PyTorch, TensorFlow, ONNX, and more. For detailed instructions, check out the OpenVINO documentation docs.ultralytics.com/integrations/openvino/. 🚀
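      For a custom (non-YOLO) PyTorch model, a minimal conversion sketch with OpenVINO's own converter could look like this (API as of `openvino>=2023.1`; the tiny model and input shape below are placeholders for your own):
      ```python
      import torch
      import openvino as ov

      torch_model = torch.nn.Sequential(  # placeholder for your custom model
          torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()
      ).eval()

      ov_model = ov.convert_model(torch_model, example_input=torch.rand(1, 3, 224, 224))
      ov.save_model(ov_model, "custom_model.xml")  # writes the .xml and .bin pair

      compiled = ov.Core().compile_model(ov_model, "CPU")
      output = compiled(torch.rand(1, 3, 224, 224).numpy())  # run one inference
      ```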

  • @FaizanUlHaq-mf3zt
    @FaizanUlHaq-mf3zt 9 months ago +1

    I've done the same thing. After the export it says that model.onnx will make predictions on images of size 800x800, yet even after resizing my images to 800x800 it still gives me an error about receiving some other size.
    Error:
    The input tensor size is not equal to the model input type: got [1,3,640,640] expecting [1,3,800,800].

    • @Ultralytics
      @Ultralytics  9 months ago

      It appears that you should use 'imgsz=800' in the prediction command for accurate predictions, i.e.:
      `yolo predict model="path/to/model.onnx" source="path/to/video/file.mp4" imgsz=800`
      Thanks
      Ultralytics Team!

  • @Nitiproom
    @Nitiproom 3 months ago +1

    I use a custom Roboflow dataset and trained in Colab. When I use OpenVINO version 2024 and follow your code, the detection result is always 1.0. How do I solve this error? Please!

    • @Ultralytics
      @Ultralytics  3 months ago +1

      Hi! It sounds like there might be an issue with the model conversion or inference process. First, ensure you're using the latest versions of `torch` and `ultralytics`. If the problem persists, please provide more details about your setup and any error messages you see. You can also check our detailed guide on optimizing YOLOv8 with OpenVINO docs.ultralytics.com/integrations/openvino/ for additional troubleshooting tips. 😊

    • @Nitiproom
      @Nitiproom 3 months ago +1

      @@Ultralytics thank you

    • @Ultralytics
      @Ultralytics  3 months ago

      You're welcome! If you have any more questions, feel free to ask. Happy coding! 😊

  • @devrajgothi5149
    @devrajgothi5149 7 months ago +1

    Hello! I was doing object detection on a Raspberry Pi 5. I tried quantizing the model to both int8 and fp16 but didn't get any faster inference time. Can you suggest a good solution? I tried exporting the model to TFLite, ONNX, OpenVINO, and more, but the best inference time I could get was 170-200 ms. Can you tell me how to make these inference times lower? Thanks in advance.

    • @Ultralytics
      @Ultralytics  7 months ago

      What are the dimensions of the image you are using for inference, as well as the input image size for the model? Thank you.

  • @kent张
    @kent张 8 months ago +1

    Can it run on an NVIDIA GPU, or only on Intel-based hardware?

    • @Ultralytics
      @Ultralytics  8 months ago

      While it can run on an NVIDIA GPU, an Intel processor is recommended for optimized inference speed. Thanks

  • @CiDCatTV
    @CiDCatTV 1 year ago +1

    I've read that exporting a YOLOv8 model by setting the 'half' or 'int8' parameter to 'True' isn't the same as model quantization in the sense of reducing the model's weights and activations to lower bit widths. I don't really understand that. Thanks in advance!

    • @Ultralytics
      @Ultralytics  1 year ago +1

      The `half` option will produce a float16 model. The `int8` option will yield a quantized model file that can subsequently be used on edge devices.

    • @CiDCatTV
      @CiDCatTV 1 year ago +1

      @@Ultralytics Thanks for your quick reply, I got it so far! But why isn't this the same as model quantization? If I wanted to do model quantization, I would have to use additional tools or frameworks after exporting, despite using the half or int8 option, right?

    • @Ultralytics
      @Ultralytics  1 year ago +1

      Once you export the model, you can do quantization by specifying either the int8 or fp16 option. The model will undergo automatic quantization, making it compatible and functional on embedded devices.
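      As a concrete sketch with standard export arguments (the `data` argument supplies calibration images for INT8; `coco8.yaml` is just an example dataset):
      ```python
      from ultralytics import YOLO

      model = YOLO("yolov8n.pt")
      model.export(format="openvino", half=True)                     # FP16 weights
      model.export(format="openvino", int8=True, data="coco8.yaml")  # INT8 post-training quantization
      ```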

  • @anandukc4709
    @anandukc4709 5 months ago +1

    Can you explain how OpenVINO and ONNX achieve around 3x faster speed on CPU than PyTorch? What's actually happening in these exports?

    • @Ultralytics
      @Ultralytics  5 months ago

      OpenVINO and ONNX achieve around 3x faster speeds on CPUs compared to PyTorch by optimizing model inference. These exports use graph optimization techniques, including constant folding and operator fusion, to reduce computational overhead. OpenVINO further accelerates performance with hardware-specific optimizations, while ONNX Runtime uses efficient backend implementations and parallel execution. Both frameworks streamline the execution graph, reduce redundant calculations, and leverage CPU-specific optimizations to enhance speed.
      For more details, you can check the export feature available at: github.com/ultralytics/ultralytics/blob/main/ultralytics/engine/exporter.py
      Thanks,
      Ultralytics Team!
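      As a small illustration of the runtime side, this is roughly how ONNX Runtime is asked for those graph optimizations (standard `onnxruntime` API; assumes you've already exported `yolov8n.onnx`):
      ```python
      import onnxruntime as ort

      opts = ort.SessionOptions()
      opts.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL  # constant folding, fusion, etc.
      opts.intra_op_num_threads = 4  # parallel execution within operators

      session = ort.InferenceSession("yolov8n.onnx", opts, providers=["CPUExecutionProvider"])
      print([inp.name for inp in session.get_inputs()])  # e.g. ['images']
      ```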

  • @ArbaazShaikh-y3t
    @ArbaazShaikh-y3t 1 year ago

    Hello Buddy,
    I want to know, can I use YOLOv8 for commercial use?
    If YES, then what's the process?
    If NO, then what's the reason, and what's the solution for using it commercially?

    • @Ultralytics
      @Ultralytics  1 year ago +1

      Yes, you can use YOLOv8 for commercial purposes. We offer both open-source licensing under AGPL-3.0 and an Enterprise License for maximum flexibility in commercial product development. For more details, please visit ultralytics.com/license.

    • @ArbaazShaikh-y3t
      @ArbaazShaikh-y3t 1 year ago

      Thanks. Can you get me someone who can guide me through the overall process? Or a video on the topic of "How to get a YOLOv8 commercial license" would be helpful. @@Ultralytics

    • @Ultralytics
      @Ultralytics  3 months ago

      I'm glad to help! For detailed guidance on obtaining a YOLOv8 commercial license, please visit our licensing page at ultralytics.com/license. Unfortunately, we can't provide private support or create custom videos on request, but our documentation and resources should cover everything you need. 😊

  • @AkshataPalsule
    @AkshataPalsule 10 months ago +1

    Can we convert our custom YOLO model to OpenVINO?

    • @Ultralytics
      @Ultralytics  10 months ago

      If you have a fine-tuned model of Ultralytics YOLOv8, you can follow the steps outlined in the video to convert the YOLO model to OpenVINO.
      Thank you.
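      In code form, the same steps look roughly like this (the weight and image paths are hypothetical):
      ```python
      from ultralytics import YOLO

      model = YOLO("path/to/best.pt")  # your fine-tuned YOLOv8 weights
      model.export(format="openvino")  # creates a 'best_openvino_model/' folder
      results = YOLO("path/to/best_openvino_model/")("path/to/image.jpg")
      ```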

  • @user-firebender
    @user-firebender 1 month ago

    I tried using OpenVINO to speed up my custom-trained object detection on video. It is faster, yes, but now it cannot detect the object it was trained on.
    Could there be a loss in knowledge when converting the model format, or is it because of some other factor?

    • @Ultralytics
      @Ultralytics  1 month ago +1

      It sounds like there might be an issue with the conversion process or the input data format. Ensure that the model was exported correctly and that the input size and preprocessing steps match those used during training. Also, check if the OpenVINO model files (XML and BIN) are complete and correctly loaded. For more details, you can refer to our OpenVINO integration guide docs.ultralytics.com/integrations/openvino/. If the issue persists, you might want to verify the model's performance on a few test images to isolate the problem.
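      A quick way to isolate the problem is to compare the original and exported models on the same image, for example (hypothetical paths; `imgsz` should match what you trained with):
      ```python
      from ultralytics import YOLO

      img = "path/to/test.jpg"
      for weights in ("best.pt", "best_openvino_model/"):
          r = YOLO(weights)(img, imgsz=640)[0]
          print(weights, "detections:", len(r.boxes), "classes:", r.boxes.cls.tolist())
      ```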

  • @ShridharBenni-fv1sv
    @ShridharBenni-fv1sv 9 months ago +1

    Hi,
    I want to export the yolov8n-seg custom-trained model to TFLite or ONNX with INT8 quantization, using the code below:
    model = YOLO('/content/yolov8n-seg.pt')
    model = YOLO('/content/best.pt') # load a custom trained model
    # Export the model
    model.export(format='onnx', int8=True, nms=True)
    The model exports successfully, but when I check the exported model in the Netron app, the input and output are still float32 (tensor: float32[1,3,640,640]).
    Shouldn't they be int8?

    • @Ultralytics
      @Ultralytics  9 months ago +1

      The int8 option is not applicable for ONNX; however, it is suitable for TFLite export. For further details, please refer to the export section in our documentation: docs.ultralytics.com/modes/export/#export-formats

    • @ArumugaTamilSelvan
      @ArumugaTamilSelvan 7 months ago

      Can I use the code to export to "openvino" with int8?
      @@Ultralytics

    • @Ultralytics
      @Ultralytics  3 months ago

      Yes, you can export your model to OpenVINO with INT8 quantization. Here's how you can do it:
      ```python
      from ultralytics import YOLO

      # '/content/best.pt' is the custom-trained checkpoint from the comment above
      model = YOLO('/content/best.pt')
      model.export(format='openvino', int8=True)  # INT8-quantized OpenVINO export
      ```

  • @mtg7848
    @mtg7848 4 months ago +1

    How can I make this work with a live webcam?

    • @Ultralytics
      @Ultralytics  4 months ago

      It's very simple, you can use `source=0` for the webcam and `source=1` for the external camera connected to your machine.
      Regards,
      Ultralytics Team!

    • @mtg7848
      @mtg7848 4 months ago +1

      @@Ultralytics how can I adapt my dataset to OpenVINO?

    • @Ultralytics
      @Ultralytics  4 months ago

      Well, you can simply train the Ultralytics YOLOv8 model on a custom dataset and later export it to OpenVINO for predictions :)

  • @ehshankhan7003
    @ehshankhan7003 10 months ago +1

    How do I extract the prediction labels to compare them with something else?

    • @Ultralytics
      @Ultralytics  10 months ago

      You can use the code below to extract the prediction labels:
      ```python
      from ultralytics import YOLO

      model = YOLO('yolov8n.pt')
      names = model.model.names
      im0 = 'ultralytics.com/images/bus.jpg'
      results = model.predict(im0)
      boxes = results[0].boxes.xywh.cpu()
      clss = results[0].boxes.cls.cpu().tolist()
      for cls in clss:
          print("Class Name:", names[int(cls)])
      ```
      Thanks
      Ultralytics Team!

    • @ehshankhan7003
      @ehshankhan7003 10 months ago

      @@Ultralytics I am getting this error: TypeError: 'generator' object is not subscriptable.
      from ultralytics import YOLO

      model = YOLO("yolov8n_openvino_model/")
      print("Hello")
      while True:
          results = model.predict(stream=True, show=True, source=0)  # source already set up
          boxes = results[0].boxes.xywh.cpu()
          clss = results[0].boxes.cls.cpu().tolist()

    • @Ultralytics
      @Ultralytics  3 months ago

      The error occurs because `results` is a generator when `stream=True`, and you can't directly index a generator. Instead, you should iterate over it. Here's how you can modify your code:
      ```python
      from ultralytics import YOLO

      model = YOLO("yolov8n_openvino_model/")
      print("Hello")
      while True:
          results = model.predict(stream=True, show=True, source=0)  # source already set up
          for result in results:
              boxes = result.boxes.xywh.cpu()
              clss = result.boxes.cls.cpu().tolist()
              for cls in clss:
                  print("Class Name:", model.model.names[int(cls)])
      ```
      This way, you can process each `result` from the generator individually. For more details, you can refer to our predict mode documentation docs.ultralytics.com/modes/predict/.

  • @m033372
    @m033372 4 months ago

    OpenVINO on Intel CPUs is super fast; this is probably the best option for all Intel users.

    • @Ultralytics
      @Ultralytics  4 months ago

      Thank you for your comment! 😊 We're glad to hear you're enjoying the speed of OpenVINO on Intel CPUs. If you have any specific questions or run into issues, feel free to share more details so we can assist you better. Also, make sure you're using the latest versions of `torch` and `ultralytics` for the best performance. For more information, check out our OpenVINO integration guide docs.ultralytics.com/integrations/openvino/. Happy optimizing! 🚀

  • @AxelRyder-q1b
    @AxelRyder-q1b 1 month ago

    Yo, this is lit! 💥 But what if I ain't got Intel hardware?? Can Ultralytics YOLOv8 still roll like a pro with other setups, or am I stuck in slow-mo land? 🚀 And does this OpenVINO magic mess with accuracy? Let's hear some juicy insights! 🧐

    • @Ultralytics
      @Ultralytics  1 month ago

      Hey there! 🚀 No worries if you don't have Intel hardware. Ultralytics YOLOv8 works great on various setups, including NVIDIA GPUs and CPUs. You can use formats like PyTorch, ONNX, and TensorRT for optimization. OpenVINO mainly boosts speed without compromising accuracy, so you're still getting top-notch performance. For more details, check out our OpenVINO guide docs.ultralytics.com/integrations/openvino/. Enjoy the speed! 😄

  • @ziranshuzhang6831
    @ziranshuzhang6831 1 year ago +1

    How do I export to a desired folder? Right now, the export method seems to save to a predetermined location.

    • @Ultralytics
      @Ultralytics  1 year ago +1

      The export feature saves the output file in the same location as the original model weights. If you wish to export it to a different location, you can copy the original weights file to that specific destination and then execute the export command.
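      As a workaround sketch (with a hypothetical 'my_exports/' destination folder):
      ```python
      import os
      import shutil

      from ultralytics import YOLO

      os.makedirs("my_exports", exist_ok=True)            # hypothetical destination folder
      shutil.copy("yolov8n.pt", "my_exports/yolov8n.pt")  # copy the weights there first
      YOLO("my_exports/yolov8n.pt").export(format="openvino")  # export artifacts land alongside the copy
      ```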

    • @ziranshuzhang6831
      @ziranshuzhang6831 1 year ago

      @@Ultralytics Or I can just move the exported files after exporting, but that is not really very clean. It is surprising that the export function doesn't have an output folder argument.

    • @Ultralytics
      @Ultralytics  1 year ago

      Thank you for the information. We will certainly investigate the export feature and incorporate the option to export the model based on the folder path provided by the user.

  • @Sasha-n2x
    @Sasha-n2x 3 months ago

    Considering the impressive 3x speedup with OpenVINO, are there any specific applications or industries where this optimization would make a critical difference? Also, is there potential for a similar boost with non-Intel hardware? #TechTalk #ModelOptimization

    • @Ultralytics
      @Ultralytics  3 months ago

      Absolutely! The 3x speedup with OpenVINO can be critical in industries like healthcare for real-time diagnostics, retail for inventory management, and smart cities for traffic monitoring. For non-Intel hardware, similar optimizations can be achieved using frameworks like TensorRT for NVIDIA GPUs. Check out our OpenVINO guide docs.ultralytics.com/integrations/openvino/ for more details. 🚀

  • @jingjungpractice4445
    @jingjungpractice4445 1 year ago

    If I didn't define the imgsz parameter and I use the OpenVINO model on a video resolution of 1920x1080, is that okay?

    • @Ultralytics
      @Ultralytics  1 year ago

      The choice largely hinges on the specific problem you're addressing. In an ideal scenario, an input size of 320x320 is preferable when conducting inference with OpenVINO.
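      For instance, a minimal sketch (assuming a standard `yolov8n.pt` checkpoint and a hypothetical video path) that fixes the input size at export time:
      ```python
      from ultralytics import YOLO

      model = YOLO("yolov8n.pt")
      model.export(format="openvino", imgsz=320)  # bakes a 320x320 input size into the export
      YOLO("yolov8n_openvino_model/").predict("path/to/1080p_video.mp4", imgsz=320)  # frames are resized to 320x320
      ```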

  • @adarshjha1300
    @adarshjha1300 4 months ago

    Hello! I was doing object detection on a Raspberry Pi 5. I tried quantizing the model to both int8 and fp16 but didn't get any faster inference time. Can you suggest a good solution? I tried exporting the model to TFLite, ONNX, OpenVINO, and more, but the best inference time I could get was 170-200 ms. Can you tell me how to make these inference times lower? The image size of the model is 256.

    • @Ultralytics
      @Ultralytics  4 months ago

      Hello! 👋 It sounds like you've tried several optimization techniques already. To help you better, could you please share more details, such as the specific YOLOv8 model you're using and any error messages or warnings you encountered? Also, ensure you're using the latest versions of `torch` and `ultralytics`. For Raspberry Pi, you might want to try using TensorRT, which can significantly speed up inference times on NVIDIA hardware. You can find more details on exporting to TensorRT here: TensorRT Export Guide docs.ultralytics.com/integrations/tensorrt/. If you need further assistance, please provide more details, and we'll be happy to help! 😊

    • @adarshjha1300
      @adarshjha1300 4 months ago

      @@Ultralytics I am running my model on a Coral Edge TPU device and I am unable to convert it to TensorRT. Is there any other way I can optimize my model for faster inference?

    • @Ultralytics
      @Ultralytics  3 months ago

      Got it! For Coral Edge TPU, TensorRT isn't compatible. Instead, focus on optimizing your model with TensorFlow Lite (TFLite) and ensuring it's quantized correctly for the Edge TPU. Here's a quick guide:
      1. Export to TFLite:
      ```python
      from ultralytics import YOLO
      model = YOLO("yolov8n.pt")
      model.export(format="tflite") creates 'yolov8n_float32.tflite'
      ```
      2. Quantize for Edge TPU:
      Use the Edge TPU Compiler to compile your TFLite model:
      ```sh
      edgetpu_compiler yolov8n_float32.tflite
      ```
      For detailed steps, check out our TFLite guide: TFLite Export Guide docs.ultralytics.com/integrations/tflite/. This should help you achieve faster inference on your Coral Edge TPU. 🚀

  • @chihebnouri5541
    @chihebnouri5541 7 months ago +1

    How do I output the real-time detection in Angular?

    • @Ultralytics
      @Ultralytics  7 months ago +1

      To achieve real-time YOLOv8 detection in Angular, you can follow these steps (a minimal backend sketch follows the list):
      1. Set up a backend server to run the YOLOv8 model.
      2. Establish communication between Angular and the backend.
      3. Create a component to capture video streams.
      4. Stream frames to the backend for detection.
      5. Display detection results on the front end in real-time.
      6. Continuously update the UI with new detections.
      Thanks
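      For steps 1-4, a rough backend sketch could look like the following (a non-authoritative example: the `/detect` endpoint name, port, and raw-JPEG payload format are our assumptions, and Flask, OpenCV, and NumPy must be installed):
      ```python
      import cv2
      import numpy as np
      from flask import Flask, jsonify, request
      from ultralytics import YOLO

      app = Flask(__name__)
      model = YOLO("yolov8n.pt")  # or an exported 'yolov8n_openvino_model/'

      @app.route("/detect", methods=["POST"])
      def detect():
          buf = np.frombuffer(request.data, np.uint8)  # raw JPEG bytes posted by the Angular client
          frame = cv2.imdecode(buf, cv2.IMREAD_COLOR)
          result = model(frame)[0]  # run YOLOv8 on the decoded frame
          return jsonify([
              {"cls": model.model.names[int(c)], "xyxy": box.tolist()}
              for c, box in zip(result.boxes.cls, result.boxes.xyxy)
          ])

      if __name__ == "__main__":
          app.run(port=5000)
      ```
      The Angular component would POST each captured frame to this endpoint and draw the returned boxes (steps 5-6).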

    • @chihebnouri5541
      @chihebnouri5541 7 months ago

      @@Ultralytics So, for example, I use Flask and embed the streaming in Angular, without exporting it? Then how can I deploy my model with Angular to the web so clients can use it? I don't have a strong server.

    • @Ultralytics
      @Ultralytics  3 months ago

      Yes, you can use Flask to run the YOLOv8 model and serve the results to your Angular frontend. Here’s a concise approach:
      1. Flask Backend: Set up a Flask server to handle video frames and run YOLOv8 inference. Use `streamlit` for real-time detection.
      2. Angular Frontend: Capture video streams and send frames to the Flask server via HTTP requests.
      3. Deployment: Deploy both Flask and Angular apps on a cloud platform like Heroku or AWS.
      For detailed steps on setting up real-time inference, check our guide: Streamlit Live Inference docs.ultralytics.com/guides/streamlit-live-inference/.
      If you don't have a strong server, consider using cloud services to handle the computational load.

    • @chihebnouri5541
      @chihebnouri5541 3 months ago

      @@Ultralytics thank you, now I'm stuck: how can I upload it to the cloud? With Flask and Angular?

    • @Ultralytics
      @Ultralytics  3 months ago

      You're welcome! To deploy your Flask backend and Angular frontend to the cloud, follow these steps:
      1. Flask Backend:
      - Use a cloud platform like Heroku, AWS, or Google Cloud.
      - Create a `Procfile` for Heroku or set up a Docker container for AWS/GCP.
      - Push your Flask app to the cloud repository.
      2. Angular Frontend:
      - Build your Angular app using `ng build`.
      - Deploy the build files to a cloud service like Firebase Hosting, Netlify, or AWS S3.
      For detailed deployment steps, refer to the respective cloud platform documentation. If you need more guidance, feel free to ask! 😊

  • @dderedde
    @dderedde 4 months ago

    Why does my export not have a .mapping file? Please help me!

    • @Ultralytics
      @Ultralytics  4 months ago

      Hi there! 😊 It sounds like you're encountering an issue with the export process. To help you better, could you please provide more details? Specifically, let us know the exact command you're using and any error messages you're seeing. In the meantime, make sure you're using the latest versions of `torch` and `ultralytics`. You can upgrade them with `pip install --upgrade torch ultralytics`. For more detailed guidance, check our documentation: YOLOv8 Export Guide docs.ultralytics.com/modes/export. Feel free to share more info so we can assist you further! 🚀

  • @nhattran4833
    @nhattran4833 1 year ago

    Hello, thanks for sharing. Could you make a similar video for YOLOv7?

    • @Ultralytics
      @Ultralytics  1 year ago

      Thank you for your interest! While we’re not the creators of YOLOv7, we do have comprehensive documentation for YOLOv5 and YOLOv8 available on our website. You might find it helpful for your projects. docs.ultralytics.com/

  • @Melo7ia
    @Melo7ia 25 days ago

    🎵 Curious mix à la OpenVINO groove! 💃 How does the optimization process impact the accuracy of YOLOv8 when we're pushing for that 3x speedup? Curious if anyone says there are trade-offs we'll trip over! 🔍🤔

    • @Ultralytics
      @Ultralytics  24 days ago

      Great question! Optimizing YOLOv8 with OpenVINO can significantly boost speed without major accuracy loss. However, some trade-offs might occur depending on the model and data specifics. It's always a good idea to test and validate the optimized model to ensure it meets your accuracy needs. For more details, check out our guide: docs.ultralytics.com/integrations/openvino/ 😊
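      One easy way to check is to validate both models and compare mAP, e.g. (assuming the small built-in `coco8.yaml` dataset and a prior OpenVINO export):
      ```python
      from ultralytics import YOLO

      pt_map = YOLO("yolov8n.pt").val(data="coco8.yaml").box.map               # mAP50-95, PyTorch
      ov_map = YOLO("yolov8n_openvino_model/").val(data="coco8.yaml").box.map  # mAP50-95, OpenVINO
      print(f"PyTorch {pt_map:.3f} vs OpenVINO {ov_map:.3f}")
      ```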

  • @ponjoses1833
    @ponjoses1833 7 months ago +2

    from ultralytics import YOLO
    model = YOLO("yolov8n-seg.pt")
    mode = YOLO('model/')
    result = mode("output_image.jpg")
    I have modified and tried to get a result using the above code, and I get the error below:
    File "C:\Users\Prasanth\anaconda3\envs\Openv\lib\site-packages\ultralytics\nn\autobackend.py", line 286, in __init__
    raise TypeError(f"model='{w}' is not a supported model format. "
    TypeError: model='model' is not a supported model format.
    Can you tell me how to rectify the error?

    • @Ultralytics
      @Ultralytics  7 months ago

      Below is the corrected code, which will work properly:
      ```python
      from ultralytics import YOLO
      model = YOLO("yolov8n-seg.pt")
      result = model("output_image.jpg")
      ```

  • @afriquemodel2375
    @afriquemodel2375 1 year ago +1

    A tutorial with TensorFlow 2, please!

    • @Ultralytics
      @Ultralytics  1 year ago

      Thank you for sharing your thoughts. We will certainly look into this further!

  • @gigi-oc8gn
    @gigi-oc8gn 10 days ago

    thankssss

    • @Ultralytics
      @Ultralytics  10 days ago

      You're welcome! 😊 If you have any questions or need more info, feel free to ask!

  • @m033372
    @m033372 4 months ago

    How dramatically can optimizing a YOLOv8 model with OpenVINO improve inference speed, and are there any trade-offs we should be aware of, like sacrificing accuracy or compatibility issues? 🚀

    • @Ultralytics
      @Ultralytics  3 months ago

      Great question! Optimizing a YOLOv8 model with OpenVINO can significantly boost inference speed, often achieving up to 3x CPU speedup and 5x GPU speedup. However, there are some trade-offs to consider. While the accuracy generally remains high, certain optimizations like INT8 quantization might introduce minimal accuracy loss. Compatibility can also vary depending on the target device and the specific optimizations applied. For more details, check out our comprehensive guide on optimizing with OpenVINO: docs.ultralytics.com/guides/optimizing-openvino-latency-vs-throughput-modes/. Make sure you're using the latest versions of `torch` and `ultralytics` for the best results. 🚀✨

  • @GvK-wb2nc
    @GvK-wb2nc 27 days ago

    Did not work! You lost a Sign... that is bad

    • @Ultralytics
      @Ultralytics  27 days ago

      Sorry to hear you're having trouble! Could you provide more details about the issue you're facing? I'll do my best to help. 😊