Build Your Own Object Detection System with Machine Learning

  • Published: Nov 4, 2024

Comments • 52

  • @dhruv.pandey93
    @dhruv.pandey93 3 years ago +2

    Finally some good news for Raspberry Pi. The UI looks very intuitive. Good utilities for both beginners and experts. I would definitely love to give it a try. Hopefully you won't start charging for these services soon.

  • @BotBytesHQ
    @BotBytesHQ 3 years ago +3

    This is excellent... I have been working on improvising remote monitoring using IP cameras and have already completed the whole framework, but I wanted a smarter and more dynamic way to run inference and recognize objects in the video/photo. I can say I am super delighted to use Edge Impulse in my project and build what I need.

  • @jalls
    @jalls 2 years ago +1

    Just tried it and it worked... Great platform @edge impulse. I can't wait to implement this. So many ideas come to mind 🤯

  • @grunftz
    @grunftz 3 years ago +2

    Love this guy's energy :D Thanks for the video!!!

  • @imranarshad755
    @imranarshad755 3 years ago +2

    Will give this a go when I get time! Thanks buddy, love the plant!

  • @foxcorgi8941
    @foxcorgi8941 2 years ago +1

    Thanks for your website, this is very well explained and very easy to use.

  • @aiforgreen7640
    @aiforgreen7640 2 years ago

    Thanks for this! We used it for our school project!

    • @aiforgreen7640
      @aiforgreen7640 2 years ago

      Check out our project: ruclips.net/video/TZJxIuQ1EXw/видео.html

  • @chinchinchin695
    @chinchinchin695 3 years ago +2

    Amazing approach

  • @warricksmythevideo
    @warricksmythevideo 3 years ago

    Absolute f***ing genius!! Thank you so much!!

  • @Truthseeker98-e3k
    @Truthseeker98-e3k 2 days ago

    Is it possible to detect human vs. non-human with this process? And if I don't want to use a Pi, what is the next best option for that?

  • @adityagupta-hm2vs
    @adityagupta-hm2vs 3 years ago

    Wow! This is just amazingly smooth to install and use! Great!

  • @rajmeetsingh1625
    @rajmeetsingh1625 6 months ago

    Thank you. How can we do segmentation? Any tutorials?

  • @bassguitarist2686
    @bassguitarist2686 1 year ago

    Can you give a tutorial on how to use the model in Angular or React once it's made? I see you did something towards the end of the video; is that code downloadable?

  • @BobBeatski71
    @BobBeatski71 3 years ago +1

    That is really impressive.

  • @jimysaiid8329
    @jimysaiid8329 11 months ago

    If we have the RGB data, does this work well with the ESP32-CAM?

  • @efwan5825
    @efwan5825 2 years ago +1

    Can the Raspberry Pi Pico perform and function in the same way that the Raspberry Pi in this video does???

  • @pekaway
    @pekaway 3 years ago

    Great! We will test it soon.

  • @jarrettgross9472
    @jarrettgross9472 1 year ago +1

    "edge-impulse-linux --clean" does not work for me... do you have any ideas why?

  • @ricardostocker2012
    @ricardostocker2012 4 months ago

    Conclusion
    Implementing logarithmic and optimized calculations on edge devices with limited resources can significantly improve efficiency and performance. By following these strategies, you can develop robust and efficient solutions that make the most of limited hardware capabilities.

  • @emeobioh7950
    @emeobioh7950 1 year ago

    Is there a way to trigger something like a servo or an LED when a specific object is detected?

  • @naim9878
    @naim9878 2 years ago

    After I get this model, how do I connect it to, for example, a cashier system, so that detected objects are sent into a system that already has a database??

  • @DarpaProperty
    @DarpaProperty 2 years ago

    Can I count the number of cars in traffic with this? Will the frame rate on a Pi 4 make it miss any cars?

  • @ssalman5876
    @ssalman5876 11 months ago

    Is there any code required to run the model on the Raspberry Pi... rather than using commands?

  • @AliDaGrate
    @AliDaGrate 1 year ago

    If I wanted to hire someone to do things like this, what is their title? Example: Coder, Engineer, UX Designer?

  • @小寶906
    @小寶906 1 year ago

    I want to change it so that pressing a button takes a picture and then recognizes it. How do I change the program?

  • @BellaCharlote
    @BellaCharlote 5 months ago

    Thank you very much.

  • @ahmedalmunla6866
    @ahmedalmunla6866 2 years ago

    I am a total beginner, what should I change if I use Windows instead of Linux?

  • @protoTYPElab44
    @protoTYPElab44 3 years ago +1

    Hi, just wondering if you could make a tutorial about creating an event when an object is detected, like triggering an LED, a relay, or a servo, something like that. Anyway, the UI is very cool and lit!!! Also, very simple to connect on my Pi.

    • @emeobioh7950
      @emeobioh7950 1 year ago

      Yes ... did you find out how to do this?

  • @ManuelHernandez-zq5em
    @ManuelHernandez-zq5em 3 years ago

    What do you suggest for a development box to run & debug a Python program that tests an edge-impulse-linux trained model (.eim) along with OpenCV before deploying it to a Raspberry Pi 4 device? My Windows 10 development box cannot run edge-impulse-linux, and I am also having issues using an edge-impulse-linux .eim model on Ubuntu 20.04 in WSL2. Again, any suggestions on setting up an ideal development box to run & debug edge-impulse-linux? :)

    • @janjongboom7561
      @janjongboom7561 3 years ago

      Hi Manuel, a Linux VM should do the trick. This runs on Linux x86 as well.

    • @ManuelHernandez-zq5em
      @ManuelHernandez-zq5em 3 years ago

      @@janjongboom7561, it also works well in Ubuntu 20.04 inside WSL2! Awesome! I installed both the Edge Impulse CLI and the Edge Impulse Python package through pip3 install for my Python code. Then I used Visual Studio Code Remote (to run my program in Ubuntu inside WSL2) and it worked smoothly! No issues after that! :D Thank you for your support!

  • @SuperLazyCat
    @SuperLazyCat 1 year ago

    How can I do this locally? I tried the website but it won't let me go past the impulse design, even though I followed the steps.

  • @Felipe.N.Martins
    @Felipe.N.Martins 3 years ago

    Very nice!!

  • @ManuelHernandez-zq5em
    @ManuelHernandez-zq5em 3 years ago +1

    It works beautifully! Thank you! :D I need more samples on how I can integrate this into a Python program (I am barely learning how to program in Python too). I need my Python program to send the detected objects (and accuracy levels) to another device like an Arduino or a Windows 10 PC, perhaps through UART. Where can I get such sample code or learning material?

    • @janjongboom7561
      @janjongboom7561 3 years ago +2

      Hi, github.com/edgeimpulse/linux-sdk-python has some examples of invoking models from Python, and from there you can use whatever libraries (e.g. pySerial for UART, see pyserial.readthedocs.io/en/latest/shortintro.html) to communicate with other devices.
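
      A minimal sketch of that flow, assuming the edge_impulse_linux Python SDK's ImageImpulseRunner and pySerial; the model path, serial port, and "label,score" message format below are hypothetical:

      # pip3 install edge_impulse_linux pyserial opencv-python
      import cv2
      import serial
      from edge_impulse_linux.image import ImageImpulseRunner

      MODEL_PATH = "modelfile.eim"      # hypothetical path to the downloaded .eim model
      SERIAL_PORT = "/dev/ttyUSB0"      # hypothetical UART port of the Arduino

      ser = serial.Serial(SERIAL_PORT, 115200, timeout=1)

      with ImageImpulseRunner(MODEL_PATH) as runner:
          runner.init()                             # load the model and its metadata
          cap = cv2.VideoCapture(0)                 # first attached camera
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              # OpenCV gives BGR; convert to RGB and extract features sized for the model
              img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
              features, _ = runner.get_features_from_image(img)
              res = runner.classify(features)
              # object detection models return bounding boxes with labels and scores
              for bb in res["result"].get("bounding_boxes", []):
                  msg = f"{bb['label']},{bb['value']:.2f}\n"
                  ser.write(msg.encode("utf-8"))    # send "label,score" over UART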

    • @ManuelHernandez-zq5em
      @ManuelHernandez-zq5em 3 years ago

      Is it possible to install edge_impulse_linux on a Windows 10 machine? If it is, can you please share the link with the steps to follow? :)

    • @janjongboom7561
      @janjongboom7561 3 years ago

      @@ManuelHernandez-zq5em It's not. But you can download the .tflite file (from the Dashboard), then use the tflite Python package to run inferencing.
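
      For reference, a rough sketch of running a downloaded .tflite file with the TensorFlow Lite interpreter; the file name, input size, and float32 preprocessing below are assumptions (a quantized model would need int8 inputs instead):

      import numpy as np
      from PIL import Image
      import tensorflow as tf   # or: from tflite_runtime.interpreter import Interpreter

      interpreter = tf.lite.Interpreter(model_path="model.tflite")  # hypothetical file from the Dashboard
      interpreter.allocate_tensors()
      inp = interpreter.get_input_details()[0]
      out = interpreter.get_output_details()[0]

      # resize the image to the model's expected input shape, e.g. (1, 96, 96, 3)
      h, w = inp["shape"][1], inp["shape"][2]
      img = np.array(Image.open("test.jpg").resize((w, h)), dtype=np.float32) / 255.0
      interpreter.set_tensor(inp["index"], np.expand_dims(img, axis=0))
      interpreter.invoke()
      print(interpreter.get_tensor(out["index"]))   # raw model output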

    • @ManuelHernandez-zq5em
      @ManuelHernandez-zq5em 3 years ago

      ​@@janjongboom7561:
      After I run my Python program (VSCode > WSL2 > Ubuntu > VcXsrv, where the OpenCV imshow does display the image) and make the model executable (chmod +x .eim), I get the following error:
      File "/home/winlinuxuser/projects/classify-vehicle/test-numpy.py", line 20, in countAxles
      model_info = runner.init()
      File "/home/winlinuxuser/.local/lib/python3.8/site-packages/edge_impulse_linux/image.py", line 19, in init
      model_info = super(ImageImpulseRunner, self).init()
      File "/home/winlinuxuser/.local/lib/python3.8/site-packages/edge_impulse_linux/runner.py", line 30, in init
      self._runner = subprocess.Popen([self._model_path, socket_path], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
      File "/usr/lib/python3.8/subprocess.py", line 858, in __init__
      self._execute_child(args, executable, preexec_fn, close_fds,
      File "/usr/lib/python3.8/subprocess.py", line 1704, in _execute_child
      raise child_exception_type(errno_num, err_msg, err_filename)
      OSError: [Errno 8] Exec format error: '/home/winlinuxuser/projects/classify-vehicle/coin-detector.eim'
      Any ideas? :D

  • @vladyslavbykov987
    @vladyslavbykov987 2 years ago

    Hi! This is an awesome video! Is it possible to train a model to recognize, for example, a car and a damaged car?

    • @janjongboom7561
      @janjongboom7561 2 years ago

      Yeah sure, although if you know the car will be in the frame you can do a normal image classifier instead of object detection. Much easier to label and train.

  • @waynehami7062
    @waynehami7062 2 years ago

    Can you actually use object tracking to track a human being?

  • @errrbrrr3821
    @errrbrrr3821 1 year ago

    Can somebody help me? I'm facing an error. When I run the edge-impulse-linux-runner it says 'Failed to run impulse Error'.

    • @EdgeImpulse
      @EdgeImpulse 1 year ago

      Hi! Thank you for reaching out, please head over to our forum at forum.edgeimpulse.com where we can help you in more detail! :)

  • @ghaferhadhri3548
    @ghaferhadhri3548 2 years ago

    Please, how do I connect Edge Impulse to a Raspberry Pi?

  • @timothymalche8907
    @timothymalche8907 2 years ago

    Excellent

  • @revietech5052
    @revietech5052 3 years ago

    Can you create models optimized for TPU devices?

    • @janjongboom7561
      @janjongboom7561 3 years ago +1

      Hey, yeah! Go to Dashboard and you can find the TensorFlow SavedModel there. You can run that from any environment that supports TPUs, e.g. TF from Python.
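
      A rough sketch of loading that SavedModel in Python; the directory name and the 96x96 RGB input shape below are assumptions, so check the printed signature for the real tensor names and shapes:

      import numpy as np
      import tensorflow as tf

      model = tf.saved_model.load("saved_model")        # directory exported from the Dashboard
      infer = model.signatures["serving_default"]       # default serving signature
      print(infer.structured_input_signature)           # shows the expected input name/shape/dtype

      # hypothetical single 96x96 RGB frame, normalized to [0, 1]
      frame = np.random.rand(1, 96, 96, 3).astype(np.float32)
      input_name = list(infer.structured_input_signature[1].keys())[0]
      outputs = infer(**{input_name: tf.constant(frame)})
      print(outputs)                                    # dict of output tensors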

  • @Yvon-
    @Yvon- 3 years ago

    Why is this a 22-minute ad?

  • @Jan12700
    @Jan12700 3 years ago

    Really? This f*cking long vid is an ad?