Nvidia Jetson Nano vs. Google Coral Dev board, Detailed Comparison

  • Published: 17 Aug 2024

Comments • 68

  • @Hardwareai
    @Hardwareai  4 years ago +16

    [Correction]
    There's a mix-up in the test result table (06:03): the Coral Dev Board is 20.7 ms and the NVIDIA Jetson Nano is 72 ms. The audio is correct if you listen to it, but the column order is wrong.
    The 16GB eMMC flash is only available on the production-ready Jetson Nano module, where it replaces the SD card slot. The Jetson Nano Development Kit's SoM doesn't have 16GB eMMC flash storage.

    • @Maisonier
      @Maisonier 4 years ago

      Isn't there any solution with HBM or some faster memory attached to the tensor cores?

    • @Hardwareai
      @Hardwareai  4 years ago +1

      None that I am aware of, no. The Nvidia Jetson Nano is the lowest tier in the Jetson line-up, so I don't think Nvidia plans on making a version with HBM. So for the Jetson Nano it's really all about how well you can optimize the model

    • @bossssssist
      @bossssssist 4 years ago +1

      Also around the 5 min mark with the FPS numbers; the audio says the opposite of the table

    • @ChuongNguyenPlus
      @ChuongNguyenPlus 4 years ago

      Please fix the video to avoid further confusion. This should be an easy fix.

    • @Hardwareai
      @Hardwareai  4 years ago +1

      Unfortunately it isn't... YouTube doesn't allow editing a video's content, only its soundtrack

  • @cahyawirawan
    @cahyawirawan 5 years ago +14

    Hi, you mentioned at 06:03 that the Coral Dev Board is 3 times faster than the Jetson Nano for inference, but according to your video the Coral Dev Board needs 72 ms for inference while the Jetson Nano needs only 20 ms. So which one is actually 3 times faster?

    • @Hardwareai
      @Hardwareai  5 years ago +9

      Hi. There's a mix-up in that test result table: the Coral Dev Board is 20.7 ms and the NVIDIA Jetson Nano is 72 ms. The audio is correct if you listen to it, but the column order is wrong. Thank you for pointing it out.

    • @nonamenoname2618
      @nonamenoname2618 4 years ago +11

      @@Hardwareai I suggest adding a sign or a bubble to your video noting that you mixed up the column order; most viewers will get confused

    • @Hardwareai
      @Hardwareai  4 years ago +8

      Good idea! Unfortunately I cannot correct the video now or add a sign/bubble from YouTube Studio. Maybe YouTube will grant me this privilege after I get 1k subs

  • @whoseai3397
    @whoseai3397 5 years ago +4

    The Jetson Nano Development Kit doesn't have 16GB eMMC flash. Only the Jetson Nano module has 16GB eMMC, and it costs 129 US dollars

    • @Hardwareai
      @Hardwareai  5 years ago

      Thank you for noticing this. I'm not sure how I can correct the video, but I'll make sure to put a correction statement at the top of the video description!

  • @AltMarc
    @AltMarc 5 years ago +4

    Even though I don't really know what I'm doing with ML on the Jetson Nano, the lack of memory is my biggest problem for GPU-accelerated ML. Is there a "swap system" that also works for GPUs?

    • @Hardwareai
      @Hardwareai  5 years ago +4

      Right, I also encountered memory issues during testing. Some helpful tips:
      1) Mounting swap might help. Check Jetson Hacks for how to do it: www.jetsonhacks.com/2019/04/14/jetson-nano-use-more-memory/
      2) Install a lightweight desktop environment, or get rid of the desktop altogether.
      3) Decrease the batch size for the model.
      4) Run the model using pure TensorRT (not TF-TRT).
      5) Configure the TF-TRT session with memory limits (see the sketch below): devtalk.nvidia.com/default/topic/1042936/tensorrt/trt5-0-memory-error-when-building-engine/post/5293402/#5293402
      Good luck! What model are you trying to run?
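      For tip 5, here's a minimal sketch of capping GPU memory in a TensorFlow 1.x session; the 0.5 fraction is just an example value, tune it for your model:

          import tensorflow as tf

          # Let TF grab at most half of the GPU memory and grow as needed,
          # instead of pre-allocating almost all of it at startup
          gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.5,
                                      allow_growth=True)
          config = tf.ConfigProto(gpu_options=gpu_options)

          with tf.Session(config=config) as sess:
              # run your (TF-TRT optimized) graph here
              pass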

  • @whoseai3397
    @whoseai3397 5 years ago +6

    I really like your T-shirt! :)

    • @Hardwareai
      @Hardwareai  4 years ago

      Thanks! I actually try to wear a new t-shirt for every video xD

  • @kestergascoyne6924
    @kestergascoyne6924 4 years ago +1

    Wow, thank you for this.

    • @Hardwareai
      @Hardwareai  4 years ago

      Glad to hear you found it useful!

  • @sgodsellify
    @sgodsellify 4 years ago

    Regarding on-device training (7:16): you mention low-shot learning with imprinted weights using TFLite. However, there is nothing stopping anyone from installing full TensorFlow on Google's Edge TPU board.

    • @Hardwareai
      @Hardwareai  4 years ago

      Yes, you're right! Using full TensorFlow is probably not recommended though, since the Edge TPU board's CPU is quite slow and training would take a long time. If you're interested in doing backpropagation training on-device with the Edge TPU, there is one of the newest additions to their documentation, which was not present at the time of making that video: coral.ai/docs/edgetpu/retrain-classification-ondevice-backprop/#overview. As documented there, it trains only the softmax layer of the model, using the Edge TPU for the forward pass and performing backprop only for the last layer (which is executed on the CPU), thus making the resulting model more precise than weight-imprinted models. See the sketch below for the general idea.
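      Not the Coral API itself, just a plain NumPy sketch of that last-layer scheme: the backbone stays frozen (its forward pass is what the Edge TPU accelerates), and only a softmax head on top of the extracted embeddings is trained:

          import numpy as np

          def train_softmax_head(embeddings, labels, num_classes,
                                 lr=0.01, epochs=100):
              """Train only a softmax layer on frozen-backbone embeddings."""
              n, dim = embeddings.shape
              W = np.zeros((dim, num_classes))
              b = np.zeros(num_classes)
              onehot = np.eye(num_classes)[labels]
              for _ in range(epochs):
                  logits = embeddings @ W + b
                  logits -= logits.max(axis=1, keepdims=True)  # numerical stability
                  probs = np.exp(logits)
                  probs /= probs.sum(axis=1, keepdims=True)
                  grad = (probs - onehot) / n        # dL/dlogits for cross-entropy
                  W -= lr * (embeddings.T @ grad)    # backprop stops here:
                  b -= lr * grad.sum(axis=0)         # the backbone is never updated
              return W, b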

  • @birdo1180
    @birdo1180 4 years ago +1

    Getting 15 fps with PyTorch (the same mb2-ssdlite model) converted to a TRT 6 engine on the Jetson Nano, running inference with the engine in Python. I think PyTorch takes a hit on performance, and I only need 10 fps for my use case, but I would still love to pump 30 frames per second for demos. Oddly (or maybe not), I get the exact same FPS using the PyTorch model directly instead of the TRT 6 engine.

    • @Hardwareai
      @Hardwareai  4 years ago

      If the performance is the same for the TRT engine and the original model, then there is a problem indeed. Did you build the engine with FP16? See the sketch below.
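      As a minimal sketch, assuming you convert with the torch2trt package (the model loader and input shape here are placeholders):

          import torch
          from torch2trt import torch2trt  # github.com/NVIDIA-AI-IOT/torch2trt

          model = load_my_ssdlite().cuda().eval()   # hypothetical loader
          x = torch.randn(1, 3, 300, 300).cuda()    # example input shape

          # fp16_mode builds the TensorRT engine with half-precision kernels,
          # which is where most of the speedup on Jetson usually comes from
          model_trt = torch2trt(model, [x], fp16_mode=True)

          with torch.no_grad():
              out = model_trt(x)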

  • @gheesungng1211
    @gheesungng1211 5 years ago +4

    Has anyone tried the Coral USB plugged into the Jetson Nano combo?

    • @Hardwareai
      @Hardwareai  5 years ago +4

      Yes!
      blog.usejournal.com/google-coral-edge-tpu-vs-nvidia-jetson-nano-a-quick-deep-dive-into-edgeai-performance-bc7860b8d87a
      Not surprisingly, it performs better than with the Raspberry Pi, because of USB 3.0 and (to a certain extent) the faster CPU.

  • @aabb-zz9uw
    @aabb-zz9uw 5 years ago +1

    Does the Google Coral work with a NanoPi NEO Air? I'm asking because the Raspberry Pi is too heavy for my project

    • @Hardwareai
      @Hardwareai  5 years ago +1

      In theory, yes. I checked the info on the NanoPi NEO Air and it seems to use the same architecture as the RPi 3, armv7l. In practice you'll have to try it yourself, but it's worth a shot

  • @jemo_hack
    @jemo_hack 4 years ago

    Nicely executed and great info.

    • @Hardwareai
      @Hardwareai  4 years ago

      Thanks. Glad that you found it helpful.

  • @astroboytechranger8231
    @astroboytechranger8231 4 years ago +2

    I am looking for Jetson Nano C++ programs for lane departure warning, self-driving with camera assistance, road sign recognition, safe distance estimation and warnings, and ADAS (advanced driver assistance) applications whose code is compatible with the AUTOSAR architecture. Please get in touch if someone has similar development going on

    • @Hardwareai
      @Hardwareai  4 years ago +2

      I wish I could provide more help, but I am not aware of any ready-to-go packages providing this functionality for the Jetson Nano. Also, if you're thinking about creating your own self-driving car (it sounds like that :) ), then it might be worth considering a better board, maybe the recent Xavier NX?

    • @astroboytechranger8231
      @astroboytechranger8231 4 years ago +1

      @@Hardwareai For prototyping, testing the models, and proof of concept I currently have only the Jetson Nano. I want to test there and then move to a Xavier board

  • @polydynamix7521
    @polydynamix7521 2 years ago

    What about making a board that has a Nano AND multiple Coral chips? I've already sandboxed one on Gepetto. There were no conflicts in its construction.

    • @Hardwareai
      @Hardwareai  2 years ago

      Sorry, YouTube held your comment for some reason. Anyway, while it is theoretically possible, what would the cost and use case of such a board be?

  • @salgadev
    @salgadev 4 years ago

    Dude, you messed up all the comparison slides: the screen says Coral while you say Jetson, and vice versa. Which is it?

    • @Hardwareai
      @Hardwareai  4 years ago

      The table is wrong: the Coral Dev Board is 20.7 ms and the NVIDIA Jetson Nano is 72 ms. The audio is correct if you listen to it, but the column order is wrong. I need to pin a correction at the top of the comments :)

  • @ludmilamaslova3773
    @ludmilamaslova3773 5 years ago +2

    Everything's great, thanks for the information

  • @devue4183
    @devue4183 5 years ago

    Is it possible to put a Wi-Fi dongle on the Nano? (one compatible with Ubuntu, with the right driver)

    • @Hardwareai
      @Hardwareai  5 years ago +2

      Hi! Yes, of course. Although I think there is no official list of compatible dongles as of now, I have tried 3 that I had in my office; one of them worked, albeit it was slightly unstable.

    • @billfield8300
      @billfield8300 5 years ago

      If you install the Wi-Fi card found here amzn.to/2H26b2R under the heat sink of the Jetson Nano, it will still be a bit cheaper than the Google board: $33 CAD (about $25 USD) with free shipping. It comes with antennas etc. and plugs right in. Hope that helps.

  • @Andrea-in8jd
    @Andrea-in8jd 4 years ago

    Hi, great video, very interesting. I was wondering if it would be possible to compare the Google Coral with Intel's Neural Compute Stick 2

    • @Hardwareai
      @Hardwareai  4 years ago +1

      Yes, I get that request quite a lot :) I probably won't make a comparison, but I might make a project with the Neural Compute Stick 2 in the near future

  • @John-vk1ij
    @John-vk1ij 3 years ago

    Where did you get the T-shirt...

    • @Hardwareai
      @Hardwareai  3 years ago

      The same place I get everything else: Taobao :)

  • @RyeinGoddard
    @RyeinGoddard 4 years ago

    Your benchmark table is wrong, or you mixed it up when you said it.

    • @Hardwareai
      @Hardwareai  4 years ago

      Yes, you're right. The table is wrong: the Coral Dev Board is 20.7 ms and the NVIDIA Jetson Nano is 72 ms. The audio is correct if you listen to it, but the column order is wrong. Thank you for pointing it out.

  • @hunardongsson7087
    @hunardongsson7087 4 years ago

    It's a good video, but why are you uploading in 720p?

    • @Hardwareai
      @Hardwareai  4 years ago

      Because I currently live in China :) access to YouTube and other websites is problematic and the speed is quite slow. The other reason is that I only use semi-professional recording equipment, so going up to 1080p wouldn't make much difference.

  • @Thangheo12233
    @Thangheo12233 4 years ago +1

    There's no eMMC on the Jetson Nano; you must have received some money from Nvidia

    • @Hardwareai
      @Hardwareai  4 years ago +1

      I wish I had :) It was mentioned in the comments that the Jetson Nano Compute Module does in fact have eMMC, but the module that comes with the dev board doesn't. When preparing the video I took the specs from the Compute Module description page.
      It is one of the earlier videos I made, so now I double- and triple-check videos before posting, since YouTube doesn't allow changing video content.

  • @mattizzle81
    @mattizzle81 4 years ago +2

    The Jetson Nano is just too bulky and power-hungry. A Raspberry Pi 4 + USB Edge TPU is best. I even had the Edge TPU running on a rooted Android phone.
    I wish more people were experimenting with other types of models on them. I'm interested to see whether pix2pix will run on it and how fast.

    • @Hardwareai
      @Hardwareai  4 years ago

      It might as well be! At the time of making that video the RPi 4 hadn't been released yet.
      About running the TPU on an Android phone: interesting! Do you have a link to an article on how to do it?
      Right, I also want to try more different networks, especially now that the Edge TPU compiler supports post-training quantization. Last month I was busy with freelance projects using the K210 chip, but I'm almost done with that one and am preparing to make the next video!

    • @mattizzle81
      @mattizzle81 4 years ago

      @@Hardwareai So far I haven't heard of anyone else doing it. I have only tried it on a Samsung Galaxy S7. On that phone I had root and could install Linux via the Linux Deploy app. I also had a custom kernel on the phone; not sure if that made a difference though. On the S7 it was slow, I assume because it was micro-USB OTG and not USB-C. The install script just needed a few tweaks to ignore the platform it was being installed on. It took a bit of playing around, but otherwise it was the same as installing on any other Linux.
      I assume it is also probably possible to write a libusb wrapper to get it to work as non-root, using the UserLAnd app to install Linux. However, on my current phone, a Huawei P30 Pro, Android does not recognize the Edge TPU as a USB device at all. Not sure if it is a power issue or what.
      TFLite with the GPU delegate and NNAPI on the newer phones is making me re-evaluate whether I need the Edge TPU at all, though. I am getting close to 30 fps with SSD MobileNet on the P30 Pro.
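      For anyone who wants to sanity-check numbers like that, here's a minimal sketch of timing a .tflite model with the plain Python interpreter (the model filename is a placeholder; on Android the GPU/NNAPI delegates are enabled through the TFLite Android APIs instead):

          import time
          import numpy as np
          import tensorflow as tf

          # "ssd_mobilenet.tflite" is a placeholder; any TFLite export works
          interpreter = tf.lite.Interpreter(model_path="ssd_mobilenet.tflite")
          interpreter.allocate_tensors()
          inp = interpreter.get_input_details()[0]

          # Random frame matching the model's expected shape and dtype
          if inp["dtype"] == np.uint8:
              frame = np.random.randint(0, 256, size=inp["shape"], dtype=np.uint8)
          else:
              frame = np.random.rand(*inp["shape"]).astype(np.float32)

          runs = 100
          start = time.time()
          for _ in range(runs):
              interpreter.set_tensor(inp["index"], frame)
              interpreter.invoke()
          print("approx fps:", runs / (time.time() - start))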

  • @vvxx2287
    @vvxx2287 4 years ago +1

    Your beauty has stolen my heart

  • @lakshay510
    @lakshay510 4 years ago +1

    The Nano performs poorly. When you actually try to deploy a model it will give you just 1-2 FPS, and then you wonder what the issue is. They tell you to convert your SSD or YOLO model to TensorRT, but there is no support for that: a pretrained model runs, but a model after transfer learning won't run on TensorRT, so you end up indirectly running your model on the Nano's GPU, which gives only slightly better results than a Pi 4B. Choose Nvidia if you are more into C/C++ development than Python development. TF-TRT takes quite a long time and will give you 7-8 FPS, not 20 FPS.

    • @Hardwareai
      @Hardwareai  4 years ago +1

      I really wonder what is up with the auto-moderator on YouTube... Your comment was held for review for some reason.
      Anyway, I'm making another video about the Xavier NX vs the Nano, where I will include custom model inference testing. It is possible to do in Python using TRT and PyCuda; see the sketch below. What model are you trying to run?
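      A minimal sketch of that TRT + PyCuda path, assuming a TRT 6 era implicit-batch engine with one input and one output binding ("model.engine" is a placeholder for an engine built beforehand):

          import numpy as np
          import tensorrt as trt
          import pycuda.driver as cuda
          import pycuda.autoinit  # creates a CUDA context on import

          logger = trt.Logger(trt.Logger.WARNING)
          with open("model.engine", "rb") as f, trt.Runtime(logger) as runtime:
              engine = runtime.deserialize_cuda_engine(f.read())
          context = engine.create_execution_context()

          # Page-locked host buffers and device buffers for every binding
          host_bufs, dev_bufs = [], []
          for binding in engine:
              size = trt.volume(engine.get_binding_shape(binding))
              dtype = trt.nptype(engine.get_binding_dtype(binding))
              host_bufs.append(cuda.pagelocked_empty(size, dtype))
              dev_bufs.append(cuda.mem_alloc(host_bufs[-1].nbytes))

          stream = cuda.Stream()
          host_bufs[0][:] = np.random.rand(host_bufs[0].size)  # dummy input

          cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
          context.execute_async(bindings=[int(d) for d in dev_bufs],
                                stream_handle=stream.handle)
          cuda.memcpy_dtoh_async(host_bufs[1], dev_bufs[1], stream)
          stream.synchronize()
          print(host_bufs[1][:10])  # first few raw output values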

    • @lakshay510
      @lakshay510 4 years ago +1

      @@Hardwareai I tried a retrained SSD InceptionNet, the 2018 version. I tried a lot of things and converted it to UFF, but it always failed while converting to TensorRT. I tried converting the 2017 version and it worked (I guess the process is available on the dusty-nv GitHub), but the problem with the 2017 version is that it is outdated now, and for the 2018 version the Batch Norm layers can't be interpreted. So if possible, make a video on converting the SSD/Faster R-CNN models available in the TensorFlow Model Zoo to TensorRT inference. Thanks
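      For context, the plain UFF route looks roughly like the sketch below (input/output node names and the input shape are placeholders). Note that SSD graphs additionally need graph-surgeon plugin mapping for the NMS and preprocessing ops, which is exactly where the 2018 exports tend to break:

          import uff
          import tensorrt as trt

          # Frozen TF graph -> UFF ("NMS" is a placeholder output node name)
          uff_model = uff.from_tensorflow_frozen_model(
              "frozen_inference_graph.pb", output_nodes=["NMS"])

          logger = trt.Logger(trt.Logger.WARNING)
          with trt.Builder(logger) as builder, \
                  builder.create_network() as network, \
                  trt.UffParser() as parser:
              parser.register_input("Input", (3, 300, 300))  # CHW placeholder
              parser.register_output("NMS")
              parser.parse_buffer(uff_model, network)
              builder.max_workspace_size = 1 << 28  # 256 MB for builder tactics
              engine = builder.build_cuda_engine(network)

          with open("model.engine", "wb") as f:
              f.write(engine.serialize())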

  • @MrLemonYAN
    @MrLemonYAN 4 years ago +1

    Thumbs up for the T-shirt design!

    • @Hardwareai
      @Hardwareai  4 years ago

      Haha, thanks. I buy a new T-shirt for every video xD

  • @racketsong
    @racketsong 4 years ago

    😂💯👊