Yes, that's true xD. Usually I use a wireless microphone when I record video on camera. But I made this video in Russia while I was back home for a month; I had no equipment with me, so I recorded with the built-in microphone...
Good afternoon. I received my K210 today and started exploring it right away. I flashed the firmware maixpy_v0.3.2_full.bin and face_model_at_0x300000 (using kflash_gui). With MaixPy IDE I loaded the face detection sketch; it works and detects faces, but as soon as I disconnect MaixPy IDE, the K210 stops working and the screen turns red. I don't understand - does the K210 need a computer to detect faces, can't it work standalone? If I supply only 5 V of power to the K210 over USB, will it still work, detect faces, and show it on the display? Maybe I'm doing something wrong? Thanks.
I figured it all out - in MaixPy IDE you need to click "save open script to board (boot.py)". Now it works as it should: as soon as 5 V power is applied, the face detection script starts right away. I'm really surprised by the detection speed, SUUUPER. I've ordered another Sipeed MAIX Dock K210 AI.
Are you trying face recognition or just face detection? Face recognition requires firmware 0.5. You can read more at the link www.maixhub.com/index.php/index/index/detail/id/235.html
@@Hardwareai I've already figured that out - I learned how to detect faces and objects, and how to create detectors for my own objects. I connected the K210 to an Arduino over UART and send it the X and Y coordinates (the center of the face), and the Arduino drives servo motors and pans in direct proportion to what's on the monitor. If you develop this idea further, there are a lot of possibilities.
Hey, thanks for this tutorial. I followed it all, and when I tried to test it I got an error in the code. The kmodel is fine I think, but for some reason I got: Traceback (most recent call last): File "", line 21, in OSError: run error: 13 MicroPython v0.5.0-29-g97fad3a on 2020-03-13; Sipeed_M1 with kendryte-k210 Type "help()" for more information. I followed this script: github.com/AIWintermuteAI/aXeleRate/blob/master/example_scripts/k210/detector/racoon_detector.py and just modified the while loop due to an error with these lines: .rotation_corr(z_rotation=90.0) a = img.pix_to_ai() I'd really appreciate any help.
@@Hardwareai Hi, I posted it and they told me it was the firmware and to try the latest. But that got me the error "only support Kmodel V3". I tried again with the minimum IDE firmware you posted in the tutorial link, but that also gave "Error: 13". Could you help me please? I really want to get this working, since it's the first step towards the ANPR system I want to build.
We'll be following up on that problem in Github issues - for anyone else facing "only support Kmodel V3": you can compile the firmware yourself following the build instructions in the MaixPy Github repo or here www.maixhub.com/compile.html
Hi. I really liked the video. I ordered myself a Sipeed MAIX Dock K210 by mail; I used to dabble in computer vision - face detection, recognizing faces as identities, object detection. I won't explain how that's done, here is a video: ruclips.net/video/f6Dwa8tvH1Q/видео.html. I did everything in Python 3, installed libraries via pip, put the haarcascade_frontalface_default.xml file in the folder with the py code... and so on, I understand how all that works. I've dug through the whole internet about this device, but I still don't fully understand how it works. Let me explain how I understand it, and correct me if something is wrong. This line (task = kpu.load(0x600000)) contains the address of the face model file that is already loaded into the device together with the main code. Is it flashed separately from the code, or is it loaded automatically together with the main code? Where can I find a detailed explanation of how this works? Thanks.
Hi! Thanks, glad to hear it. On face recognition, my latest video is exactly about that, you can check it out. In short, task = kpu.load(0x600000) loads the neural network model into the device's RAM. For face recognition we actually use three models: the first one finds faces, the second finds face landmarks (nose, lips, eyes, etc.). Then, knowing the landmark positions, we align the face into a standard pose, and with the third model we compute so-called face feature vectors - numerical representations of the unique features of a person's face. At the final stage we simply compute the distance between two vectors, and if the distance is small enough, it is the same face. The neural network only computes the vectors from the image; it takes no part in comparing the vectors. You can find more details in my latest video or in other articles on face recognition - the process is the same everywhere. Happy New Year!
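The final comparison step described above can be sketched in a few lines of plain Python. This is a minimal illustration assuming the feature vectors were already produced by the third model; the 0.6 threshold is an assumption for the sketch, not a value from the firmware.

```python
import math

def l2_distance(a, b):
    # Euclidean distance between two face feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_person(a, b, threshold=0.6):
    # Two vectors close enough together are treated as the same face;
    # the threshold should be tuned on your own data
    return l2_distance(a, b) < threshold
```

The network only produces the vectors; this comparison runs as ordinary code on the CPU.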
My board is on its way.. thank you for sharing
Wonderful! Which one did you buy though?
@@Hardwareai Maix Dock sir.
Have you considered testing the Sipeed MAIX Binocular Camera? Supposedly you can do depth tracking with it. It would be great if someone would do a video on that! :)
Supposedly :) that's the key word. I asked Sipeed for a sample of the Binocular Camera and should get it pretty soon. But there are no code samples available for stereo-to-depth on the K210 or MicroPython. I just searched on Google and the top result is somebody posting a job on Upwork to implement stereo-to-depth on the K210... for 100 USD. I laughed so hard xD
www.upwork.com/jobs/Developer-Needed-for-Stereo-Depth-Estimation-for-K210_~0138dcf553378be167
Anyways, I will test it, but concerning stereo-to-depth... While it is certainly possible to make some sort of quick Python-to-MicroPython code conversion (it would have to use the ulab lib), the result will probably be way too slow.
@@Hardwareai Haha, well somebody's got to be the first to do it and share the code. And that somebody will get a lot of attention for making such a low-cost depth camera - much cheaper than the OpenCV OAK camera at $150+. Personally I'm fine with even 5 fps if it's stable.
What is the inference time of YOLOv8n on this board without show=True, i.e. just the inference time without showing the image with bounding boxes?
Hello, @DoomsdayDatabase! For the K210, the highest version of YOLO officially supported was v2. I added v3 support myself (you can find the PR on the Sipeed Github for that), but the PR was never merged. You can try converting YOLOv8 yourself with nncase, then run it on dummy data without postprocessing to find out the inference time. Adding the post-processing is really the trickiest part of porting object detection models.
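Measuring the bare inference time as suggested above comes down to averaging several timed runs of the forward pass. A host-side sketch of that pattern (the dummy workload here stands in for the model call; on the K210 itself you would time kpu.forward() with time.ticks_ms()/time.ticks_diff() instead of time.perf_counter()):

```python
import time

def avg_ms(fn, runs=20):
    # Average wall-clock time of fn() in milliseconds over several runs
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) * 1000 / runs

# Dummy workload standing in for a kpu.forward() call on real hardware
print(avg_ms(lambda: sum(range(10000))))
```

Running on dummy input data means the number reflects only inference, not camera capture or postprocessing.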
I've been playing around with the M.A.R.K., very cool development.
Yes, I've been playing with MARK for a year now, haha :) Saw your channel, doing great work! Keep it up
Awesome. Thank you..My board is on the way..
Nice! Which one?
@@Hardwareai It's the Maix Go.
Same here, Maixduino. Hoping to be able to use object and face detection in combination with WiFi
@@FuZZbaLLbee I am able to run face recognition application but I am unable to run WiFi examples.
@@Hardwareai If time permits, can you make a tutorial on WiFi and speech recognition?
Great video! very informative. Is it possible to run real time pose estimation on the MaiX boards?
Yes. The simplest way to implement that would be to use a person detector first, then cut out and resize the person from the frame and run a second model (pose estimation) on that smaller image. Basically all the tools are already available, but I haven't seen anybody implement it. You can be the first!
@@Hardwareai I would like to see that!
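The crop-then-resize step of the two-stage pipeline described above is mostly box arithmetic. A minimal sketch of that helper (the function name and the 15% margin are illustrative, not from any existing library):

```python
def crop_box(box, frame_w, frame_h, margin=0.15):
    # Expand a person detection box (x, y, w, h) by a margin and clip
    # it to the frame, giving the region to cut out for the pose model
    x, y, w, h = box
    dx, dy = w * margin, h * margin
    x0 = max(0, int(x - dx))
    y0 = max(0, int(y - dy))
    x1 = min(frame_w, int(x + w + dx))
    y1 = min(frame_h, int(y + h + dy))
    return x0, y0, x1 - x0, y1 - y0
```

On the device you would crop and resize the frame to this region before running the pose estimation model on it.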
Is this chip able to repeatedly recognize the same raccoon out of multiple raccoons present?
When object is recognized, is it possible to write the object name to some kind of a database?
That is not something the chip would be responsible for - rather it is a network you train yourself. As long as the operations in that neural network are supported by the chip (K210 in this case), you can run it with hardware acceleration.
Raccoon recognition will probably be very similar to face recognition. Check out the whale recognition competition on Kaggle to see something similar.
I tried everything, but I get the following error on every try to train my model:
Traceback (most recent call last):
File "aXeleRate/axelerate/train.py", line 180, in
setup_training(config_file=args.config)
File "aXeleRate/axelerate/train.py", line 165, in setup_training
return(train_from_config(config, dirname))
File "aXeleRate/axelerate/train.py", line 142, in train_from_config
config['train']['valid_metric'])
File "/tmp/aXeleRate/axelerate/networks/yolo/frontend.py", line 125, in train
is_only_detect=False)
File "/tmp/aXeleRate/axelerate/networks/yolo/backend/utils/annotation.py", line 40, in get_train_annotations
is_only_detect)
File "/tmp/aXeleRate/axelerate/networks/yolo/backend/utils/annotation.py", line 174, in parse_annotation
fname = parser.get_fname(annotation_file)
File "/tmp/aXeleRate/axelerate/networks/yolo/backend/utils/annotation.py", line 75, in get_fname
root = self._root_tag(annotation_file)
File "/tmp/aXeleRate/axelerate/networks/yolo/backend/utils/annotation.py", line 148, in _root_tag
tree = parse(fname)
File "/root/anaconda3/envs/yolo/lib/python3.7/xml/etree/ElementTree.py", line 1197, in parse
tree.parse(source, parser)
File "/root/anaconda3/envs/yolo/lib/python3.7/xml/etree/ElementTree.py", line 598, in parse
self._root = parser._parse_whole(source)
xml.etree.ElementTree.ParseError: not well-formed (invalid token): line 1, column 0
Looks like something is wrong with your annotation files. Create an issue on Github?
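Since the traceback above dies inside ElementTree, a quick way to find the offending file is to try parsing every annotation before training. A small host-side sketch (a hypothetical helper, not part of aXeleRate):

```python
import os
import xml.etree.ElementTree as ET

def find_bad_annotations(folder):
    # Return the PASCAL VOC annotation files that fail to parse -
    # the cause of the "not well-formed (invalid token)" error
    bad = []
    for name in sorted(os.listdir(folder)):
        if not name.endswith(".xml"):
            continue
        try:
            ET.parse(os.path.join(folder, name))
        except ET.ParseError:
            bad.append(name)
    return bad
```

Typical culprits are empty files, files saved with a stray byte-order mark, or non-XML files that slipped into the annotations folder.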
Hi, thanks for the tutorial, I followed it.
When I reached the execution phase I noticed that the code gets stuck at task = kpu.load(0x......) for too long with no response, until I cut the connection (10 mins of waiting). I used firmware 0.5.0.22.
I also noticed that you skipped that little frame in your video, and I'm wondering if you made any modification during that dead time.
Thanks
Hi! For example code, please consult github.com/AIWintermuteAI/aXeleRate/blob/master/example_scripts/k210/detector/racoon_detector.py
For your problem, can you create an issue on Github? Try connecting to the board using a serial terminal (in MaixPy IDE: Tools -> Terminal), then execute your code in the terminal (Ctrl+E to paste, then Ctrl+D to execute). Post the results of running the code in the terminal, plus details about the model you're using, in a Github issue following the bug report template, thanks!
@@Hardwareai It was my fault, I didn't respect the memory allocation when flashing. What happened was that the kmodel overwrote the firmware, which broke the code.
Hello, I tried your person detector on Google Colab and it shows this error: ModuleNotFoundError: No module named 'axelerate.networks.yolo.backend.utils.augment'. When I went and checked the utils folder there was no file named augment. Can you help?
The person detector wasn't updated for a while. However, I merged a pull request today, so could you try it again and report back whether it works?
Hi, Thanks for the tutorial, I've been trying to follow it, but when I try to train the model, I get :
AttributeError: module 'keras.layers' has no attribute 'ReLU'
Any Idea why?
Can you execute "conda list" or "pip list" in your environment and tell me what versions of Keras and Tensorflow are you using? Are these the same as in requirements? (tensorflow 1.14.0, keras 2.3.0)
Can I feed an external feed to this board like from a security camera? Not from onboard camera.
Yes, I believe that is possible. Are there any use cases for that?
@@Hardwareai Synology Surveillance Station, BlueIris, etc. use motion detection for recordings, and that is far from accurate detection. The result is a lot of false recordings and tons of wasted SSD or hard drive space. So we could feed the cameras' video feeds to the board and trigger Synology Surveillance Station or BlueIris via MQTT or some other protocol. Believe me, there is a lot of interest in this. Here is one of the examples: ruclips.net/video/fwoonl5JKgo/видео.html but this is not fast enough; when the docker runs on a headless PC or server it takes about 3-4 sec, and that is too slow for a security system.
is it possible to run a kmodel without a board device?
Yes, I think so - there is a simulator, look at the docs: github.com/kendryte/nncase. You'll need to use a pretty early version of nncase though to run the model on the K210.
Flash size refers to a microsd card?
It refers to the internal flash memory size (16 MB).
You just did not explain in detail how to convert tflite to kmodel, and that's where the main problem is, as ncc gives a padding error.
Hi! The video is just a shortened version of the tutorial; the conversion process is described in Step 4 there. If you're getting a padding error it probably means one of two things: you're using versions of Keras or Tensorflow different from those specified in the requirements, or you modified the feature.py script somehow. I already fixed the padding issue in my repo by including a modified version of Mobilenet. Hope that solves your problem!
Is it possible to do person detection?
Of course. www.hackster.io/dmitrywat/axelerate-keras-based-framework-for-ai-on-the-edge-96f769
Do you know how fast this device is (what FPS it can process)? Can it also do object tracking?
It can do color-based object tracking. As for FPS, it really depends on your application - basically, what you do with the image. For simple applications (image capture + small (~300 KB) detection model inference + output) you can achieve up to QVGA@60fps / VGA@30fps (you can't run inference on VGA input, so you'll need to resize it to QVGA anyway) with C++ code. canaan.io/product/kendryteai
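The resize requirement mentioned above is simple arithmetic: pick the largest scale that fits the frame into QVGA while preserving aspect ratio. A minimal sketch (the helper name is illustrative):

```python
def fit_resolution(src_w, src_h, max_w=320, max_h=240):
    # Largest size that fits inside QVGA while keeping aspect ratio;
    # e.g. a VGA frame must be downscaled before inference on the K210
    scale = min(max_w / src_w, max_h / src_h)
    return int(src_w * scale), int(src_h * scale)

print(fit_resolution(640, 480))  # VGA -> (320, 240)
```

For a 16:9 source such as 1280x720, the same rule gives 320x180 rather than a distorted full QVGA frame.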
Thanks for this great tutorial. I followed the tutorial on Instructables and created a kmodel, but when I try to run it with the firmware used in this video I get:
ValueError b>>> init i2c2
Currently I am using firmware maixpy_v0.4.0_50_gcafae9d and my model size is 1.81 MB. I am using a Sipeed Maix Dock as hardware, and MaixPy IDE.
Can you post full output from the terminal?
With the release of the new M1n I was thinking of using it for presence detection where the range is up to ~3 meters. Do you think I can combine person detection with face detection in the same device? I guess if so, the fps will drop to about half
No, that won't affect FPS by that much, definitely not by half - running inference on the image is the least time-consuming part of the pipeline, since it is done on the KPU. Image processing adds much more overhead, because it is done on the CPU. So by all means try combining these two models; I think you will still get more than 13 FPS.
Btw, that guide is outdated - I published new framework for training models for various types of hardware, including K210 github.com/AIWintermuteAI/aXeleRate
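The FPS claim above can be sanity-checked with back-of-the-envelope arithmetic: if the per-frame time is the sum of the stage times, a second KPU inference only adds its own small slice. The stage times below are illustrative assumptions, not measurements:

```python
def pipeline_fps(stage_times_ms):
    # Frames per second when every stage runs once per frame
    return 1000.0 / sum(stage_times_ms)

# Assumed numbers: CPU capture/preprocessing dominates (~40 ms),
# each KPU inference takes ~15 ms
one_model = pipeline_fps([40, 15])       # person detection only
two_models = pipeline_fps([40, 15, 15])  # plus face detection
print(round(one_model, 1), round(two_models, 1))
```

With these assumptions the frame rate drops from roughly 18 to roughly 14 FPS - well short of halving, because the CPU-side work is paid only once per frame.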
Can you let me know whether you used Tiny YOLO, and how long it takes to complete a detection of a given image on the device? What was the resolution of the image?
Well, the architecture of the neural network in that case was Mobilenet as a feature extractor with a YOLO v2 detection layer. So no, I didn't use Tiny YOLO, because it showed worse performance compared to Mobilenet + YOLO v2.
Good question. I'll do some tests. The resolution is 224x224.
@@Hardwareai Many thanks, can you also let me know the FPS achieved with your setup
Is it possible to train the board with any object? E.g. a rock? I want to use it for motion tracking of a quite fast (50km/h) linear motion.
Rocks? :) Yes, that is possible. How far away is the rock? 50 km/h doesn't seem insanely fast; the K210 can go as fast as 30 (and even 60) FPS according to the specifications. You won't get that speed with the MicroPython firmware though; you'd definitely need to use the standalone SDK.
@@Hardwareai It would be relatively close to the camera, max 1m - 1.5m.
Micropython would be too slow to process that?
Which board would you recommend to start with and use with an Arduino? Sipeed Maixduino?
Sorry for the late reply, RUclips marks comments with at least one reply as already "answered" xD
Okay, in that case yes. You can try MicroPython first, it is the easiest way to get started. Maixduino will work fine; you can also get the Maix Bit since it is cheaper and smaller. It has the same functionality as Maixduino, except for wireless.
Thank you so much for doing this tutorial. I'm so close to it working but it fails after training and maybe during the conversion process.
File "C:\Users\Dev\miniconda3\envs\yolo\lib\subprocess.py", line 1207, in _execute_child
startupinfo)
OSError: [WinError 193] %1 is not a valid Win32 application
I've tried MANY things to get past this including installing the 32-bit version of everything which didn't work at all.
Any help would be massively appreciated. Thanks
Think I may have sorted/bodged that
In convert.py, Line 96 add shell=True
result = subprocess.run([k210_converter_path, "compile", model_path,output_path,"-i","tflite", "--dataset-format", "raw", "--dataset", folder_name], shell=True)
Now getting
C:\Windows\System32\aXeleRate\axelerate\projects\raccoon_detector\2020-07-17_12-43-33\YOLO_best_mAP.kmodel
'C:\Users\Dev\miniconda3\envs\yolo\lib\site-packages\axelerate\networks\common_utils\ncc\ncc' is not recognized as an internal or external command, operable program or batch file.
This is painful
Emmm, it is not made to work with Windows :) only Linux and Colab. Can you train in Colab? I mean, the only thing that needs to be changed for this particular part to work is to download the Windows version of the ncc converter and provide the right path - it is complaining because the Python converter script downloads the Linux version.
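A sketch of the workaround described above: pick the converter binary per OS and keep the subprocess call as an argument list, so shell=True isn't needed once the binary actually matches the host. The helper names are illustrative; the ncc flags mirror the command already quoted in this thread:

```python
import platform
import subprocess

def ncc_binary_name():
    # The converter binary differs per OS; the training script downloads
    # the Linux build, so on Windows you would need the .exe instead
    return "ncc.exe" if platform.system() == "Windows" else "ncc"

def run_ncc(ncc_path, model_path, output_path, dataset_dir):
    # Passing a list avoids shell quoting issues; shell=True only masks
    # the real problem (a Linux binary on a Windows host)
    return subprocess.run([ncc_path, "compile", model_path, output_path,
                           "-i", "tflite", "--dataset-format", "raw",
                           "--dataset", dataset_dir])
```

The WinError 193 in the earlier comment is exactly what Windows reports when asked to execute a non-Windows binary.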
@@Hardwareai Thank you for replying, I've been smashing my face into my desk.
OK, I'll fire up a Linux VM and start again!
That will work too - if you have any other problems, you can open an issue at aXeleRate repository on Github :)
@@Hardwareai Thanks again
We build robots for competitions. The robot drives around a special course and performs tasks. I want to add some automation: for example, when the robot detects a certain object, it picks it up with a manipulator. Where can I find detailed, step-by-step reading on machine vision with this board? Preferably in Russian)))
Hello! The manipulator may give you trouble :) in general I try to avoid manipulators and robot arms in competitions wherever possible, especially for middle school students and younger. In any case, there isn't much of anything in Russian on machine vision with this board... the most complete and frequently updated documentation is in Chinese. The English documentation is decent too. By the way, besides neural networks the board fully supports OpenMV, so if you want to start translating documentation, even for personal use, that's a good place to start)
Thank you very much for what you are doing, man! Please help me a little: when I run your Colab notebook for detection and use the kmodel, the MaixPy IDE gives me this error: err kmodel version only support v3/v4 now. This happens when I try to use a model stored in the flash memory of my Sipeed Dock; when I try to use the model from the SD card, it gives me this error instead: kpu load error 2006 err_no_mem. Please help me, I would be very grateful for your time.
My pleasure! Some cool hybrid drones on your channel! How long can they fly?
Have a look at the issues section of aXeleRate on Github. I think the problem is that you are using the latest firmware, which has kmodel v4 disabled by default. You can try building this commit from source: github.com/sipeed/MaixPy/commit/461f841fe95cf9109df32e1f3a76d60a659c31af or download the precompiled firmware, 5.0-46 - there is a link in my article to that version.
@@Hardwareai Thank you very much for your answer! I tried the firmware you mentioned and also many other versions (from 4.0 all the way up), but the result is the same: kpu load error 2, only supported model v3/v4 now. I saw the file you linked, but I really do not know how to use it (I'm still a noob).
About my drones - thank you for your interest. This was an experimental project I started when I was 16; now they fly pretty well despite all the challenges. Gasoline two-stroke engines are a world of their own and are not easy to combine with alternators, starters, and electronics. If you have any questions, do not hesitate to ask :)
Fascinating stuff :) Are you going to use the Maix Dock on one of your drones?
Anyway, for your problem, can you create an issue in the aXeleRate or MaixPy repository? I monitor both of them. Specify the exact firmware version you used, the model details, and other relevant information. Github issues are more suitable for this than YouTube comments.
@@Hardwareai I posted the issue, thank you again! Yes, I hope to use it during the flight and test what we can achieve :)
Great videos, but you need to buy a better microphone...
Yes, that's true xD. Usually I use a wireless microphone when I record video on camera. But I made this video in Russia, when I came home for a month; I didn't have my equipment, so I recorded with the built-in microphone...
Good day. I received my K210 today and started exploring it right away. I flashed the maixpy_v0.3.2_full.bin firmware and face_model_at_0x300000 (flashed with kflash_gui). Using MaixPy IDE, I loaded the face detection sketch; it works and detects faces, but as soon as I disconnect MaixPy IDE, the K210 stops working and the screen turns red. I don't understand - does the K210 need a computer to detect faces? Doesn't it work standalone? If I supply only 5 V power to the K210 over USB, will it work, detect faces, and show them on the display? Maybe I'm doing something wrong? Thanks.
I've figured it all out - in MaixPy IDE you need to click "save open script to board (boot.py)". Now it works as it should: as soon as 5 V power is applied, the face detection script starts right away. I'm really surprised by the recognition speed - SUUUPER. I've ordered myself another Sipeed MAIX Dock K210 AI.
Sorry for the late reply! We have the coronavirus epidemic here in China; I panicked for five days and have now temporarily left China :))
Are you trying face recognition or just face detection? Face recognition needs firmware 0.5. You can read more at www.maixhub.com/index.php/index/index/detail/id/235.html
@@Hardwareai I've already figured that out - I learned how to detect faces and objects, and how to create my own object classes. I connected the K210 to an Arduino over UART and send it the X and Y coordinates (the center of the face), and the Arduino drives servo motors, tilting them in direct proportion to what is on the monitor. If you develop this topic further, there are a lot of possibilities.
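The K210-to-Arduino link described above is just a serial stream of face-center coordinates. A minimal, hypothetical packet format (not the commenter's actual code) that is easy to parse on the Arduino side:

```python
def face_center(x, y, w, h):
    """Center of a detection bounding box (x, y is the top-left corner)."""
    return x + w // 2, y + h // 2

def make_packet(cx, cy):
    """Encode the center as a newline-terminated ASCII line, e.g. b'X120Y080\n'.

    On the K210, this packet would be sent with uart.write(packet);
    on the Arduino side, it can be parsed with Serial.parseInt().
    The fixed-width format is an arbitrary choice for easy parsing.
    """
    return "X{:03d}Y{:03d}\n".format(cx, cy).encode()
```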
@@ИванПлотников-д4х Would you tell your compatriots how you did it?
Hey, thanks for this tutorial. I followed all of it, and when I tried to test it I just got an error in the code. The kmodel is fine, I think, but for some reason I just got:
Traceback (most recent call last):
File "", line 21, in
OSError: run error: 13
MicroPython v0.5.0-29-g97fad3a on 2020-03-13; Sipeed_M1 with kendryte-k210
Type "help()" for more information.
I followed this script: github.com/AIWintermuteAI/aXeleRate/blob/master/example_scripts/k210/detector/racoon_detector.py
and just modified the while loop due to an error with these lines:
.rotation_corr(z_rotation=90.0)
a = img.pix_to_ai()
I'd really appreciate any help.
Can you please create a Github issue at the aXeleRate or MaixPy repository? It does seem like an issue with MaixPy rather than the model training, though.
@@Hardwareai Hi, I posted it, and they told me it was the firmware and to try the latest. But that got me the error "only support Kmodel V3". I tried again with the minimum IDE firmware you posted in the tutorial link, but it also gave "Error: 13".
Could you help me, please? I really want to do this, as it is the first step towards the ANPR system I want to build.
We'll be following that problem in Github issues. For anyone else facing "only support Kmodel V3": you can compile the firmware yourself following the build instructions in the MaixPy Github repo or here: www.maixhub.com/compile.html
Hi. I really liked the video. I ordered myself a Sipeed MAIX Dock K210 by mail; I used to be into computer vision - face detection, recognizing faces as identities, object detection. I won't explain how that is done, here is a video: ruclips.net/video/f6Dwa8tvH1Q/видео.html . I did everything in Python 3, installed libraries via pip, put the haarcascade_frontalface_default.xml file in the folder with my .py code... and so on - it is all clear how that works. I've dug through the entire internet about this device, but I still don't fully understand how it works. I'll explain how I understand it, and if something is wrong, please correct me. This line (task = kpu.load(0x600000)) contains the address of the face model file that has already been loaded into the device along with the main code. Is it loaded separately from the code, or is it also loaded automatically when the main code is flashed? Where can I find a detailed description of how this works? Thanks.
Hi! Thanks, glad to hear it. On face recognition, my latest video is exactly about that, have a look. In short, task = kpu.load(0x600000) loads the neural network model into the device's RAM. The face recognition task actually uses three models: the first for face detection, the second for finding face landmarks (nose, lips, eyes, etc.). Then, knowing the landmark positions, we align the face into a standard pose, and with the third model we compute the so-called face feature vectors - numerical representations of the unique features of a person's face. At the final stage, we simply compute the distance between two vectors, and if the distance is small enough, it is the same face. The neural network only computes the vectors from the image; it is not involved in comparing them.
You can find more details in my latest video or in other articles on face recognition - the process is the same everywhere.
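The final comparison step described above is plain vector math with no neural network involved. A minimal sketch using Euclidean distance (the threshold value here is an arbitrary placeholder; the real one depends on the embedding model):

```python
import math

def euclidean_distance(a, b):
    """Distance between two face feature vectors of equal length."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_person(vec_a, vec_b, threshold=0.6):
    """Declare a match if the embeddings are close enough.

    0.6 is a made-up threshold for illustration; in practice it is
    tuned on the specific face-embedding model being used.
    """
    return euclidean_distance(vec_a, vec_b) < threshold
```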
Happy New Year!
@@Hardwareai THANK YOU. Happy New Year and Merry Christmas!!!
My review of the Maixduino with a description of the face detection and recognition examples: mysku.ru/blog/aliexpress/79416.html
Bad tutorial bro, should've done it slower
Thanks for the feedback! The video itself is not a tutorial, though; it's more of a companion to the article, which IS a tutorial.