- Videos: 150
- Views: 127 152
Альберт Иванов
Joined 10 Oct 2012
ruadapt_qwen2.5_3B test on raspberry pi 4b 8gb raspbian bullseye
./build/bin/llama-cli -m models/Q4_K_M.gguf -co -cnv -p "You are Qwen, created by Alibaba Cloud. You are a helpful assistant." -fa -ngl 80 -n 512 -t 0.1
Conclusions:
- In most cases the model gives correct, detailed answers.
- It sometimes makes gross factual errors - for example, claiming that the USSR fought on Germany's side at the start of World War II and then against it.
- Text generation speed is acceptable.
Views: 41
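For reference, roughly the same run can be reproduced from Python with llama-cpp-python; a minimal sketch, assuming the same GGUF path and a low temperature (the exact sampling settings used in the video are not known):

from llama_cpp import Llama  # pip install llama-cpp-python

# Load the same Q4_K_M quantized GGUF used above (path is an assumption)
llm = Llama(model_path="models/Q4_K_M.gguf", n_ctx=2048, n_threads=4)

messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": "Расскажи кратко о Raspberry Pi 4."},
]

# Chat-style generation, roughly equivalent to llama-cli's -cnv mode
out = llm.create_chat_completion(messages=messages, max_tokens=512, temperature=0.1)
print(out["choices"][0]["message"]["content"])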
Videos
ruadapt_qwen2.5_3B test on luckfox 3566 ubuntu 24.04
Views: 38 · 14 hours ago
- luckfox 3566, 4GB RAM, 32Gb emmc - wiki.luckfox.com/Core3566/
- ubuntu 20.04 was updated to 24.04
- llama.cpp built - github.com/ggerganov/llama.cpp/blob/8f275a7c4593aa34147595a90282cf950a853690/docs/build.md#l4
- model rus-rus quantized - huggingface.co/RefalMachine/ruadapt_qwen2.5_3B_ext_u48_instruct_v4_gguf/tree/main
./build/bin/llama-cli -m models/Q4_K_M.gguf -co -cnv -p "You are Qwen, crea...
Mini-Omni2 test on cpu i5-10400 (no cuda)
Views: 63 · 14 days ago
Model: speech (and/or image) to text.
- Questions can be asked in Russian as well; the answer is always in English.
- The real-time speech output stutters; the developers write that this is due to the model being in float32.
- The finished WAV audio file has no stuttering.
github.com/gpt-omni/mini-omni2
yolo11n.pt vs yolo11n.onnx (imgsz=dynamic) vs yolo11n.onnx (imgsz=256) on raspberry pi 4b
Views: 29 · 14 days ago
- 1400 ms - yolo11n.pt 640x640
- 726 ms - yolo11n.onnx imgsz=dynamic int8
- 333 ms - yolo11n.onnx imgsz=256 int8
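A hedged sketch of how ONNX variants like these can be produced with the ultralytics Python API (the int8 quantization step and the exact export arguments used in the video are assumptions and not shown in full):

from ultralytics import YOLO  # pip install ultralytics

model = YOLO("yolo11n.pt")

# Fixed 256x256 input - the variant that gave the fastest timing above
model.export(format="onnx", imgsz=256)

# Dynamic input shape variant
model.export(format="onnx", dynamic=True)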
stable diffusion mojo vs stable diffusion onnx inference tests
Views: 40 · 21 days ago
python3 text-to-image.py --prompt 'realistic futuristic city-downtown with short buildings, sunset' --negative-prompt 'lowres, blurry' --output city.jpg
mojo text-to-image.🔥 --seed 7 --num-steps 25 --prompt "realistic futuristic city-downtown with short buildings, sunset" --negative-prompt "lowres, blurry"
Inference time:
- onnx - 2 min 30 sec
- mojo - 2 min 36 sec
Strange results. mojo should be faster ) how ...
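For comparison, the ONNX side of a test like this can be run from Python with diffusers' ONNX pipeline; a minimal sketch, assuming a CPU execution provider and a standard SD 1.5 ONNX export (the video's actual runner and model may differ):

from diffusers import OnnxStableDiffusionPipeline  # pip install diffusers onnxruntime

# Load an ONNX-exported Stable Diffusion pipeline (model id/revision are assumptions)
pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", revision="onnx", provider="CPUExecutionProvider"
)

image = pipe(
    prompt="realistic futuristic city-downtown with short buildings, sunset",
    negative_prompt="lowres, blurry",
    num_inference_steps=25,
).images[0]
image.save("city.jpg")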
together.ai api test (meta-llama/Llama-3.2-11B-Vision-Instruct-Turbo - image-to-text multi-modal)
Views: 44 · 21 days ago
Test with submitting a local image. *Problem with images whose size exceeds 1 MB - don't forget to resize. api.together.ai/
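A minimal sketch of the resize-then-submit flow with the together Python SDK (the resize target, prompt and file names are assumptions; TOGETHER_API_KEY is read from the environment):

import base64, io
from PIL import Image
from together import Together  # pip install together pillow

# Shrink the local image below the ~1 MB limit mentioned above before encoding it
img = Image.open("photo.jpg")
img.thumbnail((1024, 1024))
buf = io.BytesIO()
img.save(buf, format="JPEG", quality=85)
b64 = base64.b64encode(buf.getvalue()).decode()

client = Together()
resp = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct-Turbo",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content)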
mojo(fast python) on raspberry pi - install and some tests
Views: 44 · 28 days ago
*medium.com/coinmonks/how-to-install-mojo-on-linux-9151f9afc1e3
sudo curl get.modular.com | MODULAR_AUTH=mut_B9A2265c00574f2a9ca86beba502de62 sh -
modular install mojo
mojo examples: github.com/svpino/mojo
how to convert craft-text-detector .pt model to .onnx model
Views: 24 · 1 month ago
Using github.com/k9ele7en/ONNX-TensorRT-Inference-CRAFT-pytorch/tree/main, we add some code to do the conversion.
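The added conversion code typically boils down to a torch.onnx.export call; a minimal sketch, assuming the CRAFT model definition and pretrained weights from the CRAFT-pytorch code used by the linked repo (class, weight file name and input size are assumptions):

import torch
from craft import CRAFT  # CRAFT-pytorch model definition

# Load the pretrained detector weights (file name is an assumption)
net = CRAFT()
state = torch.load("craft_mlt_25k.pth", map_location="cpu")
net.load_state_dict({k.replace("module.", ""): v for k, v in state.items()})
net.eval()

# Export with dynamic spatial axes so the ONNX model accepts arbitrary image sizes
dummy = torch.randn(1, 3, 768, 768)
torch.onnx.export(
    net, dummy, "craft.onnx",
    input_names=["input"], output_names=["output", "feature"],
    dynamic_axes={"input": {0: "batch", 2: "height", 3: "width"}},
    opset_version=11,
)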
craft-text-detector onnx inference on raspberry pi 4b
Views: 45 · 1 month ago
github.com/k9ele7en/ONNX-TensorRT-Inference-CRAFT-pytorch/tree/main
- standard time (no refine-net included) - 32 sec
- onnx (no refine-net included) - 19 sec
whisper vs whisper-cpp vs whisper-jax vs vosk on raspberry pi 4b
Views: 52 · 1 month ago
Comparison of the smallest models on Russian-English text.
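For the plain whisper entry in such a comparison, a minimal transcription sketch looks like this (the model size and audio file name are assumptions; whisper-cpp, whisper-jax and vosk each have their own APIs):

import whisper  # pip install openai-whisper

# "tiny" is the smallest multilingual Whisper model
model = whisper.load_model("tiny")
result = model.transcribe("sample_ru_en.wav")
print(result["text"])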
Tajik audio to text with vosk on raspberry pi 4b
Views: 63 · 1 month ago
The audio was cut from a fragment of this clip - ruclips.net/video/-SzSWd9GFVo/видео.html - since, apart from songs, it is hard to find anything in Tajik. Even though the main model (not the small one) was used, the result turned out strange. alphacephei.com/vosk/models
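A minimal vosk transcription sketch of the kind used here (the model directory name and WAV file are assumptions; models are downloaded from alphacephei.com/vosk/models and the audio should be 16-bit mono PCM):

import json, wave
from vosk import Model, KaldiRecognizer  # pip install vosk

wf = wave.open("tajik_sample.wav", "rb")
model = Model("vosk-model-tg-0.22")  # Tajik model directory name is an assumption
rec = KaldiRecognizer(model, wf.getframerate())

pieces = []
while True:
    data = wf.readframes(4000)
    if len(data) == 0:
        break
    if rec.AcceptWaveform(data):
        pieces.append(json.loads(rec.Result())["text"])
pieces.append(json.loads(rec.FinalResult())["text"])
print(" ".join(pieces))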
HLK-LD2450 test on raspberry pi 4 uart
Views: 90 · 1 month ago
Connect directly to the raspberry, no adapter needed:
- 5V - raspberry 5V
- TX - raspberry RX (GPIO15)
- RX - raspberry TX (GPIO14)
- GND - raspberry GND
Configure the UART - www.electronicwings.com/raspberry-pi/raspberry-pi-uart-communication-using-python-and-c
github.com/csRon/HLK-LD2450
It shows the distance to up to 3 bodies at once (cats are counted too), their movement speed, and their coordinates down to the millimetre.
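A hedged sketch of reading the sensor from Python over the Pi UART with pyserial (the 256000 baud rate and 30-byte frame size are assumptions about the LD2450 defaults; see github.com/csRon/HLK-LD2450 for a full frame parser):

import serial  # pip install pyserial

# /dev/serial0 is the Pi's GPIO14/GPIO15 UART once it is enabled in raspi-config
port = serial.Serial("/dev/serial0", baudrate=256000, timeout=1)

while True:
    frame = port.read(30)  # one raw report frame; decode per the protocol notes in the linked repo
    if frame:
        print(frame.hex(" "))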
yolov9-t-onnx - test on raspberry pi 4b
Views: 56 · 1 month ago
Test of the fastest yolov9 model (yolov9-t-converted.pt) converted to onnx. Original - github.com/gwd777/YoloV9-onnx-Pro
python export.py --device cpu --weights './yolov9-t-converted.pt' --include onnx
python3 xr_yolov9_detection.py
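A minimal onnxruntime timing sketch for an exported model like this (the ONNX file name, 640x640 input and preprocessing are assumptions; the video itself uses the repo's xr_yolov9_detection.py):

import time
import numpy as np
import onnxruntime as ort  # pip install onnxruntime

sess = ort.InferenceSession("yolov9-t-converted.onnx", providers=["CPUExecutionProvider"])
inp_name = sess.get_inputs()[0].name

# Dummy NCHW float32 input in [0, 1], the usual YOLO preprocessing
x = np.random.rand(1, 3, 640, 640).astype(np.float32)

start = time.perf_counter()
outputs = sess.run(None, {inp_name: x})
print(f"inference: {(time.perf_counter() - start) * 1000:.0f} ms, output shape: {outputs[0].shape}")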
llava-v1.5-7b-4096-preview from groq - test on raspberry pi 4b
Views: 133 · 1 month ago
console.groq.com/playground
- multimodal, fast, free at the moment
- online only, registration required
- from Russia a VPN is needed to connect
saiga_llama3_8b_gguf - test models q2 and q4 on raspberry pi 4b
Views: 42 · 1 month ago
- The models are not multimodal; you cannot upload a picture and ask what is in it.
- It understands questions in Russian and answers in Russian.
- q2 answers very briefly.
mini-omni (Real-time speech-to-speech) test on raspberry pi 4b 8Gb RAM - very slow
Views: 205 · 2 months ago
prometheus+grafana - monitoring windows and linux PCs from raspberry pi 4b
Views: 71 · 2 months ago
FLUX:SOTA (schnell) test on RTX 3060 12Gb
Views: 305 · 2 months ago
whisper_cpp (speech-to-text) test on raspberry 5b
Views: 55 · 2 months ago
llama-cpp Q4_K_M model on raspberry pi 5b 4Gb - speed test
Views: 121 · 2 months ago
yolov8 edge tpu model on raspberry pi 5b speed and accuracy test
Views: 117 · 2 months ago
google coral edge tpu m.2 on raspberry 5b waveshare nvme hat
Views: 94 · 2 months ago
picamera2 in pyenv with python3.9 on raspberry pi5 bookworm
Views: 88 · 3 months ago
monodepth (midas) on cpu and NCS 2(MYRIAD) on raspberry pi 4b
Views: 31 · 4 months ago
arducam64mp - Failed to allocate buffers: Cannot allocate memory - fix the error
Views: 155 · 4 months ago
how to install Intel® Neural Compute Stick 2 on Raspbian x64 with openvino
Views: 84 · 4 months ago
no sound output to headphones (jack) on raspberry pi - how to fix it
Views: 4 · 4 months ago
face_detector dlib python vs mediapipe face_detector python
Views: 106 · 4 months ago
EAST_text_detection tflite model speed test on raspberry pi 4b python 3.9 bullseye x64
Views: 47 · 4 months ago
03.11.2024 works with 3.01, thank you for that great tip!!!
3.0.1 still works 😂
Hi as a mac user how does one even access this file to change the code?
3.0.1 working
👍
10/31/2024 still works for 3.0.1 After I saved the change, I had to fully close program and then re-open. When I did that, "poof", the blur was gone :-D
I have the same problem. The power supply is the original one, but it started rebooting frequently. What can be done? Please advise.
@AzikKhakimovich Take the cover off and glue a raspberry pi heatsink onto whatever is getting hot.
do you have a tool to translate the text? I would like to translate the game into Italian (google translation)
@LeonardoGanzerli habr.com/ru/articles/787708/
27-10-2024 Works perfectly. Probability limit = 1.00
w, thank you
Remember to stop and re run facefusion
hi sir, im wondering how do you remove the sfw filter in roop unleashed?
I did it in the 3.0.0 version but nothing changed, it still has the filter. What can I do? Please help.
I just changed it on 3.0.0; make sure you look at the comments, the video is a bit confusing.
16/10/2024 still working on FaceFusion 3.0.0. thanks!
are you sure bro?
Hello, do you have some contact to get in touch? There is a project specifically about mapping. Can you help or advise?
@КонстантинОнчуков-т4я poisk123 yandex.ru
working in 3.0 thanks
does it work for the new 3.0 version
yes, works like a charm
Not for me …
@@francoismarousez647 try to completely delete the whole pinokio folder and reinstall it with the latest cuda. That worked for me. But in my opinion the 2.6 version of the app is better than the latest one. No need to change. I would roll it back, tbh.
its not working
working on 3.0.0 as of 25/9/2024
Does anyone know how to bypass on 3.0?
3.0.0 is working, thanks
how it could be on Mac??
@@physobornsicx idk, I’m on the windows
@@madnessroyalty8224 I really envy you. Thanks for the reply, bro.
Still works on 3.0.0
I changed it but now the faces don't swap... Do you know how to make it work? I'm on 3.0
@@ilanchico8375 check that you followed the instructions carefully and correctly. I found the file, edited in VB Studio to comment out the 0.8 line and add the 1.00 line and it's working for me in 3.0
@@ilanchico8375 it does work, but it's a bit flaky. A better approach is to go down to the actual valuation line and set it to always return false. Search for: return probability > PROBABILITY_LIMIT and change to: return False #probability > PROBABILITY_LIMIT
@@ilanchico8375 You did something wrong, I change it and works fine!
2.6.1 still working
Does it work?
Poorly, but it works.
Thanks for this video . Can i contact you on telegram please
Hello sir, thank you for this explanation. Can I contact you privately, please?
how u see that ?
Great video! Thank you! What is the inference time you are seeing for a single image?
Can't remember exactly, but judging by the video: 3 images at about 20 sec each.
@@АльбертИванов-ц4х Thanks! Do you think if I quantize it, it will run <1FPS?
Give it a try, but I doubt it. Better to get a dedicated arducam ToF camera for real-time.
hey man, after i put all the files into their respective folders and ran the program, it gives the error "no module named 'libcamera._libcamera' ", any ideas why?
Read the info attached to the video.
I'm confused.....In the video, it does not look like you changed the probability limit numbers at all
What's so confusing? He did it before making the film so he had it 1.0
Because he already change it, is not that hard to figure out, kid.
thanks!!
thank you boss, its still working on 2.6.1
weird i cant seem to get this to work
Thank you boss, it's still working on 2.6.1
Same problem here; I don't recommend buying it.
how did u install pytorch on the raspi? mine is not working, it is showing illegal instruction. my os is bookworm debian. Should I switch to ubuntu to make it work?
no need. try to build or search for wheels.
May I know how much RAM and what CPU the CM4 has? Will it work on the 4B model?
Tested with 4Gb, but a swap increase is needed. The 4B will work. Try installing zram; it lets you use bigger models.
Thank you for your video! I encountered the same issue with the incompatibility of libcamera. Could you please explain this part in more detail? "copy libs of libcamera from /usr/lib/aarch64-linux-gnu (Bullseye) to /usr/lib/aarch64-linux-gnu " Are you copying the libcamera files into the same directory they were originally located in?
Exactly. Take them from Bullseye (where they were natively built) and put them into Bookworm.
Thanks, it works perfectly in version 2.6.1. I already have it working without censorship!!!
Thanks, it worked for me.
Thank you for making this video! It is hard to find data about how long it takes to run based on different hardwares, do you happen to have any resources?
You're welcome. What do you mean by resources?
Is this the time spent both for inference and writing the output image, or only inference?
inference
2.6.1 doesn't work
Can I start the server from any llamafile? If I start it like ./llava-v1.5-7b-q4.llamafile -ngl 9999 -m OmniFusion-1.1-Q5_K_M.gguf --mmproj mmproj-model-f16.gguf, is it supposed to work correctly?
they say so. try.
@@АльбертИванов-ц4х I tried; when I send it an image, it breaks down and always starts giving some strange bullshit like "you, re you don't you are need you are you …", but without an image it talks normally.
Try clearing the cache before sending, or restart the code.
Thanks for this video. I have a problem: when I run the code I get the error "import libcamera ModuleNotFoundError: No module named 'libcamera'". I use python 3.9.2.
Better to search for a coral fork for python3.11.
How did you manage to install picamera2 in Python 3.9.2? I tried a lot, but I cannot use that python version and install picamera2 in my virtual env. I'm struggling a lot.
@@abelreybarreiros1425 picamera2 has no problem with python3.9 and 3.11; coral does.
I have the same problem. Did you find a solution for it?
I am trying to configure a raspberry pi 4b with edge tpu silva (google coral for tflite models), but I am struggling because I am using python 3.9 in a virtual env and I can't install picamera2 with pip. By the way, the picamera2 package is natively installed on my raspberry, but I cannot use it in my scripts for ML purposes. As I have the Pi camera v3 model, I cannot use another library to obtain the image/video from my camera to use it with opencv or whatever. Can you help me, or do you know some way to use my camera with tflite and the Gcoral to apply my models?
@@abelreybarreiros1425 The better choice is to move to Bullseye. picamera2 is based on libcamera, and one should keep in mind that coral only supports up to python3.9 (that is the last supported version). In Bookworm, python3.11 is native, and the only way there is to use gstreamer or something similar.
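A minimal sketch of grabbing frames from a Pi camera with picamera2 for downstream OpenCV/TFLite use, as discussed above (the resolution and pixel format are assumptions):

from picamera2 import Picamera2

picam2 = Picamera2()
# RGB888 frames are easy to hand to OpenCV or a TFLite interpreter
config = picam2.create_preview_configuration(main={"format": "RGB888", "size": (640, 480)})
picam2.configure(config)
picam2.start()

frame = picam2.capture_array()  # numpy array of shape (480, 640, 3)
print(frame.shape)
picam2.stop()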
It works in version 2.6.0: change PROBABILITY_LIMIT = 0.80 to PROBABILITY_LIMIT = 1.00
It doesn't work. It says "Failed to connect".
Try downgrading the openvpn client.
Thought that phi3mini onnx was actually not that bad…
Perhaps it was compiled without optimisation. They wrote that they would think about it.
Is actually that shitty? Fuck
It works.
Bro, can you show me how? I changed it on Mac and it didn't work.