That's amazing, a child's dream of bringing real objects into a game world. Seeing the end result is really tripping me out ^^ Thanks for the tutorial!
Amazing, I hope you publish more tutorials & lectures.
Looks super fun! Thanks for sharing the whole process.
This is awesome! Had to try it right away :) Been poking around OpenCV in Python for some days, and bringing it to Unity and C# is amazing :)
Have you tried combining ARFoundation and OpenCV plus Unity on an Android phone? I tried, but I just get a white background: the AR camera works, but the OpenCV (RawImage) overlay does not. Is there a way to combine them?
I'm also having the same problem, have you found a solution?
I loved this tutorial.
Thanks for the great tutorial! I have a question: how would it be possible to add the OpenCV asset to the assembly definition that my code is part of? Thanks!
So cool! Why do you have a hover function? Is it an extension? It seems to hide so much coding.
I fixed the ToVector2 function so that you don't have to change the rotation, size and position of your collider to line it up. This way it also works with different resolutions of camera and screen.
private Vector2[] toVector2(Point[] points, int width, int height)
{
    // Scale factors from the processed image resolution to the screen resolution.
    float mx = (float)Screen.width / width;
    float my = (float)Screen.height / height;

    vectorList = new Vector2[points.Length];
    for (int i = 0; i < points.Length; i++)
    {
        // Flip Y (OpenCV's origin is top-left, Unity's screen origin is bottom-left),
        // scale into screen space, then convert to world space.
        int x = Convert.ToInt32(points[i].X * mx);
        int y = Convert.ToInt32((height - points[i].Y) * my);
        Vector3 p = Camera.main.ScreenToWorldPoint(new Vector3(x, y, 0));
        vectorList[i] = new Vector2(p.x, p.y);
    }
    return vectorList;
}
The width and height parameters are the width and height of the processImage:
Width = processImage.Width;
Height = processImage.Height;
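As a usage sketch (the contour, processImage, and PolygonCollider2D names are assumptions based on the video's collider setup, not part of the comment above):

// Hypothetical call site inside the contour-processing MonoBehaviour from the video.
void UpdateCollider(Point[] contour, Mat processImage, PolygonCollider2D polygon)
{
    // The collider now lines up with the webcam image at any camera/screen resolution.
    polygon.points = toVector2(contour, processImage.Width, processImage.Height);
}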
Great tutorial. Could you give some guidance on detecting the user's facial expression with the front camera, i.e. whether the person is angry, happy, surprised, etc.?
Hey Matt, great tutorial! Your introduction worked perfectly for me on Windows. Now, I'm trying to implement this Unity + OpenCV setup on an Android smartphone, but I'm having some issues. Do you have any experience with that and might have some tips? Thanks in advance and keep up the good work!
I am afraid I haven't used the OpenCV library on Android.
Very cool video!!
I used the very first part of your code and the marker detection example in 'OpenCV plus Unity'
to make the marker tracking appear on the webcam feed! Thanks for the explanations!
I have a problem though: it won't track my paper marker, but if I hold it up to the screen it will track the marker "double" that appears in the Unity Editor video feed... I have no idea what I've done wrong!
Ultimately I'm looking to turn my camera into a shoddy 6DoF pointer, by anchoring the scene and a Unity camera on the virtual marker and then using the webcam as an input device in my 3D scene... do you think that's possible with these tools, or is it all just an illusion of 3D?
Well done. Super cool to watch such fun projects.
THANKS for that video. This content is priceless and really awesome, thanks, thanks and thanks!!
Great tutorial and fascinating project! I'm trying to use OpenCV to see if it's possible to "track" an object in a video (not the webcam) and I'm finding this difficult to achieve. There is a demo scene that uses a TrackerScript which inherits from Webcam, and it achieves this perfectly on a webcam view. I figured it would be straightforward to do this on a video surface instead, but man, it's giving me trouble. Your tutorial doesn't cover this, but it does give me a much better understanding of how to work with OpenCV. If you have any tips on achieving this effect, I'd love your insight!
Any luck? I'm trying to find the same thing
Works like a charm! Awesome, thanks!
Great tutorial, Matt. Could you make one showing how to create a very simple demo of a cube tracking an AR marker through a webcam? Essentially the simplest AR demo there is, but I want to be able to do it with OpenCV. Thanks a million.
How do you change the webcam?
That's super cool!
I have trouble with this library when building for Android: "DllNotFoundException: Unable to load DLL 'OpenCvSharpExtern'. Tried the load the following dynamic libraries: Unable to load dynamic library 'OpenCvSharpExtern' because of 'Failed to open the requested dynamic library (0x06000000) dlerror() = dlopen failed: library "OpenCvSharpExtern" not found ". Can you help me with this?
Hey Matt, can you help me with my project? I have a track with a car moving on it. After seeing your video I learned how to create colliders around the shapes, but how am I supposed to separate them? I mean, if you needed to collide two shapes from the webcam and turn on a light bulb in Unity, how would you do that? I want this for my car: if the car collides with the sides of the track, the player gets a penalty.
There are other forms of segmentation available in OpenCV. If your cars are distinct colors you might be able to track the color. Background subtraction might help. You might also be able to use the match template function.
@@MattBell Thanks for your valuable reply. Also, what happens if there is a shadow of an object? Will there be a collision with the shadow too? If so, is there a way to avoid it?
@@milliwaysgames With this technique it might: the shadow could trigger a collision unless you can control the lighting well enough to tune it out with the threshold value. The other segmentation techniques I mentioned might give you better fidelity.
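A minimal sketch of the background-subtraction idea mentioned above, using OpenCvSharp's MOG2 subtractor; the class name, field names, and the 200 threshold are illustrative assumptions, not from the tutorial. With detectShadows enabled, MOG2 marks shadow pixels as gray (127), so thresholding above that value drops them from the mask:

using OpenCvSharp;

public class CarSegmenter
{
    // detectShadows: true makes MOG2 label shadow pixels as gray (127) instead of white (255).
    private BackgroundSubtractorMOG2 subtractor = BackgroundSubtractorMOG2.Create(500, 16, true);

    public Mat GetForegroundMask(Mat frame)
    {
        Mat mask = new Mat();
        subtractor.Apply(frame, mask);

        // Keep only confident foreground (255) and discard shadow pixels (127).
        Cv2.Threshold(mask, mask, 200, 255, ThresholdTypes.Binary);
        return mask;
    }
}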
My RawImage is 3860x2160, which is too large. How do I set the width and height of the WebCamTexture input video stream? Also, the displayed picture is static.
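A minimal sketch of requesting a smaller stream, assuming a plain Unity WebCamTexture feeding a UI RawImage; the 1280x720 at 30 fps values are just an example, and the constructor arguments are only a request, so the device picks the closest supported mode. A frozen picture is often simply a missing Play() call:

using UnityEngine;
using UnityEngine.UI;

public class WebcamFeed : MonoBehaviour
{
    public RawImage rawImage;            // assign in the Inspector
    private WebCamTexture webCamTexture;

    void Start()
    {
        string device = WebCamTexture.devices[0].name;
        // Request a 1280x720 stream at 30 fps instead of the camera's default resolution.
        webCamTexture = new WebCamTexture(device, 1280, 720, 30);
        rawImage.texture = webCamTexture;
        webCamTexture.Play();            // without Play() the texture stays a static frame
    }
}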
Hey, I need to use OpenCV for text recognition (kids will draw a letter on a Canvas), but I have no idea how to do it, or whether there's a way of doing it like in this video with the webcam. Please, if you know something, I'll be happy to read it; it's for a project in my class.
Any chance you could post the code, as it's a little unreadable at that resolution?
I want to install it on my phone, but the screen size comes out wrong, and if I change the size it looks like the detected shapes can't collide with the ball =口=
Hi buddy, thanks for making this tutorial, glad to see it. I have an Orbbec camera; could you make a tutorial on VFX using Orbbec, or suggest any easy solution? Thanks in advance.
Assets\Scripts\ContourFinder.cs(24,44): error CS1503: Argument 2: cannot convert from 'object' to 'UnityEngine.Texture2D'
How do I solve this error?
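A guess at a fix, since ContourFinder.cs isn't shown here: this error usually means something typed as object (or an untyped field) is being passed where a Texture2D is expected, for example as the second argument to the OpenCV plus Unity MatToTexture helper. Giving the cached texture an explicit Texture2D type usually resolves it; the class, method, and field names below are illustrative assumptions:

using OpenCvSharp;
using UnityEngine;
using UnityEngine.UI;

public class ContourFinderFix : MonoBehaviour
{
    public RawImage rawImage;           // assign in the Inspector
    private Texture2D outputTexture;    // give this an explicit type instead of object/var

    private void ShowFrame(Mat processedImage)
    {
        // Passing the previous texture back in lets the helper reuse it each frame.
        outputTexture = OpenCvSharp.Unity.MatToTexture(processedImage, outputTexture);
        rawImage.texture = outputTexture;
    }
}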
Loved it! Thanks :)
How can you use a mobile camera as a live feed in this case?
I'm not sure I quite understand your question, but I believe this technique should run on Android directly.
Awesome tutorial! Does anybody know how to actually get the x and y coordinates of the contours? I need them to move an object
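A hedged sketch of one way to get those coordinates with OpenCvSharp, assuming you already have a contour as a Point[]; the helper names and the world-space conversion are illustrative, mirroring the toVector2 approach quoted earlier in this comment section:

using OpenCvSharp;
using UnityEngine;

public static class ContourCenter
{
    // Returns the centroid of a contour in image (pixel) coordinates.
    // Assumes a non-degenerate contour (m.M00 != 0).
    public static Vector2 Centroid(Point[] contour)
    {
        Moments m = Cv2.Moments(contour);
        return new Vector2((float)(m.M10 / m.M00), (float)(m.M01 / m.M00));
    }

    // Converts an image-space point to a Unity world position (Y flipped, scaled to screen).
    public static Vector3 ToWorld(Vector2 imagePoint, int imageWidth, int imageHeight, Camera cam)
    {
        float x = imagePoint.x * Screen.width / imageWidth;
        float y = (imageHeight - imagePoint.y) * Screen.height / imageHeight;
        return cam.ScreenToWorldPoint(new Vector3(x, y, 0f));
    }
}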
What Unity Extension for VSCode do you use?
Usually whatever the most recent one is
@@MattBell Ah thanks, I just read the IntelliSense instructions for VS Code. I hadn't opened VS Code at the root of the project. Now it works.
@@MattBell Hi Matt, what if we use our body as an object? Could you give some explanation? Thanks in advance.
@@thegrey448 You can use a body with this technique as long as you are filming it against a plain background.
@@MattBell got it. many thanks 😇 buddy.
Would you know how to do person pose estimation with this, or could you point me in the right direction?
OpenCV tends to be a little lower level than that, but check out MediaPipe.
@@MattBell I wound up buying the $100 version. Now I'm combining what you did with what they did. Your tutorial is extremely helpful. Thank you.
super helpful, thanks!
This is so cool! gj!
Amazing!
Good job! 👍
Bro can I get some help please
thanks man
Will it work on Android?
I am not sure, but I suspect so
i love you
Bro, I get an array index out of range error, it doesn't work.
For sure, bro. Gotta keep those indices in bounds
@@MattBell Bro, if I set the min to 5000 it slows down and I get an array index out of range error. If I set it to 10000 it shows the video but doesn't find any contours.
@@MattBell bro can I get some help
@@MrTaffy90 Hey, in the DrawContour method you might have used int i = 0; try for (int i = 1; i < Points.Length; i++), i.e. start i at 1, not 0.
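For anyone hitting the index errors in this thread, here is a minimal sketch (not the tutorial's code) of filtering contours by a minimum area before indexing them, using OpenCvSharp's FindContours; the class name and the minArea parameter are illustrative assumptions:

using System.Collections.Generic;
using OpenCvSharp;

public static class ContourFilter
{
    // Returns only the contours whose area exceeds minArea; never index the raw
    // FindContours result directly, since it can legitimately be empty.
    public static List<Point[]> FindLargeContours(Mat binaryImage, double minArea)
    {
        Cv2.FindContours(binaryImage, out Point[][] contours, out HierarchyIndex[] hierarchy,
            RetrievalModes.External, ContourApproximationModes.ApproxSimple);

        var result = new List<Point[]>();
        foreach (Point[] contour in contours)
        {
            if (Cv2.ContourArea(contour) > minArea)
                result.Add(contour);
        }
        return result;
    }
}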
Bro What about the colors of the Spheres???? heheh please
particle.GetComponent<Renderer>().material.color = new Color(Random.Range(0.0f, 1.0f), Random.Range(0.0f, 1.0f), Random.Range(0.0f, 1.0f));
#pro
Can it be built into an Android or desktop application?
I've tried using OpenCvSharp, but when it was built into an Android application the results were different from the Editor.
Sorry if my English is weird, I use Google Translate.