Old trackball mice might be better for this as the mechanics required can be smaller and fit in the void for the ball. In reality you could also just disassemble the mouse completely to get to the bar rollers...
You just need to detect a colored pixel. One optimization: make the screen capture you take as small (in resolution) as possible while still keeping detection accuracy high, so as to reduce the computational load. Maybe as low as 128x128 for a 1:1 display. Let me know how this affects your performance.
The problem with this in a real game is that the more you lower the resolution, the less information you have, which may cause the robot to not detect a head.
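To make the downscaling idea concrete, here's a minimal NumPy sketch; the 4x factor and frame size are arbitrary placeholders:

```python
import numpy as np

def downscale(frame, factor=4):
    # Stride-based downsample: cheap, no interpolation.
    # Fine for large colored blobs, lossy for small targets like heads.
    return frame[::factor, ::factor]

frame = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in for a capture
small = downscale(frame)                          # now 128x128x3
```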
You could try using a 3D printer gantry to move the mouse faster and more precisely; the mouse could be held stationary while the mousepad moves under it.
Great job, as a vision engineer I'm really impressed! I don't use Python/open source in my industrial projects, but maybe OpenCV in C++ is faster? The modelling for discrete automation is really math-heavy, so you've done a great job finding the values by trial and error!
Hello, thank you for your kind words, especially from someone who does computer vision for a living. How would you go about simplifying this model of the system? At the moment it's definitely nonlinear, and wheel slippage will be a problem. Would you first get it into state-space form?
OpenCV-python is just a wrapper around the C++ library so it shouldn't actually be much faster to switch, provided you are using numpy and not native python features
I also do computer vision stuff, and I usually use cv2.split(frame) to get the blue, green and red channels, then cv2.subtract(g, b) to get a mask where only the green stuff remains.
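For anyone following along without OpenCV installed, here's the same channel-subtraction trick in plain NumPy; cv2.subtract saturates at zero, mimicked here with a clip, and BGR channel order is assumed:

```python
import numpy as np

def green_mask(frame):
    # Channel subtraction: bright-green pixels survive, while grey/white
    # background cancels out. Mirrors cv2.subtract(g, b), which clamps
    # at 0 instead of wrapping around.
    b = frame[:, :, 0].astype(np.int16)
    g = frame[:, :, 1].astype(np.int16)
    return np.clip(g - b, 0, 255).astype(np.uint8)

frame = np.zeros((1, 2, 3), dtype=np.uint8)
frame[0, 0] = (0, 255, 0)      # pure green target (BGR)
frame[0, 1] = (255, 255, 255)  # white background pixel
mask = green_mask(frame)
```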
I had a similar idea but with a metal frame: moving one end of the frame to the other does a 360° turn, so you can get more consistent results with a steadier frame for faster flicks.
Really cool! I’d suggest maybe using some small brushless motors. It will be a more complicated wiring setup but likely would be quite a bit quicker and more accurate. I’d also try more securely mounting the mouse to the plate, it seems there’s slight wiggle which could cause lower accuracy. Well done!
Wait, you might be on to something. I am going to look into brushless motors; they would be a lot better. One thing I doubt is whether they get a lot of torque; the motors I used are geared down a lot to do this.
@@ZennySilverhand why brushless motors instead of stepper motors like those used on a CNC machine? Also, take the mouse apart and just use the guts; no need for the extra weight of the case.
To comment on the screenshot thing: you might not need a faster screenshot process if you do trajectory planning on each screenshot, then use PID (or cascade control, my favorite) to track the trajectory. This may also give you an even higher score, since you can plan an optimal path so the mouse never has to slow down and is always moving. I always wanted to do this but with a stereo camera looking at the screen. That may also increase your screenshot refresh rate.
Hmm 🤔 pointing the camera at the screen would be cool. It would increase the "it's just like a human playing" aspect. Yeah, I'm also thinking about doing more advanced/optimal controls in the future; the jerky jerk of the PID is definitely not sufficient if I want the hardware not to implode.
What were you using to take those screenshots? The fastest way known to me is the library "mss". Also you could use something like YOLOv5 (or YOLOv7, but I didn't test that; most likely better overall) to get those screen coords. I know machine learning is a bit overkill to shoot at same-colored spheres, but it's GPU accelerated and probably much faster. Plus you could train it to recognize and shoot players. (I did exactly that.)
Yes, I used mss for this demo. I since found a new library called dxcam that worked really fast. And you're going to like my next video if you know about YOLO algorithms.
3d printers cap out at 5-6 meters per second. They are incredibly accurate too. I suggest modifying your setup to use this tech. Gut the mouse and just have the laser attached to the print head. Lock the Z axis, and set some real records!!
It's really impressive to see your robot hit 118k even with a burnt-out motor. Before I stopped playing FPS my Aimlabs score sat around 103k, though I am no expert by any means; realistically anything higher than 100k isn't necessary for anyone at all.
This is really cool! If your gameplay gets audited, most games will ban you for dual mouse inputs though; spoofing an Arduino as a mouse fixes this. In your case you might need 2 Arduinos + host shields, since you have a physical robot rather than just sending the Arduino serial commands.
Wow, you noticed that 😲. Yeah, I could have done a lot with knowing the position and velocity from the encoders, but once the controls started working the encoders were overkill.
Did you use the root locus method for your PID tuning? You may be able to squeeze a little more performance out of it by optimizing rise and settling time
Wow, root locus that's a term I haven't heard in a minute 😂. No, I did not do root locus because I did not want to find the transfer function of the system. But if I was really engineering I would definitely go down a path like this.
I've been thinking about it. Since you already have it, you could just back out the transfer function of your plant. Then maybe the Ziegler-Nichols method would work well. It's really simple; you just need to find the open-loop gain. No root locus necessary.
@@chain3519 Hmm, intriguing. I guess I can make the assumption this is a second-order system, find the damping ratio and natural frequency by reading the response-time plots, and build the model that way. It's an interesting proposition, much more research-focused than perhaps a YouTube video, but if I come across the data I will send it your way.
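Reading the damping ratio and natural frequency off a response plot is standard second-order identification. A sketch of the textbook formulas, assuming the classic underdamped step-response model:

```python
import math

def damping_from_overshoot(mp):
    # mp: fractional peak overshoot, e.g. 0.25 for 25% past the target.
    # Inverts Mp = exp(-pi*zeta / sqrt(1 - zeta^2)).
    l = math.log(mp)
    return -l / math.sqrt(math.pi ** 2 + l ** 2)

def natural_frequency(peak_time, zeta):
    # Time to first peak: t_p = pi / (w_n * sqrt(1 - zeta^2)).
    return math.pi / (peak_time * math.sqrt(1.0 - zeta ** 2))
```

With overshoot and peak time pulled from a slo-mo video of the mouse, these two numbers pin down the whole second-order model.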
@@KamalCarter Second order would definitely cover it. I think something like that could work, but seeing as you already have something that works... My suspicion is you could back out that stuff with a ruler and an iPhone using slo-mo. Then it's just a quadratic fit in Excel or something (m·a + c·v + k·x = F, so X(s)/F(s) = 1/(m·s² + c·s + k), making sure m matches what is measured on a scale). Ziegler-Nichols probably would have just been a really nice starting point for the tuning you did anyway. Also thought of a solution that is simpler than anything we thought of or you did: use an Arduino Pro Micro that poses as a mouse, have it draw straight lines to the target, and inject noise into the path so it's harder to detect. What you did is a lot more fun though.
@@chain3519 Yeah, I can get odometry data from the servos for the tracking part. Also, your last point is spot on: I could have just spoofed mouse inputs and made it seem as real as possible, but why do that when I can just literally make the mouse move?
Some ways to take screenshots in Python:
- PyAutoGUI: a cross-platform library for automating keyboard and mouse actions; it can also take screenshots.
- PIL: the Python Imaging Library (via its maintained fork, Pillow) provides screen capture on Windows.
- mss: a cross-platform library for fast multi-screen capture.
(Just let me know if you need any starter code for this... I will be glad to help)
What you proved here is the concept, and it works. I have no doubt that you can significantly improve the performance once you start min-maxing each factor. Outside gaming: is there a use case where you want the control to come from an external device (like a mouse/joystick) rather than being programmed via an application in the OS?
This with an arm would probably rip. I think the wheels and traction as cool as they are add a lot of inaccuracies and slowness to the entire system. Still sick though 😁
For people who code: we all know this is not as simple as his explanation makes it look. Great work.
How would this be able to work in an actual fps game? There are so many different colors everywhere?
@@ProdByCari Good question. It would probably require a lot more AI 'training'.
Never tried to make one, but I think the difference is that in Aimlab the target and background already have contrasting colors, so he can 'increase' the contrast. In a real FPS game the target is probably a character that blends into the background, so I assume the program would need to access the enemy character model, render it in a color that contrasts with the background, then continue with the same process. Again, just an assumption.
@@Alex-de7rz I think object recognition + image classification might do the trick. No need to pinpoint the exact boundary of the character model; just draw a box over it and try to hit the center of that box. May need to handle the delay and character movement though.
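Hitting the center of a detection box is a one-liner; leading the target by its estimated velocity to cover capture-and-actuation delay is the only subtlety. The latency value here is a made-up placeholder, and the box format follows the common (x, y, w, h) detector convention:

```python
def aim_point(box, velocity=(0.0, 0.0), latency=0.05):
    # box: (x, y, w, h) from a detector, in screen pixels.
    # velocity: estimated target velocity in pixels/second (assumed known).
    # latency: guessed end-to-end delay from capture to mouse motion.
    x, y, w, h = box
    vx, vy = velocity
    return (x + w / 2 + vx * latency, y + h / 2 + vy * latency)
```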
@@welovethemallthesame1125 In my thesis project, to save time I switched from TensorFlow to AWS Rekognition, and training the model was very easy. Back then I built a video-surveillance system that detected lost personal items. I suppose this application would be very similar, except that when it detects a Valorant character it shoots... Of course, that's easier said than done; there's a lot of trial and error involved.
Please give that robot a better mouse. You're holding it back
I agree. You need to take note of:
1. Whether the mouse has mouse accel on; make sure it's off.
2. How many Gs the mouse can handle in terms of speed. Most modern gaming mice can handle Gs that we mere humans cannot reach.
3. Whether Windows pointer precision is turned off. Leaving it on may force you to do complicated maths.
My point is: replace the mouse and turn off Windows pointer precision. I recommend at minimum a Logitech G304, since that's the cheapest adequate wireless gaming mouse I can think of. The more variables you can control, the merrier. Nice work!
Give that robot some RGB lighting + mouse with RGB lighting, instantly 200x Pro
Nah he doesn't have to
@@thickgirlsneedlove2190 does he want a gamer robot or not? Make that mouse look like a Christmas tree!
@@barjuandavis Windows pointer precision doesn't affect games that use raw input (99% of modern shooters). Also, if anything, he should ideally just turn the sens up a shit ton and make the motors as precise as possible.
Way late, but this was one of the coolest videos I've seen. Good ingenuity
This is true physical: "I Made Real-life Airsoft AIM-ASSIST: Aimbot V3"
3:10 “This took me 2 painstaking months.”
This proves that you can get pro level aim with a cheap wireless mouse and a high latency screen.
@QUAD849 His OpenCV code taking screenshots is slow, compared to "pro gaming 100000Hz RGB monitors".
@ Nooo it isn't slow
No it doesn't; it's quite literally a robot
@@re4796 That's not my point. It proves that your mouse and screen don't matter that much when your movements are perfect. You don't need a fancy gaming mouse to be that good, otherwise the robot would not be able to achieve those scores.
@ I hope to God you're joking
Now I wanna see an aim lab tournament with this against 5 pros lol
Possible improvements:
(physical)
- unshell the mouse and mount its optical sensors directly onto the robot. This will reduce rattle/wiggle and ditch the mass of the shell.
- replace the mouse buttons with relays that can be directly triggered.
- move as many components as possible into a separate stationary breakout box, again, to reduce the mass of the mobile unit. Ideally, your mousedroid will have only motor drives, wheels and the optical sensor(s) within the mobile unit.
(possible complete reconstruction)
- consider keeping the sensor itself completely stationary and inverted (pointing up) and moving the surface instead (a thin piece of roughened plastic that is easily optically trackable and could be moved with a geared/belted X/Y setup).
- or, mount the sensor in the same way as the printing head of a 3D printer over the tracking surface.
(programming)
- keep tabs on the velocity of the mousedroid and don't just take the proximity of the target into account; also try to select targets that are in line with the current direction of motion, to reduce the amount the mouse has to turn.
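That last point (prefer targets in line with the current motion) could look something like the sketch below; the alignment weight is a guess you'd tune on the real robot:

```python
import math

def score(cursor, velocity, target, w_align=0.3):
    # Rank targets by distance, discounted when the target lies in the
    # direction the robot is already moving. w_align is a tuning guess.
    dx, dy = target[0] - cursor[0], target[1] - cursor[1]
    d = math.hypot(dx, dy)
    speed = math.hypot(*velocity)
    if d == 0 or speed == 0:
        return d
    # Cosine of the angle between motion and target direction: 1 = dead ahead
    align = (velocity[0] * dx + velocity[1] * dy) / (speed * d)
    return d * (1.0 - w_align * align)

def pick_target(cursor, velocity, targets):
    return min(targets, key=lambda t: score(cursor, velocity, t))
```

With two equidistant targets, the one ahead of the current motion wins, so the robot keeps its momentum instead of reversing.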
Thanks Mark for watching! These suggestions are great, and some of them I do plan on doing for a V2. This was really a demo, me seeing if I could do it at all. But I don't like how bulky the system is, so I am going to take apart a mouse and make a new, much slimmer and faster system coming soon. Also, I love the name you gave the mouse: mousedroid is fantastic, and I might steal that for a future video!
@@KamalCarter You can also completely ditch the mouse and make the microcontroller pretend to be a mouse.
@@gSys1337 yes, but the whole point is to have a physical aimbot
Dude joined YouTube 15 years ago!
@@topkek5853 A microcontroller can act as an HID device too... so that would be convenient.
Sick idea, even sicker execution!
I had the same idea i just never learned to code cuz i got sick
Awesome work man, very cool project!
Awesome stuff. An algorithm change to allow shots while on the move should give you those extra points. Plan ahead for the optimum path, re-evaluate if targets change, shoot as you pass the target on the way to the next. That's how the humans do it. Never slow down, and estimate distances in terms of time to get there, not physical distance.
You might also want to get an old school rubber ball mouse, toss the ball, and turn the rollers directly.
Using a ball mouse is actually a pretty damn good idea!! It would only require two motors with small torque. It could be extremely fast!
@@niezbo And if you want, you could skip the motors entirely, and instead simply blink LEDs at a rate that suits you.
The movement in a particular direction is detected by two photo-junction devices that look at an LED through a perforated disk. Skip the disk and the LED, and excite the sensors with two of your own LEDs directly to create the movement you want. The challenge is to get the timing right, but once there, the number of moving parts is 0.
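The two-channel trick works because ball mice emit quadrature signals: channel B lags channel A by a quarter cycle, and the sign of the lag encodes direction. A sketch of the pulse sequence you'd drive the spoofing LEDs with (timing omitted):

```python
def quadrature(steps, forward=True):
    # One full quadrature cycle of (channel A, channel B) states.
    # Reversing the sequence reverses the perceived direction of motion.
    seq = [(0, 0), (1, 0), (1, 1), (0, 1)]
    if not forward:
        seq = seq[::-1]
    return [seq[i % 4] for i in range(steps)]
```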
Go a step further. See how the mouse sends the electrical signal through USB and use an arduino to simulate mouse movement by sending the right vdc x and vdc y.
@@Shedding There's (some) danger in that one. It is conceivable that a future gaming product might look for that kind of a hack. So if you do it, it had better emulate all of a gaming mouse's behavior... That said, the standard mouse signaling over USB is well defined and plenty of sample code exists out there.
@@akiko009 yep. There might already be someone who did this already. :)
Move the mousepad, not mouse
Ohhhh!!! Damn smart!!!
It’s cheating
It's easy to say, but the whole program will change if you invert the moving medium. He didn't make the code, he just copied it.
It's even easier to just emulate a mouse with an arduino
@@respise You don't have any idea how GameGuard/anticheat programs work. Arduino devices are an old-school cheat; using one will get you detected easily.
RIP SHOOTERS 1987-2022, THANKS
That was an awesome project! I was super impressed with how well it turned out. I’m sure that you could beat TenZ with a little more optimizing.
I'm happy you enjoyed it. I, too, think I could have beaten TenZ, but it would have taken considerably more work.
I am a backend developer, but I took 2 embedded systems courses, I like this a lot
combining hardware with software with Machine Learning is just amazing
and a complex task
oh man , you are a "Real" software engineer, you had an idea and you made it into reality and kept developing it
congratulations
Thanks for the kind words, and for understanding how much work went into this! Software engineering is something else, really difficult, but these one-off programming challenges are fun.
This was awesome man. Actually so cool
Another genius using his powers for evil lol
This is such an awesome project, came across the video on your tiktok and you’re making super high quality content, keep it up man, and I can’t wait to see how far you can go
Cool video and nice work!
I feel like the PID tuning would be much easier if you focus on singular events. For example, if you let the robot shoot a couple of targets and then make a plot of "error" over time, you might have an easier time making the PID feedback faster while keeping an eye on overshoot and unwanted oscillations.
Probably too much work but it would be cool to see these kind of plots!
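A sketch of pulling overshoot and settling time out of a logged error trace, per target acquisition; the 5% band is a common convention, and the trace format (normalized error starting at 1.0) is an assumption:

```python
import numpy as np

def step_metrics(t, error, band=0.05):
    # error: normalized tracking error, starts near 1.0, should decay to 0.0.
    overshoot = max(0.0, -float(error.min()))   # distance past the target
    outside = np.where(np.abs(error) > band)[0]
    if len(outside) == 0:
        return overshoot, float(t[0])
    i = outside[-1] + 1   # first sample after the last excursion from the band
    t_settle = float(t[i]) if i < len(t) else float(t[-1])
    return overshoot, t_settle
```

Run this over every target in a session and you get one overshoot/settling pair per shot, which is exactly the plot the comment asks for.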
First off, thanks for watching. Yeah, I have some plots of those errors; I didn't think they made for interesting content, but I will write a technical post about that. And when I make a V2 I can add that in.
I think an even bigger improvement could be made by using optimal control techniques instead of PID. In the video, it looks like the robot tends to overshoot some targets when it moves large distances. Maybe using LQR or something, and optimizing around minimizing some objective, would give a much better system response than tuning PID by hand.
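For a flavor of what LQR involves: with SciPy, the gains for a double-integrator model of one mouse axis take a few lines. The model and weights below are illustrative guesses, not identified from the real robot:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator model of one axis: state = [position, velocity].
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])  # penalize position error more than velocity
R = np.array([[0.1]])     # penalty on control effort

P = solve_continuous_are(A, B, Q, R)   # solve the Riccati equation
K = np.linalg.inv(R) @ B.T @ P         # u = -K x is the optimal feedback law
```

Unlike hand-tuned PID, the trade-off between aggressive motion and overshoot lives entirely in the Q and R weights.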
I just posted this elsewhere; I see this would have been the right place.
CNC servo motors are controlled with 3 PIDs: position -> PID -> speed -> PID -> torque -> PID -> duty cycle. Sounds complicated, but it makes tuning much easier in the end.
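A minimal sketch of that cascade; the gains are placeholders, and in a real drive each inner loop would run at a higher rate than the loop above it:

```python
class PID:
    def __init__(self, kp, ki=0.0, kd=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i = 0.0       # integral accumulator
        self.prev = 0.0    # previous error, for the derivative term

    def step(self, error, dt):
        self.i += error * dt
        d = (error - self.prev) / dt
        self.prev = error
        return self.kp * error + self.ki * self.i + self.kd * d

# Cascade: each loop's output becomes the next loop's setpoint.
pos_pid = PID(4.0)
speed_pid = PID(2.0, ki=0.5)
torque_pid = PID(1.0)

def cascade_step(target_pos, pos, speed, torque, dt):
    speed_sp = pos_pid.step(target_pos - pos, dt)
    torque_sp = speed_pid.step(speed_sp - speed, dt)
    duty = torque_pid.step(torque_sp - torque, dt)
    return duty
```

Tuning goes inside-out: tune the torque loop first, then speed, then position, which is why the cascade ends up easier than one big PID.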
Very cool! The fastest way to get a screenshot is to copy directly from the game’s back buffer by hooking the Present function in the D3D render pipeline. This involves writing a small library and injecting the DLL into the game’s process. A much simpler method involves using the Windows GDI API. It’s not as fast, but you can still reach a few hundred FPS. You’d probably be able to achieve much higher scores by improving your kinematics. Directly driving two axes rather than using omniwheels will reduce slipping and allow you to increase acceleration.
Wow, thanks for watching and thanks for the information. The screenshotting tips sound really helpful and I will look into them. My V2 will still use wheels, but down the line I definitely want to try a 2-axis gantry.
Does that mean a kernel-level anti-cheat would certainly detect that?
The many ways this can still be improved make me think about the extent of robots' and AI's powers. And bro just casually made it.
Legends say even the game developers approved him for his innovative idea.
@kamal carter: I love the hardware aspect to this project, but I see the microcontroller you're using and I was wondering what the point of the mouse was? Couldn't we just send HID mouse/keyboard commands from the Microcontroller to emulate the mouse? It would remove several of the variables right?
Yeah, I really wasn't trying to do it the best way; I just wanted to see if this idea could work with limited resources. The better way is definitely to spoof the mouse inputs.
And people thought learning ML via C++ was tough. Your dedication is insane, keep up the good work.
Whaddup Carter! Found this video on TikTok and I'm super glad I found the channel. Super excited to watch more of your content.
Use a gaming mouse with a good switch and a good sensor. The higher DPI will help the robot's tracking. By using inhumanly high sensitivity you can get the robot to aim with little (but precise) movement.
Yeah bro, also higher sens allows the mouse to hit a target without moving a long distance, which will speed it up a bit.
First of all, this creation is fantastic! I got the exact same idea myself about a year ago. If you are still "working" on this, or will in the future, I have a thought (and to point out, I have close to no expertise in this area). Would it be an option to use a trackball mouse and wire the controller board to the two sensors? That way you could input the exact number of movements (like a stepper motor?) and also counteract the weight of the device itself??? Just something I came to think about as I stumbled on this video.
Awesome work. Good video. 🙂👌
Yeah, I plan on using a trackball in the future; I've been so busy with other things, but thank you for the suggestion.
wow, what a nice piece of robot you made 😳👍
have a few ideas for the V2 if you want 😊
Let me hear them! I have ideas in my head and am working on some things but would love to hear what others say.
how can a human being be this creative
And this was the humble beginning of Mr. Glass
0:01 "whole round of ammo" 💀
Glad to see you're back creating content, hopefully on a more regular basis.
would love to see you develop this further
Not a Python developer, but have you tried the Pillow module to capture just the center of the screen instead of the whole screen? I imagine there's a lot of dead space around the edges that could be cropped to make the image smaller for faster encoding. Possibly faster to use PNG encoding, but not 100% sure on that.
from PIL import ImageGrab

# bbox is (left, top, right, bottom) in screen pixels
ss_region = (300, 300, 600, 600)
ss_img = ImageGrab.grab(bbox=ss_region)
ss_img.save("SS3.png")
It's like watching a video from Stuff Made Here. Similar style, talking, etc.
I think that you could get a couple more points by optimizing the targeting algorithm. Instead of targeting the closest circle, you could turn this into a traveling salesman problem and select the shortest path.
Because there are only 3 circles visible at once, there are only 6 (3!) possibilities.
You could add those paths into an array, sort it by total distance, pick the shortest path, and repeat after every hit.
Also, tuning the values for the mouse sounded like something you could turn into an ML project.
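A rough sketch of that brute-force shortest-path idea in Python. The cursor position and circle coordinates are made up for illustration; with only three targets, exhaustively checking all 3! orderings is instant:

```python
import math
from itertools import permutations

def best_order(cursor, targets):
    """Try every visiting order of the targets and keep the one
    with the smallest total travel distance, starting from the
    current cursor position. 3 targets -> only 3! = 6 orderings."""
    best, best_len = None, math.inf
    for order in permutations(targets):
        length, prev = 0.0, cursor
        for t in order:
            length += math.dist(prev, t)
            prev = t
        if length < best_len:
            best, best_len = order, length
    return list(best), best_len

# Hypothetical screen coordinates for the cursor and three circles.
cursor = (0, 0)
targets = [(10, 0), (0, 10), (10, 10)]
order, total = best_order(cursor, targets)
```

Here the greedy "closest circle first" choice and the optimal order differ: visiting the two near circles first and the diagonal one last gives a total path of 30 instead of roughly 34.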
I just heard about this. Pretty impressive skills you are showing off. Great work!
Thank you for watching. More content to come!
"The crux of it was, I was changing three numbers around for days" - yep, sounds about right.
Great work! A couple of suggestions:
First, the OpenCV library in Python is easy to use and set up, but if you manage to code it in C++ you would improve the performance.
Second, as far as the PID controller goes: I don't know the gains you set, but it's important to have the correct sample time (for you, the frequency at which you execute the PID), since it influences the effectiveness of the control. Moreover, given your case, it might be enough to have a PD controller, which basically has the integral gain set to zero (or just a very small value compared to the other two). The proportional part gives you a fast response while the derivative keeps the mouse from overshooting the target.
However, again, great work! I love it!
Edit: I realized too late that the video is old..lol
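That PD suggestion fits in a few lines. A toy sketch, assuming made-up gains, a unit-mass cursor model, and a 10 ms sample time; none of these values come from the video:

```python
def pd_step(error, prev_error, kp, kd, dt):
    """One PD update: the proportional term gives a fast response,
    the derivative term damps the approach so the cursor does not
    overshoot the target. dt is the loop period (the controller's
    sample time) and must match the rate the loop really runs at."""
    return kp * error + kd * (error - prev_error) / dt

# Toy simulation of a 1-D cursor driven toward a target.
# Gains and dt are illustration values, not tuned ones.
kp, kd, dt = 4.0, 4.0, 0.01
target, pos, vel = 100.0, 0.0, 0.0
prev_error = target - pos
for _ in range(1000):
    error = target - pos
    vel += pd_step(error, prev_error, kp, kd, dt) * dt  # control acts as acceleration
    pos += vel * dt
    prev_error = error
```

With these particular gains the loop is roughly critically damped, so the simulated cursor settles on the target without ringing; cranking kp without kd brings back the overshoot the comment warns about.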
@@111Crytek Hey, I love suggestions. The video is old, but there's definitely room for improvement.
Bro, you have potential on YouTube, keep making these bangers
You could strip the mouse down to just its sensor and board for a smaller footprint. It'd probably be more accurate with its reduced weight minimizing inertia. You could also suspend it in place and have a small sliding base under the sensor.
Wow... what a nice piece of wheels... cleverrrr ❤
And also, nice coding!
Great robot man. Great video pacing and editing too
Old trackball mice might be better for this, as the mechanics required can be smaller and fit in the void for the ball. In reality you could also just disassemble the mouse completely to get to the roller bars...
put that robot on a gaming chair it will hit 200k no cappp
Awesome job, man. Keep it up. I'd like to see more stuff like that
Nice 😂 nice Idea. would like to have something like that too 😂
when the terminators attack we gotta call tenz
You just need to detect a colored pixel. One optimization is the size of the screen capture you take (resolution): make it as small as possible while still keeping detection accuracy high, so as to reduce the computation load. Maybe as low as 128x128 for a 1:1 display.
Let me know how this may affect your performance.
The problem with this in a real game is that the more you lower the resolution, the less information you have, which may cause the robot to not detect a head.
You could try using a 3d printer gantry to move the mouse faster and more precisely, the mouse could be held stationary while the mousepad moves under it
good idea!
Great job, as a vision engineer I'm really impressed! I don't use Python/open source in my industrial projects, but maybe OpenCV in C++ is faster?
As for the modeling, the discrete automation is really math-heavy; you've done a great job finding the values by trial and error!
Hello, thank you for your kind words, especially from someone who does computer vision for a living. How would you go about simplifying this model of the system? At the moment it's definitely nonlinear, and wheel slippage will be a problem. Would you first get it into state-space form?
Totally agreed, C++ will make a better program in terms of speed, and of course an AMAZING PROJECT! Love it!
OpenCV-Python is just a wrapper around the C++ library, so it shouldn't actually be much faster to switch, provided you are using NumPy and not native Python features.
@@chawakornchaichanawirote1196 Correct, the CV was very fast thanks to the Python wrapper.
I swear, engineers are the people we shouldn't be messing with
Great job. I had this same idea a few months ago but moved it down on my list. Using wheels is a great idea. Again, great job.
I also do computer vision stuff, and I usually cv2.split(frame) to get the blue, green, and red channels, then build the mask with cv2.subtract(g, b) so that only the green stuff remains.
I find it faster than an HSV threshold; I work on computer vision for FRC competitions.
Then you can blur or erode the mask to filter out noise, then get the contours, the centers of the contours, and so on.
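A toy, pure-Python version of that split/subtract trick; real code would run cv2.split and cv2.subtract on NumPy frames, and the pixel values and threshold here are invented:

```python
def green_mask(image, threshold=50):
    """Mimic cv2.subtract(g, b): for each pixel keep max(g - b, 0)
    (cv2.subtract saturates at zero rather than wrapping), then
    threshold so only strongly-green pixels survive.
    `image` is a list of rows of (b, g, r) tuples."""
    return [[255 if max(g - b, 0) > threshold else 0
             for (b, g, r) in row]
            for row in image]

# Tiny 2x3 test "frame": one green target pixel amid gray/red background.
frame = [
    [(120, 120, 120), (200, 40, 30), (10, 220, 15)],
    [(120, 130, 125), (115, 125, 120), (118, 122, 119)],
]
mask = green_mask(frame)
```

Only the pixel whose green channel clearly beats its blue channel ends up white in the mask; gray background pixels, where g - b is near zero, drop out, which is the whole appeal over a plain brightness threshold.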
I had a similar idea, but with a metal frame: when you move one end of the frame to the other it does a 360 turn, so you can get more consistent results with a steadier frame for faster flicks.
Kamal, you could apply for NASA or similar. You've a natural interest, you can solve the problem, you can get the results. You achieve.
Really cool! I’d suggest maybe using some small brushless motors. It will be a more complicated wiring setup but likely would be quite a bit quicker and more accurate. I’d also try more securely mounting the mouse to the plate, it seems there’s slight wiggle which could cause lower accuracy. Well done!
Wait, you might be on to something. I am going to look into brushless motors; they would be a lot better. One thing, though: I doubt they get a lot of torque. The motors I used are geared down a lot to do this.
@@KamalCarter nah they’ll have lots of torque. That’s really what they’re for, speed and power.
@@ZennySilverhand Why brushless motors instead of stepper motors like those used on a CNC machine? Also, take the mouse apart and just use the guts; no need for the extra weight of the case.
I want to see it vs. people in game. Do you use a cam to track it or just record the screen?
This whole video was entertaining af
To comment on the screenshot thing: you might not need a faster screenshot process if you do trajectory planning on each screenshot, then use PID (or cascade control, my favorite) to track the trajectory. This may also give you an even higher score, as you can plan an optimal path so you never have to slow the mouse down; it's always moving. I always wanted to do this, but with a stereo camera looking at the screen. That may also increase your screenshot refresh rate.
Hmm 🤔 pointing the camera at the screen would be cool. It would increase the "it's just like a human playing" aspect. Yeah, I'm also thinking about doing more advanced/optimal control in the future; the jerkiness of the PID is definitely not sufficient if I want the hardware not to implode.
@@KamalCarter can't wait to see it :P
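The trajectory-planning idea in this thread can be sketched in a few lines: generate evenly spaced waypoints toward the target and let the controller chase a moving setpoint instead of jumping straight to the final error. The coordinates and step size are made up:

```python
import math

def plan_trajectory(start, end, step):
    """Return evenly spaced waypoints from start to end, roughly
    `step` pixels apart. Feeding these to a PID one at a time keeps
    the commanded velocity steady instead of demanding one huge jump."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    dist = math.hypot(dx, dy)
    n = max(1, int(dist // step))          # number of waypoints
    return [(start[0] + dx * i / n, start[1] + dy * i / n)
            for i in range(1, n + 1)]

# Hypothetical cursor-to-target move of 50 px, sampled every ~10 px.
waypoints = plan_trajectory((0, 0), (30, 40), step=10)
```

Between two screenshots the controller can keep tracking the remaining waypoints, which is why this can relax the capture-rate requirement the original comment mentions.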
What were you using to make those screenshots? The fastest way known to me is the library "mss". Also, you could use something like YOLOv5 (or YOLOv7, but I didn't test that; most likely better overall) to get those screen coords. I know machine learning is a bit overkill for shooting at same-colored spheres, but it's GPU-accelerated and probably much faster. Plus you could train it to recognize and shoot players. (I did exactly that.)
Yes, I used mss for this demo. I found a new library called dxcam that works really fast. And you're going to like my next video if you know about YOLO algorithms.
3D printers cap out at 5-6 meters per second, and they are incredibly accurate too. I suggest modifying your setup to use this tech: gut the mouse and just have the laser attached to the print head. Lock the Z axis and set some real records!!
Did you use Brett Beauregard's PID library or made your own? :)
I did my own PID. I have done enough PID tuning to not need a library unless I have a lot of gains going.
Really cool project! Thanks for sharing!
I've never seen the subscribe button glow like that when you said "like and subscribe" lol, that was cool
The pro player training session was insane
It's really impressive to see your robot hit 118k even with a burnt-out motor. Before I stopped playing FPS games my Aim Lab score sat around 103k. Though I am no expert by any means, realistically anything higher than 100k isn't necessary for anyone at all.
Props for not using this strong aim bot in game :)
Thanks man. But you might not like what I do in the next video.
Honestly, this is crazy.
This is why you shouldn't mess with engineers
Yo this awesome project! Great work
Thank you so much glad you liked it!
Bruh, why didn't YouTube recommend this much sooner? I subbed for the work you put into this video.
Actually, people didn't win. If you gave the robot a better mouse and faster click and rotation speed, it could dominate e-sports players too.
This is really cool! If your gameplay gets audited, most games will ban you for dual mouse inputs though; spoofing an Arduino as a mouse fixes this. In your case you might need 2 Arduinos + host shields, since you have a physical robot rather than just sending the Arduino serial commands.
I welcome our robot masters.
You went the extra mile using those motors instead of just modifying the mouse, lol. That was impressive.
Did you explore moving the surface below the mouse to make the pointer move?
ur a genius, motivation 101
Thank you for the kind words
This video deserves 1 million view
I’ve done 91,876; definitely not as good as I used to be, but I’m 40 years old with arthritis.
That’s what 30 years of video games gets you.
Why did you cut the wheel encoder cable on the motors? You could achieve better wheel speed control by using that.
Wow, you noticed that 😲. Yeah, I could have done a lot knowing the position and velocity from the encoders, but once the controls started working the encoders were overkill.
That’s brilliant great work
I have subbed, bro keep it up!!
Thanks for watching and subbing
An auto-aiming mousepad seems like a pretty good idea
That way the motors wouldn't have to bear the weight of the mouse
amazing, this video deserves a like and a sub
Came from TikTok ur content is right up my alley🤙
Appreciate all my TikTok viewers hopefully I can keep producing content you enjoy.
Where is V2 ?
Actually sounds like a great idea
He already made it. It's the second most recent video, called undetectable AI robot aimbot
I believe Sentdex (on YT) has made a GTA bot. He also ran into slow screen capture but solved it somehow.
Multiplayer's days are numbered
Yes, do more videos about this; it could destroy P2W games
That was fantastic! I'd love a 2nd channel where you dive into the code of things. Subbed!
Did you use the root locus method for your PID tuning? You may be able to squeeze a little more performance out of it by optimizing rise and settling time
Wow, root locus, that's a term I haven't heard in a minute 😂. No, I did not do root locus because I did not want to find the transfer function of the system. But if I were really engineering this, I would definitely go down a path like that.
I've been thinking about it. Since you already have it, you could just back out the transfer function of your plant. Then maybe the Ziegler-Nichols method would work well. It's really simple; you just need to find the ultimate gain. No root locus necessary.
@@chain3519 Hmm, intriguing. I guess I can assume this is a second-order system, find the damping ratio and natural frequency by reading the response-time plots, and build the model that way. It's an interesting proposition, much more research-focused than perhaps a YouTube video, but if I come across the data I will send it your way.
@@KamalCarter Second order would definitely cover it. I think something like that could work, but seeing as you already have something that works...
My suspicion is you could back out that stuff with a ruler and an iPhone using slo-mo. Then it's just a quadratic fit in Excel or something (ma + cv + kx = F, so (s^2 + (c/m)s + k/m)X(s) = F(s)/m, making sure m matches what is measured on a scale).
Ziegler-Nichols probably would have just been a really nice starting point for the tuning you did anyways.
Also, I thought of a solution that is simpler than anything we thought of / you did: use an Arduino Pro Micro that poses as a mouse. Have it draw straight lines to the target and inject noise into the path so it's harder to detect. What you did is a lot more fun, though.
@@chain3519 Yeah, I can get odometry data from the servos for the tracking part. Also, your last point is spot on; I could have just spoofed mouse inputs and made them seem as real as possible, but why do that when I can just literally make the mouse move?
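For anyone following this thread, the classic Ziegler-Nichols closed-loop rules discussed above are just a lookup table once you've measured the ultimate gain and oscillation period. A sketch with made-up numbers (the real plant's ku and tu would have to be measured):

```python
def ziegler_nichols_pid(ku, tu):
    """Classic Ziegler-Nichols closed-loop tuning: raise a P-only
    gain until the loop oscillates steadily, record that ultimate
    gain ku and the oscillation period tu (seconds), then read the
    PID gains off the standard table:
      Kp = 0.6*Ku, Ti = Tu/2, Td = Tu/8."""
    kp = 0.6 * ku
    ki = 1.2 * ku / tu        # kp / Ti with Ti = tu / 2
    kd = 0.075 * ku * tu      # kp * Td with Td = tu / 8
    return kp, ki, kd

# Hypothetical measurements: ultimate gain 10, oscillation period 0.4 s.
kp, ki, kd = ziegler_nichols_pid(10.0, 0.4)
```

These gains are a starting point, not an endpoint; ZN tunings tend to be aggressive, so the hand-tuning done in the video would still be the final step.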
A few ways to take screenshots in Python:
Using the PyAutoGUI library: PyAutoGUI is a cross-platform Python library that can automate keyboard and mouse actions, including taking screenshots.
Using the PIL library: the Python Imaging Library (PIL) provides a way to capture the screen on Windows, but it requires installing the third-party package Pillow.
Using the mss library: MSS (Multi-Screen Shot) is a cross-platform library that can be used to take screenshots in Python.
(Just let me know if you need any starter code for this... I will be glad to help.)
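Since starter code was offered: here's a minimal mss sketch. The 1920x1080 resolution and 300 px region are assumptions, and the try/except lets the region math run even where mss isn't installed or no display is available:

```python
def centered_region(screen_w, screen_h, size):
    """Build an mss-style region dict for a square of `size` pixels
    centered on a screen of the given resolution."""
    return {
        "left": (screen_w - size) // 2,
        "top": (screen_h - size) // 2,
        "width": size,
        "height": size,
    }

region = centered_region(1920, 1080, 300)  # hypothetical 1080p display

try:
    import mss  # third-party: pip install mss
    with mss.mss() as sct:
        shot = sct.grab(region)  # raw BGRA pixels of just that region
except Exception:
    shot = None  # mss missing or no display; the region math above still works
```

Grabbing only a small centered region is the same dead-space-cropping idea mentioned earlier in the thread, and mss accepts the region as a plain dict like this.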
That is amazing!!!
Nicely done.
Fun project, well and interestingly done!
A human trains their whole life in Aim Lab; the robot almost beats them after a few days of adjustments.
What you proved here is the concept, and it works. I have no doubt that you can significantly improve the performance once you start min-maxing each factor. Outside gaming, is there a use case where you want the control to come from an external device (like a mouse/joystick) rather than being programmed via an application in the OS?
No real use case, just thought it would be fun. Yeah, it could be significantly improved, but for this version I liked how it all came together.
This with an arm would probably rip. I think the wheels and traction, as cool as they are, add a lot of inaccuracy and slowness to the entire system. Still sick though 😁
Sometimes your opponent is having a really good day.
This guy's content is underrated 🙌👏👏
Idk why he stopped uploading tho
A fast way to screenshot in Python would probably be dxcam; it's pretty much the fastest way to take screenshots.