![Steven Hong](/img/default-banner.jpg)
- Videos: 40
- Views: 87,992
Steven Hong
United States
Joined Apr 12, 2015
This channel is purely for fun, and spreads the ideas of innovation and creativity!
86-3-60 RAM! RAM! RAM! #battlefield2042
The RAM with a good teammate is my favorite setup in Battlefield 2042.
Mobility is always the key!
Thanks to theamazingatom for being a good gunner!
Views: 157
Videos
Apache Chaos Kill - All Out War #battlefield2042
231 views · 4 months ago
Battlefield 2042 All-Out War Apache: anti-ground combat and aerial combat
[Battlefield 2042] Feel the justice from the sky
77 views · 7 months ago
Battlefield 2042 Attack Helicopter Gameplay
PokéWar Arcade Game with Object-Oriented Programming in Java
729 views · 2 years ago
This is the demo video for the final project of CSSE220 at Rose-Hulman Institute of Technology, taught by Dr. Joe Hollingsworth. The video shows the arcade game I developed with my teammates Qingyu Chai and Nigel Nie using object-oriented programming ideas. I'd like to thank Dr. Joe Hollingsworth for teaching such a great course during the 2018 fall quarter. I also want to thank m...
Privileged Reinforcement Learning for Quadrupedal Robots, Part 3 (Final Report Demo)
688 views · 2 years ago
This is the demo video of the final report for our EECS 545 final project. The goal of the final report demo is to show the final training result of the ANYmal B robot in the RaiSim simulator. We employed Proximal Policy Optimization (PPO) as the training algorithm and rewarded higher forward velocity, less body motion, less joint velocity, and less torque input. We also increased the slope steepn...
Privileged Reinforcement Learning for Quadrupedal Robots, Part 2 (Poster Demo)
1.4K views · 2 years ago
This is the demo video of the poster session for our EECS 545 final project. The goal of the poster session demo is to show the intermediate training result of the ANYmal B robot in the RaiSim simulator. We employed Proximal Policy Optimization (PPO) as the training algorithm and rewarded higher forward velocity, less body motion, less joint velocity, and less torque input. We also did one comparison for e...
Legged Robot behind the Scene at UMich ROAHM Lab
3.3K views · 2 years ago
This is a compilation of robot failure videos I recorded over the past year while working on research projects related to legged robots. Legged robots are awesome, but the key to success is coping with failure. Thanks to the hard work of so many researchers in the community, we can now see legged robots performing these wonderful agile maneuvers. I'd like to thank Dr. Ram Vasudevan f...
Privileged Reinforcement Learning for Quadrupedal Robots, Part 1 (Progress Report Demo)
652 views · 2 years ago
This is the demo video of the progress report for our EECS 545 final project. The goal of the progress report demo is to show the initial training result of the ANYmal B robot in the RaiSim simulator. We employed Proximal Policy Optimization (PPO) as the training algorithm and rewarded higher forward velocity and less torque input. I'd like to thank Dr. Honglak Lee for teaching such a great course during t...
ANYmal Pharos SLAM Demo by Steven
408 views · 2 years ago
This is a demo video of the Pharos SLAM algorithm developed by ANYbotics, running on the ANYmal C robot. Special thanks to the University of Michigan and the ROAHM lab for offering such a wonderful research opportunity. Video: made by Steven Hong in Premiere Pro 2022. Date: February 15th, 2022
Visual Navigation with ORB-SLAM
583 views · 3 years ago
This is a demo video for the Visual Navigation for Autonomous Aerial Vehicles (VNA2V) course taught by Dr. Vasileios Tzoumas at University of Michigan - Ann Arbor, listed as AE740. The video demonstrates the use of ORB-SLAM package with loop closure for visual navigation. I'd like to thank Dr. Vasileios Tzoumas for teaching such a great course during the 2021 winter semester, and "genuine GSI" ...
ANYmal VS Spot, Who’s Going To Win?
4K views · 3 years ago
A walking contest between ANYmal and Spot at the Ford Robotics Building at the University of Michigan - Ann Arbor. Measured Spot top speed: 1.367 m/s. Measured ANYmal top speed: 0.568 m/s. Spot can walk faster with its all-knee configuration, but Boston Dynamics reduced the gait length to constrain its top speed. One hypothesis is the perception limit.
MPC Final Project - ANYmal Walk in MATLAB Simulation
304 views · 3 years ago
This is a demo video for the final project of the Model Predictive Control (MPC) course taught by Dr. Ilya Kolmanovsky at the University of Michigan - Ann Arbor, listed as AE740. It demonstrates ANYmal trotting in MATLAB using an LTV-MPC formulation. We used the Featherstone toolbox to convert the ANYmal URDF into symbolic form, from which we can calculate the dynamics equations using the Euler-Lagrange fo...
Object Localization using YOLOv3 Detector
204 views · 3 years ago
This is a demo video for the Visual Navigation for Autonomous Aerial Vehicles (VNA2V) course taught by Dr. Vasileios Tzoumas at University of Michigan - Ann Arbor, listed as AE740. The video demonstrates using a YOLOv3 detector to detect a teddy bear in the video, then using the teddy bear's bounding box to estimate its location from the camera's location and pose information. I'd like to...
Steven's First Open Water Dive (vlog)
77 views · 3 years ago
It was my first open water dive at Gilboa Quarry after earning my open water certification with Huron Scuba. We fed the fish and explored some wrecks at 60 ft depth. I'd like to thank the awesome instructor, David Bowen, for his excellent open water training, and Huron Scuba for offering such a wonderful course. I'd also like to thank my dive buddies, Eric Pear, Roberto Rivera, and Anna Cronin f...
Pose Estimation for Autonomous Aerial Vehicles
195 views · 3 years ago
This is a demo video for the Visual Navigation for Autonomous Aerial Vehicles (VNA2V) course taught by Dr. Vasileios Tzoumas at University of Michigan - Ann Arbor, listed as AE740. The video demonstrates the pose estimation using 4 different kinds of algorithms, including OpenGV's 5-point, 8-point, and 2-point algorithms for 2D-2D correspondences, and Arun's 3-point method for 3D-3D corresponde...
Feature Tracking for Autonomous Aerial Vehicles
89 views · 3 years ago
Feature Tracking for Autonomous Aerial Vehicles
Trajectory Generation for Autonomous Aerial Vehicles Using Linear Optimization
224 views · 3 years ago
Trajectory Generation for Autonomous Aerial Vehicles Using Linear Optimization
Trajectory Tracking for Autonomous Aerial Vehicles Using Geometric Controller
133 views · 3 years ago
Trajectory Tracking for Autonomous Aerial Vehicles Using Geometric Controller
ST Robotics R12 Robot Demo by Steven
2.4K views · 4 years ago
ST Robotics R12 Robot Demo by Steven
Steven's Robot Arm GUI - Stanford Manipulator
1.3K views · 4 years ago
Steven's Robot Arm GUI - Stanford Manipulator
PID Control to Balance the Tilt-Ball System
69K views · 5 years ago
PID Control to Balance the Tilt-Ball System
H.I.I. Field Trip for Innovation Project of FLL in Kaihua
15 views · 5 years ago
H.I.I. Field Trip for Innovation Project of FLL in Kaihua
H.I.I Innovation Project in FLL for 2014 Nature's Fury Season
36 views · 5 years ago
H.I.I Innovation Project in FLL for 2014 Nature's Fury Season
Nice work! May I ask you what type of package are you using for the anymal locomotion? Is it the champ package or something like that?
Hi, I have a similar project. Can I get the source code, please?
Hello sir, can you tell me what electronic components you used? I'm very interested.
Hello, I took this idea as a project in my studies, but I am facing some difficulties. Can you help me, please?
Yes, I'm glad to help. What issues are you facing?
@@stevenhong7099 I had trouble identifying the components. Could you explain the idea using the Captor? What is the role of the Captor here and how does it work? Thank you very much, sir.
@@aichahadil287 Sorry, I don't quite follow you. What is the Captor you are referring to?
Wow! I only just found this! Fantastic job and many thanks. - David Sands
Glad you enjoyed it!🎉🎉
They have different leg structures. Spot's is good for fast movement, and ANYmal's is good for stability. The result is not surprising.
Okay....XD
So the only reason why spot can't walk on ice is due to lack of grip it seems
Yeah, even humans can't walk well on ice. But I did manage to make Spot walk on ice. The lesson learned is to reduce the speed when its forelegs touch the ice. This reduces the moment and lets the robot walk on the ice, but only for a slow walk. I think engineers could manage to make Spot skate on ice, but do we really want to invest money in that lol.
Biped Robot failure videos :) ruclips.net/video/CGcrKxNJQ_o/видео.html
Easy win
Yeah, I thought it would be a more equal race. Guess I was wrong 😂
Initial training (part 1) video: www.youtube.com/watch?v=oU672... Intermediate training (part 2) video: ruclips.net/video/laf3i8RKxVw/видео.html Final training (part 3) video: ruclips.net/video/1AJfFtLTQmQ/видео.html
Do you plan on putting the learned policy on real hardware?
Putting a learned policy on the physical robot requires a lot of extra work; what we did here is not enough for implementation on the physical robot.
First, we want a trained policy where the robot won't fall or trip in simulation. With the current setup, the robot still has a small chance of falling, so the reward function still requires some tuning.
Second, we need to train for the ANYdrive characteristics. The motors have nonlinear models, and we didn't include this in the simulation. ETH has a good paper on this. arxiv.org/abs/1901.08652
Third, the main idea behind this methodology is the teacher-student policy. What we trained is the teacher policy, while the final policy implemented on the robot is the student policy. The teacher policy has privileged information, which is not available in real life, so we want to use the teacher policy to teach the student policy and implement the student policy on the physical robot. arxiv.org/abs/2010.11251
Fourth, the robot in our lab is ANYmal C, while we used ANYmal B in this video. We started with ANYmal B because it has a simpler dynamics model and trains faster. However, when we migrated from B to C, the robot behaved super weird (crawling on its knees), which means the weights tuned for B are not suitable for C. Since the project deadline was approaching, my team decided to stick with ANYmal B within the scope of EECS 545. If we want to put this policy on real hardware, we need to retrain the ANYmal C model with a new reward function.
Nice work! Would you like to share something more about "Privileged RL" in the title? like your control architecture or anything else about "Privilege"?
Hello, the idea of this work is inspired by the ETH RSL lab, the hub of all ANYmal robots. They came up with the idea of a teacher-student policy, where the teacher has all the information needed to train the locomotion policy, while the student only has proprioceptive sensor inputs. The term "privileged" basically refers to the teacher policy here. ruclips.net/video/9j2a1oAHDL8/видео.html
My lab historically worked on model-based controllers and reachability-based motion planning. As you can tell, we are more dynamics and mathematics people, so we used to "despise" reinforcement learning because of all sorts of issues. However, the excellent results shown by ETH blew our minds, so we started to explore this area as well. They also published a new paper incorporating perception at the beginning of this year, which makes my paper "useless" in terms of initiative and motivation. ruclips.net/video/zXbb6KQ0xV8/видео.html
Can you explain what PID is?
PID is one of the most widely used control techniques. It's simple and doesn't require a model of the system to control the signal. It's feedback control, so it needs sensors to measure the state. There are three parts corresponding to the three letters. P is proportional control: you calculate the error between the current state and the reference state, and the control input is proportional to that error. This is the driving force of the controller. I is integral control: you integrate the error over time, which is just a running sum in a digital system. The I term is dedicated to reducing steady-state error. D is derivative control: you take the difference between two consecutive sensor readings and add it to the control input. This term reacts to the rate of change, for example to prevent overshoot of the control signal.
That's cool! Nice project Steven
Nice hoodie! 😉
You used a distance sensor for this? If so, which one?
No, there is an angle sensor at the bearing
That SpaceX hoodie is epic
Can you see the F-22 Air Force Team sticker on the back side of my laptop, lol. Maybe the FRC hand is more visible
Thank you for sharing.
I think legged robots have big potential in the future. Showing what they can and can't do really helps people become more familiar with this new technology. I'm very happy to see the legged robot community grow.
It looks like this is only using the "D" part that tracks error and reduces the rate of error
4:12 absolutely REVOLUTIONARY gait pattern that BIG ROBO doesn't want you to know about
You know how to come up with a better YouTube title 😂😂😂
Hrx
2:40 Damn it's creepy, it got away like it was scared after getting rekt so many times! I know it can't be scared but still.
Yeah, I was controlling the robot at that time, and I didn't move the robot. It ran away by itself because of the obstacle avoidance
Do you have access to all the software of spot or only some APIs? I know that anymal is not open source and restricted in some areas
Spot is not open-source; only APIs are available. ANYmal is open-source after signing documents with ANYbotics. HyQ is open-source on GitHub. Digit is not open-source, and we wrote our own controller.
Boston Dynamics has the best robots in the world so they don’t give a shit of sharing source codes.😅😅😅
I am a 3rd-year computer engineering student. Could you recommend good material to start learning robotics? I am mainly interested in computer vision, SLAM, and reinforcement learning, and I know the basics of ROS. Thanks in advance; your channel has great projects, keep going.
Robotics is a very broad area, and depending on what you are looking for, the answer may vary a lot. I'm not sure what kind of materials you are looking for, but taking robotics courses is always a good way to learn. After you get into a research lab, you spend more time reading papers and attending seminars than taking courses. The field is growing really fast, I'd say exponentially, so basically I'm also learning new things every day by working on a specific project. If you narrow down the scope of the materials you are looking for, I could probably give you a better answer, but the following is very generic.
1. RSS and ICRA are the two top conferences for robotics. They will be overwhelming for newcomers, but you will learn the state of the art there. I met my current research advisor at an RSS conference.
2. YouTube has a couple of excellent videos posted by famous research groups. For example, I watch a lot of videos from ETH/RSL, MIT CSAIL, UPenn, and UCB.
@@stevenhong7099 I have watched some autonomous-vehicle-oriented courses (a 1-2 hour lecture for each topic, like SLAM, perception, control, and path planning), then read a book about ROS, and after that I just got stuck and didn't know where to go. I am interested in deep learning, especially for computer vision, have done some projects, and want to read about reinforcement learning and get hands-on experience with these topics in robotics. So it would be great if you could suggest college courses (ones available online) with simulation tasks/assignments to get some theoretical and practical knowledge. Sadly, there are no robotics research groups/labs where I am, as the field itself is just new in my country, Egypt; here we focus more on app and web development, as they require no "expensive" hardware. I really appreciate your help, and the conferences you suggested are really great; I have searched for them on YouTube.
@@mohamedel-hadidy4844 Yeah, I understand that the hardware and equipment are expensive. Only big-name universities have funding advantages for getting the most advanced hardware. My undergrad was a small school, so it didn't have a good robotics program either. But for simulation, computer vision, or reinforcement learning, you can work on any good laptop.
1. For deep reinforcement learning, CS285 from UC Berkeley is the best course I've ever seen. Dr. Sergey Levine is one of the best RL professors. rail.eecs.berkeley.edu/deeprlcourse-fa20/
2. For computer vision, CS231n from Stanford is another famous course. Dr. Fei-Fei Li is one of the best professors in this field. cs231n.stanford.edu/
3. For SLAM and perception for drones, I took the VNA2V course at UMich. It was not open-source, but the MIT version is. ocw.mit.edu/courses/aeronautics-and-astronautics/16-485-visual-navigation-for-autonomous-vehicles-vnav-fall-2020/index.htm
4. Also, MIT OpenCourseWare is a good place to learn; you can try finding the topics you want there. ocw.mit.edu/courses/find-by-topic/
@@stevenhong7099 I have a trustworthy GTX 1060 laptop XD, so I believe it will do the job. I have taken a look at the material you sent, and it's all great; I'm planning to start on it as soon as my finals end. Thank you very much, I really appreciate your help <3
'Ok'
🤩🤩🤩
Which engineering major did u pick up bro? Just curious.
I did a Mechanical Engineering major for my Bachelor's and Master's degrees. I also have minors in electrical and computer engineering, robotics, and economics. Currently my focus is on robotics, such as quadrupedal robots.
Woah 🥰, this is exactly who i wanna be. Thanks, ❤️❤️❤️
Wow, please keep making these videos senpai. Cheers!
Wow Steven, congratulations! Very impressive. Did you use stepper motors or BLDC? What is the payload of the arm?
It's a stepper motor with an optical encoder. The payload is only 500 g with the gripper. We only used this for developing kinematics control algorithms; it focuses more on theory than on application.
You have to compare them in rough terrain!
Spot definitely wins without doubt
@@stevenhong7099 And you know that because Spot wins the drag race?
@@cyberistful Spot can literally go everywhere except an ice lake; we tested that for my lab's newest paper. However, ANYmal still couldn't handle stairs with the package we purchased. Also, it's blind locomotion with no perception built in, which means no automatic obstacle avoidance. There are ETH videos showing ANYmal doing this, but only that lab has the code and knows how to run it. We are also struggling with a lot of other things on ANYmal, such as adjusting the walking speed.
@@stevenhong7099 But what I heard from ANYbotics is that they have ready-to-use program code which we can directly modify and adapt to our controller. In other words, they have an open-source community and have open-sourced their code.
@@shengzhiwang2143 I do have access to their source code. But it's 1 million lines of code.
I'm very surprised this video got so many views, since the technique used here is not advanced or cutting-edge compared with the ones in my other videos. It's very simple PID control, and high school students can do it without much difficulty. Update: I'm not exaggerating this. The first time I learned PID was in high school, for a competition called FIRST LEGO League (FLL). You can see a bunch of videos about it on my channel too.
Good work! Do you share the code?
@@oracid It's a really simple PID controller, nothing fancy. The rest is just setting up the PIC/microcontroller with PWM. That's more professional embedded-system coding, as we (engineers) don't use libraries. To clarify, we use registers directly so the code runs faster than using a library and makes more profit for the company.
/*********************************************/
// implement the CONTROLLER (Gc) functions
// PID controller
/*********************************************/
Derror = error - last_error;
last_error = error;
Isum = Isum + error;
if (Isum > 0.0)
    Isum = min(Isum, MAX_ISUM);
else
    Isum = max(Isum, -MAX_ISUM);
u = kp*error + kd*Derror + ki*Isum;
@@stevenhong7099 Thank you very much.
Lol im not even in high school
High school students in Brazil don't even dream of doing this.
Excellent
Well done.
The ball is a paid actor!
Maybe I could spend less money by just buying glue and gluing the ball to the tilt-ball system, lol. But my goal for the video was to show the control algorithm/technique, so no glue :)
@@stevenhong7099 it was a joke :)
@@yaminsiddiqui4690 Engineers have no humor, lol. I've just seen so many YouTube comments saying Boston Dynamics is fake and CGI, but I play with Spot every day. Sometimes it's hard to tell if people are joking or really don't understand the idea.
I see great prospects in your project for shops )
More! Code?
Simple PID, not using any advanced or modern control theory. I'm very surprised so many people watched this, my simplest video, since high school students can also do it.
@@stevenhong7099 I know PID but i like to see how people implement it. Linking D and I or ignoring D and just playing with P and I. I did slow HVAC control code.
@@ConsultingjoeOnline P is the main driving force, I corrects steady-state error, and D prevents overshoot. PI, ID, and PD controllers are also commonly used in industry, depending on the application. To be honest, not a lot of people use full PID because there are too many parameters to tune. For this video, the integral controller by itself actually performed best. I tried P, I, PI, PD, ID, and PID controllers for the same task. Since it's running on a microcontroller, I also had other tuning parameters such as sampling time, max_I_sum, and max_d. The tuning process itself is very tedious. Although PID is the most used control technique, I personally don't like working on those projects. I prefer LQR, MPC, and robust control over PID.
@@stevenhong7099 Very cool. Yeah We would try keeping it simple and just have limits and an error val as I would just do temp or humidity control. Drive a demand percentage into a 3rd party system even.
@@ConsultingjoeOnline
/*********************************************/
// implement the CONTROLLER (Gc) functions
// PID controller
/*********************************************/
Derror = error - last_error;
last_error = error;
Isum = Isum + error;
if (Isum > 0.0)
    Isum = min(Isum, MAX_ISUM);
else
    Isum = max(Isum, -MAX_ISUM);
u = kp*error + kd*Derror + ki*Isum;
okay
awesome, new projects?
Nice work!!! It's a really amazing animation in MATLAB. How did you import the CAD model into the MATLAB axes?
I used an STL file to generate the vertices in MATLAB.
Good video !
Lol, although there's not much real content in this video. It's just a raw demo video for my professor; I didn't even add captions or explanations. I put more effort into other videos though.
@@stevenhong7099 Yes, but it is great. Almost all related videos want the ball in the center... but that is kind of easy, since there is a column at the center, so it is a good stability point. Your approach goes for different stability points, which is a good control strategy and very interesting.
:)
Hello, can you share the code? It would be very helpful.
Lol
Hey, awesome work! I'm building a Stanford arm with an offset wrist right now. Did you use Simulink to import the arm into MATLAB?
Hello, it depends on what you want to do. For this video, I'm just doing forward kinematics; as you can see, I didn't implement inverse kinematics. If you are doing a spherical wrist, you don't need Simulink for the inverse kinematics calculations. However, since you mentioned an offset wrist, it may require Simulink for the nonlinear calculations (depending on the algorithm). In this video I didn't use Simulink, but I do have Simulink models for another robot arm, the ST Robotics R12, which has an offset wrist. However, the algorithm is based on the actual configuration of the robot; I don't think my code would work for other configurations.
@@stevenhong7099 Nice. How did you put your CAD model into the MATLAB figure?
@@Alteregofr Using STL files to create vertices in MATLAB. However, it is a huge pain to rotate and translate the solid models to the desired position.
@@stevenhong7099 So you used a CAD file to demo this? I'm struggling because I use the Peter Corke Toolbox and I can't make it understand the spherical wrist. I set my Stanford arm up very similarly to the puma560 preset, but it understands puma560's wrist, not mine (I check it with robot.isspherical()). Also, can you share the code? Please :)
Wow! This is awesome! I'll definitely take Dr. Fisher's class!
Yeah! Take Dr. Fisher's class. It's the best robotics course offered at Rose-Hulman!