I am missing a side-by-side comparison with the vision system turned off, to see the qualitative improvement in walking behaviour.
This video testifies that this machine, and indeed all machines, have the level of intelligence of a chicken.
@@danielsantini1153 From a motricity point of view, yes, the chicken still wins by a mile.
From a cognitive point of view, all robots have access to "ChatGPT/Gemini/LLama"-level capabilities, which makes them more knowledgeable, and better able to communicate with humans, than any chicken I have seen.
Yann LeCun has mentioned multiple times that "the dream" is to have machines able to "learn to be like a cat". This work is on that line of research.
Looking at results like this ruclips.net/video/REvNnUzVDAA/видео.htmlsi=EdPmU4VdeWGnZV-u we seem to be on a good trajectory.
Thank you for your interest! You can refer to the linked videos for examples of flat-ground walking ruclips.net/video/AISy0hxo6-0/видео.htmlfeature=shared and stair walking ruclips.net/video/MPhEmC6b6XU/видео.htmlfeature=shared. Without visual inputs, the gait either adapts to flat ground or maintains a higher swing height that always anticipates upcoming stairs. These examples should give you a sense of the qualitative change in walking behavior when the vision system is turned off.
Does anyone know how to extract and visualize the points of the elevation map in the MuJoCo simulation shown in this video?
Is that autonomous, or controlled by joystick?
The RL policy receives XY and yaw velocity commands from the user’s joystick to control the robot's heading and speed, while the policy manages the rest of the robot's responses.
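A minimal sketch of what that interface could look like, assuming a typical velocity-command RL setup (the function names, limits, and observation sizes here are illustrative, not from the actual codebase): the joystick supplies only an XY linear velocity and a yaw rate, which get clipped and concatenated with the robot's other observations before being fed to the policy.

```python
import numpy as np

# Hypothetical command limits; real deployments tune these per robot.
def build_command(vx, vy, yaw_rate, max_lin=1.0, max_yaw=1.5):
    """Clip raw joystick axes into the 3-D command vector [vx, vy, yaw_rate]."""
    cmd = np.array([vx, vy, yaw_rate], dtype=np.float32)
    limits = np.array([max_lin, max_lin, max_yaw], dtype=np.float32)
    return np.clip(cmd, -limits, limits)

def policy_input(cmd, proprio_obs, vision_obs):
    """Concatenate the user command with proprioceptive and visual
    observations; the policy decides everything else (gait, foot
    placement, balance) from this combined vector."""
    return np.concatenate([cmd, proprio_obs, vision_obs])
```

The key design point is that the user never commands joints or footsteps directly; the command vector is just a desired body velocity, and the policy fills in all low-level behavior.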