- 22 videos
- 70,099 views
General Robotics Lab
Joined 11 May 2021
General Robotics Lab is directed by Prof. Boyuan Chen at Duke University.
Lab website: generalroboticslab.com/
Boyuan Chen: boyuanchen.com/
Automated Global Analysis of Experimental Dynamics through Low-Dimensional Linear Embeddings
Project website (paper, code, video): generalroboticslab.com/AutomatedGlobalAnalysis
Abstract: Dynamical systems theory has long provided a foundation for understanding evolving phenomena across scientific domains. Yet, the application of this theory to complex real-world systems remains challenging due to issues in mathematical modeling, nonlinearity, and high dimensionality. In this work, we introduce a data-driven computational framework to derive low-dimensional linear models for nonlinear dynamical systems directly from raw experimental data. This framework enables global stability analysis through interpretable linear models that capture the underlying system structure. Our approach ...
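The abstract's core pipeline (fit a linear model in a low-dimensional space, then read global stability off its spectrum) can be illustrated with a deliberately simplified DMD-style least-squares fit. This is a hypothetical sketch on a toy linear system, not the paper's method, which learns the embedding itself with neural networks from raw data:

```python
# Hypothetical DMD-style sketch: fit a discrete-time linear model
# z_{k+1} = A z_k to snapshot data by least squares, then check stability
# from the spectrum of A. The toy system and all numbers are made up.
import numpy as np

# Synthetic "experimental" trajectory: forward-Euler rollout of a damped
# linear oscillator (stands in for raw measurement data).
dt = 0.01
A_true = np.array([[0.0, 1.0],
                   [-1.0, -0.1]])
steps = 500
X = np.zeros((2, steps))
X[:, 0] = [1.0, 0.0]
for k in range(steps - 1):
    X[:, k + 1] = X[:, k] + dt * (A_true @ X[:, k])

# Least-squares fit over snapshot pairs (z_k, z_{k+1}).
Z0, Z1 = X[:, :-1], X[:, 1:]
A_fit = Z1 @ np.linalg.pinv(Z0)

# Global stability of the fitted linear model: every eigenvalue of the
# discrete-time map must lie strictly inside the unit circle.
rho = np.max(np.abs(np.linalg.eigvals(A_fit)))
print("spectral radius:", rho)  # < 1 here, so the fitted model is stable
```

On nonlinear data one would first lift the measurements through a learned embedding before this linear fit; the sketch keeps the identity embedding so the least-squares step stays in focus.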
Views: 8,858
Videos
GUIDE: Real-Time Human-Shaped Agents
442 views · 2 months ago
Project website (paper, code, video): generalroboticslab.com/GUIDE Abstract: The recent rapid advancement of machine learning has been driven by increasingly powerful models with the growing availability of training data and computational resources. However, real-time decision-making tasks with limited time and sparse learning signals remain challenging. One way of improving the learning speed ...
Automated Discovery of Continuous Dynamics from Videos
453 views · 2 months ago
Project website (paper, code, video): generalroboticslab.com/SmoothNSV Abstract: Dynamical systems form the foundation of scientific discovery, traditionally modeled with predefined state variables such as the angle and angular velocity, and differential equations such as the equation of motion for a single pendulum. We propose an approach to discover a set of state variables that preserve the ...
HUMAC: Enabling Multi-Robot Collaboration from Single-Human Guidance
378 views · 2 months ago
Project website (paper, code, video): generalroboticslab.com/HUMAC Abstract: Learning collaborative behaviors is essential for multi-agent systems. Traditionally, multi-agent reinforcement learning solves this implicitly through a joint reward and centralized observations, assuming collaborative behavior will emerge. Other studies propose to learn from demonstrations of a group of collaborative...
The Duke Humanoid: Design and Control For Energy Efficient Bipedal Locomotion Using Passive Dynamics
3.1K views · 2 months ago
Project website (paper, code, video): generalroboticslab.com/DukeHumanoidv1 Abstract: We present the Duke Humanoid, an open-source 10-degrees-of-freedom humanoid, as an extensible platform for locomotion research. The design mimics human physiology, with minimized leg distances and symmetrical body alignment in the frontal plane to maintain static balance with straight knees. We develop a reinf...
WildFusion: Multimodal Implicit 3D Reconstructions in the Wild
617 views · 2 months ago
Project website (paper, code, video): generalroboticslab.com/WildFusion Abstract: We propose WildFusion, a novel approach for 3D scene reconstruction in unstructured, in-the-wild environments using multimodal implicit neural representations. WildFusion integrates signals from LiDAR, RGB camera, contact microphones, tactile sensors, and IMU. This multimodal fusion generates comprehensive, contin...
CREW: Facilitating Human-AI Teaming Research
525 views · 4 months ago
Project website (paper, code, video): generalroboticslab.com/CREW Abstract: With the increasing deployment of artificial intelligence (AI) technologies, the potential of humans working with AI agents has been growing at a great speed. Human-AI teaming is an important paradigm for studying various aspects when humans and AI agents work together. The unique aspect of Human-AI teaming research is ...
[VCC-ALIFE 2024] Text2Robot: Evolutionary Robot Design from Text Descriptions
828 views · 5 months ago
Virtual Creature Competition Submission: Text2Robot: Evolutionary Robot Design from Text Descriptions. Duke General Robotics Lab. Authors: Ryan P. Ringel∗, Zachary S. Charlick∗, Jiaxun Liu∗, Boxi Xia, Boyuan Chen. (* denotes equal contribution) Full Project website (paper, code, hardware manual, video): generalroboticslab.com/Text2Robot/ Abstract: Robot design has traditionally been costly and ...
ClutterGen: A Cluttered Scene Generator for Robot Learning
348 views · 5 months ago
Project website (paper, code, video): generalroboticslab.com/ClutterGen Abstract: We introduce ClutterGen, a physically compliant simulation scene generator capable of producing highly diverse, cluttered, and stable scenes for robot learning. Generating such scenes is challenging as each object must adhere to physical laws like gravity and collision. As the number of objects increases, finding ...
Text2Robot: Evolutionary Robot Design from Text Descriptions
1.3K views · 5 months ago
Project website (paper, code, hardware manual, video): generalroboticslab.com/Text2Robot/ Abstract: Robot design has traditionally been costly and labor-intensive. Despite advancements in automated processes, it remains challenging to navigate a vast design space while producing physically manufacturable robots. We introduce Text2Robot, a framework that converts user text specifications and per...
Perception Stitching: Zero-Shot Perception Encoder Transfer for Visuomotor Robot Policies
314 views · 5 months ago
Project website (paper, code, video): generalroboticslab.com/PerceptionStitching Abstract: Vision-based imitation learning has shown promising capabilities of endowing robots with various motion skills given visual observation. However, current visuomotor policies fail to adapt to drastic changes in their visual observations. We present Perception Stitching that enables strong zero-shot adaptat...
SonicSense: Object Perception from In-Hand Acoustic Vibration
933 views · 5 months ago
Project website (paper, code, video): generalroboticslab.com/SonicSense Abstract: We introduce SonicSense, a holistic design of hardware and software to enable rich robot object perception through in-hand acoustic vibration sensing. While previous studies have shown promising results with acoustic sensing for object perception, current solutions are constrained to a handful of objects with simp...
Robot Studio Class - Tutorial Video on Fusion 360 Export
244 views · 10 months ago
Tutorial video on Fusion 360 design history export from Robot Studio class at Duke University. Course website: generalroboticslab.com/RobotStudioSpring2024/index.html Code: github.com/general-robotics-duke/FusionHistoryScript Credit: Teaching Assistant: Zach Charlick
Policy Stitching: Learning Transferable Robot Policies
746 views · 1 year ago
Conference on Robot Learning 2023 (CoRL 2023). Project Website: generalroboticslab.com/PolicyStitching/ Abstract: Training robots with reinforcement learning (RL) typically involves heavy interactions with the environment, and the acquired skills are often sensitive to changes in task environments and robot kinematics. Transfer RL aims to leverage previous knowledge to accelerate learning of ne...
Discovering State Variables Hidden in Experimental Data
4.3K views · 3 years ago
Project website: www.cs.columbia.edu/~bchen/neural-state-variables/ Abstract: All physical laws are described as relationships between state variables that give a complete and non-redundant description of the relevant system dynamics. However, despite the prevalence of computing power and AI, the process of identifying the hidden state variables themselves has resisted automation. Most data-dri...
Full-Body Visual Self-Modeling of Robot Morphologies
3.3K views · 3 years ago
(Data Collection) Smile Like You Mean It: Driving Animatronic Robotic Face with Learned Models
16K views · 3 years ago
(Demos) Smile Like You Mean It: Driving Animatronic Robotic Face with Learned Models (ICRA 2021)
8K views · 3 years ago
(Overview) Smile Like You Mean It: Driving Animatronic Robotic Face with Learned Models (ICRA 2021)
1.5K views · 3 years ago
(Hardware Animation) Smile Like You Mean It: Driving Animatronic Robotic Face with Learned Models
16K views · 3 years ago
The Boombox: Visual Reconstruction from Acoustic Vibrations
1.5K views · 3 years ago
Visual Perspective Taking for Opponent Behavior Modeling
725 views · 3 years ago
why no positive facial expressions?
When AI discovers this, and uses it to predict human reactions, it will play us perfectly
DAMN MAN YOU ARE UNDERRATED. You deserve at least 100k subs. You got 1 more from me! Do remember us when you get big!
It's not enough to DISPLAY emotions! There HAS to be an active inner state also!!!
I ALWAYS like to keep up with the latest on the tech 😁👍
This is such an underrated paper and video. Thanks for uploading.
Can't access the project website! Please fix
This worked for us in a test just now. Could you try again, or with a different browser or device?
@@boyuan_chen Can't from my end; the browser throws SSL_ERROR_UNRECOGNIZED_NAME_ALERT
@@boyuan_chen something to do with SSL certification
@@boyuan_chen It still doesn't work! "Secure connection failed". Maybe try from outside your university network?
Man I guess even us mathematicians are not safe from AIs automating our job 😂
I thought finite-dimensional Koopman operators cannot express multistability. I'm surprised that the Duffing and magnetic pendulum cases somehow work. Are they truly multistable (asymptotically), and also robust to small perturbations in the state?
Beautiful presentation, by the way.
Great video! I'll be checking out your paper in detail later today. I had a quick glance at it, and I have a few questions and a suggestion. I'm curious: around 13:09, was the change of coordinates that your method learned for the Hodgkin-Huxley model a time-varying change of coordinates? Also, did the change of coordinates depend on the initial condition? Last, to the best of my (weak) understanding, the Hodgkin-Huxley model has inputs. If I'm right, how did you handle those? Also, typically, the condition for "asymptotic" (as opposed to "Lyapunov") stability is that "\dot{V}(x) < 0" (as opposed to "\dot{V}(x) \leq 0"). If you are getting away with a negative semi-definite Lyapunov rate by using an argument based on LaSalle's invariance principle, you should state that in the video and in your paper. FYI, I'm using the definitions as they are given in Khalil's textbook. If my critiques come down to a difference of terminology, please disregard. Also, I love the figures! Very compelling!
Thank you! The Hodgkin-Huxley model we considered does not have inputs; it is the standard model with four states and is given in the paper. The change of coordinates is simply a function of the states, and the resulting system in the new coordinates is a linear time-invariant system. This is a good catch, but it is simply a typo! All of the models learned in the paper have Re(\lambda) < 0 and thus \dot{V} < 0.
@ Wonderful! Thank you very much.
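For readers following this exchange: once the learned model is linear time-invariant with Re(λ) < 0 (a Hurwitz matrix), a strictly negative Lyapunov rate follows directly, with no need for LaSalle's invariance principle. A standard textbook sketch, in the notation of Khalil:

```latex
\dot{z} = A z, \qquad \operatorname{Re}\lambda_i(A) < 0 \quad (A \text{ Hurwitz}).
% Given any Q = Q^\top \succ 0, the Lyapunov equation has a unique solution
% P = P^\top \succ 0:
A^{\top} P + P A = -Q .
% Then V(z) = z^\top P z is a strict Lyapunov function:
\dot{V}(z) = z^{\top}\bigl(A^{\top} P + P A\bigr) z = -z^{\top} Q z < 0
\quad \text{for all } z \neq 0 ,
```

so global asymptotic stability of the origin follows from Lyapunov's direct method alone.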
Nice video, showing complex mathematics in a friendly way. Personally, I think the most important skill to have is an understanding of the dynamical system: building an intuition and then forecasting behaviour. Automating it is an extra bonus in my opinion, which will help reduce the time between understanding the dynamics and forecasting results. But the hard skill here is building the intuition for dynamical systems. Great work, cheers.
0:24 fingerprints pattern...
The Hodgkin-Huxley approximation is huge for neuromorphic computing. Linear approximations mean closed-form solutions are applicable, reducing the number of calculations needed during spiking neural network inference. If you'd like to know more, please get in touch with me.
I'd like to know more.
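To make the closed-form point above concrete (a hypothetical toy example, not tied to any particular neuromorphic system): for linear dynamics dx/dt = Ax, the solution x(t) = e^{At} x(0) is a single matrix computation, replacing thousands of small integration steps:

```python
# Hypothetical toy comparison: closed-form solution of a linear system
# versus step-by-step numerical integration. All numbers are made up.
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])   # a stable 2x2 linear system
x0 = np.array([1.0, 0.0])
t = 3.0

# Closed form via the eigendecomposition: e^{At} = V diag(e^{lambda t}) V^{-1}.
lam, V = np.linalg.eig(A)
x_closed = (V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V) @ x0).real

# Reference: 30,000 forward-Euler steps to reach the same time.
dt = 1e-4
x_euler = x0.copy()
for _ in range(int(t / dt)):
    x_euler = x_euler + dt * (A @ x_euler)

print(x_closed, x_euler)  # the two should agree to a few decimal places
```

For non-diagonalizable matrices one would use a general matrix exponential (e.g. `scipy.linalg.expm`) instead of the eigendecomposition shortcut.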
I don't have to understand all of it, to understand the beauty of it.
Tremendous research, and thanks for sharing it with the global robotics community
Would this be applicable to the Einstein field equations by any chance?
This is so cool! I would love to see how this compares to more conventional RL methods on MDPs or POMDPs, or to PINN approaches (maybe this can also be thought of as a PINN?); it would be especially interesting to compare the network sizes of the various methods. I would be interested in helping integrate this within NVIDIA Isaac Sim for virtual world models. I briefly played with and encountered Koopman analysis when I did a project with wavelet feature representations, and I've also thought about this potential.
dx/dt = f(t) = f(x)
Great video!
This is the area I'm interested in! I'm only a freshman in college, so I was looking everywhere for experts who are doing research in this field. The YouTube algorithm really worked out for me this time
Can you please fix the SSL on your website so that we can take a look at the code?
Thanks for your interest! The code is hosted on our project website: generalroboticslab.com/AutomatedGlobalAnalysis as well as on GitHub: github.com/generalroboticslab/AutomatedGlobalAnalysis. We can access both of them. Can you try again?
@@boyuan_chen Works now, thank you!
This is sick, amazing!
amazing work
So cute and cool!
Scary. Even if I were full of money, I would never buy a robot that only frightens me. 😮
This is really great work and progress made! I look forward to seeing more developments!
it seems like everything has already been invented
all robots must also exist in the parallel three-dimensional world as well as our own
give it some big meaty cheeks
That's sooo cool! I love stuff like this, and I especially like the integration of passive dynamics, since that seems to be often neglected. Heck, I didn't even know that was the word for it until seeing this video... Great work!
can i buy that?
Using the 3D bounding box of objects for positioning with a bin-packing algorithm, or even just using the 2D bounding-box information in the xy-plane to perform bin packing, could solve this problem. Why design such a complex system to address such a basic issue?
This is a very good question! There are several limitations of bin-packing algorithms compared to our method. First, our task is to determine a physics-compliant stable pose for the queried object, even with irregular shapes. The main challenge is finding the desired position in a cluttered environment where collisions are sometimes acceptable, such as in stacking, which is not allowed in packing algorithms. You can refer to some generated scene setups at 2:17. Second, our method also considers the diversity of the generated scenes, which is essential for robot training, whereas packing algorithms always place objects in fixed or heuristic ways. Finally, our method can zero-shot generalize to different queried regions after training, while even the most efficient 2D bin-packing algorithm still requires O(n log n). Moreover, our task operates in 3D with object rotation. Let us know if you have further feedback!
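For contrast with the reply above, here is roughly what the bounding-box baseline from the original question looks like: a hypothetical greedy shelf packer over 2D (w, h) footprints (the function name and all numbers are made up for illustration). Because it forbids any overlap, stacked or leaning placements can never occur, and the initial tallest-first sort is the O(n log n) step mentioned in the reply:

```python
# Hypothetical greedy "shelf" packer over 2D bounding-box footprints.
# This is the kind of baseline the reply contrasts with ClutterGen: it
# forbids overlap entirely, so stacking is ruled out by construction.

def shelf_pack(footprints, region_w, region_h):
    """Greedily place (w, h) footprints left-to-right on shelves.

    Returns a list of (x, y) origins, or None if the region is exhausted.
    """
    # Tallest-first order, so each shelf's height is set by its first item.
    order = sorted(range(len(footprints)), key=lambda i: -footprints[i][1])
    placements = [None] * len(footprints)
    x = y = shelf_h = 0.0
    for i in order:
        w, h = footprints[i]
        if x + w > region_w:                 # current shelf full: start a new one
            x, y, shelf_h = 0.0, y + shelf_h, 0.0
        if y + h > region_h or w > region_w:
            return None                      # no room left in the region
        placements[i] = (x, y)
        shelf_h = max(shelf_h, h)
        x += w
    return placements

print(shelf_pack([(2, 1), (1, 2), (1, 1)], region_w=3, region_h=3))
# → [(1.0, 0.0), (0.0, 0.0), (0.0, 2.0)]
```

Every returned footprint is disjoint from the others, which is exactly why such a packer cannot produce the contact-rich, stacked configurations the reply describes.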
They have knees but don't use them at all. Their joints are too big and blocky, making them look awkward instead of cute. Making them bigger while leaving the hardware the same size might look better. Overall a nice concept, but it still needs a lot of work.
Tremendous, a step forward toward creating the robots we expect from the movies haha, good video
Very insightful.
This is awesome ❤ Lately I've been thinking that I wouldn't want a humanoid robot in the house, but I would definitely go for a small monkey-like robot, like an AI animal companion haha
Humans are monkeys. :-)
awesome
You could also make it produce sounds from the visual clues.
The internal representation(s) of self and the outer world, in which we enact thought experiments such as imagination and anticipation, is present in humans and could be expanded on in future works like this. Also, pursuing goals through action: internal networks that can learn motivation, desires, and repulsion are needed to get to consciousness.
Good stuff!
Incredible.
This might be the discovery of the century; the potential is almost infinite!
Incredible discovery! Can't wait to follow the evolution of your research!! Your AI seems very promising 😍
I just discovered this concept on le Journal de l'Espace, and indeed it's simply incredible!
@@redpandachannel7981 Same here! And thanks to Quentin and his team, if he passes by. Studying this AI's results opens the way to incredible prospects and major discoveries that could revolutionize our understanding of the world and give Humanity the keys to suddenly expand into a happier future with many potentialities... Let's dream!!
This is insane. Congratulations!
I fell in love with this concept back when I used Nutonian Eureqa (and lots of good pointers from M Schmidt). This is next level! There are few ML endeavors more beneficial to humanity than discerning physical theorems directly from data with no initial primer. Very excited to see where you take this next. Makes me want to use my physics education a bit more and get coding!
Total genius, I love that fire has by far the highest ID.
Oh that's just fantastic work!