AI-based Path Planning - Automatic scenario variation to explore a Mars-like surface using Reinforcement Learning
- Published: 15 Nov 2024
- The video outlines the main steps to configure, vary, and execute one or multiple reinforcement learning processes for controlling a rover on a Mars-like surface in VEROSIM. These steps are:
- Basic simulation model description according to the OpenSCENARIO standard (see the skeleton sketched after this list), including:
-- Referenced entities, which provide the basis for the subsequent specification of the training
-- The storyboard, which mainly defines the initial setup of the reinforcement learning process
- Configuration of a Python-based agent (see the training sketch below)
- Setting up the available sensors and a reward calculation based on a reward map and a reward function (see the reward sketch below)
- Saving possible parameter variations (e.g. different hyperparameter values, sensors, or sensor positions), based on which multiple concrete scenarios are generated (see the variation sketch below)
- Automatically generating a simulation model based on a concrete scenario
- Starting the training of the rover to reach the target area
- Parallel execution of every concrete scenario, each containing a different sensor, reward, or hyperparameter configuration (see the parallel-execution sketch below)
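As a rough illustration of the first step, the following Python sketch builds a minimal OpenSCENARIO-style skeleton with referenced entities and a storyboard. Element names follow the ASAM OpenSCENARIO XML layout in simplified form; the rover entity name and any VEROSIM-specific extensions used in the video are assumptions.

```python
# Minimal sketch of an OpenSCENARIO-style model description (simplified;
# entity names and properties are illustrative, not taken from the video).
import xml.etree.ElementTree as ET

root = ET.Element("OpenSCENARIO")
ET.SubElement(root, "FileHeader", revMajor="1", revMinor="0",
              description="Mars-like surface RL training scenario")

# Referenced entities: the rover that will later be controlled by the RL agent.
entities = ET.SubElement(root, "Entities")
ET.SubElement(entities, "ScenarioObject", name="rover")

# Storyboard: mainly defines the initial setup of the RL process.
storyboard = ET.SubElement(root, "Storyboard")
init = ET.SubElement(storyboard, "Init")
ET.SubElement(init, "Actions")  # e.g. the initial rover pose would go here

ET.indent(root)
print(ET.tostring(root, encoding="unicode"))
```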
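The reward calculation combines a reward map with a reward function; the exact terms are not shown in the video, so the following is a minimal sketch under the assumption that the map stores a terrain-based reward per cell and the function adds a bonus when the rover is inside the target area. Map size, cell size, and the bonus value are assumed.

```python
# Sketch of a reward calculation from a reward map plus a reward function.
# Map resolution, per-cell values, and the target bonus are assumptions.
import numpy as np

reward_map = np.random.uniform(-1.0, 0.0, size=(100, 100))  # per-cell terrain reward
target_area = (slice(90, 100), slice(90, 100))               # target cells (assumed)

def reward(position, cell_size=1.0, target_bonus=100.0):
    """Look up the map reward at the rover position and add a target bonus."""
    row = int(position[1] // cell_size)
    col = int(position[0] // cell_size)
    r = reward_map[row, col]
    in_target = (target_area[0].start <= row < target_area[0].stop and
                 target_area[1].start <= col < target_area[1].stop)
    return r + (target_bonus if in_target else 0.0)

print(reward((95.0, 95.0)))  # inside the target area -> large positive reward
```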
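The parameter-variation step saves sets of possible values (hyperparameters, sensors, sensor positions) and expands them into concrete scenarios. A plain cross product, as sketched below, is one way to express this; the variation keys, values, and output file layout are assumptions, not taken from the video.

```python
# Sketch: expand saved parameter variations into concrete scenario descriptions.
# The variation keys and values are illustrative.
import itertools
import json

variations = {
    "learning_rate": [1e-3, 3e-4],
    "sensor": ["lidar", "stereo_camera"],
    "sensor_position": ["front", "mast"],
}

keys = list(variations)
concrete_scenarios = [
    dict(zip(keys, combo)) for combo in itertools.product(*variations.values())
]

# Each concrete scenario could then drive the automatic generation of a
# simulation model, e.g. by writing it to a file that the generator reads.
for i, scenario in enumerate(concrete_scenarios):
    with open(f"scenario_{i:03d}.json", "w") as f:
        json.dump(scenario, f, indent=2)

print(f"{len(concrete_scenarios)} concrete scenarios generated")
```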
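The video configures a Python-based agent and then starts training the rover to reach the target area; the concrete agent implementation is not shown. The sketch below uses Stable-Baselines3 PPO on a standard Gymnasium environment purely as a stand-in for the VEROSIM rover environment; the algorithm choice, hyperparameters, and environment are all assumptions.

```python
# Sketch of configuring a Python-based RL agent and starting training.
# PPO and the stand-in Gymnasium environment are assumptions; the actual
# VEROSIM environment and agent interface are not shown in the video.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("MountainCarContinuous-v0")   # stand-in for the rover environment

model = PPO(
    "MlpPolicy",
    env,
    learning_rate=3e-4,   # one of the hyperparameters that can be varied
    verbose=1,
)

model.learn(total_timesteps=10_000)          # start training toward the target
model.save("rover_agent")
```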
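Finally, every concrete scenario is executed in parallel. A simple way to express this on a single machine is a process pool, as sketched below; how the video actually dispatches the VEROSIM instances is not shown, so the run_scenario helper is hypothetical.

```python
# Sketch: run every concrete scenario (different sensor, reward, or
# hyperparameter configuration) in parallel. run_scenario is a hypothetical
# placeholder for launching one simulation/training process.
from concurrent.futures import ProcessPoolExecutor
import glob
import json

def run_scenario(path):
    with open(path) as f:
        scenario = json.load(f)
    # ... here one VEROSIM simulation plus training run would be started ...
    return path, scenario.get("learning_rate")

if __name__ == "__main__":
    scenario_files = sorted(glob.glob("scenario_*.json"))
    with ProcessPoolExecutor() as pool:
        for path, lr in pool.map(run_scenario, scenario_files):
            print(f"finished {path} (learning_rate={lr})")
```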
Institutions involved in the KImaDiZ project are:
DLR (German Aerospace Center): [www.dlr.de/]
RIF: [www.rif-ev.de/]
MMI: [www.mmi.rwth-a...]
FEV: [www.fev.com/de...]