AI-based Sensor Simulation: Transforming Ground Truth to Color Images

  • Published: Nov 15, 2024
  • In the context of physical sensor simulation, ray tracing algorithms are the gold standard for generating highly realistic sensor data (e.g. camera images) from physical models created by engineers. This enables the automated generation of reference data for training and testing AI systems (e.g. advanced driver assistance systems).
    This approach works well and has proven beneficial in system development.
    But what if we could obtain the same simulation results without any physical modeling?
    This video demonstrates a novel approach to AI-based sensor simulation. Instead of tracing light paths through the scene, the NVIDIA vid2vid architecture generates color images solely from the available ground truth (i.e. semantic label images). For this purpose, the pre-trained (i.e. original) vid2vid model was fine-tuned on a simulated dataset via transfer learning. The output of this AI-based sensor simulation (i.e. the fine-tuned vid2vid) was then compared to the ray-tracing-based simulation using the Structural Similarity Index Measure (SSIM); illustrative sketches of both steps follow the links below.
    www.rif-ev.de
    www.mmi.rwth-a...
    www.ficosa.com/
    github.com/NVI...
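
To illustrate the transfer-learning step, the sketch below fine-tunes a pre-trained label-to-image generator on pairs of semantic label maps and ray-traced color images. This is a minimal PyTorch sketch under assumed names: the stand-in generator, the commented-out checkpoint path, and the random dummy data are placeholders, not the actual vid2vid model, weights, or training configuration (which also uses adversarial and temporal losses).

```python
# Minimal transfer-learning sketch (assumed setup, not the actual vid2vid code).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in generator: maps one-hot semantic label maps to RGB images.
# The real vid2vid generator is far larger and temporally conditioned.
num_classes = 20
generator = nn.Sequential(
    nn.Conv2d(num_classes, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 3, kernel_size=3, padding=1),
    nn.Tanh(),  # RGB output in [-1, 1]
)

# Transfer learning: start from pre-trained weights, then fine-tune.
# generator.load_state_dict(torch.load("checkpoint.pth"))  # placeholder path

# Dummy (label map, ray-traced image) pairs; replace with the simulated dataset.
labels = torch.randn(8, num_classes, 64, 64)
targets = torch.randn(8, 3, 64, 64).clamp(-1, 1)
loader = DataLoader(TensorDataset(labels, targets), batch_size=4, shuffle=True)

# Freeze the first layer to retain low-level pre-trained features,
# and fine-tune the remaining layers at a small learning rate.
for p in generator[0].parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(
    (p for p in generator.parameters() if p.requires_grad), lr=1e-4
)
loss_fn = nn.L1Loss()  # reconstruction loss only, for brevity

for epoch in range(2):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(generator(x), y)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: L1 loss {loss.item():.4f}")
```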
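
For the evaluation step, SSIM can be computed per image pair between the AI-generated frame and the ray-traced reference. Below is a minimal sketch using scikit-image; the video does not state which SSIM implementation was used, and the two arrays here are synthetic placeholders for aligned, equally sized frames.

```python
# SSIM between an AI-generated frame and the ray-traced reference frame.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((256, 256, 3))  # placeholder ray-traced frame in [0, 1]
generated = np.clip(reference + 0.05 * rng.standard_normal(reference.shape), 0, 1)

# channel_axis=-1 treats the last axis as RGB channels;
# data_range must match the value range of the inputs (here [0, 1]).
score = structural_similarity(reference, generated, channel_axis=-1, data_range=1.0)
print(f"SSIM: {score:.3f}")  # 1.0 would mean identical images
```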
