
Browsing by Author "Flores Benites, Victor"

    Priority sampling and visual attention for self-driving car
(Universidad Católica San Pablo, 2023) Flores Benites, Victor; Mora Colque, Rensso Victor Hugo
End-to-end methods facilitate the development of self-driving models by employing a single network that learns the human driving style from examples. However, these models face problems such as distributional shift, causal confusion, and high variance. To address these problems we propose two techniques. First, we propose the priority sampling algorithm, which biases the training sampling towards observations that are unknown to the model. Priority sampling employs a trade-off strategy that incentivizes the training algorithm to explore the whole dataset. Our results show a reduction of the error in the control signals for all the models studied. Moreover, we show evidence that our algorithm limits overtraining on noisy training samples. As a second approach, we propose a model based on the theory of visual attention (Bundesen, 1990), which selects relevant visual information to build an optimal representation of the environment. Our model employs two visual information selection mechanisms: spatial and feature-based attention. Spatial attention selects regions whose visual encoding is similar to the contextual encoding, while feature-based attention selects disentangled features that carry information useful for routine driving. Furthermore, we encourage the model to recognize new sources of visual information by adding a bottom-up input. Results on the CoRL-2017 dataset (Dosovitskiy et al., 2017) show that our spatial attention mechanism recognizes regions relevant to the driving task. Our model builds disentangled features with low cosine similarity but high representation similarity. Finally, we report performance improvements over traditional end-to-end models.
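
The abstract describes priority sampling only at a high level: bias the training sampling towards observations the model handles poorly, with a trade-off that still covers the whole dataset. The Python sketch below shows one way such a sampler could look. It is not the thesis implementation; the mixture with uniform sampling, the use of per-sample loss as the priority signal, and all names (PrioritySampler, epsilon) are assumptions made for illustration.

import numpy as np

class PrioritySampler:
    """Illustrative sketch of a priority-style training sampler.

    Assumption: "unknown observations" are approximated by per-sample
    loss, and the trade-off towards exploring the whole dataset is a
    simple blend with uniform sampling controlled by `epsilon`.
    """

    def __init__(self, num_samples, epsilon=0.2, seed=0):
        self.errors = np.ones(num_samples)   # start with all samples treated as "unknown"
        self.epsilon = epsilon               # weight of the uniform (exploration) component
        self.rng = np.random.default_rng(seed)

    def sample(self, batch_size):
        # Error-proportional probabilities blended with uniform probabilities,
        # so low-error samples are still revisited occasionally.
        p_error = self.errors / self.errors.sum()
        p_uniform = np.full_like(p_error, 1.0 / len(p_error))
        probs = (1.0 - self.epsilon) * p_error + self.epsilon * p_uniform
        return self.rng.choice(len(probs), size=batch_size, replace=False, p=probs)

    def update(self, indices, losses):
        # After a training step, store the new per-sample losses so that
        # poorly fitted observations are drawn more often next time.
        self.errors[indices] = np.asarray(losses)

In a training loop one would call sample() to draw a batch, compute per-sample losses, and feed them back through update(), so poorly fitted observations are revisited more often while the uniform component keeps the rest of the dataset in play.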