Walterio Mayol
  • Videos: 46
  • Views: 23,717
Keynote Prof Mayol-Cuevas CINIAI 2024
Keynote: Artificial Intelligence: Humans in the Loop for Extended Reality, Robotics and Communication.
Artificial Intelligence and the systems it can help develop are attracting justified attention for the potential they represent in the quest to understand how we and the universe work. But a key aspect we must not forget is how people can be involved in these systems. In this talk, I will cover recent research from my group at the University of Bristol and in industry on systems in which people are at their centre. Specifically, I will talk about the importance of people ...
Views: 51

Videos

Simultaneous Localisation and Mapping (SLAM) with wearable visual robot IEEE ISMAR 2003
131 views · 1 year ago
The wearable robot builds a SLAM map of the surroundings in real time and uses it to localise itself within that map, allowing a remote user observing through the robot to add virtual 3D annotations to the scene. Work done by Dr Andrew Davison and Dr Walterio Mayol while at the Robotics Research Group, University of Oxford. Presented at IEEE ISMAR, Tokyo 2003. Paper at: ieeexplore.ieee.org/d...
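As a rough sketch of the annotation idea only, and not the original system's code, the snippet below projects a fixed 3D point into the current view given a camera pose of the kind a SLAM system estimates. The intrinsics, pose and point values are all hypothetical.
```python
# Minimal sketch (not the authors' implementation): draw a 3D annotation into
# the current camera view, given a world-to-camera pose estimated by SLAM.
import numpy as np

def project_annotation(point_world, R, t, K):
    """Project a 3D world point to pixel coordinates via a pinhole model."""
    p_cam = R @ point_world + t      # world frame -> camera frame
    if p_cam[2] <= 0:                # point is behind the camera: not visible
        return None
    p_img = K @ (p_cam / p_cam[2])   # perspective division, then intrinsics
    return p_img[:2]

# Hypothetical values: identity pose, 525 px focal length, VGA principal point.
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
print(project_annotation(np.array([0.1, 0.0, 2.0]), R, t, K))  # -> pixel (u, v)
```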
Towards drone racing with a pixel processor array
50 views · 2 years ago
Drone racing is an interesting scenario for an agile MAV due to the need for rapid response and high accelerations. In this paper we use a Pixel Processor Array (PPA), demonstrating the marriage of perception and compute capabilities on the same device. A PPA consists of a parallel array of processing elements, each of which features light capture, processing and storage ...
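To make the per-pixel compute idea concrete, here is a hedged CPU emulation of the PPA programming model: whole-array NumPy operations stand in for the parallel per-pixel ALUs, and array shifts stand in for neighbour-register transfers. Sizes and operations are illustrative, not the SCAMP hardware instruction set.
```python
# Illustrative sketch only: emulating a Pixel Processor Array on a CPU.
# On a real PPA every pixel has its own storage and ALU; here each whole-array
# NumPy operation stands in for one instruction executed by all pixels at once.
import numpy as np

def ppa_edge_step(frame, prev_frame):
    """One 'in-pixel' step: temporal difference plus a horizontal gradient."""
    motion = frame - prev_frame                   # each pixel computes locally
    grad_x = np.roll(frame, -1, axis=1) - frame   # neighbour communication
    return np.abs(motion) + np.abs(grad_x)        # per-pixel combination

prev = np.zeros((256, 256))
frame = np.random.rand(256, 256)   # stand-in for a freshly captured image
print(ppa_edge_step(frame, prev).shape)
```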
Agile reactive navigation for a non-holonomic mobile robot using a pixel processor array
6 views · 2 years ago
Agile reactive navigation for a non-holonomic mobile robot using a pixel processor array. Liu Y, Bose L, Greatwood C, Chen J, Fan R, Richardson T, Carey SJ, Dudek P, Mayol-Cuevas W. IET Image Processing 2021;1-10. This paper presents an agile reactive navigation strategy for driving a non-holonomic ground vehicle around a pre-set course of gates in a cluttered environment using a low-cost proces...
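As a hedged illustration of reactive gate navigation in general, not the controller in the paper, a minimal strategy steers proportionally to the detected gate's horizontal offset in the image; the gain and image width below are made-up values.
```python
# Minimal reactive-steering sketch under assumed values (not the IET paper's
# controller): steer the vehicle toward the gate detected in the image.
def steering_command(gate_cx, image_width=256, gain=2.0):
    """Map the gate centre column (pixels) to a steering rate."""
    offset = (gate_cx - image_width / 2) / (image_width / 2)  # normalise to [-1, 1]
    return gain * offset  # positive -> steer right, negative -> steer left

print(steering_command(gate_cx=200))  # gate right of centre -> steer right
```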
The Object at Hand: Automated Editing for Mixed Reality Video Guidance from Hand-Object Interactions
184 views · 3 years ago
ISMAR 2021 Conference Paper. In this paper, we are concerned with the problem of how to automatically extract the steps that compose real-life hand activities. This is a key competence towards processing, monitoring and providing video guidance in Mixed Reality systems. We use egocentric vision to observe hand-object interactions in real-world tasks and automatically decompose a video into its constit...
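A hedged sketch of the general step-extraction idea (not the paper's method): segment a per-frame feature signal, such as a hypothetical hand-object contact score, wherever its mean shifts.
```python
# Rough sketch, not the ISMAR 2021 method: split a hand-activity video into
# steps by detecting mean shifts in a per-frame feature signal.
import numpy as np

def segment_steps(signal, threshold=1.0, min_len=10):
    """Greedy change-point split over a 1D per-frame feature (hypothetical)."""
    boundaries, start = [0], 0
    for i in range(min_len, len(signal)):
        # start a new step when the current frame departs from the step's mean
        if i - start >= min_len and abs(signal[i] - signal[start:i].mean()) > threshold:
            boundaries.append(i)
            start = i
    return boundaries

# Synthetic signal with two mean shifts -> three recovered "steps".
sig = np.concatenate([np.zeros(50), np.full(50, 3.0), np.full(50, 6.0)])
print(segment_steps(sig))  # [0, 50, 100]
```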
Keynote Prof Walterio Mayol-Cuevas IROS 2021 Workshop Egocentric Vision
127 views · 3 years ago
Keynote talk "Egocentric Affordance and Skill Determination from Video" IEEE/RSJ IROS 2021 Workshop on Egocentric vision for interactive perception, learning, and control. Prof Walterio Mayol-Cuevas In this talk I cover recent work on Affordance Estimation from a single example on RGB-D scenes and Skill ranking and manipulation understanding from egocentric video.
Head pose estimation and control of attention of wearable robot Circa 2003
81 views · 3 years ago
This video shows a real-time system that computes the head pose of a person and uses that pose to control the attention of a wearable visual robot mounted on the shoulder. In this case the same person is wearing and controlling the robot, but in general the robot can use this technique to direct its attention to where somebody else around the wearer is looking. Work done by Dr Walterio Mayol,...
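As a simple hedged sketch of the control loop described, with an assumed interface rather than the 2003 system's code, a proportional rule drives the robot's pan/tilt toward the wearer's estimated head yaw and pitch.
```python
# Sketch only (assumed interface, not the original system): step the wearable
# robot's pan/tilt toward the wearer's estimated head pose.
def attention_step(head_yaw, head_pitch, robot_yaw, robot_pitch, gain=0.5):
    """Proportional pan/tilt increments toward the head's yaw/pitch (radians)."""
    return gain * (head_yaw - robot_yaw), gain * (head_pitch - robot_pitch)

d_pan, d_tilt = attention_step(0.4, -0.1, 0.0, 0.0)
print(d_pan, d_tilt)  # commanded increments for the shoulder-mounted robot
```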
Adding virtual objects with hand gestures in real-time Circa 2003-2004
28 views · 3 years ago
This video shows the addition of virtual objects to a real scene using hand gestures. The hand detector and gesture recognition run in real time and are capable of detecting editing gestures such as selecting features, deleting selections, etc. In this particular clip, the 3D positions of features in the scene are recovered in real-time and when 3 features are selected by the hand gesture recogn...
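Once three 3D features have been selected, one plausible way to anchor a virtual object, sketched here under assumptions rather than taken from the original system, is to fit the plane through the three points and use its centroid and normal as the object pose.
```python
# Hedged sketch (not the original system): pose a virtual object on the plane
# defined by three hand-selected 3D scene features.
import numpy as np

def plane_from_three_points(p0, p1, p2):
    """Return (centroid, unit normal) of the plane through three 3D points."""
    n = np.cross(p1 - p0, p2 - p0)   # normal from two in-plane edge vectors
    norm = np.linalg.norm(n)
    if norm < 1e-9:
        raise ValueError("selected features are collinear")
    return (p0 + p1 + p2) / 3.0, n / norm

centroid, normal = plane_from_three_points(np.array([0.0, 0.0, 1.0]),
                                           np.array([1.0, 0.0, 1.0]),
                                           np.array([0.0, 1.0, 1.0]))
print(centroid, normal)  # anchor the virtual object at centroid, facing normal
```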
6DVO.UAV
93 views · 3 years ago
6D visual odometry for UAVs. Circa 2010
GlaciAR Lite Circa 2014
69 views · 3 years ago
Wearable assistant that recognizes objects and provides video guides. See the GlaciAR paper from Augmented Human 2017. Work done in 2014.
Glance manual Circa 2013-14
44 views · 3 years ago
This manual prototype demonstrates object recognition and early attention prediction with an IMU, running on Glass. The GLANCE proposal aims to extract, automatically, which objects are important and more meaningful in video guides.
GlaciAR FullyAutomatic
49 views · 3 years ago
HandheldRobot Mark II September 2015
27 views · 3 years ago
Our Handheld Robot Mark II, illustrated reaching various points in 5D (3D position and 2D orientation). This can be useful to guide novice and expert users to the right locations to inspect or clean, while providing useful reassurance that the job is being done. Sept 2015.
Keynote 1: Augmenting People, Prof Walterio Mayol Cuevas, ACVR Workshop ECCV 2020
48 views · 3 years ago
Assistive Computer Vision and Robotics ACVR @ ECCV 2020 Workshop. Augmenting People: From information in your eyes to robots in your hands. In this keynote I talk about our team's work on two research areas that aim to expand the capabilities of people with Extended Reality (XR) and Handheld Robotics. For XR specifically, I describe how it is possible to develop new and better AR without obsess...
Fully Embedding Fast Convolutional Networks on Pixel Processor Arrays, ECCV 2020
136 views · 4 years ago
We demonstrate for the first time complete inference of a CNN upon the focal plane of a sensor. The key idea behind our approach is storing network weights "in-pixel", allowing the parallel analogue computation of the SCAMP PPA to be fully utilized. This current work considers a baseline digit-classification task at over 3000 frames per second. We are moving into more sophisticated tasks an...
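To illustrate the style of computation being claimed, though not the SCAMP-5 implementation itself, the sketch below writes a 3x3 convolution as whole-array shifts and multiply-accumulates, the kind of operation a PPA runs in parallel across its pixel registers; the image size and kernel are assumptions.
```python
# Illustrative CPU sketch of 'in-pixel' convolution (not the SCAMP-5 code):
# a 3x3 convolution expressed as whole-array shifts and multiply-accumulates.
import numpy as np

def ppa_style_conv(img, kernel):
    """3x3 convolution via shift-and-accumulate over the whole array."""
    acc = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            # each roll mimics a neighbour-register transfer on the array
            acc += kernel[dy + 1, dx + 1] * np.roll(np.roll(img, dy, 0), dx, 1)
    return acc

img = np.random.rand(28, 28)  # stand-in for a captured digit image
sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])
print(ppa_style_conv(img, sobel_x).shape)  # (28, 28)
```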
Prof Walterio Mayol, The Importance of Content Authoring for XR/MR/AR
140 views · 4 years ago
HIGS: Hand Interaction Guidance System
99 views · 5 years ago
A Camera That CNNs: Towards Embedded Neural Networks on Pixel Processor Arrays oral ICCV2019
430 views · 5 years ago
Where Can I Do This? Geometric Affordances from a Single Example with the Interaction Tensor
87 views · 6 years ago
Visual Odometry for Pixel Processor Arrays. ICCV 2017 SPOTLIGHT
577 views · 7 years ago
"Tracking control of a UAV with a parallel visual processor", IROS 2017.
154 views · 7 years ago
"Tracking control of a UAV with a parallel visual processor", IROS 2017.
The GlaciAR system. Automated guidance and authoring with Google Glass
304 views · 7 years ago
Inverse Kinematics and Design of a Novel 6-DoF Handheld Robot Arm.
887 views · 8 years ago
Estimating Visual Attention from a Head Mounted IMU
341 views · 8 years ago
ISMAR 2015 a few moments
128 views · 8 years ago
Handheld Robot Feedback methods Sept 2015
434 views · 9 years ago
Improving MAV Control by Predicting Aerodynamic Effects of Obstacles
274 views · 9 years ago
Recognition and Reconstruction of Transparent Objects for Augmented Reality
1.6K views · 10 years ago
HandheldRobot TilingNov2013
803 views · 10 years ago
ObjectDetectionTracking Bristol Feb2012
37 views · 10 years ago

Comments

  • @ApoloCalistenes
    @ApoloCalistenes 4 days ago

    This interview must be from roughly between 1997 and 1999.

  • @liliansalamun7370
    @liliansalamun7370 2 years ago

    Thank you so much for this beautiful program!

  • @eprohoda
    @eprohoda 2 years ago

    der! that is unreal! 🖖

  • @lauralagarda6292
    @lauralagarda6292 2 years ago

    Wow. Wonderful. In the middle of 2022, living in Cancún, far from my city and my traditions, seeing this really brought tears to my eyes. When I used to go with my mother downtown to browse shop after shop. That marvel no longer exists: the gentlemen in suit and tie, serving and taking payment in their bakery, with an unmatched history and tradition. Here in Cancún, everything to do with food is dreadful; there is no history yet. Cristina is great.

  • @eddygonzalez6513
    @eddygonzalez6513 2 years ago

    Did you use ROS?

  • @eddygonzalez6513
    @eddygonzalez6513 2 years ago

    Hi there, I was looking at your channel and I think it is amazing. I have a question about visual odometry: do you have any UAV dataset? I need a UAV dataset to test the visual odometry algorithm.

  • @sandymar04mj79
    @sandymar04mj79 3 years ago

    I loved it 💖

  • @osaidzahid1765
    @osaidzahid1765 3 years ago

    Great talk!

    • @wmayol
      @wmayol 3 years ago

      Thanks for watching Osaid!

  • @Andrea72773
    @Andrea72773 3 years ago

    Honestly, the "hosts" didn't help with the interview, haha. You can tell which of the two brothers is in the business and which isn't...

  • @mariaguadalupe4931
    @mariaguadalupe4931 3 years ago

    Where exactly is the bakery located? Which metro station is nearby?

    • @wmayol
      @wmayol 3 years ago

      Hi, the address is Calle de Mixcalco 15, Centro, Cuauhtémoc, 06020 Ciudad de México, CDMX, Mexico. And the phone number is: 55 5522 4548. A nearby metro station is Zócalo, but if you call they may recommend another option. Best regards.

    • @maricelacortez5871
      @maricelacortez5871 2 years ago

      It's half a block from the Mixcalco market. If you walk from the Templo Mayor toward Circunvalación along Donceles, that street then becomes Justo Sierra and then Mixcalco. It's near Loreto park and the School for the Blind. That bread is delicious. The prices from 7 years ago catch my attention; right now they are around $10.00.

    • @crislaravel8
      @crislaravel8 2 years ago

      Maria Guadalupe, it's just that the area is HIGHLY dangerous in EVERY sense. And if you go accompanied by your husband, son, nephew or a friend, one of the MANY "prostitutes" who abound on that street may tug him by the clothes to get him to go with her. On the other hand, it is also extremely risky for you to go alone as a woman.

    • @amaliabautista3918
      @amaliabautista3918 2 years ago

      @@crislaravel8 I don't know what period you are talking about, because that does not happen. Women and men do their shopping and live their lives, and what you describe is not seen, not even on Anillo de Circunvalación where the sex workers are. They don't pull anyone; the men go up to talk to them on their own.

    • @guadalupeduperou334
      @guadalupeduperou334 28 days ago

      Not true, that area is safe!

  • @anagutierrez2019
    @anagutierrez2019 3 years ago

    Thank you for this beautiful video

  • @Apoloesfebo
    @Apoloesfebo 7 years ago

    Walterio, can you explain why your videos have no audio? Regards, El Micro. Yes, the one from La Salle.

    • @wmayol
      @wmayol 7 years ago

      Hi José, what a pleasure! What is your email?

    • @Apoloesfebo
      @Apoloesfebo 7 years ago

      Walterio Mayol Noooo. Not José: El Micro. Hahaha. jose.silva@efectophi.com, we can write each other there. Send me yours.

  • @Love2TravelAway
    @Love2TravelAway 8 years ago

    Thank you for the very informative video...

  • @roidroid
    @roidroid 8 years ago

    I know the answer to this question must be kinda central to the paper... but how does an IMU estimate a user's attention?

    • @wmayol
      @wmayol 8 years ago

      Hi +roidroid, thanks for asking. In our paper we discuss this: we define both temporal attention (i.e. when the person is paying attention) and spatial attention (where on the image the attention is). In brief, we use a gaze tracker to learn the head motions that lead to the gaze fixations that happen when the user is doing things. But this is for interacting with things while not translating, so it's aimed at activities like operating machines or manipulating objects. Paper is available at: www.cs.bris.ac.uk/Publications/Papers/2001754.pdf or dl.acm.org/citation.cfm?id=2808394&CFID=548041087&CFTOKEN=31371660 Hope it helps
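      A rough sketch of the temporal-attention part of that idea, with hand-set values standing in for what the paper learns from the gaze tracker:
      ```python
      # Hedged sketch, not the paper's model: mark frames as 'attending' when the
      # smoothed head angular speed from the IMU stays below a threshold. In the
      # paper the decision is learnt against gaze fixations, not hand-set.
      import numpy as np

      def temporal_attention(gyro, thresh=0.2, win=15):
          """gyro: (N, 3) angular velocity (rad/s) -> boolean attention mask."""
          speed = np.linalg.norm(gyro, axis=1)
          smooth = np.convolve(speed, np.ones(win) / win, mode="same")  # short window
          return smooth < thresh  # quiet head ~ likely fixation

      gyro = np.random.randn(300, 3) * 0.05   # synthetic quiet-head IMU stream
      print(temporal_attention(gyro).mean())  # fraction of 'attending' samples
      ```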

  • @wmayol
    @wmayol 10 years ago

    Recognition and Reconstruction of Transparent Obj…: ruclips.net/video/i31ZWB_tSxs/видео.html From our paper: Alan Torres-Gomez and Walterio Mayol-Cuevas. Recognition and reconstruction of transparent objects for Augmented Reality. ISMAR 2014. PDF at: www.cs.bris.ac.uk/Publications/Papers/2001714.pdf This is the MSc work of Alan Torres. Apart from the detection and 3D reconstruction of glass objects, I see this as an example of an area in AR that has not been exploited much, or at all. Knowing what a material is should allow us to augment things more meaningfully. Here we look at the properties of transparent objects (just because they are hard!) but the principle and some of the sub-methods we use could be extended to different cases.

  • @wmayol
    @wmayol 10 years ago

    Cognitive Handheld Robots. We have been working for about 2.5 years on prototypes (and about 8 years conceptually!) of what we think is a new, extended type of robot. Handheld robots have the shape of tools and are intended to have cognition and action while cooperating with people. This video is from a task with our first prototype back in November 2013. We are also offering details of its construction and 3D CAD models on this webpage www.cs.bris.ac.uk/Publications/pub_master.jsp?id=2001712 where a report on user interaction is also available. We are currently developing a new prototype; more on this soon.

    • @TonyChanXprasia
      @TonyChanXprasia 10 years ago

      This is good for helping people with disabilities grab items into the right position and location, I think...

  • @EmilRzajev
    @EmilRzajev 10 years ago

    Wow, my mother's childhood and another piece of mine. Greetings to the Mayol family. Don Antonio, my regards.

  • @wmayol
    @wmayol 10 years ago

    #Robots #aerial The video from our paper: Learning to Predict Obstacle Aerodynamics from Depth Images for Micro Air Vehicles by John Bartholomew, Andrew Calway and Walterio Mayol-Cuevas, IEEE ICRA 2014. This allows us to anticipate the "ground effects" a MAV will experience by learning a mapping from depth images to the acceleration suffered, for a variety of objects. We have also closed the loop so we correct for the effect, but that is for another paper. I think this is an example of the type of information that can enrich 3D maps: now that these are "easy" to obtain, the next step is to enhance them with information that goes beyond geometry alone.
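    As a hedged sketch of the mapping described, with a nearest-neighbour lookup standing in for the learner actually used in the paper, and made-up array sizes:
    ```python
    # Sketch under assumptions (not the ICRA 2014 code): predict the disturbance
    # acceleration a MAV will feel from a depth image, by returning the logged
    # acceleration of the most similar training depth image.
    import numpy as np

    def predict_disturbance(depth, train_depths, train_accels):
        """Nearest-neighbour lookup from a query depth map to an acceleration."""
        feats = train_depths.reshape(len(train_depths), -1)
        dists = np.linalg.norm(feats - depth.ravel(), axis=1)  # image-space distance
        return train_accels[np.argmin(dists)]

    train_depths = np.random.rand(100, 16, 16)    # hypothetical logged depth maps
    train_accels = np.random.randn(100, 3) * 0.1  # logged accelerations (m/s^2)
    print(predict_disturbance(np.random.rand(16, 16), train_depths, train_accels))
    ```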

    • @JavierCivera
      @JavierCivera 10 years ago

      Very interesting, Walterio!! I totally agree with your view: our SLAM maps should contain more than just geometry...

    • @wmayol
      @wmayol 10 years ago

      Thanks Javier! Looking forward to catching up in Hong Kong.

  • @wmayol
    @wmayol 11 years ago

    3D from looking. How to build a 3D model of something one looks at without any clicks or even visual feedback. From our paper T. Leelasawassuk and W.W. Mayol-Cuevas. 3D from Looking: Using Wearable Gaze Tracking for Hands-Free and Feedback-Free Object Modelling. ISWC 2013.

  • @wmayol
    @wmayol 11 years ago

    How to get a 3D model of something one looks at without any clicks or even feedback to the user? T. Leelasawassuk and W.W. Mayol-Cuevas. 3D from Looking: Using Wearable Gaze Tracking for Hands-Free and Feedback-Free Object Modelling. International Symposium on Wearable Computers (ISWC). Zurich, September 2013.

    • @FrankDellaert
      @FrankDellaert 11 years ago

      Cool idea, Walterio

    • @twd20g
      @twd20g 11 years ago

      Particularly nice because it exploits the user's natural behaviour so doesn't require learning from the user (other than maybe smooth motion).

    • @wmayol
      @wmayol 11 years ago

      Frank Dellaert Thanks Frank, trust all is well. Are you going to IROS?

    • @wmayol
      @wmayol 11 years ago

      Tom Drummond Hi Tom, yes, I believe gaze-based interaction needs to be carefully designed so as not to interrupt normal gaze patterns; in this case we go to the extreme of seeing what happens with no feedback whatsoever. Of course it can fail, mainly if the initial map is corrupted or the gaze calibration is not properly done. Hope ISMAR was good!

    • @FrankDellaert
      @FrankDellaert 11 years ago

      Yup, will be in Tokyo :-)

  • @moonostan
    @moonostan 11 years ago

    Is there an open-source API?