Browsing by Author "Rosas Cuevas, Yessica"
Now showing 1 - 3 of 3
Item  A Semi-Automated Approach for Recognizing Moving Targets Using a Global Vision System (Institute of Electrical and Electronics Engineers Inc., 2016) Ripas Mamani, Roger; Cervantes Jilaja, Claudia; Rosas Cuevas, Yessica; Patiño Escarcina, Raquel Esperanza; Barrios Aranibar, Dennis
A global vision system performs sorting, recognition, and identification using external characteristics such as color, shape, and size, depending on the specific targets. In this paper we propose a semi-automated approach for recognizing moving targets: the image is first calibrated with respect to lighting, and a variety of colors and sizes are then recognized through several channels of different color spaces during the processing of video sequences, using the proposed Color Segmentation algorithm (Algorithm 1) to identify both light and dark colors. After this semi-automated process, the moving target is sorted or recognized, yielding the (x, y) position of its central point and the area (in pixels) of the segmented region. Tests were conducted on robot localization in a soccer-robot environment (94.36% accuracy) and on a chestnut selection process (91.80% accuracy). If an image requires more than five detections, parallelism is added, i.e. one thread per segmented color, improving processing time. © 2016 IEEE.
Item  Mixed Reality Applied to the Teleoperation of a 7-DOF Manipulator in Rescue Missions (Institute of Electrical and Electronics Engineers Inc., 2016) Lovon Ramos, Percy; Ripas Mamani, Roger; Rosas Cuevas, Yessica; Tejada Begazo, María Fernanda; Marroquin Mogrovejo, Renato; Barrios Aranibar, Dennis
The need to complement a robot's autonomous behaviour with human reasoning has made robot teleoperation a research topic for more than fifty years.
While most research work deploys a master-slave architecture for controlling robots (whether using a joystick and/or a desktop computer), Mixed Reality has recently come into play, because it can improve the teleoperator's view with less bandwidth consumption than traditional teleoperation architectures. In this paper a new approach for teleoperating a manipulator is presented. The virtual representation of the manipulator is driven by encoder sensors, and a camera image is displayed through an Android interface to complement the teleoperator's view. Our results show that the real positions of the manipulator and those of the virtual model are nearly identical, with small differences due to the minimal transmission time delay. © 2016 IEEE.
Item  People Detection and Localization in Real Time during Navigation of Autonomous Robots (Institute of Electrical and Electronics Engineers Inc., 2016) Lovon Ramos, Percy; Rosas Cuevas, Yessica; Cervantes Jilaja, Claudia; Tejada Begazo, María Fernanda; Patiño Escarcina, Raquel Esperanza; Barrios Aranibar, Dennis
Navigation currently involves the robot's interaction with its environment: the robot has to find the positions of obstacles (natural and artificial landmarks) with respect to its plane. Because the environment is time-variant, computer vision can help with localization and people detection in real time. This article focuses on detecting and localizing people with respect to the robot's plane during autonomous navigation. People detection uses the Morphological HOG Face Detection algorithm in real time; the goal is to localize people in the robot's plane, obtaining position information on the X-axis (left, right, obstacle) and the Y-axis (near, medium, far) relative to the robot. To identify the environment in which the robot is located, vanishing point detection is applied.
Experiments show that people detection and localization performs best in the medium region (201 to 600 cm), reaching 93.13% accuracy, which gives the robot enough time to evade the obstacle during navigation; vanishing point detection reaches 97.03% accuracy. © 2016 IEEE.
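The first abstract describes the output of its Color Segmentation step as the (x, y) centroid and pixel area of the segmented region. As a rough illustration only, here is a minimal NumPy sketch of that output, assuming a simple per-channel threshold in place of the paper's multi-color-space Algorithm 1 (the bounds and image below are hypothetical):

```python
import numpy as np

def segment_region(image, lower, upper):
    """Return the (x, y) centroid and pixel area of the region whose
    channel values fall inside [lower, upper] (inclusive).

    image: H x W x C array; lower/upper: length-C channel bounds.
    A stand-in for the paper's multi-channel segmentation step.
    """
    mask = np.all((image >= lower) & (image <= upper), axis=-1)
    area = int(mask.sum())
    if area == 0:
        return None, 0
    ys, xs = np.nonzero(mask)          # row/column indices of the region
    centroid = (float(xs.mean()), float(ys.mean()))
    return centroid, area

# Example: a 4x4 "image" with a 2x2 red patch in the top-left corner.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[0:2, 0:2] = [200, 10, 10]
center, area = segment_region(img, [150, 0, 0], [255, 50, 50])
# center == (0.5, 0.5), area == 4
```

In the paper's setting, one such mask would be computed per target color, and (per the abstract) a thread per segmented color is added when more than five detections are needed.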
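The third abstract bins each detected person into X-axis (left, right, obstacle) and Y-axis (near, medium, far) zones relative to the robot. A hedged sketch of that binning, where the Y-axis cut-offs follow the reported medium region (201 to 600 cm) and the lateral margin defining "obstacle" is an assumption not given in the abstract:

```python
def localize(x_offset_cm, distance_cm, lateral_margin_cm=50):
    """Classify a detection relative to the robot's plane.

    x_offset_cm: lateral offset (negative = left of the robot's axis).
    distance_cm: forward distance to the person.
    lateral_margin_cm is a hypothetical threshold: a detection inside
    it sits directly ahead and is labelled an obstacle.
    """
    if abs(x_offset_cm) <= lateral_margin_cm:
        x_zone = "obstacle"
    elif x_offset_cm < 0:
        x_zone = "left"
    else:
        x_zone = "right"

    # Y-axis ranges follow the abstract's medium region (201-600 cm).
    if distance_cm <= 200:
        y_zone = "near"
    elif distance_cm <= 600:
        y_zone = "medium"
    else:
        y_zone = "far"
    return x_zone, y_zone

# A person 120 cm to the right at 350 cm ahead:
# localize(120, 350) -> ("right", "medium")
```

The medium region is where the paper reports its best accuracy (93.13%), which is consistent with it leaving the robot enough time to evade.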