Advanced Guiding of a Wheelchair (2005): This sequence shows the behaviour of a non-contact head controller on an advanced wheelchair (SIAMO prototype). Using only head movements, this user is able to drive into a standard elevator, without pressing or moving any kind of physical device. The key to the system is a patented IR detector that locates the user's head and evaluates its movements according to a predefined set of commands. This work was carried out in 2005 in the Electronics Department of the University of Alcala (Spain, www.geintra-uah.org); the IR detection algorithm was designed by Mr. Henrik Vie Christensen (who drives the chair) from the Dept. of Control Engineering of Aalborg University (Denmark).
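As a rough illustration of the command-mapping stage only (not the patented IR detection itself), a sketch like the following could turn a tracked head offset into one of a predefined set of commands. The thresholds and command names are hypothetical, not those of the SIAMO prototype:

```python
# Hypothetical sketch: map a tracked head offset (dx, dy), measured in
# detector units relative to the rest position, to a discrete
# wheelchair command. All thresholds are illustrative.

DEAD_ZONE = 0.15   # ignore small involuntary head movements

def head_to_command(dx: float, dy: float) -> str:
    """Return one command from a predefined set."""
    if abs(dx) < DEAD_ZONE and abs(dy) < DEAD_ZONE:
        return "STOP"
    if abs(dy) >= abs(dx):
        return "FORWARD" if dy > 0 else "BACKWARD"
    return "TURN_RIGHT" if dx > 0 else "TURN_LEFT"

print(head_to_command(0.05, 0.4))   # FORWARD
print(head_to_command(-0.5, 0.1))   # TURN_LEFT
```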
SIAMO Intelligent Wheelchair (1996-1999): This video shows a short summary of the SIAMO project (SIAMO = Integral System for Assisted Mobility). The audio track is in Spanish. This project was developed between 1996 and 1999 in the Electronics Dept. of the University of Alcala (Spain, www.depeca.uah.es). Its main focus was to study, design and test several Human-Machine Interfaces on an advanced wheelchair platform, also designed by the same research group. More information at: www.geintra-uah.org/en
UMIDAM Intelligent Wheelchair (1991-1994): This video shows a short summary of an intelligent wheelchair called UMIDAM (UMIDAM = Intelligent Mobile Unit of Assisted Mobility). The audio track is in Spanish. This project was developed between 1991 and 1994 in the Electronics Dept. of the University of Alcala (Spain, www.depeca.uah.es). A first prototype, controlled by vocal commands, was shown at the ONCE stand at EXPO'92 in Seville (Spain). More information at: www.geintra-uah.org/en
Ultrasound Indoor Location System (2009): This sequence shows the behaviour of an experimental ILS (Indoor Location System) based on coded ultrasound beacons. This research was carried out in the GEINTRA labs of the University of Alcala (Spain) in 2009; web page: http://www.geintra-uah.org/en. In this video, the ILS uses LS (Loosely Synchronized) codes to accurately measure the TOF (Time Of Flight) between the target and a set of reference beacons. To demonstrate the system's behaviour, a mobile robot moves inside the ILS coverage area; while it moves, two different sets of data can be seen on screen: conventional odometry is shown with a green line; ILS positioning is shown unfiltered (red line) and filtered (black line). Around second six, the robot is picked up while running: the odometry estimate shows a growing error, as expected, while the ILS estimate keeps its error low and tracks the moving object at all times.
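Once the TOFs to the reference beacons are known, a position fix can be obtained by multilateration. The minimal sketch below (closed-form least squares, assuming known beacon coordinates and a constant speed of sound; the beacon layout is invented and this is not the project's actual solver) shows the idea:

```python
import numpy as np

C_SOUND = 343.0  # assumed speed of sound (m/s)

def multilaterate(beacons, tofs):
    """Closed-form least-squares position from TOF ranges.

    Linearizes |x - b_i|^2 = d_i^2 by subtracting the first
    equation, leaving a linear system in x. The beacons must
    not be coplanar for a full 3D fix.
    """
    b = np.asarray(beacons, float)
    d = np.asarray(tofs, float) * C_SOUND        # TOF -> range (m)
    A = 2.0 * (b[1:] - b[0])                     # (N-1, 3)
    c = (np.sum(b[1:] ** 2, axis=1) - np.sum(b[0] ** 2)
         - d[1:] ** 2 + d[0] ** 2)               # (N-1,)
    x, *_ = np.linalg.lstsq(A, c, rcond=None)
    return x

# Hypothetical, slightly non-coplanar ceiling beacon layout
beacons = [[0, 0, 3.0], [4, 0, 3.0], [4, 4, 2.5], [0, 4, 3.2]]
true_pos = np.array([1.0, 2.0, 0.3])
tofs = np.linalg.norm(np.array(beacons) - true_pos, axis=1) / C_SOUND
print(multilaterate(beacons, tofs))  # ~ [1.0, 2.0, 0.3]
```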
This research project deals with the design and implementation of an electronic architecture (sensing, control and communication) that contributes to the cooperative guidance of transport units: a platoon formed by electric vehicle prototypes.
The following results have been achieved: development and implementation of control solutions so that the follower units can automatically perform a stable leader-following process on non-linear trajectories; design and implementation of algorithms that handle merge and split manoeuvres of units with respect to a convoy; and design of an intelligent system that assists the leader in deciding the optimal and most efficient route, based on previous experience.
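The published controllers are not reproduced here, but a minimal sketch of the leader-following idea, assuming a unicycle-model follower that regulates its distance and bearing to the leader (gains and the desired gap are illustrative), could look like this:

```python
import math

# Minimal leader-following sketch for a unicycle-model follower.
# Gains and the desired gap are illustrative only; this is not the
# controller actually used in the project.

K_V, K_W = 0.8, 1.5   # hypothetical linear/angular gains
D_DES = 1.0           # desired leader-follower gap (m)

def follow(leader_xy, follower_pose):
    """Return (v, w) velocity commands for the follower.

    leader_xy     : (x, y) of the leader
    follower_pose : (x, y, theta) of the follower
    """
    fx, fy, th = follower_pose
    dx, dy = leader_xy[0] - fx, leader_xy[1] - fy
    dist = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - th
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap
    v = K_V * (dist - D_DES)      # close the gap to the desired value
    w = K_W * bearing             # steer toward the leader
    return v, w

print(follow((3.0, 1.0), (0.0, 0.0, 0.0)))
```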
The following videos show several experimental tests carried out in the laboratories of the Polytechnic School at the University of Alcala, using P3-DX robotic units as electric vehicle prototypes. The convoy departs from one laboratory, moves along a corridor and arrives at the adjoining laboratory (video sequence 1) or back at the original one (video sequence 2).
This section shows two demonstrations of robot routing in a real, simplified transport scenario, shown in the following figure.
There are two Pioneer P3-DX units: one works as the convoy leader, continuously moving around the periphery, and the other, identical robot works as a pursuer whose challenge is to merge with the convoy at the optimal meeting point (minimum merging time).
The scenario includes different areas with different velocity statistics (mean and variance, see figure), so a random velocity is dynamically assigned to each robot every time a new node is reached. To bound the randomness inherent to the problem, a "risk factor" parameter is introduced, limiting the probability of the convoy reaching the merging node before the pursuer (a failed merging manoeuvre).
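One way to make the risk factor concrete: for each candidate merging node, estimate by simulation the probability that the convoy arrives first, discard nodes whose failure probability exceeds the allowed risk, and pick the fastest remaining one. The sketch below draws travel times from per-node Gaussian statistics; all node names and numbers are invented for illustration and do not come from the project:

```python
import random

# Hypothetical travel-time statistics per candidate node (seconds):
# (pursuer_mean, pursuer_std, convoy_mean, convoy_std)
CANDIDATES = {
    "A": (20.0, 2.0, 40.0, 3.0),   # very safe, long wait
    "B": (12.0, 3.0, 14.0, 3.0),
    "C": (8.0, 2.5, 7.0, 2.0),     # fast but often fails
}

def choose_merge_node(risk_factor, trials=10_000):
    """Pick the fastest node whose failure probability
    (convoy arriving first) stays within the risk factor."""
    best, best_time = None, float("inf")
    for node, (pm, ps, cm, cs) in CANDIDATES.items():
        fails = sum(
            random.gauss(cm, cs) < random.gauss(pm, ps)
            for _ in range(trials)
        )
        if fails / trials <= risk_factor and pm < best_time:
            best, best_time = node, pm
    return best

print(choose_merge_node(0.0))   # conservative: safe but slow node
print(choose_merge_node(1.0))   # aggressive: fastest node allowed
```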
The first video was taken under the most conservative condition (0% risk factor). The meeting point is chosen to minimize the probability of failure, but the waiting time is notably long (direct link to mpg video (61 MB), in case you have problems with Flash).
The second video was taken under the least conservative condition (100% risk factor). In this case, the time required for the meeting manoeuvre is considerably reduced; however, the number of re-planning events for the pursuer unit increases, as does the percentage of failed tests (direct link to mpg video (92 MB), in case you have problems with Flash).
Virtual mouse based on face tracking techniques (2010): The video shows a very simple HMI (Human-Machine Interface) based on AAMs (Active Appearance Models). The user is able both to move a cursor around the screen using only head movements and to simulate a mouse left click by opening the mouth (you can also get the full quality avi file).
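The AAM fitting itself is beyond a short example, but the cursor-mapping loop can be sketched with a simpler stand-in tracker. The sketch below uses OpenCV's bundled Haar cascade face detector instead of an AAM, maps the face center to screen coordinates and just prints them; the screen size is hypothetical and the mouth-opening "click" detector is omitted:

```python
import cv2

# Stand-in for the AAM-based tracker: OpenCV's Haar cascade detects
# the face, and the face-center position is mapped to screen
# coordinates. Assumed screen size; the "click" detector is omitted.
SCREEN_W, SCREEN_H = 1920, 1080

face_det = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)            # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_det.detectMultiScale(gray, 1.3, 5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        cx, cy = x + w / 2, y + h / 2
        # Map face center (image coords) to cursor (screen coords),
        # mirroring x so the cursor follows the user's movement.
        u = SCREEN_W * (1 - cx / frame.shape[1])
        v = SCREEN_H * (cy / frame.shape[0])
        print(f"cursor -> ({u:.0f}, {v:.0f})")
    cv2.imshow("tracker", frame)
    if cv2.waitKey(1) == 27:         # Esc quits
        break
cap.release()
```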
SRP-PHAT based acoustic localization system: The green sphere represents the ground truth speaker position, and the red one is the system's location estimate. No tracking is attempted, just plain memoryless localization. The SRP-PHAT power grid is partially shown, with each cube's intensity related to the received signal power at the corresponding grid location (you can also get the full quality video file).
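For reference, the SRP-PHAT idea is to steer PHAT-weighted cross-correlations towards every candidate grid position and keep the most powerful one. A minimal sketch (pairwise frequency-domain GCC-PHAT; the sample rate, array geometry and grid are assumptions of the example, not the project's actual implementation):

```python
import numpy as np

FS = 16000     # sample rate (Hz); hypothetical
C = 343.0      # speed of sound (m/s)

def gcc_phat(x, y):
    """PHAT-weighted cross-correlation, centered on lag 0."""
    n = len(x) + len(y)
    S = np.fft.rfft(x, n) * np.conj(np.fft.rfft(y, n))
    S /= np.abs(S) + 1e-12                      # PHAT whitening
    cc = np.fft.irfft(S, n)
    return np.concatenate((cc[-n // 2:], cc[:n // 2]))

def srp_phat(signals, mics, grid):
    """Pick the grid point with maximum steered response power.

    signals : (M, T) array of microphone signals
    mics    : (M, 3) microphone positions (m)
    grid    : (G, 3) candidate source positions (m)
    """
    m = len(mics)
    center = signals.shape[1]      # lag-0 index in the centered GCC
    pairs = [(i, j) for i in range(m) for j in range(i + 1, m)]
    ccs = {p: gcc_phat(signals[p[0]], signals[p[1]]) for p in pairs}
    power = np.zeros(len(grid))
    for g, pt in enumerate(grid):
        d = np.linalg.norm(mics - pt, axis=1)   # ranges to each mic
        for (i, j), cc in ccs.items():
            # Expected TDOA (in samples) for this candidate point
            lag = int(round((d[i] - d[j]) * FS / C))
            power[g] += cc[center + lag]
    return grid[np.argmax(power)]
```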
Microphone arrays modeled as perspective cameras (3D map, short footage): Acoustic 3D map estimated from a single microphone array modelled as a perspective camera. The x and y axes represent angle variations in azimuth and elevation, respectively, with the array center being the local coordinate origin. The z axis represents the acoustic power coming from the corresponding direction in 3D space (azimuth, elevation). The microphone array is composed of four elements, three of them linearly aligned and the fourth orthogonal to the others (inverted T shape). The speaker moves slightly during the sequence (you can also get the full quality video file).
Microphone arrays modeled as perspective cameras (2D map, long footage): Acoustic 2D map generated from the 3D surface estimation described above. In this case, the ground truth is also shown as a red circle (the ground truth being the circle center). The x and y axes represent angle variations in azimuth and elevation, respectively, with the array center being the local coordinate origin. The speaker moves slightly during the sequence (you can also get the full quality video file).
Microphone arrays modeled as perspective cameras (2D map): Acoustic 2D map generated by a linear microphone array composed of four equispaced microphones (so that only azimuth can be resolved). The x and y axes represent angle variations in azimuth and elevation, respectively, with the array center being the local coordinate origin. The speaker was static throughout the sequence (you can also get the full quality video file).
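The "perspective camera" analogy in the three items above means that each look direction (azimuth, elevation) is projected onto a virtual image plane, just as a pinhole camera projects rays onto pixels, and the acoustic power coming from that direction is drawn at the projected point. A minimal sketch of the projection, under an assumed angle convention (the exact convention and scale used in the work may differ):

```python
import numpy as np

F = 1.0  # virtual focal length: an arbitrary map scale factor

def direction_to_map(azimuth, elevation):
    """Pinhole-style projection of a look direction (radians)
    onto the virtual image plane of the 'array-camera'."""
    d = np.array([
        np.cos(elevation) * np.sin(azimuth),   # x (lateral)
        np.sin(elevation),                     # y (vertical)
        np.cos(elevation) * np.cos(azimuth),   # z (broadside)
    ])
    return F * d[0] / d[2], F * d[1] / d[2]    # perspective division

# The acoustic power estimated for each direction is then plotted
# at the returned (x, y), producing the 2D acoustic map.
print(direction_to_map(np.radians(10), np.radians(-5)))
```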
Sector based acoustic localization using multiple arrays: Demonstration of sector based localization using two circular arrays. The system identifies the active sectors for each array and runs an SRP-PHAT algorithm in their intersection to determine the speaker position. The black circle is the ground truth position, and the red one is the estimated location. The active intersections are also highlighted in red (you can also get the full quality video file).
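The sector step can be sketched as a pruning stage: each array scores a small set of coarse angular sectors (for instance with beamformed energy), and only grid points falling inside an active sector of every array are passed to the fine SRP-PHAT search. A hypothetical sketch of that pruning logic (sector count, threshold and interfaces are all assumptions):

```python
import numpy as np

N_SECTORS = 8  # hypothetical number of angular sectors per array

def active_sectors(sector_energy, threshold=0.5):
    """Indices of sectors whose coarse (e.g. beamformed) energy
    exceeds a fraction of the per-array maximum."""
    e = np.asarray(sector_energy)
    return set(np.flatnonzero(e >= threshold * e.max()))

def sector_of(point, array_center):
    """Angular sector index of a 2D point seen from an array."""
    ang = np.arctan2(point[1] - array_center[1],
                     point[0] - array_center[0]) % (2 * np.pi)
    return int(ang // (2 * np.pi / N_SECTORS))

def prune_grid(grid, array_centers, sector_energies):
    """Keep only the grid points lying in an active sector of
    EVERY array; the fine SRP-PHAT search then runs on these."""
    act = [active_sectors(e) for e in sector_energies]
    return [p for p in grid
            if all(sector_of(p, c) in a
                   for c, a in zip(array_centers, act))]
```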
XPFCP based tracker (using a 2D grid): XPFCP based tracker, using a 2D grid estimation and target projections on the floor. Each target is given its own target ID (color). Sequence obtained in the former GEINTRA lab (you can also get the full quality video file).
XPFCP based tracker (using a 3D grid): XPFCP based tracker, using a 3D grid estimation and target projections on the floor. Each target is given its own target ID (color), and a clustering procedure tries to identify the different targets in 3D space. Sequence obtained in the new GEINTRA Intelligent Space Lab (you can also get the full quality video file).
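The XPFCP details are in the group's publications; purely as an illustration of the filter-plus-clustering idea, the hypothetical sketch below propagates a single particle set over all targets and then clusters it (k-means style) to recover one estimate, and thus one ID, per target. All motion and likelihood models, and every number, are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def xpfcp_step(particles, measurements, k, sigma=0.2):
    """One hypothetical filter-plus-clustering step.

    particles    : (N, 2) positions on the floor plane
    measurements : (M, 2) detected points in this frame
    k            : expected number of targets (clusters)
    """
    # 1. Predict: random-walk motion model (illustrative only)
    particles = particles + rng.normal(0, sigma, particles.shape)
    # 2. Weight: likelihood by distance to the nearest measurement
    d = np.linalg.norm(particles[:, None] - measurements[None], axis=2)
    w = np.exp(-d.min(axis=1) ** 2 / (2 * sigma ** 2))
    w /= w.sum()
    # 3. Resample (multinomial)
    particles = particles[rng.choice(len(particles), len(particles), p=w)]
    # 4. Cluster the particle set to separate targets (k-means style)
    centers = particles[rng.choice(len(particles), k, replace=False)]
    for _ in range(10):
        lbl = np.argmin(np.linalg.norm(
            particles[:, None] - centers[None], axis=2), axis=1)
        centers = np.array([
            particles[lbl == i].mean(axis=0) if np.any(lbl == i)
            else centers[i] for i in range(k)])
    return particles, centers

parts = rng.uniform(0, 5, (500, 2))
meas = np.array([[1.0, 1.0], [4.0, 3.0]])   # two simulated targets
for _ in range(5):
    parts, centers = xpfcp_step(parts, meas, k=2)
print(centers)  # cluster centers near the two targets
```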