Multisensor Systems and Robotics Lab

Multisensor Systems, Robotics, Human-Robot Interaction, Man-Machine Systems, Pervasive Computing, Ambient Intelligence, Industry 4.0.

A research unit at the Dept. of Electrical, Electronics, Computers and Systems Engineering (DIEECS), Universidad de Oviedo, Gijón Campus.

Location: Ed. Departamental Oeste, Módulo 2, 1ª planta (2.1.15), Campus de Gijón, Viesques, 33204 Gijón, Asturias, Spain

Active Research Areas

Body-worn sensors for the measurement of human activity and behavior: Body-worn (wearable) sensors can be used to measure the physical activity and behavior of people. A fundamental advantage of such systems is that they can be applied under daily-life conditions (home, work, sports field, etc.). They also provide an uninterrupted flow of information and do not suffer from issues common to other technologies, such as occlusions in environmental vision systems. However, to successfully apply this technology to sense humans, it is necessary to reach a compromise between the potential benefits derived from its use and the degree of intrusiveness that carrying sensors on the body actually imposes on the activity itself and on the comfort of the individual. This intrusiveness refers not only to the degree of interference involved in carrying the measuring devices, but also to any adjustment or calibration procedures, which must be minimized or, as far as possible, eliminated.

Starting from this premise, we mainly explore the measurement of physical activity and behavior through biomechanical estimation using body-worn inertial sensors (accelerometers, gyroscopes and magnetometers, i.e. IMUs) placed on different body segments. We are interested in developing new systems and measurement procedures based on this technology, with new signal-processing methods and different sensor configurations that improve the state of the art of the field. For this purpose we are also exploring other promising sensors (e.g. electromyography, galvanic skin response, electrocardiogram, electroencephalogram, force) and technologies and devices (e.g. GPS, body cams).
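As a minimal illustration of the kind of signal processing involved, the sketch below computes the signal magnitude area (SMA), a feature commonly used as a proxy for physical-activity intensity, over sliding windows of tri-axial accelerometer data. The data here are simulated and all names and parameters are illustrative, not part of any system developed by the lab:

```python
import numpy as np

def signal_magnitude_area(acc: np.ndarray) -> float:
    """SMA of one window of tri-axial accelerometer samples.

    acc: array of shape (n_samples, 3), gravity-free acceleration.
    Returns the mean over the window of the summed absolute values
    of the three axes, a common activity-intensity feature.
    """
    return float(np.mean(np.sum(np.abs(acc), axis=1)))

def sliding_windows(signal: np.ndarray, size: int, step: int):
    """Yield fixed-size windows over the first axis of `signal`."""
    for start in range(0, len(signal) - size + 1, step):
        yield signal[start:start + size]

# Simulated 5 s of 50 Hz data: a quiet resting phase followed by a
# noisier "walking" phase (purely synthetic).
rng = np.random.default_rng(0)
rest = 0.02 * rng.standard_normal((125, 3))
walk = 0.5 * rng.standard_normal((125, 3))
data = np.vstack([rest, walk])

# One-second windows; walking windows yield a clearly higher SMA.
sma_per_window = [signal_magnitude_area(w)
                  for w in sliding_windows(data, size=50, step=50)]
```

In a real pipeline, features such as this one would feed a classifier that labels each window with an activity; the point here is only the windowed-feature structure of the computation.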

Interpretation and prediction of human activity and behavior by means of body-worn sensors: We address new methods for extracting higher-level information about human activity and behavior, under the same constraints of mobility, non-intrusiveness and privacy as the problem posed above.

Specifically, we intend to address the detection of user intention (the ability to discern whether an action or sequence of actions has a purpose or has occurred arbitrarily) and the prediction of user intention (the ability to predict the final goal of the user from an incomplete sequence of actions). For that purpose we resort to all kinds of models of human behavior (biomechanical, neuroscientific, computational, etc.), even building them ourselves when they are not available.
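To make the prediction problem concrete, the sketch below scores candidate goals for a partial action sequence using simple prefix-frequency counts over logged demonstrations. The actions, goals and data are invented for illustration only; real models of this kind would be far richer (probabilistic, biomechanical, etc.):

```python
from collections import Counter

# Hypothetical logged demonstrations: (action sequence, final goal).
demonstrations = [
    (("reach", "grasp_cup", "lift"), "drink"),
    (("reach", "grasp_cup", "tilt"), "pour"),
    (("reach", "grasp_cup", "lift"), "drink"),
    (("reach", "grasp_pen", "lift"), "write"),
]

def predict_goal(partial: tuple, demos) -> str:
    """Return the most frequent final goal among demonstrations whose
    action sequence starts with the observed partial sequence."""
    counts = Counter(goal for seq, goal in demos
                     if seq[:len(partial)] == partial)
    if not counts:
        raise ValueError("no demonstration matches this prefix")
    return counts.most_common(1)[0][0]

# After observing only the first two actions, "drink" is the best guess:
# two of the three matching demonstrations end in it.
guess = predict_goal(("reach", "grasp_cup"), demonstrations)
```

The design choice worth noting is that prediction is posed over incomplete sequences: the model must commit to a goal before the action stream ends, which is exactly what distinguishes intention prediction from after-the-fact activity recognition.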

Human-machine interaction in the presence of humans empowered with body-worn sensors: Traditionally, environmental sensors or sensors mounted on the machine have been used to sense the human in human-machine collaboration scenarios. Among such sensors, those that try to mimic the human senses (cameras, microphones, haptic sensors) are prevalent, guided by an attempt to implement human-machine interaction through a biomimetic approach. These external sensors suffer from well-known issues; for instance, cameras are affected by occlusions and environmental optical disturbances. Motivated by this, a first goal of our research is to determine whether wearable sensors worn by the human can feed the machine with better information than external or machine-mounted sensors, always addressing the trade-off posed by the intrusiveness that this technology implies for the human partner. Moreover, the current state of the art in sensing technologies provides low-cost, minimally intrusive wearable sensors that measure a wide range of physiological and kinematic parameters: velocities and accelerations of body segments, skin conductivity, temperature, EEG, ECG, etc. Given that we intend to "augment" the machine with sensors placed on the human body, why not use these "new" sensors to perform the interaction in presently unknown but promising ways?