Human-Machine cognitive interaction by means of physiological computing 

I am currently a member of the Multisensor Systems and Robotics Lab at the University of Oviedo, where I explore to what extent physiological computing can help to develop better intelligent human-machine interaction. To be more specific, I am referring to the interaction between a human and an artificial system such as a robot, without forgetting other kinds of computerized artificial systems (e.g. intelligent environments).

In human-machine cognitive interaction, the machine must recognize the intentions, and even the expectations, objectives and goals, behind human actions in order to elaborate its own plan of action. Traditionally, environmental sensors or sensors mounted on the machine have been used for this purpose. Among such sensors, those that try to mimic the human senses (cameras, microphones, haptic sensors) are prevalent, guided by an attempt to implement human-machine interaction through a biomimetic approach.

These external sensors (external from the point of view of the human) suffer from well-known issues. For instance, cameras are affected by occlusions and environmental optical disturbances. Motivated by this, a first goal of my research is to determine whether wearable sensors worn by the human can feed the machine with better information than external or machine-mounted sensors, always addressing the trade-off posed by the intrusiveness that this technology implies for the human partner.

On the other hand, the state of the art in sensing technologies provides low-cost, minimally intrusive wearable sensors to measure a wide range of physiological parameters: velocities and accelerations of body segments, skin conductivity, temperature, EEG, ECG, etc. Given that I intend to augment the machine with sensors placed on the human body, why not use those "new" sensors to perform the interaction in presently unknown but promising ways?
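To make the idea concrete, here is a minimal sketch of how a machine might fuse two such wearable signals into a simple estimate of the human partner's state. Everything in it is an illustrative assumption of mine, not a method described above: the signal names, the window features, the thresholds, and the toy fusion rule are all hypothetical.

```python
# Hypothetical sketch: fusing two wearable physiological signals
# (skin conductance level and wrist acceleration) into a toy
# "high arousal" estimate. All names and thresholds are assumptions
# made for illustration only.
import statistics

def window_features(samples):
    """Mean and population standard deviation over one analysis window."""
    return statistics.mean(samples), statistics.pstdev(samples)

def estimate_arousal(scl_window, accel_window,
                     scl_threshold=6.0, accel_threshold=1.5):
    """Toy fusion rule: flag high arousal when skin conductance
    (microsiemens) is elevated AND body movement is low, a pattern
    one might associate with mental rather than physical load."""
    scl_mean, _ = window_features(scl_window)
    _, accel_std = window_features(accel_window)
    return scl_mean > scl_threshold and accel_std < accel_threshold

# Synthetic one-second windows (values are made up for illustration).
scl = [6.8, 7.1, 6.9, 7.0]      # skin conductance, microsiemens
accel = [0.1, 0.2, 0.15, 0.1]   # acceleration magnitude, m/s^2

print(estimate_arousal(scl, accel))  # True for this synthetic window
```

A real system would of course replace the threshold rule with a learned model and handle sensor noise and artifacts, but the sketch shows the basic loop: windowed features from body-worn signals feeding a machine-side inference about the human.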