HMI

How will we interact with machines in the near future?

Our electronic and computer devices are getting smaller and smaller, more and more adapted to our needs, and closer and closer to us physically. From the first heavy, fixed and complex computers, we have moved on to smartphones that accompany us in our every move. What innovations should we now expect? Éric Lecolinet, a researcher in human-computer interaction at Télécom ParisTech, answers our questions on this rapidly evolving field.
 

How are human-machine interactions defined?

Human-machine interactions cover all interactions between humans and electronic or computing devices, as well as interactions between humans via these devices. This concerns desktop computers and smartphones as much as airplane cockpits and industrial machines! The study of these interactions is very broad, with applications in almost every field. The aim is to develop machines capable of representing data in a way the user can easily interpret, and to allow the user to interact intuitively with that data.
 
In human-machine interactions, a distinction is made between output data, which is sent from the machine to the human, and input data, which is sent from the human to the machine. The output data is usually visual, because it is sent via screens, but it can also be auditory, or even tactile, thanks to vibrations for example. Input data is typically sent via a keyboard and mouse... but we can also communicate with our machines using gestures, voice or touch!
 
The study of human-machine interactions is a multidisciplinary field. It concerns computer science disciplines (software engineering, machine learning, signal and image processing), as well as social science disciplines (cognitive psychology, ergonomics, sociology). Finally, design, graphic arts, hardware, and new materials are also very important specialties in the development of new interfaces.
 

How have human-machine interactions evolved?

Let's go back a few years, to the 1950s... At that time, computing devices were computer centres: stationary, very bulky machines confined to specialised laboratories. It was humans who had to adapt to these computers: you had to learn their language and be an expert in the field to have any hope of interacting with them...
 
The next step was the personal computer, especially the Macintosh, in 1984, following the work of Xerox PARC in the 1970s. It was a real shock! The computer belongs to you; it's in your office, in your home. First came the desktop PC, which is fixed, and then the laptop PC that you take everywhere with you: machines became something we appropriate and carry around. Above all, these first personal computers were designed to facilitate interaction. It is no longer the user who has to learn the language of the machine. It is the machine itself that facilitates the interaction, in particular thanks to the WIMP model (Windows, Icons, Menus, Pointer), a desktop metaphor.
 
While machines have been shrinking since the 2000s, the real breakthrough came with the iPhone in 2007. It introduced a new paradigm that profoundly redefined the human-machine interface, with the main objective of adapting it as closely as possible to the human. Radical decisions were taken: the interface is fully tactile, without a physical keypad, with a high-resolution multitouch screen; proximity sensors turn off the screen when the phone is held to the ear; the display adapts to the orientation of the phone...
 
Machines are becoming smaller and smaller, more and more mobile, and closer and closer to the body, like connected watches and biofeedback devices... In the future, we could imagine jewellery, clothes, tattoos! And above all, our devices are more and more intelligent, more and more adapted to our needs. Now, it's no longer we who learn to use the machines, it's the machines that adapt to us.
 
Right now, the media is talking a lot about voice interfaces, which could be the next revolution in human-machine interfaces.

These are very interesting technologies, which are making a lot of progress and will become increasingly useful. There is certainly a lot of work being done on these voice interfaces, and more and more services are available, but, in my view, this will not replace the mouse and keyboard. For example, it is not suitable for word processing or digital drawing! It's fine for certain specific tasks, such as asking your phone "find me a movie for tonight at 8 o'clock" while walking or driving, or for warehouse workers who have to give instructions to machines while their hands are occupied. But the interaction bandwidth, the amount of information you can transmit through this modality, is limited. And for everyday use, privacy issues arise: do you want to talk aloud to your smartphone in the subway or at the office?

 

We also hear a lot about brain-machine interfaces...

These are promising technologies, especially for people with severe disabilities. But they are far from ready for the general public, for video games for example, which require fast interaction... They are slow and restrictive technologies. Unless you are willing to have electrodes implanted in the brain, you have to wear a cap of electrodes on your head, which must be carefully positioned so that they don't move, and often coated with conductive gel to improve their effectiveness.
 
A technological leap forward may yet make applications for the general public possible, but I think that many other innovations will reach the market before brain-machine interfaces.
 

What fields of innovation will human-machine interfaces turn towards?

There are many possibilities; the research currently being carried out on the subject is very varied! Gestural interactions, for example, are at the centre of many projects, and some devices are already on the market. The idea is to use 2D or 3D gestures, along with the type of touch and the pressure applied, to interact with one's smartphone, computer, television... At Télécom ParisTech, for example, we have developed a prototype of an intelligent watch, Watch it, which can be controlled using a vocabulary of gestures. This makes it possible to interact with the device without even looking at it!
 
 
This project also allowed us to explore ways of interacting with a connected watch, a small object that is difficult to control with our fingers. We thought of using the bracelet as a touch interface, to scroll through the watch's screen. These small objects that sit very close to the body, which we carry with us, will continue to develop. We could see connected jewellery appear, for example! Some researchers are working on interfaces projected directly onto the skin to interact with these kinds of small devices.
 
Tangible interfaces are also an important avenue. The idea is that any everyday object can become interactive, with interactions specific to its function: no more searching through menus, since the object corresponds to one precise function. These objects can also change shape (shape-changing interfaces). In this field of research, we have developed Versapen: a modular augmented pen. It consists of modules that the user can arrange to create new functions for the object, and each module is user-programmable. So we have a tangible interface that is totally customizable!
 
 
Finally, one of the great revolutions in human-machine interfaces is augmented reality. It's a recent technology, but already functional. And the applications are everywhere, from video games to assistance during maintenance operations... At Télécom ParisTech, we have worked in collaboration with EDF to develop augmented reality devices. The idea is to project information onto the control panels of nuclear power plants to guide employees during maintenance operations.
 
It is very likely that augmented, virtual and mixed realities will continue to develop in the coming years. The GAFA companies (Google, Amazon, Facebook, Apple) are investing considerable sums in this area. These technologies have already made a huge leap forward and are really starting to become popular... In my opinion, this will be one of the next key technological subjects, just as big data and artificial intelligence are today... And as a researcher specialising in human-machine interfaces, it seems important to me to take a position on this topic!
 
Éric Lecolinet, Human-Computer Interaction Researcher at Télécom ParisTech
 
The original version of this article was published by I'MTech, editorial partner of UP' Magazine.
