A wearable sensor will help ALS patients to communicate

A group of researchers at MIT has designed a device that can detect slight facial movements, helping ALS patients to communicate.

ALS (amyotrophic lateral sclerosis) is a progressive neurodegenerative disease that attacks motor neurons and leads to paralysis of voluntary muscles, including respiratory muscles.
Patients who suffer from it gradually lose the ability to speak and, until now, have typically had to rely on eye-tracking devices to communicate.
That may now change thanks to a wearable sensor.

Wearable sensor

The device has a skin-like structure that can be attached to the patient’s face, where it detects slight movements such as a twitch or a smile.
Using this approach, patients can communicate a variety of feelings and needs through small movements that the device picks up and interprets.

The study was published in Nature Biomedical Engineering.

The wearable sensor is thin and can be camouflaged with makeup to match any skin tone, making it discreet and comfortable to wear.
Canan Dagdeviren, professor of Media Arts and Sciences at the Massachusetts Institute of Technology and leader of the research team, explains that the device is comfortable, lightweight, disposable and virtually invisible, sparing the patient significant discomfort.

How it works

The sensor sends the information it collects to a portable processing unit, which analyses it using an algorithm programmed to recognise facial movements.
In the current prototype, this unit is wired to the sensor, but in the future the connection could be wireless for greater convenience and ease of use.
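
As a rough illustration of that pipeline, the sketch below is a hypothetical mock-up; the study's actual signal processing and classifier are not described in this article. It windows the strain signal from the sensor, summarises it into simple features, and matches it against per-patient templates for each expression:

```python
# Hypothetical sketch of the sensing pipeline: a window of strain samples
# from the skin-mounted sensor is summarised into simple features and
# matched against pre-recorded templates for each expression.
# The real device's signal processing is not detailed in this article.
import numpy as np

def features(window: np.ndarray) -> np.ndarray:
    """Summarise a window of strain samples as (peak, mean, spread)."""
    return np.array([window.max(), window.mean(), window.std()])

def classify(window: np.ndarray, templates: dict[str, np.ndarray]) -> str:
    """Return the expression whose feature template is nearest (Euclidean)."""
    f = features(window)
    return min(templates, key=lambda name: np.linalg.norm(f - templates[name]))

# Templates would be calibrated per patient from labelled recordings;
# these numbers are illustrative only.
templates = {
    "smile": np.array([0.8, 0.4, 0.2]),
    "open mouth": np.array([1.5, 0.9, 0.4]),
    "pursed lips": np.array([0.5, 0.2, 0.1]),
}

signal = np.abs(np.random.default_rng(0).normal(0.9, 0.3, 50))  # fake sensor window
print(classify(signal, templates))
```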

The researchers tested the first version of their device on two ALS patients, a man and a woman, demonstrating that it can accurately recognise three different expressions: pursed lips, an open mouth and a smile.

The researchers say that, starting from the recognised facial movements, it will be possible to build a personalised library of phrases or words, each associated with a different combination of movements.
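
One way to picture such a library (again a hypothetical sketch, not the researchers' implementation) is a simple lookup from sequences of detected expressions to phrases:

```python
# Hypothetical phrase library: sequences of detected expressions map to
# personalised phrases. The mapping shown is illustrative only.
PHRASES = {
    ("smile",): "Yes",
    ("pursed lips",): "No",
    ("open mouth", "smile"): "I am hungry",
    ("smile", "smile"): "Please call the nurse",
}

def decode(movements: list[str]) -> str:
    """Translate a sequence of recognised movements into a phrase."""
    return PHRASES.get(tuple(movements), "<unrecognised combination>")

print(decode(["open mouth", "smile"]))  # -> "I am hungry"
```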

This would give patients greater autonomy, at least as far as their ability to express themselves is concerned.