HEALTH

How Facial Recognition Will Help Doctors Take Better Care Of Babies

Medical researchers are developing facial analysis apps that diagnose genetic disorders and monitor babies' pain

Photo Illustration: R. A. Di Ieso
May 10, 2017 at 11:44 AM ET

While on a service mission to Uganda in 2013, clinical geneticist Paul Kruszka was puzzled by a baby’s face. A cardiologist had handed him an African child with a heart defect linked to Down syndrome, but the baby didn’t show the distinct facial features typically associated with the disorder, such as upward-slanting eyes that are unusual for the child’s ethnic group and unusually shaped or small ears. After an examination, Kruszka learned that the child did, in fact, have Down syndrome.

During his work travels, Kruszka realized that minimal genetics research had been done in developing countries, where medical researchers prioritize pervasive issues like infectious disease and starvation. “We gave a talk in Nigeria, and there was just a line of physicians with pictures on their cell phones of kids with genetic syndromes,” Kruszka said. “After enough instances like that it became apparent to us that this is something we needed to explore.”

Kruszka and a colleague at the National Institutes of Health (NIH), geneticist Maximilian Muenke, decided they should build a tool that would help with diagnoses. They first developed a guide, the NIH Atlas of Human Malformation Syndromes in Diverse Populations, which allows users to browse images of faces by geography or condition.

After compiling photos taken by doctors across the world in a format that is free and easily accessible in developing countries, they turned their focus to facial recognition technology. They teamed up with Marius Linguraru, at the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National Health System (CNHS), who had developed facial analysis technology for genetic studies.

Together, the researchers at NIH and CNHS used the facial analysis technology to detect Down syndrome across different races. Then they expanded their research to DiGeorge syndrome, with a study published last month in the American Journal of Medical Genetics. In addition to cognitive impairment and heart defects, DiGeorge syndrome causes cleft palate and distinct facial characteristics that can vary by race. For the study, researchers used doctor-submitted photos of 101 people with DiGeorge syndrome in 11 countries. The technology analyzed 126 facial features on 156 Africans, Asians, Caucasians, and Latin Americans with the syndrome and correctly detected it 96.6 percent of the time across all of those groups. Soon the team will submit a paper showing how accurately the system can detect Noonan syndrome.
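The researchers have not published the underlying code, but the general recipe the study implies, measuring many geometric relationships between facial landmarks and feeding those measurements to a statistical classifier, can be sketched in simplified form. The Python snippet below is a hypothetical illustration using synthetic data and scikit-learn; the feature values, cohort, and choice of model are assumptions for demonstration, not the actual NIH/CNHS system.

```python
# Hypothetical sketch: classifying a genetic syndrome from facial-geometry
# features with an off-the-shelf machine-learning model. All data here is
# synthetic; the real system's 126 measurements and training cohort are not
# reproduced.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

N_FEATURES = 126   # e.g. normalized distances and angles between facial landmarks
N_SUBJECTS = 300   # synthetic stand-in cohort

# Simulate feature vectors; affected subjects differ slightly on a subset of features.
X = rng.normal(size=(N_SUBJECTS, N_FEATURES))
y = rng.integers(0, 2, size=N_SUBJECTS)   # 1 = syndrome present, 0 = unaffected
X[y == 1, :20] += 0.8                     # shift a few measurements for the affected group

# A linear support-vector classifier is a common choice for small, high-dimensional cohorts.
model = SVC(kernel="linear")
scores = cross_val_score(model, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.3f}")
```

In a real system, the landmark measurements would be extracted from clinical photographs, and the classifier would be trained and validated on expert-confirmed diagnoses across ethnic groups.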

As they refine the technology, the researchers hope this tool will be used by healthcare providers around the world. “Our ultimate goal is a simple and accurate tool that would enable doctors in clinics without access to state-of-the-art genetic facilities to help vulnerable young patients everywhere,” Linguraru said. “The technology would allow clinicians to identify children with genetic conditions by simply using a photo. This would be ideal in community hospitals and in the developing world where blood tests and specialized genetic expertise are usually unavailable.”

In fact, a similar application is already widely available. Last year, digital health startup FDNA launched Face2Gene, a smartphone app that provides a list of possible genetic syndromes based on a selfie. In early April, the company announced it could recognize the facial characteristics associated with 2,000 syndromes. Anyone can download the app, but only medical professionals can use it, and Face2Gene merely lists possibilities rather than providing a diagnosis. Perhaps with the research being done by the NIH and CNHS, AI will soon be capable of providing a reliable diagnosis.

But genetic syndromes aren’t the only thing machines can read on a face. Last month, researchers at the University of South Florida (USF) announced they had developed facial analysis software that measures the pain of infants, much like neonatal nurses do. “Nurses look at many different factors when assessing babies for pain. They look at vital signs. They look at the baby’s behavior. But they also look at changes in the baby’s facial expression,” said Terri Ashmeade, pediatrics professor at USF Health Morsani College of Medicine. “These evaluations are fairly subjective because it’s somebody’s interpretation of something that’s happening to somebody else, so there’s variability between nurses in terms of their assessment of a patient’s pain.”

Ashmeade and other neonatal experts worked with developers at the USF College of Engineering to create an application that can continuously assess babies’ facial expressions. Their initial study included 53 infants in neonatal intensive care. They placed GoPro cameras over incubators, then compared that footage with synced readings of vital signs and oxygen levels in the brain. When those readings showed the infant was hurting or distressed, facial analysis software registered and recorded the change in expression. The nurses also helped tweak the algorithms using their own face-reading expertise.
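The USF team’s software is not public, but the monitoring logic described above, checking a facial-expression score against synced vital signs before raising an alert, can be sketched in simplified form. Every name and threshold in the Python snippet below is an illustrative assumption, not the team’s actual implementation.

```python
# Hypothetical sketch of a continuous pain-monitoring loop: combine a
# facial-expression "distress" score (e.g. derived from camera footage)
# with vital-sign readings and flag moments that may need a nurse's attention.
from dataclasses import dataclass

@dataclass
class Reading:
    timestamp: float         # seconds since monitoring started
    expression_score: float  # 0.0 (calm face) to 1.0 (grimace / cry face)
    heart_rate: int          # beats per minute
    spo2: float              # blood-oxygen saturation, percent

def needs_intervention(r: Reading,
                       expression_threshold: float = 0.7,
                       hr_threshold: int = 180,
                       spo2_threshold: float = 88.0) -> bool:
    """Flag a reading when the face and at least one vital sign agree."""
    vitals_abnormal = r.heart_rate > hr_threshold or r.spo2 < spo2_threshold
    return r.expression_score > expression_threshold and vitals_abnormal

# Example: a synced stream of readings, one per second from the camera feed.
stream = [
    Reading(0.0, 0.2, 150, 96.0),
    Reading(1.0, 0.8, 185, 87.5),  # grimace combined with tachycardia and low SpO2
]
for r in stream:
    if needs_intervention(r):
        print(f"t={r.timestamp}s: alert nurse, possible pain or distress")
```

In practice, the expression score would come from a model trained on the kind of nurse-annotated footage described above, and the thresholds would be tuned clinically rather than hard-coded.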

Such a system could provide non-stop monitoring that alerts nurses when they should intervene and alleviate pain. These measures could potentially have a lifelong effect on neurodevelopment. “There’s been a lot of research into pain in babies who are in neonatal intensive care units — especially premature babies who can be in the hospital for several months and who receive multiple procedures that are necessary for their care but can cause pain,” Ashmeade said. “Babies that have multiple recurrent and prolonged exposure to pain seem to have changes in their brain development, even in terms of brain structure.”

Ashmeade believes that this system could also be used to monitor adult patients who can’t communicate with their nurses because of brain damage, dementia, or senility. It seems it may not be long before robots can understand our feelings better than their flesh-and-blood counterparts.
