Researchers at Northern Illinois University have used artificial intelligence to extract information from the sounds babies make. They focused on two tasks: identifying signs of trouble in an infant's voice, and translating the signals babies send to their parents into understandable terms. The first results are encouraging.
The machine learning algorithm builds on existing automatic speech recognition technology, extracting the key acoustic patterns characteristic of the infant vocal apparatus. For example, the researchers took into account that an infant cannot produce complex sounds, nor consciously control their sequence. At the same time, an experienced caregiver can easily tell a cry of "discomfort" from a "normal" cry, even without knowing what the trouble is.
The AI was trained on recordings of sounds from 26 babies in an intensive care unit, annotated by hospital staff (obstetricians and nurses) as well as by nannies and mothers of several children. It was taught to distinguish five main signals in the stream of sounds: "hunger", "full diaper", "sleepiness", "attention", and, most importantly, "discomfort". The key challenge was simply to separate the fifth signal from the first four, in order to minimize the number of false alarms.
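The article does not publish the researchers' actual model, but the pipeline it describes (extract acoustic features from a recording, then assign one of five labels) can be sketched in miniature. The following is a toy illustration, not the team's method: it uses crude log band-energy features in place of real speech-recognition features, and a nearest-centroid rule in place of a trained neural model. All names (`band_energies`, `NearestCentroidCryClassifier`) are hypothetical.

```python
import numpy as np

# The five signal labels described in the article.
LABELS = ["hunger", "full diaper", "sleepiness", "attention", "discomfort"]

def band_energies(signal, n_bands=8):
    """Crude spectral feature: log energy in equal-width frequency bands.
    A stand-in for the richer acoustic features a real system would use."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.log(np.array([b.sum() for b in bands]) + 1e-9)

class NearestCentroidCryClassifier:
    """Toy classifier: store the mean feature vector of each labeled class,
    then predict the label whose centroid is closest to a new recording."""
    def fit(self, signals, labels):
        feats = np.array([band_energies(s) for s in signals])
        self.centroids = {
            lab: feats[[l == lab for l in labels]].mean(axis=0)
            for lab in set(labels)
        }
        return self

    def predict(self, signal):
        f = band_energies(signal)
        # Euclidean distance in feature space decides the label.
        return min(self.centroids,
                   key=lambda lab: np.linalg.norm(f - self.centroids[lab]))
```

In this sketch, "training" is just averaging features over the annotated recordings of each class, and prediction is a distance comparison; the real system would learn far subtler patterns, but the fit/predict shape of the pipeline is the same.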
So far, the AI is still undergoing training and testing. The baby's sex, race, and weight no longer matter to it: it successfully identifies the five key signals even in children whose voices it is hearing for the first time.