Initially, the UCLA team developed a program to help deaf and hard-of-hearing parents: it could pick out a baby's cry from background noise and alert them. The application was named Chatterbaby, and during testing it became clear that infant sounds fall into three categories: hunger, pain, and fussiness. An experienced mother can tell by intonation what her baby wants, but can a software algorithm be built to do the same?
The task turned out to be both simpler and harder than expected. Because an infant acts reflexively, pain makes it scream continuously, whereas ordinary fussing has distinct pauses between sounds. Hunger cries have their own acoustic color, and after training on 2,000 samples the program learned to recognize the three types of cries with about 90% accuracy. The only remaining question is the initial calibration of the system.
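The cue described above, whether a cry is continuous or broken up by pauses, can be illustrated with a toy sketch. This is not the actual Chatterbaby algorithm (which was trained on thousands of real recordings); the thresholds, labels, and the idea of classifying by pause ratio alone are illustrative assumptions.

```python
# Toy sketch (not the real Chatterbaby classifier): label a cry
# recording by the proportion of silent gaps in its amplitude envelope.
# All thresholds below are illustrative assumptions.

def pause_ratio(envelope, silence_threshold=0.1):
    """Fraction of frames whose amplitude falls below the threshold."""
    quiet = sum(1 for a in envelope if a < silence_threshold)
    return quiet / len(envelope)

def classify_cry(envelope):
    """Rough rule of thumb from the article: pain cries are near-continuous,
    while ordinary fussing has distinct pauses between sounds."""
    r = pause_ratio(envelope)
    if r < 0.1:
        return "pain"    # almost no pauses: continuous screaming
    elif r < 0.4:
        return "hunger"  # assumption: an intermediate pause pattern
    else:
        return "fussy"   # long gaps between bursts of sound

# Synthetic envelopes: one amplitude value (0..1) per 50 ms frame.
continuous = [0.9] * 40                  # no pauses at all
gappy = ([0.8] * 3 + [0.0] * 5) * 5      # short bursts, long gaps

print(classify_cry(continuous))  # -> pain
print(classify_cry(gappy))       # -> fussy
```

A real system would of course use richer acoustic features (pitch, spectral shape, burst timing) and a trained model rather than hand-set thresholds; the sketch only shows why pause structure alone already separates the extreme cases.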
Genotype, exact age, climate, environment, and other factors strongly affect a baby's voice, so UCLA decided to build a database of cry samples from as many children as possible. Chatterbaby users are encouraged to record and submit their babies' voices, but the recordings are not linked to personal identities: they are purely technical data, used for debugging the system and optimizing the algorithm.
But such a bank of infant voices may have broader uses, such as early detection of autism. In older children and adults with autism the voice changes noticeably; perhaps, after analyzing thousands of recordings, similar patterns could be identified in babies. To that end, parents are asked to meet the researchers halfway: send short recordings of their children's voices, five seconds each, and regularly undergo comprehensive checkups so that the data can be compared and observations made over time.