From ShutEye to SleepScore, several smartphone apps can help you better understand how snoring affects your rest by letting you leave the microphone on overnight to record your raucous nasal grunts and rumbling throat reverberations. But while smartphone apps are helpful for tracking the presence of snores, their accuracy remains an issue in real-world bedrooms with extraneous noise and more than one audible sleeper.
Preliminary research from the University of Southampton looks into whether your snores have a signature sound that could be used for identification. “How do you actually track snoring or coughing accurately?” asks Jagmohan Chauhan, an assistant professor at the university who worked on the research. Machine learning models, specifically deep neural networks, might help verify who is performing that snore-phonic symphony.
While the research is quite nascent, it builds on peer-reviewed studies that used machine learning to verify the makers of another data-rich sound, one often heard piercing the silence of night: coughs.
Researchers from Google and the University of Washington mixed human-speech audio and coughs into a data set and then used a multitask learning approach to verify who produced a particular cough in a recording. In their study, the AI performed 10 percent better than a human evaluator at determining who coughed out of a small group of people.
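The study’s exact architecture isn’t reproduced here, but the general idea of multitask learning can be sketched in a few lines: a single shared encoder processes the audio, and two separate “heads” are trained at once, one to classify the sound and one to identify who made it. The layer sizes, labels, and training step below are illustrative assumptions, not details from the paper.

```python
# A minimal, hypothetical sketch of multitask learning on cough audio:
# a shared encoder feeds two heads, one classifying the sound type
# (cough vs. speech) and one identifying the speaker. This illustrates
# the general technique, not the model used in the Google/UW study.
import torch
import torch.nn as nn

class MultitaskCoughModel(nn.Module):
    def __init__(self, n_speakers=10):
        super().__init__()
        # Shared encoder over log-mel spectrogram "images"
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Two task heads share the encoder's representation
        self.sound_head = nn.Linear(32, 2)             # cough vs. speech
        self.speaker_head = nn.Linear(32, n_speakers)  # who made the sound

    def forward(self, spectrogram):
        features = self.encoder(spectrogram)
        return self.sound_head(features), self.speaker_head(features)

model = MultitaskCoughModel()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch: 8 one-channel spectrograms (64 mel bins x 100 frames)
spectrograms = torch.randn(8, 1, 64, 100)
sound_labels = torch.randint(0, 2, (8,))
speaker_labels = torch.randint(0, 10, (8,))

optimizer.zero_grad()
sound_logits, speaker_logits = model(spectrograms)
# Optimize both tasks jointly; the shared encoder learns features useful
# for telling coughs apart *and* for telling people apart.
loss = criterion(sound_logits, sound_labels) + criterion(speaker_logits, speaker_labels)
loss.backward()
optimizer.step()
```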
Matt Whitehill, a graduate student who worked on the cough identification paper, questions some of the methodology underlying the snoring research and thinks more rigorous testing would likely reveal lower accuracy than reported. Still, he sees the broader concept of audible identification as valid. “We showed you could do it with coughs. It seems very likely you could do the same thing with snoring,” says Whitehill.
This audio-based segment of AI is not as widely covered (and definitely not in such bombastic terms) as natural language tools like OpenAI’s ChatGPT. Even so, a few companies are finding ways AI could be used to analyze audio recordings and improve your health.
Resmonics, a Swiss company focused on AI-powered detection of lung disease symptoms, released CE-certified medical software that is available in Switzerland through its myCough app. Although the software is not designed to diagnose disease, the app can help users track how many overnight coughs they experience and what type of cough is most prevalent. This gives users a more complete understanding of their cough patterns as they decide whether to consult a doctor.
David Cleres, a cofounder and chief technology officer at Resmonics, sees the potential for deep learning techniques to identify a particular person’s coughing or snoring, but believes that big breakthroughs are still necessary for this segment of AI research. “We learned the hard way at Resmonics that robustness to the variation in the recording devices and locations is as tricky to achieve as robustness to variations from the different user populations,” writes Cleres over email. Not only is it hard to find a data set with a range of natural cough and snore recordings, but it’s also difficult to predict the microphone quality of a five-year-old iPhone and where someone will choose to leave it at night.
So, the sounds you make in bed at night might be trackable by AI and distinguishable from the nighttime sounds produced by other people in your household. Could snores also be used as a biometric that’s linked to you, like a fingerprint? More research is needed before drawing any conclusions. “If you’re looking from a health perspective, it might work,” says Chauhan. “From a biometric perspective, we cannot be sure.” Chauhan is also interested in exploring how signal processing, without the help of machine learning models, could be used to assist in snorer spotting.
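As a rough illustration of what that non-ML route could look like, the sketch below flags candidate snore events by measuring energy in a low-frequency band. The band edges, frame length, and decibel threshold are invented placeholders for demonstration, not values from Chauhan’s work.

```python
# A hypothetical signal-processing approach to spotting snore-like events:
# band-pass the audio around the low frequencies where snores tend to sit
# (an assumed range here), then flag frames whose energy crosses a threshold.
# No learned model is involved; all parameters are illustrative guesses.
import numpy as np
from scipy.signal import butter, sosfilt

def detect_snore_frames(audio, sample_rate, frame_seconds=0.5, threshold_db=-30.0):
    # Band-pass roughly 60-300 Hz (assumed snore band)
    sos = butter(4, [60, 300], btype="bandpass", fs=sample_rate, output="sos")
    filtered = sosfilt(sos, audio)

    frame_len = int(frame_seconds * sample_rate)
    flags = []
    for start in range(0, len(filtered) - frame_len, frame_len):
        frame = filtered[start:start + frame_len]
        # Frame energy in decibels; small constant avoids log of zero
        energy_db = 10 * np.log10(np.mean(frame ** 2) + 1e-12)
        flags.append(energy_db > threshold_db)
    return flags

# Example: one minute of synthetic noise at 16 kHz stands in for a recording
audio = np.random.randn(16_000 * 60) * 0.01
print(sum(detect_snore_frames(audio, 16_000)), "frames flagged as possible snores")
```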
When it comes to AI in health care settings, eager researchers and intrepid entrepreneurs continue to encounter the same issue: a dearth of readily available quality data. The lack of diverse data for training AI can be a tangible danger to patients. For example, an algorithm used in American hospitals deprioritized the care of Black patients. Without robust data sets and thoughtful model construction, AI often performs differently in real-world circumstances than it does in sanitized practice settings.
“Everyone’s really kind of shifting to the deep neural networks,” says Whitehill. This data-intensive approach heightens the need for reams of audio recordings to produce quality research into coughs and snores. A machine learning model that tracks when you’re snoring or hacking up a lung is not as memeable as a chatbot that crafts existential sonnets about Taco Bell’s Crunchwrap Supreme, but it’s still worth pursuing with vigor. While generative AI remains top of mind for many in Silicon Valley, it would be a mistake to hit the snooze button on other AI applications and disregard their vibrant possibilities.