
Thesis

RT-CHOMP: Real-Time Multimodal Chewing Side Detection with Earphones

Background

Chewing side preference (CSP) has been identified both as a risk factor for temporomandibular disorders (TMD) (Barcellos et al., 2011; Tiwari et al., 2017) and as a behavioral manifestation of them (López-Cedrún et al., 2017; Yap et al., 2024). Despite TMDs affecting roughly one third of the global population (Alqutaibi et al., 2025; Zieliński et al., 2024), assessment mainly relies on clinical examinations and self-reports (Diernberger et al., 2008; National Academies of Sciences, 2020), offering limited insight into everyday jaw function. Continuous CSP monitoring could provide an objective proxy for functional asymmetries. Prior wearable approaches, however, mostly use specialized form factors and demonstrate limited performance (Yamasaki et al., 2015; Nakamura et al., 2020a, 2020b, 2021; Kim et al., 2022; Wang et al., 2021; Chung et al., 2017).

We therefore introduced CHOMP (Hummel & Burzer et al., 2026), the first system for chewing side detection using earphones. Employing OpenEarable 2.0 (Röddiger et al., 2025), we collected data from 20 participants with microphones, a bone-conduction microphone, an IMU, PPG, and a pressure sensor across eleven foods, five non-chewing activities, and three noise conditions. We applied the Continuous Wavelet Transform (CWT) to each sensing modality and used the resulting multi-channel scalograms as inputs to CNN-based classifiers. Microphones achieved the strongest single-sensor performance, with median scores of 94.5% in leave-one-food-out (LOFO) and 92.6% in leave-one-subject-out (LOSO) cross-validation. Fusing sensing modalities further improved performance to 97.7% for LOFO and 95.4% for LOSO, with additional evaluations under noise interference indicating robust performance.
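To make the scalogram step concrete, the following is a minimal sketch of how one window of multi-channel sensor data could be turned into a stack of CWT scalograms for a CNN. It assumes PyWavelets; the sampling rate, window length, scales, and wavelet are illustrative placeholders, not the parameters used in CHOMP.

```python
# Minimal sketch: turning a multi-channel sensor window into a CWT scalogram
# stack for a CNN. Assumes PyWavelets (pip install PyWavelets); the sampling
# rate, window length, scales, and wavelet are illustrative placeholders and
# not the values used in the original CHOMP pipeline.
import numpy as np
import pywt

FS = 500                    # assumed sampling rate in Hz (placeholder)
WINDOW_S = 2.0              # window length in seconds (placeholder)
SCALES = np.arange(1, 65)   # 64 wavelet scales -> 64 "frequency" rows
WAVELET = "morl"            # Morlet mother wavelet (a common choice)

def window_to_scalograms(window: np.ndarray) -> np.ndarray:
    """Convert a (channels, samples) window into (channels, scales, samples)
    CWT magnitude scalograms, each channel normalized to [0, 1]."""
    scalograms = []
    for channel in window:
        coeffs, _freqs = pywt.cwt(channel, SCALES, WAVELET,
                                  sampling_period=1.0 / FS)
        mag = np.abs(coeffs)
        mag = (mag - mag.min()) / (mag.max() - mag.min() + 1e-8)
        scalograms.append(mag)
    return np.stack(scalograms)  # shape: (channels, scales, samples)

if __name__ == "__main__":
    # Fake 6-channel window (e.g., left/right mics, bone-conduction mic, IMU axes)
    fake_window = np.random.randn(6, int(FS * WINDOW_S))
    x = window_to_scalograms(fake_window)
    print(x.shape)  # (6, 64, 1000) -> usable as a multi-channel CNN input
```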

There are, however, limitations to the current implementation of CHOMP. The system currently operates offline: sensor data are first stored on SD cards attached to the OpenEarable 2.0 devices before being processed on an external device (e.g., a smartphone). Due to bandwidth constraints, streaming all sensor data in real time at the chosen sampling rates is not feasible. As a result, CHOMP provides no immediate feedback to users and remains largely a “black box,” validated only under controlled, artificially constrained left- and right-chewing conditions. In real-world usage, this lack of transparency and feedback makes it difficult for users to trust or meaningfully interact with the system.

Your Tasks

The goal of this master’s thesis is to address these limitations by developing RT-CHOMP: a real-time, adaptive chewing side analysis system. Rather than replacing CHOMP, RT-CHOMP should take a complementary angle – prioritizing usability, transparency, and user trust in everyday scenarios. Building on the CHOMP paper and dataset, the thesis will extend the system along the following dimensions:

  • Real-time feedback on chewing side behavior
  • Transparency and demonstrability, enabling users to understand and trust the system
  • Practical usability, including an OpenWearables in-app implementation
  • User-centered evaluation, including a study investigating whether users perceive the system as capable of meaningful chewing analysis

Possible approaches include, among others, the following:

  • Investigating smaller window sizes and their implications (e.g., mixed left/right chewing cycles within a single window)
  • Selective or hierarchical streaming, e.g., preprocessing or filtering sensor data directly on the earable to reduce bandwidth, or simply reducing streaming frequencies
  • Experimenting with features beyond CWT-based scalograms
  • Incorporating short-term memory or autocorrelative features, which were not feasible in the original CHOMP evaluation (a rough sketch of one such feature follows this list)
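As one example of the last point, the sketch below computes a window-level autocorrelation feature that could expose chewing periodicity in short windows. The sampling rate, lag range, and synthetic test signal are assumptions for illustration only and are not taken from the CHOMP pipeline.

```python
# Rough sketch of a window-level autocorrelation feature that could complement
# (or partially replace) CWT scalograms for short windows. Sampling rate, lag
# range, and the synthetic test signal are assumptions for illustration only.
import numpy as np

def autocorr_features(signal: np.ndarray, fs: int, max_lag_s: float = 1.0):
    """Return the normalized autocorrelation up to max_lag_s plus a crude
    chewing-rate estimate (lag of the strongest non-trivial peak)."""
    x = signal - signal.mean()
    max_lag = int(max_lag_s * fs)
    # Full autocorrelation, keep non-negative lags only
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac[: max_lag + 1] / (ac[0] + 1e-12)   # normalize by zero-lag energy
    # Ignore very small lags (trivially high correlation) when searching peaks
    min_lag = int(0.2 * fs)                    # assume chewing slower than 5 Hz
    peak_lag = min_lag + int(np.argmax(ac[min_lag:]))
    chew_rate_hz = fs / peak_lag
    return ac, chew_rate_hz

if __name__ == "__main__":
    fs = 500
    t = np.arange(0, 1.0, 1 / fs)
    # Synthetic "chewing" signal at ~1.5 Hz plus noise
    sig = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.randn(len(t))
    ac, rate = autocorr_features(sig, fs)
    print(f"estimated chewing rate: {rate:.2f} Hz, feature length: {len(ac)}")
```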

You will receive full access to the CHOMP dataset and the bwUniCluster computing cluster, including an introduction to the infrastructure and workflows used in the original study.

Bonus:

  • Consider personalization approaches such as Test-Time Adaptation (a sketch follows this list).
  • How could dentists or clinicians be meaningfully integrated into the loop?
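
For the personalization bonus, the sketch below outlines a TENT-style test-time adaptation loop (entropy minimization over normalization-layer parameters), assuming a PyTorch CNN classifier. The model, optimizer settings, and data handling are hypothetical and not part of CHOMP.

```python
# Minimal sketch of TENT-style test-time adaptation: adapt only the affine
# parameters of BatchNorm layers by minimizing prediction entropy on unlabeled
# chewing windows from a new user. Model and data are placeholders.
import torch
import torch.nn as nn

def configure_for_tent(model: nn.Module):
    """Freeze everything except BatchNorm affine params; use batch statistics."""
    model.train()                      # BN uses current-batch statistics
    params = []
    for module in model.modules():
        if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d)):
            module.track_running_stats = False
            module.running_mean = None
            module.running_var = None
            if module.affine:
                params += [module.weight, module.bias]
    for p in model.parameters():
        p.requires_grad_(False)
    for p in params:
        p.requires_grad_(True)
    return params

def entropy(logits: torch.Tensor) -> torch.Tensor:
    """Mean prediction entropy of a batch of logits."""
    probs = logits.softmax(dim=1)
    return -(probs * probs.log().clamp(min=-100)).sum(dim=1).mean()

def adapt_step(model, optimizer, batch):
    """One unsupervised adaptation step on a batch of scalogram windows."""
    logits = model(batch)
    loss = entropy(logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return logits.argmax(dim=1)        # left/right predictions for this batch

# Hypothetical usage: params = configure_for_tent(cnn)
#                     opt = torch.optim.Adam(params, lr=1e-4)
#                     preds = adapt_step(cnn, opt, scalogram_batch)
```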

Requirements

  • Ability to translate the insights from an exploratory study into a standalone system.
  • Strong skills in Python (for data analysis) and Flutter (for the in-app implementation in OpenWearables).
  • Optional: experience with Zephyr RTOS for adapting the firmware of OpenEarable 2.0, and with bwUniCluster.

Application Documents

  • A paragraph explaining your motivation.
  • Your study program (Bachelor/Master), current semester, and field of study.
  • A transcript of records (courses and grades).
  • Your programming experience.
  • Any areas of interest relevant to the topic.
  • Your CV (if available).