In our previous newsletter, we discussed how AI can help in the early detection of periodontal diseases like gingivitis and periodontitis, which involve inflammation and infections that can cause progressive damage if left untreated. We highlighted AI’s capabilities in detecting periodontal bone loss, diagnosing gingivitis, and assessing connective tissues and other periodontal conditions. This article dives deeper into the processes that an AI model uses to detect symptoms of periodontal problems.
One key feature of AI in early detection is accurate plaque segmentation. Early and precise detection of dental plaque is crucial for preventing periodontal diseases and dental caries. However, conventional dental examinations struggle to recognize plaque without staining it with disclosing dyes, because of the low contrast between plaque and healthy tooth surfaces. To address this issue, recent studies have proposed intelligent dental plaque segmentation methods. For instance, a study by Dr. Shuai Li et al. (2022) used intraoral cameras (IOC) to acquire oral endoscope images for precise pixel-level plaque segmentation.
Here’s an overview of the framework from Dr. Li’s study:
1. Feature Extraction Phase: The framework begins by generating super-pixels from the input oral endoscope images. It utilizes two feature extractors: a Heat Kernel Signature (HKS) extractor built on circular Local Binary Patterns (LBP), and a Convolutional Neural Network (CNN)-based extractor. The circle-LBP operator captures local texture features, from which the HKS extractor derives local-to-global feature relations. Simultaneously, a self-attention neural network generates re-weighted pixel-level feature maps from the input images. These feature maps are fused with global super-pixel-level information using specially designed center-pooling modules.
2. Feature Fusion and Classification Phase: The fused feature vectors are classified using a random forest classifier. The classification results are then mapped to segmentation masks to determine the category of each pixel in the original image, producing the plaque segmentation masks as outputs.
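The two phases above can be sketched in simplified form. This is not the authors' implementation: the HKS and CNN branches are omitted, a basic circular-LBP histogram per super-pixel stands in for the fused feature vector, and the labels are synthetic, purely for illustration.

```python
# Simplified two-phase sketch: super-pixel LBP features -> random forest -> pixel mask.
# NOT the paper's pipeline; the HKS and CNN-attention branches are stubbed out.
import numpy as np
from skimage.segmentation import slic
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def superpixel_lbp_features(image, n_segments=50):
    """Phase 1 (local branch only): SLIC super-pixels + circular LBP histograms."""
    segments = slic(image, n_segments=n_segments, start_label=0)
    gray = image.mean(axis=2)
    # 8-neighbour circular LBP with radius 1, a stand-in for the paper's circle-LBP
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    feats = []
    for sp in np.unique(segments):
        hist, _ = np.histogram(lbp[segments == sp], bins=10, range=(0, 10), density=True)
        feats.append(hist)
    return segments, np.array(feats)

def classify_and_map(segments, feats, labels):
    """Phase 2: random-forest classification mapped back to a pixel-level mask."""
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(feats, labels)
    pred = clf.predict(feats)   # per-super-pixel class (0 = tooth, 1 = plaque)
    return pred[segments]       # broadcast each super-pixel's class to its pixels

# Toy usage with a random image and random (synthetic) super-pixel labels
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
segments, feats = superpixel_lbp_features(img)
labels = rng.integers(0, 2, size=len(feats))
mask = classify_and_map(segments, feats, labels)
assert mask.shape == (64, 64)
```

The key design point survives even in this toy version: classification happens per super-pixel, and the segmentation mask is recovered by mapping each super-pixel's predicted class back onto its member pixels.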
To refine these multi-scale features, a CNN-based attention module was developed to better focus on regions of interest in the plaque, even in challenging cases. Extensive experiments and comprehensive evaluations show that this method outperforms state-of-the-art techniques, even with a small training dataset. User studies further verify that the method is more accurate than visual plaque assessment performed by experienced dentists.
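The non-local idea behind the self-attention mechanism can be illustrated with a minimal NumPy sketch: every spatial position re-weights features using information from all other positions, rather than only a local neighborhood. The projection weights here are random and the module is untrained, purely to show the mechanics.

```python
# Minimal self-attention over a feature map: each pixel attends to every
# other pixel (non-local), producing re-weighted pixel-level features.
# Random, untrained projections; illustration only.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feature_map, d=8, seed=0):
    """feature_map: (H, W, C) -> re-weighted (H, W, C)."""
    h, w, c = feature_map.shape
    x = feature_map.reshape(-1, c)            # flatten spatial dims: (N, C)
    rng = np.random.default_rng(seed)
    wq = rng.standard_normal((c, d))          # query projection
    wk = rng.standard_normal((c, d))          # key projection
    wv = rng.standard_normal((c, c))          # value projection
    q, k, v = x @ wq, x @ wk, x @ wv
    attn = softmax(q @ k.T / np.sqrt(d))      # (N, N): each pixel vs. all pixels
    return (attn @ v).reshape(h, w, c)

fmap = np.random.default_rng(1).random((8, 8, 16))
out = self_attention(fmap)
assert out.shape == (8, 8, 16)
```

Because the attention matrix couples every pixel with every other pixel, features from distant plaque regions can reinforce each other, which is what helps the full model delineate plaque edges that purely local convolutions miss.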
Let’s take a look at the comparison of segmentation results: (a) raw image, (b) disclosed image, (c) plaque mask of OCNet, (d) ablation study result (using HKS only), (e) ANet result, (f) FANet result (fusing HKS with dense-CNN), and (g) ground truth. ANet recognizes plaque boundaries well, whereas OCNet, which lacks the self-attention module, often mistakes plaque for tooth areas. That’s where the self-attention mechanism shines, helping the network focus on plaque edges more accurately. This demonstrates how effective the self-attention module is at capturing non-local features.
In clinical practice, raw images from various imaging equipment often exhibit significant differences in illumination intensity, shading, tone, and other factors. Consequently, this framework may encounter challenges with images showing unknown variations from different endoscopes. These difficulties can be mitigated through popular dataset augmentation techniques, such as geometric and color space transformations.
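The geometric and color-space augmentations mentioned above can be sketched with plain NumPy on an image with values in [0, 1]. Real pipelines would typically use a library such as albumentations or torchvision; the specific transform parameters here are illustrative assumptions, not values from the paper.

```python
# Toy dataset augmentation: geometric (flip, rotation) and color-space
# (brightness gain, per-channel tone shift) transforms on an (H, W, 3) image.
import numpy as np

def augment(image, rng):
    out = image
    if rng.random() < 0.5:                     # geometric: horizontal flip
        out = out[:, ::-1, :]
    out = np.rot90(out, k=rng.integers(0, 4))  # geometric: random 90-degree rotation
    gain = rng.uniform(0.8, 1.2)               # color: illumination-intensity jitter
    shift = rng.uniform(-0.05, 0.05, size=3)   # color: per-channel tone shift
    return np.clip(out * gain + shift, 0.0, 1.0)

rng = np.random.default_rng(42)
img = rng.random((32, 32, 3))
aug = augment(img, rng)
assert aug.shape[2] == 3 and aug.min() >= 0.0 and aug.max() <= 1.0
```

Applying such randomized transforms during training exposes the model to the kinds of illumination, shading, and tone variations it would otherwise only encounter when images arrive from an unfamiliar endoscope.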
Ref: https://ieeexplore.ieee.org/document/9677917