Journal Description
Journal of Imaging is an international, multi/interdisciplinary, peer-reviewed, open access journal of imaging techniques published online monthly by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), PubMed, PMC, dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: CiteScore Q2 (Computer Graphics and Computer-Aided Design)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 21.9 days after submission; acceptance to publication takes 3.9 days (median values for papers published in this journal in the first half of 2023).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 3.2 (2022); 5-Year Impact Factor: 3.2 (2022)
Latest Articles
Biased Deep Learning Methods in Detection of COVID-19 Using CT Images: A Challenge Mounted by Subject-Wise-Split ISFCT Dataset
J. Imaging 2023, 9(8), 159; https://doi.org/10.3390/jimaging9080159 - 08 Aug 2023
Abstract
Accurate detection of respiratory system damage, including COVID-19, is considered one of the crucial applications of deep learning (DL) models using CT images. However, the main shortcoming of published works has been unreliable reported accuracy and a lack of repeatability on new datasets, mainly due to slice-wise splits of the data, which create dependency between the training and test sets through data shared across them. We introduce a new dataset of CT images (ISFCT Dataset) with labels indicating the subject-wise split, to train and test our DL algorithms in an unbiased manner. We also use this dataset to validate the real performance of published works under a subject-wise data split. Another key feature is the provision of more specific labels (eight characteristic lung features) rather than being limited to COVID-19 and healthy labels. We show that the reported high accuracy of existing models on current slice-wise splits is not repeatable for subject-wise splits, and distribution differences between data splits are demonstrated using t-distributed stochastic neighbor embedding. We indicate that, under subject-wise data splitting, less complicated models show competitive results compared to the existing complicated models, demonstrating that complex models do not necessarily generate accurate and repeatable results.
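The subject-wise splitting the abstract advocates can be sketched in a few lines: partition subjects (not slices) between train and test, so no patient contributes slices to both sets. This is a minimal illustration, not the authors' code; the subject IDs are hypothetical.

```python
import random

def subject_wise_split(slice_subject_ids, test_fraction=0.2, seed=0):
    """Split slice indices so that no subject appears in both sets.

    slice_subject_ids: list where entry i is the subject ID of CT slice i.
    Returns (train_indices, test_indices).
    """
    subjects = sorted(set(slice_subject_ids))
    rng = random.Random(seed)
    rng.shuffle(subjects)
    n_test = max(1, int(len(subjects) * test_fraction))
    test_subjects = set(subjects[:n_test])
    train_idx = [i for i, s in enumerate(slice_subject_ids) if s not in test_subjects]
    test_idx = [i for i, s in enumerate(slice_subject_ids) if s in test_subjects]
    return train_idx, test_idx

# Slices from five hypothetical subjects; every slice of a held-out subject
# lands in the test set, so no information leaks across the split.
ids = ["p1", "p1", "p2", "p2", "p3", "p3", "p4", "p4", "p5", "p5"]
train, test = subject_wise_split(ids, test_fraction=0.2)
assert not {ids[i] for i in train} & {ids[i] for i in test}
```

A slice-wise split would instead shuffle the indices directly, leaving slices of the same patient on both sides of the split, which is exactly the leakage the paper measures.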
(This article belongs to the Section Medical Imaging)
Open Access Article
Enhancing Fingerprint Liveness Detection Accuracy Using Deep Learning: A Comprehensive Study and Novel Approach
J. Imaging 2023, 9(8), 158; https://doi.org/10.3390/jimaging9080158 - 07 Aug 2023
Abstract
Liveness detection for fingerprint impressions plays a role in the meaningful prevention of any unauthorized activity or phishing attempt. The accessibility of unique individual identification has increased the popularity of biometrics. Deep learning with computer vision has achieved remarkable results in image classification, detection, and many other tasks. The proposed methodology relies on an attention model and ResNet convolutions. Spatial attention (SA) and channel attention (CA) models were used sequentially to enhance feature learning. A three-fold sequential attention model is used along with five convolution learning layers. The method's performance has been tested across different pooling strategies, such as Max, Average, and Stochastic, on the LivDet-2021 dataset. Comparisons against different state-of-the-art variants of Convolutional Neural Networks, such as DenseNet121, VGG19, InceptionV3, and conventional ResNet50, have been carried out. In particular, tests have assessed ResNet34 and ResNet50 models on feature extraction by further enhancing the sequential attention model. A Multilayer Perceptron (MLP) classifier used alongside a fully connected layer returns the ultimate prediction of the entire stack. Finally, the proposed method is also evaluated on feature extraction with and without attention models for ResNet and considering different pooling strategies.
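The channel-then-spatial attention idea can be sketched without a deep learning framework: channel attention gates each feature channel with a pooled global descriptor, and spatial attention gates each location with pooled cross-channel statistics. This is a toy NumPy illustration under stated simplifications (the learned MLP and convolution of real CA/SA modules are replaced with fixed sigmoid gates), not the paper's architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    """Gate each channel of a (C, H, W) feature map by its global average.
    A learned MLP would normally transform the squeezed vector; it is
    omitted here purely for illustration."""
    squeeze = feat.mean(axis=(1, 2))          # (C,) global average pool
    weights = sigmoid(squeeze)                # (C,) channel gates in (0, 1)
    return feat * weights[:, None, None]

def spatial_attention(feat):
    """Gate each spatial location by pooled cross-channel statistics.
    A learned convolution would normally replace the plain sum of pools."""
    avg_pool = feat.mean(axis=0)              # (H, W)
    max_pool = feat.max(axis=0)               # (H, W)
    gate = sigmoid(avg_pool + max_pool)       # (H, W) spatial gates
    return feat * gate[None, :, :]

feat = np.random.default_rng(0).normal(size=(8, 16, 16))
out = spatial_attention(channel_attention(feat))  # sequential CA then SA
assert out.shape == feat.shape
```

Applied sequentially (and, in the paper, repeated three-fold), these gates rescale informative channels and regions before the convolutional layers extract features.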
(This article belongs to the Section Biometrics, Forensics, and Security)
Open Access Article
Target Design in SEM-Based Nano-CT and Its Influence on X-ray Imaging
J. Imaging 2023, 9(8), 157; https://doi.org/10.3390/jimaging9080157 - 04 Aug 2023
Abstract
Nano-computed tomography (nano-CT) based on scanning electron microscopy (SEM) is utilized for multimodal material characterization in one instrument. Since SEM-based CT uses geometrical magnification, X-ray targets can be adapted without any further changes to the system. This allows for designing targets with varying geometry and chemical composition to influence the X-ray focal spot, intensity and energy distribution with the aim to enhance the image quality. In this paper, three different target geometries with a varying volume are presented: bulk, foil and needle target. Based on the analyzed electron beam properties and X-ray beam path, the influence of the different target designs on X-ray imaging is investigated. With the obtained information, three targets for different applications are recommended. A platinum (Pt) bulk target tilted by 25° as an optimal combination of high photon flux and spatial resolution is used for fast CT scans and the investigation of high-absorbing or large sample volumes. To image low-absorbing materials, e.g., polymers or organic materials, a target material with a characteristic line energy right above the detector energy threshold is recommended. In the case of the observed system, we used a 30° tilted chromium (Cr) target, leading to a higher image contrast. To reach a maximum spatial resolution of about 100 nm, we recommend a tungsten (W) needle target with a tip diameter of about 100 nm.
Open Access Article
Classification of a 3D Film Pattern Image Using the Optimal Height of the Histogram for Quality Inspection
J. Imaging 2023, 9(8), 156; https://doi.org/10.3390/jimaging9080156 - 02 Aug 2023
Abstract
A 3D film pattern image was recently developed for marketing purposes, and an inspection method is needed to evaluate the quality of the pattern for mass production. However, due to its recent development, there are limited methods to inspect the 3D film pattern. The good pattern in the 3D film has a clear outline and high contrast, while the bad pattern has a blurry outline and low contrast. Due to these characteristics, it is challenging to examine the quality of the 3D film pattern. In this paper, we propose a simple algorithm that classifies the 3D film pattern as either good or bad by using the height of the histograms. Despite its simplicity, the proposed method can accurately and quickly inspect the 3D film pattern. In the experimental results, the proposed method achieved 99.09% classification accuracy with a computation time of 6.64 s, demonstrating better performance than existing algorithms.
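The histogram-height idea can be illustrated directly: a sharp, high-contrast pattern piles pixels into a few gray levels (a tall histogram peak), while a blurry, low-contrast pattern spreads them out (a flat histogram). A minimal sketch, with an illustrative threshold rather than the paper's tuned value:

```python
import numpy as np

def classify_pattern(gray, peak_threshold=0.08):
    """Label an 8-bit grayscale patch 'good' or 'bad' from its histogram height.

    peak_threshold is the fraction of all pixels the tallest histogram bin
    must reach; it is a hypothetical value chosen for this example.
    """
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    peak_height = hist.max() / gray.size      # normalized histogram height
    return "good" if peak_height >= peak_threshold else "bad"

# High-contrast patch: pixels concentrated at two levels -> tall peak.
sharp = np.concatenate([np.full(500, 20), np.full(500, 230)])
# Low-contrast patch: pixels spread uniformly -> flat histogram.
blurry = np.random.default_rng(0).integers(0, 256, size=1000)
assert classify_pattern(sharp) == "good"
assert classify_pattern(blurry) == "bad"
```

The appeal of such a criterion is that it needs only one histogram pass per image, consistent with the low computation time reported in the abstract.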
(This article belongs to the Section Image and Video Processing)
Open Access Article
The Cross-Sectional Area Assessment of Pelvic Muscles Using the MRI Manual Segmentation among Patients with Low Back Pain and Healthy Subjects
J. Imaging 2023, 9(8), 155; https://doi.org/10.3390/jimaging9080155 - 31 Jul 2023
Abstract
The pain pathomechanism of chronic low back pain (LBP) is complex and the available diagnostic methods are insufficient. Patients present morphological changes in the volume and cross-sectional area (CSA) of the lumbosacral region. The main objective of this study was to assess whether CSA measurements of pelvic muscles would indicate muscle atrophy between the asymptomatic and symptomatic sides in chronic LBP patients, as well as between the right and left sides in healthy volunteers. In addition, inter-rater reliability for CSA measurements was examined. The study involved 71 chronic LBP patients and 29 healthy volunteers. The CSAs of the gluteus maximus, medius, minimus and piriformis were measured using the MRI manual segmentation method. Muscle atrophy was confirmed in the gluteus maximus, gluteus minimus and piriformis muscles for over 50% of chronic LBP patients (p < 0.05). The gluteus medius showed atrophy in patients with left-side pain occurrence (p < 0.001). Muscle atrophy occurred on the symptomatic side for all inspected muscles, except the gluteus maximus in rater one's assessment. The reliability of CSA measurements between raters, calculated using the CCC and ICC, showed great inter-rater reproducibility for each muscle in both patients and healthy volunteers (p < 0.95). Therefore, there is the possibility of using CSA assessment in the diagnosis of patients with symptoms of chronic LBP.
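The CCC mentioned above (Lin's concordance correlation coefficient) has a closed form that is easy to compute: it combines the Pearson correlation with a penalty for systematic offsets between raters. A minimal sketch with hypothetical CSA values, not the study's data:

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two raters' values.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2),
    so it penalizes both poor correlation and a systematic bias between raters.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical CSA measurements (cm^2) of one muscle by two raters.
rater1 = np.array([31.2, 28.7, 35.1, 30.4, 33.8])
rater2 = np.array([31.0, 29.1, 34.6, 30.9, 33.5])
ccc = concordance_ccc(rater1, rater2)
assert 0.9 < ccc <= 1.0   # near-perfect agreement between the two raters
```

Unlike the plain Pearson correlation, the CCC drops below 1 when one rater consistently measures larger areas than the other, which is the relevant failure mode for manual segmentation.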
(This article belongs to the Section Medical Imaging)
Open Access Article
Open-Set Recognition of Wood Species Based on Deep Learning Feature Extraction Using Leaves
J. Imaging 2023, 9(8), 154; https://doi.org/10.3390/jimaging9080154 - 30 Jul 2023
Abstract
An open-set recognition scheme for tree leaves based on deep learning feature extraction is presented in this study. Deep learning algorithms are used to extract leaf features for different wood species, and the leaf set is divided into two datasets: the leaves of known wood species and the leaves of unknown species. The deep learning network (CNN) is trained on the leaves of selected known wood species, and the features of the remaining known wood species and all unknown wood species are extracted using the trained CNN. Then, single-class classification is performed using the weighted SVDD algorithm to recognize the leaves of known and unknown wood species. The features of leaves recognized as known wood species are fed back to the trained CNN to recognize the leaves of known wood species. The recognition results of the single-class classifier for known and unknown wood species are combined with the recognition results of the multi-class CNN to finally complete the open-set recognition of wood species. We tested the proposed method on the publicly available Swedish Leaf Dataset, which includes 15 wood species (5 used as known and 10 as unknown). With F1 scores of 0.7797 and 0.8644, mixed recognition rates of 95.15% and 93.14%, and Kappa coefficients of 0.7674 and 0.8644 under two different data distributions, the proposed method outperformed state-of-the-art open-set recognition algorithms in all three aspects. The more wood species are known, the better the recognition. This approach can extract effective features from tree leaf images for open-set recognition and achieve wood species recognition without compromising tree material.
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
Open Access Article
Nighttime Image Dehazing by Render
J. Imaging 2023, 9(8), 153; https://doi.org/10.3390/jimaging9080153 - 28 Jul 2023
Abstract
Nighttime image dehazing presents unique challenges due to unevenly distributed haze caused by the color change of artificial light sources. This results in multiple interferences, including atmospheric light, glow, and direct light, which make the complex scattered haze difficult to accurately distinguish and remove. Additionally, obtaining pairs of high-definition data for nighttime fog removal is itself difficult, making nighttime image dehazing a particularly hard problem to solve. To address these challenges, we introduced the haze scattering formula to express the haze more accurately in three-dimensional space. We also proposed a novel data synthesis method using the latest CG textures and Lumen lighting technology to build scenes in which various hazes can be seen clearly through ray tracing. We converted the complex 3D scattering relationship into a 2D image dataset to better learn the mapping from 3D haze to 2D haze. Additionally, we improved an existing neural network and established a night haze intensity evaluation label based on the idea of the optical PSF. This allowed us to adjust the haze intensity of the rendered dataset according to the intensity of real haze images and improve dehazing accuracy. Our experiments showed that our data construction and network improvements achieved better visual effects, objective indicators, and computation speed.
(This article belongs to the Section Image and Video Processing)
Open Access Article
Quantification of Gas Flaring from Satellite Imagery: A Comparison of Two Methods for SLSTR and BIROS Imagery
J. Imaging 2023, 9(8), 152; https://doi.org/10.3390/jimaging9080152 - 27 Jul 2023
Abstract
Gas flaring is an environmental problem of local, regional and global concern. Gas flares emit pollutants and greenhouse gases, yet knowledge of the source strength is limited due to disparate reporting approaches across geographies, wherever reporting exists at all. Remote sensing has bridged the gap, but uncertainties remain. Numerous sensors provide measurements over flaring-active regions in wavelengths that are suitable for the observation of gas flares and the retrieval of flaring activity; however, their use for operational monitoring has been limited. Besides the several potential sensors, there are also different approaches to conducting the retrievals. In this paper, we compare two retrieval approaches over an offshore flaring area during an extended period of time. Our results show that retrieved activities are consistent between methods, although discrepancies may arise for individual flares at short time scales; these can be traced back to the variable nature of flaring. The presented results are helpful for estimating flaring activity from different sources and will be useful for a future integration of diverse sensors and methodologies into a single monitoring scheme.
(This article belongs to the Special Issue Infrared-Image Processing for Climate Change Monitoring from Space: 2nd Edition)
Open Access Article
Fast Compressed Sensing of 3D Radial T1 Mapping with Different Sparse and Low-Rank Models
J. Imaging 2023, 9(8), 151; https://doi.org/10.3390/jimaging9080151 - 26 Jul 2023
Abstract
Knowledge of the relative performance of the well-known sparse and low-rank compressed sensing models with 3D radial quantitative magnetic resonance imaging acquisitions is limited. We use 3D radial T1 relaxation time mapping data to compare the total variation, low-rank, and Huber penalty function approaches to regularization, providing insights into the relative performance of these image reconstruction models. Simulation and ex vivo specimen data were used to determine the best compressed sensing model, as measured by the normalized root mean squared error and structural similarity index. The large-scale compressed sensing models were solved with a GPU implementation of a preconditioned primal-dual proximal splitting algorithm to provide high-quality T1 maps within a feasible computation time. The model combining spatial total variation and locally low-rank regularization yielded the best performance, followed closely by the model combining spatial and contrast-dimension total variation. Computation times ranged from 2 to 113 min, with the low-rank approaches taking the most time. The differences between the compressed sensing models are not necessarily large, but the overall performance is heavily dependent on the imaged object.
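The regularized reconstruction models compared above all minimize a data-fit term plus a penalty. A toy 1D version conveys the idea: below, a smoothed (Huber) total-variation model is solved by plain gradient descent. This is a didactic stand-in for the preconditioned primal-dual proximal splitting solver used on the full 3D problem; all parameters are illustrative.

```python
import numpy as np

def huber_tv_denoise(y, lam=0.3, delta=0.1, step=0.05, iters=500):
    """Denoise a 1D signal by gradient descent on a smoothed TV model:
        minimize 0.5*||x - y||^2 + lam * sum_i huber(x[i+1] - x[i]),
    where huber is quadratic below delta and linear above it.
    """
    x = y.astype(float).copy()
    for _ in range(iters):
        d = np.diff(x)
        # derivative of the Huber penalty w.r.t. each finite difference
        g = np.where(np.abs(d) <= delta, d / delta, np.sign(d))
        grad = x - y
        grad[:-1] -= lam * g   # d(x[i+1]-x[i])/dx[i]   = -1
        grad[1:]  += lam * g   # d(x[i+1]-x[i])/dx[i+1] = +1
        x -= step * grad
    return x

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])  # piecewise-constant truth
noisy = clean + 0.1 * rng.normal(size=100)
denoised = huber_tv_denoise(noisy)
assert np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

TV-type penalties favor piecewise-constant solutions, which is why they suppress noise while preserving the sharp edges that matter in parameter maps.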
(This article belongs to the Section Medical Imaging)
Open Access Article
Quantitative CT Metrics Associated with Variability in the Diffusion Capacity of the Lung of Post-COVID-19 Patients with Minimal Residual Lung Lesions
J. Imaging 2023, 9(8), 150; https://doi.org/10.3390/jimaging9080150 - 26 Jul 2023
Abstract
(1) Background: A reduction in the diffusion capacity of the lung for carbon monoxide is a prevalent longer-term consequence of COVID-19 infection. In patients who have zero or minimal residual radiological abnormalities in the lungs, it has been debated whether the cause was mainly due to a reduced alveolar volume or involved diffuse interstitial or vascular abnormalities. (2) Methods: We performed a cross-sectional study of 45 patients with either zero or minimal residual lesions in the lungs (total volume < 7 cc) at two months to one year post COVID-19 infection. There was considerable variability in the diffusion capacity of the lung for carbon monoxide, with 27% of the patients at less than 80% of the predicted reference. We investigated a set of independent variables that may affect the diffusion capacity of the lung, including demographic, pulmonary physiology and CT (computed tomography)-derived variables of vascular volume, parenchymal density and residual lesion volume. (3) Results: The leading three variables that contributed to the variability in the diffusion capacity of the lung for carbon monoxide were the alveolar volume, determined via pulmonary function tests, the blood vessel volume fraction, determined via CT, and the parenchymal radiodensity, also determined via CT. These factors explained 49% of the variance of the diffusion capacity, with p values of 0.031, 0.005 and 0.018, respectively, after adjusting for confounders. A multiple-regression model combining these three variables fit the measured values of the diffusion capacity, with R = 0.70 and p < 0.001. (4) Conclusions: The results are consistent with the notion that in some post-COVID-19 patients, after their pulmonary lesions resolve, diffuse changes in the vascular and parenchymal structures, in addition to a low alveolar volume, could be contributors to a lingering low diffusion capacity.
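The three-predictor regression model described in the results section can be sketched with ordinary least squares. The data below are synthetic stand-ins for the study's predictors (alveolar volume, vessel volume fraction, parenchymal radiodensity), used only to show how the coefficients and the multiple correlation coefficient R are obtained:

```python
import numpy as np

def fit_multiple_regression(X, y):
    """Ordinary least squares with an intercept; returns (coefficients, R).

    R is the multiple correlation coefficient, sqrt(1 - SS_res / SS_tot),
    i.e. the correlation between y and its fitted values.
    """
    A = np.column_stack([np.ones(len(X)), X])      # add intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    y_hat = A @ beta
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r = np.sqrt(1 - ss_res / ss_tot)
    return beta, r

rng = np.random.default_rng(1)
X = rng.normal(size=(45, 3))                       # 45 patients, 3 predictors
y = 2.0 + 0.8 * X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2] + 0.2 * rng.normal(size=45)
beta, r = fit_multiple_regression(X, y)
assert r > 0.9                                     # strong fit on synthetic data
```

The study's reported R = 0.70 corresponds to R² ≈ 0.49, matching the "explained 49% of the variance" statement in the abstract.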
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
Open Access Review
Prospective of Pancreatic Cancer Diagnosis Using Cardiac Sensing
J. Imaging 2023, 9(8), 149; https://doi.org/10.3390/jimaging9080149 - 25 Jul 2023
Abstract
Pancreatic carcinoma (Ca Pancreas) is the third leading cause of cancer-related deaths in the world. The malignancies of the pancreas can be diagnosed with the help of various imaging modalities. An endoscopic ultrasound with a tissue biopsy is so far considered to be the gold standard in terms of the detection of Ca Pancreas, especially for lesions <2 mm. However, other methods, like computed tomography (CT), ultrasound, and magnetic resonance imaging (MRI), are also conventionally used. Moreover, newer techniques, like proteomics, radiomics, metabolomics, and artificial intelligence (AI), are slowly being introduced for diagnosing pancreatic cancer. Regardless, it is still a challenge to diagnose pancreatic carcinoma non-invasively at an early stage due to its delayed presentation. Similarly, this also makes it difficult to demonstrate an association between Ca Pancreas and other vital organs of the body, such as the heart. A number of studies have proven a correlation between the heart and pancreatic cancer. The tumor of the pancreas affects the heart at the physiological, as well as the molecular, level. An overexpression of the SMAD4 gene; a disruption in biomolecules, such as IGF, MAPK, and ApoE; and increased CA19-9 markers are a few of the many factors that are noted to affect cardiovascular systems with pancreatic malignancies. A comprehensive review of this correlation will aid researchers in conducting studies to help establish a definite relation between the two organs and discover ways to use it for the early detection of Ca Pancreas.
(This article belongs to the Special Issue Deep Learning and Data Analytics Techniques for Processing of Biomedical Images)
Open Access Article
Automatic Localization of Five Relevant Dermoscopic Structures Based on YOLOv8 for Diagnosis Improvement
J. Imaging 2023, 9(7), 148; https://doi.org/10.3390/jimaging9070148 - 21 Jul 2023
Abstract
The automatic detection of dermoscopic features is a task that provides specialists with an image indicating the different patterns present in it. This information can help them fully understand the image and improve their decisions. However, the automatic analysis of dermoscopic features can be difficult because of their small size. Some work has been done in this area, but the results can be improved. The objective of this work is to improve the precision of the automatic detection of dermoscopic features. To achieve this goal, an algorithm named yolo-dermoscopic-features is proposed. The algorithm consists of four steps: (i) generate annotations in the JSON format for supervised learning of the model; (ii) propose a model based on YOLOv8, the latest version of YOLO; (iii) pre-train the model for the segmentation of skin lesions; (iv) train five models for the five dermoscopic features. The experiments are performed on the ISIC 2018 task2 dataset. After training, the model is evaluated and compared to the performance of two other methods. The proposed method achieves average performances of 0.9758, 0.954, 0.9724, 0.938, and 0.9692, respectively, for the Dice similarity coefficient, Jaccard similarity coefficient, precision, recall, and average precision. Furthermore, compared to other methods, the proposed method reaches a better Jaccard similarity coefficient of 0.954 and thus presents the best similarity with the annotations made by specialists. This method can also be used to annotate images automatically and can therefore be a solution to the lack of feature annotations in datasets.
(This article belongs to the Special Issue Imaging Informatics: Computer-Aided Diagnosis)
Open Access Editorial
Deep Learning and Vision Transformer for Medical Image Analysis
J. Imaging 2023, 9(7), 147; https://doi.org/10.3390/jimaging9070147 - 21 Jul 2023
Abstract
Artificial intelligence (AI) refers to the field of computer science theory and technology [...]
(This article belongs to the Section Medical Imaging)
Open Access Article
Algebraic Multi-Layer Network: Key Concepts
J. Imaging 2023, 9(7), 146; https://doi.org/10.3390/jimaging9070146 - 18 Jul 2023
Abstract
The paper refers to interdisciplinary research in the areas of hierarchical cluster analysis of big data and the ordering of primary data to detect objects in a color or grayscale image. To perform this on a limited domain of multidimensional data, an NP-hard problem of calculating close-to-optimal piecewise constant data approximations with the smallest possible standard deviations or total squared errors (approximation errors) is solved. The solution is achieved by revisiting, modernizing, and combining classical Ward's clustering, split/merge, and K-means methods. The concepts of objects, images, and their elements (superpixels) are formalized as structures that are distinguishable from each other. The results of structuring and ordering the image data are presented to the user in two ways, as tabulated approximations of the image showing the available object hierarchies. For both theoretical reasoning and practical implementation, reversible calculations are performed as easily with pixel sets as with individual pixels, in terms of Sleator–Tarjan dynamic trees and cyclic graphs forming an Algebraic Multi-Layer Network (AMN). The detailing of the latter significantly distinguishes this paper from our prior works. The establishment of the invariance of detected objects with respect to changes in the image's context and its transformation into grayscale is also new.
(This article belongs to the Special Issue Image Segmentation Techniques: Current Status and Future Directions)
Open Access Article
Semi-Automatic GUI Platform to Characterize Brain Development in Preterm Children Using Ultrasound Images
J. Imaging 2023, 9(7), 145; https://doi.org/10.3390/jimaging9070145 - 18 Jul 2023
Abstract
The third trimester of pregnancy is the most critical period for human brain development, during which significant changes occur in the morphology of the brain. The development of sulci and gyri allows for a considerable increase in the brain surface. In preterm newborns, these changes occur in an extrauterine environment that may disrupt the normal brain maturation process. We hypothesize that a normalized atlas of brain maturation built from cerebral ultrasound images from birth to term-equivalent age will help clinicians assess these changes. This work proposes a semi-automatic Graphical User Interface (GUI) platform for segmenting the main cerebral sulci from ultrasound images in the clinical setting. The platform was built from a neonatal cerebral ultrasound image database provided by two clinical researchers from the Hospital Sant Joan de Déu in Barcelona, Spain. The primary objective is to provide clinicians with a user-friendly platform for running and visualizing an atlas of images validated by medical experts. The GUI offers different segmentation approaches and pre-processing tools and is designed for running, visualizing images, and segmenting the principal sulci. The presented results are discussed in detail, providing an exhaustive analysis of the proposed approach's effectiveness.
(This article belongs to the Special Issue Imaging Informatics: Computer-Aided Diagnosis)
Open Access Article
Varroa Destructor Classification Using Legendre–Fourier Moments with Different Color Spaces
J. Imaging 2023, 9(7), 144; https://doi.org/10.3390/jimaging9070144 - 14 Jul 2023
Abstract
Bees play a critical role in pollination and food production, so their preservation is essential, which particularly highlights the importance of detecting bee diseases early. The Varroa destructor mite is the primary factor contributing to increased viral infections that can lead to hive mortality. This study presents an innovative method for identifying Varroa destructor mites on honey bees using multichannel Legendre–Fourier moments. The descriptors derived from this approach possess distinctive characteristics, such as rotation and scale invariance and noise resistance, allowing the representation of digital images with minimal descriptors. This characteristic is advantageous when analyzing images of living organisms that are not in a static posture. The proposal evaluates the algorithm's efficiency using different color models, and to enhance its capacity, a subdivision of the VarroaDataset is used. This enhancement allows the algorithm to process additional information about the color and shape of the bee's legs, wings, eyes, and mouth. To demonstrate the advantages of our approach, we compare it with deep learning methods, including semantic segmentation techniques such as DeepLabV3 and object detection techniques such as YOLOv5. The results suggest that our proposal offers a promising means for the early detection of the Varroa destructor mite, which could be an essential pillar in the preservation of bees and, therefore, in food production.
Full article
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
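The multichannel Legendre–Fourier moments used in the paper are orthogonal-moment descriptors; as a rough illustration of the idea, the sketch below computes plain 2D Legendre moments per color channel and stacks them into one compact feature vector. This is a simplified real-valued stand-in for the paper's descriptors, not the authors' implementation, and all function names here are hypothetical.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def legendre_moment(channel, p, q):
    """2D Legendre moment of order (p, q) for one image channel.

    Coordinates are mapped to [-1, 1]; lambda_pq approximates
    ((2p+1)(2q+1)/4) * double-integral of P_p(x) P_q(y) f(x, y).
    """
    h, w = channel.shape
    x = np.linspace(-1.0, 1.0, w)
    y = np.linspace(-1.0, 1.0, h)
    Pp = Legendre.basis(p)(x)          # P_p sampled along columns
    Pq = Legendre.basis(q)(y)          # P_q sampled along rows
    norm = (2 * p + 1) * (2 * q + 1) / 4.0
    # Riemann-sum approximation of the double integral
    return norm * (Pq @ channel @ Pp) * (2.0 / w) * (2.0 / h)

def moment_descriptor(image, max_order=3):
    """Stack low-order moments of every color channel into one vector."""
    feats = [legendre_moment(image[..., c], p, q)
             for c in range(image.shape[-1])
             for p in range(max_order + 1)
             for q in range(max_order + 1)]
    return np.array(feats)

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))          # stand-in for an RGB bee patch
desc = moment_descriptor(img)
print(desc.shape)                      # one compact vector per image
```

Because the basis is orthogonal, a handful of low-order moments already summarizes coarse shape and color per channel, which is what makes such descriptors compact.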
Open Access Article
The Dangers of Analyzing Thermographic Radiometric Data as Images
J. Imaging 2023, 9(7), 143; https://doi.org/10.3390/jimaging9070143 - 12 Jul 2023
Abstract
Thermography is probably the most widely used method of measuring surface temperature; it analyzes radiation in the infrared part of the spectrum, and its accuracy depends on factors such as emissivity and reflected radiation. Contrary to the popular belief that thermographic images represent temperature maps, they are actually thermal radiation converted into an image, and if not properly calibrated, they show incorrect temperatures. The objective of this study is to analyze commonly used image processing techniques and their impact on radiometric data in thermography; in particular, the extent to which a thermograph can be considered an image and how image processing affects radiometric data. Three analyses are presented in the paper. The first examines how image processing techniques, such as contrast and brightness adjustments, affect physical reality and its representation in thermographic imaging. The second analysis examines the effects of JPEG compression on radiometric data and how the degradation of the data varies with the compression parameters. The third analysis aims to determine the optimal resolution increase required to minimize the effects of compression on the radiometric data. The output from an IR camera in CSV format was used for these analyses and compared to images from the manufacturer's software. An IR camera providing data in JPEG format was used, and the data included thermographic images, visible images, and a matrix of thermal radiation data. The study was verified with a reference blackbody radiation source set at 60 °C. The results highlight the dangers of interpreting thermographic images as temperature maps without considering the underlying radiometric data, which can be affected by image processing and compression. The paper concludes with the importance of accurate and precise thermographic analysis for reliable temperature measurement.
Full article
(This article belongs to the Special Issue Data Processing with Artificial Intelligence in Thermal Imagery)
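The core warning of the paper can be illustrated with a minimal sketch: once a radiometric temperature matrix is flattened into an 8-bit image, only a quantized approximation of the temperatures can be recovered, and any subsequent contrast or brightness adjustment invalidates even that. The linear mapping below is an assumption for illustration only, not any camera's actual calibration.

```python
import numpy as np

def to_image(temps, t_min, t_max):
    """Map a radiometric temperature matrix to an 8-bit grayscale image."""
    scaled = (temps - t_min) / (t_max - t_min)
    return np.round(scaled * 255).astype(np.uint8)

def from_image(img, t_min, t_max):
    """Invert the mapping -- all that a plain image can give back."""
    return img.astype(np.float64) / 255 * (t_max - t_min) + t_min

rng = np.random.default_rng(1)
temps = rng.uniform(20.0, 120.0, size=(240, 320))  # synthetic scene, deg C

t_min, t_max = temps.min(), temps.max()
recovered = from_image(to_image(temps, t_min, t_max), t_min, t_max)
step = (t_max - t_min) / 255                       # ~0.39 deg C per gray level
print(np.abs(recovered - temps).max(), step / 2)
```

Even before lossy compression, the round trip already caps precision at half a gray-level step; a contrast stretch or JPEG artifact on top of this makes `from_image` return temperatures that were never measured.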
Open Access Article
Augmented Reality in Maintenance—History and Perspectives
J. Imaging 2023, 9(7), 142; https://doi.org/10.3390/jimaging9070142 - 10 Jul 2023
Abstract
Augmented Reality (AR) is a technology that allows virtual elements to be superimposed over images of real contexts, whether these are text elements, graphics, or other types of objects. Smart AR glasses are increasingly optimized, and modern ones have features such as the Global Positioning System (GPS), a microphone, and gesture recognition, among others. These devices allow users to keep their hands free to perform tasks while receiving instructions in real time through the glasses. This allows maintenance professionals to carry out interventions more efficiently and in a shorter time than would be necessary without the support of this technology. In the present work, a timeline of key achievements is established, including important findings in object recognition, real-time operation, and the integration of technologies for shop floor use. Perspectives on future research and related recommendations are proposed as well.
Full article
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
Open Access Article
An Effective Hyperspectral Image Classification Network Based on Multi-Head Self-Attention and Spectral-Coordinate Attention
J. Imaging 2023, 9(7), 141; https://doi.org/10.3390/jimaging9070141 - 10 Jul 2023
Abstract
In hyperspectral image (HSI) classification, convolutional neural networks (CNNs) have been widely employed and have achieved promising performance. However, CNN-based methods face difficulties in achieving both accurate and efficient HSI classification due to their limited receptive fields and deep architectures. To alleviate these limitations, we propose an effective HSI classification network based on multi-head self-attention and spectral-coordinate attention (MSSCA). Specifically, we first reduce the redundant spectral information of the HSI by using a point-wise convolution network (PCN) to enhance the discriminability and robustness of the network. Then, we capture long-range dependencies among HSI pixels by introducing a modified multi-head self-attention (M-MHSA) model, which applies a down-sampling operation to alleviate the computational burden caused by the dot-product operation of MHSA. Furthermore, to enhance the performance of the proposed method, we introduce a lightweight spectral-coordinate attention fusion module. This module combines spectral attention (SA) and coordinate attention (CA) to enable the network to better weight the importance of useful bands and more accurately localize target objects. Importantly, our method achieves these improvements without increasing the complexity or computational cost of the network. To demonstrate the effectiveness of the proposed method, experiments were conducted on three classic HSI datasets: Indian Pines (IP), Pavia University (PU), and Salinas. The results show that our proposed method is highly competitive in terms of both efficiency and accuracy compared to existing methods.
Full article
(This article belongs to the Special Issue Convolutional Neural Networks Application in Remote Sensing, Volume II)
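The down-sampling idea behind M-MHSA can be sketched minimally: queries keep all tokens while keys and values are sub-sampled, shrinking the attention matrix. The strided sub-sampling and identity projections below are assumptions for brevity; the paper's actual projections and down-sampling operator may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def downsampled_mhsa(x, num_heads=4, stride=4):
    """Multi-head self-attention with strided key/value down-sampling.

    Queries keep all N tokens; keys/values keep every `stride`-th token,
    shrinking the attention matrix from N x N to N x (N // stride).
    """
    n, d = x.shape
    dh = d // num_heads
    kv = x[::stride]                               # down-sampled tokens
    out = np.empty_like(x)
    for h in range(num_heads):
        sl = slice(h * dh, (h + 1) * dh)
        q, k, v = x[:, sl], kv[:, sl], kv[:, sl]   # identity projections
        attn = softmax(q @ k.T / np.sqrt(dh))      # N x (N // stride)
        out[:, sl] = attn @ v
    return out

rng = np.random.default_rng(2)
tokens = rng.standard_normal((256, 64))  # e.g. flattened HSI patch pixels
y = downsampled_mhsa(tokens)
print(y.shape)                           # full-length output, cheaper attention
```

With stride 4 the dot-product cost drops by roughly the same factor, which is the point of the modification described in the abstract.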
Open Access Article
Conv-ViT: A Convolution and Vision Transformer-Based Hybrid Feature Extraction Method for Retinal Disease Detection
J. Imaging 2023, 9(7), 140; https://doi.org/10.3390/jimaging9070140 - 10 Jul 2023
Abstract
Current advances in retinal disease detection have mainly focused on distinct feature extraction using either a convolutional neural network (CNN) or a transformer-based end-to-end deep learning (DL) model. Individual end-to-end DL models are capable of processing only texture- or shape-based information for performing detection tasks. However, extracting only texture- or shape-based features does not provide the robustness needed to classify different types of retinal diseases. Therefore, concerning these two feature types, this paper develops a fusion model called 'Conv-ViT' to detect retinal diseases from foveal cut optical coherence tomography (OCT) images. Transfer learning-based CNN models, such as Inception-V3 and ResNet-50, are utilized to process texture information by calculating the correlation of nearby pixels. Additionally, a vision transformer model is fused to process shape-based features by determining the correlation between long-distance pixels. The hybridization of these three models results in shape-based texture feature learning during the classification of retinal diseases into four classes: choroidal neovascularization (CNV), diabetic macular edema (DME), DRUSEN, and NORMAL. The weighted average classification accuracy, precision, recall, and F1 score of the model are all found to be approximately 94%. The results indicate that fusing texture and shape features helped the proposed Conv-ViT model outperform state-of-the-art retinal disease classification models.
Full article
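As a rough sketch of the fusion idea, texture features from the two CNN backbones and shape features from the ViT can be concatenated before a classification head. The feature dimensions, the untrained linear head, and the function name below are all hypothetical; the paper's actual fusion and classifier may differ.

```python
import numpy as np

def fuse_and_classify(incv3_feat, resnet_feat, vit_feat, w, b):
    """Concatenate backbone features and apply a linear 4-class head.

    incv3_feat / resnet_feat carry texture cues, vit_feat shape cues;
    w, b stand in for trained head parameters.
    """
    fused = np.concatenate([incv3_feat, resnet_feat, vit_feat], axis=-1)
    logits = fused @ w + b
    return logits.argmax(axis=-1)       # 0=CNV, 1=DME, 2=DRUSEN, 3=NORMAL

rng = np.random.default_rng(3)
batch = 8
f1 = rng.standard_normal((batch, 2048))  # e.g. Inception-V3 pooled features
f2 = rng.standard_normal((batch, 2048))  # e.g. ResNet-50 pooled features
f3 = rng.standard_normal((batch, 768))   # e.g. ViT [CLS] features
w = rng.standard_normal((2048 + 2048 + 768, 4)) * 0.01
b = np.zeros(4)
preds = fuse_and_classify(f1, f2, f3, w, b)
print(preds.shape)
```

Concatenation is the simplest fusion choice: it lets the head weight texture and shape evidence jointly, which matches the abstract's claim that combining both feature types improves robustness.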
News
31 July 2023
MDPI’s 2022 Best PhD Thesis Awards in Computer Science and Mathematics—Winners Announced
31 July 2023
MDPI’s 2022 Young Investigator Awards in Computer Science and Mathematics—Winners Announced
Topics
Topic in
Applied Sciences, Sensors, J. Imaging, MAKE
Applications in Image Analysis and Pattern Recognition
Topic Editors: Bin Fan, Wenqi Ren; Deadline: 31 August 2023
Topic in
Applied Sciences, Electronics, MAKE, J. Imaging, Sensors
Applied Computer Vision and Pattern Recognition: 2nd Volume
Topic Editors: Antonio Fernández-Caballero, Byung-Gyu Kim; Deadline: 30 September 2023
Topic in
Applied Sciences, Electronics, J. Imaging, Sensors, Signals
Visual Object Tracking: Challenges and Applications
Topic Editors: Shunli Zhang, Xin Yu, Kaihua Zhang, Yang Yang; Deadline: 31 October 2023
Topic in
Applied Sciences, Biosensors, J. Imaging, Sensors, Signals
Bio-Inspired Systems and Signal Processing
Topic Editors: Donald Y.C. Lie, Chung-Chih Hung, Jian Xu; Deadline: 30 November 2023
Special Issues
Special Issue in
J. Imaging
Advances in PET/CT Imaging for Diagnosis in Sarcoidosis
Guest Editor: Marco Tana; Deadline: 1 September 2023
Special Issue in
J. Imaging
Explainable AI for Image-Aided Diagnosis
Guest Editors: António Cunha, Paulo A.C. Salgado, Teresa Paula Perdicoúlis; Deadline: 30 September 2023
Special Issue in
J. Imaging
Brain Image Computation for Diagnosis and Treatment
Guest Editor: Jussi Tohka; Deadline: 15 October 2023
Special Issue in
J. Imaging
Modelling of Human Visual System in Image Processing
Guest Editors: Alexey Mashtakov, Edoardo Provenzi; Deadline: 22 October 2023