Search Results (1,267)

Search Parameters:
Journal = J. Imaging

Article
Biased Deep Learning Methods in Detection of COVID-19 Using CT Images: A Challenge Mounted by Subject-Wise-Split ISFCT Dataset
J. Imaging 2023, 9(8), 159; https://doi.org/10.3390/jimaging9080159 - 08 Aug 2023
Abstract
Accurate detection of respiratory system damage, including COVID-19, is considered one of the crucial applications of deep learning (DL) models using CT images. However, the main shortcoming of the published works has been unreliable reported accuracy and the lack of repeatability with new datasets, mainly due to slice-wise splits of the data, creating dependency between training and test sets through data shared across the sets. We introduce a new dataset of CT images (ISFCT Dataset) with labels indicating the subject-wise split to train and test our DL algorithms in an unbiased manner. We also use this dataset to validate the real performance of the published works in a subject-wise data split. Another key feature is the provision of more specific labels (eight characteristic lung features) rather than being limited to COVID-19 and healthy labels. We show that the reported high accuracy of the existing models on current slice-wise splits is not repeatable for subject-wise splits, and distribution differences between data splits are demonstrated using t-distributed stochastic neighbor embedding. We indicate that, under subject-wise data splitting, less complicated models show competitive results compared to the existing complicated models, demonstrating that complex models do not necessarily generate accurate and repeatable results.
(This article belongs to the Section Medical Imaging)
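
To make the splitting issue concrete, here is a minimal sketch of a subject-wise split using scikit-learn's GroupShuffleSplit, which guarantees that no patient contributes slices to both partitions; the array names and sizes are illustrative placeholders, not details of the ISFCT pipeline.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# slices: one feature vector per CT slice; subject_ids: the patient each slice came from
slices = np.random.rand(1000, 64)              # placeholder slice features
subject_ids = np.random.randint(0, 50, 1000)   # 50 hypothetical patients

# Subject-wise split: all slices from a patient land in the same partition
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(slices, groups=subject_ids))

# No patient appears in both sets, unlike a naive slice-wise split
assert set(subject_ids[train_idx]).isdisjoint(subject_ids[test_idx])
```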

Article
Enhancing Fingerprint Liveness Detection Accuracy Using Deep Learning: A Comprehensive Study and Novel Approach
J. Imaging 2023, 9(8), 158; https://doi.org/10.3390/jimaging9080158 - 07 Aug 2023
Abstract
Liveness detection for fingerprint impressions plays a key role in the prevention of unauthorized activity and phishing attempts. The accessibility of unique individual identification has increased the popularity of biometrics. Deep learning with computer vision has achieved remarkable results in image classification, detection, and many other tasks. The proposed methodology relies on an attention model and ResNet convolutions. Spatial attention (SA) and channel attention (CA) models were used sequentially to enhance feature learning. A three-fold sequential attention model is used along with five convolution learning layers. The method's performance has been tested across different pooling strategies, such as Max, Average, and Stochastic, over the LivDet-2021 dataset. Comparisons against different state-of-the-art variants of Convolutional Neural Networks, such as DenseNet121, VGG19, InceptionV3, and conventional ResNet50, have been carried out. In particular, tests have been aimed at assessing ResNet34 and ResNet50 models on feature extraction by further enhancing the sequential attention model. A Multilayer Perceptron (MLP) classifier used alongside a fully connected layer returns the ultimate prediction of the entire stack. Finally, the proposed method is also evaluated on feature extraction with and without attention models for ResNet and considering different pooling strategies.
(This article belongs to the Section Biometrics, Forensics, and Security)
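
As a rough illustration of the sequential attention idea, below is a minimal CBAM-style spatial-then-channel attention block in PyTorch; the layer sizes, reduction ratio, and 7x7 spatial kernel are conventional defaults, not details confirmed by the paper.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze spatial dimensions, then reweight channels (CA)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
    def forward(self, x):
        w = self.mlp(x).view(x.size(0), -1, 1, 1)
        return x * w

class SpatialAttention(nn.Module):
    """Reweight spatial locations from pooled channel statistics (SA)."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w

# SA and CA applied sequentially after a ResNet-style convolution stage
block = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                      SpatialAttention(), ChannelAttention(64))
out = block(torch.randn(1, 3, 96, 96))  # e.g., a fingerprint patch
```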

Article
Target Design in SEM-Based Nano-CT and Its Influence on X-ray Imaging
J. Imaging 2023, 9(8), 157; https://doi.org/10.3390/jimaging9080157 - 04 Aug 2023
Abstract
Nano-computed tomography (nano-CT) based on scanning electron microscopy (SEM) is utilized for multimodal material characterization in one instrument. Since SEM-based CT uses geometrical magnification, X-ray targets can be adapted without any further changes to the system. This allows for designing targets with varying geometry and chemical composition to influence the X-ray focal spot, intensity and energy distribution, with the aim of enhancing image quality. In this paper, three different target geometries with varying volume are presented: bulk, foil and needle targets. Based on the analyzed electron beam properties and X-ray beam path, the influence of the different target designs on X-ray imaging is investigated. With the obtained information, three targets for different applications are recommended. A platinum (Pt) bulk target tilted by 25°, as an optimal combination of high photon flux and spatial resolution, is used for fast CT scans and the investigation of highly absorbing or large sample volumes. To image low-absorbing materials, e.g., polymers or organic materials, a target material with a characteristic line energy just above the detector energy threshold is recommended. In the case of the observed system, we used a 30°-tilted chromium (Cr) target, leading to higher image contrast. To reach a maximum spatial resolution of about 100 nm, we recommend a tungsten (W) needle target with a tip diameter of about 100 nm.
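
Since the imaging relies on geometrical magnification, the trade-offs above follow from simple geometry; a back-of-the-envelope sketch with made-up distances (none taken from the paper):

```python
# Geometric magnification in a nano-CT setup (illustrative numbers only)
source_object_mm = 0.05      # target (X-ray source) to sample distance
source_detector_mm = 250.0   # target to detector distance

M = source_detector_mm / source_object_mm        # magnification, here 5000x
detector_pixel_um = 50.0
effective_voxel_nm = detector_pixel_um * 1e3 / M  # ~10 nm sampling at the sample

# The achievable resolution is additionally limited by the focal-spot size,
# e.g., the ~100 nm tip of the tungsten needle target recommended above.
print(M, effective_voxel_nm)
```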

Article
Classification of a 3D Film Pattern Image Using the Optimal Height of the Histogram for Quality Inspection
J. Imaging 2023, 9(8), 156; https://doi.org/10.3390/jimaging9080156 - 02 Aug 2023
Abstract
A 3D film pattern image was recently developed for marketing purposes, and an inspection method is needed to evaluate the quality of the pattern for mass production. However, because of its recent development, there are few methods to inspect the 3D film pattern. A good pattern in the 3D film has a clear outline and high contrast, while a bad pattern has a blurry outline and low contrast. Because of these characteristics, it is challenging to examine the quality of the 3D film pattern. In this paper, we propose a simple algorithm that classifies a 3D film pattern as either good or bad using the height of its histogram. Despite its simplicity, the proposed method can inspect the 3D film pattern accurately and quickly. In the experiments, the proposed method achieved 99.09% classification accuracy with a computation time of 6.64 s, demonstrating better performance than existing algorithms.
(This article belongs to the Section Image and Video Processing)
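
A minimal sketch of the histogram-height idea follows; the binning and threshold value are hypothetical, and the paper's exact criterion (its "optimal height") may differ.

```python
import numpy as np

def classify_pattern(gray_img, height_threshold=0.15):
    """Classify a 3D-film pattern image as good/bad from its histogram height.

    Illustrative re-implementation of the idea only: a good, high-contrast
    pattern concentrates intensity mass in tall histogram peaks, while a
    blurry, low-contrast pattern spreads it out, lowering the maximum bin.
    """
    hist, _ = np.histogram(gray_img, bins=256, range=(0, 255))
    peak_height = hist.max() / hist.sum()  # tallest bin as a fraction of pixels
    return "good" if peak_height >= height_threshold else "bad"

img = np.random.randint(0, 256, (512, 512))  # placeholder pattern image
print(classify_pattern(img))
```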

Article
The Cross-Sectional Area Assessment of Pelvic Muscles Using the MRI Manual Segmentation among Patients with Low Back Pain and Healthy Subjects
J. Imaging 2023, 9(8), 155; https://doi.org/10.3390/jimaging9080155 - 31 Jul 2023
Abstract
The pain pathomechanism of chronic low back pain (LBP) is complex, and the available diagnostic methods are insufficient. Patients present morphological changes in the volume and cross-sectional area (CSA) of the lumbosacral region. The main objective of this study was to assess whether CSA measurements of the pelvic muscles would indicate muscle atrophy between the asymptomatic and symptomatic sides in chronic LBP patients, as well as between the right and left sides in healthy volunteers. In addition, inter-rater reliability for the CSA measurements was examined. The study involved 71 chronic LBP patients and 29 healthy volunteers. The CSAs of the gluteus maximus, medius, minimus and piriformis were measured using the MRI manual segmentation method. Muscle atrophy was confirmed in the gluteus maximus, gluteus minimus and piriformis muscles for over 50% of chronic LBP patients (p < 0.05). The gluteus medius showed atrophy in patients with left-side pain occurrence (p < 0.001). Muscle atrophy occurred on the symptomatic side for all inspected muscles, except the gluteus maximus in rater one's assessment. The reliability of the CSA measurements between raters, calculated using CCC and ICC, showed great inter-rater reproducibility for each muscle in both patients and healthy volunteers (p < 0.95). Therefore, there is the possibility of using CSA assessment in the diagnosis of patients with symptoms of chronic LBP.
(This article belongs to the Section Medical Imaging)
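
For reference, the CCC reported above can be computed directly from paired measurements; a short sketch with hypothetical CSA values for two raters:

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two raters' values."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical CSA measurements (cm^2) of one muscle by two raters
rater1 = np.array([31.2, 28.7, 35.1, 30.4, 29.9])
rater2 = np.array([30.8, 29.1, 34.6, 30.9, 29.5])
print(round(concordance_ccc(rater1, rater2), 3))
```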

Article
Open-Set Recognition of Wood Species Based on Deep Learning Feature Extraction Using Leaves
J. Imaging 2023, 9(8), 154; https://doi.org/10.3390/jimaging9080154 - 30 Jul 2023
Abstract
An open-set recognition scheme for tree leaves based on deep learning feature extraction is presented in this study. Deep learning algorithms are used to extract leaf features for different wood species, and the leaf set is divided into two datasets: the leaf set of known wood species and the leaf set of unknown species. A convolutional neural network (CNN) is trained on the leaves of selected known wood species, and the features of the remaining known wood species and all unknown wood species are extracted using the trained CNN. Then, single-class classification is performed using the weighted SVDD algorithm to separate the leaves of known wood species from those of unknown species. The features of leaves recognized as known wood species are fed back to the trained CNN to identify their species. The recognition results of the single-class classifier for known and unknown wood species are combined with the recognition results of the multi-class CNN to complete the open-set recognition of wood species. We tested the proposed method on the publicly available Swedish Leaf Dataset, which includes 15 wood species (5 used as known and 10 used as unknown). With F1 scores of 0.7797 and 0.8644, mixed recognition rates of 95.15% and 93.14%, and Kappa coefficients of 0.7674 and 0.8644 under two different data distributions, the proposed method outperformed state-of-the-art open-set recognition algorithms in all three respects. Moreover, the more wood species are known, the better the recognition. This approach can extract effective features from tree leaf images for open-set recognition and achieve wood species recognition without compromising tree material.
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
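
A compressed sketch of the known/unknown rejection step, with scikit-learn's OneClassSVM standing in for the weighted SVDD used in the paper (the feature arrays are random placeholders for CNN embeddings):

```python
import numpy as np
from sklearn.svm import OneClassSVM

# feats_*: CNN feature vectors (illustrative random stand-ins)
rng = np.random.default_rng(0)
feats_known_train = rng.normal(0.0, 1.0, (200, 128))  # known-species leaves
feats_query = rng.normal(0.5, 1.0, (50, 128))         # mixed known/unknown

# One-class boundary around known-species features; OneClassSVM is a
# closely related substitute for the paper's weighted SVDD.
occ = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(feats_known_train)
is_known = occ.predict(feats_query) == 1  # +1 = inside the boundary (known)

# Samples flagged as known would then go to the multi-class CNN head for
# species identification; the rest are rejected as unknown species.
```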

Article
Nighttime Image Dehazing by Render
J. Imaging 2023, 9(8), 153; https://doi.org/10.3390/jimaging9080153 - 28 Jul 2023
Abstract
Nighttime image dehazing presents unique challenges due to unevenly distributed haze caused by the color variation of artificial light sources. This results in multiple interferences, including atmospheric light, glow, and direct light, which make the complex scattering haze difficult to accurately distinguish and remove. Additionally, obtaining pairs of high-definition data for fog removal at night is difficult. Together, these factors make nighttime image dehazing a particularly hard problem to solve. To address these challenges, we introduced the haze scattering formula to more accurately express haze in three-dimensional space. We also proposed a novel data synthesis method using the latest CG textures and Lumen lighting technology to build scenes where various hazes can be seen clearly through ray tracing. We converted the complex 3D scattering relationship into a 2D image dataset to better learn the mapping from 3D haze to 2D haze. Additionally, we improved the existing neural network and established a night haze intensity evaluation label based on the idea of the optical PSF. This allowed us to adjust the haze intensity of the rendered dataset according to the intensity of real haze images and improve dehazing accuracy. Our experiments showed that our data construction and network improvements achieved better visual effects, objective indicators, and computation speed.
(This article belongs to the Section Image and Video Processing)
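
To make the synthesis step concrete, here is a sketch of a generic nighttime scattering composite; the formula is the common atmospheric-scattering form plus a glow term, which may differ from the paper's exact 3D model.

```python
import numpy as np

def add_night_haze(J, t, A, glow):
    """Composite a hazy night image from a clear rendering.

    Illustrative form widely used in this literature (not necessarily the
    paper's exact formula):  I = J * t + A * (1 - t) + glow
    J: clear scene radiance, t: transmission map, A: local atmospheric
    light from artificial sources, glow: light-source glow term.
    """
    return np.clip(J * t + A * (1.0 - t) + glow, 0.0, 1.0)

J = np.random.rand(256, 256, 3)   # placeholder clear rendering
t = np.full((256, 256, 1), 0.6)   # spatially uniform haze for the demo
A = np.array([0.8, 0.6, 0.4])     # warm street-light tint
glow = np.zeros((256, 256, 3))
hazy = add_night_haze(J, t, A, glow)
```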

Article
Quantification of Gas Flaring from Satellite Imagery: A Comparison of Two Methods for SLSTR and BIROS Imagery
J. Imaging 2023, 9(8), 152; https://doi.org/10.3390/jimaging9080152 - 27 Jul 2023
Abstract
Gas flaring is an environmental problem of local, regional and global concern. Gas flares emit pollutants and greenhouse gases, yet knowledge about the source strength is limited due to disparate reporting approaches across different geographies, whenever and wherever reporting exists at all. Remote sensing has bridged the gap, but uncertainties remain. Numerous sensors provide measurements over flaring-active regions in wavelengths that are suitable for the observation of gas flares and the retrieval of flaring activity; however, their use for operational monitoring has been limited. Besides several potential sensors, there are also different approaches to conducting the retrievals. In this paper, we compare two retrieval approaches over an offshore flaring area during an extended period of time. Our results show that retrieved activities are consistent between methods, although discrepancies may arise for individual flares at high temporal resolution; these are traced back to the variable nature of flaring. The presented results are helpful for the estimation of flaring activity from different sources and will be useful in a future integration of diverse sensors and methodologies into a single monitoring scheme.
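
As a hedged illustration of what such retrievals ultimately compute, the snippet below derives radiant heat from a fitted flare temperature and hot-source area via the Stefan-Boltzmann law, a common final step in Nightfire-style methods; the numbers are typical flare values, not results from the SLSTR/BIROS comparison.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flare_radiant_heat(temp_k, area_m2):
    """Radiant heat of a flare modeled as a blackbody hot source.

    In Nightfire-style retrievals, a temperature and effective source area
    are first fitted to the multispectral radiances; radiant heat then
    follows from Stefan-Boltzmann. Values here are illustrative only.
    """
    return SIGMA * temp_k ** 4 * area_m2

print(flare_radiant_heat(1800.0, 4.0) / 1e6, "MW")  # ~2.4 MW, typical flare scale
```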

Article
Fast Compressed Sensing of 3D Radial T1 Mapping with Different Sparse and Low-Rank Models
J. Imaging 2023, 9(8), 151; https://doi.org/10.3390/jimaging9080151 - 26 Jul 2023
Abstract
Knowledge of the relative performance of the well-known sparse and low-rank compressed sensing models with 3D radial quantitative magnetic resonance imaging acquisitions is limited. We use 3D radial T1 relaxation time mapping data to compare the total variation, low-rank, and Huber penalty function approaches to regularization, providing insights into the relative performance of these image reconstruction models. Simulation and ex vivo specimen data were used to determine the best compressed sensing model, as measured by the normalized root mean squared error and the structural similarity index. The large-scale compressed sensing models were solved with a GPU implementation of a preconditioned primal-dual proximal splitting algorithm to provide high-quality T1 maps within a feasible computation time. The model combining spatial total variation and locally low-rank regularization yielded the best performance, followed closely by the model combining spatial and contrast-dimension total variation. Computation times ranged from 2 to 113 min, with the low-rank approaches taking the most time. The differences between the compressed sensing models are not necessarily large, but the overall performance is heavily dependent on the imaged object.
(This article belongs to the Section Medical Imaging)
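
As an illustration of one ingredient of such solvers, the proximal operator of the nuclear norm used for (locally) low-rank penalties is singular-value thresholding; a sketch with hypothetical matrix shapes, not the paper's code:

```python
import numpy as np

def prox_nuclear(X, lam):
    """Proximal operator of the nuclear norm (singular-value thresholding).

    This is the building block a primal-dual proximal splitting solver calls
    for a (locally) low-rank penalty. X: Casorati matrix with one row per
    voxel and one column per contrast frame; lam: threshold.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_thresh = np.maximum(s - lam, 0.0)  # shrink singular values toward zero
    return (U * s_thresh) @ Vt

X = np.random.rand(500, 12)  # 500 voxels, 12 inversion-time frames
X_lowrank = prox_nuclear(X, lam=0.5)
```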

Article
Quantitative CT Metrics Associated with Variability in the Diffusion Capacity of the Lung of Post-COVID-19 Patients with Minimal Residual Lung Lesions
J. Imaging 2023, 9(8), 150; https://doi.org/10.3390/jimaging9080150 - 26 Jul 2023
Abstract
(1) Background: A reduction in the diffusion capacity of the lung for carbon monoxide is a prevalent longer-term consequence of COVID-19 infection. In patients who have zero or minimal residual radiological abnormalities in the lungs, it has been debated whether the cause was mainly due to a reduced alveolar volume or involved diffuse interstitial or vascular abnormalities. (2) Methods: We performed a cross-sectional study of 45 patients with either zero or minimal residual lesions in the lungs (total volume < 7 cc) at two months to one year post COVID-19 infection. There was considerable variability in the diffusion capacity of the lung for carbon monoxide, with 27% of the patients at less than 80% of the predicted reference. We investigated a set of independent variables that may affect the diffusion capacity of the lung, including demographic, pulmonary physiology and CT (computed tomography)-derived variables of vascular volume, parenchymal density and residual lesion volume. (3) Results: The leading three variables that contributed to the variability in the diffusion capacity of the lung for carbon monoxide were the alveolar volume, determined via pulmonary function tests, the blood vessel volume fraction, determined via CT, and the parenchymal radiodensity, also determined via CT. These factors explained 49% of the variance of the diffusion capacity, with p values of 0.031, 0.005 and 0.018, respectively, after adjusting for confounders. A multiple-regression model combining these three variables fit the measured values of the diffusion capacity, with R = 0.70 and p < 0.001. (4) Conclusions: The results are consistent with the notion that in some post-COVID-19 patients, after their pulmonary lesions resolve, diffuse changes in the vascular and parenchymal structures, in addition to a low alveolar volume, could be contributors to a lingering low diffusion capacity.
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
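
A minimal sketch of the kind of multiple-regression fit described in the results, using statsmodels on synthetic stand-in data; the coefficients, units, and noise below are invented for illustration and do not reproduce the study's data.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical stand-in data: diffusion capacity vs. the three variables
rng = np.random.default_rng(1)
n = 45
alveolar_volume = rng.normal(5.5, 0.8, n)      # L, from pulmonary function tests
vessel_fraction = rng.normal(0.025, 0.005, n)  # CT blood-vessel volume fraction
parenchymal_hu = rng.normal(-850, 30, n)       # CT parenchymal radiodensity, HU
dlco = (8 * alveolar_volume + 900 * vessel_fraction
        + 0.02 * parenchymal_hu + rng.normal(0, 5, n) + 50)

X = sm.add_constant(np.column_stack(
    [alveolar_volume, vessel_fraction, parenchymal_hu]))
model = sm.OLS(dlco, X).fit()
print(model.rsquared, model.pvalues)  # analogous to the reported R = 0.70 fit
```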

Review
Prospective of Pancreatic Cancer Diagnosis Using Cardiac Sensing
J. Imaging 2023, 9(8), 149; https://doi.org/10.3390/jimaging9080149 - 25 Jul 2023
Abstract
Pancreatic carcinoma (Ca Pancreas) is the third leading cause of cancer-related deaths in the world. The malignancies of the pancreas can be diagnosed with the help of various imaging modalities. An endoscopic ultrasound with a tissue biopsy is so far considered the gold standard for detecting Ca Pancreas, especially for lesions <2 mm. However, other methods, like computed tomography (CT), ultrasound, and magnetic resonance imaging (MRI), are also conventionally used. Moreover, newer techniques, like proteomics, radiomics, metabolomics, and artificial intelligence (AI), are slowly being introduced for diagnosing pancreatic cancer. Regardless, it is still a challenge to diagnose pancreatic carcinoma non-invasively at an early stage due to its delayed presentation. Similarly, this also makes it difficult to demonstrate an association between Ca Pancreas and other vital organs of the body, such as the heart. A number of studies have shown a correlation between the heart and pancreatic cancer. A tumor of the pancreas affects the heart at the physiological as well as the molecular level. An overexpression of the SMAD4 gene; a disruption in biomolecules such as IGF, MAPK, and ApoE; and increased CA19-9 markers are a few of the many factors noted to affect the cardiovascular system in pancreatic malignancies. A comprehensive review of this correlation will aid researchers in conducting studies to help establish a definite relation between the two organs and discover ways to use it for the early detection of Ca Pancreas.

Article
Automatic Localization of Five Relevant Dermoscopic Structures Based on YOLOv8 for Diagnosis Improvement
J. Imaging 2023, 9(7), 148; https://doi.org/10.3390/jimaging9070148 - 21 Jul 2023
Abstract
The automatic detection of dermoscopic features is a task that provides specialists with an image indicating the different patterns present in it. This information can help them fully understand the image and improve their decisions. However, the automatic analysis of dermoscopic features can be difficult because of their small size. Some work has been performed in this area, but the results can be improved. The objective of this work is to improve the precision of the automatic detection of dermoscopic features. To achieve this goal, an algorithm named yolo-dermoscopic-features is proposed. The algorithm consists of four steps: (i) generate annotations in the JSON format for supervised learning of the model; (ii) propose a model based on the latest version of YOLO; (iii) pre-train the model for the segmentation of skin lesions; (iv) train five models for the five dermoscopic features. The experiments are performed on the ISIC 2018 task2 dataset. After training, the model is evaluated and compared to the performance of two other methods. The proposed method allows us to reach average performances of 0.9758, 0.954, 0.9724, 0.938, and 0.9692, respectively, for the Dice similarity coefficient, Jaccard similarity coefficient, precision, recall, and average precision. Furthermore, compared to other methods, the proposed method reaches a better Jaccard similarity coefficient of 0.954 and, thus, presents the best similarity with the annotations made by specialists. This method can also be used to automatically annotate images and, therefore, can be a solution to the lack of feature annotations in the dataset.
(This article belongs to the Special Issue Imaging Informatics: Computer-Aided Diagnosis)
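
For reference, the two overlap metrics emphasized above can be computed as follows; a self-contained sketch on toy binary masks (the mask shapes are arbitrary):

```python
import numpy as np

def dice_jaccard(pred, gt):
    """Dice and Jaccard similarity between two binary masks (illustrative)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())
    jaccard = inter / np.logical_or(pred, gt).sum()
    return dice, jaccard

pred = np.zeros((64, 64), int); pred[10:40, 10:40] = 1  # predicted feature mask
gt = np.zeros((64, 64), int);   gt[12:42, 12:42] = 1    # specialist annotation
print(dice_jaccard(pred, gt))
```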

Editorial
Deep Learning and Vision Transformer for Medical Image Analysis
J. Imaging 2023, 9(7), 147; https://doi.org/10.3390/jimaging9070147 - 21 Jul 2023
Abstract
Artificial intelligence (AI) refers to the field of computer science theory and technology [...]
(This article belongs to the Section Medical Imaging)

Article
Algebraic Multi-Layer Network: Key Concepts
J. Imaging 2023, 9(7), 146; https://doi.org/10.3390/jimaging9070146 - 18 Jul 2023
Abstract
The paper presents interdisciplinary research in the areas of hierarchical cluster analysis of big data and the ordering of primary data to detect objects in a color or grayscale image. To perform this on a limited domain of multidimensional data, an NP-hard problem is solved: the calculation of close-to-optimal piecewise-constant data approximations with the smallest possible standard deviations or total squared errors (approximation errors). The solution is achieved by revisiting, modernizing, and combining classical Ward's clustering, split/merge, and K-means methods. The concepts of objects, images, and their elements (superpixels) are formalized as structures that are distinguishable from each other. The results of structuring and ordering the image data are presented to the user in two ways, as tabulated approximations of the image showing the available object hierarchies. For both theoretical reasoning and practical implementation, reversible calculations with pixel sets are performed as easily as with individual pixels, in terms of Sleator–Tarjan dynamic trees and cyclic graphs forming an Algebraic Multi-Layer Network (AMN). The detailing of the latter significantly distinguishes this paper from our prior works. The establishment of the invariance of detected objects with respect to changes in the context of the image and its transformation into grayscale is also new.
(This article belongs to the Special Issue Image Segmentation Techniques: Current Status and Future Directions)
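
The core objective, a piecewise-constant approximation minimizing total squared error, can be sketched with plain K-means on pixel intensities; the paper's actual method additionally uses Ward's clustering and split/merge over pixel sets, so this is the idea only.

```python
import numpy as np
from sklearn.cluster import KMeans

# Piecewise-constant approximation of an image by K intensity levels:
# K-means on pixel values minimizes the total squared error (approximation
# error) the abstract refers to.
img = np.random.rand(128, 128)                 # placeholder grayscale image
pixels = img.reshape(-1, 1)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)

# Replace each pixel by its cluster's constant level
approx = km.cluster_centers_[km.labels_].reshape(img.shape)
total_squared_error = float(((img - approx) ** 2).sum())
```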

Article
Semi-Automatic GUI Platform to Characterize Brain Development in Preterm Children Using Ultrasound Images
J. Imaging 2023, 9(7), 145; https://doi.org/10.3390/jimaging9070145 - 18 Jul 2023
Abstract
The third trimester of pregnancy is the most critical period for human brain development, during which significant changes occur in the morphology of the brain. The development of sulci and gyri allows for a considerable increase in the brain surface. In preterm newborns, these changes occur in an extrauterine environment that may disrupt the normal brain maturation process. We hypothesize that a normalized atlas of brain maturation with cerebral ultrasound images from birth to term-equivalent age will help clinicians assess these changes. This work proposes a semi-automatic Graphical User Interface (GUI) platform for segmenting the main cerebral sulci from ultrasound images in the clinical setting. The platform was built from images of a neonatal cerebral ultrasound database provided by two clinical researchers from the Hospital Sant Joan de Déu in Barcelona, Spain. The primary objective is to provide clinicians with a user-friendly platform for running and visualizing an atlas of images validated by medical experts. The GUI offers different segmentation approaches and pre-processing tools and is designed for running, visualizing images, and segmenting the principal sulci. The presented results are discussed in detail, providing an exhaustive analysis of the proposed approach's effectiveness.
(This article belongs to the Special Issue Imaging Informatics: Computer-Aided Diagnosis)
