Search Results (55,953)

Search Parameters:
Journal = Sensors

Article
YOLOv5-Atn: An Algorithm for Residual Film Detection in Farmland Combined with an Attention Mechanism
Sensors 2023, 23(16), 7035; https://doi.org/10.3390/s23167035 - 08 Aug 2023
Abstract
The application of mulching film has significantly contributed to improving agricultural output and benefits, but residual film has caused severe impacts on agricultural production and the environment. In order to realize the accurate recycling of agricultural residual film, the detection of residual film is the first problem to be solved. The difference in color and texture between residual film and bare soil is not obvious, and residual film is of various sizes and morphologies. To solve these problems, this paper proposes a method for detecting residual film in agricultural fields that uses an attention mechanism. First, a two-stage pre-training approach with strengthened memory is proposed to enable the model to better understand the residual film features with limited data. Second, a multi-scale feature fusion module with adaptive weights is proposed to enhance the recognition of small residual film targets by using attention. Finally, an inter-feature cross-attention mechanism is designed that enables full interaction between shallow and deep feature information, reducing the useless noise extracted from residual film images. The experimental results on a self-made residual film dataset show that the improved model improves precision, recall, and mAP by 5.39%, 2.02%, and 3.95%, respectively, compared with the original model, and it also outperforms other recent detection models. The method provides strong technical support for accurately identifying farmland residual film and has the potential to be applied to mechanical equipment for the recycling of residual film. Full article
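The adaptive-weight multi-scale fusion described in this abstract can be illustrated with a minimal PyTorch sketch; the module name, channel sizes, and the softmax-normalized learnable weights below are illustrative assumptions, not the authors' YOLOv5-Atn implementation.

```python
# Minimal sketch (assumption): fuse three feature maps of different scales with
# learnable, softmax-normalized weights, one possible reading of
# "multi-scale feature fusion with adaptive weights".
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveFusion(nn.Module):
    def __init__(self, channels: int = 256):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(3))   # one weight per input scale
        self.out_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, p3, p4, p5):
        # Resize deeper (smaller) maps to the resolution of the shallowest map.
        size = p3.shape[-2:]
        p4 = F.interpolate(p4, size=size, mode="nearest")
        p5 = F.interpolate(p5, size=size, mode="nearest")
        w = torch.softmax(self.weights, dim=0)        # adaptive fusion weights
        fused = w[0] * p3 + w[1] * p4 + w[2] * p5     # weighted sum of scales
        return self.out_conv(fused)

if __name__ == "__main__":
    fuse = AdaptiveFusion(256)
    p3 = torch.randn(1, 256, 80, 80)
    p4 = torch.randn(1, 256, 40, 40)
    p5 = torch.randn(1, 256, 20, 20)
    print(fuse(p3, p4, p5).shape)   # torch.Size([1, 256, 80, 80])
```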
(This article belongs to the Section Smart Agriculture)
Article
Text Recognition Model Based on Multi-Scale Fusion CRNN
Sensors 2023, 23(16), 7034; https://doi.org/10.3390/s23167034 - 08 Aug 2023
Abstract
Scene text recognition is a crucial area of research in computer vision. However, current mainstream scene text recognition models suffer from incomplete feature extraction because they use a small downsampling scale in order to obtain more features. This limitation hampers their ability to extract complete features of each character in the image, resulting in lower accuracy in the text recognition process. To address this issue, a novel text recognition model based on multi-scale fusion and the convolutional recurrent neural network (CRNN) is proposed in this paper. The proposed model has a convolutional layer, a feature fusion layer, a recurrent layer, and a transcription layer. The convolutional layer uses two scales of feature extraction, which enables it to derive two distinct outputs for the input text image. The feature fusion layer fuses the different scales of features and forms a new feature. The recurrent layer learns contextual features from the input sequence of features. The transcription layer outputs the final result. The proposed model not only expands the recognition field but also learns more image features at different scales; thus, it extracts a more complete set of features and achieves better recognition of text. Experimental results demonstrate that the proposed model outperforms the CRNN model on scene text datasets such as Street View Text, IIIT-5K, ICDAR2003, and ICDAR2013 in terms of text recognition accuracy. Full article
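A minimal sketch of the fusion idea follows, assuming two convolutional branches with different downsampling scales whose outputs are resized, summed, and passed to a BiLSTM; the layer sizes and the per-timestep classifier are assumptions, not the paper's exact architecture (a CTC-style transcription layer would sit on top of these logits).

```python
# Minimal sketch (assumption): two convolutional branches with different
# downsampling scales, fused and fed to a BiLSTM, then a per-timestep classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleCRNN(nn.Module):
    def __init__(self, num_classes: int = 37):
        super().__init__()
        # Branch A: stronger downsampling; Branch B: weaker downsampling.
        self.branch_a = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())
        self.branch_b = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=1, padding=1), nn.ReLU())
        self.rnn = nn.LSTM(input_size=128, hidden_size=128,
                           bidirectional=True, batch_first=True)
        self.fc = nn.Linear(256, num_classes)

    def forward(self, x):                        # x: (B, 1, H, W) grayscale image
        fa = self.branch_a(x)                    # coarse-scale features
        fb = self.branch_b(x)                    # fine-scale features
        fb = F.interpolate(fb, size=fa.shape[-2:], mode="bilinear",
                           align_corners=False)
        fused = fa + fb                          # fusion of the two scales
        seq = fused.mean(dim=2).permute(0, 2, 1) # collapse height -> (B, W', C)
        out, _ = self.rnn(seq)
        return self.fc(out)                      # per-timestep class logits

if __name__ == "__main__":
    model = MultiScaleCRNN()
    print(model(torch.randn(2, 1, 32, 128)).shape)   # torch.Size([2, 32, 37])
```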
Article
Laser Safety—What Is the Laser Hazard Distance for an Electro-Optical Imaging System?
Sensors 2023, 23(16), 7033; https://doi.org/10.3390/s23167033 - 08 Aug 2023
Abstract
Laser safety is an important topic. Everybody working with lasers has to follow the long-established occupational safety rules to prevent people from eye damage by accidental irradiation. These rules comprise, for example, the calculation of the Maximum Permissible Exposure (MPE), as well as the corresponding laser hazard distance, the so-called Nominal Ocular Hazard Distance (NOHD). At exposure levels below the MPE, laser eye dazzling may occur and is described by a relatively new concept, leading to definitions such as the Maximum Dazzle Exposure (MDE) and its corresponding Nominal Ocular Dazzle Distance (NODD). In earlier work, we defined exposure limits for sensors corresponding to those for the human eye: the Maximum Permissible Exposure for a Sensor, MPES, and the Maximum Dazzle Exposure for a Sensor, MDES. In this publication, we report on our continued work concerning the laser hazard distances arising from these exposure limits. In contrast to the human eye, unexpected results occur for electro-optical imaging systems: for laser irradiances exceeding the exposure limit MPES, the laser hazard zone may not extend directly from the laser source but only begin at a specific distance from it. This means that some scenarios are possible where an electro-optical imaging sensor may be in danger of being damaged at a certain distance from the laser source but is safe from damage when located close to it. This is in contrast to laser eye safety, where it is assumed that the laser hazard zone always extends directly from the laser source. Furthermore, we provide closed-form equations to estimate laser hazard distances related to the damaging and dazzling of electro-optical imaging systems. Full article
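For orientation, the sketch below evaluates the textbook NOHD relation for a continuous-wave laser; it is not the paper's sensor-specific MPES/MDES analysis, and the example numbers are purely illustrative.

```python
# Minimal sketch (not the paper's equations): textbook NOHD for a CW laser with
# full-angle divergence theta, beam diameter d0 at the exit aperture, power P,
# and an irradiance-based MPE (all SI units). The NOHD is the distance beyond
# which the irradiance falls below the MPE.
import math

def nohd(power_w: float, mpe_w_m2: float, divergence_rad: float,
         beam_diameter_m: float = 0.0) -> float:
    """Nominal Ocular Hazard Distance in metres (0 if already below the MPE)."""
    hazard_diameter = math.sqrt(4.0 * power_w / (math.pi * mpe_w_m2))
    return max(0.0, (hazard_diameter - beam_diameter_m) / divergence_rad)

if __name__ == "__main__":
    # Illustrative numbers only: 100 mW laser, 1 mrad divergence, 2 mm beam,
    # MPE of 10 W/m^2 (order of magnitude for a CW visible exposure).
    print(f"NOHD ~ {nohd(0.1, 10.0, 1e-3, 2e-3):.0f} m")
```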
(This article belongs to the Section Optical Sensors)
Article
LSGP-USFNet: Automated Attention Deficit Hyperactivity Disorder Detection Using Locations of Sophie Germain’s Primes on Ulam’s Spiral-Based Features with Electroencephalogram Signals
Sensors 2023, 23(16), 7032; https://doi.org/10.3390/s23167032 - 08 Aug 2023
Abstract
Anxiety, learning disabilities, and depression are symptoms of attention deficit hyperactivity disorder (ADHD), an isogenous pattern of hyperactivity, impulsivity, and inattention. For the early diagnosis of ADHD, electroencephalogram (EEG) signals are widely used. However, the direct analysis of an EEG is highly challenging as it is time-consuming, nonlinear, and nonstationary in nature. Thus, in this paper, a novel approach (LSGP-USFNet) is developed based on the patterns obtained from Ulam’s spiral and Sophie Germain’s prime numbers. The EEG signals are initially filtered to remove the noise and segmented with a non-overlapping sliding window of a length of 512 samples. Then, a time–frequency analysis approach, namely continuous wavelet transform, is applied to each channel of the segmented EEG signal to interpret it in the time and frequency domain. The obtained time–frequency representation is saved as a time–frequency image, and a non-overlapping n × n sliding window is applied to this image for patch extraction. An n × n Ulam’s spiral is localized on each patch, and the gray levels are acquired from this patch as features where Sophie Germain’s primes are located in Ulam’s spiral. All gray tones from all patches are concatenated to construct the features for the ADHD and normal classes. A gray tone selection algorithm, namely ReliefF, is employed on the representative features to acquire the final, most important gray tones. The support vector machine classifier is used with a 10-fold cross-validation criterion. Our proposed approach, LSGP-USFNet, was developed using a publicly available dataset and obtained an accuracy of 97.46% in detecting ADHD automatically. Our generated model is ready to be validated using a bigger database, and it can also be used to detect other neurological disorders in children. Full article
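The patch-level feature extraction can be sketched as follows, under the assumptions that the spiral starts at 1 in the centre of an odd-sized patch and turns counter-clockwise; the paper's exact spiral orientation and patch size may differ.

```python
# Minimal sketch (assumptions: spiral starts at 1 in the centre of an odd-sized
# patch, counter-clockwise). Builds an n x n Ulam spiral, marks cells holding
# Sophie Germain primes (p and 2p + 1 both prime), and reads the grey levels
# of an image patch at those cells.
import numpy as np

def _is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % k for k in range(2, int(n ** 0.5) + 1))

def ulam_spiral(n: int) -> np.ndarray:
    grid = np.zeros((n, n), dtype=int)
    r = c = n // 2                                   # start in the centre
    grid[r, c] = 1
    value, step, d = 2, 1, 0
    moves = [(0, 1), (-1, 0), (0, -1), (1, 0)]       # right, up, left, down
    while value <= n * n:
        for _ in range(2):                           # each step length used twice
            dr, dc = moves[d % 4]
            for _ in range(step):
                r, c = r + dr, c + dc
                if 0 <= r < n and 0 <= c < n:
                    grid[r, c] = value
                value += 1
            d += 1
        step += 1
    return grid

def sophie_germain_features(patch: np.ndarray) -> np.ndarray:
    n = patch.shape[0]
    spiral = ulam_spiral(n)
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(n):
            v = int(spiral[i, j])
            mask[i, j] = _is_prime(v) and _is_prime(2 * v + 1)
    return patch[mask]              # grey levels where Sophie Germain primes sit

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patch = rng.integers(0, 256, size=(7, 7))        # stand-in time-frequency patch
    print(sophie_germain_features(patch))
```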
(This article belongs to the Special Issue EEG Sensors for Biomedical Applications)
Article
Detection of Audio Tampering Based on Electric Network Frequency Signal
Sensors 2023, 23(16), 7029; https://doi.org/10.3390/s23167029 - 08 Aug 2023
Abstract
The detection of audio tampering plays a crucial role in ensuring the authenticity and integrity of multimedia files. This paper presents a novel approach to identifying tampered audio files by leveraging the unique Electric Network Frequency (ENF) signal, which is inherent to the power grid and serves as a reliable indicator of authenticity. The study begins by establishing a comprehensive Chinese ENF database containing diverse ENF signals extracted from audio files. The proposed methodology involves extracting the ENF signal, applying wavelet decomposition, and utilizing the autoregressive model to train effective classification models. Subsequently, the framework is employed to detect audio tampering and assess the influence of various environmental conditions and recording devices on the ENF signal. Experimental evaluations conducted on our Chinese ENF database demonstrate the efficacy of the proposed method, achieving impressive accuracy rates ranging from 91% to 93%. The results emphasize the significance of ENF-based approaches in enhancing audio file forensics and reaffirm the necessity of adopting reliable tamper detection techniques in multimedia authentication. Full article
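As a rough illustration of ENF extraction (not the paper's wavelet and autoregressive pipeline), the sketch below band-pass filters around the 50 Hz mains frequency used in China and tracks the dominant spectral peak in short frames.

```python
# Minimal sketch (assumption): estimate an ENF trace by band-pass filtering
# around the 50 Hz mains frequency and picking the dominant FFT bin in short
# frames. Only the ENF-extraction idea is shown here.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def estimate_enf(audio: np.ndarray, fs: int, nominal: float = 50.0,
                 frame_s: float = 2.0, hop_s: float = 1.0) -> np.ndarray:
    # Narrow band-pass around the nominal mains frequency.
    sos = butter(4, [nominal - 1.0, nominal + 1.0], btype="bandpass",
                 fs=fs, output="sos")
    x = sosfiltfilt(sos, audio)
    frame, hop = int(frame_s * fs), int(hop_s * fs)
    enf = []
    for start in range(0, len(x) - frame, hop):
        seg = x[start:start + frame] * np.hanning(frame)
        spec = np.abs(np.fft.rfft(seg, n=8 * frame))      # zero-padded FFT
        freqs = np.fft.rfftfreq(8 * frame, d=1.0 / fs)
        band = (freqs > nominal - 1.0) & (freqs < nominal + 1.0)
        enf.append(freqs[band][np.argmax(spec[band])])    # dominant frequency
    return np.array(enf)

if __name__ == "__main__":
    fs = 8000
    t = np.arange(0, 20, 1 / fs)
    # Synthetic hum: 50 Hz with a slow drift, buried in noise.
    hum = np.sin(2 * np.pi * (50 + 0.02 * np.sin(0.1 * t)) * t)
    audio = hum + 0.5 * np.random.randn(t.size)
    print(estimate_enf(audio, fs)[:5])
```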
(This article belongs to the Special Issue Advanced Technology in Acoustic Signal Processing)
Article
Large-Scale Cellular Vehicle-to-Everything Deployments Based on 5G—Critical Challenges, Solutions, and Vision towards 6G: A Survey
Sensors 2023, 23(16), 7031; https://doi.org/10.3390/s23167031 - 08 Aug 2023
Abstract
The proliferation of fifth-generation (5G) networks has opened up new opportunities for the deployment of cellular vehicle-to-everything (C-V2X) systems. However, the large-scale implementation of 5G-based C-V2X poses critical challenges requiring thorough investigation and resolution for successful deployment. This paper aims to identify and analyze the key challenges associated with the large-scale deployment of 5G-based C-V2X systems. In addition, we address obstacles and possible contradictions in the C-V2X standards caused by the special requirements. Moreover, we introduce several influential C-V2X projects that have shaped the widespread adoption of C-V2X technology in recent years. As its primary goal, this survey aims to provide valuable insights and summarize the current state of the field for researchers, industry professionals, and policymakers involved in the advancement of C-V2X. Furthermore, this paper presents relevant standardization aspects and visions for advanced 5G and 6G approaches to address some of the upcoming issues in mid-term timelines. Full article
Article
Performance Analysis of IRS-Assisted THz Communication Systems over α-μ Fading Channels with Pointing Errors
Sensors 2023, 23(16), 7028; https://doi.org/10.3390/s23167028 - 08 Aug 2023
Abstract
In this paper, we analyze the performance of an intelligent reflecting surface (IRS)-aided terahertz (THz) wireless communication system with pointing errors. Specifically, we derive closed-form analytical expressions for the upper-bounded ergodic capacity and an approximate expression for the outage probability. We adopt an α-μ fading channel model for our analysis, which has been experimentally demonstrated to be a good fit for THz small-scale fading statistics, especially in indoor communication scenarios. In the proposed analysis, the statistical distribution of the α-μ fading channel is used to derive analytical expressions for the ergodic capacity and outage probability. Our analysis considers not only the IRS-reflected channels but also the direct channel between the communication nodes. The results of the derived analytical expressions are validated through Monte Carlo simulations. Through simulations, we observe that pointing errors degrade the performance of the IRS-assisted THz wireless communication system, which can be compensated for by deploying an IRS with a large number of reflecting elements. Full article
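The Monte Carlo validation step can be illustrated with a minimal sketch that simulates a single α-μ faded link, without the IRS combining or pointing-error model; generating α-μ samples from a Gamma variate is a standard property of the distribution, while the SNR and rate values below are arbitrary.

```python
# Minimal sketch (assumption): Monte Carlo estimate of the outage probability
# of a single alpha-mu faded link; no IRS combining or pointing-error model,
# just the kind of simulation used to validate a closed-form analysis.
import numpy as np

def alpha_mu_samples(alpha: float, mu: float, r_hat: float, n: int,
                     rng: np.random.Generator) -> np.ndarray:
    # If R follows an alpha-mu distribution, R**alpha is Gamma(mu) distributed
    # with mean r_hat**alpha, hence scale = r_hat**alpha / mu.
    g = rng.gamma(shape=mu, scale=r_hat ** alpha / mu, size=n)
    return g ** (1.0 / alpha)

def outage_probability(alpha: float, mu: float, snr_db: float,
                       rate_bps_hz: float, n: int = 1_000_000) -> float:
    rng = np.random.default_rng(1)
    r = alpha_mu_samples(alpha, mu, r_hat=1.0, n=n, rng=rng)
    snr = 10 ** (snr_db / 10) * r ** 2            # instantaneous SNR
    capacity = np.log2(1.0 + snr)
    return float(np.mean(capacity < rate_bps_hz)) # fraction of outage events

if __name__ == "__main__":
    for snr_db in (0, 10, 20):
        p = outage_probability(alpha=2.0, mu=2.5, snr_db=snr_db, rate_bps_hz=1.0)
        print(f"SNR {snr_db:2d} dB -> outage {p:.4f}")
```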
(This article belongs to the Section Communications)
Article
Triangle-Mesh-Rasterization-Projection (TMRP): An Algorithm to Project a Point Cloud onto a Consistent, Dense and Accurate 2D Raster Image
Sensors 2023, 23(16), 7030; https://doi.org/10.3390/s23167030 - 08 Aug 2023
Abstract
The projection of a point cloud onto a 2D camera image is relevant for various image analysis and enhancement tasks, e.g., (i) in multimodal image processing for data fusion, (ii) in robotic applications and scene analysis, and (iii) for deep neural networks to generate real datasets with ground truth. The challenges of current single-shot projection methods, such as simple state-of-the-art projection, conventional, polygon, and deep learning-based upsampling methods, or closed-source SDK functions of low-cost depth cameras, have been identified. We developed a new way to project point clouds onto a dense, accurate 2D raster image, called Triangle-Mesh-Rasterization-Projection (TMRP). The only gaps that the 2D image still contains with our method are valid gaps that result from the physical limits of the capturing cameras. Dense accuracy is achieved by simultaneously using the 2D neighborhood information (rx,ry) of the 3D coordinates in addition to the points P(X,Y,V). In this way, a fast triangulation interpolation can be performed. The interpolation weights are determined using sub-triangles. Compared to single-shot methods, our algorithm solves the following challenges: (1) no false gaps or false neighborhoods are generated, (2) the density is XYZ independent, and (3) ambiguities are eliminated. Our TMRP method is also open source, freely available on GitHub, and can be applied to almost any sensor or modality. We also demonstrate the usefulness of our method with four use cases using the KITTI-2012 dataset or sensors with different modalities. Our goal is to improve recognition tasks and processing optimization in the perception of transparent objects for robotic manufacturing processes. Full article
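A minimal sketch of the sub-triangle (barycentric) interpolation step for one triangle follows; the full TMRP pipeline additionally builds the mesh, handles occlusions, and preserves valid gaps, so this only illustrates how sub-triangle areas yield interpolation weights.

```python
# Minimal sketch: rasterize one triangle of projected points onto a 2D raster
# and interpolate the value V at each covered pixel with barycentric weights
# (ratios of sub-triangle areas to the full triangle area).
import numpy as np

def _area2(p, q, r):
    # Twice the signed area of triangle (p, q, r).
    return (q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])

def rasterize_triangle(raster: np.ndarray, verts: np.ndarray, values: np.ndarray):
    a, b, c = verts                                   # pixel coordinates (x, y)
    total = _area2(a, b, c)
    if total == 0:
        return
    xs = range(int(np.floor(verts[:, 0].min())), int(np.ceil(verts[:, 0].max())) + 1)
    ys = range(int(np.floor(verts[:, 1].min())), int(np.ceil(verts[:, 1].max())) + 1)
    for y in ys:
        for x in xs:
            p = (x, y)
            wa = _area2(p, b, c) / total              # sub-triangle weights
            wb = _area2(a, p, c) / total
            wc = _area2(a, b, p) / total
            if wa >= 0 and wb >= 0 and wc >= 0:       # pixel inside the triangle
                if 0 <= y < raster.shape[0] and 0 <= x < raster.shape[1]:
                    raster[y, x] = wa * values[0] + wb * values[1] + wc * values[2]

if __name__ == "__main__":
    img = np.full((8, 8), np.nan)                     # NaN marks untouched pixels
    rasterize_triangle(img,
                       verts=np.array([[1.0, 1.0], [6.0, 2.0], [3.0, 6.0]]),
                       values=np.array([0.0, 1.0, 0.5]))
    print(np.round(img, 2))
```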
Article
A Novel Decoupled Feature Pyramid Networks for Multi-Target Ship Detection
Sensors 2023, 23(16), 7027; https://doi.org/10.3390/s23167027 - 08 Aug 2023
Abstract
The efficiency and accuracy of ship detection are of great significance to ship safety, harbor management, and ocean surveillance in coastal harbors. The main limitations of current ship detection methods lie in the complexity of application scenarios, the difficulty of detecting objects at diverse scales, and the low efficiency of network training. In order to solve these problems, a novel multi-target ship detection method based on a decoupled feature pyramid algorithm (DFPN) is proposed in this paper. First, a feature decoupling module is introduced to separate ship contour features and position features from the multi-scale fused features, to overcome the problem of similar features among multi-target ships. Second, a feature pyramid structure combined with a gating attention module is constructed to improve the feature resolution of small ships by enhancing contour features and spatial semantic information. Finally, a feature pyramid-based multi-feature fusion algorithm is proposed to improve the adaptability of the network to changes in ship scale according to the contextual relationship of ship features. Experiments on the multi-target ship detection dataset showed that the proposed method achieved 6.3% higher mAP and 20 more FPS than YOLOv4, 7.6% higher mAP and 36 more FPS than Faster R-CNN, 5% higher mAP and 36 more FPS than Mask R-CNN, and 4.1% higher mAP and 35 more FPS than DetectoRS. The results demonstrate that the DFPN can detect multi-target ships in different scenes with high accuracy and a fast detection speed. Full article
(This article belongs to the Section Intelligent Sensors)
Article
Multi-Patch Hierarchical Transmission Channel Image Dehazing Network Based on Dual Attention Level Feature Fusion
Sensors 2023, 23(16), 7026; https://doi.org/10.3390/s23167026 - 08 Aug 2023
Abstract
Unmanned Aerial Vehicle (UAV) inspection of transmission channels in mountainous areas is susceptible to non-homogeneous fog, such as up-slope fog and advection fog, which causes crucial portions of transmission lines or towers to become fuzzy or even wholly concealed. This paper presents a Dual Attention Level Feature Fusion Multi-Patch Hierarchical Network (DAMPHN) for single image defogging to address the poor quality of cross-level feature fusion in Fast Deep Multi-Patch Hierarchical Networks (FDMPHN). Compared with FDMPHN before improvement, the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) of DAMPHN are increased by 0.3 dB and 0.011 on average, and the Average Processing Time (APT) of a single picture is shortened by 11%. Additionally, compared with three other excellent defogging methods, the PSNR and SSIM values of DAMPHN are increased by 1.75 dB and 0.022 on average. Then, to mimic non-homogeneous fog, we combine the single picture depth information with 3D Perlin noise to create the UAV-HAZE dataset, which is used in the field of UAV power assessment. The experiment demonstrates that DAMPHN offers excellent defogging results and is competitive in no-reference and full-reference assessment indices. Full article
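For reference, the sketch below computes the full-reference PSNR and SSIM metrics used in the comparison above; it is generic metric code, not part of DAMPHN, and assumes 8-bit images.

```python
# Minimal sketch: PSNR for 8-bit images, plus SSIM via scikit-image. Both are
# full-reference metrics comparing a dehazed output against the clear image.
import numpy as np
from skimage.metrics import structural_similarity

def psnr(reference: np.ndarray, restored: np.ndarray, max_val: float = 255.0) -> float:
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clear = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    # A slightly perturbed copy stands in for the dehazed result.
    dehazed = np.clip(clear.astype(int) + rng.integers(-5, 6, size=clear.shape),
                      0, 255).astype(np.uint8)
    print(f"PSNR: {psnr(clear, dehazed):.2f} dB")
    print(f"SSIM: {structural_similarity(clear, dehazed, data_range=255):.3f}")
```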
(This article belongs to the Special Issue Deep Power Vision Technology and Intelligent Vision Sensors)
Article
Visual-Based Children and Pet Rescue from Suffocation and Incidence of Hyperthermia Death in Enclosed Vehicles
Sensors 2023, 23(16), 7025; https://doi.org/10.3390/s23167025 - 08 Aug 2023
Abstract
Over the past several years, many children have died from suffocation due to being left inside a closed vehicle on a sunny day. Various vehicle manufacturers have proposed a variety of technologies to locate an unattended child in a vehicle, including pressure sensors, passive infrared motion sensors, temperature sensors, and microwave sensors. However, these methods have not yet reliably located forgotten children in the vehicle. Recently, visual-based methods have attracted the attention of manufacturers after the emergence of deep learning technology. However, the existing methods focus only on a forgotten child and neglect a forgotten pet. Furthermore, their systems only detect the presence of a child in the car with or without their parents. Therefore, this research introduces a visual-based framework to reduce hyperthermia deaths in enclosed vehicles. This visual-based system detects objects inside a vehicle; if a child or pet is present without an adult, a notification is sent to the parents. First, a dataset is constructed for vehicle interiors containing children, pets, and adults. The proposed dataset is collected from different online sources, considering varying illumination, skin color, pet type, clothing, and car brands to guarantee model robustness. Second, blurring, sharpening, brightness, contrast, noise, perspective transform, and fog effect augmentation algorithms are applied to these images to increase the training data. The augmented images are annotated with three classes: child, pet, and adult. This research concentrates on fine-tuning different state-of-the-art real-time detection models to detect objects inside the vehicle: NanoDet, YOLOv6_1, YOLOv6_3, and YOLOv7. The simulation results demonstrate that YOLOv6_1 achieves the best results, with 96% recall, 95% precision, and a 95% F1-score. Full article
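A minimal sketch of a few of the listed augmentations (blur, brightness/contrast, Gaussian noise) with OpenCV follows; the perspective-transform and fog-effect augmentations and the paper's parameter ranges are not reproduced here, and the input filename is hypothetical.

```python
# Minimal sketch of three of the augmentations listed above; parameters are
# illustrative defaults, not the paper's settings.
import cv2
import numpy as np

def blur(img: np.ndarray, k: int = 5) -> np.ndarray:
    return cv2.GaussianBlur(img, (k, k), 0)

def brightness_contrast(img: np.ndarray, alpha: float = 1.2, beta: float = 20) -> np.ndarray:
    # alpha scales contrast, beta shifts brightness; result is clipped to 0..255.
    return cv2.convertScaleAbs(img, alpha=alpha, beta=beta)

def gaussian_noise(img: np.ndarray, sigma: float = 10.0) -> np.ndarray:
    noise = np.random.normal(0.0, sigma, img.shape)
    return np.clip(img.astype(np.float64) + noise, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    image = cv2.imread("cabin_example.jpg")      # hypothetical in-vehicle image
    if image is not None:
        for name, aug in [("blur", blur), ("bc", brightness_contrast),
                          ("noise", gaussian_noise)]:
            cv2.imwrite(f"aug_{name}.jpg", aug(image))
```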
(This article belongs to the Special Issue Application of Semantic Technologies in Sensors and Sensing Systems)
Review
Development of a 2 μm Solid-State Laser for Lidar in the Past Decade
Sensors 2023, 23(16), 7024; https://doi.org/10.3390/s23167024 - 08 Aug 2023
Abstract
The 2 μm wavelength belongs to the eye-safe band and has a wide range of applications in the fields of lidar, biomedicine, and materials processing. With the rapid development of military, wind power, sensing, and other industries, new requirements for 2 μm solid-state laser light sources have emerged, especially in the field of lidar. This paper focuses on the research progress of 2 μm solid-state lasers for lidar over the past decade. The technology and performance of 2 μm pulsed single longitudinal mode solid-state lasers, 2 μm seed solid-state lasers, and 2 μm high power solid-state lasers are, respectively, summarized and analyzed. This paper also introduces the properties of gain media commonly used in the 2 μm band, the construction method of new bonded crystals, and the fabrication method of saturable absorbers. Finally, the future prospects of 2 μm solid-state lasers for lidar are presented. Full article
(This article belongs to the Special Issue Important Achievements in Optical Measurements in China 2022–2023)
Article
A Convolutional Neural Network for Electrical Fault Recognition in Active Magnetic Bearing Systems
Sensors 2023, 23(16), 7023; https://doi.org/10.3390/s23167023 - 08 Aug 2023
Abstract
Active magnetic bearings are complex mechatronic systems that consist of mechanical, electrical, and software parts, unlike classical rolling bearings. Given the complexity of this type of system, fault detection is a critical process. This paper presents a new and simple way to detect faults based on the use of a fault dictionary and machine learning. The dictionary was built from fault signatures consisting of images obtained from the signals available in the system. Subsequently, a convolutional neural network was trained to recognize such fault signature images. The objective of this study was to develop a fault dictionary and a classifier to recognize the most frequent soft electrical faults that affect position sensors and actuators. The proposed method determines, in a computationally convenient way that can be implemented in real time, which component has failed and what kind of failure has occurred. Therefore, this fault identification system makes it possible to determine which countermeasure to adopt in order to enhance the reliability of the system. The performance of this method was assessed by means of a case study concerning a real turbomachine supported by two active magnetic bearings for the oil and gas field. Seventeen fault classes were considered, and the neural network fault classifier reached an accuracy of 93% on the test dataset. Full article
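As an illustration of the classification stage, the sketch below defines a small CNN over fault-signature images with 17 output classes; the image size, channel counts, and layer layout are assumptions, not the authors' network.

```python
# Minimal sketch (assumption): a small CNN that classifies fault-signature
# images into 17 classes, standing in for the classifier described above.
import torch
import torch.nn as nn

class FaultSignatureCNN(nn.Module):
    def __init__(self, num_classes: int = 17):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                    # x: (B, 1, H, W) signature image
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    model = FaultSignatureCNN()
    logits = model(torch.randn(4, 1, 96, 96))
    print(logits.shape)                      # torch.Size([4, 17])
```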
(This article belongs to the Section Fault Diagnosis & Sensors)
Article
Shot Boundary Detection with 3D Depthwise Convolutions and Visual Attention
Sensors 2023, 23(16), 7022; https://doi.org/10.3390/s23167022 - 08 Aug 2023
Abstract
Shot boundary detection is the process of identifying and locating the boundaries between individual shots in a video sequence. A shot is a continuous sequence of frames captured by a single camera, without any cuts or edits. Recent investigations have shown the effectiveness of 3D convolutional networks for solving this task due to their high capacity to extract spatiotemporal features of the video and determine in which frame a transition or shot change occurs. When this task is used as part of a scene segmentation use case with the aim of improving the experience of viewing content from streaming platforms, the speed of segmentation is very important for live and near-live use cases such as start-over. The problem with models based on 3D convolutions is the large number of parameters that they entail. Standard 3D convolutions impose much higher CPU and memory requirements than the same 2D operations. In this paper, we rely on depthwise separable convolutions to address the problem, with a scheme that significantly reduces the number of parameters. To compensate for the slight loss of performance, we analyze and propose the use of visual self-attention as a mechanism of improvement. Full article
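The parameter saving from depthwise separable 3D convolutions can be seen in a short sketch comparing a depthwise-plus-pointwise Conv3d against a standard Conv3d; the channel counts and kernel size are illustrative, not taken from the paper.

```python
# Minimal sketch: a depthwise separable 3D convolution (depthwise Conv3d with
# groups=in_channels followed by a 1x1x1 pointwise Conv3d) versus a standard
# Conv3d, illustrating the parameter reduction discussed above.
import torch
import torch.nn as nn

class DepthwiseSeparableConv3d(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.depthwise = nn.Conv3d(in_ch, in_ch, k, padding=k // 2, groups=in_ch)
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

def n_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

if __name__ == "__main__":
    standard = nn.Conv3d(64, 128, kernel_size=3, padding=1)
    separable = DepthwiseSeparableConv3d(64, 128)
    x = torch.randn(1, 64, 8, 56, 56)             # (B, C, T, H, W) clip features
    print(standard(x).shape, separable(x).shape)  # same output shape
    print("standard:", n_params(standard), "separable:", n_params(separable))
```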
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
Article
Power Transformers OLTC Condition Monitoring Based on Feature Extraction from Vibro-Acoustic Signals: Main Peaks and Euclidean Distance
Sensors 2023, 23(16), 7020; https://doi.org/10.3390/s23167020 - 08 Aug 2023
Abstract
The detection of On-Load Tap-Changer (OLTC) faults at an early stage plays a significant role in the maintenance of power transformers, which are the most strategic components of power network substations. Among the OLTC fault detection methods, vibro-acoustic signal analysis is known as a high-performing approach with the ability to detect many faults of different types. Extracting characteristic features from the measured vibro-acoustic signal envelopes is a promising approach to precisely diagnose OLTC faults. The present research work is focused on developing a methodology to detect, locate, and track changes in on-line monitored vibro-acoustic signal envelopes based on main peak extraction and Euclidean distance analysis. OLTC monitoring systems have been installed on power transformers in service, which has allowed the recording of a rich dataset of vibro-acoustic signal envelopes in real time. The proposed approach was applied to six different datasets, and a detailed analysis is reported. The results demonstrate the capability of the proposed approach to recognize, track, and localize faults that cause changes in the vibro-acoustic signal envelopes over time. Full article
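A minimal sketch of the envelope-peak and Euclidean-distance idea follows, using a Hilbert-transform envelope, peak picking, and a distance between peak features; the thresholds and feature definition are illustrative assumptions, not the paper's methodology.

```python
# Minimal sketch: compute a vibro-acoustic envelope with a Hilbert transform,
# extract its main peaks, and compare two recordings through the Euclidean
# distance between their peak features.
import numpy as np
from scipy.signal import hilbert, find_peaks

def envelope(signal: np.ndarray) -> np.ndarray:
    return np.abs(hilbert(signal))

def main_peaks(env: np.ndarray, n_peaks: int = 5) -> np.ndarray:
    idx, props = find_peaks(env, height=0.2 * env.max(), distance=50)
    order = np.argsort(props["peak_heights"])[::-1][:n_peaks]
    keep = np.sort(idx[order])                       # keep chronological order
    return np.column_stack([keep, env[keep]])        # (position, amplitude) rows

def peak_distance(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
    fa, fb = main_peaks(envelope(sig_a)), main_peaks(envelope(sig_b))
    n = min(len(fa), len(fb))
    return float(np.linalg.norm(fa[:n] - fb[:n]))    # Euclidean distance

if __name__ == "__main__":
    t = np.linspace(0, 1, 4000)
    def burst(t0):
        return np.exp(-((t - t0) * 40) ** 2) * np.sin(2 * np.pi * 800 * t)
    healthy = burst(0.2) + burst(0.5) + burst(0.8)
    faulty = burst(0.2) + 0.4 * burst(0.55) + burst(0.8)   # weakened, shifted peak
    print(f"distance healthy vs faulty: {peak_distance(healthy, faulty):.1f}")
```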
(This article belongs to the Section Physical Sensors)