Search Results (875)

Search Parameters:
Journal = Computers

Article
Downlink Power Allocation for CR-NOMA-Based Femtocell D2D Using Greedy Asynchronous Distributed Interference Avoidance Algorithm
Computers 2023, 12(8), 158; https://doi.org/10.3390/computers12080158 - 03 Aug 2023
Abstract
This paper focuses on downlink power allocation for a cognitive radio-based non-orthogonal multiple access (CR-NOMA) system in a femtocell environment involving device-to-device (D2D) communication. The proposed power allocation scheme employs the greedy asynchronous distributed interference avoidance (GADIA) algorithm. This research aims to optimize power allocation in the downlink transmission, considering the unique characteristics of the CR-NOMA-based femtocell D2D system. The GADIA algorithm is utilized to mitigate interference and optimize power allocation across the network. Using a fairness index, this research presents a novel fairness-constrained power allocation algorithm for a downlink non-orthogonal multiple access (NOMA) system. Through extensive simulations, the maximum rate under fairness (MRF) algorithm is shown to effectively optimize system performance while maintaining fairness among users. The fairness index is demonstrated to be adaptable to various user counts, offering a specified range with excellent responsiveness. The GADIA algorithm yields promising results for sub-optimal frequency band distribution within the network. Mathematical models evaluated in MATLAB further confirm the superiority of CR-NOMA over optimum power allocation NOMA (OPA) and fixed power allocation NOMA (FPA) techniques.
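The abstract does not define the fairness index it uses; Jain's index is the standard choice in the NOMA power-allocation literature, so the sketch below assumes it. A minimal two-user downlink NOMA example in Python, where the channel gains, noise power, and power split are illustrative values, not taken from the paper:

```python
import numpy as np

def jains_fairness(rates):
    """Jain's fairness index: ranges from 1/n (worst) to 1 (perfectly fair)."""
    rates = np.asarray(rates, dtype=float)
    return rates.sum() ** 2 / (len(rates) * (rates ** 2).sum())

# Two-user downlink NOMA: the weaker user is allocated more power.
p_total, alpha = 1.0, 0.8                    # total power, fraction for the weak user
g_weak, g_strong, noise = 0.2, 1.0, 0.01     # channel gains and noise power (toy values)

# The weak user decodes its signal treating the strong user's signal as interference;
# the strong user first removes the weak user's signal via successive interference cancellation.
r_weak = np.log2(1 + alpha * p_total * g_weak /
                 ((1 - alpha) * p_total * g_weak + noise))
r_strong = np.log2(1 + (1 - alpha) * p_total * g_strong / noise)

print(jains_fairness([r_weak, r_strong]))
```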
(This article belongs to the Special Issue Advances in Energy-Efficient Computer and Network Systems)

Article
Joining Federated Learning to Blockchain for Digital Forensics in IoT
Computers 2023, 12(8), 157; https://doi.org/10.3390/computers12080157 - 03 Aug 2023
Abstract
The Internet of Things (IoT) is ushering in a new era of technology by embedding smart devices in every aspect of our lives. Smart devices in IoT environments are proliferating and storing large amounts of sensitive data, which attracts many cybersecurity threats. When such attacks occur, digital forensics is needed to investigate when and where the attacks happened and to gather evidence identifying the persons responsible. However, digital forensics in an IoT environment is a challenging area of research due to the multiple locations that contain data, the traceability of collected evidence, the need to ensure integrity, the difficulty of accessing data from multiple sources, and the need for transparency in the evidence-collection process. For this reason, we propose combining two promising technologies to provide a sufficient solution. We use federated learning to train models locally on the data stored on the IoT devices, using a dataset designed to represent attacks on the IoT environment. Aggregation is then performed via blockchain, collecting the parameters from the IoT gateway to keep the blockchain lightweight. The results of our framework are promising in terms of gas consumed on the blockchain, with an accuracy of over 98% using an MLP in the federated learning phase.
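The aggregation step can be pictured as federated averaging (FedAvg) performed before the parameters reach the chain; the sketch below shows only that averaging step, with layer shapes and client sample counts invented for illustration (the on-chain part is omitted):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client model parameters (FedAvg).

    client_weights: one list of ndarrays per client (one ndarray per layer)
    client_sizes:   number of local training samples on each client
    """
    total = sum(client_sizes)
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Three IoT clients, each holding the parameters of a small two-layer MLP.
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 8)), rng.normal(size=8)] for _ in range(3)]
sizes = [120, 300, 80]

global_model = fed_avg(clients, sizes)
print([p.shape for p in global_model])   # [(4, 8), (8,)]
```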
(This article belongs to the Special Issue Using New Technologies on Cyber Security Solutions)

Article
Is the Privacy Paradox a Domain-Specific Phenomenon?
Computers 2023, 12(8), 156; https://doi.org/10.3390/computers12080156 - 02 Aug 2023
Abstract
The digital era introduces significant challenges for privacy protection, and these grow constantly as technology advances. Privacy is a personal trait, and individuals may desire different levels of privacy, known as their “privacy concern”. To achieve privacy, an individual has to act in the digital world, taking steps that define their “privacy behavior”. A gap has been found between people’s privacy concern and their privacy behavior, a phenomenon called the “privacy paradox”. In this research, we investigated whether the privacy paradox is domain-specific; in other words, does it vary for an individual when that person moves between different domains, for example, when using e-Health services vs. online social networks? A unique metric was developed to estimate the paradox in a way that enables comparisons, and an empirical study was conducted in which 437 validated participants acted in eight domains. It was found that the domain does indeed affect the magnitude of the privacy paradox. This finding has profound significance both for understanding the privacy paradox phenomenon and for developing effective means to protect privacy.
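The paper's comparison-friendly metric is not specified in the abstract; one plausible formalization, purely for illustration and not the authors' actual measure, scores the paradox per domain as the normalized gap between stated concern and observed protective behavior:

```python
def privacy_paradox(concern, behavior, scale_max=7.0):
    """Illustrative paradox score in [-1, 1]: positive values mean the
    respondent is more concerned than their behavior reflects.

    concern, behavior: scores on the same 1..scale_max Likert-type scale
    (the scale itself is an assumption of this sketch).
    """
    return (concern - behavior) / (scale_max - 1.0)

# One hypothetical participant across two domains.
print(privacy_paradox(concern=6.2, behavior=3.1))  # e-Health
print(privacy_paradox(concern=5.8, behavior=5.5))  # online social network
```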
(This article belongs to the Special Issue Using New Technologies on Cyber Security Solutions)

Article
A Comprehensive Approach to Image Protection in Digital Environments
Computers 2023, 12(8), 155; https://doi.org/10.3390/computers12080155 - 02 Aug 2023
Abstract
Protecting the integrity of images has become a growing concern due to the ease of manipulation and unauthorized dissemination of visual content. This article presents a comprehensive approach to safeguarding the authenticity and reliability of images through watermarking techniques. The main goal is to develop effective strategies that preserve the visual quality of images and resist various attacks. The work focuses on developing a watermarking algorithm in Python, implemented with embedding in the spatial domain, transformation in the frequency domain, and pixel modification techniques. A thorough evaluation of efficiency, accuracy, and robustness is performed using numerical metrics and visual assessment to validate the embedded watermarks. The results demonstrate the algorithm’s effectiveness in protecting the integrity of the images, although some attacks may cause visible degradation. A comparison with related works highlights the relevance and effectiveness of the proposed techniques. It is concluded that watermarks provide an additional layer of protection in applications where the authenticity and integrity of the image are essential. Finally, the importance of future research addressing improvements and new applications to strengthen the protection of images and other digital media is highlighted.
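The abstract names spatial-domain embedding as one of the techniques; least-significant-bit (LSB) embedding is the canonical spatial-domain method, so here is a minimal sketch assuming LSB, which is not necessarily the paper's exact algorithm:

```python
import numpy as np

def embed_lsb(image, watermark):
    """Embed a binary watermark in the least significant bit of each pixel."""
    assert image.shape == watermark.shape
    return (image & 0xFE) | (watermark & 1)

def extract_lsb(image):
    """Recover the binary watermark from the least significant bits."""
    return image & 1

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)   # toy grayscale image
mark = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)      # binary watermark

stego = embed_lsb(cover, mark)
assert np.array_equal(extract_lsb(stego), mark)
# Each pixel changes by at most 1 intensity level, preserving visual quality.
print("max pixel change:", int(np.abs(stego.astype(int) - cover.astype(int)).max()))
```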

Article
Cooperative Vehicles versus Non-Cooperative Traffic Light: Safe and Efficient Passing
Computers 2023, 12(8), 154; https://doi.org/10.3390/computers12080154 - 30 Jul 2023
Abstract
Connected and automated vehicles (CAVs) will be a key component of future cooperative intelligent transportation systems (C-ITS). Since the adoption of C-ITS is not foreseen to happen instantly, not all of its elements are going to be connected at the early deployment stages. We consider a scenario where vehicles approaching a traffic light are connected to each other, but the traffic light itself is not cooperative. Information about intended trajectories, such as decisions on how and when to accelerate, decelerate, and stop, is communicated among the vehicles involved. We provide an optimization-based procedure for efficient and safe passing of traffic lights (or other temporary road blockages) using vehicle-to-vehicle (V2V) communication. We locally optimize objectives that promote efficiency, such as less deceleration and a larger minimum velocity, while maintaining safety in terms of no collisions. The procedure is computationally efficient, as it mainly involves a gradient descent algorithm for a single parameter.
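To make "gradient descent on a single parameter" concrete, here is a hedged sketch with a hypothetical objective: choose one constant deceleration that reaches a target speed at the stop line while penalizing harsh braking. The objective, constants, and units are illustrative, not the paper's formulation:

```python
def cost(a, v0=14.0, v_target=6.0, d=80.0, comfort=10.0):
    """Hypothetical scalar objective: pick a constant deceleration `a` (m/s^2)
    so the vehicle arrives at the stop line d metres ahead at v_target,
    trading speed-tracking error against passenger comfort (the a^2 term)."""
    v_end_sq = v0 ** 2 - 2 * a * d            # kinematics: v^2 = v0^2 - 2 a d
    return comfort * a ** 2 + (v_end_sq - v_target ** 2) ** 2

def descend(f, x0, lr=1e-5, steps=500, h=1e-6):
    """Plain gradient descent on one scalar, using a numeric central difference."""
    x = x0
    for _ in range(steps):
        x -= lr * (f(x + h) - f(x - h)) / (2 * h)
    return x

a_star = descend(cost, x0=0.0)
print(f"chosen deceleration: {a_star:.3f} m/s^2")   # close to 1.0 for these numbers
```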
(This article belongs to the Special Issue Cooperative Vehicular Networking 2023)

Review
Impact of the Implementation of ChatGPT in Education: A Systematic Review
Computers 2023, 12(8), 153; https://doi.org/10.3390/computers12080153 - 29 Jul 2023
Abstract
The aim of this study is to present, based on a systematic review of the literature, an analysis of the impact of the ChatGPT tool in education. The data were obtained by reviewing the results of studies published since the launch of this application (November 2022) in three leading scientific databases in the world of education (Web of Science, Scopus, and Google Scholar). The sample consisted of 12 studies. Using a descriptive and quantitative methodology, the most significant data are presented. The results show that the implementation of ChatGPT in the educational environment has a positive impact on the teaching–learning process; however, they also highlight the importance of training teachers to use the tool properly. Although ChatGPT can enhance the educational experience, its successful implementation requires teachers to be familiar with its operation. These findings provide a solid basis for future research and decision-making regarding the use of ChatGPT in the educational context.
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning)

Article
Automated Diagnosis of Prostate Cancer Using mpMRI Images: A Deep Learning Approach for Clinical Decision Support
Computers 2023, 12(8), 152; https://doi.org/10.3390/computers12080152 - 28 Jul 2023
Abstract
Prostate cancer (PCa) is a significant health concern for men worldwide, where early detection and effective diagnosis can be crucial for successful treatment. Multiparametric magnetic resonance imaging (mpMRI) has evolved into a significant imaging modality in this regard, providing detailed images of the anatomy and tissue characteristics of the prostate gland. However, interpreting mpMRI images can be challenging for humans due to the wide range of appearances and features of PCa, which can be subtle and difficult to distinguish from normal prostate tissue. Deep learning (DL) approaches can be beneficial in this regard by automatically differentiating relevant features and providing an automated diagnosis of PCa. DL models can assist the existing clinical decision support system by saving a physician’s time in localizing regions of interest (ROIs) and help in providing better patient care. In this paper, contemporary DL models are used to create a pipeline for the segmentation and classification of mpMRI images. Our DL approach follows two steps: a U-Net architecture for segmenting the ROI in the first stage and a long short-term memory (LSTM) network for classifying the ROI as either cancerous or non-cancerous. We trained our DL models on the I2CVB (Initiative for Collaborative Computer Vision Benchmarking) dataset and conducted a thorough comparison under our experimental setup. Our proposed DL approach, with simpler architectures and a training strategy that uses a single dataset, outperforms existing techniques in the literature. Results demonstrate that the proposed approach can detect PCa with high precision and has high potential to improve clinical assessment.
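A compact sketch of the two-stage pipeline the abstract describes: a heavily reduced U-Net produces an ROI mask, and an LSTM classifies the masked region. Layer sizes, input resolution, and the masking step are assumptions for illustration, not the paper's configuration:

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Reduced U-Net: one down/up level, enough to show the skip connection."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.MaxPool2d(2),
                                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 1))      # 1-channel mask logits

    def forward(self, x):
        e = self.enc(x)
        u = self.up(self.down(e))
        return self.dec(torch.cat([u, e], dim=1))          # skip connection

class ROIClassifier(nn.Module):
    """LSTM over the rows of the masked ROI, ending in a cancer/non-cancer logit."""
    def __init__(self, width=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=width, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, 1)

    def forward(self, roi):                  # roi: (batch, height, width)
        _, (h, _) = self.lstm(roi)
        return self.head(h[-1])

seg, clf = TinyUNet(), ROIClassifier()
slices = torch.randn(2, 1, 64, 64)           # a toy batch of mpMRI slices
mask = torch.sigmoid(seg(slices))
logits = clf((mask * slices).squeeze(1))     # classify the masked region
print(mask.shape, logits.shape)
```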
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain)

Article
Convolutional Neural Networks: A Survey
Computers 2023, 12(8), 151; https://doi.org/10.3390/computers12080151 - 28 Jul 2023
Abstract
Artificial intelligence (AI) has become a cornerstone of modern technology, revolutionizing industries from healthcare to finance. Convolutional neural networks (CNNs) are a subset of AI that have emerged as a powerful tool for various tasks, including image recognition, speech recognition, natural language processing (NLP), and even genomics, where they have been used to classify DNA sequences. This paper provides a comprehensive overview of CNNs and their applications in image recognition tasks. It first introduces the fundamentals of CNNs, including their layers, the convolution operation, feature maps, activation functions, and training methods. It then discusses several popular CNN architectures, such as LeNet, AlexNet, VGG, ResNet, and InceptionNet, and compares their performance. It also examines when to use CNNs, their advantages and limitations, and provides recommendations for developers and data scientists, including preprocessing the data, choosing appropriate hyperparameters, and evaluating model performance. It further explores the existing platforms and libraries for CNNs, such as TensorFlow, Keras, PyTorch, Caffe, and MXNet, and compares their features and functionalities. Moreover, it estimates the cost of using CNNs and discusses potential cost-saving strategies. Finally, it reviews recent developments in CNNs, including attention mechanisms, capsule networks, transfer learning, adversarial training, quantization and compression, and enhancing the reliability and efficiency of CNNs through formal methods. The paper concludes by summarizing the key takeaways and discussing future directions of CNN research and development.
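As a concrete instance of the layer sequence the survey covers (a convolution producing feature maps, an activation function, pooling, and a fully connected head), a minimal PyTorch example with sizes chosen arbitrarily for a 28x28 grayscale input:

```python
import torch
import torch.nn as nn

# Minimal CNN: convolution -> activation -> pooling -> fully connected head.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # convolution producing 8 feature maps
    nn.ReLU(),                                  # activation function
    nn.MaxPool2d(2),                            # downsample 28x28 -> 14x14
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 10),                 # class scores for 10 classes
)

x = torch.randn(4, 1, 28, 28)                   # a batch of 4 images
print(model(x).shape)                           # torch.Size([4, 10])
```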

Article
The Generation of Articulatory Animations Based on Keypoint Detection and Motion Transfer Combined with Image Style Transfer
Computers 2023, 12(8), 150; https://doi.org/10.3390/computers12080150 - 28 Jul 2023
Abstract
Knowing the correct positioning of the tongue and mouth for pronunciation is crucial for learning English pronunciation correctly. Articulatory animation is an effective way to address this task and is helpful to English learners. However, articulatory animations have traditionally been hand-drawn, and different situations require varying animation styles, so a comprehensive redraw of all the articulatory animations would otherwise be necessary. To address this issue, we developed a method for the automatic generation of articulatory animations using a deep learning system. Our method leverages an automatic keypoint-based detection network, a motion transfer network, and a style transfer network to generate a series of articulatory animations that adhere to the desired style. By inputting a target-style articulation image, our system is capable of producing animations with the desired characteristics. We created a dataset of articulation images and animations from public sources, including the International Phonetic Association (IPA), to establish our articulation image animation dataset. We preprocessed the articulation images by segmenting them into distinct areas, each corresponding to a specific articulatory part, such as the tongue, upper jaw, lower jaw, soft palate, and vocal cords. We trained a deep neural network model capable of automatically detecting the keypoints in typical articulation images. We also trained a generative adversarial network (GAN) model that automatically generates end-to-end animations of different styles from the characteristics of the keypoints and the learned image style. To train a relatively robust model, we used four different style videos: one magnetic resonance imaging (MRI) articulatory video and three hand-drawn videos. For further applications, we combined consonant and vowel animations to generate syllable animations and animations of words consisting of many syllables. Experiments show that this system can auto-generate articulatory animations according to input phonetic symbols and should help people correct their English articulation.
(This article belongs to the Topic Selected Papers from ICCAI 2023 and IMIP 2023)

Article
The Impact of the Web Data Access Object (WebDAO) Design Pattern on Productivity
Computers 2023, 12(8), 149; https://doi.org/10.3390/computers12080149 - 27 Jul 2023
Abstract
In contemporary software development, it is crucial to adhere to design patterns, because well-organized and readily maintainable source code facilitates bug fixes and the development of new features. A carefully selected set of design patterns can have a significant impact on the productivity of software development. Data Access Object (DAO) is a frequently used design pattern that provides an abstraction layer between the application and the database, and it traditionally resides in the back-end. With the rise of serverless development, more and more applications use the DAO design pattern but move it to the front-end. We refer to this pattern as WebDAO. It is evident that the DAO pattern improves development productivity, but this has never been demonstrated for WebDAO. Here, we evaluated open source Angular projects to determine whether they use WebDAO. For automatic evaluation, we trained a natural language processing (NLP) model that recognizes the WebDAO design pattern with 92% accuracy. On the basis of the results, we analyzed the entire history of the projects and show how the WebDAO design pattern impacts productivity, as measured by the number of commits, changes, and issues.
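WebDAO itself lives in Angular front-ends; to keep one language across the examples here, the shape of the DAO abstraction is sketched in Python. The interface and class names are invented for illustration; the point is that callers depend on the interface, so the data source can be swapped without touching application code:

```python
from abc import ABC, abstractmethod

class UserDAO(ABC):
    """Abstraction layer: callers depend on this interface, not on where data lives."""
    @abstractmethod
    def get(self, user_id: int) -> dict: ...
    @abstractmethod
    def save(self, user: dict) -> None: ...

class HttpUserDAO(UserDAO):
    """A WebDAO-style implementation would call a REST endpoint here."""
    def __init__(self, base_url: str):
        self.base_url = base_url
    def get(self, user_id: int) -> dict:
        # e.g. requests.get(f"{self.base_url}/users/{user_id}").json()
        raise NotImplementedError
    def save(self, user: dict) -> None:
        raise NotImplementedError

class InMemoryUserDAO(UserDAO):
    """Swap-in test double: application code is unchanged."""
    def __init__(self):
        self._store: dict[int, dict] = {}
    def get(self, user_id: int) -> dict:
        return self._store[user_id]
    def save(self, user: dict) -> None:
        self._store[user["id"]] = user

dao: UserDAO = InMemoryUserDAO()
dao.save({"id": 1, "name": "Ada"})
print(dao.get(1))
```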
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)

Article
Toward Improved Machine Learning-Based Intrusion Detection for Internet of Things Traffic
Computers 2023, 12(8), 148; https://doi.org/10.3390/computers12080148 - 27 Jul 2023
Abstract
The rapid development of Internet of Things (IoT) networks has revealed multiple security issues. Meanwhile, machine learning (ML) has proven its efficiency in building intrusion detection systems (IDSs) intended to reinforce the security of IoT networks. The successful design and implementation of such techniques requires effective methods for ensuring data and model quality. This paper presents an empirical analysis of the impact of data and model quality in a multi-class classification scenario. A series of experiments were conducted using six ML models along with four benchmark datasets: UNSW-NB15, BOT-IoT, ToN-IoT, and Edge-IIoT. The proposed framework investigates the marginal benefit of employing data pre-processing and model configurations that respect IoT limitations. The empirical findings indicate that the accuracy of ML-based IDS detection increases rapidly when quality-focused data and model methods are deployed. Specifically, data cleaning, transformation, normalization, and dimensionality reduction, along with model parameter tuning, exhibit significant potential to minimize computational complexity and yield better performance. In addition, MLP- and clustering-based algorithms outperformed the remaining models, with accuracy reaching up to 99.97%. The performance of the challenger models was assessed on similar test sets and compared against the results reported in related research.
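The quality-focused steps the abstract lists (normalization, dimensionality reduction, parameter tuning, an MLP) map naturally onto a scikit-learn pipeline. A sketch on synthetic data standing in for flow features, with the grid and sizes chosen arbitrarily rather than taken from the paper:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for flow features from a dataset such as ToN-IoT.
X, y = make_classification(n_samples=2000, n_features=30, n_informative=12,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),       # normalization
    ("pca", PCA(n_components=10)),     # dimensionality reduction
    ("mlp", MLPClassifier(max_iter=500, random_state=0)),
])

# Model parameter tuning: a small grid keeps the sketch fast.
search = GridSearchCV(pipe, {"mlp__hidden_layer_sizes": [(32,), (64, 32)]}, cv=3)
search.fit(X_tr, y_tr)
print("test accuracy:", search.score(X_te, y_te))
```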
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
Feature Selection with Weighted Ensemble Ranking for Improved Classification Performance on the CSE-CIC-IDS2018 Dataset
Computers 2023, 12(8), 147; https://doi.org/10.3390/computers12080147 - 25 Jul 2023
Abstract
Feature selection is a crucial step in machine learning, aiming to identify the most relevant features in high-dimensional data in order to reduce the computational complexity of model development and improve generalization performance. Ensemble feature-ranking methods combine the results of several feature-selection techniques to identify a subset of the most relevant features for a given task. In many cases, they produce a more comprehensive ranking of features than the individual methods used alone. This paper presents a novel approach to ensemble feature ranking, which uses a weighted average of the individual ranking scores calculated using these individual methods. The optimal weights are determined using a Taguchi-type design of experiments. The proposed methodology significantly improves classification performance on the CSE-CIC-IDS2018 dataset, particularly for attack types where traditional average-based feature-ranking score combinations result in low classification metrics.
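A minimal sketch of the weighted-average combination itself; the weights here are fixed by hand for illustration, whereas the paper tunes them with a Taguchi-type design of experiments:

```python
import numpy as np

def weighted_ensemble_rank(score_matrix, weights):
    """Combine per-method feature scores with a weighted average.

    score_matrix: (n_methods, n_features), each row assumed min-max scaled to [0, 1]
    weights:      one weight per method
    """
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                              # normalize the weights
    combined = w @ np.asarray(score_matrix)   # weighted average per feature
    return np.argsort(combined)[::-1]         # feature indices, best first

# Three ranking methods scoring five features (illustrative numbers).
scores = [[0.9, 0.1, 0.5, 0.3, 0.7],
          [0.8, 0.2, 0.6, 0.1, 0.9],
          [0.4, 0.3, 0.9, 0.2, 0.6]]
print(weighted_ensemble_rank(scores, weights=[0.5, 0.3, 0.2]))
```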
(This article belongs to the Special Issue Advances in Database Engineered Applications 2023)

Review
Exploring the Landscape of Data Analysis: A Review of Its Application and Impact in Ecuador
Computers 2023, 12(7), 146; https://doi.org/10.3390/computers12070146 - 22 Jul 2023
Abstract
Data analysis is increasingly critical in aiding decision-making within public and private institutions. This paper scrutinizes the status quo of big data and data analysis and its applications within Ecuador, focusing on its societal, educational, and industrial impact. A detailed literature review was conducted from academic databases such as SpringerLink, Scopus, IEEE Xplore, Web of Science, and ACM, incorporating research from inception until May 2023. The search process adhered to the PRISMA statement, employing specific inclusion and exclusion criteria. The analysis revealed that the adoption of big data in Ecuador, while recent, has found noteworthy applications in six principal areas, classified using ISCED: education, science, engineering, health, social, and services. In the scientific and engineering sectors, big data has notably contributed to disaster mitigation and optimizing resource allocation in smart cities. Its application in the social sector has fortified cybersecurity and election data integrity, while in services, it has enhanced residential ICT adoption and urban planning. Health sector applications are emerging, particularly in disease prediction and patient monitoring. Educational applications predominantly involve student performance analysis and curricular evaluation. This review emphasizes that while big data’s potential is being gradually realized in Ecuador, further research, data security measures, and institutional interoperability are required to fully leverage its benefits.
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)

Article
Kernel-Based Regularized EEGNet Using Centered Alignment and Gaussian Connectivity for Motor Imagery Discrimination
Computers 2023, 12(7), 145; https://doi.org/10.3390/computers12070145 - 21 Jul 2023
Abstract
Brain–computer interfaces (BCIs) based on electroencephalography (EEG) provide a practical approach to supporting human–technology interaction. In particular, motor imagery (MI) is a widely used BCI paradigm that guides the mental rehearsal of motor tasks without physical movement. Here, we present a deep learning methodology, named kernel-based regularized EEGNet (KREEGNet), built on centered kernel alignment (CKA) and Gaussian functional connectivity, explicitly designed for EEG-based MI classification. The approach tackles the challenge of intrasubject variability caused by noisy EEG records and the lack of spatial interpretability in end-to-end frameworks applied to MI classification. KREEGNet refines the widely accepted EEGNet architecture with an additional kernel-based layer for regularized Gaussian functional connectivity estimation based on CKA. The superiority of KREEGNet is evidenced by our experimental results on binary and multiclass MI classification databases, outperforming the baseline EEGNet and other state-of-the-art methods. We further explore our model’s interpretability at the individual and group levels, utilizing classification performance measures and pruned functional connectivities. Our approach is a suitable alternative for interpretable end-to-end EEG-BCI based on deep learning.
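Centered kernel alignment has a standard closed form, CKA(K, L) = <Kc, Lc>_F / (||Kc||_F ||Lc||_F), where Kc and Lc are the centered kernel matrices. A small NumPy sketch on toy EEG-like data; the Gaussian bandwidth, the label kernel, and the data are illustrative, and this is the bare CKA computation rather than the KREEGNet layer itself:

```python
import numpy as np

def gaussian_kernel(X, sigma=1.0):
    """Pairwise Gaussian kernel matrix over the rows of X."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2 * sigma ** 2))

def cka(K, L):
    """Centered kernel alignment between two kernel matrices."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    Kc, Lc = H @ K @ H, H @ L @ H
    return np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc))

rng = np.random.default_rng(0)
epochs = rng.normal(size=(50, 8))                # 50 toy trials, 8 EEG channels
labels = (epochs[:, 0] > 0).astype(float).reshape(-1, 1)

K = gaussian_kernel(epochs)
L = labels @ labels.T + (1 - labels) @ (1 - labels).T   # ideal label kernel
print("CKA with the label kernel:", cka(K, L))
```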

Article
FGPE+: The Mobile FGPE Environment and the Pareto-Optimized Gamified Programming Exercise Selection Model—An Empirical Evaluation
Computers 2023, 12(7), 144; https://doi.org/10.3390/computers12070144 - 21 Jul 2023
Abstract
This paper aims to inform educators, policy makers, and software developers about the untapped potential of Progressive Web Applications (PWAs) in creating engaging, effective, and personalized learning experiences in programming education. We address a significant gap in the current understanding of the advantages and underutilization of PWAs within the education sector, specifically for programming education. Despite the lack of recognition of PWAs in this arena, we present an innovative approach through the Framework for Gamification in Programming Education (FGPE). This framework takes advantage of the ubiquity and ease of use of PWAs, integrating them with a Pareto-optimized gamified programming exercise selection model that ensures personalized, adaptive learning experiences by dynamically adjusting the complexity, content, and feedback of gamified exercises in response to the learner’s ongoing progress and performance. This study examines the mobile user experience of the FGPE PLE in two countries, Poland and Lithuania, providing novel insights into its applicability and efficiency. Our results demonstrate that combining advanced adaptive algorithms with the convenience of mobile technology has the potential to revolutionize programming education. The FGPE+ course group outperformed the Moodle group in terms of average perceived knowledge (M = 4.11, SD = 0.51).
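The abstract does not state the objectives of the Pareto-optimized selection model; as a generic illustration only, here is a sketch of non-dominated (Pareto-front) selection over hypothetical per-exercise scores, where the exercise names and the two objectives are invented:

```python
def pareto_front(items):
    """Return the non-dominated items, maximizing every objective.

    items: list of (name, objectives), where objectives is a tuple of scores,
    e.g. (predicted engagement, fit to learner skill).
    """
    front = []
    for name, obj in items:
        # `other` dominates `obj` if it is at least as good everywhere
        # and strictly better somewhere (guaranteed by obj != other).
        dominated = any(all(o2 >= o1 for o1, o2 in zip(obj, other)) and obj != other
                        for _, other in items)
        if not dominated:
            front.append((name, obj))
    return front

exercises = [("loops-easy", (0.9, 0.4)), ("recursion", (0.6, 0.9)),
             ("arrays", (0.5, 0.5)), ("strings", (0.9, 0.9))]
print(pareto_front(exercises))   # only the non-dominated exercises remain
```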
(This article belongs to the Special Issue Game-Based Learning, Gamification in Education and Serious Games 2023)
