Journal Description
Computers
Computers is an international, scientific, peer-reviewed, open access journal of computer science, including computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, and other databases.
- Journal Rank: CiteScore - Q2 (Computer Networks and Communications)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 17.9 days after submission; acceptance to publication takes 2.9 days (median values for papers published in this journal in the first half of 2023).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.8 (2022); 5-Year Impact Factor: 2.6 (2022)
Latest Articles
Downlink Power Allocation for CR-NOMA-Based Femtocell D2D Using Greedy Asynchronous Distributed Interference Avoidance Algorithm
Computers 2023, 12(8), 158; https://doi.org/10.3390/computers12080158 - 03 Aug 2023
Abstract
This paper focuses on downlink power allocation for a cognitive radio-based non-orthogonal multiple access (CR-NOMA) system in a femtocell environment involving device-to-device (D2D) communication. The proposed power allocation scheme employs the greedy asynchronous distributed interference avoidance (GADIA) algorithm. This research aims to optimize the power allocation in the downlink transmission, considering the unique characteristics of the CR-NOMA-based femtocell D2D system. The GADIA algorithm is utilized to mitigate interference and effectively optimize power allocation across the network. This research uses a fairness index to present a novel fairness-constrained power allocation algorithm for a downlink non-orthogonal multiple access (NOMA) system. Through extensive simulations, the maximum rate under fairness (MRF) algorithm is shown to optimize system performance while maintaining fairness among users effectively. The fairness index is demonstrated to be adaptable to various user counts, offering a specified range with excellent responsiveness. The implementation of the GADIA algorithm exhibits promising results for sub-optimal frequency band distribution within the network. Mathematical models evaluated in MATLAB further confirm the superiority of CR-NOMA over optimum power allocation NOMA (OPA) and fixed power allocation NOMA (FPA) techniques.
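The abstract above relies on a fairness index without naming it; Jain's fairness index is the standard choice in NOMA fairness studies, sketched below in Python as an assumption (the paper's actual metric may differ).

```python
# Jain's fairness index for a set of per-user rates (an assumed metric;
# the abstract does not name the exact fairness index used).
def jains_fairness(rates):
    n = len(rates)
    total = sum(rates)
    return total ** 2 / (n * sum(r * r for r in rates))

# A perfectly equal allocation yields an index of 1.0; more skewed
# allocations drop toward 1/n.
print(jains_fairness([1.0, 1.0, 1.0, 1.0]))  # 1.0
print(jains_fairness([2.0, 1.0]))            # 0.9
```

The index lies in [1/n, 1], which matches the abstract's point that the metric offers a specified range across varying user counts.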
Full article
(This article belongs to the Special Issue Advances in Energy-Efficient Computer and Network Systems)
Open Access Article
Joining Federated Learning to Blockchain for Digital Forensics in IoT
Computers 2023, 12(8), 157; https://doi.org/10.3390/computers12080157 - 03 Aug 2023
Abstract
In present times, the Internet of Things (IoT) is becoming the new era in technology by including smart devices in every aspect of our lives. Smart devices in IoT environments are increasing and storing large amounts of sensitive data, which attracts a lot of cybersecurity threats. With these attacks, digital forensics is needed to conduct investigations to identify when and where the attacks happened and acquire information to identify the persons responsible for the attacks. However, digital forensics in an IoT environment is a challenging area of research due to the multiple locations that contain data, traceability of the collected evidence, ensuring integrity, difficulty accessing data from multiple sources, and transparency in the process of collecting evidence. For this reason, we proposed combining two promising technologies to provide a sufficient solution. We used federated learning to train models locally based on data stored on the IoT devices using a dataset designed to represent attacks on the IoT environment. Afterward, we performed aggregation via blockchain by collecting the parameters from the IoT gateway to make the blockchain lightweight. The results of our framework are promising in terms of consumed gas in the blockchain and an accuracy of over 98% using MLP in the federated learning phase.
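The aggregation step described above (collecting model parameters from the IoT gateway) can be illustrated with a minimal FedAvg-style weighted average; this sketch is an assumption, since the abstract does not specify the exact aggregation rule recorded on the blockchain.

```python
# Minimal FedAvg-style aggregation sketch: average client model parameters
# weighted by local sample counts. Illustrative only; the paper's on-chain
# aggregation details are not given in the abstract.
def fedavg(client_params, client_sizes):
    total = sum(client_sizes)
    n_params = len(client_params[0])
    agg = [0.0] * n_params
    for params, size in zip(client_params, client_sizes):
        weight = size / total
        for i, p in enumerate(params):
            agg[i] += weight * p
    return agg

# Two clients with equal data sizes: the result is the plain average.
print(fedavg([[1.0, 2.0], [3.0, 4.0]], [1, 1]))  # [2.0, 3.0]
```

Aggregating only these compact parameter vectors, rather than raw data, is what keeps the blockchain lightweight in the described framework.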
Full article
(This article belongs to the Special Issue Using New Technologies on Cyber Security Solutions)
Open Access Article
Is the Privacy Paradox a Domain-Specific Phenomenon?
Computers 2023, 12(8), 156; https://doi.org/10.3390/computers12080156 - 02 Aug 2023
Abstract
The digital era introduces significant challenges for privacy protection, which grow constantly as technology advances. Privacy is a personal trait, and individuals may desire a different level of privacy, which is known as their “privacy concern”. To achieve privacy, the individual has to act in the digital world, taking steps that define their “privacy behavior”. It has been found that there is a gap between people’s privacy concern and their privacy behavior, a phenomenon that is called the “privacy paradox”. In this research, we investigated whether the privacy paradox is domain-specific; in other words, does it vary for an individual when that person moves between different domains, for example, when using e-Health services vs. online social networks? A unique metric was developed to estimate the paradox in a way that enables comparisons, and an empirical study was conducted in which validated participants acted in eight domains. It was found that the domain does indeed affect the magnitude of the privacy paradox. This finding has profound significance both for understanding the privacy paradox phenomenon and for developing effective means to protect privacy.
Full article
(This article belongs to the Special Issue Using New Technologies on Cyber Security Solutions)
Open Access Article
A Comprehensive Approach to Image Protection in Digital Environments
Computers 2023, 12(8), 155; https://doi.org/10.3390/computers12080155 - 02 Aug 2023
Abstract
Protecting the integrity of images has become a growing concern due to the ease of manipulation and unauthorized dissemination of visual content. This article presents a comprehensive approach to safeguarding images’ authenticity and reliability through watermarking techniques. The main goal is to develop effective strategies that preserve the visual quality of images and are resistant to various attacks. The work focuses on developing a watermarking algorithm in Python, implemented with embedding in the spatial domain, transformation in the frequency domain, and pixel modification techniques. A thorough evaluation of efficiency, accuracy, and robustness is performed using numerical metrics and visual assessment to validate the embedded watermarks. The results demonstrate the algorithm’s effectiveness in protecting the integrity of the images, although some attacks may cause visible degradation. Likewise, a comparison with related works is made to highlight the relevance and effectiveness of the proposed techniques. It is concluded that watermarks provide an additional layer of protection in applications where the authenticity and integrity of the image are essential. In addition, the importance of future research that addresses perspectives for improvement and new applications to strengthen the protection of images and other digital media is highlighted.
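As a hedged illustration of the spatial-domain embedding mentioned in the abstract, the sketch below hides a bit sequence in pixel least significant bits (LSB); LSB embedding is one common spatial technique, not necessarily the article's actual algorithm.

```python
# Spatial-domain watermarking sketch: embed a bit sequence in the least
# significant bit (LSB) of pixel values. This is a common spatial-domain
# technique assumed for illustration; the article's algorithm may differ.
def embed_lsb(pixels, bits):
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b  # clear the LSB, then set it to the bit
    return out

def extract_lsb(pixels, n_bits):
    return [p & 1 for p in pixels[:n_bits]]

# Each pixel changes by at most 1, so the watermark is visually negligible.
marked = embed_lsb([200, 201, 202, 203], [1, 0, 1, 1])
print(extract_lsb(marked, 4))  # [1, 0, 1, 1]
```

The fragility of LSB marks under compression or filtering is consistent with the abstract's note that some attacks can degrade the embedded watermark.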
Full article
(This article belongs to the Special Issue Current Issue and Future Directions in Multimedia Hiding and Signal Processing)
Open Access Article
Cooperative Vehicles versus Non-Cooperative Traffic Light: Safe and Efficient Passing
Computers 2023, 12(8), 154; https://doi.org/10.3390/computers12080154 - 30 Jul 2023
Abstract
Connected and automated vehicles (CAVs) will be a key component of future cooperative intelligent transportation systems (C-ITS). Since the adoption of C-ITS is not foreseen to happen instantly, not all of its elements are going to be connected at the early deployment stages. We consider a scenario where vehicles approaching a traffic light are connected to each other, but the traffic light itself is not cooperative. Information about intended trajectories, such as decisions on how and when to accelerate, decelerate, and stop, is communicated among the vehicles involved. We provide an optimization-based procedure for efficient and safe passing of traffic lights (or other temporary road blockages) using vehicle-to-vehicle (V2V) communication. We locally optimize objectives that promote efficiency, such as less deceleration and a larger minimum velocity, while maintaining safety in terms of no collisions. The procedure is computationally efficient as it mainly involves a gradient descent algorithm for a single parameter.
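The abstract notes that the procedure mainly involves gradient descent over a single parameter; the following generic one-parameter gradient-descent sketch (with an illustrative quadratic objective, not the paper's cost function) shows the idea.

```python
# Generic single-parameter gradient descent using a central-difference
# numerical derivative. The objective below is illustrative only; the
# paper's actual cost combines efficiency and safety terms.
def gradient_descent_1d(f, x0, lr=0.1, steps=200, eps=1e-6):
    x = x0
    for _ in range(steps):
        grad = (f(x + eps) - f(x - eps)) / (2 * eps)  # central difference
        x -= lr * grad
    return x

# Minimize (x - 3)^2; the iterate converges to x ≈ 3.
x_min = gradient_descent_1d(lambda x: (x - 3.0) ** 2, x0=0.0)
print(round(x_min, 3))  # 3.0
```

Because only one scalar is optimized, each vehicle's update is cheap enough to run online, which is the computational-efficiency point the abstract makes.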
Full article
(This article belongs to the Special Issue Cooperative Vehicular Networking 2023)
Open Access Review
Impact of the Implementation of ChatGPT in Education: A Systematic Review
Computers 2023, 12(8), 153; https://doi.org/10.3390/computers12080153 - 29 Jul 2023
Abstract
The aim of this study is to present, based on a systematic review of the literature, an analysis of the impact of the application of the ChatGPT tool in education. The data were obtained by reviewing the results of studies published since the launch of this application (November 2022) in three leading scientific databases in the world of education (Web of Science, Scopus and Google Scholar). The sample consisted of 12 studies. Using a descriptive and quantitative methodology, the most significant data are presented. The results show that the implementation of ChatGPT in the educational environment has a positive impact on the teaching–learning process; however, the results also highlight the importance of teachers being trained to use the tool properly. Although ChatGPT can enhance the educational experience, its successful implementation requires teachers to be familiar with its operation. These findings provide a solid basis for future research and decision-making regarding the use of ChatGPT in the educational context.
Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning)
Open Access Article
Automated Diagnosis of Prostate Cancer Using mpMRI Images: A Deep Learning Approach for Clinical Decision Support
Computers 2023, 12(8), 152; https://doi.org/10.3390/computers12080152 - 28 Jul 2023
Abstract
Prostate cancer (PCa) is a significant health concern for men worldwide, where early detection and effective diagnosis can be crucial for successful treatment. Multiparametric magnetic resonance imaging (mpMRI) has evolved into a significant imaging modality in this regard, which provides detailed images of the anatomy and tissue characteristics of the prostate gland. However, interpreting mpMRI images can be challenging for humans due to the wide range of appearances and features of PCa, which can be subtle and difficult to distinguish from normal prostate tissue. Deep learning (DL) approaches can be beneficial in this regard by automatically differentiating relevant features and providing an automated diagnosis of PCa. DL models can assist the existing clinical decision support system by saving a physician’s time in localizing regions of interest (ROIs) and help in providing better patient care. In this paper, contemporary DL models are used to create a pipeline for the segmentation and classification of mpMRI images. Our DL approach follows two steps: a U-Net architecture for segmenting ROI in the first stage and a long short-term memory (LSTM) network for classifying the ROI as either cancerous or non-cancerous. We trained our DL models on the I2CVB (Initiative for Collaborative Computer Vision Benchmarking) dataset and conducted a thorough comparison with our experimental setup. Our proposed DL approach, with simpler architectures and training strategy using a single dataset, outperforms existing techniques in the literature. Results demonstrate that the proposed approach can detect PCa disease with high precision and also has a high potential to improve clinical assessment.
Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain)
Open Access Article
Convolutional Neural Networks: A Survey
Computers 2023, 12(8), 151; https://doi.org/10.3390/computers12080151 - 28 Jul 2023
Abstract
Artificial intelligence (AI) has become a cornerstone of modern technology, revolutionizing industries from healthcare to finance. Convolutional neural networks (CNNs) are a subset of AI that have emerged as a powerful tool for various tasks including image recognition, speech recognition, natural language processing (NLP), and even the field of genomics, where they have been utilized to classify DNA sequences. This paper provides a comprehensive overview of CNNs and their applications in image recognition tasks. It first introduces the fundamentals of CNNs, including the layers of CNNs, the convolution operation (Conv_Op), feature maps (Feat_Maps), activation functions (Activ_Func), and training methods. It then discusses several popular CNN architectures such as LeNet, AlexNet, VGG, ResNet, and InceptionNet, and compares their performance. It also examines when to use CNNs, their advantages and limitations, and provides recommendations for developers and data scientists, including preprocessing the data, choosing appropriate hyperparameters (Hyper_Param), and evaluating model performance. It further explores the existing platforms and libraries for CNNs such as TensorFlow, Keras, PyTorch, Caffe, and MXNet, and compares their features and functionalities. Moreover, it estimates the cost of using CNNs and discusses potential cost-saving strategies. Finally, it reviews recent developments in CNNs, including attention mechanisms, capsule networks, transfer learning, adversarial training, quantization and compression, and enhancing the reliability and efficiency of CNNs through formal methods. The paper concludes by summarizing the key takeaways and discussing future directions of CNN research and development.
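Since the survey centers on the convolution operation (Conv_Op), a minimal 2-D convolution with valid padding and stride 1 can be sketched as follows (plain Python for clarity; real CNN frameworks vectorize this).

```python
# Minimal 2-D convolution (valid padding, stride 1), illustrating the
# core operation of CNNs. Plain Python lists for clarity; frameworks such
# as TensorFlow or PyTorch implement this with optimized tensor kernels.
def conv2d(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            # Sum of elementwise products over the kernel window.
            row.append(sum(
                image[r + i][c + j] * kernel[i][j]
                for i in range(kh) for j in range(kw)
            ))
        out.append(row)
    return out

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
edge = [[1, -1]]  # horizontal difference kernel
print(conv2d(image, edge))  # [[-1, -1], [-1, -1], [-1, -1]]
```

The output is the feature map (Feat_Map); a nonlinearity (Activ_Func) such as ReLU would typically be applied to it before the next layer.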
Full article
(This article belongs to the Special Issue Artificial Intelligence Models, Tools and Applications with A Social and Semantic Impact)
Open Access Article
The Generation of Articulatory Animations Based on Keypoint Detection and Motion Transfer Combined with Image Style Transfer
Computers 2023, 12(8), 150; https://doi.org/10.3390/computers12080150 - 28 Jul 2023
Abstract
Knowing the correct positioning of the tongue and mouth for pronunciation is crucial for learning English pronunciation correctly. Articulatory animation is an effective way to address the above task and helpful to English learners. However, articulatory animations are all traditionally hand-drawn. Different situations require varying animation styles, so a comprehensive redraw of all the articulatory animations is necessary. To address this issue, we developed a method for the automatic generation of articulatory animations using a deep learning system. Our method leverages an automatic keypoint-based detection network, a motion transfer network, and a style transfer network to generate a series of articulatory animations that adhere to the desired style. By inputting a target-style articulation image, our system is capable of producing animations with the desired characteristics. We created a dataset of articulation images and animations from public sources, including the International Phonetic Association (IPA), to establish our articulation image animation dataset. We performed preprocessing on the articulation images by segmenting them into distinct areas each corresponding to a specific articulatory part, such as the tongue, upper jaw, lower jaw, soft palate, and vocal cords. We trained a deep neural network model capable of automatically detecting the keypoints in typical articulation images. Also, we trained a generative adversarial network (GAN) model that can generate end-to-end animation of different styles automatically from the characteristics of keypoints and the learned image style. To train a relatively robust model, we used four different style videos: one magnetic resonance imaging (MRI) articulatory video and three hand-drawn videos. For further applications, we combined the consonant and vowel animations together to generate a syllable animation and the animation of a word consisting of many syllables. 
Experiments show that this system can auto-generate articulatory animations according to input phonetic symbols and should be helpful to people for English articulation correction.
Full article
(This article belongs to the Topic Selected Papers from ICCAI 2023 and IMIP 2023)
Open Access Article
The Impact of the Web Data Access Object (WebDAO) Design Pattern on Productivity
Computers 2023, 12(8), 149; https://doi.org/10.3390/computers12080149 - 27 Jul 2023
Abstract
In contemporary software development, it is crucial to adhere to design patterns because well-organized and readily maintainable source code facilitates bug fixes and the development of new features. A carefully selected set of design patterns can have a significant impact on the productivity of software development. Data Access Object (DAO) is a frequently used design pattern that provides an abstraction layer between the application and the database and is present in the back-end. As serverless development arises, more and more applications are using the DAO design pattern, but it has been moved to the front-end. We refer to this pattern as WebDAO. It is evident that the DAO pattern improves development productivity, but it has never been demonstrated for WebDAO. Here, we evaluated the open source Angular projects to determine whether they use WebDAO. For automatic evaluation, we trained a Natural Language Processing (NLP) model that can recognize the WebDAO design pattern with 92% accuracy. On the basis of the results, we analyzed the entire history of the projects and presented how the WebDAO design pattern impacts productivity, taking into account the number of commits, changes, and issues.
Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
Open Access Article
Toward Improved Machine Learning-Based Intrusion Detection for Internet of Things Traffic
Computers 2023, 12(8), 148; https://doi.org/10.3390/computers12080148 - 27 Jul 2023
Abstract
The rapid development of Internet of Things (IoT) networks has revealed multiple security issues. On the other hand, machine learning (ML) has proven its efficiency in building intrusion detection systems (IDSs) intended to reinforce the security of IoT networks. In fact, the successful design and implementation of such techniques require the use of effective methods in terms of data and model quality. This paper presents an empirical impact analysis of the latter in the context of a multi-class classification scenario. A series of experiments were conducted using six ML models, along with four benchmarking datasets: UNSW-NB15, BOT-IoT, ToN-IoT, and Edge-IIoT. The proposed framework investigates the marginal benefit of employing data pre-processing and model configurations considering IoT limitations. The empirical findings indicate that the accuracy of ML-based IDS detection rapidly increases when quality data and models are deployed. Specifically, data cleaning, transformation, normalization, and dimensionality reduction, along with model parameter tuning, exhibit significant potential to minimize computational complexity and yield better performance. In addition, MLP- and clustering-based algorithms outperformed the remaining models, and the obtained accuracy reached up to 99.97%. One should note that the challenger models were assessed on similar test sets, and their performance was compared to results reported in the relevant literature.
Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
Open Access Article
Feature Selection with Weighted Ensemble Ranking for Improved Classification Performance on the CSE-CIC-IDS2018 Dataset
Computers 2023, 12(8), 147; https://doi.org/10.3390/computers12080147 - 25 Jul 2023
Abstract
Feature selection is a crucial step in machine learning, aiming to identify the most relevant features in high-dimensional data in order to reduce the computational complexity of model development and improve generalization performance. Ensemble feature-ranking methods combine the results of several feature-selection techniques to identify a subset of the most relevant features for a given task. In many cases, they produce a more comprehensive ranking of features than the individual methods used alone. This paper presents a novel approach to ensemble feature ranking, which uses a weighted average of the individual ranking scores calculated using these individual methods. The optimal weights are determined using a Taguchi-type design of experiments. The proposed methodology significantly improves classification performance on the CSE-CIC-IDS2018 dataset, particularly for attack types where traditional average-based feature-ranking score combinations result in low classification metrics.
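The weighted-average combination of individual ranking scores described above can be sketched as follows; the example scores and equal weights are placeholders, since the paper determines the optimal weights via a Taguchi-type design of experiments.

```python
# Weighted ensemble feature ranking sketch: combine per-method feature
# scores with a weighted average, then rank features by the combined score.
# Scores and weights below are illustrative placeholders; the paper tunes
# the weights with a Taguchi-type design of experiments.
def ensemble_rank(score_lists, weights):
    n_features = len(score_lists[0])
    combined = [
        sum(w * scores[i] for scores, w in zip(score_lists, weights)) / sum(weights)
        for i in range(n_features)
    ]
    # Feature indices ordered from highest to lowest combined score.
    return sorted(range(n_features), key=lambda i: combined[i], reverse=True)

scores_a = [0.9, 0.1, 0.5]  # e.g., information-gain scores (illustrative)
scores_b = [0.6, 0.2, 0.8]  # e.g., chi-squared scores (illustrative)
print(ensemble_rank([scores_a, scores_b], weights=[0.5, 0.5]))  # [0, 2, 1]
```

Replacing the equal weights with tuned values is exactly where the paper's design-of-experiments step improves on a plain average of ranking scores.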
Full article
(This article belongs to the Special Issue Advances in Database Engineered Applications 2023)
Open Access Review
Exploring the Landscape of Data Analysis: A Review of Its Application and Impact in Ecuador
Computers 2023, 12(7), 146; https://doi.org/10.3390/computers12070146 - 22 Jul 2023
Abstract
Data analysis is increasingly critical in aiding decision-making within public and private institutions. This paper scrutinizes the status quo of big data and data analysis and its applications within Ecuador, focusing on its societal, educational, and industrial impact. A detailed literature review was conducted from academic databases such as SpringerLink, Scopus, IEEE Xplore, Web of Science, and ACM, incorporating research from inception until May 2023. The search process adhered to the PRISMA statement, employing specific inclusion and exclusion criteria. The analysis revealed that data implementation in Ecuador, while recent, has found noteworthy applications in six principal areas, classified using ISCED: education, science, engineering, health, social, and services. In the scientific and engineering sectors, big data has notably contributed to disaster mitigation and optimizing resource allocation in smart cities. Its application in the social sector has fortified cybersecurity and election data integrity, while in services, it has enhanced residential ICT adoption and urban planning. Health sector applications are emerging, particularly in disease prediction and patient monitoring. Educational applications predominantly involve student performance analysis and curricular evaluation. This review emphasizes that while big data’s potential is being gradually realized in Ecuador, further research, data security measures, and institutional interoperability are required to fully leverage its benefits.
Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
Open Access Article
Kernel-Based Regularized EEGNet Using Centered Alignment and Gaussian Connectivity for Motor Imagery Discrimination
Computers 2023, 12(7), 145; https://doi.org/10.3390/computers12070145 - 21 Jul 2023
Abstract
Brain–computer interfaces (BCIs) based on electroencephalography (EEG) provide a practical approach to support human–technology interaction. In particular, motor imagery (MI) is a widely used BCI paradigm that guides the mental rehearsal of motor tasks without physical movement. Here, we present a deep learning methodology, named kernel-based regularized EEGNet (KREEGNet), built on centered kernel alignment (CKA) and Gaussian functional connectivity, explicitly designed for EEG-based MI classification. The approach proactively tackles the challenge of intrasubject variability brought on by noisy EEG records and the lack of spatial interpretability within end-to-end frameworks applied to MI classification. KREEGNet is a refinement of the widely accepted EEGNet architecture, featuring an additional kernel-based layer for regularized Gaussian functional connectivity estimation based on CKA. The superiority of KREEGNet is evidenced by our experimental results from binary and multiclass MI classification databases, outperforming the baseline EEGNet and other state-of-the-art methods. Further exploration of our model’s interpretability is conducted at individual and group levels, utilizing classification performance measures and pruned functional connectivities. Our approach is a suitable alternative for interpretable end-to-end EEG-BCI based on deep learning.
Full article
(This article belongs to the Special Issue Artificial Intelligence Models, Tools and Applications with A Social and Semantic Impact)
Open Access Article
FGPE+: The Mobile FGPE Environment and the Pareto-Optimized Gamified Programming Exercise Selection Model—An Empirical Evaluation
Computers 2023, 12(7), 144; https://doi.org/10.3390/computers12070144 - 21 Jul 2023
Abstract
This paper is poised to inform educators, policy makers, and software developers about the untapped potential of PWAs in creating engaging, effective, and personalized learning experiences in the field of programming education. We aim to address a significant gap in the current understanding of the potential advantages and underutilisation of Progressive Web Applications (PWAs) within the education sector, specifically for programming education. Despite the evident lack of recognition of PWAs in this arena, we present an innovative approach through the Framework for Gamification in Programming Education (FGPE). This framework takes advantage of the ubiquity and ease of use of PWAs, integrating them with a Pareto-optimized gamified programming exercise selection model that ensures personalized adaptive learning experiences by dynamically adjusting the complexity, content, and feedback of gamified exercises in response to the learners’ ongoing progress and performance. This study examines the mobile user experience of the FGPE PLE in different countries, namely Poland and Lithuania, providing novel insights into its applicability and efficiency. Our results demonstrate that combining advanced adaptive algorithms with the convenience of mobile technology has the potential to revolutionize programming education. The FGPE+ course group outperformed the Moodle group in terms of average perceived knowledge (M = 4.11, SD = 0.51).
Full article
(This article belongs to the Special Issue Game-Based Learning, Gamification in Education and Serious Games 2023)
Open Access Article
Adaptive Gamification in Science Education: An Analysis of the Impact of Implementation and Adapted Game Elements on Students’ Motivation
Computers 2023, 12(7), 143; https://doi.org/10.3390/computers12070143 - 18 Jul 2023
Abstract
In recent years, gamification has captured the attention of researchers and educators, particularly in science education, where students often express negative emotions. Gamification methods aim to motivate learners to participate in learning by incorporating intrinsic and extrinsic motivational factors. However, gamification has yielded varying outcomes, prompting researchers to explore adaptive gamification as an alternative approach. Nevertheless, more research is needed on adaptive gamification approaches, particularly concerning motivation, which is the primary objective of gamification. In this study, we developed and tested an adaptive gamification environment based on specific motivational and psychological frameworks. This environment incorporated adaptive criteria, learning strategies, gaming elements, and all crucial aspects of science education for six classes of third-grade students in primary school. We employed a quantitative approach to gain insights into the motivational impact on students and their perception of the adaptive gamification application. We aimed to understand how each game element experienced by students influenced their motivation. Based on our findings, students were more motivated to learn science when using an adaptive gamification environment. Additionally, the adaptation process was largely successful, as students generally liked the game elements integrated into their lessons, indicating the effectiveness of the multidimensional framework employed in enhancing students’ experiences and engagement.
(This article belongs to the Special Issue Game-Based Learning, Gamification in Education and Serious Games 2023)
Open Access Article
Efficient Day-Ahead Scheduling of PV-STATCOMs in Medium-Voltage Distribution Networks Using a Second-Order Cone Relaxation
Computers 2023, 12(7), 142; https://doi.org/10.3390/computers12070142 - 18 Jul 2023
Abstract
This paper utilizes convex optimization to implement a day-ahead scheduling strategy for operating a photovoltaic distribution static compensator (PV-STATCOM) in medium-voltage distribution networks. The nonlinear non-convex programming model of the day-ahead scheduling strategy is transformed into a convex optimization model using the second-order cone programming approach in the complex domain. The main goal of efficiently operating PV-STATCOMs in distribution networks is to dynamically compensate for the active and reactive power generated by renewable energy resources such as photovoltaic plants. This is achieved by controlling power electronic converters, usually voltage source converters, to manage reactive power with lagging or leading power factors. Numerical simulations were conducted to analyze the effects of different power factors on the IEEE 33- and 69-bus systems. The simulations considered operations with a unity power factor (active power injection only), a zero power factor (reactive power injection only), and a variable power factor (active and reactive power injections). The results demonstrated the benefits of dynamic, active and reactive power compensation in reducing grid power losses, voltage profile deviations, and energy purchasing costs at the substation terminals. These simulations were conducted using the CVX tool and the Gurobi solver in the MATLAB programming environment.
(This article belongs to the Special Issue Feature Papers in Computers 2023)
Open Access Article
A Deep Learning Network with Aggregation Residual Transformation for Human Activity Recognition Using Inertial and Stretch Sensors
Computers 2023, 12(7), 141; https://doi.org/10.3390/computers12070141 - 17 Jul 2023
Abstract
With the rise of artificial intelligence, sensor-based human activity recognition (S-HAR) is increasingly being employed in healthcare monitoring for the elderly, fitness tracking, and patient rehabilitation using smart devices. Inertial sensors have commonly been used for S-HAR, but in recent years users have demanded more comfortable and flexible wearable devices. Consequently, there has been an effort to incorporate stretch sensors into S-HAR with the advancement of flexible electronics technology. This paper presents a deep learning network model, utilizing aggregation residual transformation, that can efficiently extract spatial–temporal features and perform activity classification. The efficacy of the suggested model was assessed using the w-HAR dataset, which includes both inertial and stretch sensor data. This dataset was used to train and test five fundamental deep learning models (CNN, LSTM, BiLSTM, GRU, and BiGRU), along with the proposed model. The primary objective of the w-HAR investigations was to determine the feasibility of utilizing stretch sensors for recognizing human actions. Additionally, this study aimed to explore the effectiveness of combining data from both inertial and stretch sensors in S-HAR. The results clearly demonstrate the effectiveness of the proposed approach in enhancing HAR using inertial and stretch sensors. The deep learning model we presented achieved an impressive accuracy of 97.68%. Notably, our method outperformed existing approaches and demonstrated excellent generalization capabilities.
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain)
Open Access Article
Developing a Sustainable Online Platform for Language Learning across Europe
Computers 2023, 12(7), 140; https://doi.org/10.3390/computers12070140 - 15 Jul 2023
Abstract
In this paper, we present a sustainable approach for addressing the language skills gap among EU citizens, which significantly hinders their mobility across the EU and their participation in education, training, and youth programmes. Our approach is based on the sustainable design of the OpenLang Network platform, which provides an open and collaborative online learning environment for language learners and teachers across Europe and addresses the limitations of existing computer-assisted language learning approaches. The OpenLang Network platform brings together educators and Erasmus+ mobility participants to improve their language skills and cultural knowledge. To this end, the OpenLang Network platform offers a collection of multilingual Open Educational Resources and language learning services. The paper presents the results of the user evaluation of the platform, which was conducted with members of its community of language teachers and learners. A mixed methods approach was adopted to collect and analyse both qualitative and quantitative data from users about the sustainable design of the OpenLang Network platform, as well as to measure user satisfaction with the platform’s language learning services. According to the user evaluation results, the platform offers a sustainable online environment and a positive user experience for language learning. The user evaluation has also helped us identify a set of best practices and challenges associated with the long-term sustainability of an online language learning community.
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning)
Open Access Article
An Experimental Approach to Estimation of the Energy Cost of Dynamic Branch Prediction in an Intel High-Performance Processor
Computers 2023, 12(7), 139; https://doi.org/10.3390/computers12070139 - 11 Jul 2023
Abstract
Power and energy efficiency are among the most crucial requirements in high-performance and other computing platforms. In this work, extensive experimental methods and procedures were used to assess the power and energy efficiency of fundamental hardware building blocks inside a typical high-performance CPU, focusing on the dynamic branch predictor (DBP). The investigation relied on the Running Average Power Limit (RAPL) interface from Intel, a software tool for credibly reporting the power and energy based on instrumentation inside the CPU. We used well-known microbenchmarks under various run conditions to explore potential pitfalls and to develop precautions to raise the precision of the measurements obtained from RAPL for more reliable power estimation. The authors discuss the factors that affect the measurements and share the difficulties encountered and the lessons learned.
(This article belongs to the Special Issue System-Integrated Intelligence and Intelligent Systems 2023)
News
31 July 2023
MDPI’s 2022 Best PhD Thesis Awards in Computer Science and Mathematics—Winners Announced
31 July 2023
MDPI’s 2022 Young Investigator Awards in Computer Science and Mathematics—Winners Announced
Topics
Topic in
Applied Sciences, Computers, Digital, Electronics, Smart Cities
Artificial Intelligence Models, Tools and Applications
Topic Editors: Phivos Mylonas, Katia Lida Kermanidis, Manolis Maragoudakis
Deadline: 31 August 2023
Topic in
Computers, Entropy, Information, Mathematics
Selected Papers from ICCAI 2023 and IMIP 2023
Topic Editors: Zhitao Xiao, Guangxu Li
Deadline: 31 October 2023
Topic in
Applied Sciences, BDCC, Computers, Electronics, JSAN, Inventions, Technologies, Telecom
Electronic Communications, IOT and Big Data
Topic Editors: Teen-Hang Meen, Charles Tijus, Cheng-Chien Kuo, Kuei-Shu Hsu, Kuo-Kuang Fan, Jih-Fu Tu
Deadline: 30 November 2023
Topic in
Applied Sciences, Computers, Electronics, Sensors, Virtual Worlds
Simulations and Applications of Augmented and Virtual Reality
Topic Editors: Radu Comes, Dorin-Mircea Popovici, Calin Gheorghe Dan Neamtu, Jing-Jing Fang
Deadline: 20 December 2023
Conferences
Special Issues
Special Issue in
Computers
Using New Technologies on Cyber Security Solutions
Guest Editors: Ömer Aslan, Refik Samet
Deadline: 20 August 2023
Special Issue in
Computers
Future Systems Based on Healthcare 5.0 for Pandemic Preparedness
Guest Editors: Radhya Sahal, Xuhui Chen, Hager Saleh
Deadline: 31 August 2023
Special Issue in
Computers
Selected Papers from 18th Iberian Conference on Information Systems and Technologies (CISTI'2023)
Guest Editor: Álvaro Rocha
Deadline: 20 September 2023
Special Issue in
Computers
Recent Advances in Quantum Computing
Guest Editor: Majid Haghparast
Deadline: 30 September 2023