Search Results (916)

Search Parameters:
Journal = Computation

Article
A Parametric Family of Triangular Norms and Conorms with an Additive Generator in the Form of an Arctangent of a Linear Fractional Function
Computation 2023, 11(8), 155; https://doi.org/10.3390/computation11080155 - 08 Aug 2023
Abstract
At present, fuzzy modeling has established itself as an effective tool for designing and developing systems used to solve problems of control, diagnostics, forecasting, and decision making. One of the most important problems is the choice and justification of an appropriate functional representation of the main fuzzy operations. It is known that, in the class of rational functions, such operations can be represented by additive generators in the form of a linear fractional function, a logarithm of a linear fractional function, and an arctangent of a linear fractional function. The paper is devoted to the latter case. Restrictions on the parameters, under which the arctangent of a linear fractional function is an increasing or decreasing generator, are defined. For each case, a corresponding fuzzy operation (a triangular norm or a conorm) is constructed. The theoretical significance of the research results lies in the fact that the obtained parametric families enrich the theory of Archimedean triangular norms and conorms and provide additional opportunities for the functional representation of fuzzy operations in the framework of fuzzy modeling. In addition, we have formulated a general scheme for studying functions that can serve as additive generators and for constructing the corresponding fuzzy operations. Full article
(This article belongs to the Special Issue Control Systems, Mathematical Modeling and Automation II)
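For readers less familiar with additive generators, the standard construction (a general fact about Archimedean operations, not the specific parametric result of this paper) builds a triangular norm T from a strictly decreasing generator g with g(1) = 0, and a conorm S from a strictly increasing generator h with h(0) = 0, via the pseudo-inverse:

T(x, y) = g^{(-1)}\big(g(x) + g(y)\big), \qquad S(x, y) = h^{(-1)}\big(h(x) + h(y)\big),

and the generators studied here have the form g(x) = \arctan\left(\frac{a x + b}{c x + d}\right); the admissible restrictions on the parameters a, b, c, d are the subject of the paper and are not reproduced in this listing.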
Article
Revealing the Genetic Code Symmetries through Computations Involving Fibonacci-like Sequences and Their Properties
Computation 2023, 11(8), 154; https://doi.org/10.3390/computation11080154 - 07 Aug 2023
Viewed by 298
Abstract
In this work, we present a new way of studying the mathematical structure of the genetic code. This study relies on the use of mathematical computations involving five Fibonacci-like sequences; a few of their “seeds” or “initial conditions” are chosen according to the chemical and physical data of the three amino acids serine, arginine and leucine, playing a prominent role in a recent symmetry classification scheme of the genetic code. It appears that these mathematical sequences, of the same kind as the famous Fibonacci series, apart from their usual recurrence relations, are highly intertwined by many useful linear relationships. Using these sequences and also various sums or linear combinations of them, we derive several physical and chemical quantities of interest, such as the number of total coding codons, 61, obeying various degeneracy patterns, the detailed number of H/CNOS atoms and the integer molecular mass (or nucleon number), in the side chains of the coded amino acids and also in various degeneracy patterns, in agreement with those described in the literature. We also discover, as a by-product, an accurate description of the very chemical structure of the four ribonucleotides uridine monophosphate (UMP), cytidine monophosphate (CMP), adenosine monophosphate (AMP) and guanosine monophosphate (GMP), the building blocks of RNA whose groupings, in three units, constitute the triplet codons. In summary, we find a full mathematical and chemical connection with the “ideal sextet’s classification scheme”, which we alluded to above, as well as with others—notably, the Findley–Findley–McGlynn and Rumer’s symmetrical classifications. Full article
(This article belongs to the Special Issue Computations in Mathematics, Mathematical Education, and Science)
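As a minimal sketch of the kind of computation described above (the actual seeds in the paper are chosen from chemical data of serine, arginine, and leucine and are not reproduced here), a Fibonacci-like sequence obeys the usual recurrence but starts from arbitrary initial conditions:

```python
def fibonacci_like(seed_a, seed_b, n):
    """First n terms of F(k) = F(k-1) + F(k-2) with seeds F(0), F(1)."""
    terms = [seed_a, seed_b]
    while len(terms) < n:
        terms.append(terms[-1] + terms[-2])
    return terms[:n]

# The classical Fibonacci numbers are the special case with seeds (1, 1) ...
print(fibonacci_like(1, 1, 10))   # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
# ... while other seeds give Fibonacci-like companions, e.g. the Lucas numbers.
print(fibonacci_like(2, 1, 10))   # [2, 1, 3, 4, 7, 11, 18, 29, 47, 76]
```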
Article
Uncoupling Techniques for Multispecies Diffusion–Reaction Model
Computation 2023, 11(8), 153; https://doi.org/10.3390/computation11080153 - 04 Aug 2023
Viewed by 167
Abstract
We consider the multispecies model described by a coupled system of diffusion–reaction equations, where the coupling and nonlinearity are given in the reaction part. We construct a semi-discrete form using a finite volume approximation in space. A fully implicit scheme is used for the approximation in time, which leads to solving a coupled nonlinear system of equations at each time step. This paper presents two uncoupling techniques based on the explicit–implicit scheme and the operator-splitting method. In the explicit–implicit scheme, we take the concentration of one species in the coupling term from the previous time layer to obtain a linear, uncoupled system of equations. The second approach is based on the operator-splitting technique, where we first solve the uncoupled equations with the diffusion operator and then solve the equations with the local reaction operator. Stability estimates are derived for both proposed uncoupling schemes. We present a numerical investigation of the uncoupling techniques with varying time step sizes and different scales of the diffusion coefficient. Full article
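A minimal 1D sketch of the operator-splitting idea for two species is given below; the reaction term, boundary conditions, and parameters are illustrative placeholders, not the model, finite volume mesh, or stability analysis of the paper. Each time step first advances the two diffusion equations separately (implicitly) and then advances the local, pointwise reaction system.

```python
import numpy as np

# Minimal 1D sketch of operator splitting for a two-species diffusion-reaction
# system; the reaction below is an illustrative placeholder coupling.
n, L, T, steps = 100, 1.0, 0.1, 200
h, tau = L / n, T / steps
d_u, d_v = 1e-2, 5e-3                      # diffusion coefficients

def reaction(u, v):
    return -u * v, u * v                   # hypothetical local coupling term

def diffusion_matrix(d):
    """Backward-Euler diffusion matrix for one species, zero-flux ends."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1 + 2 * d * tau / h**2
        if i > 0:
            A[i, i - 1] = -d * tau / h**2
        if i < n - 1:
            A[i, i + 1] = -d * tau / h**2
    A[0, 0] -= d * tau / h**2              # Neumann boundary adjustment
    A[-1, -1] -= d * tau / h**2
    return A

A_u, A_v = diffusion_matrix(d_u), diffusion_matrix(d_v)
x = np.linspace(0, L, n)
u, v = np.exp(-100 * (x - 0.3)**2), np.exp(-100 * (x - 0.7)**2)

for _ in range(steps):
    # Step 1: uncoupled implicit diffusion for each species.
    u = np.linalg.solve(A_u, u)
    v = np.linalg.solve(A_v, v)
    # Step 2: local reaction step, solved pointwise (explicit Euler here).
    ru, rv = reaction(u, v)
    u, v = u + tau * ru, v + tau * rv
```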

Article
Enhancing the Hardware Pipelining Optimization Technique of the SHA-3 via FPGA
Computation 2023, 11(8), 152; https://doi.org/10.3390/computation11080152 - 03 Aug 2023
Viewed by 176
Abstract
Information is transmitted across multiple insecure routing hops as text, images, video, and audio. This multi-hop digital data transfer makes secure transmission with confidentiality and integrity imperative, and such protection of the transmitted data, including its integrity, can be achieved via hashing algorithms. The advanced cryptographic Secure Hash Algorithm 3 (SHA-3) is not susceptible to known cryptanalysis attacks and is widely preferred due to its long-term security in various applications. However, due to the ever-increasing size of the data to be transmitted, an effective improvement is required to fulfill real-time computations with multiple types of optimization. The use of FPGAs is an ideal mechanism for improving algorithm performance and other metrics, such as throughput (Gbps), frequency (MHz), efficiency (Mbps/slices), area reduction (slices), and power consumption. Providing upgraded computer architectures for SHA-3 is an active area of research, with continuous performance improvements. In this article, we have focused on enhancing the hardware performance metrics of throughput and efficiency by reducing the area cost of SHA-3 for all output lengths (224, 256, 384, and 512 bits). Our approach introduces a novel architectural design based on pipelining, combined with a simplified format for the round constant (RC) generator in the Iota (ι) step that consists of only 7 bits rather than the standard 64 bits. By reducing hardware resource utilization (area) and minimizing the amount of computation required at the Iota (ι) step, our design achieves the highest levels of throughput and efficiency. Through extensive experimentation, we have demonstrated the performance of our approach: a throughput rate of 22.94 Gbps and an efficiency rate of 19.95 Mbps/slices. Our work contributes to advancing computer architectures tailored for SHA-3, thereby unlocking new possibilities for secure and high-performance data transmission. Full article
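The design detail being exploited is a well-known property of the Keccak round constants rather than something new here: in each 64-bit RC, only the seven bit positions 2^j - 1 (j = 0, ..., 6) can ever be non-zero, so 7 bits per round suffice to rebuild the full constant. A sketch of that expansion is shown below; the compressed encoding is an assumption of this illustration, not the paper's actual hardware generator.

```python
# Keccak-f[1600] round constants have non-zero bits only at positions
# 2**j - 1 for j = 0..6, i.e., bits 0, 1, 3, 7, 15, 31, 63.
BIT_POSITIONS = [2**j - 1 for j in range(7)]   # [0, 1, 3, 7, 15, 31, 63]

def expand_rc(compressed):
    """Map a 7-bit compressed round constant (LSB-first, a hypothetical
    encoding) onto the corresponding 64-bit Iota constant."""
    rc = 0
    for j, pos in enumerate(BIT_POSITIONS):
        if (compressed >> j) & 1:
            rc |= 1 << pos
    return rc

# Bit 0 of the compressed word sets RC bit 0, matching the first round
# constant of Keccak, 0x0000000000000001.
assert expand_rc(0b0000001) == 0x0000000000000001
```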

Article
Finite Element Analysis of ACL Reconstruction-Compatible Knee Implant Design with Bone Graft Component
Computation 2023, 11(8), 151; https://doi.org/10.3390/computation11080151 - 02 Aug 2023
Viewed by 287
Abstract
Knee osteoarthritis is a musculoskeletal defect specific to the soft tissues in the knee joint and is a degenerative disease that affects millions of people. Although drug intake can slow down progression, total knee arthroplasty has been the gold standard for the treatment of this disease. This surgical procedure involves replacing the tibiofemoral joint with an implant. The most common implants used for this require the removal of either the anterior cruciate ligament (ACL) alone or both cruciate ligaments which alters the native knee joint mechanics. Bi-cruciate-retaining implants have been developed but not frequently used due to the complexity of the procedure and the occurrences of intraoperative failures such as ACL and tibial eminence rupture. In this study, a knee joint implant was modified to have a bone graft that should aid in ACL reconstruction. The mechanical behavior of the bone graft was studied through finite element analysis (FEA). The results show that the peak Christensen safety factor for cortical bone is 0.021 while the maximum shear stress of the cancellous bone is 3 MPa which signifies that the cancellous bone could fail when subjected to the ACL loads, depending on the graft shear strength which could vary depending on the graft source, while cortical bone could withstand the walking load. It would be necessary to optimize the bone graft geometry for stress distribution as well as to evaluate the effectiveness of bone healing prior to implementation. Full article
(This article belongs to the Section Computational Engineering)

Article
The Problem of Effective Evacuation of the Population from Floodplains under Threat of Flooding: Algorithmic and Software Support with Shortage of Resources
Computation 2023, 11(8), 150; https://doi.org/10.3390/computation11080150 - 01 Aug 2023
Viewed by 343
Abstract
Extreme flooding of the floodplains of large lowland rivers poses a danger to the population due to the vastness of the flooded areas. This requires the organization of safe evacuation in conditions of a shortage of temporary and transport resources due to significant differences in the moments of flooding of different spatial parts. We consider the case of a shortage of evacuation vehicles, in which the safe evacuation of the entire population to permanent evacuation points is impossible. Therefore, the evacuation is divided into two stages with the organization of temporary evacuation points on evacuation routes. Our goal is to develop a method for analyzing the minimum resource requirement for the safe evacuation of the population of floodplain territories based on a mathematical model of flood dynamics and minimizing the number of vehicles on a set of safe evacuation schedules. The core of the approach is a numerical hydrodynamic model in shallow water approximation. Modeling the hydrological regime of a real water body requires a multi-layer geoinformation model of the territory with layers of relief, channel structure, and social infrastructure. High-performance computing is performed on GPUs using CUDA. The optimization problem is a variant of the resource investment problem of scheduling theory with deadlines for completing work and is solved on the basis of a heuristic algorithm. We use the results of numerical simulation of floods for the Northern part of the Volga-Akhtuba floodplain to plot the dependence of the minimum number of vehicles that ensure the safe evacuation of the population. The minimum transport resources depend on the water discharge in the Volga river, the start of the evacuation, and the localization of temporary evacuation points. The developed algorithm constructs a set of safe evacuation schedules for the minimum allowable number of vehicles in various flood scenarios. The population evacuation schedules constructed for the Volga-Akhtuba floodplain can be used in practice for various vast river valleys. Full article
(This article belongs to the Special Issue Control Systems, Mathematical Modeling and Automation II)
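In highly simplified form, the fleet-sizing question can be illustrated with an earliest-deadline-first feasibility check over candidate fleet sizes; the sketch below ignores the hydrodynamic flood timing, the two-stage evacuation, and the temporary evacuation points that the paper's heuristic accounts for, and all names and numbers are hypothetical.

```python
import heapq, math

def feasible(trips, k):
    """Can k vehicles serve all (duration, deadline) trips back to back?
    Trips are taken in deadline order by whichever vehicle frees up first."""
    free_at = [0.0] * k
    heapq.heapify(free_at)
    for duration, deadline in sorted(trips, key=lambda t: t[1]):
        start = heapq.heappop(free_at)
        if start + duration > deadline:      # trip cannot finish before flooding
            return False
        heapq.heappush(free_at, start + duration)
    return True

def min_vehicles(settlements, capacity):
    """Smallest fleet size for which the greedy schedule is feasible.
    settlements: list of (population, round_trip_time, flood_deadline)."""
    trips = []
    for population, round_trip, deadline in settlements:
        trips += [(round_trip, deadline)] * math.ceil(population / capacity)
    for k in range(1, len(trips) + 1):
        if feasible(trips, k):
            return k
    return None   # some trip misses its deadline even with a vehicle per trip

# Hypothetical example: three settlements, 50 evacuees per vehicle trip.
print(min_vehicles([(400, 1.5, 6.0), (250, 2.0, 8.0), (600, 1.0, 5.0)], 50))
```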

Article
Adaptive Sparse Grids with Nonlinear Basis in Interval Problems for Dynamical Systems
Computation 2023, 11(8), 149; https://doi.org/10.3390/computation11080149 - 01 Aug 2023
Viewed by 265
Abstract
Problems with interval uncertainties arise in many applied fields. The authors have earlier developed, tested, and proved an adaptive interpolation algorithm for solving this class of problems. The algorithm’s idea consists of constructing a piecewise polynomial function that interpolates the dependence of the problem solution on point values of interval parameters. The classical version of the algorithm uses polynomial full grid interpolation and, with a large number of uncertainties, the algorithm becomes difficult to apply due to the exponential growth of computational costs. Sparse grid interpolation requires significantly less computational resources than interpolation on full grids, so their use seems promising. A representative number of examples have previously confirmed the effectiveness of using adaptive sparse grids with a linear basis in the adaptive interpolation algorithm. The purpose of this paper is to apply adaptive sparse grids with a nonlinear basis for modeling dynamic systems with interval parameters. The corresponding interpolation polynomials on the quadratic basis and the fourth-degree basis are constructed. The efficiency, performance, and robustness of the proposed approach are demonstrated on a representative set of problems. Full article

Article
The Weights Reset Technique for Deep Neural Networks Implicit Regularization
Computation 2023, 11(8), 148; https://doi.org/10.3390/computation11080148 - 01 Aug 2023
Viewed by 410
Abstract
We present a new regularization method called Weights Reset, which consists of periodically resetting a random portion of layer weights during the training process using predefined probability distributions. This technique was applied and tested on several popular classification datasets: Caltech-101, CIFAR-100, and Imagenette. We compare these results with other traditional regularization methods. The test results demonstrate that the Weights Reset method is competitive, achieving the best performance on the Imagenette dataset and on the challenging, unbalanced Caltech-101 dataset. This method also shows potential to prevent vanishing and exploding gradients, although our analysis of this aspect is brief, and further comprehensive studies are needed in order to gain a deep understanding of the computing potential and limitations of the Weights Reset method. The observed results show that Weights Reset can be regarded as an effective extension of traditional regularization methods and can help to improve model performance and generalization. Full article
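A minimal NumPy sketch of the idea, assuming a Gaussian re-initialization and a fixed reset probability and period (the distributions and schedule used in the paper may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def weights_reset(weights, reset_prob=0.05, init_std=0.05):
    """Re-draw a random subset of the entries of a weight matrix in place.
    Each entry is reset with probability reset_prob and re-sampled from a
    zero-mean normal distribution (one possible 'predefined probability
    distribution')."""
    mask = rng.random(weights.shape) < reset_prob
    weights[mask] = rng.normal(0.0, init_std, size=int(mask.sum()))
    return weights

# Skeleton of a training loop: reset a random portion of the layer weights
# every `period` optimization steps.
W = rng.normal(0.0, 0.05, size=(128, 64))
period = 100
for step in range(1, 1001):
    # ... forward pass, loss, and gradient update of W would go here ...
    if step % period == 0:
        weights_reset(W)
```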

Article
Multiobjective Optimization of Fuzzy System for Cardiovascular Risk Classification
Computation 2023, 11(7), 147; https://doi.org/10.3390/computation11070147 - 23 Jul 2023
Viewed by 335
Abstract
Since cardiovascular diseases (CVDs) pose a critical global concern, identifying associated risk factors remains a pivotal research focus. This study aims to propose and optimize a fuzzy system for cardiovascular risk (CVR) classification using a multiobjective approach, addressing computational aspects such as the configuration of the fuzzy system, the optimization process, the selection of a suitable solution from the optimal Pareto front, and the interpretability of the fuzzy logic system after the optimization process. The proposed system utilizes data, including age, weight, height, gender, and systolic blood pressure to determine cardiovascular risk. The fuzzy model is based on preliminary information from the literature; therefore, to adjust the fuzzy logic system using a multiobjective approach, the body mass index (BMI) is considered as an additional output as data are available for this index, and body mass index is acknowledged as a proxy for cardiovascular risk given the propensity for these diseases attributed to surplus adipose tissue, which can elevate blood pressure, cholesterol, and triglyceride levels, leading to arterial and cardiac damage. By employing a multiobjective approach, the study aims to obtain a balance between the two outputs corresponding to cardiovascular risk classification and body mass index. For the multiobjective optimization, a set of experiments is proposed that render an optimal Pareto front, as a result, to later determine the appropriate solution. The results show an adequate optimization of the fuzzy logic system, allowing the interpretability of the fuzzy sets after carrying out the optimization process. In this way, this paper contributes to the advancement of the use of computational techniques in the medical domain. Full article
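Selecting a solution from the optimal Pareto front presupposes the non-dominated filtering step; a compact sketch for two minimized objectives is given below, with hypothetical objective values (the paper's objectives, optimizer, and selection criterion are not reproduced).

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated rows of `points`, all objectives minimized.
    A point is dominated if some other point is no worse in every objective
    and strictly better in at least one."""
    points = np.asarray(points, dtype=float)
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        dominated_by = np.all(points <= p, axis=1) & np.any(points < p, axis=1)
        if dominated_by.any():
            keep[i] = False
    return points[keep]

# Hypothetical candidate solutions: (risk-classification error, BMI error).
candidates = [(0.20, 0.9), (0.15, 1.1), (0.30, 0.4), (0.22, 1.0), (0.25, 0.5)]
print(pareto_front(candidates))   # (0.22, 1.0) is dominated by (0.20, 0.9)
```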

Article
Analysis of the Dynamics of Tuberculosis in Algeria Using a Compartmental VSEIT Model with Evaluation of the Vaccination and Treatment Effects
Computation 2023, 11(7), 146; https://doi.org/10.3390/computation11070146 - 21 Jul 2023
Viewed by 295
Abstract
Despite low tuberculosis (TB) mortality rates in China, Europe, and the United States, many countries are still struggling to control the epidemic, including India, South Africa, and Algeria. This study aims to contribute to the body of knowledge on this topic and provide a valuable tool and evidence-based guidance for Algerian healthcare managers in understanding the spread of TB and implementing control strategies. For this purpose, a compartmental mathematical model is proposed to analyze TB dynamics in Algeria and investigate the effects of vaccination and treatment on disease outbreaks. A qualitative study is conducted to discuss the stability properties of both the disease-free equilibrium and the endemic equilibrium. In order to adapt the proposed model to the Algerian case, we estimate the model parameters using Algerian TB-reported data from 1990 to 2020. The results obtained using the proposed compartmental model show that the reproduction number (R0) of TB in Algeria is less than one, suggesting that the disease can be eradicated or effectively controlled through a combination of interventions, including vaccination, high-quality treatment, and isolation measures. Full article
(This article belongs to the Special Issue Mathematical Modeling and Study of Nonlinear Dynamic Processes)
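For orientation only, a compartmental model of this kind is an ODE system that can be integrated numerically; the sketch below uses a generic vaccinated-susceptible-exposed-infected-treated structure with placeholder rates, not the VSEIT formulation or the parameters estimated from the Algerian data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder rates: transmission, vaccination, progression, treatment uptake,
# treatment completion, natural death, recruitment.
beta, nu, sigma, gamma, delta, mu, Lam = 0.3, 0.05, 0.1, 0.2, 0.1, 0.01, 0.01

def rhs(t, y):
    V, S, E, I, T = y                       # Vaccinated, Susceptible, Exposed,
    N = V + S + E + I + T                   # Infected, under Treatment
    new_infections = beta * S * I / N
    return [nu * S - mu * V,
            Lam * N - new_infections - (nu + mu) * S,
            new_infections - (sigma + mu) * E,
            sigma * E - (gamma + mu) * I,
            gamma * I - (delta + mu) * T]   # completed treatments leave the system

y0 = [0.10, 0.85, 0.02, 0.02, 0.01]
sol = solve_ivp(rhs, (0, 365), y0, t_eval=np.linspace(0, 365, 200))
print(sol.y[3, -1])                         # infected fraction at the final time
```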

Article
Simultaneous Integration of D-STATCOMs and PV Sources in Distribution Networks to Reduce Annual Investment and Operating Costs
Computation 2023, 11(7), 145; https://doi.org/10.3390/computation11070145 - 20 Jul 2023
Viewed by 287
Abstract
This research analyzes electrical distribution networks using renewable generation sources based on photovoltaic (PV) sources and distribution static compensators (D-STATCOMs) in order to minimize the expected annual grid operating costs for a planning period of 20 years. The separate and simultaneous placement of PVs and D-STATCOMs is evaluated through a mixed-integer nonlinear programming model (MINLP), whose binary part pertains to selecting the nodes where these devices must be located, and whose continuous part is associated with the power flow equations and device constraints. This optimization model is solved using the vortex search algorithm for the sake of comparison. Numerical results in the IEEE 33- and 69-bus grids demonstrate that combining PV sources and D-STATCOM devices entails the maximum reduction in the expected annual grid operating costs when compared to the solutions reached separately by each device, with expected reductions of about 35.50% and 35.53% in the final objective function value with respect to the benchmark case. All computational validations were carried out in the MATLAB programming environment (version 2021b) with our own scripts. Full article
(This article belongs to the Special Issue Applications of Statistics and Machine Learning in Electronics)

Article
Modeling of Heat Flux in a Heating Furnace
Computation 2023, 11(7), 144; https://doi.org/10.3390/computation11070144 - 17 Jul 2023
Viewed by 421
Abstract
Modern heating furnaces use combined modes of heating the charge. At high heating temperatures, more radiation heating is used; at lower temperatures, more convection heating is used. In large heating furnaces, such as pusher furnaces, it is necessary to monitor the heating of the material zonally. Zonal heating allows the appropriate thermal regime to be set in each zone, according to the desired parameters for heating the charge. The problem for each heating furnace is to set the optimum thermal regime so that at the end of the heating, after the material has been cross-sectioned, there is a uniform temperature field with a minimum temperature differential. In order to evaluate the heating of the charge, a mathematical model was developed to calculate the heat fluxes of the moving charge (slabs) along the length of the pusher furnace. The obtained results are based on experimental measurements on a test slab on which thermocouples were installed, and data acquisition was provided by a TERMOPHIL-stor data logger placed directly on the slab. Most of the developed models focus only on energy balance assessment or external heat exchange. The results from the model created showed reserves for changing the thermal regimes in the different zones. The developed model was used to compare the heating evaluation of the slabs after the rebuilding of the pusher furnace. Changing the furnace parameters and altering the heat fluxes or heating regimes in each zone contributed to more uniform heating and a reduction in specific heat consumption. The developed mathematical heat flux model is applicable as part of the powerful tools for monitoring and controlling the thermal condition of the charge inside the furnace as well as evaluating the operating condition of such furnaces. Full article
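As background, the heat flux absorbed by the slab surface in each zone is commonly modeled as the sum of a radiative and a convective contribution, for example

q = \varepsilon_{ef} \, \sigma \left( T_f^{4} - T_s^{4} \right) + \alpha \left( T_f - T_s \right),

where \varepsilon_{ef} is the effective emissivity, \sigma the Stefan-Boltzmann constant, \alpha the convective heat transfer coefficient, T_f the furnace (gas) temperature, and T_s the slab surface temperature; this generic form is given for orientation and is not the paper's zone-specific model.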

Article
Mathematical Modelling of Tuberculosis Outbreak in an East African Country Incorporating Vaccination and Treatment
Computation 2023, 11(7), 143; https://doi.org/10.3390/computation11070143 - 17 Jul 2023
Viewed by 479
Abstract
In this paper, we develop a deterministic mathematical epidemic model for tuberculosis outbreaks in order to study the disease's impact in a given population. We carry out a qualitative analysis of the model by showing that its solution is positive and bounded. The global stability analysis of the model uses Lyapunov functions, and the threshold quantity of the model, the basic reproduction number, is estimated. An existence and uniqueness analysis for the Caputo fractional tuberculosis outbreak model is presented by recasting the deterministic model in the Caputo sense. The deterministic model is used with real data from Uganda and Rwanda to see how well it captures the dynamics of the disease in the countries considered. Furthermore, a sensitivity analysis of the parameters with respect to R0 is carried out. The normalised forward sensitivity index is used to determine the most sensitive parameters that are important for infection control. We simulate the Caputo fractional tuberculosis outbreak model using the Adams–Bashforth–Moulton approach to investigate the impact of treatment and vaccination rates, as well as the disease trajectory. Overall, our findings imply that increasing vaccination and, especially, treatment availability for infected people can reduce the prevalence and burden of tuberculosis on the human population. Full article
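For reference, the normalised forward sensitivity index of R_0 with respect to a parameter p is the standard quantity

\Upsilon_{p}^{R_0} = \frac{\partial R_0}{\partial p} \cdot \frac{p}{R_0},

so an index of +1 means that a 10% increase in p produces roughly a 10% increase in R_0, while a negative index means that increasing the parameter decreases R_0.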

Article
Computational Fracture Modeling for Effects of Healed Crack Length and Interfacial Cohesive Properties in Self-Healing Concrete Using XFEM and Cohesive Surface Technique
Computation 2023, 11(7), 142; https://doi.org/10.3390/computation11070142 - 16 Jul 2023
Viewed by 286
Abstract
Healing patterns are a critical factor influencing the fracture mechanism of self-healing concrete (SHC) structures. Partially healed cracks can occur even under the normal operating conditions of the structure, such as sustained applied loads or rapid crack propagation. In this paper, the effects of two main factors that control healing patterns, namely the healed crack length and the interfacial cohesive properties between the solidified healing agent and the cracked surfaces, on the load carrying capacity and the fracture mechanism of healed SHC samples are computationally investigated. The proposed computational modeling framework is based on the extended finite element method (XFEM) and the cohesive surface (CS) technique to model the fracture and debonding mechanisms of 2D healed SHC samples under a uniaxial tensile test. The interfacial cohesive properties and the healed crack length have significant effects on the load carrying capacity, the crack initiation, the propagation, and the debonding potential of the solidified healing agent from the concrete matrix: the higher their values, the higher the load carrying capacity. The solidified healing agent is debonded from the concrete matrix when the interfacial cohesive properties are less than 25% of the fracture properties of the solidified healing agent. Full article
(This article belongs to the Special Issue Application of Finite Element Methods)
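For context, cohesive surfaces of this kind are often described by a bilinear traction-separation law; one common form (not necessarily the parametrization used in the paper) is

t(\delta) = (1 - D)\, K \delta, \qquad
D = \begin{cases} 0, & \delta_{\max} \le \delta_0, \\ \dfrac{\delta_f \,(\delta_{\max} - \delta_0)}{\delta_{\max}\,(\delta_f - \delta_0)}, & \delta_0 < \delta_{\max} < \delta_f, \\ 1, & \delta_{\max} \ge \delta_f, \end{cases}

where K is the initial cohesive stiffness, \delta_0 the separation at damage initiation, \delta_f the separation at complete failure, \delta_{\max} the largest separation reached so far, and D the scalar damage variable.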

Article
Incorporating Time-Series Forecasting Techniques to Predict Logistics Companies’ Staffing Needs and Order Volume
Computation 2023, 11(7), 141; https://doi.org/10.3390/computation11070141 - 14 Jul 2023
Viewed by 924
Abstract
Time-series analysis is a widely used method for studying past data to make future predictions. This paper focuses on utilizing time-series analysis techniques to forecast the resource needs of logistics delivery companies, enabling them to meet their objectives and ensure sustained growth. The study aims to build a model that optimizes the prediction of order volume during specific time periods and determines the staffing requirements for the company. The prediction of order volume in logistics companies involves analyzing trend and seasonality components in the data. Autoregressive (AR), Autoregressive Integrated Moving Average (ARIMA), and Seasonal Autoregressive Integrated Moving Average with Exogenous Variables (SARIMAX) are well-established and effective in capturing these patterns, providing interpretable results. Deep-learning algorithms require more data for training, which may be limited in certain logistics scenarios. In such cases, traditional models like SARIMAX, ARIMA, and AR can still deliver reliable predictions with fewer data points. Deep-learning models like LSTM can capture complex patterns but lack interpretability, which is crucial in the logistics industry. Balancing performance and practicality, our study combined SARIMAX, ARIMA, AR, and Long Short-Term Memory (LSTM) models to provide a comprehensive analysis and insights into predicting order volume in logistics companies. A real dataset from an international shipping company, consisting of the number of orders during specific time periods, was used to generate a comprehensive time-series dataset. Additionally, new features such as holidays, off days, and sales seasons were incorporated into the dataset to assess their impact on order forecasting and workforce demands. The paper compares the performance of the four different time-series analysis methods in predicting order trends for three countries: United Arab Emirates (UAE), Kingdom of Saudi Arabia (KSA), and Kuwait (KWT), as well as across all countries. By analyzing the data and applying the SARIMAX, ARIMA, LSTM, and AR models to predict future order volume and trends, it was found that the SARIMAX model outperformed the other methods. The SARIMAX model demonstrated superior accuracy in predicting order volumes and trends in the UAE (MAPE: 0.097, RMSE: 0.134), KSA (MAPE: 0.158, RMSE: 0.199), and KWT (MAPE: 0.137, RMSE: 0.215). Full article
(This article belongs to the Special Issue Computational Social Science and Complex Systems)
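As an illustration of the modeling workflow only (synthetic data, arbitrary model orders, and a hypothetical holiday flag rather than the company dataset and configurations used in the paper), a SARIMAX model with weekly seasonality and an exogenous regressor can be fitted with statsmodels and scored with the MAPE and RMSE metrics reported above:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic daily order counts with a weekly pattern (placeholder data).
rng = np.random.default_rng(1)
idx = pd.date_range("2022-01-01", periods=400, freq="D")
orders = 200 + 30 * np.sin(2 * np.pi * np.arange(400) / 7) + rng.normal(0, 10, 400)
holiday = np.asarray(idx.dayofweek == 4, dtype=float)   # hypothetical exogenous flag

train, test = slice(0, 370), slice(370, 400)
model = SARIMAX(orders[train], exog=holiday[train],
                order=(1, 1, 1), seasonal_order=(1, 1, 1, 7))
fitted = model.fit(disp=False)
forecast = fitted.forecast(steps=30, exog=holiday[test])

# MAPE and RMSE on the held-out horizon, as reported in the abstract.
mape = np.mean(np.abs((orders[test] - forecast) / orders[test]))
rmse = np.sqrt(np.mean((orders[test] - forecast) ** 2))
print(f"MAPE={mape:.3f}  RMSE={rmse:.3f}")
```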