Search Results (73)
Search Parameters: Journal = Network

Article
Arithmetic Study about Efficiency in Network Topologies for Data Centers
Network 2023, 3(3), 298-325; https://doi.org/10.3390/network3030015 - 26 Jun 2023
Viewed by 585
Abstract
Data centers are receiving more and more attention due to the rapid increase in IoT deployments, which may result in the implementation of smaller facilities closer to the end users as well as larger facilities up in the cloud. In this paper, an arithmetic study has been carried out to measure a coefficient related to both the average number of hops among nodes and the average number of links among devices for a range of typical network topologies fit for data centers. Such topologies are either tree-like or graph-like designs, and the coefficient provides a balance between performance and simplicity: lower values of the coefficient account for a better compromise between both factors in redundant architectures. The motivation of this contribution is to craft a coefficient that is easy to calculate by applying simple arithmetic operations. This coefficient can be seen as another tool to compare network topologies in data centers and could act as a tie-breaker when selecting a given design if other parameters offer contradictory results.
(This article belongs to the Special Issue Advances on Networks and Cyber Security)
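As an illustration of the kind of metric involved, the sketch below computes the average hop count and the average number of links per node for two toy four-node topologies and combines them into a single figure of merit; the combination used here (a simple product) is only an assumption for illustration, not the coefficient defined in the paper.

```python
from collections import deque

def avg_hops(adj):
    """Average shortest-path hop count over all ordered node pairs (BFS)."""
    nodes = list(adj)
    total, pairs = 0, 0
    for src in nodes:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for dst in nodes:
            if dst != src:
                total += dist[dst]
                pairs += 1
    return total / pairs

def avg_links(adj):
    """Average number of links per node (mean degree)."""
    return sum(len(neigh) for neigh in adj.values()) / len(adj)

# Hypothetical coefficient: product of the two averages; a lower value suggests
# a better balance between path length (performance) and cabling (simplicity).
def coefficient(adj):
    return avg_hops(adj) * avg_links(adj)

ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
mesh = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
print(coefficient(ring), coefficient(mesh))
```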

Review
Recent Development of Emerging Indoor Wireless Networks towards 6G
Network 2023, 3(2), 269-297; https://doi.org/10.3390/network3020014 - 12 May 2023
Cited by 1 | Viewed by 1237
Abstract
Sixth-generation (6G) mobile technology is currently under development, and is envisioned to fulfill the requirements of a fully connected world, providing ubiquitous wireless connectivity for diverse users and emerging applications. Transformative solutions are expected to drive the surge to accommodate a rapidly growing number of intelligent devices and services. In this regard, wireless local area networks (WLANs) have a major role to play in indoor spaces, from supporting explosive growth in high-bandwidth applications to massive sensor arrays with diverse network requirements. Sixth-generation technology is expected to have a superconvergence of networks, including WLANs, to support this growth in applications in multiple dimensions. To this end, this paper comprehensively reviews the latest developments in diverse WLAN technologies, including WiFi, visible light communication, and optical wireless communication networks, as well as their technical capabilities. This paper also discusses how well these emerging WLANs align with supporting 6G requirements. The analyses presented in the paper provide insight into the research opportunities that need to be investigated to overcome the challenges in integrating WLANs in a 6G ecosystem.

Article
Clustered Distributed Learning Exploiting Node Centrality and Residual Energy (CINE) in WSNs
Network 2023, 3(2), 253-268; https://doi.org/10.3390/network3020013 - 23 Apr 2023
Viewed by 865
Abstract
With the explosion of big data, the implementation of distributed machine learning mechanisms in wireless sensor networks (WSNs) is becoming necessary for reducing the amount of data traveling throughout the network and for identifying anomalies promptly and reliably. In WSNs, this need has to be considered along with the limited energy and processing resources available at the nodes. In this paper, we tackle the resulting complex problem by designing CINE, a multi-criteria protocol for “Clustered distributed learnIng exploiting Node centrality and residual Energy” in WSNs. More specifically, considering the energy and processing capabilities of nodes, we design a scheme that assumes nodes are partitioned into clusters and selects a central node in each cluster, called the cluster head (CH), which executes the training of the machine learning (ML) model for all the other nodes in the cluster, called cluster members (CMs); CMs are responsible for executing inference only. Since the CH role consumes more resources, the proposed scheme rotates the CH role among all nodes in the cluster. The protocol has been simulated and tested using real environmental data sets.
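A minimal sketch of the cluster-head election idea, assuming the CH is picked by a weighted score of normalized centrality and residual energy and that the role rotates because the CH drains energy faster; the weights, energy costs, and node values are illustrative, not the protocol's actual parameters.

```python
def elect_cluster_head(nodes, w_centrality=0.5, w_energy=0.5):
    """Pick the cluster head with the highest weighted score of
    (normalized) centrality and residual energy. Weights are illustrative."""
    max_c = max(n["centrality"] for n in nodes) or 1.0
    max_e = max(n["energy"] for n in nodes) or 1.0
    def score(n):
        return w_centrality * n["centrality"] / max_c + w_energy * n["energy"] / max_e
    return max(nodes, key=score)

cluster = [
    {"id": "s1", "centrality": 0.9, "energy": 40.0},
    {"id": "s2", "centrality": 0.6, "energy": 95.0},
    {"id": "s3", "centrality": 0.8, "energy": 70.0},
]

# Rotate the CH role: after each round the elected CH spends more energy
# (it trains the ML model), so another node tends to win the next election.
for round_no in range(3):
    ch = elect_cluster_head(cluster)
    print(f"round {round_no}: cluster head = {ch['id']}")
    ch["energy"] -= 30.0        # CH pays the training cost
    for n in cluster:
        if n is not ch:
            n["energy"] -= 5.0  # CMs only run inference
```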

Article
Improvement of Network Flow Using Multi-Commodity Flow Problem
Network 2023, 3(2), 239-252; https://doi.org/10.3390/network3020012 - 04 Apr 2023
Viewed by 1147
Abstract
In recent years, Internet traffic has increased due to the widespread use of the Internet. This can be attributed to the growth of social games on smartphones and video distribution services with increasingly high image quality. In these situations, a routing mechanism is required to control congestion, but most existing routing protocols select a single optimal path. This causes the load to be concentrated on certain links, increasing the risk of congestion. In addition to the optimal path, the network has redundant paths leading to the destination node. In this study, we propose multipath control based on the multi-commodity flow problem. Comparing the proposed method with OSPF, which is single-path control, and OSPF-ECMP, which is multipath control, we confirmed that the proposed method achieves higher packet arrival rates. This is expected to reduce congestion.
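To illustrate why multipath control helps, the sketch below compares the maximum link load when each demand follows a single path versus when it is split over two paths; the even split is a stand-in for the ratios a multi-commodity flow formulation would actually compute.

```python
from collections import Counter

def link_loads(demands, paths_per_demand):
    """Accumulate per-link load when each demand is split evenly across
    its available paths (a stand-in for multi-commodity flow ratios)."""
    load = Counter()
    for (src, dst, volume), paths in zip(demands, paths_per_demand):
        share = volume / len(paths)
        for path in paths:
            for u, v in zip(path, path[1:]):
                load[(u, v)] += share
    return load

demands = [("a", "d", 10.0), ("b", "d", 10.0)]

single_path = [[["a", "c", "d"]], [["b", "c", "d"]]]     # both demands use link c->d
multipath   = [[["a", "c", "d"], ["a", "e", "d"]],        # each demand split
               [["b", "c", "d"], ["b", "e", "d"]]]        # over two paths

print(max(link_loads(demands, single_path).values()))  # 20.0, concentrated on c->d
print(max(link_loads(demands, multipath).values()))    # 10.0, spread over c->d and e->d
```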

Article
SDN-Based Routing Framework for Elephant and Mice Flows Using Unsupervised Machine Learning
Network 2023, 3(1), 218-238; https://doi.org/10.3390/network3010011 - 02 Mar 2023
Viewed by 1159
Abstract
Software-defined networks (SDNs) can control the movement of data flows through a network, enabling sufficient flow management and effective usage of network resources. Currently, most data center networks (DCNs) suffer from the exploitation of network resources by large flows (elephant flows) that can enter the network at any time and affect smaller flows (mice flows). Therefore, it is crucial to identify such flows and find an appropriate routing path for them in order to improve the network management system. This work proposes an SDN application that finds the best path based on the type of flow, using network performance metrics. These metrics are used to characterize and identify flows as elephant or mice by utilizing unsupervised machine learning (ML) and a thresholding method. A routing algorithm was developed to select the path based on the type of flow. A validation test was performed by testing the proposed framework on different DCN topologies and comparing the performance of an SDN Ryu controller with that of the proposed framework based on three factors: throughput, bandwidth, and data transfer rate. The results show that, 70% of the time, the proposed framework achieves higher performance for different types of flows.
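A minimal sketch of the unsupervised classification step, assuming flows are clustered by byte count with a tiny 1-D two-means routine and the midpoint between centroids is used as the elephant/mice threshold; the features and values are hypothetical, and the paper's framework also involves the controller and routing components.

```python
def two_means_1d(values, iters=20):
    """Tiny 1-D k-means (k=2): returns (low_centroid, high_centroid)."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        low_group = [v for v in values if abs(v - lo) <= abs(v - hi)]
        high_group = [v for v in values if abs(v - lo) > abs(v - hi)]
        lo = sum(low_group) / len(low_group) if low_group else lo
        hi = sum(high_group) / len(high_group) if high_group else hi
    return lo, hi

# Bytes observed per flow (illustrative values).
flow_bytes = [1_200, 900, 2_000, 1_500, 850_000, 910_000, 1_100]

lo, hi = two_means_1d(flow_bytes)
threshold = (lo + hi) / 2   # boundary between the two clusters

for size in flow_bytes:
    kind = "elephant" if size > threshold else "mice"
    # An SDN application could now install a different path for each kind.
    print(f"{size:>8} bytes -> {kind}")
```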

Article
Machine Learning Applied to LoRaWAN Network for Improving Fingerprint Localization Accuracy in Dense Urban Areas
Network 2023, 3(1), 199-217; https://doi.org/10.3390/network3010010 - 09 Feb 2023
Cited by 1 | Viewed by 1150
Abstract
In the area of low-power wireless networks, one technology that many researchers are focusing on relates to positioning methods such as fingerprinting in densely populated urban areas. This work presents an experimental study aimed at quantifying the mean location estimation error in populated areas. Using a dataset provided by the University of Antwerp, a neural network was implemented with the aim of providing the end-device location. In this way, we were able to measure the mean localization error in areas of high urban density. The results obtained show a deviation of less than 150 m in locating the end device. This offset can be decreased to a few meters, provided that there is a greater density of nodes per square meter. This result could enable Internet of Things (IoT) applications to use fingerprinting in place of energy-consuming alternatives.
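The sketch below illustrates the fingerprinting idea with a simple nearest-fingerprint (1-NN) match on RSSI vectors rather than the neural network used in the paper; the gateway readings and positions are made up.

```python
import math

# Fingerprint database: RSSI (dBm) seen by three gateways at known positions.
fingerprints = [
    {"pos": (51.219, 4.402), "rssi": [-95.0, -110.0, -120.0]},
    {"pos": (51.221, 4.410), "rssi": [-118.0, -97.0, -109.0]},
    {"pos": (51.215, 4.398), "rssi": [-105.0, -121.0, -99.0]},
]

def locate(measured_rssi):
    """Return the position of the closest fingerprint in RSSI space (1-NN)."""
    def distance(fp):
        return math.dist(fp["rssi"], measured_rssi)
    return min(fingerprints, key=distance)["pos"]

# A new uplink heard at roughly the signal strengths of the second fingerprint.
print(locate([-116.0, -99.0, -111.0]))   # -> (51.221, 4.41)
```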

Article
Improving Bundle Routing in a Space DTN by Approximating the Transmission Time of the Reliable LTP
Network 2023, 3(1), 180-198; https://doi.org/10.3390/network3010009 - 03 Feb 2023
Viewed by 1031
Abstract
Because the operation of space networks is carefully planned, it is possible to predict future contact opportunities from link budget analysis using the anticipated positions of the nodes over time. In the standard approach to space delay-tolerant networking (DTN), such knowledge is used by contact graph routing (CGR) to decide the paths for data bundles. However, the computation assumes nearly ideal channel conditions, disregarding the impact of convergence-layer retransmissions (e.g., as implemented by the Licklider transmission protocol (LTP)). In this paper, the effect of the bundle forwarding time estimation (i.e., the link service time) on routing optimality is analyzed, and an accurate expression for lossy channels is discussed. The analysis is performed first from a general and protocol-agnostic perspective, assuming knowledge of the statistical properties and general features of the contact opportunities. Then, a practical case is studied using the standard space DTN protocol, evaluating the performance improvement of CGR under the proposed forwarding time estimation. The results of this study provide insight into the optimal routing problem for a space DTN and a suggested improvement to the current routing standard.
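As a rough illustration of why the forwarding (link service) time grows on lossy links, the toy model below estimates the expected number of LTP retransmission rounds under independent segment losses and adds one round trip per extra round; this simple model is an assumption made here for illustration, not the expression derived in the paper.

```python
def expected_ltp_rounds(n_segments, loss_p, max_rounds=50):
    """Expected number of LTP transmission rounds until every segment of a
    block is received, assuming independent per-segment losses (toy model)."""
    expected = 0.0
    for k in range(max_rounds):
        # P(at least one segment still missing after k rounds) = 1 - (1 - p^k)^N
        expected += 1.0 - (1.0 - loss_p ** k) ** n_segments
    return expected

def expected_forwarding_time(n_segments, seg_tx_time, rtt, loss_p):
    """Toy link-service-time estimate: the first round sends every segment,
    and each additional expected round costs roughly one report/retransmit RTT."""
    rounds = expected_ltp_rounds(n_segments, loss_p)
    return n_segments * seg_tx_time + (rounds - 1.0) * rtt

print(expected_forwarding_time(n_segments=100, seg_tx_time=0.01,
                               rtt=8.0, loss_p=0.1))
```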

Article
A Federated Learning-Based Approach for Improving Intrusion Detection in Industrial Internet of Things Networks
Network 2023, 3(1), 158-179; https://doi.org/10.3390/network3010008 - 30 Jan 2023
Cited by 2 | Viewed by 2242
Abstract
The Internet of Things (IoT) is a network of electronic devices that are connected to the Internet wirelessly. This group of devices generates a large amount of data with information about users, which makes the whole system sensitive and prone to malicious attacks. The rapidly growing number of IoT-connected devices under a centralized ML system could threaten data privacy, and the popular centralized machine learning (ML)-assisted approaches are difficult to apply due to their requirement for enormous amounts of data in a central entity. Owing to the growing distribution of data over numerous networks of connected devices, decentralized ML solutions are needed. In this paper, we propose a Federated Learning (FL) method for detecting unwanted intrusions to guarantee the protection of IoT networks. This method ensures privacy and security through federated training on local IoT device data. Local IoT clients share only parameter updates with a central global server, which aggregates them and distributes an improved detection algorithm. After each round of FL training, each of the IoT clients receives an updated model from the global server and trains it on its local dataset, so IoT devices can keep their own privacy intact while optimizing the overall model. To evaluate the efficiency of the proposed method, we conducted exhaustive experiments on a new dataset named Edge-IIoTset. The performance evaluation demonstrates the reliability and effectiveness of the proposed intrusion detection model, which achieves an accuracy (92.49%) with the FL method close to that of conventional centralized ML models (93.92%).
(This article belongs to the Special Issue Networking Technologies for Cyber-Physical Systems)
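A minimal sketch of the server-side aggregation step, assuming a FedAvg-style weighted average of client parameter vectors; the parameter values and dataset sizes are illustrative, not the paper's model or the Edge-IIoTset pipeline.

```python
def federated_average(client_weights, client_sizes):
    """Weighted average of client parameter vectors (FedAvg-style step),
    weighting each client by the number of local samples it trained on."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    global_weights = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            global_weights[i] += w * (size / total)
    return global_weights

# Parameter updates from three IoT clients after one round of local training.
clients = [[0.20, -1.10, 0.45],
           [0.25, -1.00, 0.40],
           [0.10, -1.30, 0.55]]
samples = [500, 300, 200]   # local dataset sizes (illustrative)

print(federated_average(clients, samples))   # new global model parameters
```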

Article
Formal Algebraic Model of an Edge Data Center with a Redundant Ring Topology
Network 2023, 3(1), 142-157; https://doi.org/10.3390/network3010007 - 30 Jan 2023
Viewed by 1009
Abstract
Data center organization and optimization present the opportunity to design systems with specific characteristics. In this sense, the combination of artificial intelligence methodology and sustainability may lead to optimal topologies with enhanced features, whilst taking care of the environment by lowering carbon emissions. In this paper, a model for a field monitoring system is proposed, in which an edge data center topology in the form of a redundant ring is designed for redundancy purposes to join together nodes spread apart. Additionally, a formal algebraic model of such a design is presented and verified.
(This article belongs to the Special Issue Emerging Networks and Systems for Edge Computing)
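A small sketch of a redundant ring of the kind described, assuming each node also links to the node two positions away, and checking that the topology remains connected after any single node failure; the exact topology and redundancy scheme in the paper may differ.

```python
from collections import deque

def redundant_ring(n):
    """Ring of n nodes where each node also links to the node two steps away."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for step in (1, 2):
            j = (i + step) % n
            adj[i].add(j)
            adj[j].add(i)
    return adj

def connected_without(adj, failed):
    """BFS connectivity check of the topology after removing one node."""
    nodes = [v for v in adj if v != failed]
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v != failed and v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(nodes)

ring = redundant_ring(8)
print(all(connected_without(ring, f) for f in ring))   # True: survives any single node failure
```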

Article
IoT and Blockchain Integration: Applications, Opportunities, and Challenges
Network 2023, 3(1), 115-141; https://doi.org/10.3390/network3010006 - 24 Jan 2023
Cited by 2 | Viewed by 1817
Abstract
During the recent decade, two variants of evolving computing networks have augmented the Internet: (i) the Internet of Things (IoT) and (ii) Blockchain Networks (BCNs). The IoT is a network of heterogeneous digital devices embedded with sensors and software for various automation and monitoring purposes. A Blockchain Network is a broadcast network of computing nodes provisioned for validating digital transactions and recording the “well-formed” transactions in a unique data store called a blockchain ledger. The power of a blockchain network is that (ideally) every node maintains its own copy of the ledger and takes part in validating the transactions. Integrating IoT and BCNs brings promising applications in many areas, including education, health, finance, agriculture, industry, and the environment. However, the complex, dynamic, and heterogeneous computing and communication needs of IoT technologies, optionally integrated with blockchain technologies (if mandated), raise several challenges for scaling, interoperability, and security goals. In recent years, numerous models integrating IoT with blockchain networks have been proposed, tested, and deployed for businesses, and numerous studies are underway to uncover further applications of IoT and Blockchain technology. However, a close look reveals that very few applications successfully cater to the security needs of an enterprise, and it makes less sense to integrate blockchain technology with an existing IoT system that can already serve the security needs of an enterprise. In this article, we investigate several frameworks for IoT operations, the applicability of integrating them with blockchain technology, and the security considerations that security personnel must make during the deployment and operation of IoT and BCNs. Furthermore, we discuss the underlying security concerns and recommendations for blockchain-integrated IoT networks.
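To make the ledger concept concrete, here is a toy hash-chained ledger in which each block commits to the previous block's hash and any node can re-verify the chain; it omits consensus and transaction validation, and all names and readings are invented.

```python
import hashlib
import json
import time

def block_hash(block):
    """SHA-256 over the block fields that are committed to the chain."""
    payload = {k: block[k] for k in ("timestamp", "transactions", "prev_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(transactions, prev_hash):
    block = {"timestamp": time.time(), "transactions": transactions,
             "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def verify(chain):
    """Recompute every hash and check each block points at its predecessor."""
    for prev, block in zip(chain, chain[1:]):
        if block["hash"] != block_hash(block) or block["prev_hash"] != prev["hash"]:
            return False
    return True

# A toy ledger in which IoT devices record sensor readings as transactions.
ledger = [make_block(["genesis"], prev_hash="0" * 64)]
ledger.append(make_block([{"sensor": "t1", "temp_c": 21.4}], ledger[-1]["hash"]))
ledger.append(make_block([{"sensor": "t2", "temp_c": 19.8}], ledger[-1]["hash"]))
print(verify(ledger))   # True
```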

Article
Edge Data Center Organization and Optimization by Using Cage Graphs
Network 2023, 3(1), 93-114; https://doi.org/10.3390/network3010005 - 18 Jan 2023
Viewed by 900
Abstract
Data center organization and optimization are receiving increasing attention due to the ever-growing deployments of edge and fog computing facilities. The main aim is to achieve a topology that processes traffic flows as fast as possible, which depends not only on AI-based computing resources but also on the network interconnection among physical hosts. In this paper, graph theory is introduced due to its features related to network connectivity and stability, which lead to more resilient and sustainable deployments, and among the candidate designs cage graphs may have an advantage over the rest. In this context, the Petersen graph cage is studied as a convenient candidate for small data centers due to its small number of nodes and small network diameter, thus providing an interesting solution for edge and fog data centers.
(This article belongs to the Special Issue Advances in Edge and Cloud Computing)
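The Petersen graph's suitability rests on well-known properties (10 nodes, 15 links, 3-regular, diameter 2), which the short sketch below builds and verifies with breadth-first search.

```python
from collections import deque

def petersen():
    """Adjacency list of the Petersen graph: outer 5-cycle (0-4),
    spokes to the inner nodes (5-9), and an inner pentagram."""
    adj = {v: set() for v in range(10)}
    def link(u, v):
        adj[u].add(v)
        adj[v].add(u)
    for i in range(5):
        link(i, (i + 1) % 5)              # outer cycle
        link(i, i + 5)                    # spoke
        link(i + 5, ((i + 2) % 5) + 5)    # inner pentagram
    return adj

def diameter(adj):
    """Longest shortest-path distance over all node pairs (BFS from each node)."""
    worst = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        worst = max(worst, max(dist.values()))
    return worst

g = petersen()
print(sum(len(n) for n in g.values()) // 2)  # 15 links
print(all(len(n) == 3 for n in g.values()))  # 3-regular: True
print(diameter(g))                           # diameter 2
```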

Editorial
Acknowledgment to the Reviewers of Network in 2022
Network 2023, 3(1), 91-92; https://doi.org/10.3390/network3010004 - 17 Jan 2023
Viewed by 690
Abstract
High-quality academic publishing is built on rigorous peer review [...]
Review
On Attacking Future 5G Networks with Adversarial Examples: Survey
Network 2023, 3(1), 39-90; https://doi.org/10.3390/network3010003 - 30 Dec 2022
Viewed by 2324
Abstract
The introduction of 5G technology, along with the exponential growth in connected devices, is expected to cause challenges for efficient and reliable network resource allocation. Network providers are now required to dynamically create and deploy multiple services which function under various requirements in different vertical sectors while operating on top of the same physical infrastructure. The recent progress in artificial intelligence and machine learning is theorized to be a potential answer to the arising resource allocation challenges. It is therefore expected that future generation mobile networks will heavily depend on their artificial intelligence components, which may result in those components becoming a high-value attack target. In particular, a smart adversary may exploit vulnerabilities of the state-of-the-art machine learning models deployed in a 5G system to initiate an attack. This study focuses on the analysis of adversarial example generation attacks against machine learning based frameworks that may be present in next generation networks. First, various AI/ML algorithms and the data used for their training and evaluation in mobile networks are discussed. Next, multiple AI/ML applications found in recent scientific papers devoted to 5G are overviewed. After that, existing adversarial example generation based attack algorithms are reviewed, and frameworks which employ these algorithms for fuzzing state-of-the-art AI/ML models are summarised. Finally, adversarial example generation attacks against several of the described AI/ML frameworks are presented.
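A minimal sketch of one classic adversarial example generation attack, the fast gradient sign method (FGSM), applied to a toy logistic-regression "classifier"; the model, features, and gradient are illustrative stand-ins for the 5G AI/ML components the survey covers.

```python
import numpy as np

# Toy logistic-regression "traffic classifier": weights over 4 flow features.
w = np.array([0.8, -1.2, 0.5, 2.0])
b = -0.3

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))   # probability of class 1

def fgsm(x, y_true, epsilon=0.1):
    """Fast gradient sign method: nudge each feature in the direction that
    increases the loss, by at most epsilon."""
    p = predict(x)
    # Gradient of the binary cross-entropy loss w.r.t. the input x is
    # (p - y_true) * w for this linear model.
    grad_x = (p - y_true) * w
    return x + epsilon * np.sign(grad_x)

x = np.array([0.4, 0.1, 0.7, 0.9])     # a benign sample of class 1
x_adv = fgsm(x, y_true=1.0, epsilon=0.3)
print(predict(x), predict(x_adv))       # the model's confidence drops on x_adv
```
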
Article
Towards Software-Defined Delay Tolerant Networks
Network 2023, 3(1), 15-38; https://doi.org/10.3390/network3010002 - 28 Dec 2022
Viewed by 1657
Abstract
This paper proposes a Software-Defined Delay Tolerant Networking (SDDTN) architecture as a solution to managing large Delay Tolerant Networking (DTN) networks in a scalable manner. This work is motivated by the planned deployments of large DTN networks on the Moon and beyond in deep space. Current space communication involves relatively few nodes and is heavily deterministic and scheduled, which will not be true in the future. It is unclear how these large space DTN networks, consisting of inherently intermittent links, will be able to adapt to dynamically changing network conditions. In addition to the proposed SDDTN architecture, this paper explores data plane programming and the Programming Protocol-Independent Packet Processors (P4) language as a possible method of implementing this SDDTN architecture, enumerates the challenges of this approach, and presents intermediate results.
(This article belongs to the Special Issue Advanced Technologies in Network and Service Management)

Article
A Performance Evaluation of In-Memory Databases Operations in Session Initiation Protocol
Network 2023, 3(1), 1-14; https://doi.org/10.3390/network3010001 - 28 Dec 2022
Cited by 1 | Viewed by 832
Abstract
Real-time communication has witnessed a dramatic increase in daily user usage in recent years. In this domain, the Session Initiation Protocol (SIP) is a well-known protocol found to provide trusted services (voice or video) to end users along with efficiency, scalability, and interoperability. Like other Internet technologies, SIP stores its related data in databases with a predefined data structure. Recently, SIP technologies have adopted in-memory databases as cache systems to ensure fast database operations during real-time communication. Meanwhile, several in-memory databases with different structures (e.g., query types, data structure, persistency, and key/value size) have been implemented in industry. However, there are limited resources and few recommendations on how to select a proper in-memory database for SIP communications. This paper identifies efficient in-memory databases that are best fitted to SIP servers by evaluating three databases: Memcached, Redis, and Local (the OpenSIPS built-in database). The evaluation was conducted by experimentally measuring the impact of in-memory operations (store and fetch) on the SIP server under heavy traffic loads in different scenarios. The evaluation results show that the Local database consumed less memory than Memcached and Redis for read and write operations, while, when persistency was considered, Memcached was the preferable database selection due to its 25.20 KB/s throughput and 0.763 s call–response time.
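A minimal sketch of the kind of store/fetch measurement such a comparison relies on, using the redis-py client against a local Redis instance; the key layout, record format, and load are assumptions, and the paper's actual test bed drives the databases through OpenSIPS under SIP traffic.

```python
import time
import redis   # pip install redis; assumes a local Redis server on port 6379

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

N = 10_000
store_start = time.perf_counter()
for i in range(N):
    # A SIP registrar-like record: contact bound to an address-of-record.
    r.set(f"location:user{i}", f"sip:user{i}@192.0.2.10:5060")
store_time = time.perf_counter() - store_start

fetch_start = time.perf_counter()
for i in range(N):
    r.get(f"location:user{i}")
fetch_time = time.perf_counter() - fetch_start

print(f"store: {N / store_time:.0f} ops/s, fetch: {N / fetch_time:.0f} ops/s")
```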
