As enterprise access networks evolve with a growing number of mobile users, a wide range of devices and new cloud-based applications, managing user performance on an end-to-end basis has become rather challenging. Recent advances in big data network analytics, combined with AI and cloud computing, are being leveraged to tackle this growing problem. AI is becoming further integrated with the software that manages networks, storage, and compute.
This edited book focuses on how new network analytics, IoT and cloud computing platforms are being used to ingest, analyse and correlate a myriad of big data across the entire network stack in order to increase quality of service and quality of experience (QoS/QoE) and to improve network performance. Beyond big data and AI analytical techniques for handling the huge volumes of data generated by IoT devices, the authors cover cloud storage optimization, the design of next-generation access protocols and internet architecture, and fault tolerance and reliability in intelligent networks, and discuss a range of emerging applications.
This book will be useful to researchers, scientists, engineers, professionals, advanced students and faculty members in ICTs, data science, networking, AI, machine learning and sensing. It will also be of interest to professionals in data science, AI, cloud and IoT start-up companies, as well as developers and designers.
Inspec keywords: health care; quality of service; Internet of Things; cloud computing; Big Data
Other keywords: internet of things; cloud computing; artificial intelligence; data analysis; big data; health care; Internet; medical information systems; quality of service
Subjects: Data handling techniques; General and management topics; Computer communications; Internet software; General electrical engineering topics; Biology and medical computing
As enterprise access networks evolve with more mobile users, diverse devices and cloud-based applications, managing user performance on an end-to-end basis has become next to impossible. Recent advances in big data network analytics, combined with AI and cloud computing, are being leveraged to tackle this growing problem. The book focuses on how new network analytics platforms are being used to ingest, analyze and correlate a myriad of infrastructure data across the entire network stack, with the goal of finding and fixing quality-of-service and network performance problems.
This book presents new and upcoming technologies in the field of networking and telecommunication. It addresses major new technological developments and reflects on industry needs, current research trends and future directions. The authors focus on the development of AI-powered mechanisms for future wireless networking applications and architectures, which will lead to more performant, resilient and valuable ecosystems and automated services. The book is a "must-read" for researchers, academicians, engineers and scientists involved in the design and development of protocols and AI applications for wireless communication devices and wireless networking technologies.
All chapters presented here are the product of extensive field research involving applications and techniques related to data analysis in general, and to big data, AI, IoT and network technologies in particular.
There are more smart devices in our world today than individuals. A growing number of people are linked to the Internet 24 hours a day, in some form or another, and a growing number own three, four, or more smart devices and rely on them. Smartphones, fitness and health trackers, e-readers, and tablets are just a few examples. It is forecast that, on average, there will be 3.4 smart devices or connections for every person on earth. The Internet of Things (IoT) is relevant to many industries. IoT systems contribute to environmental controls, retail, transportation, healthcare, and agriculture, among many other industries. According to Statista, the number of IoT devices in use across all relevant industries is forecast to grow to more than 8 billion by 2030. As for consumers, important growth areas are Internet and digital media devices, which include smartphones; this area is also predicted to grow to more than 8 billion by 2030. Other applications with more than 1 million connected devices include connected and autonomous vehicles, IT infrastructure, asset management, and the electric utility smart grid.
All of this is made possible through intelligent networks. The planet is rapidly becoming covered in networks that allow digital devices to communicate and interconnect. Consider the network mesh as a digital skin that surrounds the earth. Mobile devices, electronic sensors, electronic measuring equipment, medical gadgets, and gauges can all link with this digital skin. They keep track of, communicate with, analyze, and, in some situations, automatically adjust to the data collected and transmitted.
Every engineering masterpiece draws on computer science, electrical and electronics engineering, or a mixture of both, and this gives the industry room to research, evolve and develop newer technology, uncovering the principles behind the engineering work. In the same manner, this chapter unfolds the prominent work involved in the verification of designs in the VLSI domain. Considering machine learning (ML), neural network and artificial intelligence (AI) concepts and applying them to a wide range of verification approaches is quite interesting. Register Transfer Level (RTL) designs require rigorous verification, whether targeted at a Field Programmable Gate Array (FPGA) or an application-specific integrated circuit (ASIC). The verification process should close only after all possible scenarios have been tested, preferably with intelligent verification methods. The following pages present a distinctive verification procedure for RTL development methodologies using hardware description languages. With the help of the SystemVerilog language, a reusable testbench is developed and used for verification. The inputs injected into the testbench are randomized with constraints, such that the design should produce accurate output. To unify the verification language, a dedicated methodology known as the Universal Verification Methodology (UVM) is used; through it, the chapter also walks readers through coverage-based formal verification. For continuous functional verification, an intelligent regression model is developed with the help of ML and scripting, making repeated injection of various test cases possible in order to verify the functionality.
Thus, with the adoption of the presented verification environment and distinctive approach, one can affirm that the design is ready to be deployed on the targeted semiconductor chips. Since verification is an indispensable step, the approach can also be used to classify algorithms developed in ML for data clustering, data encoding and their accurate analysis. More importantly, this chapter allows us to understand an intelligent verification model that tests the design through regression runs, with the corresponding set-up and pass/failure analysis steps. This structure may significantly reduce simulation time for a VLSI verification engineer.
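To illustrate the regression-and-report idea described above (not the chapter's actual UVM environment), the pass/fail tally of a scripted regression run can be sketched in a few lines of Python. The 4-bit adder standing in for the design under test, and all function names here, are hypothetical:

```python
import itertools

def adder4(a, b):
    """Toy 4-bit adder standing in for the RTL design under test."""
    return (a + b) & 0xF

def run_regression(dut, reference, stimuli):
    """Apply each stimulus to the DUT model, compare against a golden
    reference model, and return a pass/fail report with failure details."""
    report = {"pass": 0, "fail": 0, "failures": []}
    for a, b in stimuli:
        got, want = dut(a, b), reference(a, b)
        if got == want:
            report["pass"] += 1
        else:
            report["fail"] += 1
            report["failures"].append(((a, b), want, got))
    return report

# Exhaustive stimulus is feasible for a 4-bit design: 16 x 16 = 256 cases.
stimuli = list(itertools.product(range(16), range(16)))
report = run_regression(adder4, lambda a, b: (a + b) % 16, stimuli)
```

In a real flow, the stimuli would be constrained-random and the report would feed coverage closure; the structure of the loop, however, is the same.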
Identifying the most accurate methods for forecasting students' academic achievement is the focus of this research. Globally, all educational institutions are concerned about student attrition. The goal of every educational institution is to increase student retention and graduation rates, and this is only possible if at-risk students are identified early. Due to inherent classifier constraints and the incorporation of too few student features, the most commonly used prediction models are inefficient and inaccurate. Different data mining algorithms such as classification, clustering, regression, and association rule mining are used to uncover hidden patterns and relevant information in student performance big datasets in academics. Naïve Bayes, random forest, decision tree, multilayer perceptron (MLP), decision table (DT), JRip, and logistic regression (LR) are some of the data mining techniques that can be applied. A student's academic performance big dataset comprises many features, not all of which are relevant or play a significant role in the mining process. So, features with a variance close to 0 are removed from the dataset because they have no impact on the mining process. To determine the influence of various attributes on the class label, various feature selection (FS) techniques such as the correlation attribute evaluator (CAE), information gain attribute evaluator (IGAE), and gain ratio attribute evaluator (GRAE) are utilized. In this study, the authors have investigated the performance of various data mining algorithms on the big dataset, as well as the effectiveness of various FS techniques. In conclusion, combining each classification algorithm with suitable FS methods improves its overall predictive performance.
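The variance-based pruning step mentioned above can be sketched directly: a feature whose values barely vary cannot separate classes, so it is dropped before mining. The feature names and rows below are hypothetical, and this is a minimal stdlib sketch, not the chapter's pipeline:

```python
from statistics import pvariance

def drop_low_variance(feature_names, rows, threshold=0.0):
    """Remove features whose (population) variance is at or below the
    threshold; such columns carry no information for the mining step."""
    keep = []
    for j, _ in enumerate(feature_names):
        column = [row[j] for row in rows]
        if pvariance(column) > threshold:
            keep.append(j)
    kept_names = [feature_names[j] for j in keep]
    kept_rows = [[row[j] for j in keep] for row in rows]
    return kept_names, kept_rows

# "campus" is constant across all students, so its variance is 0.
feature_names = ["attendance", "campus", "quiz_avg"]
rows = [[0.90, 1, 62],
        [0.75, 1, 48],
        [0.60, 1, 71]]
names, data = drop_low_variance(feature_names, rows)
```

Evaluators such as CAE, IGAE and GRAE then rank the surviving features by their influence on the class label; the zero-variance filter is just the cheap first pass.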
Statistical investigations are concerned with planning, data collection, organisation of data, scrutiny of the compiled data, transformation, and presentation in a clear and deliberate manner. To this end, research techniques can be distinguished in two ways: opinion surveys and market research. In opinion polls, the primary objective is to gather data on a chosen subject through personal interviews. Market research is conducted through the market analysis of a specific product. Descriptive statistics is responsible for the collection, organisation and description of information and for the estimation and interpretation of coefficients, whereas inductive or inferential statistics, which relies on probability theory and deals with measures of uncertainty, is responsible for analysing and interpreting information subject to a margin of uncertainty. In statistics, we also look at how to use tables and charts: tables organise and classify the data, while graphics convey the information in a clear and accessible manner, aiding in goal attainment.
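As a small illustration of the descriptive side described above, the standard summary coefficients can be computed with Python's standard library; the sample values are hypothetical survey responses:

```python
import statistics as st

# Hypothetical survey responses (e.g. satisfaction scores).
samples = [12, 15, 11, 14, 15, 13, 15, 12]

descriptive = {
    "mean": st.mean(samples),      # arithmetic average
    "median": st.median(samples),  # middle value of the sorted data
    "mode": st.mode(samples),      # most frequent value
    "stdev": st.stdev(samples),    # sample standard deviation
}
```

Inferential statistics would go a step further, attaching a margin of uncertainty (e.g. a confidence interval) to such estimates rather than merely describing the collected data.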
In the present digital world, technology is evolving at a rapid pace. To store, manage and protect digital information, it is necessary to back up and recover data with utmost efficiency. As a solution, cloud computing, which offers customers a wide range of services, can be used. Storage-as-a-Service (SaaS) is one of the cloud platform's services, in which a large volume of digital data is maintained in the cloud database. An enterprise's most sensitive data are stored in the cloud, which must ensure that they are secure and accessible at all times and from all locations. At times, information may become unavailable due to natural disasters such as windstorms, rainfall or earthquakes, or due to a technical fault or accidental deletion. To ensure data security and availability under such circumstances, it is vital to have a good understanding of data backup and recovery strategies. This chapter examines a variety of cloud computing backup and recovery techniques.
The Internet of Things can be perceived as a collection of millions of devices that are connected to each other, with the internet as a connectivity backbone, to acquire and share real-time data for providing intelligent services. The tremendous rise in the number of devices requires an adequate network infrastructure to deal with data orchestration remotely. To overcome this issue, a new approach of infrastructure sharing over the cloud among service providers has emerged, with the goal of lowering excessive infrastructure deployment costs. Software-defined networking (SDN) is a networking architecture that enables network operators and users to monitor and manage network devices remotely and more flexibly, using software that runs on external servers. As SDN and cloud integration improves reliability, scalability, and manageability, this chapter combines cloud infrastructure with SDN. Although SDN-based cloud networks have the numerous advantages mentioned above, certain challenges still draw the attention of researchers, such as energy efficiency, security, and load balancing. The work carried out in this chapter addresses one of these challenging tasks, namely load balancing, by developing a new multiple-controller load-balancing strategy. The proposed strategy effectively balances the load even if one or more super controllers fail. Furthermore, results are simulated and compared under different operational environments, both with and without the Modified Bully algorithm. The comparison results show that the introduced technique exhibits better performance in metrics such as packet loss, packet transmission ratio, and throughput.
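The failover behaviour can be sketched in miniature. The classic Bully election picks the highest-ID controller that is still alive as the new super controller, and a greedy pass can then redistribute the failed controller's switches; the controller IDs, load numbers and function names below are illustrative, not the chapter's Modified Bully algorithm:

```python
def bully_election(controllers, failed):
    """Elect a new super controller: the highest-ID live controller wins,
    in the spirit of the classic Bully election algorithm."""
    alive = [c for c in controllers if c not in failed]
    if not alive:
        raise RuntimeError("no controller available")
    return max(alive)

def rebalance(switch_load, controllers, failed):
    """Greedily hand each orphaned switch to the least-loaded surviving
    controller (a simple load-balancing sketch)."""
    alive = {c: load for c, load in switch_load.items() if c not in failed}
    for c in failed:
        for _ in range(switch_load.get(c, 0)):
            target = min(alive, key=alive.get)
            alive[target] += 1
    return alive

controllers = [1, 2, 3, 4]
failed = {4}                                   # super controller 4 goes down
leader = bully_election(controllers, failed)   # controller 3 takes over
loads = rebalance({1: 5, 2: 3, 3: 4, 4: 6}, controllers, failed)
```

After the election, controller 3 becomes the super controller and controller 4's six switches are spread so that no survivor is overloaded.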
Cloud computing has evolved as a new computing paradigm with the aim of providing reliability, quality of service and cost-effectiveness with no location barrier. Ever more massive databases and applications are relocated to immense centralized data centers known as the cloud. Cloud computing has the enormous benefit that there is no need to purchase physical space from a separate vendor, but this benefit comes with security threats. Owing to resource virtualization, the data and the machines are physically absent from the user's premises, and storing data in the cloud therefore raises security issues: an unauthorized person can penetrate the cloud's security, and data manipulation, data loss or theft might take place. This chapter describes cloud computing and the various security issues and challenges that are present in different cloud models and cloud environments. It also gives an idea of the different threat management techniques available to counter these security issues and challenges. The RSA algorithm and its implementation are described in detail, and the Advanced Encryption Standard, along with its implementation, is also discussed. For better clarity, several reviews of existing models are conducted.
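The core of RSA fits in a few lines of Python. This is a textbook sketch with toy key sizes (the classic demonstration primes 61 and 53), purely to show the key-generation, encryption and decryption steps; real deployments use vetted cryptographic libraries, padding, and 2048-bit or larger keys:

```python
def make_keys(p, q, e):
    """Build a textbook RSA key pair from two primes.
    n = p*q is the modulus; d is the modular inverse of e mod phi(n)."""
    n, phi = p * q, (p - 1) * (q - 1)
    d = pow(e, -1, phi)        # modular inverse (Python 3.8+)
    return (e, n), (d, n)

def encrypt(m, public):
    e, n = public
    return pow(m, e, n)        # c = m^e mod n

def decrypt(c, private):
    d, n = private
    return pow(c, d, n)        # m = c^d mod n

public, private = make_keys(61, 53, 17)   # n = 3233: toy size, demo only
cipher = encrypt(42, public)
plain = decrypt(cipher, private)          # recovers the original 42
```

AES, by contrast, is a symmetric block cipher: one shared key both encrypts and decrypts, which is why cloud systems typically encrypt bulk data with AES and protect the AES key with RSA.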
The method of identifying a speaker based on his or her speech is known as automatic speaker recognition. Speaker/voice recognition is a biometric technique that recognizes people by their voices. Most speaker recognition systems nowadays are focused on spectral information, which means they use spectral features derived from speech signal segments of 10-30 ms in length. However, if the received speech signal contains noise, the performance of such cepstral-based systems suffers. The primary goal of the study is to examine the various factors responsible for improved performance of speaker recognition systems by modeling prosodic features, along with the phases of a speaker recognition system. Furthermore, in the presence of background noise, the analysis focuses on a text-independent speaker recognition system.
Water is an essential resource that we use in our daily life. Water quality must be monitored in real time to make sure that we obtain a safe and clean supply of water to our residential areas. A water quality-monitoring and decision-making system (WQMDMS) is implemented for this purpose, based on the Internet of Things (IoT) and fuzzy logic, to decide the usage of water (drinking or tap water) in a common water tank system. The physical and chemical properties of the water are obtained through continuous monitoring by sensors. The work describes in detail the design of a fuzzy logic controller (FLC) for a water quality measurement system that determines the quality of water by decision-making and, accordingly, decides its usage. The WQMDM system measures physico-chemical characteristics of water such as pH, turbidity, and temperature using corresponding analog and digital sensors. The parameter values obtained are used to detect the presence of water contaminants, and accordingly, the quality of the water is determined. The measurements from the sensors are handled and processed by an ESP32, and these refined values follow the rules determined by the fuzzy inference system (FIS). The output highlights the water quality, which is categorized as very poor, poor, average, or good. The usage of the water is determined by the results obtained from the FLC and, depending on the water-quality percentage, the water is designated as drinking water or tap water.
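A fuzzy inference step of the kind described can be sketched compactly. The membership shapes, rule, defuzzification weights and 70% threshold below are all illustrative assumptions, not the chapter's FIS; the sketch only shows the fuzzify → apply rules → defuzzify → decide pipeline:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def water_quality(ph, turbidity_ntu):
    """Tiny Mamdani-style sketch: fuzzify two inputs, combine one rule
    with min, defuzzify to a 0-100 score, then decide the usage."""
    ph_good = tri(ph, 6.0, 7.0, 8.5)           # neutral pH is "good"
    turb_low = tri(turbidity_ntu, -1, 0, 5)    # low turbidity is "good"
    good = min(ph_good, turb_low)              # rule: pH good AND turbidity low
    poor = 1.0 - good                          # complementary rule strength
    score = (good * 90 + poor * 30) / (good + poor)  # weighted-average defuzzify
    return "drinking" if score >= 70 else "tap"

use = water_quality(7.0, 1.0)   # near-neutral, clear water
```

A real FIS would add temperature as a third input and several overlapping membership sets per variable, mapping the score onto the very poor / poor / average / good categories.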
This work is based on wireless sensor networks (WSNs), which contain a large number of device nodes, commonly called sensor nodes, connected to one another by wireless communication. There are numerous assumptions about, and general properties of, WSNs, and so many applications of WSNs exist around the world that it is impossible to cover all their application areas. Applications of WSNs span ecological and animal monitoring, factory and manufacturing monitoring, farming monitoring and mechanization, health monitoring, and many other areas. One of the defining characteristics of WSNs is that they are strongly coupled with their application. In this chapter, WiMAX without a wormhole attack is explained, and the related results are presented with their outputs. The NS2 simulator is used to carry out all the simulations.
The pandemic has forced industries to move their critical workloads to the cloud immediately in order to ensure continuous functioning. As cloud computing expands apace and organisations search for methods to increase their network agility and storage, edge computing has proven to be the best alternative. The healthcare business has a long history of collaborating with cutting-edge information technology, and the Internet of Things (IoT) is no exception. Researchers are still looking for substantial methods to collect, view, process, and analyse data that can signify a quantitative revolution in healthcare as devices become more convenient and small data becomes big data. To provide real-time analytics, healthcare organisations frequently deploy cloud technology as the storage layer between system and insight. Edge computing, also known as fog computing, allows computers to perform important analyses without having to go through the time-consuming cloud storage process. For this form of processing, speed is key, and it may be crucial in constructing a healthcare IoT that is useful for patient interaction, inpatient treatment, population health management and remote monitoring. We present a thorough overview to highlight the most recent trends in fog computing activities related to the IoT in healthcare. Other perspectives on the edge computing domain are also offered, such as styles of application support, techniques and resources. Finally, the necessity of edge computing in the era of the COVID-19 pandemic is addressed.
Intelligence in medical imaging explores how intelligent computing can bring substantial changes to existing technology in the field of medical image processing. The book presents various algorithms, techniques, and models for integrating medical image processing with artificial intelligence (AI) and biocomputing. Bioinformatics solutions lead to an effective method for processing image data in order to retrieve the information of interest and to collect various data sources for extracting knowledge. Moreover, image processing methods and techniques help scientists and physicians in the medical field with diagnosis and therapies. It describes evolutionary optimization techniques, support vector machines (SVMs), fuzzy logic, a Bayesian probabilistic framework, a reinforcement learning-based multistage image segmentation algorithm, and a machine learning (ML) approach. It discusses how these techniques are used for image classification, image formation, image visualization, image analysis, image management, and image enhancement. The term "medical image processing" denotes digital image processing applied particularly to medicine. Medical imaging intends to identify internal structures hidden in the human body and helps to find abnormalities in the body. Digital images can be processed, evaluated, and utilized effectively in many circumstances concurrently, with the help of suitable communication protocols.
Internet of Things (IoT) provides a pathway for connecting physical entities with digital entities using devices and communication technologies. The rapid growth of IoT in recent years has made a significant influence in many fields. Healthcare is one of the fields that will benefit hugely from IoT, which can resolve many challenges faced by patients and doctors. Smart health-care applications allow the doctor to monitor the patient's health state without human intervention. Sensors collect and send data from the patient, and the recorded data are stored in a database that enables medical experts to analyze them. Any abnormal change in the patient's status can be notified to the doctor. This chapter aims to study different research works on IoT-based health-care systems that are implemented using basic development boards. Various hardware parameters of health-care systems, and the sensors used for those parameters, are explored. A basic Arduino-based health-care application is proposed using sensors and a global system for mobile communication (GSM) module.
In India, almost 80% of patients who die from heart disease do not receive adequate care. Diagnosis is a challenging task for doctors, who often seem unable to make an accurate one, and the condition is extremely expensive to treat. The proposed solution uses data mining technologies to simplify the decision support system in order to increase the cost-effectiveness of therapy. To oversee their patients' care, most hospitals use a hospital management system. Unfortunately, many of these tools do not employ the large amounts of clinical data available to derive useful information. Because these systems generate a considerable amount of data in many forms, the data are rarely accessed and remain unused. As a result, making sensible decisions during this process requires a lot of effort. The process of diagnosing a disease currently entails identifying the disease's numerous symptoms and characteristics. This research employs a number of data mining approaches to assist with medical diagnostics.
In today's world, information-centric networking (ICN) is a brand-new next-generation networking paradigm for distributing multimedia content. ICN focuses on sharing content across the network rather than obtaining content from a single fixed server [1]. In-network caching aids the dissemination of content from within the network, and ICN also includes a number of intrinsic security mechanisms. Despite these many security measures, several attacks, especially interest flooding attacks (IFA), continue to wreak havoc on the network's distribution capability. To address such security threats, the literature includes a number of mitigation procedures. However, legitimate users' requests can be misclassified as an attack in an emergency circumstance, affecting the network's QoS [2]. In this chapter, Detection of Interest Flooding Attack using an Artificial Intelligence Framework (DIAIF) is proposed for ICN. DIAIF seeks to lighten the load on ICN routers by removing the source of the attack without interfering with legitimate user requests. DIAIF depends on router feedback to assign a beneficial value (BV) to each piece of content and to block dangerous users based on the BV. An ICN testbed was designed to assess the proposed DIAIF's performance in terms of QoS during severe flooding scenarios, responding to malicious content without interfering with genuine user requests, and identifying the source of attack in a communication scenario.
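The feedback-driven BV idea can be sketched as a running score per content name: satisfied interests raise the score, unsatisfied (flooding) interests lower it, and names whose score collapses get rate-limited. The update rule, content names and threshold below are illustrative assumptions, not the DIAIF specification:

```python
def update_bv(bv, feedback, alpha=0.5):
    """Exponentially weighted beneficial value per content name, driven by
    router feedback: 1 for a satisfied interest, 0 for an unsatisfied one.
    New names start at a neutral 0.5."""
    for name, outcome in feedback:
        old = bv.get(name, 0.5)
        bv[name] = (1 - alpha) * old + alpha * outcome
    return bv

def blocked_prefixes(bv, threshold=0.2):
    """Names whose beneficial value fell below the threshold get rate-limited."""
    return sorted(name for name, v in bv.items() if v < threshold)

bv = {}
feedback = [("/news/a", 1), ("/attack/x", 0), ("/attack/x", 0),
            ("/news/a", 1), ("/attack/x", 0)]
update_bv(bv, feedback)
bad = blocked_prefixes(bv)   # only the flooding prefix is flagged
```

Because the legitimate prefix keeps receiving satisfied-interest feedback, its BV stays high, which is how this style of scheme avoids punishing genuine bursts of requests.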
Nowadays, one of the most significant components of road infrastructure is monitoring road surface conditions, which leads to better driving conditions and reduces the chance of a road accident. Traditional road condition monitoring systems are incapable of gathering real-time information concerning road conditions. In previous generations, road surface condition monitoring was done for fixed roadways and vehicles travelling at a constant pace. Several systems have presented a method for exploiting the sensors installed in automobiles. However, this method will not assist in forecasting the precise placement of potholes, speed bumps, or staggered roads.
As a result, smartphone-based road condition evaluation and navigation are becoming increasingly popular. We propose exploring several machine learning techniques to accurately assess road conditions using accelerometer, gyroscope, and Global Positioning System (GPS) data collected from cellphones. We also recorded footage of the roadways in order to reduce noise in the data. This two-pronged approach to data collection will aid in the exact positioning of potholes, speed bumps, and staggered roads, and will help machine learning algorithms classify road conditions into features such as smooth surfaces, potholes, speed bumps, and staggered highways. The user receives this information via a map that marks the various road conditions. Accelerometer and gyroscope sensors provide multiple features from all three axes, enabling a more precise location of the designated routes. To classify the road conditions, we investigate the performance of a support vector machine (SVM), random forest, neural network and deep neural network. Our findings demonstrate that models trained using the dual data-gathering strategy produce more accurate outcomes, and data classification is substantially more accurate when neural networks are used. The methods described here can be applied on a broader scale to monitor roads for problems that pose a safety concern to commuters and to provide maintenance data to the appropriate authorities.
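The feature-extraction step common to all of these classifiers can be sketched simply: per-axis statistics over a short window of (x, y, z) accelerometer samples, then a classifier over those features. The sensor values below are hypothetical, and a nearest-centroid rule stands in here for the SVM / random-forest / neural-network models the chapter evaluates:

```python
from statistics import mean, pvariance

def features(window):
    """Per-axis mean and variance over a window of (x, y, z) samples:
    [mean_x, var_x, mean_y, var_y, mean_z, var_z]."""
    axes = list(zip(*window))
    return [f(axis) for axis in axes for f in (mean, pvariance)]

def nearest_centroid(sample, centroids):
    """Classify by the closest class centroid (a stand-in for the
    heavier models mentioned above)."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Hypothetical windows: smooth roads show tiny vertical (z) variance
# around gravity (~9.8 m/s^2); a pothole produces a large z spike.
smooth = [(0.0, 0.0, 9.8), (0.1, 0.0, 9.7), (0.0, 0.1, 9.8)]
pothole = [(0.0, 0.0, 9.8), (0.2, 0.1, 14.5), (0.1, 0.0, 5.0)]
centroids = {"smooth": features(smooth), "pothole": features(pothole)}

label = nearest_centroid(
    features([(0.0, 0.1, 9.8), (0.1, 0.0, 9.9), (0.0, 0.0, 9.7)]),
    centroids)
```

In the full system each labeled window would also carry a GPS fix, so the predicted class can be pinned to a map location for commuters and maintenance authorities.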
This book presented applications, technologies, challenges, and implementation design models for intelligent networks built with Big Data, Internet of Things, artificial intelligence (AI), and cloud computing approaches in different sectors. Research into real-time fault tolerance, security, and data analytics that addresses significant data volume and velocity is essential in intelligent networks, where, for example, different analytical models and AI and machine learning algorithms improve end-to-end performance, efficiency, and quality of service (QoS) [1-10].
The availability of Big Data, AI, cloud computing, low-cost commodity hardware, and new information management and analytic software has produced a unique moment in the history of intelligent networks [11-21]. The convergence of these trends means that we have the capabilities required to analyze unique data sets quickly and cost-effectively. They represent a genuine leap forward and a clear opportunity to realize enormous gains in efficiency, productivity, revenue, and profitability.
This book also discussed issues and challenges that can be addressed and overcome in the future using the new upcoming technologies. The age of intelligent networks is here, and these are truly revolutionary times if business and technology professionals continue to work together and deliver on the promise.
Thank you for taking the time to read this book; we hope you enjoyed reading it as much as we enjoyed writing it.