Other computer applications
Filter by subject:
- Other computer applications [1393]
- Computer and control engineering [1390]
- Computer applications [1390]
- Computer software [652]
- Software techniques and systems [652]
- Electrical and electronic engineering [647]
- Social and behavioural sciences computing [550]
- Computer hardware [532]
- Communications [467]
- Computer-aided instruction [390]
Filter by publication date:
- 2006 [80]
- 1997 [75]
- 1995 [74]
- 2008 [64]
- 2019 [62]
- 2011 [59]
- 2018 [54]
- 2012 [52]
- 2020 [51]
- 1996 [49]
- 2007 [46]
- 1998 [43]
- 2015 [42]
- 2013 [41]
- 2005 [39]
- 2016 [38]
- 2001 [37]
- 2010 [33]
- 2017 [31]
- 2009 [29]
- 2003 [27]
- 1999 [23]
- 2002 [23]
- 2014 [22]
- 2000 [20]
- 1994 [17]
- 2004 [11]
- 1992 [9]
- 1990 [6]
- 1974 [5]
- 1993 [5]
- 1972 [3]
- 1975 [3]
- 1977 [3]
- 1987 [3]
- 1991 [3]
- 1956 [2]
- 1970 [2]
- 1983 [2]
- 1984 [2]
- 1986 [2]
- 1989 [2]
- 2021 [2]
- 1967 [1]
- 1969 [1]
- 1971 [1]
- 1973 [1]
- 1976 [1]
- 1978 [1]
- 1979 [1]
- 1980 [1]
- 1981 [1]
- 1982 [1]
- 1988 [1]
Filter by author:
- C. Andrews [9]
- P. Dempsey [9]
- V. Callaghan [8]
- K. Allan [6]
- D. Birkett [5]
- Chaozhong Wu [4]
- H. Hagras [4]
- K. Sangani [4]
- M.C. Pistorius [4]
- Mu-Chun Su [4]
- Mukta Goyal [4]
- P.M. Alexander [4]
- S. Ablameyko [4]
- T. Ward [4]
- A. Kumar [3]
- B. Knowles [3]
- C. De Villiers [3]
- C. Edwards [3]
- Chandan Kumar [3]
- D.J. Cook [3]
- Divakar Yadav [3]
- F. Fallside [3]
- Francisco Florez-Revuelta [3]
- Ioannis Kompatsiaris [3]
- J.S.Y. Chin [3]
- Jingyuan Yin [3]
- K.O. Jones [3]
- Lizhe Wang [3]
- M. Colley [3]
- Muhammad Nazrul Islam [3]
- N.F. du Plooy [3]
- P. Blenkhorn [3]
- P.D. Noakes [3]
- Rajalakshmi Krishnamurthi [3]
- S. McLoone [3]
- S. Paul [3]
- Spiros Nikolopoulos [3]
- Wanggen Wan [3]
- Wei Wang [3]
- A. Al-Qayedi [2]
- A. Bodhani [2]
- A. Materka [2]
- A. Ribeiro [2]
- A. Shanahan [2]
- A. Waller [2]
- A. Wolisz [2]
- A.E. Al-Naser [2]
- A.F. Clark [2]
- A.F. Newell [2]
- A.J. Walker [2]
- A.N. Evans [2]
- A.S. Crandall [2]
- Affan Yasin [2]
- Alan Stevens [2]
- Alexandros Andre Chaaraoui [2]
- B. Allen [2]
- B. Beregov [2]
- Baojun Zhao [2]
- C. Clark [2]
- C. Evans-Pughe [2]
- C. Harris [2]
- C. Hicks [2]
- C. Jesshope [2]
- C. Magerkurth [2]
- C. Powell [2]
- C. Rocker [2]
- C. Zhou [2]
- C.J. James [2]
- C.R. Baker [2]
- Chang-an Yuan [2]
- Chong Shen [2]
- Christopher J. James [2]
- Chunmei Qing [2]
- Cunchen Tang [2]
- D. Bainbridge [2]
- D. Chaves [2]
- D. Delaney [2]
- D. Lenton [2]
- D. Magee [2]
- D. Marshall [2]
- D. Ross [2]
- D. Stanton [2]
- D. Yang [2]
- D.A. Sanders [2]
- Danlin Yu [2]
- David Finch [2]
- Dhanalekshmi Gopinathan [2]
- Dongming Lu [2]
- Duanfeng Chu [2]
- E. Coyle [2]
- E.L. Andrade [2]
- E.R. Hancock [2]
- Ennio Gambi [2]
- F. Kawsar [2]
- F. Ramparany [2]
- F. Rivera-Illingworth [2]
- Fan Zhang [2]
- Feiyue Ye [2]
- G. Clapperton [2]
- G. Clarke [2]
The sudden spread of the novel coronavirus COVID-19 across the world has led to drastic structural, organizational and social changes in every sector, including the education system. The rapid closure of universities and schools for public health safety during the COVID-19 pandemic became a catalyst for finding innovative solutions within a short span of time. In this new and challenging situation, e-learning tools have become the new educational policy and practice for virtual classrooms. This chapter presents an analysis of various e-learning tools for synchronous and asynchronous learning. It also examines the health issues arising from the excessive screen exposure that accompanies the growing adoption of online learning tools and technologies.
E-learning has become an important part of educational life with the development of e-learning systems and platforms and the need for online and remote learning. ICT and computational intelligence techniques are being used to design more intelligent and adaptive systems. However, designing good real-time e-learning systems is difficult, as many aspects of learning must be considered, including learning rates, involvement, knowledge and qualifications, as well as networking and security issues. The earlier concept of standalone integrated virtual e-learning systems has been greatly enhanced by emerging technologies such as cloud computing, mobile computing, big data, the Internet of Things (IoT), AI and machine learning, and AR/VR. With this book, the editors and authors aim to help researchers, scholars, professionals, lecturers, instructors, developers, and designers understand the fundamental concepts, challenges, methodologies and technologies for designing reliable, high-performance, intelligent and adaptive real-time e-learning systems and platforms. This edited volume covers state-of-the-art topics, including user modelling for e-learning systems and cloud-, IoT-, and mobile-based frameworks. It also considers security challenges and ethical conduct using blockchain technology.
Machine learning models have been widely adopted for passenger flow prediction in urban metros; however, the authors find machine learning models may underperform under anomalous large passenger flow conditions. In this study, they develop a prediction framework that combines the advantage of complex network models in capturing the collective behaviour of passengers and the advantage of online learning algorithms in characterising rapid changes in real-time data. The proposed method considerably improves the accuracy of passenger flow prediction under anomalous conditions. This study can also serve as an exploration of interdisciplinary methods for transportation research.
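A minimal sketch of the online-learning ingredient described above (our toy construction, not the authors' framework): a least-mean-squares learner that updates its weights after every new observation can re-track a simulated passenger-flow series whose level suddenly doubles, which is exactly the kind of anomalous shift a fixed batch model would miss.

```python
import numpy as np

class OnlineFlowPredictor:
    """Hypothetical online learner: one gradient step per observation."""

    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.lr = lr

    def predict(self, x):
        return float(self.w @ x)

    def update(self, x, y):
        # one stochastic-gradient step on the squared prediction error
        err = y - self.predict(x)
        self.w += self.lr * err * np.asarray(x)
        return err

# toy stream: the flow level doubles halfway through (a regime change)
rng = np.random.default_rng(0)
model = OnlineFlowPredictor(n_features=2)
errors = []
for t in range(200):
    x = np.array([1.0, rng.uniform(0, 1)])   # bias + lagged-flow feature
    scale = 1.0 if t < 100 else 2.0
    y = scale * (0.5 + 0.8 * x[1])
    errors.append(abs(model.update(x, y)))

late_error = float(np.mean(errors[-20:]))     # error well after the shift
```

Because the weights keep moving with the data, the prediction error shrinks again within a few dozen steps of the regime change.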
This work introduces and evaluates a model for predicting driver behaviour, namely turning or proceeding straight at traffic-light intersections, from the driver's three-dimensional gaze data and traffic-light recognition. Based on vehicular data, the work relates the traffic-light position, the driver's gaze, head movement, and distance from the centre of the traffic light to build a model of driver behaviour. The model can predict the expected driver manoeuvre 3 to 4 s before arrival at the intersection. As part of this study, a framework for driving-scene understanding based on driver gaze is presented. The outcomes indicate that this deep learning framework for measuring, accumulating and validating different driving actions may be useful in developing models that predict driver intent before intersections, and perhaps in other key driving situations. Such models are an essential part of advanced driving assistance systems that help drivers execute manoeuvres.
Water quality, contaminant migration characteristics, and the quantity of pollutants emitted in a basin have a great impact on aquatic life, agricultural irrigation, human life, and so on. In the aquaculture industry, because water colour reflects the species and number of phytoplankton in the water, the water quality type can be determined by analysing the colour of aquaculture water with image processing techniques. This study therefore proposes an intelligent monitoring approach for water quality. The critical features of water-colour images are extracted, and an intelligent water quality monitoring system is then established with machine learning methods, based on a fused random vector functional link network (RVFL) and group method of data handling (GMDH) model. The proposed approach outperforms other state-of-the-art methods, achieving an average prediction accuracy of 96.19% on the feature dataset. Experimental findings demonstrate the validity of the proposed approach and its efficiency for water quality monitoring.
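For readers unfamiliar with the RVFL component named above, here is a minimal sketch (our illustration, not the authors' fitted model): hidden weights are random and fixed, the raw inputs pass through via direct links, and only the output layer is solved in closed form by ridge regression.

```python
import numpy as np

def rvfl_fit(X, y, n_hidden=50, ridge=1e-3, seed=0):
    """Train an RVFL regressor: random hidden layer + direct links."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                 # random nonlinear features
    D = np.hstack([X, H])                  # direct links + hidden features
    # closed-form ridge solution for the output weights only
    beta = np.linalg.solve(D.T @ D + ridge * np.eye(D.shape[1]), D.T @ y)
    return (W, b, beta)

def rvfl_predict(model, X):
    W, b, beta = model
    return np.hstack([X, np.tanh(X @ W + b)]) @ beta

# toy stand-in for water-colour features -> quality score
rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 3))
y = np.sin(2 * X[:, 0]) + X[:, 1] ** 2     # smooth nonlinear target
model = rvfl_fit(X[:150], y[:150])
mse = float(np.mean((rvfl_predict(model, X[150:]) - y[150:]) ** 2))
```

Because only the linear output layer is trained, fitting is a single linear solve, which is why RVFL-style models are attractive for monitoring systems.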
The proposed training system helps participants generate motor images based on example maps of neuronal activity presented to them. The study involved 16 students at the Laboratory of Neuroinformatics and Decision Systems of the Technical University of Opole. The group was divided into two equal subgroups, one of which was acquainted with the operation of the system, while the other, serving as a control, was not. To verify the proposed training system, electroencephalographic signals were recorded for both subgroups while users imagined upper-limb movement, before and after the imagery training. Data were acquired over the sensorimotor cortex during monitoring sessions using the Emotiv EPOC Flex device. According to the literature analysis carried out, this is the first attempt to use the 32-channel Emotiv EPOC Flex device to construct a training system for motor imagery.
The World Health Organization defines mental health as the foundation for physical health, well-being and effective functioning. Mental health encompasses the self and others within an environment that promotes emotional, social, and cognitive well-being. Furthermore, improving mental health is not an elusive ideal but a priority to be intentionally addressed and maintained. Traditional mental health models are not reaching the number of children and adolescents in need of services. Technology, however, may offer a unique platform for creating innovative solutions that reach a broader number of children globally, given how many children are connected to various digital platforms. Therefore, programming that integrates the fields of child development, psychology, learning, and gaming offers significant potential for promoting mental health and wellness.
A driver's intention is an internal state representing a commitment to carry out a driving action at the next moment, and it can be affected by the driver's emotion. Understanding driver emotion is therefore an important basis for developing driver intention recognition models. This study aims to gain better insight into the characteristics of driver intention transitions triggered by driver emotion. A hidden Markov model was used to develop a driver intention recognition model that incorporates the driver's emotions. Assorted materials, including visual, auditory and olfactory stimuli, were used to evoke the driver's emotions before the driving experiments and to maintain and increase the emotional level during driving. Real and virtual driving experiments were conducted to collect human-vehicle-environment dynamic data on two-lane roads. The results show that the proposed model achieves high accuracy and reliability in estimating driver intention transitions as driver emotion evolves. These findings can be used to develop personalized driving warning systems and intelligent human-machine interaction in vehicles, and the study is of theoretical significance for improving road traffic safety.
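To make the hidden-Markov-model setup concrete, here is a hedged sketch with entirely hypothetical parameters (two intention states, coarse emotion observations — not the paper's fitted model): the forward algorithm filters the posterior over intentions as emotional evidence arrives.

```python
import numpy as np

states = ["keep-lane", "change-lane"]
A = np.array([[0.9, 0.1],                 # intention transition matrix
              [0.3, 0.7]])
B = np.array([[0.7, 0.2, 0.1],            # P(emotion | intention)
              [0.2, 0.3, 0.5]])           # emotions: calm, neutral, agitated
pi = np.array([0.8, 0.2])                 # initial intention distribution

def forward_posterior(obs):
    """Filtered P(state_t | obs_1..t) via the normalised forward pass."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]     # predict, then weight by emission
        alpha /= alpha.sum()
    return alpha

calm_run = forward_posterior([0, 0, 0, 0])       # sustained calm
agitated_run = forward_posterior([2, 2, 2, 2])   # sustained agitation
```

With these toy parameters, sustained calm observations push the posterior towards "keep-lane", while sustained agitation pushes it towards "change-lane" — the same qualitative effect the study attributes to emotion-driven intention transitions.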
A hyperspectral image (HSI) consists of hundreds of contiguous spectral bands, which can be used to classify different objects on the earth. Including both spectral and spatial features is essential for high classification accuracy; however, incorporating spectral and spatial information without preserving the intrinsic structure of the data degrades classification accuracy. To address this issue, the proposed method uses unsupervised spectral band selection based on three major constraints: (i) low reconstruction error with neighbouring bands, (ii) low noise, and (iii) high information entropy. In addition, a structure-preserving recursive filter is used to extract spatial features. Finally, classification is performed using convolutional neural networks (CNNs) with different sets of convolutional, pooling, and fully connected layers. To test the performance of the proposed method, experiments were carried out on three benchmark HSI datasets: Indian Pines, University of Pavia, and Salinas. These experiments reveal that the proposed method offers better classification accuracy than state-of-the-art methods in terms of standard metrics such as overall accuracy (OA), average accuracy, and kappa coefficient (K). The proposed method attained OAs of 99.9, 98.9, and 99.93% on the three datasets, respectively.
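An illustrative sketch of one of the three selection criteria named above — ranking bands by information entropy and keeping the top-k. The synthetic cube, its size, and the binning are our assumptions; the point is only that flat, uninformative bands score low and get dropped.

```python
import numpy as np

def band_entropy(band, n_bins=32):
    """Shannon entropy (bits) of a band's intensity histogram on [0, 1]."""
    hist, _ = np.histogram(band, bins=n_bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_bands(cube, k):
    """cube: (rows, cols, bands) -> indices of the k highest-entropy bands."""
    scores = [band_entropy(cube[:, :, i]) for i in range(cube.shape[2])]
    return sorted(np.argsort(scores)[-k:].tolist())

rng = np.random.default_rng(0)
cube = np.zeros((16, 16, 5))
cube[:, :, 0] = 0.5                                   # flat band: zero entropy
cube[:, :, 1] = rng.uniform(size=(16, 16))            # informative band
cube[:, :, 2] = rng.uniform(size=(16, 16))            # informative band
cube[:, :, 3] = 0.5 + 0.001 * rng.standard_normal((16, 16))  # near-flat band
cube[:, :, 4] = rng.uniform(size=(16, 16))            # informative band

chosen = select_bands(cube, k=3)
```

The constant and near-constant bands concentrate their histograms in one or two bins and are ranked out, leaving only the three high-entropy bands.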
A P300 speller-based brain–computer interface (BCI) provides direct communication between the human brain and a computer, based on interpreting the brain responses evoked when a subject attends to stimuli from the P300 speller. No muscle movements are required for this communication. As a P300 paradigm, a novel 2 × 3 matrix of visual home appliances is proposed, which helps disabled people ease their lives by controlling a mobile phone, light, fan, door, television, electric heater, etc. Most current P300-based BCIs work best with 5–15 trials, and their low information transfer rate (ITR) is a major obstacle to real-time adoption. The objective of this Letter is to improve both accuracy and ITR for real-time home appliance control applications. To this end, the authors propose a single-trial weighted ensemble of compact convolutional neural networks and obtain an ITR of 46.45 bits per minute and an average target-appliance accuracy of 93.22% for the BCI-based home environment system. The experimental findings confirm the feasibility of the proposed method and can guide future use of the system for paralysed patients.
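ITR figures like the one quoted above are conventionally computed with the Wolpaw formula: bits per selection for N equiprobable targets at accuracy P, scaled by the selection time. The sketch below uses the abstract's N = 6 targets and 93.22% accuracy, but the 3 s selection time is our assumption, so the resulting bits-per-minute value is illustrative only (not a reproduction of the paper's 46.45 bpm).

```python
import math

def itr_bits_per_selection(n_targets, p):
    """Wolpaw ITR in bits per selection for accuracy p over n targets."""
    if p >= 1.0:
        return math.log2(n_targets)
    return (math.log2(n_targets)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n_targets - 1)))

def itr_bits_per_minute(n_targets, p, seconds_per_selection):
    return itr_bits_per_selection(n_targets, p) * 60 / seconds_per_selection

bits = itr_bits_per_selection(6, 0.9322)                    # N, P from abstract
bpm = itr_bits_per_minute(6, 0.9322, seconds_per_selection=3.0)  # assumed T
```

At 93.22% accuracy each 6-way selection carries about 2.07 bits, so the quoted bpm figure pins down the effective selection time the system achieves.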
Coffee is an important economic crop and one of the most popular beverages worldwide. The rise of speciality coffees has raised people's standards for coffee quality. However, green coffee beans are often mixed with impurities and defective beans. This study therefore aimed to solve the problem of time-consuming and labour-intensive manual selection of coffee beans for speciality coffee products. A second objective was to develop an automatic coffee bean picking system. The authors first used image processing and data augmentation technologies to prepare the data, then applied deep learning with a convolutional neural network to analyse the image information. Finally, they connected the trained model to an IP camera for recognition. They successfully separated good and bad beans: the false-positive rate was 0.1007, and the overall coffee bean recognition rate was 93%.
To improve the robustness and discriminative power of the triangle-area representation, a novel shape matching method based on a multi-scale angle representation is proposed in this study. By analysing the configurations of different sample points on each shape contour, shape descriptors are constructed from space angles at different scale levels. With the proposed representation, the multi-scale information of shape contours is efficiently described, and dynamic programming is then used to determine the correspondence between samples from different shapes and to calculate the shape distance in the feature matching step. Moreover, to improve shape retrieval results based on pairwise shape distances, dynamic label propagation is introduced as a post-processing step. Unlike previous distance learning methods, which learn the database manifold implicitly, the authors' method explicitly retrieves related objects along shortest paths from near to far, so the underlying structure can be effectively captured. Tested on different shape databases, the proposed method outperforms many other methods and can be applied to visual data processing and understanding in the Internet of Things.
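The dynamic-programming matching step can be sketched with classic dynamic time warping over per-point descriptors. Here the descriptors are simplified to 1-D "angle" values (our simplification, not the paper's multi-scale descriptor): DTW finds the correspondence between contour samples that minimises the summed descriptor distance, so a resampled copy of a shape matches itself perfectly while a different shape scores a large distance.

```python
def dtw_distance(a, b):
    """Dynamic-programming alignment cost between descriptor sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: match, skip in a, skip in b
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

shape_a = [0.1, 0.5, 0.9, 0.5, 0.1]
shape_b = [0.1, 0.1, 0.5, 0.9, 0.5, 0.1]   # same shape, resampled
shape_c = [0.9, 0.1, 0.9, 0.1, 0.9]        # a genuinely different shape

d_same = dtw_distance(shape_a, shape_b)
d_diff = dtw_distance(shape_a, shape_c)
```

The warping makes the distance robust to how densely the contour was sampled, which is the role the matching step plays in the retrieval pipeline.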
Existing style transfer methods have achieved great success in artwork generation by transferring artistic styles onto everyday photographs while keeping their content unchanged. Despite this success, these methods have an inherent limitation: they cannot produce newly created image content, lacking creativity and flexibility. Generative adversarial networks (GANs), on the other hand, can synthesise images with new content but cannot specify the artistic style of those images. The authors consider combining style transfer with convolutional GANs to generate more creative and diverse artworks. Rather than simply concatenating the two networks, the first synthesising new content and the second transferring artistic style, which is inefficient and inconvenient, they design an end-to-end network called ArtistGAN that performs both operations at the same time and achieves visually better results. Moreover, to generate higher-quality images, they propose a bi-discriminator GAN containing a pixel discriminator and a feature discriminator that constrain the generated image at the pixel level and feature level, respectively. Extensive experiments and comparisons evaluate the methods quantitatively and qualitatively, and the results verify their effectiveness.
This chapter considers the use of publicly available social media data as a potential additional source of traffic information. Social media data with geographical information may be useful for estimating traffic speed, and information on traffic flow, delays, infrastructure and environment-related traffic issues may be obtained by studying the textual content of messages. The chapter assesses the relevance of these social media data to the needs of road administrations, particularly in the context of traffic management. We focus on the potential of one commonly available type of social media data, Twitter, as a new source of travel time information. We consider the efficacy of the data, its availability and different business models for accessing and processing it. A case study provides a detailed illustration of some of the issues with the functional contribution of Twitter data and the surrounding ecosystem.
Blockchain technology is a powerful, cost-effective method for network security. Essentially, it is a decentralized ledger for storing all committed transactions in trustless environments by integrating several core technologies such as cryptographic hashing, digital signatures and distributed consensus mechanisms. Over the past few years, blockchain technology has been used in a variety of network interaction systems such as smart contracts, public services, the Internet of Things (IoT), social networks, reputation systems and security and financial services. With its widespread adoption, there has been increased focus on using blockchain technologies to address network security concerns and vulnerabilities and on understanding real-world security implications. The book begins with an introduction to blockchains, covering key principles and applications. Further chapters cover blockchain system architecture, applications and research issues; blockchain consensus and incentives; blockchain applications, projects and implementations; blockchain for the IoT; blockchain in 5G and 6G networks; edgechain for security in organization-based multi-agent systems; blockchain-driven privacy-preserving machine learning; performance evaluation of differential privacy mechanisms in blockchain-based smart metering; scaling out blockchains with sharding; blockchain for GIS; and, finally, blockchain applications in remote sensing big data management and production.
When all the strokes of a Chinese font image are replaced by pattern elements such as flowers and birds, it becomes a flower–bird character painting, a traditional Chinese art treasure. Generating a flower–bird painting requires great effort from professional painters. How can these paintings be generated automatically from font images? There is a huge gap between the font domain and the painting domain, and although many image-to-image translation frameworks have been proposed, they cannot handle this situation effectively. In this study, a novel method called font-to-painting network (F2PNet) is proposed for font-to-painting translation. Specifically, an encoder equipped with dilated convolutions extracts features from the font image, and the features are then fed into a domain translation module that maps the font feature space to the painting feature space. The resulting features are further adjusted by a refinement module and used by the decoder to obtain the target painting. The authors apply adversarial loss and cycle-consistency loss to F2PNet and further propose a loss term, called recognisability loss, which gives the generated painting font-level recognisability. Experiments show that F2PNet is effective and can be used as an unsupervised image-to-image translation framework for further image translation tasks.
Student performance prediction plays an important role in improving education quality. Noticing that students' exercise-answering processes exhibit different characteristics according to their performance levels, this paper aims to mine performance-related information from students' exercising logs and to explore the possibility of predicting students' performance from such process-characteristic information. A formal model of student-shared exercising processes is presented, together with a method for discovering it from students' exercising logs. Several similarity measures between a student's individual exercising behaviour and the student-shared exercising processes are defined, and a method for predicting a student's performance level from these similarity measures is developed using classification algorithms. An experiment on real-life exercise-answering event logs shows the effectiveness of the proposed prediction method.
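One plausible instance of the similarity measures the paragraph alludes to (our construction, not the paper's definition) compares a student's exercising log with a shared process using the longest common subsequence, normalised to [0, 1]; the resulting scores can then feed a classifier as features.

```python
def lcs_length(a, b):
    """Longest common subsequence length via dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j],
                                                               dp[i][j - 1])
    return dp[len(a)][len(b)]

def process_similarity(student_log, shared_process):
    """Fraction of the longer sequence covered by the common subsequence."""
    if not student_log or not shared_process:
        return 0.0
    return lcs_length(student_log, shared_process) / max(len(student_log),
                                                         len(shared_process))

# hypothetical action vocabulary for an exercise-answering log
shared = ["read", "attempt", "hint", "attempt", "submit"]
strong = ["read", "attempt", "submit"]       # close to the shared flow
erratic = ["submit", "hint", "read"]         # out-of-order behaviour

s_strong = process_similarity(strong, shared)
s_erratic = process_similarity(erratic, shared)
```

A student whose log follows the shared process in order scores much higher than one who performs the same actions out of order, which is the kind of signal a performance classifier could exploit.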
The conventional AlexNet suffers from slow training, a single feature scale and low recognition accuracy. To solve these problems, a convolutional neural network identification model based on the Inception module and dilated convolution is proposed in this study. The Inception module, combined with dilated convolution, can extract disease characteristics at different scales and increase the receptive field. By setting different parameters, six improved models were obtained and trained to identify 26 diseases of 14 different crops, from which the authors selected the optimal recognition model. On this basis, a segmented dataset and a grey-scaled dataset were trained as comparative experiments to explore the influence of background and colour features on the recognition results. After only two training epochs, the improved optimal model achieved an accuracy of over 95%, and the final average identification accuracy reached 99.37%. The comparative experiments indicate that colour and background features may influence the recognition effect. The improved model can extract disease information at different scales in the feature map to identify diverse diseases of different crops. The proposed model trains faster and recognises more accurately than the traditional model, and can thus serve as a reference for crop disease identification in actual production.
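The receptive-field claim can be checked arithmetically: for stride-1 convolutions, each layer adds (k − 1) · dilation to the receptive field, so dilated layers widen it without extra parameters. The layer configurations below are hypothetical, purely to illustrate the effect.

```python
def receptive_field(layers):
    """layers: list of (kernel_size, dilation) pairs; stride 1 throughout."""
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d   # each layer extends the field by (k-1)*dilation
    return rf

plain = receptive_field([(3, 1), (3, 1), (3, 1)])     # three plain 3x3 convs
dilated = receptive_field([(3, 1), (3, 2), (3, 4)])   # dilations 1, 2, 4
```

Three plain 3×3 layers see a 7-pixel window, while the same three layers with dilations 1, 2, 4 see 15 pixels at identical parameter cost.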
Image-to-image translation, i.e. from a source image domain to a target image domain, has made significant progress in recent years. The most popular method for unpaired image-to-image translation is CycleGAN; however, it often fails to learn the key features of the target domain accurately and rapidly, so the model learns slowly and the translation quality needs improvement. In this study, a multi-head mutual-attention CycleGAN (MMA-CycleGAN) model is proposed for unpaired image-to-image translation. MMA-CycleGAN retains the cycle-consistency loss and adversarial loss of CycleGAN but introduces a mutual-attention (MA) mechanism, which allows attention-driven, long-range dependency modelling between the two image domains. Moreover, to handle large image sizes efficiently, the MA is extended to a multi-head mutual-attention (MMA) mechanism. Domain labels are also adopted to simplify the MMA-CycleGAN architecture, so only one generator is required to perform bidirectional translation. Experiments on multiple datasets demonstrate that MMA-CycleGAN learns rapidly and obtains photo-realistic images in a shorter time than CycleGAN.
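The cycle-consistency loss retained from CycleGAN is easy to state numerically: translating A→B→A should reproduce the input, so the loss penalises the L1 gap between x and F(G(x)). The "generators" below are toy invertible maps, purely for illustration of the loss itself.

```python
import numpy as np

def cycle_consistency_loss(x, g_ab, f_ba):
    """Mean L1 distance between x and its round-trip translation F(G(x))."""
    return float(np.mean(np.abs(f_ba(g_ab(x)) - x)))

x = np.linspace(0.0, 1.0, 16)           # stand-in for a flattened image
g_ab = lambda v: 2.0 * v + 1.0          # toy translator A -> B
good_f = lambda v: (v - 1.0) / 2.0      # exact inverse: near-zero cycle loss
bad_f = lambda v: v / 2.0               # wrong inverse: large cycle loss

loss_good = cycle_consistency_loss(x, g_ab, good_f)
loss_bad = cycle_consistency_loss(x, g_ab, bad_f)
```

In training, this term keeps the two generators mutually consistent, which is what allows unpaired data to constrain the translation.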
In this study, two distinct methods for detecting ripe and unripe tomatoes, with and without defects, in the crop field are described and compared, using images captured by a camera mounted on a mobile robot. One is a machine learning approach known as the 'Cascaded Object Detector' (COD); the other is a composition of traditional customised methods individually known as 'Colour Transformation', 'Colour Segmentation' and 'Circular Hough Transformation'. The (Viola-Jones) COD generates 'histogram of oriented gradients' (HOG) features to detect tomatoes, and ripeness is checked by computing the RGB mean under a set of rules. In the traditional methods, colour thresholding is applied to detect tomatoes against either a natural or a solid background, and the RGB colour is adjusted to identify ripened tomatoes. The traditional method, with a run-time complexity of O(n³) in the best and average cases, is shown to be feasible for microcontroller-based miniature electronic devices. Comparisons show that the accuracy of the machine learning method is 95%, better than that of the colour segmentation method implemented in MATLAB.
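A toy version of the "RGB mean with a set of rules" ripeness check mentioned above; the threshold values are our assumptions, not the paper's calibrated rules.

```python
def is_ripe(pixels):
    """pixels: list of (r, g, b) tuples from a detected tomato region.
    Rule of thumb: ripe tomatoes are red-dominant by a clear margin."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n     # mean red channel
    g = sum(p[1] for p in pixels) / n     # mean green channel
    return r > 120 and r > 1.5 * g        # hypothetical thresholds

ripe_patch = [(200, 60, 50)] * 10 + [(180, 70, 60)] * 5   # red-dominant region
unripe_patch = [(90, 140, 60)] * 15                        # green-dominant region

ripe_flag = is_ripe(ripe_patch)
unripe_flag = is_ripe(unripe_patch)
```

Rules of this form are cheap enough for microcontroller-class hardware, which matches the feasibility argument made for the traditional pipeline.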