Cognitive Computation and Systems
Volume 1, Issue 2, July 2019
c²AIDER: cognitive cloud exoskeleton system and its applications
- Author(s): Yilin Wang; Hong Cheng; Lei Hou
- Source: Cognitive Computation and Systems, Volume 1, Issue 2, pp. 33–39
- DOI: 10.1049/ccs.2018.0012
- Type: Article
Lower-extremity exoskeleton systems have been widely applied in walking assistance, rehabilitation, and augmentation, but most rely solely on human-exoskeleton movement collaboration and cannot analyse the cognitive load and pressure of their pilots. Cognitive exoskeleton systems reinforce the cognitive cooperation of human-exoskeleton systems through perception and assessment, and cognitive cloud exoskeleton systems further enhance the continuous learning and transfer learning of exoskeleton systems through a cloud brain platform. This paper presents a cognitive cloud exoskeleton system, the Cognitive Cloud Assistive Device for paralysed patients (c²AIDER). The main idea is that cooperation between the c²AIDER system and its pilots becomes more intelligent and natural through the cloud brain platform, whose high-performance computing provides better walking assistance for pilots.
Learning cross-modal visual-tactile representation using ensembled generative adversarial networks
- Author(s): Xinwu Li; Huaping Liu; Junfeng Zhou; FuChun Sun
- Source: Cognitive Computation and Systems, Volume 1, Issue 2, pp. 40–44
- DOI: 10.1049/ccs.2018.0014
- Type: Article
In this study, the authors develop a deep learning model that converts visual information into tactile information, so that after training, different texture images can be fed back as tactile signals close to real tactile sensation. The study focuses on classifying the visual information of different images and producing the corresponding tactile feedback. A training model based on ensembled generative adversarial networks is proposed, which is simple to train and yields stable results. Whereas previous approaches judged tactile output mainly through subjective human perception, this study also provides an objective, quantitative evaluation system to verify the model's performance. The experimental results show that the model can transform the visual information of an image into tactile information close to real tactile sensation, and they also validate the tactile evaluation method.
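The abstract does not describe the authors' architecture, but the ensembling idea can be sketched generically: K independently trained generators each map an image feature vector to a tactile signal, and the ensemble averages their outputs, which tends to stabilise GAN results. The linear "generators" and the dimensions below are illustrative placeholders, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
K, feat_dim, tactile_dim = 5, 16, 32

# Stand-ins for K independently trained generator networks:
# each is a linear map from image features to a tactile signal.
generators = [rng.normal(size=(tactile_dim, feat_dim)) / np.sqrt(feat_dim)
              for _ in range(K)]

def ensemble_generate(features):
    """Average the K generators' tactile outputs for one feature vector."""
    outputs = np.stack([W @ features for W in generators])  # (K, tactile_dim)
    return outputs.mean(axis=0)                             # (tactile_dim,)

signal = ensemble_generate(rng.normal(size=feat_dim))
assert signal.shape == (tactile_dim,)
```

Averaging over independently trained generators reduces the variance of any single generator's output, which is one common motivation for GAN ensembles.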
Relative sensor registration with two-step method for state estimation
- Author(s): Quanbo Ge; Tianxiang Chen; Zhansheng Duan; Mingxin Liu; Zhuyun Niu
- Source: Cognitive Computation and Systems, Volume 1, Issue 2, pp. 45–54
- DOI: 10.1049/ccs.2018.0006
- Type: Article
State estimation faces new and challenging problems in multi-platform, multi-sensor observation systems. An important problem in multi-sensor integration is that data from the local sensors must be transformed into a common reference frame free of systematic (registration) bias. This study discusses the relative sensor registration problem: measurements from the local sensor are aligned with those of the global sensor under the assumption that the global sensor is bias-free and all biases reside with the local sensor. Traditional methods fail when the attitude bias becomes large, because the error caused by linearising the rotation matrix grows with the attitude bias. Motivated by this, a two-step method is established: the measurement bias is first estimated with an augmented extended Kalman filter in the local sensor coordinate frame, independently of the attitude and location biases, and the unit-quaternion method is then introduced to compute the attitude and location biases. The proposed method thereby avoids the problem faced by traditional methods. Simulation examples verify the proposed method by comparison with an existing linear least-squares algorithm.
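The abstract does not give the authors' equations, but the unit-quaternion step can be illustrated with Horn's closed-form method for estimating the rotation between two point sets: unlike a linearised rotation matrix, it involves no small-angle approximation, so it remains exact for arbitrarily large attitude bias. The point sets and the 60° yaw bias below are synthetic; this is a sketch of the quaternion technique, not the paper's algorithm.

```python
import numpy as np

def quat_to_rot(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def attitude_from_pairs(local_pts, global_pts):
    """Estimate the rotation mapping local-frame points onto global-frame
    points with Horn's closed-form unit-quaternion method."""
    p = local_pts - local_pts.mean(axis=0)   # centre both point sets
    q = global_pts - global_pts.mean(axis=0)
    S = p.T @ q                              # 3x3 cross-covariance
    # Horn's symmetric 4x4 matrix; its top eigenvector is the quaternion.
    N = np.array([
        [S[0,0]+S[1,1]+S[2,2], S[1,2]-S[2,1],         S[2,0]-S[0,2],         S[0,1]-S[1,0]],
        [S[1,2]-S[2,1],        S[0,0]-S[1,1]-S[2,2],  S[0,1]+S[1,0],         S[2,0]+S[0,2]],
        [S[2,0]-S[0,2],        S[0,1]+S[1,0],         -S[0,0]+S[1,1]-S[2,2], S[1,2]+S[2,1]],
        [S[0,1]-S[1,0],        S[2,0]+S[0,2],         S[1,2]+S[2,1],         -S[0,0]-S[1,1]+S[2,2]],
    ])
    vals, vecs = np.linalg.eigh(N)
    return quat_to_rot(vecs[:, np.argmax(vals)])

# Synthetic check: recover a known 60-degree yaw bias exactly.
rng = np.random.default_rng(0)
true_R = quat_to_rot([np.cos(np.pi/6), 0.0, 0.0, np.sin(np.pi/6)])
local = rng.normal(size=(20, 3))
global_ = local @ true_R.T                   # q_i = R p_i, no noise
R_hat = attitude_from_pairs(local, global_)
assert np.allclose(R_hat, true_R, atol=1e-6)
```

Because the rotation comes from an eigen-decomposition rather than a first-order expansion, the estimate does not degrade as the attitude bias grows, which is the failure mode of linearised registration the abstract describes.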
Eye movements during change detection: the role of depth of field
- Author(s): Tingting Zhang; Ling Xia; Xiaofeng Liu; Xiaoli Wu
- Source: Cognitive Computation and Systems, Volume 1, Issue 2, pp. 55–59
- DOI: 10.1049/ccs.2019.0003
- Type: Article
This study investigates eye movements when detecting changes in scenes with different depths of field. A within-subjects experiment was conducted using a flicker paradigm to induce the change-blindness phenomenon. The experiment investigated two main factors: depth of field and position. A Tobii X120 eye-tracker recorded participants' eye movements as they looked for changes in the flickering scenes. It was concluded that a small depth of field can indeed direct viewers' attention to the sharp area. The size of the depth of field did not influence the time needed for change detection, whereas uniform blur facilitated it.
In this issue:
- c²AIDER: cognitive cloud exoskeleton system and its applications
- Learning cross-modal visual-tactile representation using ensembled generative adversarial networks
- Relative sensor registration with two-step method for state estimation
- Eye movements during change detection: the role of depth of field
Most cited content for this Journal
A review on manipulation skill acquisition through teleoperation-based learning from demonstration
- Author(s): Weiyong Si; Ning Wang; Chenguang Yang
- Type: Article

Ensemble learning-based classification of microarray cancer data on tree-based features
- Author(s): Guesh Dagnew; B.H. Shekar
- Type: Article

Development of numerical cognition in children and artificial systems: a review of the current knowledge and proposals for multi-disciplinary research
- Author(s): Alessandro Di Nuovo; Tim Jay
- Type: Article

Medical image encryption algorithm based on hyper-chaotic system and DNA coding
- Author(s): Mingzhen Li; Shuaihao Pan; Weiming Meng; Wang Guoyong; Zhihang Ji; Lin Wang
- Type: Article

Research and sustainable design of wearable sensor for clothing based on body area network
- Author(s): Ren Xiangfang; Shen Lei; Liu Miaomiao; Zhang Xiying; Chen Han
- Type: Article