A new class of augmented map application is introduced that can provide a user with detailed knowledge about any area. This brief focuses in particular on obtaining itinerary perception under different environmental conditions, that is, extracting traffic-related information from an augmented map. The problem is cast as a machine learning task in which the traffic distribution at different times (including the same day, different days and different weather) is observed continuously by a service robot, and the observations are modelled as a Gaussian process for subsequent estimation. The system consists of a vision sensor that acquires the region-of-interest input, which is queried against a database of traffic density distributions learned from the scenes at different points in time. A user interacting with the system obtains information about the region conditioned on environmental and timing events.
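The Gaussian-process estimation step can be sketched as follows. This is a minimal illustration with a hypothetical one-dimensional time-of-day input and made-up density observations, not the paper's actual pipeline; the function names and hyperparameters are assumptions for illustration.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    """Squared-exponential kernel between two 1D input arrays."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(x_train, y_train, x_query, noise=1e-2, length_scale=1.0):
    """Posterior mean and variance of a GP conditioned on observations."""
    K = rbf_kernel(x_train, x_train, length_scale) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_query, x_train, length_scale)
    K_ss = rbf_kernel(x_query, x_query, length_scale)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s @ alpha
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.diag(cov)

# Hypothetical hourly traffic-density observations (e.g. vehicles per frame)
hours = np.array([6.0, 8.0, 10.0, 12.0, 15.0, 18.0, 21.0])
density = np.array([5.0, 40.0, 22.0, 18.0, 20.0, 45.0, 8.0])

# Query the learned distribution at two unobserved times of day
mean, var = gp_predict(hours, density, np.array([9.0, 17.0]))
```

The posterior variance gives the user a measure of confidence in the reported traffic level at the queried time.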
Since 2003, the scan line corrector (SLC) of the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) sensor has been permanently out of operation, preventing the acquisition of 22% of the pixels in each Landsat 7 SLC-off image. This failure has seriously limited the scientific applications and usability of ETM+ data. Precise and complete recovery of the missing pixels in Landsat 7 SLC-off images is challenging, and an efficient gap-filling algorithm that improves the usability of ETM+ data remains in demand. In this study, a new gap-filling method is introduced that reconstructs SLC-off images from multi-temporal SLC-off auxiliary fill images. A correlation is established between corresponding pixels in the target SLC-off image and two auxiliary fill images in parallel, using a multiple linear regression model. Both simulated and actual defective Landsat 7 images were used to assess the performance of the proposed model against two multi-temporal methods, local linear histogram matching and the Neighbourhood Similar Pixel Interpolator. The quantitative evaluations indicate that the proposed method estimates the missing values accurately even for temporally distant fill images.
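The regression step can be illustrated with a minimal sketch on synthetic data: valid pixels of the target are regressed on two co-registered auxiliary images, and the fitted model predicts the gap pixels. The function name and the exactly linear test scene are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def fill_gaps_mlr(target, aux1, aux2):
    """Fill NaN gap pixels in `target` by ordinary least squares on
    two co-registered auxiliary acquisitions (multiple linear regression)."""
    gap = np.isnan(target)
    valid = ~gap
    # Design matrix: intercept + the two auxiliary images, valid pixels only
    X = np.column_stack([np.ones(valid.sum()), aux1[valid], aux2[valid]])
    coeffs, *_ = np.linalg.lstsq(X, target[valid], rcond=None)
    # Predict the missing pixels from the same auxiliary bands
    filled = target.copy()
    Xg = np.column_stack([np.ones(gap.sum()), aux1[gap], aux2[gap]])
    filled[gap] = Xg @ coeffs
    return filled

# Synthetic scene: the target is an exact linear mix of the two fill images
rng = np.random.default_rng(0)
aux1 = rng.uniform(0, 1, (64, 64))
aux2 = rng.uniform(0, 1, (64, 64))
target = 0.6 * aux1 + 0.3 * aux2 + 0.1
target[::8, :] = np.nan          # simulated SLC-off gap stripes
filled = fill_gaps_mlr(target, aux1, aux2)
```

In practice the regression would be fitted locally (e.g. in a moving window) rather than globally, since land-cover change breaks a single scene-wide linear relationship.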
Morphological profiles (MPs) efficiently model the geometrical features of structures in a scene and increase the discriminability between classes. The degree of processing depends on the geometrical structure and shape of the structuring element (SE) used in the transformation. Since the geometric structures of an image are not uniform across the whole image, a fixed SE shape may not be efficient. Hence, it is proposed to extract an edge patch image-based morphological profile (EPIMP), which uses SEs with different shapes for different areas of the image: the SE used in each patch corresponds to the shape (i.e. the edge image) of that patch. The proposed method is evaluated on both multispectral and hyperspectral images, and the results show that it is considerably more efficient than conventional MPs. Moreover, the experiments show the superiority of EPIMP over several state-of-the-art spectral-spatial classification methods, such as the generalised composite kernel, multiple feature learning, weighted joint collaborative representation and multiple-structure-element non-linear multiple kernel learning.
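A simplified morphological profile can be sketched in plain NumPy. Note the simplification: fixed square SEs of increasing size are used here, whereas the paper adapts the SE shape to the edge image of each patch.

```python
import numpy as np

def grey_dilate(img, size):
    """Flat grey-scale dilation with a (size x size) square SE, edge-padded."""
    r = size // 2
    p = np.pad(img, r, mode='edge')
    out = np.full_like(img, -np.inf, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out = np.maximum(out, p[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def grey_erode(img, size):
    """Erosion as the dual of dilation."""
    return -grey_dilate(-img, size)

def morphological_profile(img, sizes=(3, 5, 7)):
    """Stack the image with its openings and closings by square SEs of
    increasing size -- a basic MP feature stack for per-pixel classification."""
    profile = [img.astype(float)]
    for s in sizes:
        opening = grey_dilate(grey_erode(img, s), s)   # removes bright details
        closing = grey_erode(grey_dilate(img, s), s)   # fills dark details
        profile += [opening, closing]
    return np.stack(profile)

# A single bright pixel: the opening removes it, the closing preserves it
img = np.zeros((16, 16))
img[8, 8] = 1.0
mp = morphological_profile(img, sizes=(3,))
```

Each pixel's feature vector is the column through this stack; EPIMP would replace the square SE with a patch-specific edge-shaped SE before stacking.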
Automatically classifying an image has been a central problem in computer vision for decades, and a plethora of models has been proposed, from handcrafted feature solutions to more sophisticated approaches such as deep learning. The authors address remote sensing image classification, a problem important to many real-world applications. They introduce a novel deep recurrent architecture that incorporates high-level feature descriptors to tackle this challenging problem. Their solution is based on the general encoder–decoder framework; to the best of the authors' knowledge, this is the first study to use a recurrent network structure for this task. The experimental results show that the proposed framework outperforms previous work on the three datasets most widely used in the literature, achieving a state-of-the-art accuracy of 97.29% on the UC Merced dataset.
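To make the recurrent aggregation of high-level feature descriptors concrete, the sketch below runs a single untrained GRU encoder over a descriptor sequence and produces class probabilities. The dimensions, random weights and the use of an encoder-only GRU are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def gru_step(x, h, params):
    """One GRU cell update (random untrained weights for this sketch)."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = 1 / (1 + np.exp(-(Wz @ x + Uz @ h)))   # update gate
    r = 1 / (1 + np.exp(-(Wr @ x + Ur @ h)))   # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde

d_feat, d_hid, n_classes = 8, 16, 4
# Alternate input-to-hidden (d_hid x d_feat) and hidden-to-hidden matrices
params = tuple(rng.normal(0, 0.1, (d_hid, d_feat if i % 2 == 0 else d_hid))
               for i in range(6))
W_out = rng.normal(0, 0.1, (n_classes, d_hid))

# A sequence of hypothetical high-level feature descriptors for one image
sequence = rng.normal(size=(5, d_feat))
h = np.zeros(d_hid)
for x in sequence:                  # recurrent encoding of the descriptors
    h = gru_step(x, h, params)
logits = W_out @ h
probs = np.exp(logits) / np.exp(logits).sum()   # softmax over scene classes
```

In the real system the descriptors would come from a pretrained convolutional backbone and the recurrent weights would be learned end to end.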
This study presents a transfer learning method for addressing the insufficient-sample problem in hyperspectral image classification. To find a common feature representation for the source and target domains, a regulariser based on the Bregman divergence is introduced into the objective function of the subspace learning algorithm; it minimises the Bregman divergence between the distribution of the training samples in the source domain and that of the test samples in the target domain. Hyperspectral images with biased sampling are used to evaluate the effectiveness of the proposed method. The results show that it achieves higher classification accuracy than traditional subspace learning methods under biased sampling.
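The Bregman-divergence regulariser can be sketched generically. Taking the generating function F as the squared norm reduces the divergence to the squared Euclidean distance, and comparing projected sample means is a deliberate simplification of the distribution-level divergence used in the paper.

```python
import numpy as np

def bregman_divergence(p, q, F, grad_F):
    """Generic Bregman divergence D_F(p, q) = F(p) - F(q) - <grad F(q), p - q>."""
    return F(p) - F(q) - grad_F(q) @ (p - q)

sq = lambda v: float(v @ v)      # F(v) = ||v||^2 ...
grad_sq = lambda v: 2 * v        # ... whose Bregman divergence is ||p - q||^2

def domain_gap(source, target, W):
    """Divergence between projected source and target sample means --
    a simple stand-in for the paper's distribution-matching regulariser."""
    mu_s = (source @ W).mean(axis=0)
    mu_t = (target @ W).mean(axis=0)
    return bregman_divergence(mu_s, mu_t, sq, grad_sq)

# Synthetic source/target samples with a mean shift between domains
rng = np.random.default_rng(2)
source = rng.normal(0.0, 1.0, (100, 10))
target = rng.normal(0.5, 1.0, (80, 10))
W = np.linalg.qr(rng.normal(size=(10, 3)))[0]   # a random 3D subspace
gap = domain_gap(source, target, W)
```

In the full method this term is added, weighted, to the subspace learning objective, so the learned W trades off class discrimination against cross-domain alignment.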
A new scheme for pixel-based polarimetric synthetic aperture radar (PolSAR) classification of urban areas is proposed. First, the characteristics of urban backscattering are analysed, showing that the backscattering of buildings is very sensitive to their orientation. Second, by applying a Euler rotation to the polarimetric coherency matrix, a sequence of data with different rotation angles is simulated, and a polarimetric statistical feature vector is extracted from the simulated data. Finally, the feature vector, together with the four-component decomposition result, is fed into a multilayer perceptron neural network to obtain the classification result. The proposed scheme improves the accuracy of urban-area classification in PolSAR images, as verified on AIRSAR image data of San Francisco.
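The rotation of the coherency matrix can be sketched as follows. The rotation matrix is the standard unitary line-of-sight rotation of a PolSAR coherency matrix; the diagonal statistics are a simplified stand-in for the paper's statistical feature vector.

```python
import numpy as np

def rotate_coherency(T, theta):
    """Rotate a 3x3 polarimetric coherency matrix about the radar line of
    sight: T(theta) = R(theta) T R(theta)^H, with R unitary."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    R = np.array([[1, 0, 0],
                  [0, c, s],
                  [0, -s, c]], dtype=complex)
    return R @ T @ R.conj().T

def rotation_features(T, n_angles=8):
    """Real-valued statistics of the diagonal terms over a sweep of rotation
    angles -- a simplified per-pixel feature vector."""
    angles = np.linspace(0, np.pi / 2, n_angles, endpoint=False)
    diags = np.array([np.real(np.diag(rotate_coherency(T, a)))
                      for a in angles])
    return np.concatenate([diags.mean(axis=0), diags.std(axis=0)])

# A synthetic Hermitian coherency matrix for one pixel (illustrative values)
A = np.array([[2.0, 0.3 + 0.1j, 0.0],
              [0.3 - 0.1j, 1.0, 0.2j],
              [0.0, -0.2j, 0.5]])
features = rotation_features(A)
```

Because R is unitary, the total backscattered power (the trace of T) is invariant under the rotation, while the individual diagonal terms vary with angle, which is exactly the orientation sensitivity the feature vector captures.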
Tomographic SAR (TomoSAR) enables three-dimensional (3D) imaging, which is significant for urban 3D mapping. When the TomoSAR baseline is sparse, the scattering centres at different altitudes within the same resolution cell are aliased together, degrading imaging accuracy. To solve this problem, an altitude ambiguity suppression method based on dual-frequency interference and multi-frequency averaging is proposed. First, dual-frequency interference is used to obtain the radar cross-section (RCS) of the targets together with the sum of the cross-interference between targets and ambiguities at different positions. Then, multi-frequency averaging is used to suppress the power of the cross-interference, so that the RCS of each target is finally obtained accurately. Simulation results verify the effectiveness of the proposed method.
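The cross-interference suppression idea can be illustrated with a drastically simplified one-dimensional toy model: plain power averaging over frequency samples, not the paper's dual-frequency interferometric formulation. The amplitudes and phase slopes below are arbitrary illustrative values.

```python
import numpy as np

# Two point scatterers aliased in one TomoSAR resolution cell (simulated)
amp = np.array([1.0, 0.6])           # assumed scattering amplitudes
height_phase = np.array([0.8, 2.3])  # phase slope vs. frequency (height proxy)

def echo(k):
    """Superposed echo of both scatterers at frequency sample k."""
    return np.sum(amp * np.exp(1j * k * height_phase))

ks = np.arange(256)                              # frequency samples
single = np.abs(echo(0.0)) ** 2                  # one frequency: cross term biases power
averaged = np.mean([np.abs(echo(k)) ** 2 for k in ks])  # multi-frequency average

total_rcs = np.sum(amp ** 2)   # the value the averaging should approach
```

At a single frequency the cross term between the two scatterers biases the measured power; averaging over many frequencies makes the frequency-dependent cross-term phase average towards zero, leaving the sum of the individual RCS contributions.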
Street architecture plays an essential role in city image and streetscape analysis. However, existing approaches are supervised and require costly labelled data. To address this, an unsupervised classification framework for street architecture is proposed, based on Information Maximizing Generative Adversarial Nets (InfoGAN), in which the auxiliary distribution Q of InfoGAN is used as an unsupervised classifier. Experiments on a database of real street-view images of Nanjing, China validate the practicality and accuracy of the framework. Furthermore, a series of heuristic conclusions is drawn from the intrinsic information hidden in the images; these conclusions will help planners understand the architectural categories better.