Medical and biomedical uses of fields, radiations, and radioactivity; health physics
More specific concepts than this:
- Nuclear medicine, emission tomography
- Radioactive pollution and natural radioactivity: health aspects
- X-rays and particle beams (medical uses)
- Electric and magnetic fields (medical uses)
- Medical magnetic resonance imaging and spectroscopy
- Radiation protection in medical physics
- Sonic and ultrasonic radiation (medical uses)
- Microwaves and other electromagnetic waves (medical uses)
- Radiation dosimetry in medical physics
- Optical and laser radiation (medical uses)
- Preparation of radioactive materials for medical and biomedical uses
With the development of artificial intelligence and image-processing technology, intelligent diagnosis techniques are increasingly used in cervical cancer screening. Among them, the detection of cervical lesions in thin liquid-based cytology is the most common method. At present, most cervical cancer detection algorithms adopt object-detection techniques designed for natural images with only minor modifications, ignoring the specific, complex application scenario of lesion detection in cervical smear images. In this study, the authors combine domain knowledge of cervical cancer detection with the characteristics of pathological cells to design a network, proposing a booster for cervical cancer detection (CCDB). The booster consists mainly of two components: a refinement module and a spatial-aware module. The characteristics of cancer cells are fully considered, and the booster is lightweight and transplantable. To the best of the authors' knowledge, they are the first to design a CCDB according to the characteristics of cervical cancer cells. Compared with the baseline (RetinaNet), the sensitivity at four false positives per image and the average precision of the proposed method are improved by 2.79% and 7.2%, respectively.
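As one plausible way to realise such a pluggable booster, the PyTorch sketch below pairs a cheap residual refinement block with a spatial-attention module that can be dropped onto a detector's feature maps. The abstract does not give the CCDB architecture, so every layer choice here is an assumption.

```python
# Hypothetical sketch of a lightweight, pluggable "booster" combining a
# refinement block and a spatial-attention module; all details are assumptions.
import torch
import torch.nn as nn

class SpatialAwareModule(nn.Module):
    """Weights each spatial location by a learned attention map."""
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1),
                                  nn.Sigmoid())

    def forward(self, x):
        return x * self.attn(x)  # broadcast attention over channels

class Booster(nn.Module):
    """Drop-in module for a detector's feature maps (e.g. RetinaNet FPN levels)."""
    def __init__(self, channels):
        super().__init__()
        self.refine = nn.Sequential(  # refinement: cheap residual conv block
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True))
        self.spatial = SpatialAwareModule(channels)

    def forward(self, x):
        return self.spatial(x + self.refine(x))

feat = torch.randn(2, 256, 64, 64)     # one FPN level
print(Booster(256)(feat).shape)        # torch.Size([2, 256, 64, 64])
```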
Retinal vessel segmentation has important application value in clinical diagnosis. Manual segmentation of retinal vessels by experts is labour-intensive and highly subjective, while some existing automatic segmentation methods suffer from incomplete vessel segmentation and low accuracy. To address these problems, this study proposes a retinal vessel segmentation method based on a task-driven generative adversarial network (GAN). In the generative model, a U-Net network segments the retinal vessels. In the discriminative model, multi-scale discriminators with different receptive fields guide the generative model to produce more detail. In addition, given the uncontrollable character of data generated by a traditional GAN, a task-driven model based on perceptual loss is added to the traditional GAN for feature matching, making the generated images more task-specific. Experimental results show that the accuracy, sensitivity, specificity and area under the receiver operating characteristic curve of the proposed method on the DRIVE (Digital Retinal Images for Vessel Extraction) data set are 96.83%, 80.66%, 98.97% and 0.9830, respectively.
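A common way to implement perceptual-loss feature matching of the kind described is to compare generated and reference images in the feature space of a frozen pretrained network. The sketch below assumes a VGG-16 feature extractor and an L1 criterion, neither of which is confirmed by the abstract.

```python
# Minimal sketch of a perceptual (feature-matching) loss; the layer cut-off
# and criterion are assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    def __init__(self, layer_index=16):
        super().__init__()
        # Frozen VGG-16 features up to an intermediate ReLU layer
        self.features = vgg16(weights="IMAGENET1K_V1").features[:layer_index].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)
        self.criterion = nn.L1Loss()

    def forward(self, generated, target):
        # Compare deep features of generated and reference vessel maps
        return self.criterion(self.features(generated), self.features(target))

loss_fn = PerceptualLoss()
fake = torch.rand(1, 3, 256, 256)   # generator output (3-channel for VGG)
real = torch.rand(1, 3, 256, 256)   # reference segmentation rendered as image
print(loss_fn(fake, real).item())
```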
Computer-aided diagnosis (CAD) is a common tool for the detection of diseases, particularly different types of cancers, based on medical images. Digital image processing thus plays a significant role in the processing and analysis of medical images for disease identification and detection. In this study, an efficient CAD system for acute lymphoblastic leukaemia (ALL) detection is proposed. The proposed approach entails two phases. In the first phase, the white blood cells (WBCs) are segmented from the microscopic blood image. The second phase involves extracting important features, such as shape and texture features, from the segmented cells. Finally, Naïve Bayes and k-nearest neighbour classifiers are applied to the extracted features to classify the segmented cells as normal or abnormal. The performance of the proposed approach has been assessed through comprehensive experiments on the well-known ALL-IDB data set of microscopic blood images. The experimental results demonstrate the superior performance of the proposed approach over the state of the art, achieving an accuracy rate of 98.7%.
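The classification stage maps one feature vector per segmented cell to a normal/abnormal label. A minimal scikit-learn sketch follows, using random stand-in features in place of the actual shape and texture descriptors extracted from ALL-IDB.

```python
# Sketch of the classification phase under assumed shape/texture features;
# the WBC segmentation and ALL-IDB loading pipeline are omitted.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))      # stand-in feature vectors (shape + texture)
y = rng.integers(0, 2, size=200)    # 0 = normal cell, 1 = abnormal (blast) cell

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for clf in (GaussianNB(), KNeighborsClassifier(n_neighbors=5)):
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(type(clf).__name__, f"accuracy = {acc:.3f}")
```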
Facial tissue characteristics, probed with vibration signals, may be used for the detection of nasal diseases. In this study, the tissue characteristics were specified by applying constant-frequency vibration signals to the facial tissue. The temperature changes caused by an external vibration source applied to the human face were investigated using thermal imaging techniques. Vibrations were applied to the forehead (F), right cheek (MR) and left cheek (ML) regions of the facial tissue. Temperature differences were examined using dynamic and static analyses. Temperature increases of 500, 562 and 606 m°C were acquired in the F, MR and ML regions, respectively. While the F region has the lowest soft-tissue thickness and temperature difference, the ML region has the highest values; the temperature difference between the ML and F regions was 106 m°C. The temperature distributions of the facial area indicate that the temperature change is lower in regions where the soft-tissue thickness is low and higher in regions where it is high. Therefore, information about soft-tissue thickness can be obtained from the temperature distribution of the facial area after application of the vibration signal.
Diabetic maculopathy, a major cause of vision loss, occurs due to uncontrolled diabetes. It affects the retinal layers of the eye, causing bleeding of vessels, which results in diabetic macular edema (DME), macular detachment and related conditions. Three structural changes are involved in DME: cystoid macular edema (CME), serous macular detachment (SMD) and intra-retinal fluid (IRF). These changes may also coexist with each other, such as CME with SMD or CME with IRF. In this work, retinal images acquired through the spectral-domain optical coherence tomography (SD-OCT) imaging modality, which provides a high level of precision and resolution of the retinal layers, are utilised. An automated deep-learning algorithm to detect and differentiate seven types of DME is implemented using transfer learning on three convolutional neural networks (CNNs): ResNet-50, VGGNet and AlexNet. In the detection of DME, ResNet-50 performs best compared with AlexNet and VGGNet, because of its depth and skip-connection features. The average values of statistical parameters such as accuracy (0.993), F1 score (0.975) and Matthews correlation coefficient (MCC) (0.972) of ResNet-50 are high compared with those of AlexNet and VGGNet.
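A transfer-learning setup of the kind described, with a frozen ImageNet backbone and a new seven-class head, might look like the PyTorch sketch below; the input size, fine-tuning schedule and choice of frozen layers are assumptions.

```python
# Hedged sketch of transfer learning with ResNet-50 for 7-class DME grading;
# data loading and augmentation are omitted, and the batch is a stand-in.
import torch
import torch.nn as nn
from torchvision.models import resnet50

model = resnet50(weights="IMAGENET1K_V2")
for p in model.parameters():                   # freeze pretrained backbone
    p.requires_grad_(False)
model.fc = nn.Linear(model.fc.in_features, 7)  # 7 DME categories (trainable head)

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

batch = torch.randn(4, 3, 224, 224)  # stand-in SD-OCT B-scans resized to 224x224
labels = torch.tensor([0, 3, 5, 6])
loss = criterion(model(batch), labels)
loss.backward()
optimiser.step()
print(f"loss = {loss.item():.4f}")
```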
Since histopathological images exist in various forms, performing segmentation on them is tedious. In cancer-free colon tissue, epithelial cells generally have an elliptical shape; their structure alters in malignant tissue. This study proposes a technique consisting of colon biopsy image segmentation and a hybrid set of features for classification, evaluated on multiple databases at various levels of magnification. It presents a novel image segmentation method with multi-level thresholding based on Rényi's two-dimensional entropy with a cultural algorithm (2DRCA). Based on the entropy, elliptical epithelial cells, the region of interest, are identified against the segmented background. After successful segmentation, shape descriptors are extracted with morphological operations. Two sets of texture features (grey-level co-occurrence matrix and block-wise elliptical local binary pattern) are calculated from pre-processed grey-scale colon images. The extracted features are then concatenated into a hybrid feature vector for training and testing with a random forest classifier. The proposed segmentation and classification model is evaluated on four data sets consisting of various colon images at different magnifications, assessed by multiple performance measures and compared with existing techniques.
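The GLCM half of the hybrid texture features can be computed with scikit-image as sketched below; the distances, angles and chosen Haralick properties are assumptions, and the 2DRCA segmentation and elliptical LBP steps are omitted.

```python
# Sketch of the GLCM texture-feature stage feeding a random forest;
# patches and labels are stand-ins for segmented colon biopsy regions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(patch_u8):
    """Contrast/homogeneity/energy/correlation at distance 1, angle 0."""
    glcm = graycomatrix(patch_u8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, prop)[0, 0]
            for prop in ("contrast", "homogeneity", "energy", "correlation")]

rng = np.random.default_rng(1)
patches = rng.integers(0, 256, size=(60, 32, 32), dtype=np.uint8)
X = np.array([glcm_features(p) for p in patches])
y = rng.integers(0, 2, size=60)     # 0 = benign, 1 = malignant (stand-in labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```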
Accurately segmenting lungs from CT images is a fundamental step for quantitative analysis of lung diseases. However, it is still a challenging task because of interfering factors such as juxta-pleural nodules, pulmonary inflammation and individual anatomical variation. In this study, the authors propose a novel algorithm that combines a superpixel approach with a hybrid model composed of a convolutional neural network and a random forest (CNN-RF) to segment lungs from CT images automatically and accurately. The lung segmentation covers three main stages: image preprocessing, lung segmentation and segmentation refinement. A lung CT image, denoised with a fractional-order grey similarity approach, is first segmented into a set of superpixels, and the CNN-RF model is then employed to classify the superpixels and identify lungs in the CT image. The segmentation result is further refined by separating the left and right lungs, eliminating the trachea and correcting the lung contours. Experiments show that the algorithm generates accurate lung segmentation results, with a 94.98% Jaccard index and a 97.99% Dice similarity coefficient against ground truths, and that it outperforms several feature-based machine learning techniques and current lung segmentation methods.
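The superpixel stage could be realised with SLIC, as in the sketch below; in the full pipeline each superpixel would be classified by the CNN-RF model, which is replaced here by a toy per-superpixel descriptor.

```python
# Minimal sketch of the superpixel stage with SLIC; denoising, the CNN-RF
# classifier and the refinement steps are omitted.
import numpy as np
from skimage.segmentation import slic

ct_slice = np.random.rand(256, 256)   # stand-in for a denoised CT slice
segments = slic(ct_slice, n_segments=300, compactness=0.1, channel_axis=None)

# In the full pipeline each superpixel would go to the CNN-RF model;
# here we just compute a mean-intensity descriptor per superpixel.
descriptors = np.array([ct_slice[segments == s].mean()
                        for s in np.unique(segments)])
print(len(np.unique(segments)), "superpixels,", descriptors.shape[0], "descriptors")
```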
The novel coronavirus has spread rapidly across the globe. The current testing rate fails to match the exponential rise in cases, and the available testing methodologies are expensive and time-consuming; a sensitive automated diagnosis is therefore one of the biggest needs of the hour. In the proposed work, the authors analyse chest X-ray images of normal, pneumonia and coronavirus disease 2019 (COVID-19) patients and process them to boost COVID-specific features (opacities etc.), enabling sensitive identification of COVID-19 patients. The sets of original and processed images are used with a stack of pre-trained deep models for ensemble learning. VGG-16 networks, trained with a diverse set of inputs, serve as base learners, followed by a logistic regression model, the meta-learner, that combines the base-learner predictions. The proposed fusion-based model is trained and tested for three types of classification: TYPE-I, binary (NORMAL/ABNORMAL); TYPE-II, binary (PNEUMONIA/COVID-19); and TYPE-III, multi-class (NORMAL/PNEUMONIA/COVID-19). The diagnosis results are promising, with high accuracy and sensitivity values in all cases. The proposed algorithm can assist medical experts in the quick identification and isolation of COVID-19 patients, thereby mitigating the effect of the virus.
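The stacking scheme reduces to training a logistic-regression meta-learner on the concatenated class probabilities of the base learners. The sketch below substitutes random stand-in probabilities for real VGG-16 outputs; shapes and class ordering are assumptions.

```python
# Hedged sketch of stacking: base-learner probabilities feed the meta-learner.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_images, n_base, n_classes = 300, 3, 3    # 3 VGG-16 base learners, 3 classes

# Stand-in for softmax outputs of each base learner on each image
base_probs = rng.dirichlet(np.ones(n_classes), size=(n_images, n_base))
meta_X = base_probs.reshape(n_images, n_base * n_classes)
y = rng.integers(0, n_classes, size=n_images)  # NORMAL / PNEUMONIA / COVID-19

meta = LogisticRegression(max_iter=1000).fit(meta_X, y)
print("meta-learner training accuracy:", meta.score(meta_X, y))
```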
Brain tumour detection is still a challenging problem in both medicine and image processing. In this study, two approaches are applied to diagnose human brain tumours: classical image processing (CIP) and a new algorithm called binary image with variable fuzzy level (BIVFL). Magnetic resonance images (MRI) from the BraTS 2019 database are processed. Edge detection and prognosis-area cropping are parts of the computationally demanding CIP algorithm; for the CIP, the best filter is a combination of the horizontal and vertical modes of the Prewitt filter. In the BIVFL, by varying the fuzzy level used to binarise the images, two clusters are formed, containing tumour and non-tumour areas, and the tumour area is extracted at varying accuracies. The sensitivity, specificity, precision and accuracy of the BIVFL vary with the fuzzy level; the best sensitivity value is 0.9, while the other three parameters reach 1 over an interval of fuzzy levels. The BIVFL is a simple and fast tumour-area extraction algorithm that is also easy to implement on digital signal processors. The histogram and power signal-to-noise ratio of the BIVFL are remarkable.
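While the abstract does not define the BIVFL membership rules, the core idea of sweeping a binarisation level and extracting a candidate tumour mask at each setting can be sketched as follows; the threshold mapping is an assumption.

```python
# Illustrative sketch of binarisation at a variable "fuzzy level"; the actual
# BIVFL rules are not given in the abstract.
import numpy as np

def binarise(mri_slice, fuzzy_level):
    """Return a binary mask; fuzzy_level in [0, 1] scales the threshold."""
    lo, hi = mri_slice.min(), mri_slice.max()
    return mri_slice >= lo + fuzzy_level * (hi - lo)

slice_ = np.random.rand(128, 128)   # stand-in for a BraTS MRI slice
for level in (0.6, 0.7, 0.8):
    mask = binarise(slice_, level)
    print(f"fuzzy level {level}: candidate tumour area = {mask.sum()} px")
```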
Diagnostic medical imaging plays an imperative role in the clinical assessment and treatment of medical abnormalities. The fusion of multimodal medical images merges complementary information present in the multi-source images and provides a better interpretation with improved diagnostic accuracy. This paper presents a CT–MR neurological image fusion method using an optimised biologically inspired neural network in the nonsubsampled shearlet transform (NSST) domain. The NSST-decomposed coefficients are used to activate the neural model, optimised with the particle swarm optimisation method, and to generate the firing maps. Low- and high-frequency NSST sub-bands are fused using a max rule based on the firing maps. In the optimisation process, a fitness function is evaluated based on the spatial frequency and edge index of the resultant fused image. To analyse the fusion performance, extensive experiments are conducted on different CT–MR neurological image data sets. Objective performance is evaluated with different metrics to highlight the clarity, contrast, correlation, visual quality, complementary information, salient information and edge information present in the fused images. Experimental results show that the proposed method provides better-fused images and outperforms other existing methods in both visual and quantitative assessments.
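The spatial-frequency term of the fitness function follows a standard definition, sketched below; the edge index, the NSST decomposition and the PSO loop are omitted, and it is an assumption that the paper uses exactly this formulation.

```python
# Sketch of the standard spatial-frequency image metric.
import numpy as np

def spatial_frequency(img):
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

fused = np.random.rand(256, 256)    # stand-in fused CT-MR slice
print(f"spatial frequency = {spatial_frequency(fused):.4f}")
```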
In recent years, data mining and algorithm-based methods have been used frequently for the prediction and diagnosis of various diseases. Traumas, among the most significant health problems in the world, are also one of the most important causes of death. This study aims to predict, by machine learning methods, the presence of traumatic pathology in the lungs of patients admitted to the emergency department due to blunt thorax trauma, with no prior X-ray or computed tomography (CT) history. Among the models developed with 5-fold cross-validation, the ensemble (voting) classifier most accurately classifies whether there is a pathology on X-ray (mean accuracy = 0.82) and CT (mean accuracy = 0.83). The k-nearest neighbour method classifies patients with pathology on X-ray with 83% accuracy, while the ensemble (voting) method classifies non-pathology patients with 94% accuracy. For the CT results, the random forest, ensemble (voting) and ensemble (stacking) classifiers classify non-pathology patients with 96% accuracy, while patients with pathology are classified with 77% accuracy. As a result, a mathematical framework using data mining methods is proposed for estimating the X-ray and CT findings of blunt thorax trauma.
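A 5-fold cross-validated soft-voting ensemble of the kind evaluated here can be set up in a few lines of scikit-learn; the constituent estimators and the stand-in features below are assumptions.

```python
# Minimal sketch of the ensemble (voting) evaluation with 5-fold CV.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))      # stand-in clinical/trauma features
y = rng.integers(0, 2, size=400)    # 1 = pathology present on imaging

vote = VotingClassifier(
    estimators=[("knn", KNeighborsClassifier()),
                ("rf", RandomForestClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft")
scores = cross_val_score(vote, X, y, cv=5)
print(f"5-fold mean accuracy = {scores.mean():.2f}")
```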
Vision is one of the most valued of human sensory perceptions, and vision loss is associated with a significant decrease in quality of life as well as serious medical, psychological, social and financial consequences. Because of the high value people place on vision, ophthalmologists often find that patients are motivated to take an active role in reducing their risk of vision loss. Age-related macular degeneration (AMD) is the leading cause of irreversible vision loss in the western world. Despite major advances in treating this condition over the past two decades, additional efforts are needed to significantly alter current rates of visual decline due to AMD. This unmet need provides an opportunity to use home monitoring technology to enable self-aware AMD patients to preserve their vision. Remote patient monitoring is growing in clinical applicability generally, and AMD is an excellent target for this valuable approach to patient care. The ForeseeHome® preferential hyperacuity perimeter is a telemedicine home-based monitoring system that has been proven to improve visual outcomes in patients suffering from AMD. The development of ForeseeHome is the result of a global cooperative effort to change the lives of people with AMD by using a simple at-home test. As is true for other successes in biomedicine, this programme was founded on excellent basic science, strong engineering, an experienced and dedicated team, and well-designed clinical trials showing unquestionable efficacy.
This study proposes a GAN-based reconstruction method, the cascaded data-consistency generative adversarial network (CDCGAN), to recover high-quality PET images from filtered back-projection PET images with streaking artifacts and high noise. First, the authors embed a data-consistency (DC) layer in the generator network to constrain the reconstruction process and accurately adjust the generated PET images. Second, to improve reconstruction accuracy, the generator network is built iteratively to achieve better performance with simple structures. The proposed CDCGAN preserves fine anomalous features while eliminating the streaking artifacts and noise. Experimental results show that PET images reconstructed by the method compare well with other state-of-the-art methods but are obtained at a faster speed. A clinical experiment was also performed to show the validity of the CDCGAN for artifact reduction.
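One plausible reading of a projection-domain data-consistency correction is sketched below with scikit-image's Radon tools: the current estimate is re-projected, compared with the measured sinogram, and the filtered back-projection of the residual is added back. The paper's actual DC layer is not specified in the abstract, so the operators and step size are assumptions.

```python
# Hedged sketch of a data-consistency correction in the projection domain.
import numpy as np
from skimage.transform import radon, iradon

theta = np.linspace(0.0, 180.0, 60, endpoint=False)

def project(x):
    return radon(x, theta=theta, circle=False)

def dc_correct(x_gen, sino_measured, step=1.0):
    residual = sino_measured - project(x_gen)
    return x_gen + step * iradon(residual, theta=theta,
                                 filter_name="ramp", circle=False)

phantom = np.zeros((64, 64)); phantom[24:40, 24:40] = 1.0   # toy activity map
y_measured = project(phantom)
x0 = np.random.rand(64, 64) * 0.1                           # crude network output
x1 = dc_correct(x0, y_measured)
print("sinogram error before/after DC:",
      np.abs(project(x0) - y_measured).mean(),
      np.abs(project(x1) - y_measured).mean())
```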
This work aims at developing an automated ensemble-based glioma grade classification framework that classifies gliomas into low-grade glioma (LGG) and high-grade glioma (HGG). Discriminant features are extracted using a Gabor filter bank and concatenated in vectorised form. The feature set is then divided into k subsets. An ensemble of base classifiers known as a rotation forest is employed for classification. Independent component analysis (ICA) is applied to every feature subset and independent features are extracted. Each classifier in the ensemble is trained with these independent features from all the feature subsets; the k feature subsets are responsible for different rotations during the training phase, which produces classifier diversity in the ensemble. The synthetic minority over-sampling technique (SMOTE), a data-augmentation technique, is applied to oversample minority-class samples and alleviate the class imbalance problem. Extensive experiments are conducted on the benchmark BraTS 2017 data set, and comparative analysis reveals that the proposed framework outperforms competitive techniques in terms of various performance metrics, achieving an accuracy of 98.63%, a Dice similarity coefficient of 0.98 and a sensitivity of 0.96. The authors conduct comparative experiments with state-of-the-art ensemble-based, deep-learning-based and traditional machine-learning-based classification approaches to validate the performance of the proposed framework.
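The Gabor filter-bank feature extraction can be sketched with scikit-image as below; the frequencies, orientations and summary statistics are assumptions, and the rotation-forest ensemble itself is not shown.

```python
# Sketch of Gabor filter-bank feature extraction from an MRI patch.
import numpy as np
from skimage.filters import gabor

def gabor_feature_vector(img):
    feats = []
    for frequency in (0.1, 0.2, 0.4):
        for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            real, imag = gabor(img, frequency=frequency, theta=theta)
            mag = np.hypot(real, imag)
            feats += [mag.mean(), mag.std()]   # summary statistics per filter
    return np.array(feats)

mri_patch = np.random.rand(64, 64)             # stand-in BraTS patch
print(gabor_feature_vector(mri_patch).shape)   # (24,) = 3 freqs x 4 angles x 2 stats
```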
Limited labelled data is a challenge in the field of medical imaging: large numbers of labelled images are paramount for training machine learning algorithms and for measuring the performance of image processing algorithms. The purpose of this study is to construct synthetic, labelled optical coherence tomography (OCT) data to address the problems of access to accurately labelled data and evaluation of processing algorithms. A modified active shape model is used that considers the anatomical features of available images, such as the number and thickness of the layers and their associated brightness, the location of retinal blood vessels, and shadow information with respect to speckle noise. The algorithm can also provide different data sets with varying noise levels. The validity of the proposed method for the synthesis of retinal images is measured by two methods: qualitative assessment and quantitative analysis.
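Producing data sets at varying noise levels typically means injecting multiplicative speckle into the clean synthetic B-scan; a minimal sketch follows, with the noise model and levels as assumptions.

```python
# Minimal sketch of noise-level variants of a synthetic B-scan; the
# active-shape-model layer synthesis itself is omitted.
import numpy as np

def add_speckle(clean, sigma):
    """Multiplicative speckle: I_noisy = I * (1 + sigma * N(0, 1))."""
    rng = np.random.default_rng(42)
    return np.clip(clean * (1.0 + sigma * rng.normal(size=clean.shape)), 0, 1)

clean_bscan = np.tile(np.linspace(0.2, 0.8, 128), (128, 1))  # layered stand-in
for sigma in (0.05, 0.15, 0.30):                             # varying noise level
    noisy = add_speckle(clean_bscan, sigma)
    print(f"sigma={sigma}: residual std = {np.std(noisy - clean_bscan):.3f}")
```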
Image processing applications contribute remarkably to modern ophthalmology. This technology is designed to analyse the characteristics of images of the human eye microvasculature. The retinal microvasculature is an excellent non-invasive screening window for the assessment of systemic diseases such as diabetes, hypertension and stroke. Retinal microvasculature characteristics, such as widened vessel diameter, are recognised as analysable features for predicting the progression of stroke or transient ischaemic attack. Thus, in this study, a computer-assisted method has been developed for this task applying the Euclidean distance transform (EDT) technique. The newly developed algorithm computes the Euclidean distance of the remaining white pixels within the area of interest. The Central Light Reflex Image Set (CLRIS) and Vascular Disease Image Set (VDIS) of the Retinal Vessel Image set for Estimation of Width database were used for performance evaluation; the proposed algorithm showed accuracies of 98.1% and 97.7% for CLRIS and VDIS, respectively. The significantly high accuracy of this newly developed vessel-diameter quantification algorithm indicates excellent potential for further development, evaluation, validation and integration into ophthalmic diagnostic instruments.
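The EDT-based width estimate rests on a simple property: on a binary vessel mask, the distance transform peaks on the centreline at half the local width. The sketch below demonstrates this on a toy mask; the actual database pipeline is omitted.

```python
# Sketch of EDT-based diameter estimation on a binary vessel mask.
import numpy as np
from scipy.ndimage import distance_transform_edt

vessel_mask = np.zeros((64, 64), dtype=bool)
vessel_mask[28:36, :] = True                 # toy straight vessel, 8 px wide

edt = distance_transform_edt(vessel_mask)    # distance to nearest background
centreline_radius = edt.max(axis=0)          # per-column peak distance
diameters = 2 * centreline_radius
print(f"estimated diameter ~ {diameters.mean():.1f} px (true width 8 px)")
```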
Segmentation of tissues in brain magnetic resonance (MR) images has a crucial role in computer-aided diagnosis (CAD) of various brain diseases. However, due to the complex anatomical structure and the presence of the intensity non-uniformity (INU) artefact, the segmentation of brain MR images is a complicated task. In this study, the authors propose a novel locally influenced fuzzy C-means (LIFCM) clustering method for the segmentation of tissues in brain MR images. The proposed method incorporates local information in the clustering process to achieve accurate labelling of pixels. A novel local influence factor is proposed, which estimates the influence of a neighbouring pixel on the centre pixel. Furthermore, a kernel-induced distance is introduced in LIFCM to deal with complex brain MR data and produce effective segmentation. To evaluate the performance of the proposed method, one simulated and one real MRI data set are used. Extensive experimental findings suggest that the method not only produces effective segmentation but also retains crucial image details. A statistical significance test further supports the experimental observations.
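The abstract does not fully specify the local influence factor, but the flavour of neighbourhood-aware fuzzy C-means can be sketched by running standard FCM on a locally smoothed intensity image, as below; this is a crude stand-in for LIFCM, not the authors' method.

```python
# Illustrative sketch: standard fuzzy C-means on pixel intensities, with the
# image pre-smoothed so each pixel carries some neighbourhood influence.
import numpy as np
from scipy.ndimage import uniform_filter

def fcm(values, n_clusters=3, m=2.0, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centres = rng.choice(values, n_clusters, replace=False)
    for _ in range(n_iter):
        d = np.abs(values[:, None] - centres[None, :]) + 1e-9   # (N, C)
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)                       # memberships
        um = u ** m
        centres = (um * values[:, None]).sum(axis=0) / um.sum(axis=0)
    return u.argmax(axis=1), centres

img = np.random.rand(64, 64)                 # stand-in brain MR slice
img_local = uniform_filter(img, size=3)      # inject neighbourhood information
labels, centres = fcm(img_local.ravel())
print("cluster centres:", np.round(centres, 3))
```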
This Letter illustrates prefrontal haemodynamics as a neurovascular basis of inter-personal working memory differences. Functional near-infrared spectroscopy with a sampling frequency of ∼2 Hz is used to record blood oxyhaemoglobin and deoxyhaemoglobin signals from 19 subjects engaged in a working-memory task involving the encoding and retrieval of ten learned symbol–meaning associations. Individual differences in working memory performance are classified by supervised-learning-based linear discriminant analysis and ensemble classifiers. Prior to classification, individual performance is labelled as high, moderate or low on the basis of a performance index. Spontaneous haemodynamic activity and task-evoked responses are marked as background and foreground signals, respectively, and are scaled by means of stream-independent and stream-dependent models. The classifiers' performance shows that stream-dependent model-based feature construction and classification substantially improves classification accuracy compared with the stream-independent and no-gain models. To understand the neurovascular basis of the inter-individual performance differences, diffused voxel plots are constructed. The voxel plots show that concurrent activation of the orbitofrontal and dorsolateral prefrontal cortex may be associated with higher working memory performance.
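The supervised classification step, with high/moderate/low performance labels, can be sketched with scikit-learn's LDA as below; the feature dimensions and trial counts are stand-ins.

```python
# Sketch of LDA classification of haemodynamic features; features are stand-ins.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(19 * 10, 8))      # stand-in HbO/HbR features per trial
y = rng.integers(0, 3, size=19 * 10)   # 0 = low, 1 = moderate, 2 = high performer

lda = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(lda, X, y, cv=5).mean())
```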
Single-modality brain–computer interface (BCI) systems often mislabel electroencephalography (EEG) signals as a command even though the participant is not executing any task. In this Letter, the classification of different working-memory load levels is presented using a hybrid BCI system. N-back cognitive tasks (0-back, 2-back and 3-back) are used to create working-memory load on participants while EEG and functional near-infrared spectroscopy (fNIRS) signals are recorded simultaneously. A combination of statistically significant features obtained from EEG and fNIRS frontal-region channels is used to classify the different N-back conditions. Kernel-based support vector machine (SVM) classifiers are employed with and without cross-validation schemes. A classification accuracy of 100% is achieved for the binary classification of 0-back against 2-back and 0-back against 3-back using linear, quadratic and cubic SVMs under a holdout data-division protocol.
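The kernel-SVM comparison under a holdout protocol might look like the following scikit-learn sketch; the combined EEG + fNIRS feature matrix is a random stand-in.

```python
# Sketch of the linear/quadratic/cubic SVM comparison with holdout division.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 16))      # combined EEG + fNIRS frontal features
y = rng.integers(0, 2, size=120)    # e.g. 0-back vs 2-back (binary task pair)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, clf in (("linear", SVC(kernel="linear")),
                  ("quadratic", SVC(kernel="poly", degree=2)),
                  ("cubic", SVC(kernel="poly", degree=3))):
    print(name, "holdout accuracy:", clf.fit(X_tr, y_tr).score(X_te, y_te))
```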
Wilson's disease (WD) is caused by the excessive accumulation of copper in the brain and liver, leading to death if not diagnosed early. WD manifests as white matter hyperintensity (WMH) in MRI scans. It is challenging and tedious to distinguish WD from controls by visual comparison, primarily due to subtle differences in WMH. This Letter presents a computer-aided diagnosis-based automated classification strategy that uses optimised transfer learning (TL) with two architectures: (i) MobileNet and (ii) the Visual Geometry Group 19-layer network (VGG-19). Further, the authors benchmark the TL systems against a machine learning (ML) paradigm. Using four-fold augmentation, VGG-19 is superior to MobileNet, demonstrating accuracy and area under the curve (AUC) pairs of 95.46 ± 7.70%, 0.932 (p < 0.0001) and 86.87 ± 2.23%, 0.871 (p < 0.0001), respectively. Further, MobileNet and VGG-19 showed improvements of 3.4% and 13.5%, respectively, when benchmarked against the ML-based soft classifier, random forest.