Feature detection and matching on atmospheric nuclear detonation video

Automated feature matching of nuclear detonations (NUDETs) enables three-dimensional point cloud reconstruction and the establishment of a volume-based model, reducing uncertainty when estimating the yield of NUDETs solely from video. Establishing a volume-based model requires feature correspondences across wide viewpoints of 58°–110°, wider than scale-invariant feature transform-based techniques can reliably match. The presented technique detects relatively bright features in the NUDET, known as ‘hotspots,’ and matches them across wide viewpoints using a spherical object model. Results show that hotspots can be detected with a 71.95% hit rate and 86.03% precision. Hotspots are matched to films from different viewpoints with 76.6% correctness and a standard deviation of 16.4%. Hotspot descriptors are also matched in time sequence with 99.6% correctness and a standard deviation of 1.07%. The results demonstrate that a spherical model can serve as a viable descriptor model for matching across wide viewpoints when the object is known to be spherical. They also demonstrate an automated feature detection and matching combination that enables features to be matched from unsynchronised video across wide viewpoints of 58°–110° on spherical objects, where state-of-the-art techniques are insufficient.
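The detection results are reported as a hit rate and a precision. These follow the standard definitions of recall and precision over true-positive, false-positive, and false-negative detections; as a minimal sketch (the counts below are illustrative only, not the paper's actual tallies):

```python
# Hedged sketch: hit rate (recall) and precision as used to score
# hotspot detection. The counts passed in are hypothetical examples;
# the paper's ground-truth matching criterion is not specified here.

def detection_scores(true_positives: int, false_positives: int,
                     false_negatives: int) -> tuple[float, float]:
    """Hit rate = TP / (TP + FN); precision = TP / (TP + FP)."""
    hit_rate = true_positives / (true_positives + false_negatives)
    precision = true_positives / (true_positives + false_positives)
    return hit_rate, precision

# Illustrative counts only.
hit, prec = detection_scores(true_positives=59, false_positives=10,
                             false_negatives=23)
print(f"hit rate = {hit:.2%}, precision = {prec:.2%}")
```

A detector tuned for a high hit rate will flag dim candidate hotspots at the cost of more false positives, so both figures are needed to characterise performance.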

Inspec keywords: image reconstruction; feature extraction; image matching; nuclear explosions; video signal processing; military computing; image sequences

Other keywords: viable descriptor model; spherical based object model; atmospheric nuclear detonation video; time sequence; hotspot matching; automated feature matching; uncertainty reduction; automated feature detection; volume-based model; three-dimensional point cloud reconstruction; bright features; unsynchronised video matching

Subjects: Video signal processing; Military engineering computing; Image recognition; Weapons

DOI: 10.1049/iet-cvi.2015.0145