Relative accuracy of two methods for approximating observed Fisher information


The Fisher information matrix (FIM) has long been of interest in statistics and other areas. It is widely used to measure the amount of information in a sample and to compute the lower bound on the variance of maximum likelihood estimation (MLE). In practice, the true FIM is often unavailable, either because the first- or second-order derivatives of the log-likelihood function are difficult to obtain, or because the calculation of the FIM is too formidable. In such cases, an approximation of the FIM must be used. There are two general ways to estimate the FIM: one uses the product of the gradient of the log-likelihood with its own transpose, and the other computes the Hessian matrix and takes its negative. The latter method is the more common choice in practice, but it is not necessarily the optimal one. To determine which of the two methods is better, we conduct a theoretical study comparing their efficiency. In this paper, we focus on the case where the unknown parameter to be estimated by MLE is scalar and the observed random variables are independent; in this scenario, the FIM reduces to the Fisher information number (FIN). Using the central limit theorem (CLT), we derive the asymptotic variances of the two methods, by which we compare their accuracy; Taylor expansion assists in estimating these asymptotic variances. A numerical study illustrates the conclusion. We close with a summary of the limitations of this paper and several directions of interest for future work.
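The two approximations described in the abstract can be sketched numerically. The following is a minimal illustration (not taken from the chapter, and evaluated at the true parameter rather than the MLE for simplicity): for i.i.d. Poisson data with rate `lam`, the per-sample score is x/λ − 1 and the per-sample negative Hessian is x/λ², so both the averaged squared score and the averaged negative Hessian approximate the Fisher information number 1/λ.

```python
import numpy as np

# Hypothetical illustration: compare the two estimators of the Fisher
# information number (FIN) for a scalar parameter, using i.i.d. Poisson(lam)
# samples. The true FIN here is 1/lam.
rng = np.random.default_rng(0)
lam = 2.0
x = rng.poisson(lam, size=100_000).astype(float)

# Method 1: average of the squared score (gradient of the per-sample
# log-likelihood x*log(lam) - lam - log(x!)), i.e. gradient times its transpose
# in the scalar case.
score = x / lam - 1.0
fin_grad = np.mean(score**2)

# Method 2: average of the negative per-sample Hessian, d^2/dlam^2 of the
# log-likelihood is -x/lam^2, so its negative is x/lam^2.
fin_hess = np.mean(x / lam**2)

print(fin_grad, fin_hess, 1.0 / lam)  # both estimates are close to 0.5
```

Both estimators converge to the same quantity; the chapter's question is which converges faster, i.e. which has the smaller asymptotic variance.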

Chapter Contents:

  • 10.1 Introduction
  • 10.2 Background
  • 10.2.1 The Central Limit Theorem
  • Lindeberg–Lévy CLT
  • Lyapunov CLT
  • Lindeberg CLT
  • 10.2.2 Taylor expansion (Taylor series)
  • 10.3 Theoretical analysis
  • 10.4 Numerical studies
  • 10.5 Conclusions and future work
  • 10.5.1 Conclusion
  • 10.5.2 Future work
  • Appendix A
  • References

Inspec keywords: matrix algebra; maximum likelihood estimation; optimisation

Other keywords: CLT; Hessian matrix; FIM; central limit theorem; Fisher information matrix; Fisher information number; MLE; maximum likelihood estimation

Subjects: Algebra; Algebra, set theory, and graph theory; Numerical approximation and analysis; Optimisation; Optimisation techniques; Other topics in statistics; Probability and statistics; Statistics
