2 editions of **Uncertainties in adaptive maximum entropy frequency estimators** found in the catalog.

Uncertainties in adaptive maximum entropy frequency estimators

R. Jeffrey Keeler


Published **1979** by U.S. Dept. of Commerce, National Oceanic and Atmospheric Administration, Environmental Research Laboratories in Boulder, Colo.

Written in English

- Spectrum analysis,
- Time-series analysis

**Edition Notes**

| Field | Value |
|---|---|
| Statement | R. Jeffrey Keeler |
| Series | NOAA technical report ERL 405 ; WPL 53 |
| Contributions | Wave Propagation Laboratory |

**The Physical Object**

| Field | Value |
|---|---|
| Pagination | iii, 11 p. |
| Number of Pages | 11 |

**ID Numbers**

| Field | Value |
|---|---|
| Open Library | OL14851603M |

A variety of nonparametric approaches for estimating the differential entropy have been studied, including histogram-based estimators, "plug-in" kernel estimators, resampled kernel estimators, and nearest-neighbor estimators; see Beirlant et al. for a review. In particular, this previous work has established the consistency of these estimators. Song, Pengchao & Mignolet, Marc P., "Maximum entropy-based uncertainty modeling at the elemental level in linear structural and thermal problems," Computers & Structures.
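One of the simplest of these, the histogram plug-in estimator, can be sketched in a few lines of Python (NumPy only; the standard normal test data, bin count, and function name `histogram_entropy` are illustrative choices of ours, not anything from the works cited):

```python
import numpy as np

def histogram_entropy(samples, bins=100):
    """Histogram ("plug-in") estimate of differential entropy, in nats.

    H(f) = -integral f(x) ln f(x) dx is approximated by treating the
    density as constant within each histogram bin.
    """
    counts, edges = np.histogram(samples, bins=bins)
    widths = np.diff(edges)
    p = counts / counts.sum()        # probability mass per bin
    mask = p > 0                     # skip empty bins (0 log 0 = 0)
    # density in bin j is p[j] / widths[j], so the entropy sum becomes:
    return -np.sum(p[mask] * np.log(p[mask] / widths[mask]))

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
est = histogram_entropy(x)
true_h = 0.5 * np.log(2 * np.pi * np.e)   # exact value for N(0, 1)
print(est, true_h)
```

With enough samples the estimate approaches the true value; the bin width trades bias against variance, which is exactly the consistency question the survey above addresses.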

Usta, Ilhan & Kantar, Yeliz Mert, "On the performance of the flexible maximum entropy distributions within partially adaptive estimation," Computational Statistics & Data Analysis, Elsevier, vol. 55(6). Results from the motion estimation or intra estimation stages are transformed from the spatial domain into the frequency domain. H.264/MPEG-4 AVC uses a DCT-like 4x4 integer transform. In contrast, MPEG-2 and MPEG-4 ASP employ a true 8x8 DCT that operates on floating-point coefficients.

3. Spectrum Estimation: non-parametric methods, minimum-variance spectrum estimation, the maximum entropy method, parametric methods, frequency estimation, principal-components spectrum estimation. 4. Optimal and Adaptive Filtering: FIR and IIR Wiener filters, the discrete Kalman filter, FIR adaptive filters (steepest descent, LMS, and LMS-based algorithms). The maximum entropy distribution corresponds to the macrostate (as indexed by the empirical distribution) that has the most microstates (the actual gas velocities). Implicit in the use of maximum entropy methods in physics is a sort of asymptotic equipartition property (AEP).

You might also like

Education beyond high school: The two-year community college.

new crusaders and other poems

Semiconductor electronics by worked example

Marketing California fresh Tokay grapes.

Planning now for your successful retirement

Wynkoop genealogy in the United States of America

LETHAL WEAPON 1 2 3.

Advances in fluidics.

Purchase of additional land in California for addition to Cahuilla Indian Reservation.

Apple peelers and coin stackers -

Psychiatry for social workers

A submaximal cardiovascular fitness test for fourth, fifth and sixth grade girls

[Protest of Navy contract award for paint floats]

Developments in long-distance commuter coaching following the Transport Act 1980

Americas top military careers

Get this from a library: *Uncertainties in adaptive maximum entropy frequency estimators* [R. Jeffrey Keeler; Wave Propagation Laboratory]. Spectrum Estimation Workshop. Maximum entropy spectral analysis is one of a number of high-resolution spectral analysis techniques.

The impact of Burg's maximum entropy spectral analysis method is far more significant than the technique itself. Thus in this report we present, among other things, a bibliography of the maximum entropy method.
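As a rough illustration of what Burg's method does (a sketch only, assuming the standard lattice recursion; the function names `arburg` and `me_spectrum` and the synthetic test signal are our own, not from the report):

```python
import numpy as np

def arburg(x, order):
    """Burg's method: AR coefficients a (a[0] = 1) and error power e.

    Each stage picks the reflection coefficient k that minimizes the sum
    of forward and backward prediction-error powers, then updates the AR
    polynomial by the Levinson recursion.
    """
    x = np.asarray(x, dtype=float)
    a = np.array([1.0])
    e = np.dot(x, x) / len(x)          # zeroth-order error power
    f, b = x.copy(), x.copy()          # forward / backward errors
    for _ in range(order):
        ef, eb = f[1:], b[:-1]
        k = -2.0 * np.dot(ef, eb) / (np.dot(ef, ef) + np.dot(eb, eb))
        f, b = ef + k * eb, eb + k * ef
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]            # Levinson update of the polynomial
        e *= 1.0 - k * k
    return a, e

def me_spectrum(a, e, nfft=1024):
    """Maximum entropy PSD: e / |A(e^{j2 pi f})|^2 on [0, 0.5]."""
    return e / np.abs(np.fft.rfft(a, nfft)) ** 2

rng = np.random.default_rng(1)
n = np.arange(512)
sig = np.sin(2 * np.pi * 0.1 * n) + 0.1 * rng.standard_normal(512)
a, e = arburg(sig, order=8)
psd = me_spectrum(a, e)
f_hat = np.fft.rfftfreq(1024)[np.argmax(psd)]
print(f_hat)   # sharp peak near the true frequency of 0.1 cycles/sample
```

The sharp, high-resolution peak on short records is what made the method attractive for frequency estimation, and its pole sensitivity is one source of the uncertainties the report analyzes.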

Their estimators attain the minimax rates for estimating entropy: they are consistent given n ≫ S/ln S samples, where S is the support size, and this is the best possible sample complexity. In contrast, the Maximum Likelihood Estimator (MLE), which is the empirical entropy, requires n ≫ S samples. In the present paper we significantly refine these minimax results.

To alleviate the pessimism of minimaxity, we adopt the adaptive estimation framework and show that the JVHW estimator is an adaptive estimator. For stationary deterministic constant sinusoidal parameters, numerous frequency estimators have been proposed, including the periodogram [2] and maximum likelihood [2].

Alternatively, the maximum-entropy model can be used, which makes no assumption of a normal distribution. One disadvantage of the maximum-entropy model is the learning cost of its parameters. This paper proposes an Adaptive Estimated Maximum-Entropy Distribution (Adaptive MEED) model, which aims to reduce the learning complexity of building such a model.


Heidbreder G. () Maximum Entropy Applications in Radar. In: Grandy W.T., Schick L.H. (eds) Maximum Entropy and Bayesian Methods. Fundamental Theories of Physics (An International Book Series on the Fundamental Theories of Physics: Their Clarification, Development and Application), vol Springer, Dordrecht.

For this problem, we discussed two estimators:

Plugin Estimator. The plugin estimator uses empirical estimates of the frequencies, $\hat{p}_j = \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}[X_i = j]$, to obtain an estimate of the entropy as follows: $\hat{H}_n = -\sum_{j=1}^{d} \hat{p}_j \log_2 \hat{p}_j$.

LP Estimator. The LP estimator works by transforming the samples $\{X_i\}_{i=1}^{n}$ into a fingerprint: the vector whose $j$-th entry counts how many symbols appear exactly $j$ times in the sample.
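A direct Python rendering of the plugin estimator above (NumPy; the uniform 8-symbol test source is an arbitrary choice for illustration):

```python
import numpy as np

def plugin_entropy(samples, d):
    """Plug-in (MLE) entropy estimate in bits over alphabet {0, ..., d-1}.

    Computes p_hat[j] = (1/n) * count of samples equal to j, then
    H_hat = -sum_j p_hat[j] * log2(p_hat[j]).
    """
    counts = np.bincount(samples, minlength=d)
    p_hat = counts / counts.sum()
    nz = p_hat[p_hat > 0]            # 0 * log 0 is taken as 0
    return -np.sum(nz * np.log2(nz))

rng = np.random.default_rng(0)
x = rng.integers(0, 8, size=50_000)  # uniform over 8 symbols
h = plugin_entropy(x, d=8)
print(h)                             # close to log2(8) = 3 bits
```

Note the plug-in estimate is always at most log2(d) and is biased downward, which is why it needs the larger n ≫ S sample size discussed above.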

The Shannon entropy can measure the uncertainty of a random process. Rolling-element machinery without failure tends to generate a more random signal, while a machine with a failure usually tends to produce a more deterministic signal; i.e., the Shannon entropy will differ between the two. To extract the periodicity in the signal, a feature named symbolized Shannon entropy (SSE) is proposed.
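The exact SSE construction varies by author; the following is a simplified sketch of the idea only (binarize the signal at its median, then take the Shannon entropy of overlapping symbol words; the function name, word length, and test signals are our own choices):

```python
import numpy as np
from collections import Counter

def symbolized_shannon_entropy(signal, word_len=4):
    """Shannon entropy (bits) of overlapping binary words of a signal.

    The signal is symbolized by thresholding at its median; a periodic
    (more deterministic) signal uses few distinct words and scores low,
    while a random signal uses many and scores high.
    """
    sym = (np.asarray(signal) > np.median(signal)).astype(int)
    words = [tuple(sym[i:i + word_len])
             for i in range(len(sym) - word_len + 1)]
    n = len(words)
    p = np.array([c / n for c in Counter(words).values()])
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
t = np.arange(4096)
h_rand = symbolized_shannon_entropy(rng.standard_normal(4096))   # random signal
h_sine = symbolized_shannon_entropy(np.sin(2 * np.pi * t / 32))  # periodic signal
print(h_rand, h_sine)    # random scores well above periodic
```

The gap between the two scores is the diagnostic signal: under the premise quoted above, a drop in SSE flags the emergence of a deterministic (fault-driven) component.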

This paper presents an improved estimation strategy for the rotor flux, the rotor speed and the frequency required in the control scheme of a standalone wind energy conversion system based on a self-excited three-phase squirrel-cage induction generator with battery storage.

At the generator-side control, the rotor flux is estimated using an adaptive Kalman filter, along with the rotor speed.

The paper then introduces the partially adaptive estimator; the fourth and fifth sections report Monte Carlo simulations and an empirical application of the proposed estimator.

The last section concludes.

2. Maximum Entropy Density. In this section, we introduce the Principle of Maximum Entropy and maximum entropy densities.

Non-Extensive Entropy Econometrics for Low Frequency Series: National Accounts-Based Inverse Problems. Author: Bwanakare, Second. Language: English. Chapter 3 is an introduction to statistical estimation theory: properties of estimators, consistency concepts, maximum likelihood, least-squares estimation, maximum entropy estimation, and so on.

Chapter 4 presents the classical spectral estimation theory based on periodograms. Chapter 5 deals with parametric estimation for stationary processes. Beyond Bayes, the book also introduces information theory, the maximum entropy principle, model sensitivity analysis, and sampling methods such as MCMC.

In Part 3, the central problem of predictive computational science is addressed: the selection, adaptive control and validation of mathematical and computational models of complex systems, with applications in parameter estimation, quantization theory and spectral estimation.

I. Introduction. Let $X$ be a random vector taking values in $\mathbb{R}^d$ with probability density function (pdf) $f(x)$; then its differential entropy is defined by

$$H(f) = -\int f(x)\,\ln f(x)\,dx \tag{1}$$

We assume that $H(f)$ is well-defined and finite.

() Infinite Time Horizon Maximum Causal Entropy Inverse Reinforcement Learning. IEEE Transactions on Automatic Control.

() Reduced Wiener Chaos representation of random fields via basis adaptation and projection.

**Maximum Entropy, Analytic Form**

The Principle of Maximum Entropy is based on the premise that when estimating the probability distribution, you should select the distribution which leaves you the largest remaining uncertainty (i.e., the maximum entropy) consistent with your constraints.
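As a worked instance of the principle (Jaynes's classic die example, a sketch of our own rather than anything from the sources above): among all distributions on faces 1..6 with a prescribed mean, the maximum entropy one has the exponential form p_k ∝ exp(λk), and the multiplier λ can be found by bisection on the mean constraint:

```python
import numpy as np

def maxent_die(mean_target, tol=1e-10):
    """Maximum-entropy distribution on die faces 1..6 with a fixed mean.

    The constrained maximizer has the form p_k proportional to
    exp(lam * k); we solve for the Lagrange multiplier lam by bisection,
    since the implied mean is monotone increasing in lam.
    """
    faces = np.arange(1, 7)

    def mean_for(lam):
        w = np.exp(lam * faces)
        return (w / w.sum()) @ faces

    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < mean_target:
            lo = mid
        else:
            hi = mid
    w = np.exp(0.5 * (lo + hi) * faces)
    return w / w.sum()

p = maxent_die(4.5)
print(p, p @ np.arange(1, 7))   # the mean constraint is met
```

With mean 3.5 the answer is the uniform distribution (λ = 0); pushing the mean up tilts probability toward the high faces while staying as spread out as the constraint allows, i.e., without introducing information you do not have.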

That way you have not introduced any additional assumptions beyond your constraints.

In this work, an improved Doppler frequency estimator is presented. The estimator is implemented in the field programmable gate array (FPGA) included in a real-time ultrasound board []. The proposed algorithm takes advantage of the characteristics of both the full-centroid and the peak estimators: it features the low computational effort typical of the peak estimator.
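The board, sampling rate, and tone below are hypothetical stand-ins (the paper's actual FPGA implementation is not reproduced here); this sketch only contrasts the two estimator families being combined, on a synthetic tone:

```python
import numpy as np

def peak_and_centroid(x, fs):
    """Two elementary spectral frequency estimators for a real signal.

    peak: the frequency bin of maximum power (cheap, quantized to bins);
    centroid: the power-weighted mean frequency (smoother, but pulled
    around by broadband noise).
    """
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    f_peak = freqs[np.argmax(power)]
    f_centroid = np.sum(freqs * power) / np.sum(power)
    return f_peak, f_centroid

fs = 10_000.0                          # hypothetical sampling rate, Hz
t = np.arange(2048) / fs
x = np.cos(2 * np.pi * 1_250.0 * t)    # stand-in "Doppler" tone at 1.25 kHz
f_peak, f_cent = peak_and_centroid(x, fs)
print(f_peak, f_cent)
```

A hybrid estimator of the kind described would keep the peak's low cost while borrowing the centroid's sub-bin smoothness; the trade-off shows up once noise is added to the tone.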

Chen, C. Richard, S.-K. Ting, and A. Sayed, "Multitask learning over adaptive networks with grouping strategies," in Cooperative and Graph Signal Processing.

- Principle of Maximum Entropy
- Bayesian Occam's Razor
- Minimum Message Length (MML)

Methods for finding estimators:

- Minimum-variance unbiased estimator (MVUE)
- Best linear unbiased estimator (BLUE)
- Maximum entropy method (MEM)
- Method of moments estimator (MME) / empirical estimate of a moment
- Maximum likelihood estimator (MLE)

An alternative estimation technique based on the Burg maximum entropy (BME) principle was introduced by Lygre and Krogstad ().

When applied to certain theoretical distributions, the differences between the BME and SME algorithms range from the ability to match both broad and narrow peaks to a strong tendency for peak splitting (Krogstad).

In the limit of adequately large data sets, different uncertainty estimates would be expected to behave similarly; however, an interesting aspect to be explored is how they behave when data sizes are small.