Adaptive Testing on a Regression Function at a Point (Yale University)
... without a loss in the minimax rate (or, for adaptivity over L, even the constant). In these papers, the optimal constants C^* and C_* are also derived in some cases. Spokoiny (1996) considers adaptivity to Besov classes under the ℓ_2 norm and shows that, as we derive in our case, the minimax rate can ...
Quantile Regression for Large-scale Applications
... ates (Koenker & Bassett, 1978), in a manner analogous to the way in which least-squares regression estimates the conditional mean. Least Absolute Deviations regression (i.e., ℓ_1 regression) is a special case of quantile regression that involves computing the median of the conditional distributio ...
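The connection drawn in this excerpt (least squares estimates the conditional mean; quantile regression estimates conditional quantiles, with the median as the ℓ_1 special case) can be sketched with the pinball (check) loss. This is a minimal illustration, not code from the paper: the data values and the grid search are invented for the example.

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Check (pinball) loss of constant fit q at quantile level tau:
    mean of tau*(y-q) where y >= q and (tau-1)*(y-q) where y < q."""
    r = y - q
    return np.mean(np.maximum(tau * r, (tau - 1) * r))

# Minimizing the pinball loss over a constant recovers the tau-th sample
# quantile; with tau = 0.5 this is the median, i.e. the Least Absolute
# Deviations fit. The outlier 100.0 pulls the mean but not the median.
y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
grid = np.linspace(0.0, 100.0, 10001)
q_hat = grid[np.argmin([pinball_loss(y, q, 0.5) for q in grid])]
# q_hat coincides with np.median(y) = 3.0, while np.mean(y) = 22.0
```

In full quantile regression, the same loss is minimized over a linear function of covariates rather than a single constant.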
http://www.york.ac.uk/depts/maths/histstat/normal_history.pdf
... the numerator and the denominator of this quantity become zero when i = 0, and if we differentiate them both, regarding i alone as variable, we will have (1 − µ^{2i})/(2i) = −ln µ, therefore 1 − µ^{2i} = −2i ln µ. Under these conditions we will thus have ∫ ∫ ∫ ∫ ...
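The differentiation step quoted in the excerpt is l'Hôpital's rule applied at i = 0; written out, it gives the stated identity (and confirms the minus sign, which is consistent with the equation 1 − µ^{2i} = −2i ln µ that follows):

```latex
\lim_{i \to 0} \frac{1 - \mu^{2i}}{2i}
  = \lim_{i \to 0}
    \frac{\tfrac{d}{di}\left(1 - \mu^{2i}\right)}{\tfrac{d}{di}\,(2i)}
  = \lim_{i \to 0} \frac{-2\,\mu^{2i} \ln \mu}{2}
  = -\ln \mu ,
```

so that, to first order in i near zero, 1 − µ^{2i} ≈ −2i ln µ.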