Accelerated Stochastic Approximation
Publication (Open Access) · 1958 · 435 citations
Keywords: Large Deviations, Engineering, Stochastic Optimization, Stochastic Processes, Stochastic Approximation Procedure, Stochastic Approximation, Probability Theory, Computer Science, Frequent Fluctuations, Approximation Method, Approximation Theory, Statistics
In stochastic approximation, frequent sign changes in the successive increments indicate closeness to the target value θ, whereas few changes suggest the estimate is still far from θ. The authors propose procedures in which the step X_{n+1} − X_n is scaled by a factor that decreases as the number of sign changes observed up to step n increases. In Theorems 2 and 3 the step takes the form b_nZ_n, where Z_n has conditional mean of sign opposite to X_n − θ and b_n shrinks with each additional sign change. These procedures converge with probability one, may accelerate convergence relative to the usual methods, and offer a useful framework for optimizing Robbins-Monro procedures.
Using a stochastic approximation procedure $\{X_n\}, n = 1, 2, \cdots$, for a value $\theta$, it seems likely that frequent fluctuations in the sign of $(X_n - \theta) - (X_{n - 1} - \theta) = X_n - X_{n - 1}$ indicate that $|X_n - \theta|$ is small, whereas few fluctuations in the sign of $X_n - X_{n - 1}$ indicate that $X_n$ is still far away from $\theta$. In view of this, certain approximation procedures are considered, for which the magnitude of the $n$th step (i.e., $X_{n + 1} - X_n$) depends on the number of changes in sign in $X_i - X_{i - 1}$ for $i = 2, \cdots, n$. In Theorems 2 and 3, $X_{n + 1} - X_n$ is of the form $b_nZ_n$, where $Z_n$ is a random variable whose conditional expectation, given $X_1, \cdots, X_n$, has the opposite sign of $X_n - \theta$, and $b_n$ is a positive real number. In our processes, $b_n$ depends on the changes in sign of $X_i - X_{i - 1}$ $(i \leqq n)$ in such a way that more changes in sign give a smaller $b_n$. Thus the smaller the number of changes in sign before the $n$th step, the larger we make the correction on $X_n$ at the $n$th step. These procedures may accelerate the convergence of $X_n$ to $\theta$ when compared to the usual procedures ([3] and [5]). The result that the considered procedures converge with probability one may be useful for finding optimal procedures. Application to the Robbins-Monro procedure (Theorem 2) seems more interesting than application to the Kiefer-Wolfowitz procedure (Theorem 3).
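The idea can be sketched concretely for root finding. Below is a minimal, hypothetical implementation of a Robbins-Monro iteration with a Kesten-style step-size rule: the step-size denominator $k$ advances only when the increment $X_n - X_{n-1}$ changes sign, so $b_n = a/k$ stays large while the iterate is marching toward $\theta$ and shrinks once it starts oscillating around it. The function name, the specific gain $a/k$, and the test problem are illustrative assumptions, not the paper's exact scheme.

```python
import random

def kesten_robbins_monro(noisy_f, theta0, a=1.0, n_steps=2000, seed=0):
    """Find the root of E[noisy_f(x)] = 0 by stochastic approximation.

    Uses a Kesten-style accelerated step size: the index k (hence the
    step size a/k) advances only when successive corrections change
    sign, i.e. when X_n - X_{n-1} flips sign (an illustrative sketch).
    """
    rng = random.Random(seed)
    x = theta0
    k = 1           # step-size index, advanced only on sign changes
    prev_z = 0.0    # previous correction; its sign matches X_n - X_{n-1}
    for _ in range(n_steps):
        # Noisy observation of -f(x): the correction pushes x toward the root.
        z = -(noisy_f(x) + rng.gauss(0.0, 0.1))
        if prev_z * z < 0:   # sign change in successive increments
            k += 1           # so shrink the step size b_n = a / k
        x += (a / k) * z
        prev_z = z
    return x

# Example: root of f(x) = 2*(x - 3), observed with Gaussian noise.
theta_hat = kesten_robbins_monro(lambda x: 2.0 * (x - 3.0), theta0=0.0)
```

While the iterate is far from the root the corrections all share one sign, $k$ stays small, and the procedure takes large steps; near the root the noise makes the sign flip frequently, $k$ grows roughly linearly, and the steps decay like the usual $1/n$ schedule.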