1) the non-uniform rate of convergence
1.
This paper studies the non-uniform rate of convergence for L-statistics under higher-order moment conditions and obtains a convergence rate similar to that for sums of independent random variables.
2) uniform convergence rate
1.
Varying coefficient models have found wide application, and semivarying coefficient models are an effective generalization of them. This paper introduces the profile least squares (PLS) estimation for semivarying coefficient models and establishes the uniform convergence rate of the estimator of the nonparametric component.
2.
We establish uniform convergence rates in the central limit theorem for negatively associated (NA) sequences with finite third moments. No stationarity is required: provided a coefficient u(n) describing the covariance structure of the NA sequence is dominated by a negative exponential sequence, the rate O(n^(-1/2) log n) is obtained.
3.
It is proved that no multi-parameter linear empirical Bayes estimator can have a uniform convergence rate larger than one; hence the linear empirical Bayes estimator constructed in the paper, whose uniform convergence rate equals one, is optimal.
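To make the notion of a uniform convergence rate in the central limit theorem concrete, here is a minimal numerical sketch. It estimates the Kolmogorov distance sup_x |F_n(x) − Φ(x)| for standardized sums of i.i.d. Exp(1) variables (a plain i.i.d. case, not the NA sequences of the entry above) and shows it shrinking at roughly the n^(-1/2) scale. The function name and Monte Carlo setup are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np
from math import erf, sqrt

def kolmogorov_distance(n, reps=100_000, seed=0):
    """Monte Carlo estimate of sup_x |F_n(x) - Phi(x)| for the
    standardized sum of n i.i.d. Exp(1) variables (mean 1, var 1)."""
    rng = np.random.default_rng(seed)
    # standardized sums: (S_n - n) / sqrt(n)
    s = (rng.exponential(size=(reps, n)).sum(axis=1) - n) / sqrt(n)
    s.sort()
    phi = np.array([0.5 * (1.0 + erf(x / sqrt(2.0))) for x in s])
    emp = np.arange(1, reps + 1) / reps
    # the supremum is attained at a jump point of the empirical CDF,
    # so compare Phi against both the left and right limits there
    return float(np.max(np.maximum(np.abs(emp - phi),
                                   np.abs(emp - 1.0 / reps - phi))))

# The distance shrinks roughly like n**-0.5 (a Berry-Esseen-type rate).
for n in (4, 16, 64):
    print(n, kolmogorov_distance(n))
```

The loop only illustrates the scale of the decay; turning this into a proven uniform rate is exactly what Berry-Esseen-type theorems (and the NA-sequence results above) do.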
3) non-uniform convergence
1.
This paper presents several theorems on non-uniform convergence and illustrates their application with a number of examples.
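The textbook example behind such theorems is f_n(x) = x^n on [0, 1): it converges pointwise to 0, yet sup_x |f_n(x)| stays near 1, so the convergence is not uniform. A short sketch (the grid size and function name are illustrative):

```python
import numpy as np

def sup_distance(n, grid_size=10_001):
    """Approximate sup over x in [0, 1) of |x**n - 0|, the sup-norm
    distance of f_n(x) = x**n to its pointwise limit on [0, 1)."""
    x = np.linspace(0.0, 1.0, grid_size, endpoint=False)
    return float(np.max(np.abs(x ** n)))

# Pointwise, x**n -> 0 for every fixed x in [0, 1), but the sup-norm
# distance does not shrink with n, so the convergence is non-uniform.
for n in (1, 10, 100, 1000):
    print(n, sup_distance(n))
```

The contrast is the whole point: for any fixed x < 1 the values x**n vanish, while the supremum over the interval does not.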
4) bounds on the rate of uniform convergence
1.
Finally, the key theorem of statistical learning theory based on birandom samples is proved, and the bounds on the rate of uniform convergence of the learning process are discussed.
2.
Some properties of quasi-probability are further discussed; the concepts of quasi-random variables on quasi-probability spaces, together with their distribution functions, expectations, and variances, are introduced; the Markov inequality, the Chebyshev inequality, and the Khinchine law of large numbers on quasi-probability spaces are proved; and the key theorem of learning theory and the bounds on the rate of uniform convergence of the learning process are generalized from probability spaces to quasi-probability spaces, laying a theoretical foundation for systematically establishing statistical learning theory and constructing support vector machines on quasi-probability spaces.
3.
This paper mainly investigates the bounds on the rate of uniform convergence of fuzzy learning processes.
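For orientation, the classical probability-space version of such bounds is the finite-class result: with probability at least 1 − δ, every hypothesis h in a class of size N satisfies |empirical risk − true risk| ≤ sqrt(log(2N/δ) / (2n)), by Hoeffding's inequality plus a union bound. A minimal simulation of this standard setting (threshold classifiers on uniform data; all names and parameters are illustrative, and this is not the quasi-probability or fuzzy generalization discussed above):

```python
import numpy as np
from math import log, sqrt

def finite_class_bound(num_hypotheses, n, delta=0.05):
    """With probability >= 1 - delta, every h in a finite class of size
    N satisfies |empirical risk - true risk| <= sqrt(log(2N/delta)/(2n))
    (Hoeffding's inequality plus a union bound)."""
    return sqrt(log(2 * num_hypotheses / delta) / (2 * n))

def max_deviation(n, num_hypotheses=50, seed=0):
    """Worst empirical-vs-true deviation over a finite class of
    threshold functions h_t(x) = 1{x <= t} on x ~ Uniform(0, 1),
    where the true mean of h_t(x) is exactly t."""
    rng = np.random.default_rng(seed)
    x = rng.random(n)
    thresholds = np.linspace(0.0, 1.0, num_hypotheses)
    emp = np.array([(x <= t).mean() for t in thresholds])
    return float(np.max(np.abs(emp - thresholds)))

# The observed worst deviation stays below the theoretical bound,
# and both shrink as the sample size n grows.
for n in (100, 1_000, 10_000):
    print(n, max_deviation(n), finite_class_bound(50, n))
```

The bound holds uniformly over the whole class at once, which is what "uniform convergence of the learning process" refers to: it controls the generalization error of empirical risk minimization simultaneously for every candidate hypothesis.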
5) uniformly optimal convergence rate
1.
When the model errors form an NA sequence, this paper studies general weight-function estimates of the nonparametric regression function with fixed design points and, under some basic conditions, establishes the uniformly optimal strong convergence rate of the estimates.
2.
When the model errors form a martingale difference sequence, this paper studies general weight-function estimates of the nonparametric regression function with fixed design points and, under some basic conditions, establishes the uniformly optimal strong convergence rate of the estimates.
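As a sketch of what a general weight-function estimate looks like, here is a Nadaraya-Watson-type kernel estimator on fixed design points with i.i.d. Gaussian errors (the entries above treat NA and martingale difference errors; the bandwidth, sample size, and names are illustrative assumptions):

```python
import numpy as np

def weight_function_estimate(x, xi, yi, h=0.05):
    """Weight-function estimate of m(x) from fixed design points xi with
    responses yi = m(xi) + error: a weighted average of the yi, here
    with Gaussian kernel weights of bandwidth h (Nadaraya-Watson form)."""
    w = np.exp(-0.5 * ((x - xi) / h) ** 2)
    return float(np.sum(w * yi) / np.sum(w))

rng = np.random.default_rng(1)
n = 400
xi = np.linspace(0.0, 1.0, n)                      # fixed design points
yi = np.sin(2 * np.pi * xi) + 0.1 * rng.standard_normal(n)

# Estimate m(x) = sin(2*pi*x) on an interior grid (avoiding boundary
# bias) and report the worst error over the grid.
grid = np.linspace(0.1, 0.9, 81)
est = np.array([weight_function_estimate(x, xi, yi) for x in grid])
print(np.max(np.abs(est - np.sin(2 * np.pi * grid))))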
6) the bounds on the rate of uniform convergence
一致收敛速度的界
1.
Finally the key theorem of statistical learning theory based on random rough samples is proved,and the bounds on the rate of uniform convergence of learning process are discussed.
最后证明基于随机粗糙样本的统计学习理论的关键定理并讨论学习过程一致收敛速度的界。
2.
In view of the uncertainty of the real world, trust theory and statistical learning theory are combined to generalize the key theorem and the bounds on the rate of uniform convergence of learning theory.
统计学习理论的关键定理和学习过程一致收敛速度的界两部分内容为支持向量机等应用性研究提供了理论依据,因此在统计学习理论中起着非常重要的作用。
3.
In the paper,Rough Em- pirical Risk Minimization(BERM)principle is proposed,and the bounds on the rate of uniform convergence of learning process with rough samples are presented and proven,which provide a theoretical basis for the research of rough support vector machine.
支持向量机(SVM)是机器学习领域一个研究热点,而统计学习理论中的学习过程一致收敛速度的界描述了采用 ERM 原则的学习机器的推广能力。
补充资料:连续性与非连续性(见间断性与不间断性)
连续性与非连续性(见间断性与不间断性)
continuity and discontinuity
11an父ux泊g四f“山。麻以角g、.连续性与非连续性(c。nt,n琳t:nuity一)_见间断性与不间断性。and diseo红ti-
说明:补充资料仅用于学习参考,请勿用于其它任何用途。
参考词条