1) policy gradient estimation
1.
The policy gradient estimation for continuous-time partially observable Markov decision processes
2) policy gradient
1.
Theories, Algorithms and Applications of Policy Gradient Reinforcement Learning
2.
The adaptive heuristic critic (AHC) reinforcement learning architecture separately approximates the value function and the policy function of a Markov decision process (MDP); policy gradient reinforcement learning can convert a stochastic, uncertain MDP into a deterministic one.
3.
Although policy gradient reinforcement learning (PGRL) has good convergence properties, the variance of the policy gradient estimate in existing PGRL algorithms is usually large, which is a significant weakness of the method in both theory and practice, as sketched below.
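Examples 2 and 3 both concern estimating the policy gradient from sampled interaction. The sketch below is a minimal illustration of my own (not code from the cited works), assuming a softmax policy over a hypothetical two-armed bandit; it compares single-sample REINFORCE-style gradient estimates with and without a reward baseline, one standard way of reducing the variance discussed in example 3.

```python
# Minimal sketch: REINFORCE-style policy gradient estimates on a 2-armed bandit,
# with and without a baseline. All quantities here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([1.0, 0.0])   # hypothetical expected reward of each arm

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def grad_log_pi(theta, a):
    """Gradient of log pi(a; theta) for a softmax policy."""
    p = softmax(theta)
    g = -p
    g[a] += 1.0
    return g

def policy_gradient_samples(theta, n, use_baseline):
    """Return n single-sample policy-gradient estimates."""
    p = softmax(theta)
    baseline = p @ true_means if use_baseline else 0.0  # expected reward as baseline
    grads = []
    for _ in range(n):
        a = rng.choice(2, p=p)                 # sample an action
        r = rng.normal(true_means[a], 1.0)     # noisy reward
        grads.append(grad_log_pi(theta, a) * (r - baseline))
    return np.array(grads)

theta = np.zeros(2)
for flag in (False, True):
    g = policy_gradient_samples(theta, 5000, flag)
    print(f"baseline={flag}: mean={g.mean(axis=0)}, var={g.var(axis=0)}")
```

Because the baseline does not depend on the sampled action, subtracting it leaves the gradient estimate unbiased while typically shrinking its variance, which is the usual remedy for the problem raised in example 3.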
6) Gradient estimate
1.
We prove the global existence and gradient estimate of the solution to the initial boundary problem for the quasi-linear parabolic equation u_t − div{σ(|∇u(t)|^2)∇u(t)} + g(t, x, u, ∇u) = 0 in Ω × [0, ∞).
2.
This dissertation is devoted to the existence and gradient estimates of the solutions or periodic solutions to two degenerate parabolic equations with a nonlinear convection term. Using the regularity theory of degenerate parabolic equations, the Gagliardo–Nirenberg inequality, the Moser iteration technique, and the Aubin compactness lemma, we obtain the existence and gradient estimates of the solutions.
Supplementary material: Gâteaux gradient (Gateaux gradient)
The Gâteaux gradient of a functional f at a point x_0 is the vector in the Hilbert space H equal to the Gâteaux derivative f'(x_0) of f at x_0. In other words, the Gâteaux gradient is defined by the formula f(x_0 + h) = f(x_0) + (f'(x_0), h) + o(h), where o(th)/t → 0 as t → 0. In n-dimensional Euclidean space the Gâteaux gradient f'(x_0) is the vector with coordinates (∂f(x_0)/∂x_1, …, ∂f(x_0)/∂x_n), and is simply called the gradient. The notion of the Gâteaux gradient can be extended to the case where X is a Riemannian n-manifold (finite-dimensional) or an infinite-dimensional Hilbert manifold and f is a smooth real function on X. The growth of f in the direction of its Gâteaux gradient is greater than its growth in any other direction through that point. (After V. M. Tikhomirov; Chinese translation by 郑维行, proofread by 沈永欢 and 王声望.)
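As a concrete check of the defining expansion, the sketch below (an illustrative example of my own, not part of the encyclopedia entry) takes f(x) = 0.5·||x||^2 on R^3, whose Gâteaux gradient at x_0 is x_0 itself, and verifies numerically that the remainder f(x_0 + t·h) − f(x_0) − t·(f'(x_0), h) is o(t).

```python
# Numerical check of the Gateaux gradient expansion for f(x) = 0.5*||x||^2,
# whose Gateaux gradient at x0 is x0. All specific values are assumptions.
import numpy as np

f = lambda x: 0.5 * np.dot(x, x)
x0 = np.array([1.0, -2.0, 3.0])
grad = x0                        # Gateaux gradient of f at x0
h = np.array([0.5, 0.1, -1.0])   # an arbitrary direction

for t in (1e-1, 1e-2, 1e-3, 1e-4):
    remainder = f(x0 + t * h) - f(x0) - t * np.dot(grad, h)
    print(f"t={t:.0e}  remainder/t={remainder / t:.2e}")  # tends to 0 as t -> 0
```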