R optimx package: optimx() function documentation

optimx(optimx)
optimx() is in R package: optimx

General-purpose optimization


Description

General-purpose optimization wrapper function that calls other R tools for optimization, including the existing optim() function. optimx also tries to unify the calling sequence to allow a number of tools to use the same front-end. These include  spg from the BB package, ucminf, nlm, and  nlminb. Note that  optim() itself allows Nelder–Mead, quasi-Newton and  conjugate-gradient algorithms as well as box-constrained optimization  via L-BFGS-B. Because SANN does not return a meaningful convergence code (conv), optimx() does not call the SANN method.
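For example, a single call can run several of these solvers on one problem and return one row of results per method (a minimal sketch; the quadratic test function sq is illustrative only):

require(optimx)
sq <- function(x) sum((x - 3)^2)   # smooth convex objective, minimum at (3, 3)
ans <- optimx(par = c(0, 0), fn = sq,
              method = c("Nelder-Mead", "BFGS", "nlminb"))
print(ans)   # one row per method: parameters, fvalues, counts, convergence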


Usage


optimx(par, fn, gr=NULL, hess=NULL, lower=-Inf, upper=Inf,
            method=c("Nelder-Mead","BFGS"), itnmax=NULL, hessian=FALSE,
            control=list(),
             ...)




Arguments

par
A vector of initial values for the parameters for which optimal values are to be found.


fn
A function to be minimized (or maximized), with first argument the vector of parameters over which minimization is to take place. It should return a scalar result.


gr
A function to return (as a vector) the gradient for those methods that can use this information. This includes the following methods: "BFGS", "CG", and "L-BFGS-B". If gr is NULL, a finite-difference approximation will be used. An open question is whether the same approximation code is used for all methods, or whether there are differences that could or should be examined.
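As a sketch of the difference, both calls below solve the same problem with "BFGS"; the first supplies an analytic gradient, while the second leaves gr = NULL so the finite-difference approximation is used (the names fq and gq are illustrative only):

fq <- function(x) sum((x - 1)^2)
gq <- function(x) 2 * (x - 1)                               # analytic gradient of fq
a1 <- optimx(c(5, 5), fn = fq, gr = gq,   method = "BFGS")  # analytic gradient
a2 <- optimx(c(5, 5), fn = fq, gr = NULL, method = "BFGS")  # finite-difference gradient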


hess
A function to return (as a symmetric matrix) the Hessian of the objective function for those methods that can use this information.


lower, upper
Bounds on the variables for methods such as "L-BFGS-B" that can handle box (or bounds) constraints.
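A small box-constrained sketch: the unconstrained minimum at 3 lies outside the box, so the constrained solution sits on the upper bound of 2:

fb <- function(x) sum((x - 3)^2)
ansb <- optimx(par = c(0, 0), fn = fb, method = "L-BFGS-B",
               lower = rep(-1, 2), upper = rep(2, 2))   # solution at c(2, 2)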


method
A list of the methods to be used. Note that this is an important change from optim(), which allows just one method to be specified. See "Details".


itnmax
If provided as a vector of the same length as the list of methods method, gives the maximum number of iterations or function values for the corresponding method. If a single number is provided, this will be used for all methods. Note that there may be control list elements with similar functions, but this should be the preferred approach when using optimx.
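For example, with two methods the vector form gives each method its own limit (a sketch):

ansi <- optimx(par = c(-1.2, 1), fn = function(x) sum(x^2),
               method = c("Nelder-Mead", "BFGS"),
               itnmax = c(500, 100))   # 500 iterations for Nelder-Mead, 100 for BFGS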


hessian
A logical control that if TRUE forces the computation of an approximation to the Hessian at the final set of parameters. If FALSE (default), the hessian is calculated if needed to provide the KKT optimality tests (see kkt in "Details" for the control list). This setting is provided primarily for compatibility with optim().


control
A list of control parameters. See "Details".


...
Further arguments to be passed to fn and gr.


Details

Note that arguments after ... must be matched exactly.

By default this function performs minimization, but it will maximize if control$maximize is TRUE. The original optim() function allows control$fnscale to be set negative to accomplish this. DO NOT use both methods.
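A minimal sketch of the preferred form, setting maximize in the control list and leaving fnscale alone:

fmax <- function(x) 4 - x[1]^2 - x[2]^2     # maximum of 4 at the origin
ansm <- optimx(par = c(1, 1), fn = fmax, method = "BFGS",
               control = list(maximize = TRUE))   # do NOT also set fnscale = -1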

Possible method codes at the time of writing are 'Nelder-Mead', 'BFGS', 'CG', 'L-BFGS-B', 'nlm', 'nlminb', 'spg', and 'ucminf'.

The default methods for unconstrained problems (no lower or upper specified) are an implementation of the Nelder and Mead (1965) method and a Variable Metric method based on the ideas of Fletcher (1970), as modified by him in conversation with Nash (1979). Nelder-Mead uses only function values and is robust but relatively slow. It will work reasonably well for non-differentiable functions. The Variable Metric method, "BFGS", updates an approximation to the inverse Hessian using the BFGS update formulas, along with an acceptable point line search strategy. This method appears to work best with analytic gradients. ("Rvmmin" provides a box-constrained version of this algorithm.)

If no method is given, and there are bounds constraints provided, the method is set to "L-BFGS-B".

Method "CG" is a conjugate gradients method based on that by Fletcher and Reeves (1964) (but with the option of Polak–Ribiere or Beale–Sorenson updates). The particular implementation is now dated, and improved yet simpler codes are being implemented (as at June 2009), and furthermore a version with box constraints is being tested. Conjugate gradient methods will generally be more fragile than the  BFGS method, but as they do not store a matrix they may be successful  in much larger optimization problems.

Method "L-BFGS-B" is that of Byrd et al. (1995), which allows box constraints, that is, each variable can be given a lower and/or upper bound. The initial value must satisfy the constraints. This uses a limited-memory modification of the BFGS quasi-Newton method. If non-trivial bounds are supplied, this method will be selected, with a warning.

Nocedal and Wright (1999) is a comprehensive reference for the previous three methods.

Function fn can return NA or Inf if the function cannot be evaluated at the supplied value, but the initial value must have a computable finite value of fn. However, some methods, of which "L-BFGS-B" is known to be a case, require that the values returned should always be finite.
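As a sketch, an objective can guard its own domain by returning NA where it is undefined, provided the starting value is feasible and a method tolerant of NA (such as "Nelder-Mead") is used:

fsafe <- function(x) {
    if (any(x <= 0)) return(NA)    # undefined outside the positive orthant
    sum(x * log(x))
}
anss <- optimx(par = c(1, 1), fn = fsafe, method = "Nelder-Mead")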

While optim can be used recursively, and for a single parameter as well as many, this may not be true for optimx. optim also accepts a zero-length par, and just evaluates the function  with that argument.

Method "nlm" is from the package of the same name that implements ideas of Dennis and Schnabel (1983) and Schnabel et al. (1985). See nlm() for more details.

Method "nlminb" is the package of the same name that uses the minimization tools of the PORT library.  The PORT documentation is at  <URL: http://netlib.bell-labs.com/cm/cs/cstr/153.pdf>. See nlminb() for details. (Though there is very little information about the methods.)

Method "spg" is from package BB, implementing a spectral projected gradient method for large-scale optimization with simple constraints. It is an R adaptation, with significant modifications, by Ravi Varadhan, Johns Hopkins University (Varadhan and Gilbert, 2009), of the original FORTRAN code of Birgin, Martinez, and Raydan (2001).

Method "Rcgmin" is from the package of that name. It implements a conjugate gradient algorithm with the Yuan/Dai update (ref??) and also allows bounds constraints on the parameters. (Rcgmin also allows mask constraints – fixing individual parameters – but there is no interface from "optimx".)

Methods "bobyqa", "uobyqa" and "newuoa" are from the  package "minqa" which implement optimization by quadratic approximation routines of the similar names due to M J D Powell (2009). See package minqa  for details. Note that "uobyqa" and "newuoa" are for  unconstrained minimization, while "bobyqa" is for box constrained problems.

The control argument is a list that can supply any of the following components:




trace Non-negative integer. If positive, tracing information on the progress of the optimization is produced. Higher values may produce more tracing information: for method "L-BFGS-B" there are six levels of tracing. trace = 0 gives no output. (To understand exactly what these do see the source code: higher levels give more detail.)




follow.on = TRUE or FALSE. If TRUE, and there are multiple methods, then the last set of parameters from one method is used as the starting set for the next.




save.failures = TRUE if we wish to keep "answers" from runs where the method does not return a successful convergence code.




maximize = TRUE if we want to maximize rather than minimize a function. (Default FALSE). Methods nlm, nlminb, ucminf cannot maximize a function, so the user must explicitly minimize and carry out the adjustment externally. However, there is a check to avoid usage of these codes when maximize is TRUE. See fnscale below for the method used in optim.




all.methods = TRUE if we want to use all available (and suitable) methods.




kkt = FALSE if we do NOT want to test the Kuhn, Karush, Tucker optimality conditions. The default is TRUE. However, because the Hessian computation may be very slow, we set kkt to be FALSE if there are more than 50 parameters when the gradient function gr is not provided, and more than 500 parameters when such a function is specified. We return logical values KKT1 and KKT2 TRUE if first- and second-order conditions are satisfied approximately. Note, however, that the tests are sensitive to scaling, and users may need to perform additional verification. If kkt is FALSE but hessian is TRUE, the Hessian is still computed at the final parameters.








sort.result = TRUE if we want to have the results table sorted by the final objective function value.




kkttol = value to use to check for a small gradient and negative Hessian eigenvalues in the KKT tests.




kkt2tol = Tolerance for eigenvalue ratio in KKT test of positive definite Hessian.




starttests = TRUE if we want to run tests of the function and parameters: feasibility relative to bounds, analytic vs numerical gradient, scaling tests, before we try optimization methods. Default is TRUE.




dowarn = TRUE if we want warnings generated by optimx. Default is TRUE.




badval = The value to set for the function value when try(fn()) fails.

The following control elements apply only to some of the methods. The list may be incomplete. See individual packages for details.




fnscale An overall scaling to be applied to the value of fn and gr during optimization. If negative, turns the problem into a maximization problem. Optimization is performed on fn(par)/fnscale for methods from the set in optim().




parscale A vector of scaling values for the parameters. Optimization is performed on par/parscale and these should be comparable in the sense that a unit change in any element produces about a unit change in the scaled value.




ndeps A vector of step sizes for the finite-difference approximation to the gradient, on par/parscale scale. Defaults to 1e-3.




maxit The maximum number of iterations. Defaults to 100 for the derivative-based methods, and 500 for "Nelder-Mead".




abstol The absolute convergence tolerance. Only useful for non-negative functions, as a tolerance for reaching zero.




reltol Relative convergence tolerance. The algorithm stops if it is unable to reduce the value by a factor of reltol * (abs(val) + reltol) at a step. Defaults to sqrt(.Machine$double.eps), typically about 1e-8.




alpha, beta, gamma Scaling parameters for the "Nelder-Mead" method. alpha is the reflection factor (default 1.0), beta the contraction factor (0.5) and gamma the expansion factor (2.0).




REPORT The frequency of reports for the "BFGS" and "L-BFGS-B" methods if control$trace is positive. Defaults to every 10 iterations for "BFGS" and "L-BFGS-B".




type for the conjugate-gradients method. Takes value 1 for the Fletcher–Reeves update, 2 for Polak–Ribiere and 3 for Beale–Sorenson.




lmm is an integer giving the number of BFGS updates retained in the "L-BFGS-B" method. It defaults to 5.




factr controls the convergence of the "L-BFGS-B" method. Convergence occurs when the reduction in the objective is within this factor of the machine tolerance. Default is 1e7, that is a tolerance of about 1e-8.




pgtol helps control the convergence of the "L-BFGS-B" method. It is a tolerance on the projected gradient in the current search direction. This defaults to zero, when the check is suppressed.

Any names given to par will be copied to the vectors passed to fn and gr.  Note that no other attributes of par are copied over. (We have not verified this as at 2009-07-29.)
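A short sketch of this name propagation; the objective can index its argument by the names given in par (the names fnamed, "a" and "b" are illustrative only):

fnamed <- function(p) as.numeric((p["a"] - 1)^2 + (p["b"] + 2)^2)  # fn sees names "a", "b"
ansn <- optimx(par = c(a = 0, b = 0), fn = fnamed, method = "BFGS")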


Value

ansout <- data.frame(cbind(par=pars, fvalues=vals, method=meths, fns=fevals, grs=gevals,  itns=nitns, conv=convcode, KKT1=kkt1, KKT2=kkt2, xtimes=xtimes))

A dataframe with the following columns, each having one element (row) for each of the methods that has returned an answer. In each row the column elements give:


par
The best set of parameters found.


fvalues
The value of fn corresponding to par.


method
The name of the method used for this answer.


fns
The number of calls to fn.


grs
The number of calls to gr. This excludes those calls needed to compute the Hessian, if requested, and any calls to fn to compute a finite-difference approximation to the gradient.


itns
For those methods where it is reported, the number of "iterations". See the documentation or code for particular methods for the meaning of such counts.


conv
An integer code. 0 indicates successful convergence. Various methods may or may not return sufficient information to allow all the codes to be specified. An incomplete list of codes includes:

1 indicates that the iteration limit maxit had been reached.

20 indicates that the initial set of parameters is inadmissible, that is, that the function cannot be computed or returns an infinite, NULL, or NA value.

21 indicates that an intermediate set of parameters is inadmissible.

10 indicates degeneracy of the Nelder-Mead simplex.

51 indicates a warning from the "L-BFGS-B" method; see component message for further details.

52 indicates an error from the "L-BFGS-B" method; see component message for further details.


KKT1
A logical value returned TRUE if the solution reported has a "small" gradient.


KKT2
A logical value returned TRUE if the solution reported appears to have a positive-definite Hessian.


xtimes
The reported execution time of the calculations for the particular method.

More detail is provided in the attribute "details" of the returned answer object. If the returned object from optimx() is ans, this is accessed via the construct attr(ans, "details"). This object is a list of lists, one list for each method that has been successful or that has been forced by save.failures=TRUE. To get the details list for the third method, we use attr(ans, "details")[[3]].
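For instance (a sketch, assuming at least one method returned an answer):

ansd <- optimx(c(-1.2, 1), fn = function(x) sum((x - 1)^2),
               method = c("Nelder-Mead", "BFGS"))
det <- attr(ansd, "details")   # list of lists, one per method kept
det[[1]]$par                   # final parameters from the first method
det[[1]]$ngatend               # gradient at termination, where reported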

Each such list has possible elements:


par
- the final parameters of the function


value
- the final value of the objective function


convergence
- a code indicating whether and/or how the method terminated


message
- a descriptive message, if one is provided


fevals
- the number of times the objective function was evaluated during the run; this item has an attribute "names" that can take on the value "function"


gevals
- the number of times the gradient of the objective function was evaluated during the run; this item has an attribute "names" that can take on the value "gradient"


kkt1
- an indicator if the first order Kuhn, Karush, Tucker condition for a local minimum is satisfied. Note that we must use a tolerance to make this test, and the indicator is NOT definitive.


kkt2
- an indicator if the second order Kuhn, Karush, Tucker condition for a local minimum is satisfied. Note that we must use a tolerance to make this test, and the indicator is NOT definitive.


ngatend
- the gradient vector computed or estimated at termination


nhatend
- the Hessian matrix computed or estimated at termination


evnhatend
- a vector of the eigenvalues of the Hessian at termination


systime
- the system time required for execution of the method in question (named in "method" below). This element has the attribute "names".


method
- The name of the method used. This has the attribute "CPU times (s)", which itself has an attribute "names" that corresponds to the "method".


Note

optimx will work with one-dimensional pars, but the default method does not work well (and will warn).  Use optimize instead.
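For a single-parameter problem, base R's optimize() is the appropriate tool, e.g.:

f1 <- function(x) (x - 2)^2 + 1
optimize(f1, interval = c(-10, 10))   # $minimum near 2, $objective near 1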

There are a series of demos available. These were set up as tests, but take quite a long time to execute. Once the package is loaded (via require(optimx) or library(optimx)), you may see the available demos via

demo(package="optimx")

The demo 'brown_test' may be run with the command demo(brown_test, package="optimx")


Source

The code for methods "Nelder-Mead", "BFGS" and "CG" was based originally on Pascal code in Nash (1990) that was translated by p2c and then hand-optimized.  Dr Nash has agreed that the code can be made freely available.

The code for method "L-BFGS-B" is based on Fortran code by Zhu, Byrd, Lu-Chen and Nocedal obtained from Netlib (file "opt/lbfgs_bcm.shar"; another version is in "toms/778").

See documentation for "nlm" and "nlminb" for those methods, package "ucminf" for the function of the same name, and package BB for method "spg".


References

Belisle, C. J. P. (1992). Convergence theorems for a class of simulated annealing algorithms on Rd. J. Applied Probability, 29, 885–895.

Birgin, E. G., Martinez, J. M. and Raydan, M. (2001). SPG: software for convex-constrained optimization. ACM Transactions on Mathematical Software. ??incomplete reference??

Byrd, R. H., Lu, P., Nocedal, J. and Zhu, C. (1995). A limited memory algorithm for bound constrained optimization. SIAM J. Scientific Computing, 16, 1190–1208.

Dennis, J. E. and Schnabel, R. B. (1983). Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, Englewood Cliffs, NJ.

Fletcher, R. and Reeves, C. M. (1964). Function minimization by conjugate gradients. Computer Journal, 7, 148–154.

Fletcher, R. (1970). A new approach to variable metric algorithms. Computer Journal, 13, 317–322.

Nash, J. C. (1990). Compact Numerical Methods for Computers. Linear Algebra and Function Minimisation. Adam Hilger.

Nelder, J. A. and Mead, R. (1965). A simplex method for function minimization. Computer Journal, 7, 308–313.

Nocedal, J. and Wright, S. J. (1999). Numerical Optimization. Springer.

Schnabel, R. B., Koontz, J. E. and Weiss, B. E. (1985). A modular system of algorithms for unconstrained minimization. ACM Trans. Math. Software, 11, 419–440.

Varadhan, R. and Gilbert, P. D. (2009). BB: An R Package for Solving a Large System of Nonlinear Equations and for Optimizing a High-Dimensional Nonlinear Objective Function. Journal of Statistical Software, 32(4), 1–26.

Nash, J. C. and Varadhan, R. (2011). Unifying Optimization Algorithms to Aid Software System Users: optimx for R. Journal of Statistical Software, 43(9), 1–14. URL http://www.jstatsoft.org/v43/i09/.

See Also

spg, nlm, nlminb, bobyqa, Rcgmin, Rvmmin, ucminf.

optimize for one-dimensional minimization; constrOptim or spg for linearly constrained optimization.


Examples


require(graphics)

fr <- function(x) {   ## Rosenbrock Banana function
    x1 <- x[1]
    x2 <- x[2]
    100 * (x2 - x1 * x1)^2 + (1 - x1)^2
}
grr <- function(x) { ## Gradient of 'fr'
    x1 <- x[1]
    x2 <- x[2]
    c(-400 * x1 * (x2 - x1 * x1) - 2 * (1 - x1),
       200 *      (x2 - x1 * x1))
}
ans1<-optimx(c(-1.2,1), fr)
print(ans1)
print(attr(ans1,"details"))
cat("\n\n")
ans2<-optimx(c(-1.2,1), fr, grr, method = "BFGS")
print(ans2)
## The next line will fail if executed because 'hessian = TRUE' no longer allowed
# ans3 <- optimx(c(-1.2,1), fr, NULL, method = "BFGS", hessian = TRUE)
cat("\n\n")
ans4<-optimx(c(-1.2,1), fr, grr, method = "CG",control=list(trace=1))
print(ans4)
cat("\n\n")
ans5<-optimx(c(-1.2,1), fr, grr, method = "CG", control=list(type=2))
print(ans5)
cat("\n\n")
ans6<-optimx(c(-1.2,1), fr, grr, method = "L-BFGS-B")
print(ans6)
cat("\n\n")

flb <- function(x)
    { p <- length(x); sum(c(1, rep(4, p-1)) * (x - c(1, x[-p])^2)^2) }
## 25-dimensional box constrained
optimx(rep(3, 25), flb, NULL, method = "L-BFGS-B",
      lower=rep(2, 25), upper=rep(4, 25)) # par[24] is *not* at boundary

## "wild" function, global minimum at about -15.81515
fw <- function (x)
    10*sin(0.3*x)*sin(1.3*x^2) + 0.00001*x^4 + 0.2*x+80
plot(fw, -50, 50, n=1000, main = "optim() minimising 'wild function'")

## Suppressed for optimx() ans7 <- optimx(50, fw, method="SANN",
##             control=list(maxit=20000, temp=20, parscale=20))
## ans7
## Now improve locally {typically only by a small bit}:
## newpar <- unlist(ans7$par) # NOTE: you need to unlist the parameters as optimx() has multiple outputs
## (r2 <- optimx(newpar, fw, method="BFGS"))
## points(r2$par, r2$value, pch = 8, col = "red", cex = 2)

## Show multiple outputs of optimx using all.methods
# genrose function code
genrose.f <- function(x, gs=NULL){ # objective function
## One generalization of the Rosenbrock banana valley function (n parameters)
        n <- length(x)
        if(is.null(gs)) { gs=100.0 }
        fval <- 1.0 + sum(gs*(x[1:(n-1)]^2 - x[2:n])^2 + (x[2:n] - 1)^2)
        return(fval)
}

genrose.g <- function(x, gs=NULL){
# vectorized gradient for genrose.f
# Ravi Varadhan 2009-04-03
        n <- length(x)
        if(is.null(gs)) { gs=100.0 }
        gg <- as.vector(rep(0, n))
        tn <- 2:n
        tn1 <- tn - 1
        z1 <- x[tn] - x[tn1]^2
        z2 <- 1 - x[tn]
        gg[tn] <- 2 * (gs * z1 - z2)
        gg[tn1] <- gg[tn1] - 4 * gs * x[tn1] * z1
        return(gg)
}

genrose.h <- function(x, gs=NULL) { ## compute Hessian
   if(is.null(gs)) { gs=100.0 }
        n <- length(x)
        hh<-matrix(rep(0, n*n),n,n)
        for (i in 2:n) {
                z1<-x[i]-x[i-1]*x[i-1]
                z2<-1.0-x[i]
                hh[i,i]<-hh[i,i]+2.0*(gs+1.0)
                hh[i-1,i-1]<-hh[i-1,i-1]-4.0*gs*z1-4.0*gs*x[i-1]*(-2.0*x[i-1])
                hh[i,i-1]<-hh[i,i-1]-4.0*gs*x[i-1]
                hh[i-1,i]<-hh[i-1,i]-4.0*gs*x[i-1]
        }
        return(hh)
}

startx<-4*seq(1:10)/3.
ans8<-optimx(startx,fn=genrose.f,gr=genrose.g, hess=genrose.h, control=list(all.methods=TRUE, save.failures=TRUE, trace=0), gs=10)
print(ans8)

get.result(ans8, attribute="grs")
get.result(ans8, method="spg")


startx<-4*seq(1:10)/3.
cat("Polyalgorithm with 200 steps NM followed by up to 75 of ucminf\n")
ans9<-optimx(startx,fn=genrose.f,gr=genrose.g, hess=genrose.h, method=c("Nelder-Mead","ucminf"),
             itnmax=c(200,75), control=list(follow.on=TRUE, save.failures=TRUE,trace=0), gs=10)
print(ans9)

startx<-4*seq(1:10)/3.
cat("200 steps NM is not enough to terminate\n")
ans10<-optimx(startx,fn=genrose.f,gr=genrose.g, method=c("Nelder-Mead"),
             itnmax=c(200), control=list(trace=0, save.failures=FALSE), gs=10)
cat("The answer should be NULL\n")
print(ans10)


startx<-4*seq(1:10)/3.
cat("Try getting hessian but not kkt tests\n")
ans11<-optimx(startx,fn=genrose.f,gr=genrose.g, hessian=TRUE,  
        control=list(all.methods=TRUE, trace=0, save.failures=FALSE, kkt=FALSE), gs=10)
print(ans11)
cat("Method 7 should be Rvmmin\n")
print(attr(ans11,"details")[7])




startx<-4*seq(1:10)/3.
cat("Use analytic hessian and no KKT tests\n")
ans12<-optimx(startx,fn=genrose.f,gr=genrose.g, hess=genrose.h,
      control=list(trace=0, save.failures=FALSE, kkt=FALSE), gs=10)
print(ans12)
print(attr(ans12,"details"))


