R SpatialVx package: kernel2dmeitsjer() help documentation (Chinese-English)

Posted 2012-09-30 12:56:46
kernel2dmeitsjer(SpatialVx)
kernel2dmeitsjer() belongs to R package: SpatialVx

                                         Create a kernel matrix

                                         Translator: biostatistic.net robot LoveR

Description----------

Create a kernel matrix to be used in a 2-D convolution smooth (e.g., using kernel2dsmooth).


Usage----------


kernel2dmeitsjer(type = "gauss", ...)



Arguments----------

type: character name of the kernel to be created.


...: other arguments to the specific kernel type.  See the Details section below.


Details----------

The specific types of kernels that can be made are as follows.  In each case, h = ||x - x_c|| is the distance from the center of the kernel.  Every kernel that requires single numerics nx and ny to be specified returns an nx X ny matrix.  Distances h are found by setting up a grid based on 1 to nx and 1 to ny, denoting the points as (xgrid, ygrid), finding the center of the grid as (x.center, y.center) = (nx/2, ny/2), and then h = sqrt((xgrid - x.center)^2 + (ygrid - y.center)^2).  For kernels that better reflect distance (e.g., using great-circle distance, anisotropic distances, etc.), the matrix h can be passed instead of nx and ny, but only for those kernels that take h as an argument.  In each case with sigma as an argument, sigma is the smoothing parameter.  Many kernel functions are allowed here, and not all of them make sense for every purpose.
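The distance computation described above can be sketched directly from the quoted formula (a hand-rolled illustration, not the package's actual internals):

```r
# Build the distance matrix h for an nx x ny kernel, following the formula
# in the text: h = sqrt((xgrid - x.center)^2 + (ygrid - y.center)^2).
nx <- 5
ny <- 7
x.center <- nx / 2
y.center <- ny / 2
# outer() forms the nx x ny grid of summed squared offsets in one step
h <- sqrt(outer((1:nx - x.center)^2, (1:ny - y.center)^2, "+"))
```

Each entry h[i, j] is then the Euclidean distance of grid point (i, j) from the kernel center.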

"average" gives a kernel that will give an average of the nearest neighbors in each direction (can take an average grid points further in the x- direction than the y-direction, and vice versa).  Requires that 'nx' and 'ny' be specified, and the resulting kernel is defined by an 'nx X ny' matrix with each element equal to 1/(nx*ny).  If nx = ny, then the result is the same as the boxcar kernel below.
“平均”给出了一个内核,将给出一个平均在每个方向上的近邻(可以取一个平均值的在x-方向上的网格点进一步比的y方向,反之亦然)。要求nx的和ny的被指定,并且所得的内核定义由为nx X ny的矩阵与每个元素等于1 /(为nx * ny)表示。如果nx = ny的,那么其结果是相同的,下面的棚车内核。

"boxcar" the boxcar kernel is an n X n matrix of 1/n^2.  This results in a neighborhood smoothing when used with 'kernel2dsmooth' giving the type of smoothed fields utilized, e.g., in Roberts and Lean (2008) and Ebert (2008).  Requires that n be specified.  Note that usually the boxcar is a square matrix of ones, which gives the sum of the nearest n^2 grid points.  This gives the average.
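A minimal sketch of the boxcar kernel as described: an n X n matrix of 1/n^2, so that convolving with it averages the nearest n^2 grid points.

```r
# Boxcar kernel: n x n matrix with every element 1/n^2.
# Hand-built per the text; equivalent to kernel2dmeitsjer("boxcar", n = n).
n <- 5
K.box <- matrix(1 / n^2, nrow = n, ncol = n)
```

Because the elements sum to one, the convolution is a neighborhood average rather than a sum.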

"cauchy" The Cauchy kernel is given by K(sigma)=1/(1+h^2/sigma).  Requires the arguments nx, ny and sigma.  See Souza (2010) for more details.

"disk" gives a circular averaging (pill-box) kernel (aka, disk kernel).  Similar to "average" or "boxcar", but this kernel accounts for a specific distance in all directions from the center (i.e., an average of grid squares within a circular radius of the central point).  This results in the convolution radius smoothing applied in Davis et al. (2006a,2006b).  Requires that 'r' (the desired radius) be supplied, and a square matrix of appropriate dimension is returned.

"epanechnikov" The Epanechnikov kernel is defined by max(0, 3/4*(1-h/(sigma^2))). See, e.g., Hastie and Tibshirani (1990).  Requires arguments 'nx', 'ny', and 'sigma'.

"exponential" The exponential kernel is given by K(sigma) = a*exp(-h/(2*sigma)).  Requires the arguments nx, ny and sigma, and optionally takes the argument 'a' (default is a=1).  An 'nx X ny' matrix is returned.  See Souza (2010) for more details.

"gauss" The Gaussian kernel defined by K(sigma) = 1/(2*pi*sigma^2)*exp(-h/(2*sigma)).  Requires the arguments 'nx', 'ny' and 'sigma' be specified.  The convolution with this kernel results in a Gaussian smoothed field as used in the practically perfect hindcast method of Brooks et al. (1998) (see also Ebert 2008) and studied by Sobash et al (2011) for spatial forecast verification purposes.  Returns an 'nx X ny' matrix.
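As a sketch, the Gaussian kernel can be built by hand from the formula quoted above, K(sigma) = 1/(2*pi*sigma^2)*exp(-h/(2*sigma)) (this follows the text's formula literally; the package may normalize differently):

```r
# Hand-built Gaussian kernel following the formula in the text.
nx <- 7; ny <- 7; sigma <- 1.5
h <- sqrt(outer((1:nx - nx/2)^2, (1:ny - ny/2)^2, "+"))
K.gauss <- 1 / (2 * pi * sigma^2) * exp(-h / (2 * sigma))
```

The kernel decays with distance from the center, so the largest weights sit at the grid points nearest (nx/2, ny/2).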

"laplacian" Laplacian Operator kernel, which gives the sum of second partial derivatives for each direction.  It is often used for edge detection because it identifies areas of rapid intensity change.  Typically, it is first applied to a field that has been smoothed first by a Gaussian kernel smoother (or an approximation thereof; cf. type "LoG" below).  This method optionally the parameter 'alpha', which controls the shape of the Laplacian kernel, which must be between 0 and 1 (inclusive), or else it will be set to 0 (if < 0) or 1 (if > 1).  Returns a 3 X 3 kernel matrix.
“拉普拉斯算子Laplace算子相关的内核,使第二,每个方向的偏导数的总和。它经常被用于边缘检测,因为它确定了强度的变化的快速的区域。通常情况下,它是首次应用到一个领域,已先被平滑的高斯核平滑(或近似;比照“log”下面)。此方法任选“alpha”的参数,它控制形状的拉普拉斯内核,它必须是在0和1之间(含),否则,将被设置为0(如果<0)或1(如果> 1 )。返回一个3×3的核矩阵。

"LoG" Laplacian of Gaussian kernel.  This combines the Laplacian Operator kernel with that of a Gaussian kernel.  The form is given by K(sigma) = -1/(pi*sigma^4)*exp(-h/(2*sigma^2))*(1-h/(2*sigma^2)).  Requires the arguments 'nx', 'ny' and 'sigma' be specified.  Returns an 'nx X ny' matrix.
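The LoG formula above can likewise be evaluated directly on the distance grid (a sketch from the quoted formula, not the package code):

```r
# Laplacian-of-Gaussian kernel per the text:
# K(sigma) = -1/(pi*sigma^4) * exp(-h/(2*sigma^2)) * (1 - h/(2*sigma^2)).
nx <- 9; ny <- 9; sigma <- 1
h <- sqrt(outer((1:nx - nx/2)^2, (1:ny - ny/2)^2, "+"))
K.log <- -1 / (pi * sigma^4) * exp(-h / (2 * sigma^2)) * (1 - h / (2 * sigma^2))
```

Note the sign change: entries are negative near the center (where h < 2*sigma^2) and positive farther out, which is what makes the kernel respond to rapid intensity change.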

"minvar" A minimum variance kernel, which is given by 3/8*(3 - 5*h/sigma^2) if h <= 1, and zero otherwise (see, e.g., Hastie and Tibshirani, 1990).  Requires the arguments 'nx', 'ny', and 'sigma' be specified.  Returns an 'nx X ny' matrix.

"multiquad" The multiquadratic kernel is similar to the rational quadratic kernel, and is given by K(a) = sqrt(h + a^2).  The inverse is given by 1/K(a).  Requires the arguments nx, ny and a be specified.  Optionally takes a logical named inverse determining whether to return the inverse multiquadratic kernel or not.

"prewitt" Prewitt filter kernel, which emphasizes horizontal (vertical) edges through approximation of a vertical (horizontal) gradient.  Optionally takes a logical argument named 'transpose', which if FALSE (default) emphasis is on horizontal, and if TRUE emphasis is on vertical.  Returns a 3 X 3 matrix whose first row is all ones, second row is all zeros, and third row is all negative ones for the transpose=FALSE case, and the transpose of this matrix in teh transpose=TRUE case.
“普里威特”Prewitt滤波内核,它强调通过近似垂直(水平)的梯度的水平(垂直)的边缘。 (可选)一个名为“转置”的逻辑论证,如果为FALSE(默认)强调的是水平,如果真正的重点是垂直的。返回一个3×3的矩阵,它的第一行是所有的人,第二排是全零,第三排的转置= FALSE的情况下,所有负面的,这在格兰转置矩阵的转= TRUE的情况下。
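The 3 X 3 matrix just described is simple enough to write out directly (a hand-built illustration of the stated layout):

```r
# Prewitt kernel per the text: rows of ones, zeros, negative ones
# (the transpose = FALSE case); transposing gives the vertical-edge form.
K.prewitt   <- rbind(rep(1, 3), rep(0, 3), rep(-1, 3))
K.prewitt.t <- t(K.prewitt)   # the transpose = TRUE case
```

The rows sum to zero, so the filter responds to gradients rather than to the mean level of the field.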

"power" The power kernel is defined by K(p) = -h^p.  The log power kernel is similarly defined as K(p) = -log(h^p+1).  Requires specification of the arguments nx, ny and p.  Alternatively takes the logical do.log to determine whether the log power kernel should be returned (TRUE) or not (FALSE).  Default if not passed is to do the power kernel.  Returns an nx X ny matrix.  See Souza (2010) for more details.

"radial" The radial kernel is returns a*|h|^(2*m-d)*log(|h|) if d is even and a*|h|^(2*m-d) otherwise.  Requires arguments a, m, d nx and ny.  Replaces any missing values with zero.
“放射状”的径向内核返回* | H | ^(2 * MD)*log(| H |)如果D是偶数和* | H | ^(2 * MD),否则。需要的参数,M,D nx和ny。替换任何缺失值为零。

"ratquad" The rational quadratic kernel is used as an alternative to the Gaussian, and is given by K(a) = 1 - h/(h+a).  Requires the arguments nx, ny and a, and returns an 'nx X ny' matrix.  See Souza (2010) for more details.

"sobel" Same as prewitt, except that the elements 1,2 and 3,2 are replaced by two and neative two, resp.
“sobel算”相同普里威特的,不同的是由两个和neative 2,分别取代元件1,2和3,2。

"student" The generalized Student's t kernel is defined by K(p)=1/(1+h^p).  Requires the arguments nx, ny and p be specified.  Returns an nx X ny matrix.  See Souza (2010) for more details.
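The Student's t kernel is another one-liner on the distance grid (a sketch from the quoted formula):

```r
# Generalized Student's t kernel per the text: K(p) = 1/(1 + h^p).
nx <- 5; ny <- 5; p <- 1.5
h <- sqrt(outer((1:nx - nx/2)^2, (1:ny - ny/2)^2, "+"))
K.student <- 1 / (1 + h^p)
```

Like the Cauchy kernel, it has heavier tails than the Gaussian, so distant grid points retain more weight.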

"unsharp" Unsharp contrast enhancement filter.  This is simply given by a 3 X 3 matrix of al zeros, except for a one in the center subtracted by a laplacian operator kernel matrix.  Requires the same arguments as for "laplacian".  Returns a 3 X 3 matrix.
“不清晰的”不清晰的对比度增强过滤器。这简直是一个3×3矩阵的人零,除了一个在减去拉普拉斯算子核矩阵的中心。需要相同的参数为“拉普拉斯算子”。返回一个3×3的矩阵。

"wave" The wave kernel is defined by K(phi) = phi/h * sin( h/phi).  Requires arguments nx, ny and phi be specified.  Returns an nx X ny matrix.


Value----------

A matrix whose dimension is determined by the specific type of kernel (and possibly by user-passed arguments), giving the kernel to be used by kernel2dsmooth.


Author(s)----------



Eric Gilleland




References----------

See Also----------

fft, kernel2dsmooth, hoods2d


Examples----------



x <- matrix( 0, 10, 12)
x[4,5] <- 1
kmat <- kernel2dmeitsjer( "average", nx=7, ny=5)
kernel2dsmooth( x, K=kmat)

##
## Can also call 'kernel2dsmooth' directly.
##
kernel2dsmooth( x, kernel.type="boxcar", n=5)
kernel2dsmooth( x, kernel.type="cauchy", sigma=20, nx=10, ny=12)
kernel2dsmooth( x, kernel.type="disk", r=3)
kernel2dsmooth( x, kernel.type="epanechnikov", nx=10, ny=12, sigma=4)
kernel2dsmooth( x, kernel.type="exponential", a=0.1, sigma=4, nx=10, ny=12)
kernel2dsmooth( x, kernel.type="gauss", nx=10, ny=12, sigma=4)
kernel2dsmooth( x, kernel.type="laplacian", alpha=0)
kernel2dsmooth( x, kernel.type="LoG", nx=10, ny=12, sigma=1)
kernel2dsmooth( x, kernel.type="minvar", nx=10, ny=12, sigma=4)
kernel2dsmooth( x, kernel.type="multiquad", a=0.1, nx=10, ny=12)
kernel2dsmooth( x, kernel.type="power", p=0.5, nx=10, ny=12)
kernel2dsmooth( x, kernel.type="prewitt")
kernel2dsmooth( x, kernel.type="prewitt", transpose=TRUE)
kernel2dsmooth( x, kernel.type="radial", a=1, m=2, d=1, nx=10, ny=12)
kernel2dsmooth( x, kernel.type="ratquad", a=0.1, nx=10, ny=12)
kernel2dsmooth( x, kernel.type="sobel")
kernel2dsmooth( x, kernel.type="sobel", transpose=TRUE)
kernel2dsmooth( x, kernel.type="student", p=1.5, nx=10, ny=12)
kernel2dsmooth( x, kernel.type="unsharp", alpha=0)
kernel2dsmooth( x, kernel.type="wave", phi=45, nx=10, ny=12)

## Not run:
data(lennon)
kmat <- kernel2dmeitsjer( "average", nx=7, ny=5)
lennon.smAvg <- kernel2dsmooth( lennon, K=kmat)
## Can also just make a call to kernel2dsmooth, which
## will call this function.
lennon.smBox <- kernel2dsmooth( lennon, kernel.type="boxcar", n=7)
lennon.smDsk <- kernel2dsmooth( lennon, kernel.type="disk", r=5)
par( mfrow=c(2,2), mar=rep(0.1,4))
image.plot( lennon, col=tim.colors(256), axes=FALSE)
image.plot( lennon.smAvg, col=tim.colors(256), axes=FALSE)
image.plot( lennon.smBox, col=tim.colors(256), axes=FALSE)
image.plot( lennon.smDsk, col=tim.colors(256), axes=FALSE)

lennon.smEpa <- kernel2dsmooth( lennon, kernel.type="epanechnikov", nx=10, ny=10, sigma=20)
lennon.smGau <- kernel2dsmooth( lennon, kernel.type="gauss", nx=10, ny=10, sigma=20)
lennon.smMvr <- kernel2dsmooth( lennon, kernel.type="minvar", nx=10, ny=10, sigma=20)
par( mfrow=c(2,2), mar=rep(0.1,4))
image.plot( lennon, col=tim.colors(256), axes=FALSE)
image.plot( lennon.smEpa, col=tim.colors(256), axes=FALSE)
image.plot( lennon.smGau, col=tim.colors(256), axes=FALSE)
image.plot( lennon.smMvr, col=tim.colors(256), axes=FALSE)

lennon.smLa0 <- kernel2dsmooth( lennon, kernel.type="laplacian", alpha=0)
lennon.smLap <- kernel2dsmooth( lennon, kernel.type="laplacian", alpha=0.999)
lennon.smLoG <- kernel2dsmooth( lennon, kernel.type="LoG", nx=10, ny=10, sigma=20)
par( mfrow=c(2,2), mar=rep(0.1,4))
image.plot( lennon, col=tim.colors(256), axes=FALSE)
image.plot( lennon.smLa0, col=tim.colors(256), axes=FALSE)
image.plot( lennon.smLap, col=tim.colors(256), axes=FALSE)
image.plot( lennon.smLoG, col=tim.colors(256), axes=FALSE)

lennon.smPrH <- kernel2dsmooth( lennon, kernel.type="prewitt")
lennon.smPrV <- kernel2dsmooth( lennon, kernel.type="prewitt", transpose=TRUE)
lennon.smSoH <- kernel2dsmooth( lennon, kernel.type="sobel")
lennon.smSoV <- kernel2dsmooth( lennon, kernel.type="sobel", transpose=TRUE)
par( mfrow=c(2,2), mar=rep(0.1,4))
image.plot( lennon.smPrH, col=tim.colors(256), axes=FALSE)
image.plot( lennon.smPrV, col=tim.colors(256), axes=FALSE)
image.plot( lennon.smSoH, col=tim.colors(256), axes=FALSE)
image.plot( lennon.smSoV, col=tim.colors(256), axes=FALSE)

lennon.smUsh <- kernel2dsmooth( lennon, kernel.type="unsharp", alpha=0.999)
par( mfrow=c(2,1), mar=rep(0.1,4))
image.plot( lennon, col=tim.colors(256), axes=FALSE)
image.plot( lennon.smUsh, col=tim.colors(256), axes=FALSE)

lennon.smRad1 <- kernel2dsmooth( lennon, kernel.type="radial", a=2, m=2, d=1, nx=10, ny=10)
lennon.smRad2 <- kernel2dsmooth( lennon, kernel.type="radial", a=2, m=2, d=2, nx=10, ny=10)
par( mfrow=c(2,1), mar=rep(0.1,4))
image.plot( lennon.smRad1, col=tim.colors(256), axes=FALSE)
image.plot( lennon.smRad2, col=tim.colors(256), axes=FALSE)

lennon.smRQd <- kernel2dsmooth( lennon, kernel.type="ratquad", a=0.5, nx=10, ny=10)
lennon.smExp <- kernel2dsmooth( lennon, kernel.type="exponential", a=0.5, sigma=20, nx=10, ny=10)
lennon.smMQd <- kernel2dsmooth( lennon, kernel.type="multiquad", a=0.5, nx=10, ny=10)
par( mfrow=c(2,2), mar=rep(0.1,4))
image.plot( lennon.smGau, col=tim.colors(256), axes=FALSE)
image.plot( lennon.smRQd, col=tim.colors(256), axes=FALSE)
image.plot( lennon.smExp, col=tim.colors(256), axes=FALSE)
image.plot( lennon.smMQd, col=tim.colors(256), axes=FALSE)

lennon.smIMQ <- kernel2dsmooth( lennon, kernel.type="multiquad", a=0.5, nx=10, ny=10, inverse=TRUE)
par( mfrow=c(2,1), mar=rep(0.1,4))
image.plot( lennon.smMQd, col=tim.colors(256), axes=FALSE)
image.plot( lennon.smIMQ, col=tim.colors(256), axes=FALSE)

lennon.smWav <- kernel2dsmooth( lennon, kernel.type="wave", phi=45, nx=10, ny=10)
par( mfrow=c(1,1), mar=rep(0.1,4))
image.plot( lennon.smWav, col=tim.colors(256), axes=FALSE)

lennon.smPow <- kernel2dsmooth( lennon, kernel.type="power", p=0.5, nx=10, ny=10)
lennon.smLpw <- kernel2dsmooth( lennon, kernel.type="power", p=0.5, nx=10, ny=10, do.log=TRUE)
par( mfrow=c(2,1), mar=rep(0.1,4))
image.plot( lennon.smPow, col=tim.colors(256), axes=FALSE)
image.plot( lennon.smLpw, col=tim.colors(256), axes=FALSE)

lennon.smCau <- kernel2dsmooth( lennon, kernel.type="cauchy", sigma=20, nx=10, ny=10)
lennon.smStd <- kernel2dsmooth( lennon, kernel.type="student", p=1.5, nx=10, ny=10)
par( mfrow=c(2,1), mar=rep(0.1,4))
image.plot( lennon.smCau, col=tim.colors(256), axes=FALSE)
image.plot( lennon.smStd, col=tim.colors(256), axes=FALSE)

## End(Not run)

Reprinted from biostatistic.net (http://www.biostatistic.net).

