Help documentation for wpca.matrix() in the R package aroma.light (Chinese-English, machine-translated)

Posted 2012-2-25 12:07:49
wpca.matrix(aroma.light)
wpca.matrix() is part of the R package aroma.light.

Light-weight Weighted Principal Component Analysis

Translator: biostatistic.net robot LoveR

Description

Calculates the (weighted) principal components of a matrix, that is, finds a new coordinate system (not unique) for representing the given multivariate data such that i) all dimensions are orthogonal to each other, and ii) all dimensions have maximal variances.


Usage

## S3 method for class 'matrix':
wpca(x, w=NULL, center=TRUE, scale=FALSE, method=c("dgesdd", "dgesvd"), swapDirections=FALSE, ...)
# (The usage block was lost from the original post; the call above is
# reconstructed from the argument list below and the defaults are unverified.)

Arguments

x
An NxK matrix.


w
An N vector of weights for each row (observation) in the data matrix. If NULL, all observations get the same weight, that is, standard PCA is used.


center
If TRUE, the (weighted) sample mean column vector is first subtracted from each column in mat. If the data are not centered, the effect is that a linear subspace through the origin is fitted.


scale
If TRUE, each column in mat is first divided by the (weighted) root mean square of the centered column.


method
If "dgesdd", LAPACK's divide-and-conquer SVD routine is used (faster [1]); if "dgesvd", LAPACK's QR-decomposition-based routine is used; and if "dsvdc", LINPACK's DSVDC(?) routine is used. The latter is kept purely for backward compatibility with R v1.7.0.


swapDirections
If TRUE, the signs of eigenvectors that have more negative than positive components are inverted, and the signs of the corresponding principal components are inverted as well. This is only of interest when, for instance, visualizing or comparing with PCA estimates from other methods, because the PCA (SVD) decomposition of a matrix is not unique.
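Because the decomposition is not unique, a sign flip applied jointly to an eigenvector and its principal component leaves the fitted data unchanged. A small NumPy sketch (illustrative only, not the package's code):

```python
import numpy as np

# Negating one eigenvector (a row of vt) together with the matching
# principal component (a column of pc) leaves pc %*% vt unchanged,
# which is why swapDirections may flip signs freely.
rng = np.random.default_rng(2)
x = rng.normal(size=(5, 3))
U, d, vt = np.linalg.svd(x - x.mean(axis=0), full_matrices=False)
pc = U * d

pc2, vt2 = pc.copy(), vt.copy()
pc2[:, 0] *= -1      # flip the first principal component ...
vt2[0, :] *= -1      # ... and its eigenvector
assert np.allclose(pc2 @ vt2, pc @ vt)
```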


...
Not used.


Value

Returns a list with elements:


pc
An NxK matrix where the column vectors are the principal components (a.k.a. loading vectors, spectral loadings or factors etc).


d
A K vector containing the eigenvalues of the principal components.


vt
A KxK matrix containing the eigenvectors of the principal components.


xMean
The center coordinate.

It holds that x == t(t(fit$pc %*% fit$vt) + fit$xMean).
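This identity says the data can be reconstructed exactly from pc, vt and xMean; the double transpose in R just adds xMean back row-wise. A NumPy sketch of the unweighted case (mimicking the documented output, not the package's actual implementation):

```python
import numpy as np

# Reproduce the documented output for standard (unweighted) PCA:
# center the data, take the SVD X = U D V', and return pc = U D, vt = V', xMean.
rng = np.random.default_rng(0)
x = rng.normal(size=(6, 3))          # an NxK data matrix (N=6, K=3)

xMean = x.mean(axis=0)               # sample mean column vector
U, d, vt = np.linalg.svd(x - xMean, full_matrices=False)
pc = U * d                           # principal components (NxK)

# The identity from the Value section: x == pc %*% vt + xMean (row-wise)
assert np.allclose(pc @ vt + xMean, x)
```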


Method

A singular value decomposition (SVD) is carried out. Let X=mat; then the SVD of the matrix is X = U D V', where U and V are orthogonal and D is a diagonal matrix of singular values. The principal components returned by this method are U D.

Internally, La.svd() (or svd()) of the base package is used. For a popular and well-written introduction to SVD see, for instance, [2].
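One common way to fold row weights into an SVD-based PCA, and plausibly what "weighted" means here (an assumption, not confirmed by this page), is to use the weighted mean and scale each centered row by sqrt(w) before the SVD. With equal weights this reduces to standard PCA:

```python
import numpy as np

def weighted_pca(x, w):
    """Weighted PCA via SVD; rows weighted by w (one common construction,
    not necessarily identical to aroma.light's internals)."""
    w = np.asarray(w, dtype=float)
    xMean = (w[:, None] * x).sum(axis=0) / w.sum()   # weighted mean
    xc = (x - xMean) * np.sqrt(w)[:, None]           # sqrt-weight each row
    U, d, vt = np.linalg.svd(xc, full_matrices=False)
    return d, vt, xMean

rng = np.random.default_rng(1)
x = rng.normal(size=(50, 3))

# With equal weights this coincides with standard PCA on centered data:
d1, vt1, m1 = weighted_pca(x, np.ones(50))
U, d0, vt0 = np.linalg.svd(x - x.mean(axis=0), full_matrices=False)
assert np.allclose(d1, d0)
assert np.allclose(np.abs(vt1), np.abs(vt0))   # directions match up to sign
```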


Author(s)


Henrik Bengtsson (http://www.braju.com/R/)



References

[1] http://www.cs.berkeley.edu/~demmel/DOE2000/Report0100.html
[2] Todd Will, Introduction to the Singular Value Decomposition, UW-La Crosse, 2004. http://www.uwlax.edu/faculty/will/svd/

See Also

For an iterative re-weighted PCA method, see *iwpca(). For singular value decomposition, see svd(). For other implementations of principal component analysis, see (if they are installed): prcomp in package stats and pca() in package pcurve.


Examples


  for (zzz in 0) {

# This example requires plot3d() in R.basic [http://www.braju.com/R/]
if (!require(R.basic)) break

# -------------------------------------------------------------
# A first example
# -------------------------------------------------------------
# Simulate data from the model y <- a + bx + eps(bx)
x <- rexp(1000)
a <- c(2,15,3)
b <- c(2,3,15)
bx <- outer(b,x)
eps <- apply(bx, MARGIN=2, FUN=function(x) rnorm(length(x), mean=0, sd=0.1*x))
y <- a + bx + eps
y <- t(y)

# Add some outliers by permuting the dimensions for 1/3 of the observations
idx <- sample(1:nrow(y), size=1/3*nrow(y))
y[idx,] <- y[idx,c(2,3,1)]

# Down-weight the outliers W times to demonstrate how weights are used
W <- 10

# Plot the data with fitted lines at four different view points
N <- 4
theta <- seq(0,180,length=N)
phi <- rep(30, length.out=N)

# Use a different color for each set of weights
col <- topo.colors(W)

opar <- par(mar=c(1,1,1,1)+0.1)
layout(matrix(1:N, nrow=2, byrow=TRUE))
for (kk in seq(theta)) {
  # Plot the data
  plot3d(y, theta=theta[kk], phi=phi[kk])

  # First, same weights for all observations
  w <- rep(1, length=nrow(y))

  for (ww in 1:W) {
    # Fit a line through the data using IWPCA
    fit <- wpca(y, w=w, swapDirections=TRUE)

    # Get the first principal component
    ymid <- fit$xMean
    d0 <- apply(y, MARGIN=2, FUN=min) - ymid
    d1 <- apply(y, MARGIN=2, FUN=max) - ymid
    b <- fit$vt[1,]
    y0 <- -b * max(abs(d0))
    y1 <-  b * max(abs(d1))
    yline <- matrix(c(y0,y1), nrow=length(b), ncol=2)
    yline <- yline + ymid

    points3d(t(ymid), col=col)
    lines3d(t(yline), col=col)

    # Down-weight outliers only, because here we know which they are.
    w[idx] <- w[idx]/2
  }

  # Highlight the last one
  lines3d(t(yline), col="red", lwd=3)
}

par(opar)

} # for (zzz in 0)
rm(zzz)


  if (dev.cur() > 1) dev.off()

  # -------------------------------------------------------------
# A second example
# -------------------------------------------------------------
# Data
x <- c(1,2,3,4,5)
y <- c(2,4,3,3,6)

opar <- par(bty="L")
opalette <- palette(c("blue", "red", "black"))
xlim <- ylim <- c(0,6)

# Plot the data and the center mass
plot(x,y, pch=16, cex=1.5, xlim=xlim, ylim=ylim)
points(mean(x), mean(y), cex=2, lwd=2, col="blue")


# Linear regression y ~ x
fit <- lm(y ~ x)
abline(fit, lty=1, col=1)

# Linear regression y ~ x without intercept
fit <- lm(y ~ x - 1)
abline(fit, lty=2, col=1)


# Linear regression x ~ y
fit <- lm(x ~ y)
c <- coefficients(fit)
b <- 1/c[2]
a <- -b*c[1]
abline(a=a, b=b, lty=1, col=2)

# Linear regression x ~ y without intercept
fit <- lm(x ~ y - 1)
b <- 1/coefficients(fit)
abline(a=0, b=b, lty=2, col=2)


# Orthogonal linear "regression"
fit <- wpca(cbind(x,y))

b <- fit$vt[1,2]/fit$vt[1,1]
a <- fit$xMean[2]-b*fit$xMean[1]
abline(a=a, b=b, lwd=2, col=3)

# Orthogonal linear "regression" without intercept
fit <- wpca(cbind(x,y), center=FALSE)
b <- fit$vt[1,2]/fit$vt[1,1]
a <- fit$xMean[2]-b*fit$xMean[1]
abline(a=a, b=b, lty=2, lwd=2, col=3)

legend(xlim[1],ylim[2], legend=c("lm(y~x)", "lm(y~x-1)", "lm(x~y)",
          "lm(x~y-1)", "pca", "pca w/o intercept"), lty=rep(1:2,3),
                     lwd=rep(c(1,1,2),each=2), col=rep(1:3,each=2))

palette(opalette)
par(opar)
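The orthogonal "regression" line in the second example is the total-least-squares fit: its slope comes from the first eigenvector, fit$vt[1,2]/fit$vt[1,1] in R's 1-based indexing (vt[0,1]/vt[0,0] below). A NumPy check on the same five points (illustrative, not part of the original example):

```python
import numpy as np

# The same five data points as in the second example
x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([2, 4, 3, 3, 6], dtype=float)
M = np.column_stack([x, y])
mean = M.mean(axis=0)

# First right singular vector of the centered data gives the orthogonal slope
_, _, vt = np.linalg.svd(M - mean, full_matrices=False)
b_tls = vt[0, 1] / vt[0, 0]          # orthogonal (total least squares) slope
a_tls = mean[1] - b_tls * mean[0]    # intercept through the center of mass

# Ordinary least squares slope for comparison (lm(y ~ x) in the example)
b_ols = np.polyfit(x, y, 1)[0]
print(b_tls, b_ols)
```

The two slopes differ because OLS minimizes vertical residuals only, while the principal-axis fit minimizes orthogonal distances to the line.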


Please credit biostatistic.net (http://www.biostatistic.net) when reposting.

Note: this document was machine-translated by the biostatistic.net robot LoveR and is intended only as a personal reference for learning R; biostatistic.net retains the copyright. Machine translation is inevitably imprecise; if you find errors, please reply in this thread and they will be corrected over time.