R version 4.2.1 (2022-06-23) -- "Funny-Looking Kid" Copyright (C) 2022 The R Foundation for Statistical Computing Platform: aarch64-apple-darwin20 (64-bit) R is free software and comes with ABSOLUTELY NO WARRANTY. You are welcome to redistribute it under certain conditions. Type 'license()' or 'licence()' for distribution details. Natural language support but running in an English locale R is a collaborative project with many contributors. Type 'contributors()' for more information and 'citation()' on how to cite R or R packages in publications. Type 'demo()' for some demos, 'help()' for on-line help, or 'help.start()' for an HTML browser interface to help. Type 'q()' to quit R. > pkgname <- "psych" > source(file.path(R.home("share"), "R", "examples-header.R")) > options(warn = 1) > library('psych') > > base::assign(".oldSearch", base::search(), pos = 'CheckExEnv') > base::assign(".old_wd", base::getwd(), pos = 'CheckExEnv') > cleanEx() > nameEx("00.psych-package") > ### * 00.psych-package > > flush(stderr()); flush(stdout()) > > ### Name: 00.psych > ### Title: A package for personality, psychometric, and psychological > ### research > ### Aliases: psych psych-package 00.psych-package > ### Keywords: package multivariate models cluster > > ### ** Examples > > #See the separate man pages > #to test most of the psych package run the following > #test.psych() > > > > cleanEx() > nameEx("AUC") > ### * AUC > > flush(stderr()); flush(stdout()) > > ### Name: AUC > ### Title: Decision Theory measures of specificity, sensitivity, and d > ### prime > ### Aliases: AUC auc Specificity Sensitivity > ### Keywords: multivariate > > ### ** Examples > > AUC(c(30,20,20,30)) #specify the table input Decision Theory and Area under the Curve The original data implied the following 2 x 2 table Predicted.Pos Predicted.Neg True.Pos 0.3 0.2 True.Neg 0.2 0.3 Conditional probabilities of Predicted.Pos Predicted.Neg True.Pos 0.6 0.4 True.Neg 0.4 0.6 Accuracy = 0.6 Sensitivity = 0.6 Specificity = 0.6 with Area Under the Curve = 0.64 d.prime = 0.51 Criterion = 0.25 Beta = 1 Observed Phi correlation = 0.2 Inferred latent (tetrachoric) correlation = 0.31> AUC(c(140,60,100,900)) #Metz example with colors Decision Theory and Area under the Curve The original data implied the following 2 x 2 table Predicted.Pos Predicted.Neg True.Pos 0.117 0.05 True.Neg 0.083 0.75 Conditional probabilities of Predicted.Pos Predicted.Neg True.Pos 0.7 0.3 True.Neg 0.1 0.9 Accuracy = 0.87 Sensitivity = 0.7 Specificity = 0.9 with Area Under the Curve = 0.9 d.prime = 1.81 Criterion = 1.28 Beta = 0.16 Observed Phi correlation = 0.56 Inferred latent (tetrachoric) correlation = 0.81> AUC(c(140,60,100,900),col=c("grey","black")) #Metz example 1 no colors Decision Theory and Area under the Curve The original data implied the following 2 x 2 table Predicted.Pos Predicted.Neg True.Pos 0.117 0.05 True.Neg 0.083 0.75 Conditional probabilities of Predicted.Pos Predicted.Neg True.Pos 0.7 0.3 True.Neg 0.1 0.9 Accuracy = 0.87 Sensitivity = 0.7 Specificity = 0.9 with Area Under the Curve = 0.9 d.prime = 1.81 Criterion = 1.28 Beta = 0.16 Observed Phi correlation = 0.56 Inferred latent (tetrachoric) correlation = 0.81> AUC(c(80,120,40, 960)) #Metz example 2 Note how the accuracies are the same but d's differ Decision Theory and Area under the Curve The original data implied the following 2 x 2 table Predicted.Pos Predicted.Neg True.Pos 0.067 0.1 True.Neg 0.033 0.8 Conditional probabilities of Predicted.Pos Predicted.Neg True.Pos 0.40 0.60 True.Neg 0.04 0.96 
Accuracy = 0.87  Sensitivity = 0.4  Specificity = 0.96
with Area Under the Curve = 0.85
d.prime = 1.5  Criterion = 1.75  Beta = 0.08
Observed Phi correlation = 0.45
Inferred latent (tetrachoric) correlation = 0.74
> AUC(c(49,40,79,336)) #Wiggins p 249
Decision Theory and Area under the Curve

The original data implied the following 2 x 2 table
         Predicted.Pos Predicted.Neg
True.Pos         0.097         0.079
True.Neg         0.157         0.667

Conditional probabilities of
         Predicted.Pos Predicted.Neg
True.Pos          0.55          0.45
True.Neg          0.19          0.81

Accuracy = 0.76  Sensitivity = 0.55  Specificity = 0.81
with Area Under the Curve = 0.76
d.prime = 1  Criterion = 0.88  Beta = 0.15
Observed Phi correlation = 0.32
Inferred latent (tetrachoric) correlation = 0.53
> AUC(BR=.05,SR=.254,Phi = .317) #Wiggins 251 extreme Base Rates
Decision Theory and Area under the Curve

The original data implied the following 2 x 2 table
         Predicted.Pos Predicted.Neg
True.Pos         0.043        0.0072
True.Neg         0.211        0.7388

Conditional probabilities of
         Predicted.Pos Predicted.Neg
True.Pos          0.86          0.14
True.Neg          0.22          0.78

Accuracy = 0.78  Sensitivity = 0.86  Specificity = 0.78
with Area Under the Curve = 0.9
d.prime = 1.82  Criterion = 0.76  Beta = 0.06
Observed Phi correlation = 0.32
Inferred latent (tetrachoric) correlation = 0.71
> 
> 
> 
> cleanEx()
> nameEx("Garcia")
> ### * Garcia
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: Garcia
> ### Title: Data from the sexism (protest) study of Garcia, Schmitt,
> ###   Branscombe, and Ellemers (2010)
> ### Aliases: Garcia protest GSBE
> ### Keywords: datasets
> 
> ### ** Examples
> 
> data(GSBE) #alias to Garcia data set
> 
> ## Just do regressions with interactions
> setCor(respappr ~ prot2 * sexism,std=FALSE,data=Garcia,main="Moderated (mean centered)")
Call: setCor(y = respappr ~ prot2 * sexism, data = Garcia, std = FALSE,
    main = "Moderated (mean centered)")

Multiple Regression from raw data
 DV =  respappr
             slope   se     t       p lower.ci upper.ci   VIF  Vy.x
(Intercept)   4.85 0.10 48.14 1.5e-82     4.65     5.05     1  0.00
prot2         1.46 0.22  6.73 5.5e-10     1.03     1.89     1  0.25
sexism        0.02 0.13  0.18 8.6e-01    -0.23     0.28     1  0.00
prot2*sexism  0.81 0.28  2.87 4.8e-03     0.25     1.37     1  0.04

Residual Standard Error =  1.14  with  125  degrees of freedom

Multiple Regression
            R  R2  Ruw R2uw Shrunken R2 SE of R2 overall F df1 df2        p
respappr 0.54 0.3 0.37 0.13        0.28     0.06     17.53   3 125 1.46e-09
> setCor(respappr ~ prot2 * sexism,std=FALSE,data=Garcia,main="Moderated (don't center)", zero=FALSE)
Call: setCor(y = respappr ~ prot2 * sexism, data = Garcia, std = FALSE,
    main = "Moderated (don't center)", zero = FALSE)

Multiple Regression from raw data
 DV =  respappr
             slope   se     t       p lower.ci upper.ci   VIF  Vy.x
(Intercept)   6.57 1.21  5.43 2.8e-07     4.17     8.96  1.00  0.00
prot2        -2.69 1.45 -1.85 6.7e-02    -5.56     0.19 44.99 -0.47
sexism       -0.53 0.24 -2.24 2.7e-02    -1.00    -0.06  3.34 -0.01
prot2*sexism  0.81 0.28  2.87 4.8e-03     0.25     1.37 48.14  0.77

Residual Standard Error =  1.14  with  125  degrees of freedom

Multiple Regression
            R  R2  Ruw R2uw Shrunken R2 SE of R2 overall F df1 df2        p
respappr 0.54 0.3 0.41 0.17        0.28     0.06     17.53   3 125 1.46e-09
> #demonstrate interaction plots
> plot(respappr ~ sexism, pch = 23- protest, bg = c("black","red", "blue")[protest],
+    data=Garcia, main = "Response to sexism varies as type of protest")
> by(Garcia,Garcia$protest, function(x) abline(lm(respappr ~ sexism,
+    data =x),lty=c("solid","dashed","dotted")[x$protest+1]))
Garcia$protest: 0
NULL
------------------------------------------------------------
Garcia$protest: 1
NULL
------------------------------------------------------------
Garcia$protest: 2
NULL
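> #[Editor's addition, not part of the original check run: a minimal sketch of
> #probing the interaction above with simple slopes. Recentering the moderator
> #at +/- 1 sd and refitting makes the prot2 coefficient the simple slope at
> #that level of sexism. Base R only; the sex.lo/sex.hi names are hypothetical.]
> #Garcia$sex.lo <- Garcia$sexism - (mean(Garcia$sexism) - sd(Garcia$sexism))
> #Garcia$sex.hi <- Garcia$sexism - (mean(Garcia$sexism) + sd(Garcia$sexism))
> #coef(lm(respappr ~ prot2 * sex.lo, data = Garcia)) #simple slope of prot2 at low sexism
> #coef(lm(respappr ~ prot2 * sex.hi, data = Garcia)) #simple slope of prot2 at high sexism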
> text(6.5,3.5,"No protest")
> text(3,3.9,"Individual")
> text(3,5.2,"Collective")
> 
> 
> #compare two models (bootstrapping n.iter set to 50 for speed)
> # 1) mean center the variables prior to taking product terms
> mod1 <- mediate(respappr ~ prot2 * sexism +(sexism),data=Garcia,n.iter=50
+    ,main="Moderated mediation (mean centered)")
> # 2) do not mean center
> mod2 <- mediate(respappr ~ prot2 * sexism +(sexism),data=Garcia,zero=FALSE, n.iter=50,
+    main="Moderated mediation (not centered)")
> 
> summary(mod1)
Call: mediate(y = respappr ~ prot2 * sexism + (sexism), data = Garcia,
    n.iter = 50, main = "Moderated mediation (mean centered)")

Direct effect estimates (traditional regression) (c') X + M on Y
             respappr   se     t  df     Prob
Intercept       -0.01 0.10 -0.12 125 9.07e-01
prot2            1.46 0.22  6.73 125 5.52e-10
prot2*sexism     0.81 0.28  2.87 125 4.78e-03
sexism           0.02 0.13  0.18 125 8.56e-01

R = 0.54 R2 = 0.3   F = 17.53 on 3 and 125 DF   p-value:  1.46e-09

Total effect estimates (c) (X on Y)
             respappr   se     t  df     Prob
Intercept       -0.01 0.10 -0.12 126 9.06e-01
prot2            1.46 0.22  6.77 126 4.43e-10
prot2*sexism     0.81 0.28  2.89 126 4.49e-03

'a' effect estimates (X on M)
             sexism   se     t  df  Prob
Intercept      0.00 0.07 -0.02 126 0.986
prot2          0.07 0.15  0.47 126 0.642
prot2*sexism   0.09 0.19  0.44 126 0.661

'b' effect estimates (M on Y controlling for X)
       respappr   se    t  df  Prob
sexism     0.02 0.13 0.18 125 0.856

'ab' effect estimates (through all mediators)
             respappr  boot   sd lower upper
prot2               0  0.00 0.02 -0.04  0.06
prot2*sexism        0 -0.01 0.06 -0.04  0.06
> summary(mod2)
Call: mediate(y = respappr ~ prot2 * sexism + (sexism), data = Garcia,
    n.iter = 50, zero = FALSE, main = "Moderated mediation (not centered)")

Direct effect estimates (traditional regression) (c') X + M on Y
             respappr   se     t  df     Prob
Intercept        6.57 1.21  5.43 125 2.83e-07
prot2           -2.69 1.45 -1.85 125 6.65e-02
prot2*sexism     0.81 0.28  2.87 125 4.78e-03
sexism          -0.53 0.24 -2.24 125 2.67e-02

R = 0.54 R2 = 0.3   F = 17.53 on 3 and 125 DF   p-value:  1.46e-09

Total effect estimates (c) (X on Y)
             respappr   se     t  df     Prob
Intercept        3.88 0.18 21.39 126 9.14e-44
prot2            0.00 0.84  0.00 126 9.96e-01
prot2*sexism     0.28 0.16  1.79 126 7.56e-02

'a' effect estimates (X on M)
             sexism   se      t  df      Prob
Intercept      5.07 0.07  75.12 126 1.69e-106
prot2         -5.07 0.31 -16.33 126  6.81e-33
prot2*sexism   1.00 0.06  17.15 126  9.41e-35

'b' effect estimates (M on Y controlling for X)
       respappr   se     t  df   Prob
sexism    -0.53 0.24 -2.24 125 0.0267

'ab' effect estimates (through all mediators)
             respappr  boot   sd lower upper
prot2            2.68  3.21 1.42  0.76  6.04
prot2*sexism    -0.53 -0.63 0.28  0.76  6.04
> 
> 
> 
> 
> 
> cleanEx()
> nameEx("Gleser")
> ### * Gleser
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: Gleser
> ### Title: Example data from Gleser, Cronbach and Rajaratnam (1965) to show
> ###   basic principles of generalizability theory.
> ### Aliases: Gleser > ### Keywords: datasets > > ### ** Examples > > #Find the MS for each component: > #First, stack the data > data(Gleser) > stack.g <- stack(Gleser) > st.gc.df <- data.frame(stack.g,Persons=rep(letters[1:12],12), + Items=rep(letters[1:6],each=24),Judges=rep(letters[1:2],each=12)) > #now do the ANOVA > anov <- aov(values ~ (Persons*Judges*Items),data=st.gc.df) > summary(anov) Df Sum Sq Mean Sq Persons 11 84.17 7.652 Judges 1 1.00 1.000 Items 5 64.67 12.933 Persons:Judges 11 19.50 1.773 Persons:Items 55 81.17 1.476 Judges:Items 5 4.33 0.867 Persons:Judges:Items 55 34.17 0.621 > > > > cleanEx() > nameEx("Gorsuch") > ### * Gorsuch > > flush(stderr()); flush(stdout()) > > ### Name: Gorsuch > ### Title: Example data set from Gorsuch (1997) for an example factor > ### extension. > ### Aliases: Gorsuch > ### Keywords: datasets > > ### ** Examples > > data(Gorsuch) > > Ro <- Gorsuch[1:6,1:6] > Roe <- Gorsuch[1:6,7:10] > fo <- fa(Ro,2,rotate="none") > fa.extension(Roe,fo,correct=FALSE) Call: fa.extension(Roe = Roe, fo = fo, correct = FALSE) Standardized loadings (pattern matrix) based upon correlation matrix MR1 MR2 h2 u2 info2 0.72 0.34 0.63 0.37 tension2 -0.40 0.54 0.45 0.55 v123 0.82 0.41 0.84 0.16 v564 -0.45 0.71 0.71 0.29 MR1 MR2 SS loadings 1.55 1.08 Proportion Var 0.39 0.27 Cumulative Var 0.39 0.66 Proportion Explained 0.59 0.41 Cumulative Proportion 0.59 1.00 > > > > cleanEx() > nameEx("Harman") > ### * Harman > > flush(stderr()); flush(stdout()) > > ### Name: Harman > ### Title: Five data sets from Harman (1967). 9 cognitive variables from > ### Holzinger and 8 emotional variables from Burt > ### Aliases: Harman Harman.Burt Harman.Holzinger Harman.political Harman.5 > ### Harman.8 > ### Keywords: datasets > > ### ** Examples > > data(Harman) > cor.plot(Harman.Holzinger) > cor.plot(Harman.Burt) > smc(Harman.Burt) #note how this produces impossible results Warning in cor.smooth(R) : Matrix was not positive definite, smoothing was done Sociability Sorrow Tenderness Joy Wonder Disgust 0.7986744 0.8187909 0.2199417 0.6316716 0.4571281 0.2508866 Anger Fear 0.3352716 0.3152878 > f2 <- fa(Harman.8,2, rotate="none") #minres matches Harman and Jones > > > > cleanEx() > nameEx("ICC") > ### * ICC > > flush(stderr()); flush(stdout()) > > ### Name: ICC > ### Title: Intraclass Correlations (ICC1, ICC2, ICC3 from Shrout and > ### Fleiss) > ### Aliases: ICC > ### Keywords: multivariate > > ### ** Examples > > sf <- matrix(c( + 9, 2, 5, 8, + 6, 1, 3, 2, + 8, 4, 6, 8, + 7, 1, 2, 6, + 10, 5, 6, 9, + 6, 2, 4, 7),ncol=4,byrow=TRUE) > colnames(sf) <- paste("J",1:4,sep="") > rownames(sf) <- paste("S",1:6,sep="") > sf #example from Shrout and Fleiss (1979) J1 J2 J3 J4 S1 9 2 5 8 S2 6 1 3 2 S3 8 4 6 8 S4 7 1 2 6 S5 10 5 6 9 S6 6 2 4 7 > ICC(sf,lmer=FALSE) #just use the aov procedure Call: ICC(x = sf, lmer = FALSE) Intraclass correlation coefficients type ICC F df1 df2 p lower bound upper bound Single_raters_absolute ICC1 0.17 1.8 5 18 0.16477 -0.133 0.72 Single_random_raters ICC2 0.29 11.0 5 15 0.00013 0.019 0.76 Single_fixed_raters ICC3 0.71 11.0 5 15 0.00013 0.342 0.95 Average_raters_absolute ICC1k 0.44 1.8 5 18 0.16477 -0.884 0.91 Average_random_raters ICC2k 0.62 11.0 5 15 0.00013 0.071 0.93 Average_fixed_raters ICC3k 0.91 11.0 5 15 0.00013 0.676 0.99 Number of subjects = 6 Number of Judges = 4 See the help file for a discussion of the other 4 McGraw and Wong estimates,> > #data(sai) > sai <- psychTools::sai > sai.xray <- subset(sai,(sai$study=="XRAY") & (sai$time==1)) > xray.icc <- 
ICC(sai.xray[-c(1:3)],lmer=TRUE,check.keys=TRUE) Some items were negatively correlated with total scale and were automatically reversed. This is indicated by a negative sign for the variable name. reversed items tense regretful upset worrying anxious nervous jittery high.strung worried rattled> xray.icc Call: ICC(x = sai.xray[-c(1:3)], lmer = TRUE, check.keys = TRUE) Intraclass correlation coefficients type ICC F df1 df2 p lower bound Single_raters_absolute ICC1 0.28 8.8 199 3800 3.1e-194 0.24 Single_random_raters ICC2 0.29 13.1 199 3781 7.0e-301 0.23 Single_fixed_raters ICC3 0.38 13.1 199 3781 7.0e-301 0.33 Average_raters_absolute ICC1k 0.89 8.8 199 3800 3.1e-194 0.86 Average_random_raters ICC2k 0.89 13.1 199 3781 7.0e-301 0.86 Average_fixed_raters ICC3k 0.92 13.1 199 3781 7.0e-301 0.91 upper bound Single_raters_absolute 0.33 Single_random_raters 0.35 Single_fixed_raters 0.43 Average_raters_absolute 0.91 Average_random_raters 0.92 Average_fixed_raters 0.94 Number of subjects = 200 Number of Judges = 20 See the help file for a discussion of the other 4 McGraw and Wong estimates,> xray.icc$lme #show the variance components as well variance Percent ID 0.3033250 0.2894688 Items 0.2434623 0.2323407 Residual 0.5010804 0.4781905 Total 1.0478677 1.0000000 > > > > cleanEx() > nameEx("ICLUST") > ### * ICLUST > > flush(stderr()); flush(stdout()) > > ### Name: iclust > ### Title: iclust: Item Cluster Analysis - Hierarchical cluster analysis > ### using psychometric principles > ### Aliases: ICLUST iclust > ### Keywords: multivariate cluster > > ### ** Examples > > test.data <- Harman74.cor$cov > ic.out <- iclust(test.data,title="ICLUST of the Harman data") > summary(ic.out) ICLUST (Item Cluster Analysis)Call: iclust(r.mat = test.data, title = "ICLUST of the Harman data") ICLUST of the Harman data Purified Alpha: [1] 0.91 Guttman Lambda6* [1] 0.94 Original Beta: [1] 0.63 Cluster size: [1] 24 Purified scale intercorrelations reliabilities on diagonal correlations corrected for attenuation above diagonal: [,1] [1,] 0.91 > > #use all defaults and stop at 4 clusters > ic.out4 <- iclust(test.data,nclusters =4,title="Force 4 clusters") > summary(ic.out4) ICLUST (Item Cluster Analysis)Call: iclust(r.mat = test.data, nclusters = 4, title = "Force 4 clusters") Force 4 clusters Purified Alpha: C13 C20 C15 V2 0.90 0.84 0.81 0.48 Guttman Lambda6* C13 C20 V2 C15 0.90 0.87 0.52 0.84 Original Beta: C13 C20 V2 C15 0.79 0.74 NA 0.74 Cluster size: C13 C20 C15 V2 5 12 5 2 Purified scale intercorrelations reliabilities on diagonal correlations corrected for attenuation above diagonal: C13 C20 C15 V2 C13 0.90 0.69 0.54 0.52 C20 0.60 0.84 0.71 0.75 C15 0.46 0.59 0.81 0.39 V2 0.34 0.48 0.24 0.48 > ic.out1 <- iclust(test.data,beta=3,beta.size=3) #use more stringent criteria > ic.out #more complete output ICLUST (Item Cluster Analysis) Call: iclust(r.mat = test.data, title = "ICLUST of the Harman data") Purified Alpha: [1] 0.91 G6* reliability: [1] 1 Original Beta: [1] 0.63 Cluster size: [1] 24 Item by Cluster Structure matrix: [,1] VisualPerception 0.60 Cubes 0.38 PaperFormBoard 0.43 Flags 0.48 GeneralInformation 0.67 PargraphComprehension 0.66 SentenceCompletion 0.64 WordClassification 0.66 WordMeaning 0.66 Addition 0.47 Code 0.57 CountingDots 0.48 StraightCurvedCapitals 0.61 WordRecognition 0.43 NumberRecognition 0.40 FigureRecognition 0.53 ObjectNumber 0.48 NumberFigure 0.54 FigureWord 0.46 Deduction 0.62 NumericalPuzzles 0.61 ProblemReasoning 0.61 SeriesCompletion 0.69 ArithmeticProblems 0.66 With eigenvalues of: [1] 7.6 
Purified scale intercorrelations reliabilities on diagonal correlations corrected for attenuation above diagonal: [,1] [1,] 0.91 Cluster fit = 0.8 Pattern fit = 0.94 RMSR = 0.1 > plot(ic.out4) #this shows the spatial representation Use ICLUST.diagram to see the hierarchical structure > #use a dot graphics viewer on the out.file > #dot.graph <- ICLUST.graph(ic.out,out.file="test.ICLUST.graph.dot") > #show the equivalent of a factor solution > fa.diagram(ic.out4$pattern,Phi=ic.out4$Phi,main="Pattern taken from iclust") > > > > > cleanEx() > nameEx("ICLUST.graph") > ### * ICLUST.graph > > flush(stderr()); flush(stdout()) > > ### Name: ICLUST.graph > ### Title: create control code for ICLUST graphical output > ### Aliases: ICLUST.graph iclust.graph > ### Keywords: multivariate cluster hplot > > ### ** Examples > > ## Not run: > ##D test.data <- Harman74.cor$cov > ##D ic.out <- ICLUST(test.data) > ##D #out.file <- file.choose(new=TRUE) #create a new file to write the plot commands to > ##D #ICLUST.graph(ic.out,out.file) > ##D now go to graphviz (outside of R) and open the out.file you created > ##D print(ic.out,digits=2) > ## End(Not run) > > > #test.data <- Harman74.cor$cov > #my.iclust <- ICLUST(test.data) > #ICLUST.graph(my.iclust) > # > # > #digraph ICLUST { > # rankdir=RL; > # size="8,8"; > # node [fontname="Helvetica" fontsize=14 shape=box, width=2]; > # edge [fontname="Helvetica" fontsize=12]; > # label = "ICLUST"; > # fontsize=20; > #V1 [label = VisualPerception]; > #V2 [label = Cubes]; > #V3 [label = PaperFormBoard]; > #V4 [label = Flags]; > #V5 [label = GeneralInformation]; > #V6 [label = PargraphComprehension]; > #V7 [label = SentenceCompletion]; > #V8 [label = WordClassification]; > #V9 [label = WordMeaning]; > #V10 [label = Addition]; > #V11 [label = Code]; > #V12 [label = CountingDots]; > #V13 [label = StraightCurvedCapitals]; > #V14 [label = WordRecognition]; > #V15 [label = NumberRecognition]; > #V16 [label = FigureRecognition]; > #V17 [label = ObjectNumber]; > #V18 [label = NumberFigure]; > #V19 [label = FigureWord]; > #V20 [label = Deduction]; > #V21 [label = NumericalPuzzles]; > #V22 [label = ProblemReasoning]; > #V23 [label = SeriesCompletion]; > #V24 [label = ArithmeticProblems]; > #node [shape=ellipse, width ="1"]; > #C1-> V9 [ label = 0.78 ]; > #C1-> V5 [ label = 0.78 ]; > #C2-> V12 [ label = 0.66 ]; > #C2-> V10 [ label = 0.66 ]; > #C3-> V18 [ label = 0.53 ]; > #C3-> V17 [ label = 0.53 ]; > #C4-> V23 [ label = 0.59 ]; > #C4-> V20 [ label = 0.59 ]; > #C5-> V13 [ label = 0.61 ]; > #C5-> V11 [ label = 0.61 ]; > #C6-> V7 [ label = 0.78 ]; > #C6-> V6 [ label = 0.78 ]; > #C7-> V4 [ label = 0.55 ]; > #C7-> V1 [ label = 0.55 ]; > #C8-> V16 [ label = 0.5 ]; > #C8-> V14 [ label = 0.49 ]; > #C9-> C1 [ label = 0.86 ]; > #C9-> C6 [ label = 0.86 ]; > #C10-> C4 [ label = 0.71 ]; > #C10-> V22 [ label = 0.62 ]; > #C11-> V21 [ label = 0.56 ]; > #C11-> V24 [ label = 0.58 ]; > #C12-> C10 [ label = 0.76 ]; > #C12-> C11 [ label = 0.67 ]; > #C13-> C8 [ label = 0.61 ]; > #C13-> V15 [ label = 0.49 ]; > #C14-> C2 [ label = 0.74 ]; > #C14-> C5 [ label = 0.72 ]; > #C15-> V3 [ label = 0.48 ]; > #C15-> C7 [ label = 0.65 ]; > #C16-> V19 [ label = 0.48 ]; > #C16-> C3 [ label = 0.64 ]; > #C17-> V8 [ label = 0.62 ]; > #C17-> C12 [ label = 0.8 ]; > #C18-> C17 [ label = 0.82 ]; > #C18-> C15 [ label = 0.68 ]; > #C19-> C16 [ label = 0.66 ]; > #C19-> C13 [ label = 0.65 ]; > #C20-> C19 [ label = 0.72 ]; > #C20-> C18 [ label = 0.83 ]; > #C21-> C20 [ label = 0.87 ]; > #C21-> C9 [ label = 0.76 ]; > #C22-> 0 [ 
label = 0 ]; > #C22-> 0 [ label = 0 ]; > #C23-> 0 [ label = 0 ]; > #C23-> 0 [ label = 0 ]; > #C1 [label = "C1\n alpha= 0.84\n beta= 0.84\nN= 2"] ; > #C2 [label = "C2\n alpha= 0.74\n beta= 0.74\nN= 2"] ; > #C3 [label = "C3\n alpha= 0.62\n beta= 0.62\nN= 2"] ; > #C4 [label = "C4\n alpha= 0.67\n beta= 0.67\nN= 2"] ; > #C5 [label = "C5\n alpha= 0.7\n beta= 0.7\nN= 2"] ; > #C6 [label = "C6\n alpha= 0.84\n beta= 0.84\nN= 2"] ; > #C7 [label = "C7\n alpha= 0.64\n beta= 0.64\nN= 2"] ; > #C8 [label = "C8\n alpha= 0.58\n beta= 0.58\nN= 2"] ; > #C9 [label = "C9\n alpha= 0.9\n beta= 0.87\nN= 4"] ; > #C10 [label = "C10\n alpha= 0.74\n beta= 0.71\nN= 3"] ; > #C11 [label = "C11\n alpha= 0.62\n beta= 0.62\nN= 2"] ; > #C12 [label = "C12\n alpha= 0.79\n beta= 0.74\nN= 5"] ; > #C13 [label = "C13\n alpha= 0.64\n beta= 0.59\nN= 3"] ; > #C14 [label = "C14\n alpha= 0.79\n beta= 0.74\nN= 4"] ; > #C15 [label = "C15\n alpha= 0.66\n beta= 0.58\nN= 3"] ; > #C16 [label = "C16\n alpha= 0.65\n beta= 0.57\nN= 3"] ; > #C17 [label = "C17\n alpha= 0.81\n beta= 0.71\nN= 6"] ; > #C18 [label = "C18\n alpha= 0.84\n beta= 0.75\nN= 9"] ; > #C19 [label = "C19\n alpha= 0.74\n beta= 0.65\nN= 6"] ; > #C20 [label = "C20\n alpha= 0.87\n beta= 0.74\nN= 15"] ; > #C21 [label = "C21\n alpha= 0.9\n beta= 0.77\nN= 19"] ; > #C22 [label = "C22\n alpha= 0\n beta= 0\nN= 0"] ; > #C23 [label = "C23\n alpha= 0\n beta= 0\nN= 0"] ; > #{ rank=same; > #V1;V2;V3;V4;V5;V6;V7;V8;V9;V10;V11;V12;V13;V14;V15;V16;V17;V18;V19;V20;V21;V22;V23;V24;}} > # > #copy the above output to Graphviz and draw it > #see \url{https://personality-project.org/r/r.ICLUST.html} for an example. > > > > > cleanEx() > nameEx("ICLUST.rgraph") > ### * ICLUST.rgraph > > flush(stderr()); flush(stdout()) > > ### Name: ICLUST.rgraph > ### Title: Draw an ICLUST graph using the Rgraphviz package > ### Aliases: ICLUST.rgraph > ### Keywords: multivariate cluster hplot > > ### ** Examples > > test.data <- Harman74.cor$cov > ic.out <- ICLUST(test.data) #uses iclust.diagram instead > > > > cleanEx() > nameEx("KMO") > ### * KMO > > flush(stderr()); flush(stdout()) > > ### Name: KMO > ### Title: Find the Kaiser, Meyer, Olkin Measure of Sampling Adequacy > ### Aliases: KMO > ### Keywords: multivariate models > > ### ** Examples > > KMO(Thurstone) Kaiser-Meyer-Olkin factor adequacy Call: KMO(r = Thurstone) Overall MSA = 0.88 MSA for each item = Sentences Vocabulary Sent.Completion First.Letters 0.86 0.86 0.90 0.86 Four.Letter.Words Suffixes Letter.Series Pedigrees 0.88 0.92 0.85 0.93 Letter.Group 0.87 > KMO(Harman.political) #compare to the results in Dziuban and Shirkey (1974) Kaiser-Meyer-Olkin factor adequacy Call: KMO(r = Harman.political) Overall MSA = 0.81 MSA for each item = Lewis Roosevelt Party Voting Median Rental Homeownership 0.73 0.76 0.84 0.87 0.53 Unemployment Mobility Education 0.93 0.78 0.86 > > > > > cleanEx() > nameEx("Pinv") > ### * Pinv > > flush(stderr()); flush(stdout()) > > ### Name: Pinv > ### Title: Compute the Moore-Penrose Pseudo Inverse of a matrix > ### Aliases: Pinv > ### Keywords: multivariate > > ### ** Examples > > round(Pinv(Thurstone) %*% Thurstone,2) #an identity matrix Sentences Vocabulary Sent.Completion First.Letters Four.Letter.Words [1,] 1 0 0 0 0 [2,] 0 1 0 0 0 [3,] 0 0 1 0 0 [4,] 0 0 0 1 0 [5,] 0 0 0 0 1 [6,] 0 0 0 0 0 [7,] 0 0 0 0 0 [8,] 0 0 0 0 0 [9,] 0 0 0 0 0 Suffixes Letter.Series Pedigrees Letter.Group [1,] 0 0 0 0 [2,] 0 0 0 0 [3,] 0 0 0 0 [4,] 0 0 0 0 [5,] 0 0 0 0 [6,] 1 0 0 0 [7,] 0 1 0 0 [8,] 0 0 1 0 [9,] 0 0 0 1 > sl <- schmid(Thurstone,3) #The 
schmid-leiman solution is less than full rank
Loading required namespace: GPArotation
> F <- sl$sl[,1:4] #the SL solution is general + 3 groups
> R <- Thurstone #
> diag(R) <- sl$sl[,5] #the reproduced matrix (R - U2)
> S <- t(Pinv(t(F) %*% F) %*% t(F) %*% R) #the structure matrix
> Phi <- t(S) %*% F %*% Pinv(t(F) %*% F) #the factor covariances
> 
> 
> 
> 
> cleanEx()
> nameEx("Promax")
> ### * Promax
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: Promax
> ### Title: Perform Procrustes, bifactor, promax or targeted rotations and
> ###   return the inter factor angles.
> ### Aliases: Promax faRotate Procrustes TargetQ TargetT target.rot bifactor
> ###   biquartimin varimin vgQ.bimin vgQ.targetQ vgQ.varimin equamax
> ### Keywords: multivariate models
> 
> ### ** Examples
> 
> jen <- sim.hierarchical()
> f3 <- fa(jen,3,rotate="varimax")
> f3 #not a very clean solution
Factor Analysis using method =  minres
Call: fa(r = jen, nfactors = 3, rotate = "varimax")
Standardized loadings (pattern matrix) based upon correlation matrix
    MR1  MR3  MR2   h2   u2 com
V1 0.70 0.30 0.26 0.64 0.36 1.6
V2 0.61 0.26 0.22 0.49 0.51 1.6
V3 0.52 0.22 0.19 0.36 0.64 1.6
V4 0.24 0.63 0.18 0.49 0.51 1.5
V5 0.21 0.54 0.16 0.36 0.64 1.5
V6 0.17 0.45 0.13 0.25 0.75 1.5
V7 0.17 0.15 0.56 0.36 0.64 1.3
V8 0.14 0.12 0.46 0.25 0.75 1.3
V9 0.11 0.10 0.37 0.16 0.84 1.3

                       MR1  MR3  MR2
SS loadings           1.32 1.15 0.89
Proportion Var        0.15 0.13 0.10
Cumulative Var        0.15 0.27 0.37
Proportion Explained  0.39 0.34 0.27
Cumulative Proportion 0.39 0.73 1.00

Mean item complexity =  1.5
Test of the hypothesis that 3 factors are sufficient.

The degrees of freedom for the null model are 36 and the objective function was 1.71
The degrees of freedom for the model are 12 and the objective function was 0

The root mean square of the residuals (RMSR) is 0
The df corrected root mean square of the residuals is 0

Fit based upon off diagonal values = 1
Measures of factor score adequacy
                                                 MR1  MR3   MR2
Correlation of (regression) scores with factors 0.78 0.73  0.67
Multiple R square of scores with factors        0.61 0.53  0.45
Minimum correlation of possible factor scores   0.22 0.07 -0.10
> Promax(f3) #this obliquely rotates, but from the varimax target
Call: NULL
Standardized loadings (pattern matrix) based upon correlation matrix
    MR1  MR3  MR2   h2   u2
V1 0.76 0.04 0.03 0.64 0.36
V2 0.66 0.03 0.02 0.49 0.51
V3 0.57 0.03 0.02 0.36 0.64
V4 0.01 0.69 0.01 0.49 0.51
V5 0.01 0.59 0.01 0.36 0.64
V6 0.01 0.49 0.01 0.25 0.75
V7 0.00 0.00 0.60 0.36 0.64
V8 0.00 0.00 0.50 0.25 0.75
V9 0.00 0.00 0.40 0.16 0.84

                       MR1  MR3  MR2
SS loadings           1.43 1.13 0.80
Proportion Var        0.16 0.13 0.09
Cumulative Var        0.16 0.28 0.37
Proportion Explained  0.42 0.34 0.24
Cumulative Proportion 0.42 0.76 1.00

     MR1  MR3  MR2
MR1 1.00 0.68 0.60
MR3 0.68 1.00 0.55
MR2 0.60 0.55 1.00
> target.rot(f3) #this obliquely rotates towards a simple structure target
Call: NULL
Standardized loadings (pattern matrix) based upon correlation matrix
   MR1 MR3 MR2   h2   u2
V1 0.8 0.0 0.0 0.64 0.36
V2 0.7 0.0 0.0 0.49 0.51
V3 0.6 0.0 0.0 0.36 0.64
V4 0.0 0.7 0.0 0.49 0.51
V5 0.0 0.6 0.0 0.36 0.64
V6 0.0 0.5 0.0 0.25 0.75
V7 0.0 0.0 0.6 0.36 0.64
V8 0.0 0.0 0.5 0.25 0.75
V9 0.0 0.0 0.4 0.16 0.84

                       MR1  MR3  MR2
SS loadings           1.49 1.10 0.77
Proportion Var        0.17 0.12 0.09
Cumulative Var        0.17 0.29 0.37
Proportion Explained  0.44 0.33 0.23
Cumulative Proportion 0.44 0.77 1.00

     MR1  MR3  MR2
MR1 1.00 0.72 0.63
MR3 0.72 1.00 0.56
MR2 0.63 0.56 1.00
> 
> #compare this rotation with the solution from a targeted rotation aimed for
> #an independent cluster solution
> #now try a
bifactor solution > fb <-fa(jen,3,rotate="bifactor") > fq <- fa(jen,3,rotate="biquartimin") > #Suitbert Ertel has suggested varimin > fm <- fa(jen,3,rotate="varimin") #the Ertel varimin > fn <- fa(jen,3,rotate="none") #just the unrotated factors > #compare them > factor.congruence(list(f3,fb,fq,fm,fn)) MR1 MR3 MR2 MR1 MR3 MR2 MR1 MR2 MR3 MR1 MR2 MR3 MR1 MR1 1.00 0.71 0.66 0.87 0.93 -0.02 0.92 -0.01 0.93 0.87 0.78 0.63 0.92 MR3 0.71 1.00 0.61 0.92 0.42 -0.36 0.89 -0.34 0.42 0.88 0.75 -0.08 0.89 MR2 0.66 0.61 1.00 0.84 0.41 0.52 0.82 0.54 0.41 0.88 0.16 0.40 0.82 MR1 0.87 0.92 0.84 1.00 0.61 0.00 0.99 0.01 0.61 1.00 0.65 0.27 0.99 MR3 0.93 0.42 0.41 0.61 1.00 0.00 0.71 0.00 1.00 0.62 0.73 0.80 0.71 MR2 -0.02 -0.36 0.52 0.00 0.00 1.00 -0.01 1.00 0.00 0.09 -0.63 0.53 0.00 MR1 0.92 0.89 0.82 0.99 0.71 -0.01 1.00 0.01 0.71 0.99 0.70 0.37 1.00 MR2 -0.01 -0.34 0.54 0.01 0.00 1.00 0.01 1.00 0.00 0.11 -0.63 0.52 0.01 MR3 0.93 0.42 0.41 0.61 1.00 0.00 0.71 0.00 1.00 0.62 0.73 0.80 0.71 MR1 0.87 0.88 0.88 1.00 0.62 0.09 0.99 0.11 0.62 1.00 0.59 0.33 0.99 MR2 0.78 0.75 0.16 0.65 0.73 -0.63 0.70 -0.63 0.73 0.59 1.00 0.18 0.70 MR3 0.63 -0.08 0.40 0.27 0.80 0.53 0.37 0.52 0.80 0.33 0.18 1.00 0.37 MR1 0.92 0.89 0.82 0.99 0.71 0.00 1.00 0.01 0.71 0.99 0.70 0.37 1.00 MR2 -0.01 -0.36 0.52 0.00 0.02 1.00 0.00 1.00 0.02 0.09 -0.62 0.55 0.00 MR3 -0.38 0.28 0.25 0.13 -0.70 0.02 0.00 0.03 -0.70 0.12 -0.35 -0.75 0.00 MR2 MR3 MR1 -0.01 -0.38 MR3 -0.36 0.28 MR2 0.52 0.25 MR1 0.00 0.13 MR3 0.02 -0.70 MR2 1.00 0.02 MR1 0.00 0.00 MR2 1.00 0.03 MR3 0.02 -0.70 MR1 0.09 0.12 MR2 -0.62 -0.35 MR3 0.55 -0.75 MR1 0.00 0.00 MR2 1.00 0.00 MR3 0.00 1.00 > # compare an oblimin with a target rotation using the Browne algorithm > #note that we are changing the factor #order (this is for demonstration only) > Targ <- make.keys(9,list(f1=1:3,f2=7:9,f3=4:6)) > Targ <- scrub(Targ,isvalue=1) #fix the 0s, allow the NAs to be estimated > Targ <- list(Targ) #input must be a list > #show the target > Targ [[1]] f1 f2 f3 [1,] NA 0 0 [2,] NA 0 0 [3,] NA 0 0 [4,] 0 0 NA [5,] 0 0 NA [6,] 0 0 NA [7,] 0 NA 0 [8,] 0 NA 0 [9,] 0 NA 0 > fa(Thurstone,3,rotate="TargetQ",Target=Targ) #targeted oblique rotation Factor Analysis using method = minres Call: fa(r = Thurstone, nfactors = 3, rotate = "TargetQ", Target = Targ) Standardized loadings (pattern matrix) based upon correlation matrix MR1 MR3 MR2 h2 u2 com Sentences 0.86 -0.01 0.08 0.82 0.18 1.0 Vocabulary 0.85 0.10 0.00 0.84 0.16 1.0 Sent.Completion 0.80 0.06 0.04 0.74 0.26 1.0 First.Letters -0.01 0.86 0.00 0.73 0.27 1.0 Four.Letter.Words -0.04 0.75 0.11 0.63 0.37 1.0 Suffixes 0.17 0.64 -0.08 0.50 0.50 1.2 Letter.Series -0.05 -0.07 0.92 0.73 0.27 1.0 Pedigrees 0.32 -0.07 0.52 0.51 0.49 1.7 Letter.Group -0.12 0.17 0.68 0.52 0.48 1.2 MR1 MR3 MR2 SS loadings 2.41 1.88 1.71 Proportion Var 0.27 0.21 0.19 Cumulative Var 0.27 0.48 0.67 Proportion Explained 0.40 0.31 0.28 Cumulative Proportion 0.40 0.72 1.00 With factor correlations of MR1 MR3 MR2 MR1 1.00 0.57 0.58 MR3 0.57 1.00 0.59 MR2 0.58 0.59 1.00 Mean item complexity = 1.1 Test of the hypothesis that 3 factors are sufficient. 
The degrees of freedom for the null model are 36 and the objective function was 5.2 The degrees of freedom for the model are 12 and the objective function was 0.01 The root mean square of the residuals (RMSR) is 0.01 The df corrected root mean square of the residuals is 0.01 Fit based upon off diagonal values = 1 Measures of factor score adequacy MR1 MR3 MR2 Correlation of (regression) scores with factors 0.96 0.92 0.92 Multiple R square of scores with factors 0.92 0.86 0.84 Minimum correlation of possible factor scores 0.84 0.71 0.69 > #compare with oblimin > f3 <- fa(Thurstone,3) > #now try a targeted orthogonal rotation > Targ <- make.keys(9,list(f1=1:3,f2=7:9,f3=4:6)) > faRotate(f3$loadings,rotate="TargetT",Target=list(Targ)) #orthogonal Call: NULL Standardized loadings (pattern matrix) based upon correlation matrix MR1 MR2 MR3 h2 u2 Sentences 0.84 0.32 -0.06 0.81 0.19 Vocabulary 0.79 0.41 0.01 0.80 0.20 Sent.Completion 0.76 0.35 -0.01 0.70 0.30 First.Letters -0.08 0.28 0.80 0.73 0.27 Four.Letter.Words -0.05 0.15 0.74 0.57 0.43 Suffixes 0.07 0.35 0.55 0.43 0.57 Letter.Series 0.36 -0.71 0.28 0.71 0.29 Pedigrees 0.54 -0.26 0.09 0.36 0.64 Letter.Group 0.17 -0.49 0.42 0.44 0.56 MR1 MR2 MR3 SS loadings 2.39 1.42 1.75 Proportion Var 0.27 0.16 0.19 Cumulative Var 0.27 0.42 0.62 Proportion Explained 0.43 0.26 0.31 Cumulative Proportion 0.43 0.69 1.00 > > > > > > cleanEx() > nameEx("SD") > ### * SD > > flush(stderr()); flush(stdout()) > > ### Name: SD > ### Title: Find the Standard deviation for a vector, matrix, or data.frame > ### - do not return error if there are no cases > ### Aliases: SD > ### Keywords: models > > ### ** Examples > > data(attitude) > apply(attitude,2,sd) #all complete rating complaints privileges learning raises critical advance 12.172562 13.314757 12.235430 11.737013 10.397226 9.894908 10.288706 > attitude[,1] <- NA > SD(attitude) #missing a column rating complaints privileges learning raises critical advance NA 13.314757 12.235430 11.737013 10.397226 9.894908 10.288706 > describe(attitude) Warning in FUN(newX[, i], ...) : no non-missing arguments to min; returning Inf Warning in FUN(newX[, i], ...) 
: no non-missing arguments to max; returning -Inf vars n mean sd median trimmed mad min max range skew rating 1 0 NaN NA NA NaN NA Inf -Inf -Inf NA complaints 2 30 66.60 13.31 65.0 67.08 14.83 37 90 53 -0.22 privileges 3 30 53.13 12.24 51.5 52.75 10.38 30 83 53 0.38 learning 4 30 56.37 11.74 56.5 56.58 14.83 34 75 41 -0.05 raises 5 30 64.63 10.40 63.5 64.50 11.12 43 88 45 0.20 critical 6 30 74.77 9.89 77.5 75.83 7.41 49 92 43 -0.87 advance 7 30 42.93 10.29 41.0 41.83 8.90 25 72 47 0.85 kurtosis se rating NA NA complaints -0.68 2.43 privileges -0.41 2.23 learning -1.22 2.14 raises -0.60 1.90 critical 0.17 1.81 advance 0.47 1.88 > > > > cleanEx() > nameEx("Schmid.Leiman") > ### * Schmid.Leiman > > flush(stderr()); flush(stdout()) > > ### Name: Schmid > ### Title: 12 variables created by Schmid and Leiman to show the > ### Schmid-Leiman Transformation > ### Aliases: Schmid schmid.leiman West Chen > ### Keywords: datasets > > ### ** Examples > > data(Schmid) > cor.plot(Schmid,TRUE) > print(fa(Schmid,6,rotate="oblimin"),cut=0) #shows an oblique solution Factor Analysis using method = minres Call: fa(r = Schmid, nfactors = 6, rotate = "oblimin") Standardized loadings (pattern matrix) based upon correlation matrix MR1 MR4 MR6 MR2 MR3 MR5 h2 u2 com V1 0.80 0.00 0.00 0.00 0.00 0.00 0.637 0.36 1 V2 0.90 0.00 0.00 0.00 0.00 0.00 0.814 0.19 1 V3 0.00 0.00 0.70 0.00 0.00 0.00 0.485 0.52 1 V4 0.00 0.00 0.61 0.00 0.00 0.00 0.364 0.64 1 V5 0.00 0.00 0.00 0.00 0.77 0.00 0.599 0.40 1 V6 0.00 0.00 0.00 0.00 0.42 -0.01 0.171 0.83 1 V7 0.00 0.00 0.00 0.00 0.00 0.77 0.589 0.41 1 V8 0.00 0.00 0.00 0.00 0.01 0.18 0.033 0.97 1 V9 0.02 0.78 0.01 0.02 0.00 0.01 0.635 0.36 1 V10 -0.04 0.60 -0.02 -0.03 -0.01 -0.01 0.326 0.67 1 V11 0.00 0.00 0.00 0.60 0.00 0.00 0.363 0.64 1 V12 0.00 0.00 0.00 0.70 0.00 0.00 0.486 0.51 1 MR1 MR4 MR6 MR2 MR3 MR5 SS loadings 1.45 0.96 0.85 0.85 0.77 0.62 Proportion Var 0.12 0.08 0.07 0.07 0.06 0.05 Cumulative Var 0.12 0.20 0.27 0.34 0.41 0.46 Proportion Explained 0.26 0.17 0.15 0.15 0.14 0.11 Cumulative Proportion 0.26 0.44 0.59 0.75 0.89 1.00 With factor correlations of MR1 MR4 MR6 MR2 MR3 MR5 MR1 1.00 0.43 0.56 0.14 0.16 0.21 MR4 0.43 1.00 0.38 0.29 0.17 0.22 MR6 0.56 0.38 1.00 0.12 0.14 0.19 MR2 0.14 0.29 0.12 1.00 0.05 0.07 MR3 0.16 0.17 0.14 0.05 1.00 0.23 MR5 0.21 0.22 0.19 0.07 0.23 1.00 Mean item complexity = 1 Test of the hypothesis that 6 factors are sufficient. 
The degrees of freedom for the null model are 66 and the objective function was 1.9 The degrees of freedom for the model are 9 and the objective function was 0 The root mean square of the residuals (RMSR) is 0 The df corrected root mean square of the residuals is 0 Fit based upon off diagonal values = 1 Measures of factor score adequacy MR1 MR4 MR6 MR2 MR3 MR5 Correlation of (regression) scores with factors 0.93 0.84 0.81 0.78 0.80 0.78 Multiple R square of scores with factors 0.87 0.71 0.66 0.61 0.64 0.61 Minimum correlation of possible factor scores 0.73 0.43 0.32 0.23 0.27 0.22 > round(cov2cor(schmid.leiman),2) V1 V2 V3 V4 V5 V6 V7 V8 V9 V10 V11 V12 V1 1.00 1.00 0.56 0.56 0.15 0.15 0.23 0.23 0.40 0.40 0.13 0.13 V2 1.00 1.00 0.56 0.56 0.15 0.15 0.23 0.23 0.40 0.40 0.13 0.13 V3 0.56 0.56 1.00 1.00 0.13 0.13 0.20 0.20 0.35 0.35 0.12 0.12 V4 0.56 0.56 1.00 1.00 0.13 0.13 0.20 0.20 0.35 0.35 0.12 0.12 V5 0.15 0.15 0.13 0.13 1.00 1.00 0.24 0.24 0.15 0.15 0.05 0.05 V6 0.15 0.15 0.13 0.13 1.00 1.00 0.24 0.24 0.15 0.15 0.05 0.05 V7 0.23 0.23 0.20 0.20 0.24 0.24 1.00 1.00 0.23 0.23 0.08 0.08 V8 0.23 0.23 0.20 0.20 0.24 0.24 1.00 1.00 0.23 0.23 0.08 0.08 V9 0.40 0.40 0.35 0.35 0.15 0.15 0.23 0.23 1.00 1.00 0.27 0.27 V10 0.40 0.40 0.35 0.35 0.15 0.15 0.23 0.23 1.00 1.00 0.27 0.27 V11 0.13 0.13 0.12 0.12 0.05 0.05 0.08 0.08 0.27 0.27 1.00 1.00 V12 0.13 0.13 0.12 0.12 0.05 0.05 0.08 0.08 0.27 0.27 1.00 1.00 > cor.plot(cov2cor(West),TRUE) > > > > cleanEx() > nameEx("Tucker") > ### * Tucker > > flush(stderr()); flush(stdout()) > > ### Name: Tucker > ### Title: 9 Cognitive variables discussed by Tucker and Lewis (1973) > ### Aliases: Tucker > ### Keywords: datasets > > ### ** Examples > > data(Tucker) > fa(Tucker,2,n.obs=710) Factor Analysis using method = minres Call: fa(r = Tucker, nfactors = 2, n.obs = 710) Standardized loadings (pattern matrix) based upon correlation matrix MR1 MR2 h2 u2 com t42 -0.04 0.71 0.48 0.52 1.0 t54 0.02 0.71 0.51 0.49 1.0 t45 0.92 -0.04 0.82 0.18 1.0 t46 0.87 -0.06 0.71 0.29 1.0 t23 -0.02 0.71 0.50 0.50 1.0 t24 0.00 0.74 0.55 0.45 1.0 t27 0.11 0.59 0.42 0.58 1.1 t10 0.76 0.13 0.68 0.32 1.1 t51 0.78 0.05 0.65 0.35 1.0 MR1 MR2 SS loadings 2.84 2.47 Proportion Var 0.32 0.27 Cumulative Var 0.32 0.59 Proportion Explained 0.54 0.46 Cumulative Proportion 0.54 1.00 With factor correlations of MR1 MR2 MR1 1.00 0.43 MR2 0.43 1.00 Mean item complexity = 1 Test of the hypothesis that 2 factors are sufficient. The degrees of freedom for the null model are 36 and the objective function was 4.49 with Chi Square of 3165.93 The degrees of freedom for the model are 19 and the objective function was 0.07 The root mean square of the residuals (RMSR) is 0.02 The df corrected root mean square of the residuals is 0.03 The harmonic number of observations is 710 with the empirical chi square 22.23 with prob < 0.27 The total number of observations was 710 with Likelihood Chi Square = 50.45 with prob < 0.00011 Tucker Lewis Index of factoring reliability = 0.981 RMSEA index = 0.048 and the 90 % confidence intervals are 0.032 0.065 BIC = -74.29 Fit based upon off diagonal values = 1 Measures of factor score adequacy MR1 MR2 Correlation of (regression) scores with factors 0.96 0.91 Multiple R square of scores with factors 0.92 0.84 Minimum correlation of possible factor scores 0.83 0.67 > omega(Tucker,2) Three factors are required for identification -- general factor loadings set to be equal. Proceed with caution. Think about redoing the analysis with alternative values of the 'option' setting. 
Omega Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, digits = digits, title = title, sl = sl, labels = labels, plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, covar = covar) Alpha: 0.86 G.6: 0.88 Omega Hierarchical: 0.54 Omega H asymptotic: 0.6 Omega Total 0.9 Schmid Leiman Factor loadings greater than 0.2 g F1* F2* h2 u2 p2 t42 0.44 0.53 0.48 0.52 0.40 t54 0.48 0.54 0.51 0.49 0.44 t45 0.58 0.69 0.82 0.18 0.41 t46 0.53 0.65 0.71 0.29 0.39 t23 0.46 0.54 0.50 0.50 0.42 t24 0.48 0.56 0.55 0.45 0.43 t27 0.46 0.45 0.42 0.58 0.51 t10 0.58 0.57 0.68 0.32 0.50 t51 0.55 0.59 0.65 0.35 0.46 With eigenvalues of: g F1* F2* 2.3 1.6 1.4 general/max 1.46 max/min = 1.15 mean percent general = 0.44 with sd = 0.04 and cv of 0.1 Explained Common Variance of the general factor = 0.44 The degrees of freedom are 19 and the fit is 0.07 The root mean square of the residuals is 0.02 The df corrected root mean square of the residuals is 0.03 Compare this with the adequacy of just a general factor and no group factors The degrees of freedom for just the general factor are 27 and the fit is 1.94 The root mean square of the residuals is 0.22 The df corrected root mean square of the residuals is 0.25 Measures of factor score adequacy g F1* F2* Correlation of scores with factors 0.74 0.78 0.74 Multiple R square of scores with factors 0.55 0.60 0.55 Minimum correlation of factor score estimates 0.10 0.21 0.10 Total, General and Subset omega for each subset g F1* F2* Omega total for total scores and subscales 0.90 0.91 0.83 Omega general for total scores and subscales 0.54 0.40 0.37 Omega group for total scores and subscales 0.34 0.51 0.46 > > > > cleanEx() > nameEx("VSS") > ### * VSS > > flush(stderr()); flush(stdout()) > > ### Name: VSS > ### Title: Apply the Very Simple Structure, MAP, and other criteria to > ### determine the appropriate number of factors. 
> ### Aliases: vss VSS MAP nfactors
> ### Keywords: multivariate models
> 
> ### ** Examples
> 
> 
> #test.data <- Harman74.cor$cov
> #my.vss <- VSS(test.data,title="VSS of 24 mental tests")
> #print(my.vss[,1:12],digits =2)
> #VSS.plot(my.vss, title="VSS of 24 mental tests")
> 
> #now, some simulated data with two factors
> #VSS(sim.circ(nvar=24),fm="minres" ,title="VSS of 24 circumplex variables")
> VSS(sim.item(nvar=24),fm="minres" ,title="VSS of 24 simple structure variables")

Very Simple Structure of  VSS of 24 simple structure variables
Call: vss(x = x, n = n, rotate = rotate, diagonal = diagonal, fm = fm,
    n.obs = n.obs, plot = plot, title = title, use = use, cor = cor)
VSS complexity 1 achieves a maximum of 0.84  with  4  factors
VSS complexity 2 achieves a maximum of 0.87  with  7  factors

The Velicer MAP achieves a minimum of 0.01  with  2  factors
BIC achieves a minimum of  -1156.52  with  2  factors
Sample Size adjusted BIC achieves a minimum of  -429.66  with  2  factors

Statistics by number of factors
  vss1 vss2    map dof chisq     prob sqresid  fit  RMSEA   BIC SABIC complex
1 0.48 0.00 0.0480 252  1949 1.3e-259    32.1 0.48 0.1160   383  1183     1.0
2 0.84 0.84 0.0062 229   267  4.4e-02     9.8 0.84 0.0180 -1157  -430     1.0
3 0.84 0.85 0.0085 207   224  2.0e-01     9.3 0.85 0.0126 -1062  -405     1.1
4 0.84 0.85 0.0109 186   189  4.2e-01     8.8 0.86 0.0056  -967  -376     1.1
5 0.79 0.86 0.0138 166   157  6.8e-01     8.3 0.87 0.0000  -875  -348     1.2
6 0.79 0.86 0.0170 147   132  8.1e-01     7.9 0.87 0.0000  -782  -315     1.3
7 0.74 0.87 0.0204 129   105  9.4e-01     7.5 0.88 0.0000  -696  -287     1.3
8 0.78 0.87 0.0242 112    90  9.4e-01     7.3 0.88 0.0000  -606  -250     1.4
  eChisq  SRMR eCRMS  eBIC
1   8179 0.172 0.180  6613
2    205 0.027 0.030 -1218
3    165 0.024 0.028 -1121
4    133 0.022 0.027 -1023
5    109 0.020 0.026  -923
6     89 0.018 0.025  -825
7     71 0.016 0.023  -731
8     58 0.014 0.023  -638
> 
> 
> 
> cleanEx()
> nameEx("VSS.parallel")
> ### * VSS.parallel
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: VSS.parallel
> ### Title: Compare real and random VSS solutions
> ### Aliases: VSS.parallel
> ### Keywords: models
> 
> ### ** Examples
> 
> #VSS.plot(VSS.parallel(200,24))
> 
> 
> 
> cleanEx()
> nameEx("VSS.plot")
> ### * VSS.plot
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: VSS.plot
> ### Title: Plot VSS fits
> ### Aliases: VSS.plot
> ### Keywords: multivariate models
> 
> ### ** Examples
> 
> test.data <- Harman74.cor$cov
> my.vss <- VSS(test.data) #suggests that a 4 factor complexity two solution is optimal
n.obs was not specified and was arbitrarily set to 1000. This only affects the chi square values.
> VSS.plot(my.vss,title="VSS of Holzinger-Harmon problem") #see the graphics window
> 
> 
> 
> 
> 
> cleanEx()
> nameEx("VSS.scree")
> ### * VSS.scree
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: VSS.scree
> ### Title: Plot the successive eigen values for a scree test
> ### Aliases: VSS.scree scree
> ### Keywords: multivariate hplot
> 
> ### ** Examples
> 
> scree(attitude)
> #VSS.scree(cor(attitude))
> 
> 
> 
> 
> 
> cleanEx()
> nameEx("Yule")
> ### * Yule
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: Yule
> ### Title: From a two by two table, find the Yule coefficients of
> ###   association, convert to phi or tetrachoric, or recreate the table
> ###   from the Yule coefficient.
> ### Aliases: Yule Yule.inv Yule2phi Yule2tetra Yule2poly YuleBonett YuleCor > ### Keywords: multivariate models > > ### ** Examples > > Nach <- matrix(c(40,10,20,50),ncol=2,byrow=TRUE) > Yule(Nach) [1] 0.8181818 > Yule.inv(.81818,c(50,60),n=120) [,1] [,2] [1,] 39.99994 10.00006 [2,] 20.00006 49.99994 > Yule2phi(.81818,c(50,60),n=120) [1] 0.5070905 > Yule2tetra(.81818,c(50,60),n=120) [1] 0.7230638 > phi(Nach) #much less [1] 0.51 > #or express as percents and do not specify n > Nach <- matrix(c(40,10,20,50),ncol=2,byrow=TRUE) > Nach/120 [,1] [,2] [1,] 0.3333333 0.08333333 [2,] 0.1666667 0.41666667 > Yule(Nach) [1] 0.8181818 > Yule.inv(.81818,c(.41667,.5)) [,1] [,2] [1,] 0.3333391 0.08333086 [2,] 0.1666609 0.41666914 > Yule2phi(.81818,c(.41667,.5)) [1] 0.5071088 > Yule2tetra(.81818,c(.41667,.5)) [1] 0.7230836 > phi(Nach) #much less [1] 0.51 > YuleCor(psychTools::ability[,1:4],,TRUE) Yule and Generalized Yule coefficients Call: YuleCor(x = psychTools::ability[, 1:4], bonett = TRUE) Yule coefficient reason.4 reason.16 reason.17 reason.19 reason.4 1.00 0.28 0.40 0.30 reason.16 0.28 1.00 0.32 0.25 reason.17 0.40 0.32 1.00 0.33 reason.19 0.30 0.25 0.33 1.00 Upper and Lower Confidence Intervals = reason.4 reason.16 reason.17 reason.19 reason.4 1.00 0.33 0.44 0.35 reason.16 0.23 1.00 0.37 0.30 reason.17 0.35 0.27 1.00 0.38 reason.19 0.25 0.20 0.28 1.00 > YuleBonett(Nach,1) #Yule Q Yule and Generalized Yule coefficients Lower CI Yule coefficient Upper CI [1] 0.61 0.82 0.92 > YuleBonett(Nach,.5) #Yule Y Yule and Generalized Yule coefficients Lower CI Yule coefficient Upper CI [1] 0.34 0.52 0.66 > YuleBonett(Nach,.75) #Digby H Yule and Generalized Yule coefficients Lower CI Yule coefficient Upper CI [1] 0.49 0.70 0.83 > YuleBonett(Nach,,TRUE) #Yule* is a generalized Yule Yule and Generalized Yule coefficients Lower CI Yule coefficient Upper CI [1] 0.34 0.51 0.65 > > > > > cleanEx() > nameEx("alpha") > ### * alpha > > flush(stderr()); flush(stdout()) > > ### Name: alpha > ### Title: Find two estimates of reliability: Cronbach's alpha and > ### Guttman's Lambda 6. 
> ### Aliases: alpha alpha.scale alpha.ci > ### Keywords: models multivariate > > ### ** Examples > > set.seed(42) #keep the same starting values > #four congeneric measures > r4 <- sim.congeneric() > alpha(r4) Reliability analysis Call: alpha(x = r4) raw_alpha std.alpha G6(smc) average_r S/N median_r 0.74 0.74 0.69 0.42 2.9 0.41 95% confidence boundaries lower alpha upper Feldt -0.31 0.74 0.98 Reliability if an item is dropped: raw_alpha std.alpha G6(smc) average_r S/N var.r med.r V1 0.62 0.62 0.53 0.36 1.7 0.0036 0.35 V2 0.66 0.66 0.57 0.39 1.9 0.0081 0.40 V3 0.70 0.70 0.62 0.44 2.3 0.0120 0.40 V4 0.74 0.74 0.66 0.49 2.8 0.0049 0.48 Item statistics r r.cor r.drop V1 0.81 0.74 0.64 V2 0.78 0.67 0.57 V3 0.73 0.59 0.51 V4 0.68 0.50 0.43 > #nine hierarchical measures -- should actually use omega > r9 <- sim.hierarchical() > alpha(r9) Reliability analysis Call: alpha(x = r9) raw_alpha std.alpha G6(smc) average_r S/N median_r 0.76 0.76 0.76 0.26 3.2 0.25 95% confidence boundaries lower alpha upper Feldt 0.43 0.76 0.94 Reliability if an item is dropped: raw_alpha std.alpha G6(smc) average_r S/N var.r med.r V1 0.71 0.71 0.70 0.24 2.5 0.0067 0.22 V2 0.72 0.72 0.71 0.25 2.6 0.0085 0.23 V3 0.74 0.74 0.73 0.26 2.8 0.0101 0.25 V4 0.73 0.73 0.72 0.25 2.7 0.0106 0.23 V5 0.74 0.74 0.73 0.26 2.9 0.0112 0.24 V6 0.75 0.75 0.74 0.27 3.0 0.0113 0.25 V7 0.75 0.75 0.74 0.27 3.0 0.0129 0.25 V8 0.76 0.76 0.75 0.28 3.1 0.0118 0.26 V9 0.77 0.77 0.76 0.29 3.3 0.0099 0.28 Item statistics r r.cor r.drop V1 0.72 0.71 0.61 V2 0.67 0.63 0.54 V3 0.61 0.55 0.47 V4 0.65 0.59 0.51 V5 0.59 0.52 0.45 V6 0.53 0.43 0.38 V7 0.56 0.46 0.40 V8 0.50 0.39 0.34 V9 0.45 0.32 0.28 > > # examples of two independent factors that produce reasonable alphas > #this is a case where alpha is a poor indicator of unidimensionality > > two.f <- sim.item(8) > #specify which items to reverse key by name > alpha(two.f,keys=c("V3","V4","V5","V6")) Number of categories should be increased in order to count frequencies. Reliability analysis Call: alpha(x = two.f, keys = c("V3", "V4", "V5", "V6")) raw_alpha std.alpha G6(smc) average_r S/N ase mean sd median_r 0.58 0.58 0.62 0.15 1.4 0.029 0.072 0.51 0.051 95% confidence boundaries lower alpha upper Feldt 0.52 0.58 0.63 Duhachek 0.53 0.58 0.64 Reliability if an item is dropped: raw_alpha std.alpha G6(smc) average_r S/N alpha se var.r med.r V1 0.55 0.55 0.58 0.15 1.2 0.031 0.032 0.056 V2 0.53 0.53 0.57 0.14 1.1 0.033 0.034 0.029 V3- 0.55 0.55 0.58 0.15 1.2 0.031 0.031 0.056 V4- 0.55 0.54 0.58 0.15 1.2 0.032 0.032 0.056 V5- 0.56 0.56 0.59 0.15 1.3 0.031 0.030 0.056 V6- 0.56 0.56 0.59 0.15 1.3 0.031 0.030 0.056 V7 0.53 0.53 0.56 0.14 1.1 0.033 0.031 0.041 V8 0.56 0.57 0.59 0.16 1.3 0.030 0.030 0.047 Item statistics n raw.r std.r r.cor r.drop mean sd V1 500 0.50 0.50 0.38 0.28 0.0117 1.01 V2 500 0.55 0.55 0.46 0.34 -0.0018 1.00 V3- 500 0.50 0.50 0.39 0.27 0.1443 1.05 V4- 500 0.51 0.51 0.41 0.30 0.1502 0.99 V5- 500 0.48 0.48 0.36 0.26 0.1030 1.01 V6- 500 0.48 0.48 0.36 0.26 0.1128 1.00 V7 500 0.56 0.56 0.48 0.35 0.0222 1.00 V8 500 0.45 0.46 0.33 0.23 0.0320 0.96 > cov.two <- cov(two.f) > alpha(cov.two,check.keys=TRUE) Warning in alpha(cov.two, check.keys = TRUE) : Some items were negatively correlated with total scale and were automatically reversed. This is indicated by a negative sign for the variable name. 
Reliability analysis
Call: alpha(x = cov.two, check.keys = TRUE)

  raw_alpha std.alpha G6(smc) average_r S/N median_r
       0.58      0.58    0.62      0.15 1.4    0.051

    95% confidence boundaries
      lower alpha upper
Feldt -0.07  0.58   0.9

 Reliability if an item is dropped:
    raw_alpha std.alpha G6(smc) average_r S/N var.r med.r
V1       0.55      0.55    0.58      0.15 1.2 0.032 0.056
V2       0.53      0.53    0.57      0.14 1.1 0.034 0.029
V3-      0.55      0.55    0.58      0.15 1.2 0.031 0.056
V4-      0.55      0.54    0.58      0.15 1.2 0.032 0.056
V5-      0.56      0.56    0.59      0.15 1.3 0.030 0.056
V6-      0.56      0.56    0.59      0.15 1.3 0.030 0.056
V7       0.53      0.53    0.56      0.14 1.1 0.031 0.041
V8       0.56      0.57    0.59      0.16 1.3 0.030 0.047

 Item statistics
       r r.cor r.drop
V1  0.50  0.38   0.28
V2  0.55  0.46   0.34
V3- 0.50  0.39   0.27
V4- 0.51  0.41   0.30
V5- 0.48  0.36   0.26
V6- 0.48  0.36   0.26
V7  0.56  0.48   0.35
V8  0.46  0.33   0.23
> #automatic reversal based upon first component
> alpha(two.f,check.keys=TRUE) #note that the median is much less than the average r
Number of categories should be increased in order to count frequencies.
Warning in alpha(two.f, check.keys = TRUE) :
Some items were negatively correlated with total scale and were automatically reversed.
This is indicated by a negative sign for the variable name.
Reliability analysis
Call: alpha(x = two.f, check.keys = TRUE)

  raw_alpha std.alpha G6(smc) average_r S/N   ase  mean   sd median_r
       0.58      0.58    0.62      0.15 1.4 0.029 0.072 0.51    0.051

    95% confidence boundaries
         lower alpha upper
Feldt     0.52  0.58  0.63
Duhachek  0.53  0.58  0.64

 Reliability if an item is dropped:
    raw_alpha std.alpha G6(smc) average_r S/N alpha se var.r med.r
V1       0.55      0.55    0.58      0.15 1.2    0.031 0.032 0.056
V2       0.53      0.53    0.57      0.14 1.1    0.033 0.034 0.029
V3-      0.55      0.55    0.58      0.15 1.2    0.031 0.031 0.056
V4-      0.55      0.54    0.58      0.15 1.2    0.032 0.032 0.056
V5-      0.56      0.56    0.59      0.15 1.3    0.031 0.030 0.056
V6-      0.56      0.56    0.59      0.15 1.3    0.031 0.030 0.056
V7       0.53      0.53    0.56      0.14 1.1    0.033 0.031 0.041
V8       0.56      0.57    0.59      0.16 1.3    0.030 0.030 0.047

 Item statistics
      n raw.r std.r r.cor r.drop    mean   sd
V1  500  0.50  0.50  0.38   0.28  0.0117 1.01
V2  500  0.55  0.55  0.46   0.34 -0.0018 1.00
V3- 500  0.50  0.50  0.39   0.27  0.1443 1.05
V4- 500  0.51  0.51  0.41   0.30  0.1502 0.99
V5- 500  0.48  0.48  0.36   0.26  0.1030 1.01
V6- 500  0.48  0.48  0.36   0.26  0.1128 1.00
V7  500  0.56  0.56  0.48   0.35  0.0222 1.00
V8  500  0.45  0.46  0.33   0.23  0.0320 0.96
> #this suggests (correctly) that the 1 factor model is probably wrong
> #an example with discrete item responses -- show the frequencies
> items <- sim.congeneric(N=500,short=FALSE,low=-2,high=2,
+    categorical=TRUE) #500 responses to 4 discrete items with 5 categories
> a4 <- alpha(items$observed) #item response analysis of congeneric measures
> a4
Reliability analysis
Call: alpha(x = items$observed)

  raw_alpha std.alpha G6(smc) average_r S/N  ase   mean   sd median_r
       0.73      0.73    0.68       0.4 2.7 0.02 -0.013 0.76      0.4

    95% confidence boundaries
         lower alpha upper
Feldt     0.69  0.73  0.76
Duhachek  0.69  0.73  0.77

 Reliability if an item is dropped:
   raw_alpha std.alpha G6(smc) average_r S/N alpha se  var.r med.r
V1      0.61      0.61    0.52      0.34 1.6    0.031 0.0063  0.32
V2      0.64      0.64    0.55      0.37 1.8    0.028 0.0097  0.37
V3      0.68      0.68    0.60      0.41 2.1    0.025 0.0134  0.37
V4      0.73      0.73    0.65      0.48 2.8    0.021 0.0036  0.47

 Item statistics
     n raw.r std.r r.cor r.drop   mean   sd
V1 500  0.80  0.80  0.73   0.62  0.050 1.00
V2 500  0.77  0.77  0.67   0.57 -0.022 1.03
V3 500  0.72  0.73  0.58   0.50 -0.028 0.99
V4 500  0.67  0.66  0.46   0.40 -0.050 1.05

Non missing response frequency for each item
     -2   -1    0    1    2 miss
V1 0.06 0.24 0.38 0.25 0.07    0
V2 0.07 0.26 0.35 0.25 0.07    0
V3 0.05 0.27 0.38 0.22 0.07    0
V4 0.10 0.22 0.39 0.22 0.07    0
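> #[Editor's addition, not part of the original check run: the standardized
> #alpha printed for a4 follows from the average inter-item correlation by
> #alpha = k*rbar/(1 + (k-1)*rbar); with the k = 4 items and average_r = 0.40
> #shown above, 4*.4/(1 + 3*.4) = 0.727, matching std.alpha = 0.73.]
> #k <- 4; rbar <- 0.40
> #k * rbar / (1 + (k - 1) * rbar)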
> #summary just gives Alpha
> summary(a4)
Reliability analysis
  raw_alpha std.alpha G6(smc) average_r S/N  ase   mean   sd median_r
       0.73      0.73    0.68       0.4 2.7 0.02 -0.013 0.76      0.4
> 
> 
> 
> cleanEx()
> nameEx("anova.psych")
> ### * anova.psych
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: anova.psych
> ### Title: Model comparison for regression, mediation, and factor analysis
> ### Aliases: anova.psych
> ### Keywords: models multivariate
> 
> ### ** Examples
> 
> if(require("psychTools")) {
+ m1 <- setCor(reaction ~ import, data = Tal_Or,std=FALSE)
+ m2 <- setCor(reaction ~ import+pmi, data = Tal_Or,std=FALSE)
+ m3 <- setCor(reaction ~ import+pmi + cond, data = Tal_Or,std=FALSE)
+ anova(m1,m2,m3)
+ }
Loading required package: psychTools
Model 1 = setCor(y = reaction ~ import, data = Tal_Or, std = FALSE)
Model 2 = setCor(y = reaction ~ import + pmi, data = Tal_Or, std = FALSE)
Model 3 = setCor(y = reaction ~ import + pmi + cond, data = Tal_Or, std = FALSE)
$reaction
  Res Df   Res SS Diff df    Diff SS          F     Pr(F > )
1    121 229.8774      NA         NA         NA           NA
2    120 198.1888       1 31.6885717 19.0569054 2.705402e-05
3    119 197.8779       1  0.3109252  0.1869845 6.662209e-01
> 
> #Several interesting test cases are taken from analyses of the Spengler data set
> #Although the sample sizes are actually very large in the first wave, I use the
> #sample sizes from the last wave
> #This data set is actually in psychTools but is copied here until we can update psychTools
> #We set the n.iter to be 50 instead of the default value of 5,000
> if(require("psychTools")) {
+ 
+ mod1 <- mediate(Income.50 ~ IQ + Parental+ (Ed.11) ,data=Spengler,
+ n.obs = 1952, n.iter=50)
+ mod2 <- mediate(Income.50 ~ IQ + Parental+ (Ed.11) + (Income.11)
+ ,data=Spengler,n.obs = 1952, n.iter=50)
+ 
+ #Now, compare these models
+ anova(mod1,mod2)
+ }
The replication data matrices were simulated based upon the specified number of
subjects and the observed correlation matrix.
The replication data matrices were simulated based upon the specified number of
subjects and the observed correlation matrix.
Model 1 = mediate(y = Income.50 ~ IQ + Parental + (Ed.11), data = Spengler,
    n.obs = 1952, n.iter = 50)
Model 2 = mediate(y = Income.50 ~ IQ + Parental + (Ed.11) + (Income.11),
    data = Spengler, n.obs = 1952, n.iter = 50)
  Res Df   Res SS Diff df Diff SS        F     Pr(F > )
1   1948 1565.401      NA      NA       NA           NA
2   1947 1547.155       1 18.2464 22.96198 1.777401e-06
> 
> f3 <- fa(Thurstone,3,n.obs=213) #we need to specify the n.obs for the test to work
> f2 <- fa(Thurstone,2, n.obs=213)
> anova(f2,f3)
Model 1 = fa(r = Thurstone, nfactors = 2, n.obs = 213)
Model 2 = fa(r = Thurstone, nfactors = 3, n.obs = 213)
  df d.df chiSq d.chiSq PR  test empirical d.empirical test.echi    BIC  d.BIC
1 19   NA 86.62      NA NA    NA     66.73          NA        NA -15.25     NA
2 12    7  2.98   83.64  0 11.95      0.52        66.2      9.46 -61.36 -46.11
> 
> 
> 
> cleanEx()
detaching ‘package:psychTools’

> nameEx("bassAckward")
> ### * bassAckward
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: bassAckward
> ### Title: The Bass-Ackward factoring algorithm discussed by Goldberg
> ### Aliases: bassAckward bassAckward.diagram
> ### Keywords: multivariate models
> 
> ### ** Examples
> 
> bassAckward(Thurstone,4,main="Thurstone data set")
Call: bassAckward(r = Thurstone, nfactors = 4, main = "Thurstone data set")
1 F1
2 F1
3 F1 F2
4 F1 F2 F3
Use print with the short = FALSE option to see the correlations, or use
the summary command.
> f.labels <- list(level1=cs(Approach,Avoid),level2=cs(PosAffect,NegAffect,Constraint),
+ level3 = cs(Extraversion,Agreeableness,Neuroticism,Conscientiousness,Openness))
> 
> ba <- bassAckward(psychTools::bfi[1:25],c(2,3,5),labels=f.labels,
+ main="bfi data set from psychTools", values=TRUE)
Warning in arrows(x0 = tv[, 13], y0 = tv[, 14], x1 = tv[, 15], y1 = tv[, :
zero-length arrow is of indeterminate angle and so skipped
> print(ba,short=FALSE)
Call: bassAckward(r = psychTools::bfi[1:25], nfactors = c(2, 3, 5), labels = f.labels,
    main = "bfi data set from psychTools", values = TRUE)
1 F1 F2
2 F1 F2
3 F2 F1 F3

Factor correlations
      F1    F2
F1  1.00 -0.23
F2 -0.23  1.00

Factor correlations
      F1    F2
F1  0.95 -0.22
F2 -0.20  1.00
F3  0.59 -0.24

Factor correlations
      F1    F2    F3
F1 -0.16  0.98 -0.14
F2  0.82 -0.34  0.16
F3  0.30 -0.25  0.94
F4  0.79 -0.01  0.21
F5  0.37  0.04  0.49
> #show the items associated with the 5 level solution
> fa.lookup(ba,dictionary=psychTools::bfi.dictionary)
      F1    F2    F3    F4    F5 ItemLabel
E2 -0.68 -0.05  0.10 -0.02 -0.06     q_901
E4  0.59  0.29  0.01  0.02 -0.08    q_1410
E1 -0.56 -0.08 -0.06  0.11 -0.10     q_712
E5  0.42  0.05  0.15  0.27  0.21    q_1768
E3  0.42  0.25  0.08  0.00  0.28    q_1205
A3  0.12  0.66 -0.03  0.02  0.03    q_1206
A2  0.00  0.64 -0.02  0.08  0.03    q_1162
A5  0.23  0.53 -0.11  0.01  0.04    q_1419
A4  0.06  0.43 -0.06  0.19 -0.15    q_1364
A1  0.17 -0.41  0.21  0.07 -0.06     q_146
N1  0.10 -0.11  0.81  0.00 -0.05     q_952
N2  0.04 -0.09  0.78  0.01  0.01     q_974
N3 -0.10  0.08  0.71 -0.04  0.02    q_1099
N5 -0.20  0.21  0.49  0.00 -0.15    q_1505
N4 -0.39  0.09  0.47 -0.14  0.08    q_1479
C2 -0.09  0.08  0.15  0.67  0.04     q_530
C4  0.00  0.04  0.17 -0.61 -0.05     q_626
C3 -0.06  0.09  0.03  0.57 -0.07     q_619
C5 -0.14  0.02  0.19 -0.55  0.09    q_1949
C1 -0.03 -0.02  0.07  0.55  0.15     q_124
O3  0.15  0.08  0.03  0.02  0.61     q_492
O5  0.10  0.04  0.13 -0.03 -0.54    q_1964
O1  0.10  0.02  0.02  0.07  0.51     q_128
O2  0.06  0.16  0.19 -0.08 -0.46     q_316
O4 -0.32  0.17  0.13 -0.02  0.37    q_1738
   Item Giant3 Big6
E2 Find it difficult to approach others. Plasticity Extraversion
E4 Make friends easily. Plasticity Extraversion
E1 Don't talk a lot. Plasticity Extraversion
E5 Take charge. Plasticity Extraversion
E3 Know how to captivate people. Plasticity Extraversion
A3 Know how to comfort others.
Cohesion Agreeableness A2 Inquire about others' well-being. Cohesion Agreeableness A5 Make people feel at ease. Cohesion Agreeableness A4 Love children. Cohesion Agreeableness A1 Am indifferent to the feelings of others. Cohesion Agreeableness N1 Get angry easily. Stability Emotional Stability N2 Get irritated easily. Stability Emotional Stability N3 Have frequent mood swings. Stability Emotional Stability N5 Panic easily. Stability Emotional Stability N4 Often feel blue. Stability Emotional Stability C2 Continue until everything is perfect. Stability Conscientiousness C4 Do things in a half-way manner. Stability Conscientiousness C3 Do things according to a plan. Stability Conscientiousness C5 Waste my time. Stability Conscientiousness C1 Am exacting in my work. Stability Conscientiousness O3 Carry the conversation to a higher level. Plasticity Openness O5 Will not probe deeply into a subject. Plasticity Openness O1 Am full of ideas. Plasticity Openness O2 Avoid difficult reading material. Plasticity Openness O4 Spend time reflecting on things. Plasticity Openness Little12 Keying IPIP100 E2 Sociability -1 B5:E E4 Sociability 1 B5:E E1 Sociability -1 B5:E E5 Assertiveness 1 B5:E E3 Assertiveness 1 B5:E A3 Compassion 1 B5:A A2 Compassion 1 B5:A A5 Compassion 1 B5:A A4 Compassion 1 B5:A A1 Compassion -1 B5:A N1 Balance -1 B5:N N2 Balance -1 B5:N N3 Balance -1 B5:N N5 Balance -1 B5:N N4 Balance -1 B5:N C2 Orderliness 1 B5:C C4 Industriousness -1 B5:C C3 Orderliness 1 B5:C C5 Industriousness -1 B5:C C1 Orderliness 1 B5:C O3 Intellect 1 B5:O O5 Openness -1 B5:O O1 Intellect 1 B5:O O2 Intellect -1 B5:O O4 Openness 1 B5:O > #now show the items associated with the 3 level solution > fa.lookup(ba$fa[[2]],dictionary=psychTools::bfi.dictionary) F1 F2 F3 ItemLabel Item E4 0.69 -0.07 -0.09 q_1410 Make friends easily. A5 0.64 -0.06 -0.02 q_1419 Make people feel at ease. A3 0.63 0.06 0.01 q_1206 Know how to comfort others. E3 0.63 0.09 0.07 q_1205 Know how to captivate people. E2 -0.58 0.21 0.03 q_901 Find it difficult to approach others. E1 -0.54 0.01 0.11 q_712 Don't talk a lot. A2 0.52 0.07 0.07 q_1162 Inquire about others' well-being. E5 0.44 0.09 0.29 q_1768 Take charge. A4 0.36 -0.04 0.09 q_1364 Love children. O3 0.35 0.11 0.25 q_492 Carry the conversation to a higher level. A1 -0.21 0.09 0.02 q_146 Am indifferent to the feelings of others. N3 0.03 0.76 0.00 q_1099 Have frequent mood swings. N2 0.00 0.74 0.02 q_974 Get irritated easily. N1 0.01 0.74 -0.02 q_952 Get angry easily. N4 -0.19 0.59 -0.04 q_1479 Often feel blue. N5 -0.01 0.52 -0.05 q_1505 Panic easily. O4 -0.01 0.27 0.17 q_1738 Spend time reflecting on things. C2 0.02 0.11 0.63 q_530 Continue until everything is perfect. C4 0.04 0.23 -0.59 q_626 Do things in a half-way manner. C1 -0.01 0.04 0.59 q_124 Am exacting in my work. C3 0.01 -0.01 0.48 q_619 Do things according to a plan. C5 -0.07 0.29 -0.44 q_1949 Waste my time. O1 0.23 0.08 0.27 q_128 Am full of ideas. O2 0.06 0.13 -0.27 q_316 Avoid difficult reading material. O5 -0.03 0.03 -0.26 q_1964 Will not probe deeply into a subject. 
Giant3 Big6 Little12 Keying IPIP100 E4 Plasticity Extraversion Sociability 1 B5:E A5 Cohesion Agreeableness Compassion 1 B5:A A3 Cohesion Agreeableness Compassion 1 B5:A E3 Plasticity Extraversion Assertiveness 1 B5:E E2 Plasticity Extraversion Sociability -1 B5:E E1 Plasticity Extraversion Sociability -1 B5:E A2 Cohesion Agreeableness Compassion 1 B5:A E5 Plasticity Extraversion Assertiveness 1 B5:E A4 Cohesion Agreeableness Compassion 1 B5:A O3 Plasticity Openness Intellect 1 B5:O A1 Cohesion Agreeableness Compassion -1 B5:A N3 Stability Emotional Stability Balance -1 B5:N N2 Stability Emotional Stability Balance -1 B5:N N1 Stability Emotional Stability Balance -1 B5:N N4 Stability Emotional Stability Balance -1 B5:N N5 Stability Emotional Stability Balance -1 B5:N O4 Plasticity Openness Openness 1 B5:O C2 Stability Conscientiousness Orderliness 1 B5:C C4 Stability Conscientiousness Industriousness -1 B5:C C1 Stability Conscientiousness Orderliness 1 B5:C C3 Stability Conscientiousness Orderliness 1 B5:C C5 Stability Conscientiousness Industriousness -1 B5:C O1 Plasticity Openness Intellect 1 B5:O O2 Plasticity Openness Intellect -1 B5:O O5 Plasticity Openness Openness -1 B5:O > # compare the 3 factor solution to what we get by extracting 3 factors directly > f3 <- fa(psychTools::bfi[1:25],3) > f3$loadings - ba$fa[[2]]$loadings # these are the same Loadings: MR1 MR2 MR3 A1 A2 A3 A4 A5 C1 C2 C3 C4 C5 E1 E2 E3 E4 E5 N1 N2 N3 N4 N5 O1 O2 O3 O4 O5 MR1 MR2 MR3 SS loadings 0 0 0 Proportion Var 0 0 0 Cumulative Var 0 0 0 > #do pca instead of factors; just summarize, don't plot > summary(bassAckward(psychTools::bfi[1:25],c(1,3,5,7),fm="pca", + main="Components",plot=FALSE)) Call: bassAckward(r = psychTools::bfi[1:25], nfactors = c(1, 3, 5, 7), fm = "pca", plot = FALSE, main = "Components") Factor correlations C1 C1 1 Factor correlations C1 C1 0.83 C2 -0.56 C3 0.59 Factor correlations C1 C2 C3 C1 -0.13 0.99 -0.10 C2 0.84 -0.24 0.17 C3 0.25 -0.21 0.86 C4 0.70 -0.01 0.14 C5 0.17 0.08 0.61 Factor correlations C1 C2 C3 C4 C5 C1 0.97 -0.17 -0.15 -0.10 -0.03 C2 -0.15 0.95 0.15 0.25 0.17 C3 -0.10 0.15 0.99 0.13 0.15 C4 -0.05 0.45 0.29 0.78 0.02 C5 0.23 -0.09 -0.20 0.06 -0.84 C6 0.32 -0.02 -0.08 0.34 0.63 C7 0.15 0.21 0.06 -0.61 0.01 > ##not run, but useful example > > > > cleanEx() > nameEx("best.scales") > ### * best.scales > > flush(stderr()); flush(stdout()) > > ### Name: bestScales > ### Title: A bootstrap aggregation function for choosing most predictive > ### unit weighted items > ### Aliases: bestItems bestScales BISCUIT biscuit BISCWIT biscwit > ### Keywords: models multivariate tree > > ### ** Examples > > #This is an example of 'bagging' (bootstrap aggregation) > #not run in order to pass the timing tests for Debian at CRAN > #bestboot <- bestScales(psychTools::bfi,criteria=cs(gender,age,education), > # n.iter=10,dictionary=psychTools::bfi.dictionary[1:3]) > #bestboot > #compare with 10 fold cross validation > > > > cleanEx() > nameEx("bfi") > ### * bfi > > flush(stderr()); flush(stdout()) > > ### Name: bfi > ### Title: 25 Personality items representing 5 factors > ### Aliases: bfi bfi.keys > ### Keywords: datasets > > ### ** Examples > > data(bfi) > psych::describe(bfi) vars n mean sd median trimmed mad min max range skew A1 1 2784 2.41 1.41 2 2.23 1.48 1 6 5 0.83 A2 2 2773 4.80 1.17 5 4.98 1.48 1 6 5 -1.12 A3 3 2774 4.60 1.30 5 4.79 1.48 1 6 5 -1.00 A4 4 2781 4.70 1.48 5 4.93 1.48 1 6 5 -1.03 A5 5 2784 4.56 1.26 5 4.71 1.48 1 6 5 -0.85 C1 6 2779 4.50 1.24 5 4.64 1.48 1 6 5 -0.85 C2 7 2776 4.37
1.32 5 4.50 1.48 1 6 5 -0.74 C3 8 2780 4.30 1.29 5 4.42 1.48 1 6 5 -0.69 C4 9 2774 2.55 1.38 2 2.41 1.48 1 6 5 0.60 C5 10 2784 3.30 1.63 3 3.25 1.48 1 6 5 0.07 E1 11 2777 2.97 1.63 3 2.86 1.48 1 6 5 0.37 E2 12 2784 3.14 1.61 3 3.06 1.48 1 6 5 0.22 E3 13 2775 4.00 1.35 4 4.07 1.48 1 6 5 -0.47 E4 14 2791 4.42 1.46 5 4.59 1.48 1 6 5 -0.82 E5 15 2779 4.42 1.33 5 4.56 1.48 1 6 5 -0.78 N1 16 2778 2.93 1.57 3 2.82 1.48 1 6 5 0.37 N2 17 2779 3.51 1.53 4 3.51 1.48 1 6 5 -0.08 N3 18 2789 3.22 1.60 3 3.16 1.48 1 6 5 0.15 N4 19 2764 3.19 1.57 3 3.12 1.48 1 6 5 0.20 N5 20 2771 2.97 1.62 3 2.85 1.48 1 6 5 0.37 O1 21 2778 4.82 1.13 5 4.96 1.48 1 6 5 -0.90 O2 22 2800 2.71 1.57 2 2.56 1.48 1 6 5 0.59 O3 23 2772 4.44 1.22 5 4.56 1.48 1 6 5 -0.77 O4 24 2786 4.89 1.22 5 5.10 1.48 1 6 5 -1.22 O5 25 2780 2.49 1.33 2 2.34 1.48 1 6 5 0.74 gender 26 2800 1.67 0.47 2 1.71 0.00 1 2 1 -0.73 education 27 2577 3.19 1.11 3 3.22 1.48 1 5 4 -0.05 age 28 2800 28.78 11.13 26 27.43 10.38 3 86 83 1.02 kurtosis se A1 -0.31 0.03 A2 1.05 0.02 A3 0.44 0.02 A4 0.04 0.03 A5 0.16 0.02 C1 0.30 0.02 C2 -0.14 0.03 C3 -0.13 0.02 C4 -0.62 0.03 C5 -1.22 0.03 E1 -1.09 0.03 E2 -1.15 0.03 E3 -0.47 0.03 E4 -0.30 0.03 E5 -0.09 0.03 N1 -1.01 0.03 N2 -1.05 0.03 N3 -1.18 0.03 N4 -1.09 0.03 N5 -1.06 0.03 O1 0.43 0.02 O2 -0.81 0.03 O3 0.30 0.02 O4 1.08 0.02 O5 -0.24 0.03 gender -1.47 0.01 education -0.32 0.02 age 0.56 0.21 > # create the bfi.keys (actually already saved in the data file) > keys <- + list(agree=c("-A1","A2","A3","A4","A5"),conscientious=c("C1","C2","C3","-C4","-C5"), + extraversion=c("-E1","-E2","E3","E4","E5"),neuroticism=c("N1","N2","N3","N4","N5"), + openness = c("O1","-O2","O3","O4","-O5")) > > scores <- psych::scoreItems(keys,bfi,min=1,max=6) #specify the minimum and maximum values > scores Call: psych::scoreItems(keys = keys, items = bfi, min = 1, max = 6) (Unstandardized) Alpha: agree conscientious extraversion neuroticism openness alpha 0.7 0.72 0.76 0.81 0.6 Standard errors of unstandardized Alpha: agree conscientious extraversion neuroticism openness ASE 0.014 0.014 0.013 0.011 0.017 Average item correlation: agree conscientious extraversion neuroticism openness average.r 0.32 0.34 0.39 0.46 0.23 Median item correlation: agree conscientious extraversion neuroticism openness 0.34 0.34 0.38 0.41 0.22 Guttman 6* reliability: agree conscientious extraversion neuroticism openness Lambda.6 0.7 0.72 0.76 0.81 0.6 Signal/Noise based upon av.r : agree conscientious extraversion neuroticism openness Signal/Noise 2.3 2.6 3.2 4.3 1.5 Scale intercorrelations corrected for attenuation raw correlations below the diagonal, alpha on the diagonal corrected correlations above the diagonal: agree conscientious extraversion neuroticism openness agree 0.70 0.36 0.63 -0.245 0.23 conscientious 0.26 0.72 0.35 -0.305 0.30 extraversion 0.46 0.26 0.76 -0.284 0.32 neuroticism -0.18 -0.23 -0.22 0.812 -0.12 openness 0.15 0.19 0.22 -0.086 0.60 In order to see the item by scale loadings and frequency counts of the data print with the short option = FALSE> #show the use of the fa.lookup with a dictionary > #psych::keys.lookup(bfi.keys,bfi.dictionary[,1:4]) #deprecated -- use psychTools > > > > > cleanEx() > nameEx("bi.bars") > ### * bi.bars > > flush(stderr()); flush(stdout()) > > ### Name: bi.bars > ### Title: Draw pairs of bargraphs based on two groups > ### Aliases: bi.bars > ### Keywords: hplot > > ### ** Examples > > #data(bfi) > bi.bars(psychTools::bfi,"age","gender" ,ylab="Age",main="Age by males and females") > 
bi.bars(psychTools::bfi,"education","gender",xlab="Education", + main="Education by gender",horiz=FALSE) > > > > cleanEx() > nameEx("bifactor") > ### * bifactor > > flush(stderr()); flush(stdout()) > > ### Name: Bechtoldt > ### Title: Seven data sets showing a bifactor solution. > ### Aliases: Bechtoldt.1 Bechtoldt.2 Bechtoldt Holzinger Holzinger.9 Reise > ### Thurstone Thurstone.33 Thurstone.9 > ### Keywords: datasets > > ### ** Examples > > > if(!require(GPArotation)) {message("I am sorry, to run omega requires GPArotation") + } else { + #holz <- omega(Holzinger,4, title = "14 ability tests from Holzinger-Swineford") + #bf <- omega(Reise,5,title="16 health items from Reise") + #omega(Reise,5,labels=colnames(Reise),title="16 health items from Reise") + thur.om <- omega(Thurstone,title="9 variables from Thurstone") #compare with + thur.bf <- fa(Thurstone,3,rotate="biquartimin") + factor.congruence(thur.om,thur.bf) + } Loading required package: GPArotation MR1 MR2 MR3 g 0.99 0.58 0.46 F1* 0.66 0.97 0.01 F2* 0.55 0.01 0.97 F3* 0.64 -0.15 -0.15 h2 0.98 0.59 0.43 > > > > cleanEx() detaching ‘package:GPArotation’ > nameEx("bigCor") > ### * bigCor > > flush(stderr()); flush(stdout()) > > ### Name: bigCor > ### Title: Find large correlation matrices by stitching together smaller > ### ones found more rapidly > ### Aliases: bigCor > ### Keywords: models multivariate > > ### ** Examples > > R <- bigCor(psychTools::bfi,10) > #compare the results with > r.bfi <- cor(psychTools::bfi,use="pairwise") > all.equal(R,r.bfi) [1] TRUE > > > > cleanEx() > nameEx("biplot.psych") > ### * biplot.psych > > flush(stderr()); flush(stdout()) > > ### Name: biplot.psych > ### Title: Draw biplots of factor or component scores by factor or > ### component loadings > ### Aliases: biplot.psych > ### Keywords: multivariate hplot > > ### ** Examples > > #the standard example > data(USArrests) > fa2 <- fa(USArrests,2,scores=TRUE) > biplot(fa2,labels=rownames(USArrests)) > > # plot the 3 factor solution > #data(bfi) > fa3 <- fa(psychTools::bfi[1:200,1:15],3,scores=TRUE) > biplot(fa3) > #just plot factors 1 and 3 from that solution > biplot(fa3,choose=c(1,3)) > > # > fa2 <- fa(psychTools::bfi[16:25],2) #factor analysis > fa2$scores <- fa2$scores[1:100,] #just take the first 100 > #now plot with different colors and shapes for males and females > biplot(fa2,pch=c(24,21)[psychTools::bfi[1:100,"gender"]], + group =psychTools::bfi[1:100,"gender"], + main="Biplot of Conscientiousness and Neuroticism by gender") > > > r <- cor(psychTools::bfi[1:200,1:10], use="pairwise") #find the correlations > f2 <- fa(r,2) > x <- list() > x$scores <- factor.scores(psychTools::bfi[1:200,1:10],f2) > x$loadings <- f2$loadings > class(x) <- c('psych','fa') > biplot(x,main="biplot from correlation matrix and factor scores") > > > > > cleanEx() > nameEx("block.random") > ### * block.random > > flush(stderr()); flush(stdout()) > > ### Name: block.random > ### Title: Create a block randomized structure for n independent variables > ### Aliases: block.random > ### Keywords: multivariate > > ### ** Examples > > br <- block.random(n=24,c(2,3)) > pairs.panels(br) > br <- block.random(96,c(time=4,drug=3,sex=2)) > pairs.panels(br) > > > > cleanEx() > nameEx("bock.table") > ### * bock.table > > flush(stderr()); flush(stdout()) > > ### Name: bock > ### Title: Bock and Liberman (1970) data set of 1000 observations of the > ### LSAT > ### Aliases: bock bock.table lsat6 lsat7 bock.lsat > ### Keywords: datasets > > ### ** Examples > > data(bock) > responses <- 
table2df(bock.table[,2:6],count=bock.table[,7], + labs= paste("lsat6.",1:5,sep="")) > describe(responses) vars n mean sd median trimmed mad min max range skew kurtosis lsat6.1 1 1000 0.92 0.27 1 1.00 0 0 1 1 -3.20 8.22 lsat6.2 2 1000 0.71 0.45 1 0.76 0 0 1 1 -0.92 -1.16 lsat6.3 3 1000 0.55 0.50 1 0.57 0 0 1 1 -0.22 -1.95 lsat6.4 4 1000 0.76 0.43 1 0.83 0 0 1 1 -1.24 -0.48 lsat6.5 5 1000 0.87 0.34 1 0.96 0 0 1 1 -2.17 2.72 se lsat6.1 0.01 lsat6.2 0.01 lsat6.3 0.02 lsat6.4 0.01 lsat6.5 0.01 > ## maybe str(bock.table) ; plot(bock.table) ... > > > > cleanEx() > nameEx("cattell") > ### * cattell > > flush(stderr()); flush(stdout()) > > ### Name: cattell > ### Title: 12 cognitive variables from Cattell (1963) > ### Aliases: cattell > ### Keywords: datasets > > ### ** Examples > > data(cattell) > corPlot(cattell,numbers=TRUE,upper=FALSE,diag=FALSE, + main="12 cognitive variables from Cattell (1963)",xlas=2) > > > > cleanEx() > nameEx("circ.tests") > ### * circ.tests > > flush(stderr()); flush(stdout()) > > ### Name: circ.tests > ### Title: Apply four tests of circumplex versus simple structure > ### Aliases: circ.tests > ### Keywords: multivariate models > > ### ** Examples > > circ.data <- circ.sim(24,500) > circ.fa <- fa(circ.data,2) > plot(circ.fa,title="Circumplex Structure") > ct <- circ.tests(circ.fa) > #compare with non-circumplex data > simp.data <- item.sim(24,500) > simp.fa <- fa(simp.data,2) > plot(simp.fa,title="Simple Structure") > st <- circ.tests(simp.fa) > res <- rbind(ct[1:4],st[1:4]) > rownames(res) <- c("circumplex","Simple") > print(res,digits=2) gaps fisher RT VT circumplex 0.0056 0.044 0.35 0.35 Simple 0.28 0.053 0.91 0.87 > > > > > cleanEx() > nameEx("cluster.cor") > ### * cluster.cor > > flush(stderr()); flush(stdout()) > > ### Name: scoreOverlap > ### Title: Find correlations of composite variables (corrected for overlap) > ### from a larger matrix. 
> ### Aliases: cluster.cor scoreOverlap scoreBy > ### Keywords: multivariate models > > ### ** Examples > > #use the msq data set that shows the structure of energetic and tense arousal > small.msq <- psychTools::msq[ c("active", "energetic", "vigorous", "wakeful", + "wide.awake", "full.of.pep", "lively", "sleepy", "tired", "drowsy","intense", + "jittery", "fearful", "tense", "clutched.up", "quiet", "still", "placid", + "calm", "at.rest") ] > small.R <- cor(small.msq,use="pairwise") > keys.list <- list( + EA = c("active", "energetic", "vigorous", "wakeful", "wide.awake", "full.of.pep", + "lively", "-sleepy", "-tired", "-drowsy"), + TA =c("intense", "jittery", "fearful", "tense", "clutched.up", "-quiet", "-still", + "-placid", "-calm", "-at.rest") , + + high.EA = c("active", "energetic", "vigorous", "wakeful", "wide.awake", "full.of.pep", + "lively"), + low.EA =c("sleepy", "tired", "drowsy"), + lowTA= c("quiet", "still", "placid", "calm", "at.rest"), + highTA = c("intense", "jittery", "fearful", "tense", "clutched.up") + ) > > keys <- make.keys(small.R,keys.list) > > adjusted.scales <- scoreOverlap(keys.list,small.R) > #compare with unadjusted > confounded.scales <- cluster.cor(keys,small.R) > summary(adjusted.scales) Call: scoreOverlap(keys = keys.list, r = small.R) Scale intercorrelations adjusted for item overlap Scale intercorrelations corrected for attenuation raw correlations (corrected for overlap) below the diagonal, (standardized) alpha on the diagonal corrected (for overlap and reliability) correlations above the diagonal: EA TA high.EA low.EA lowTA highTA EA 0.93 0.27 0.965 -0.803 -0.18 0.253 TA 0.23 0.75 0.282 -0.167 -0.81 0.821 high.EA 0.90 0.24 0.937 -0.620 -0.12 0.324 low.EA -0.75 -0.14 -0.579 0.928 0.25 -0.023 lowTA -0.15 -0.60 -0.098 0.204 0.73 -0.335 highTA 0.21 0.62 0.273 -0.019 -0.25 0.757 > #note that the EA and high and low EA and TA and high and low TA > # scale correlations are confounded > summary(confounded.scales) Call: cluster.cor(keys = keys, r.mat = small.R) Scale intercorrelations corrected for attenuation raw correlations below the diagonal, (standardized) alpha on the diagonal corrected correlations above the diagonal: EA TA high.EA low.EA lowTA highTA EA 0.93 0.27 1.024 -0.848 -0.18 0.253 TA 0.23 0.75 0.282 -0.167 -1.06 1.056 high.EA 0.96 0.24 0.937 -0.620 -0.12 0.324 low.EA -0.79 -0.14 -0.579 0.928 0.25 -0.023 lowTA -0.15 -0.78 -0.098 0.204 0.73 -0.335 highTA 0.21 0.80 0.273 -0.019 -0.25 0.757 > > bfi.stats <- statsBy(bfi,group="education",cors=TRUE ,cor="cov") Warning in log((1 + rho)/(1 - rho)) : NaNs produced Warning in log((1 + rho)/(1 - rho)) : NaNs produced Warning in log((1 + rho)/(1 - rho)) : NaNs produced Warning in log((1 + rho)/(1 - rho)) : NaNs produced Warning in log((1 + rho)/(1 - rho)) : NaNs produced Warning in log((1 + rho)/(1 - rho)) : NaNs produced > #specify to find covariances > bfi.plus.keys <- c(bfi.keys,gender="gender",age ="age") > bfi.by <- scoreBy(bfi.plus.keys,bfi.stats) > bfi.by$var #to show the variances of each scale by group agree conscientious extraversion neuroticism openness gender 1 0.7882831 0.8887472 1.182612 1.470014 0.6942160 0.2438941 2 0.7899725 0.9758329 1.168059 1.636626 0.6715624 0.2290990 3 0.7228965 0.8233179 1.052605 1.432392 0.6071583 0.2039503 4 0.8202946 0.8872774 1.261313 1.391217 0.6520265 0.2250036 5 0.7806566 0.9607475 1.068765 1.218635 0.6307554 0.2319599 age 1 108.14342 2 150.03074 3 89.28392 4 106.76803 5 120.48925 > round(bfi.by$cor.mat,2) #the correlations by group agree-cnscn agree-extrv
agree-nrtcs agree-opnns agree-gendr agree-age 1 0.21 0.40 -0.12 0.15 0.12 0.20 2 0.36 0.60 -0.25 0.10 0.17 0.21 3 0.23 0.42 -0.17 0.16 0.25 0.18 4 0.23 0.51 -0.14 -0.03 0.18 0.09 5 0.21 0.49 -0.19 0.22 0.26 0.13 cnscn-extrv cnscn-nrtcs cnscn-opnns cnscn-gendr cnscn-age extrv-nrtcs 1 0.28 -0.14 0.20 -0.10 0.21 -0.20 2 0.30 -0.34 0.09 0.17 0.08 -0.24 3 0.26 -0.23 0.23 0.09 0.13 -0.23 4 0.25 -0.20 0.13 0.08 0.11 -0.21 5 0.20 -0.23 0.22 0.12 0.03 -0.19 extrv-opnns extrv-gendr extrv-age nrtcs-opnns nrtcs-gendr nrtcs-age 1 0.24 0.04 0.02 -0.14 0.20 -0.15 2 0.09 0.17 0.02 0.00 0.04 -0.12 3 0.27 0.10 0.10 -0.13 0.14 -0.06 4 0.14 0.20 -0.02 -0.04 0.10 -0.12 5 0.26 0.12 0.08 -0.04 0.06 -0.18 opnns-gendr opnns-age gendr-age 1 -0.14 -0.06 0.00 2 -0.15 -0.07 0.00 3 -0.02 -0.03 0.12 4 -0.06 0.15 -0.01 5 -0.05 0.17 0.09 > bfi.by$alpha $`1` $`1`$alpha agree conscientious extraversion neuroticism openness gender age [1,] 0.6510109 0.6893152 0.7735322 0.8045484 0.5921303 1 1 $`2` $`2`$alpha agree conscientious extraversion neuroticism openness gender age [1,] 0.6512213 0.7191278 0.7645828 0.8268147 0.6092137 1 1 $`3` $`3`$alpha agree conscientious extraversion neuroticism openness gender age [1,] 0.6804076 0.70803 0.7557756 0.8169917 0.5480445 1 1 $`4` $`4`$alpha agree conscientious extraversion neuroticism openness gender age [1,] 0.7406404 0.7339035 0.7978691 0.8151749 0.6561507 1 1 $`5` $`5`$alpha agree conscientious extraversion neuroticism openness gender age [1,] 0.7415245 0.7558347 0.7507926 0.7973022 0.66304 1 1 > > > > cleanEx() > nameEx("cluster.fit") > ### * cluster.fit > > flush(stderr()); flush(stdout()) > > ### Name: cluster.fit > ### Title: cluster Fit: fit of the cluster model to a correlation matrix > ### Aliases: cluster.fit > ### Keywords: multivariate cluster > > ### ** Examples > > r.mat<- Harman74.cor$cov > iq.clus <- ICLUST(r.mat,nclusters =2) > fit <- cluster.fit(r.mat,iq.clus$loadings,iq.clus$clusters) > fit $clusterfit [1] 0.7943439 $structurefit [1] 0.6711022 $patternfit [1] 0.9249676 $clusterrmse [1] 0.1753907 $patternrmse [1] 0.1059401 > > > > > > cleanEx() > nameEx("cluster.loadings") > ### * cluster.loadings > > flush(stderr()); flush(stdout()) > > ### Name: cluster.loadings > ### Title: Find item by cluster correlations, corrected for overlap and > ### reliability > ### Aliases: cluster.loadings > ### Keywords: multivariate cluster > > ### ** Examples > > > r.mat<- Harman74.cor$cov > clusters <- matrix(c(1,1,1,rep(0,24),1,1,1,1,rep(0,17)),ncol=2) > cluster.loadings(clusters,r.mat) Call: cluster.loadings(keys = clusters, r.mat = r.mat) (Standardized) Alpha: [1] 0.61 0.79 (Standardized) G6*: [1] 0.66 0.84 Average item correlation: [1] 0.35 0.48 Number of items: [1] 3 4 Scale intercorrelations corrected for attenuation raw correlations below the diagonal, alpha on the diagonal corrected correlations above the diagonal: [,1] [,2] [1,] 0.61 0.69 [2,] 0.48 0.79 Item by scale intercorrelations corrected for item overlap and scale reliability [,1] [,2] VisualPerception 0.675 0.50 Cubes 0.512 0.32 PaperFormBoard 0.636 0.36 Flags 0.550 0.45 GeneralInformation 0.468 0.76 PargraphComprehension 0.459 0.82 SentenceCompletion 0.375 0.84 WordClassification 0.478 0.74 WordMeaning 0.387 0.85 Addition 0.054 0.30 Code 0.301 0.36 CountingDots 0.328 0.23 StraightCurvedCapitals 0.575 0.46 WordRecognition 0.222 0.30 NumberRecognition 0.238 0.27 FigureRecognition 0.520 0.34 ObjectNumber 0.196 0.31 NumberFigure 0.457 0.29 FigureWord 0.380 0.28 Deduction 0.523 0.57 NumericalPuzzles 0.461 0.43 
ProblemReasoning 0.491 0.56 SeriesCompletion 0.661 0.56 ArithmeticProblems 0.382 0.54 > > > > > > > cleanEx() > nameEx("cluster.plot") > ### * cluster.plot > > flush(stderr()); flush(stdout()) > > ### Name: cluster.plot > ### Title: Plot factor/cluster loadings and assign items to clusters by > ### their highest loading. > ### Aliases: cluster.plot fa.plot factor.plot > ### Keywords: multivariate hplot cluster > > ### ** Examples > > circ.data <- circ.sim(24,500) > circ.fa <- fa(circ.data,2) > plot(circ.fa,cut=.5) > f5 <- fa(psychTools::bfi[1:25],5) > plot(f5,labels=colnames(psychTools::bfi)[1:25],show.points=FALSE) > plot(f5,labels=colnames(psychTools::bfi)[1:25],show.points=FALSE,choose=c(1,2,4)) > > > > cleanEx() > nameEx("cluster2keys") > ### * cluster2keys > > flush(stderr()); flush(stdout()) > > ### Name: cluster2keys > ### Title: Convert a cluster vector (from e.g., kmeans) to a keys matrix > ### suitable for scoring item clusters. > ### Aliases: cluster2keys > ### Keywords: multivariate > > ### ** Examples > > test.data <- Harman74.cor$cov > kc <- kmeans(test.data,4) > keys <- cluster2keys(kc) > keys #these match those found by ICLUST [,1] [,2] [,3] [,4] [1,] 1 0 0 0 [2,] 1 0 0 0 [3,] 1 0 0 0 [4,] 1 0 0 0 [5,] 0 1 0 0 [6,] 0 1 0 0 [7,] 0 1 0 0 [8,] 0 1 0 0 [9,] 0 1 0 0 [10,] 0 0 1 0 [11,] 0 0 1 0 [12,] 0 0 1 0 [13,] 0 0 1 0 [14,] 0 0 0 1 [15,] 0 0 0 1 [16,] 0 0 0 1 [17,] 0 0 0 1 [18,] 0 0 0 1 [19,] 0 0 0 1 [20,] 0 1 0 0 [21,] 0 0 1 0 [22,] 0 1 0 0 [23,] 0 1 0 0 [24,] 0 0 1 0 > cluster.cor(keys,test.data) Call: cluster.cor(keys = keys, r.mat = test.data) (Standardized) Alpha: [1] 0.67 0.89 0.83 0.74 (Standardized) G6*: [1] 0.71 0.91 0.86 0.76 Average item correlation: [1] 0.34 0.51 0.45 0.32 Number of items: [1] 4 8 6 6 Signal to Noise ratio based upon average r and n [1] 2.1 8.4 4.8 2.8 Scale intercorrelations corrected for attenuation raw correlations below the diagonal, alpha on the diagonal corrected correlations above the diagonal: [,1] [,2] [,3] [,4] [1,] 0.67 0.72 0.55 0.60 [2,] 0.56 0.89 0.63 0.63 [3,] 0.41 0.54 0.83 0.67 [4,] 0.42 0.51 0.53 0.74 > > > > cleanEx() > nameEx("cohen.d") > ### * cohen.d > > flush(stderr()); flush(stdout()) > > ### Name: cohen.d > ### Title: Find Cohen d and confidence intervals > ### Aliases: cohen.d d.robust cohen.d.ci d.ci cohen.d.by d2r r2d d2t t2d > ### m2t m2d d2OVL d2OVL2 d2CL d2U3 cd.validity > ### Keywords: models multivariate > > ### ** Examples > > cohen.d(sat.act,"gender") Call: cohen.d(x = sat.act, group = "gender") Cohen d statistic of difference between two means lower effect upper education 0.03 0.18 0.34 age -0.20 -0.04 0.11 ACT -0.23 -0.08 0.08 SATV -0.19 -0.04 0.12 SATQ -0.51 -0.35 -0.19 Multivariate (Mahalanobis) distance between groups [1] 0.52 r equivalent of difference between two means education age ACT SATV SATQ 0.09 -0.02 -0.04 -0.02 -0.17 > #robust version > round(d.robust(sat.act,"gender")$robust.d,2) education age ACT SATV SATQ 0.14 -0.05 -0.16 -0.05 -0.39 > > #formula input is nicer > cohen.d(sat.act ~ gender) #formula input version Call: cohen.d(x = sat.act ~ gender) Cohen d statistic of difference between two means lower effect upper education 0.03 0.18 0.34 age -0.20 -0.04 0.11 ACT -0.23 -0.08 0.08 SATV -0.19 -0.04 0.12 SATQ -0.51 -0.35 -0.19 Multivariate (Mahalanobis) distance between groups [1] 0.52 r equivalent of difference between two means education age ACT SATV SATQ 0.09 -0.02 -0.04 -0.02 -0.17 > > #report cohen.d by another group > cd <- cohen.d.by(sat.act,"gender","education") > cohen.d(SATV + SATQ ~ 
gender, data=sat.act) #just choose two variables Call: cohen.d(x = SATV + SATQ ~ gender, data = sat.act) Cohen d statistic of difference between two means lower effect upper SATV -0.19 -0.04 0.12 SATQ -0.51 -0.35 -0.19 Multivariate (Mahalanobis) distance between groups [1] 0.43 r equivalent of difference between two means SATV SATQ -0.02 -0.17 > summary(cd) #summarize the output Extract effect sizes by groups from cohen.d.by age ACT SATV SATQ Md genderforeducation0 0.039 -0.601 -0.357 -0.35 0.61 genderforeducation1 -0.063 0.239 -0.047 -0.28 0.50 genderforeducation2 0.471 0.119 0.253 0.12 0.57 genderforeducation3 0.064 -0.078 -0.064 -0.45 0.60 genderforeducation4 -0.383 0.117 -0.055 -0.37 0.79 genderforeducation5 -0.146 -0.472 -0.032 -0.51 0.63 > > #formula version combines these functions > cd <- cohen.d(sat.act ~ gender + education) #find d by gender for each level of education > summary(cd) Extract effect sizes by groups from cohen.d.by age ACT SATV SATQ Md genderforeducation0 0.039 -0.601 -0.357 -0.35 0.61 genderforeducation1 -0.063 0.239 -0.047 -0.28 0.50 genderforeducation2 0.471 0.119 0.253 0.12 0.57 genderforeducation3 0.064 -0.078 -0.064 -0.45 0.60 genderforeducation4 -0.383 0.117 -0.055 -0.37 0.79 genderforeducation5 -0.146 -0.472 -0.032 -0.51 0.63 > > #now show several examples of confidence intervals > #one group (d vs 0) > #consider the t from the cushny data set > t2d( -4.0621,n1=10) [1] -1.284549 > d.ci(-1.284549,n1=10) #the confidence interval of the effect of drug on sleep lower effect upper [1,] -2.118004 -1.284549 -0.4146159 > #two groups > d.ci(.62,n=64) #equal group size lower effect upper [1,] 0.1019223 0.62 1.128983 > d.ci(.62,n1=35,n2=29) #unequal group size lower effect upper [1,] 0.1135519 0.62 1.121681 > #several examples of d and t from data > m2d(52.58,-70.65,49.9,47.5) #Terman and Miles 1936 [1] 2.529622 > > #graphically show the various overlap statistics > curve(d2OVL2(x),0,3,xlab="d",ylab="",lty="dashed", + main="Four representations of effect size (d) ") > curve(d2OVL(x),0,3,xlab="d",add=TRUE,) > curve(d2CL(x),0,3,add=TRUE) > curve(d2U3(x), add=TRUE,lty="dotted") > text(1,.37,"OVL2") > text(2,.37,"OVL") > text(1,.88,"U3") > text(2, .88,"CL") > > > > cleanEx() > nameEx("comorbidity") > ### * comorbidity > > flush(stderr()); flush(stdout()) > > ### Name: comorbidity > ### Title: Convert base rates of two diagnoses and their comorbidity into > ### phi, Yule, and tetrachorics > ### Aliases: comorbidity > ### Keywords: multivariate > > ### ** Examples > > comorbidity(.2,.15,.1,c("Anxiety","Depression")) Call: comorbidity(d1 = 0.2, d2 = 0.15, com = 0.1, labels = c("Anxiety", "Depression")) Comorbidity table Anxiety -Anxiety Depression 0.1 0.05 -Depression 0.1 0.75 implies phi = 0.49 with Yule = 0.87 and tetrachoric correlation of 0.75 and normal thresholds of 1.04 0.84> > > > cleanEx() > nameEx("congruence") > ### * congruence > > flush(stderr()); flush(stdout()) > > ### Name: congruence > ### Title: Matrix and profile congruences and distances > ### Aliases: congruence cohen.profile distance > ### Keywords: multivariate models > > ### ** Examples > #cohen's example > # a and b have reversed one item around the midpoint > co <- data.frame(ira=c(2,6,5,6,4), + jim=c(1,3,5,4,4), + a=c(5,6,5,6,4),b=c(6,3,5,4,4)) > > lowerMat(congruence(co-3.5)) # congruence around the midpoint is insensitive to reflection ira jim a b ira 1.00 jim 0.49 1.00 a 0.74 -0.10 1.00 b -0.10 -0.35 0.49 1.00 > lowerCor(co) #the correlation is not ira jim a b ira 1.00 jim 0.67 1.00 a 0.61 -0.08 
1.00 b -0.81 -0.40 -0.37 1.00 > lowerMat(congruence(scale(co,scale=FALSE))) #zero centered congruence is r ira jim a b ira 1.00 jim 0.67 1.00 a 0.61 -0.08 1.00 b -0.81 -0.40 -0.37 1.00 > cohen.profile(co) Congruence coefficients ira jim a b ira 1.00 0.49 0.74 -0.10 jim 0.49 1.00 -0.10 -0.35 a 0.74 -0.10 1.00 0.49 b -0.10 -0.35 0.49 1.00 > > > > cleanEx() > nameEx("cor.ci") > ### * cor.ci > > flush(stderr()); flush(stdout()) > > ### Name: corCi > ### Title: Bootstrapped and normal confidence intervals for raw and > ### composite correlations > ### Aliases: corCi cor.ci > ### Keywords: multivariate models > > ### ** Examples > > #find confidence intervals of a correlation matrix with specified sample size > ci <- corCi(Thurstone[1:6,1:6],n=213) > ci #show them Correlations and normal theory confidence intervals lower r upper Sntnc-Vcblr 0.78 0.83 0.87 Sntnc-Snt.C 0.72 0.78 0.82 Sntnc-Frs.L 0.32 0.44 0.54 Sntnc-F.L.W 0.32 0.43 0.54 Sntnc-Sffxs 0.33 0.45 0.55 Vcblr-Snt.C 0.72 0.78 0.83 Vcblr-Frs.L 0.38 0.49 0.59 Vcblr-F.L.W 0.35 0.46 0.56 Vcblr-Sffxs 0.38 0.49 0.58 Snt.C-Frs.L 0.35 0.46 0.56 Snt.C-F.L.W 0.31 0.42 0.53 Snt.C-Sffxs 0.33 0.44 0.54 Frs.L-F.L.W 0.59 0.67 0.74 Frs.L-Sffxs 0.49 0.59 0.67 F.L.W-Sffxs 0.44 0.54 0.63 > R <- cor.plot.upperLowerCi(ci) #show them graphically > R #show them as a matrix High and low confidence intervals Sntn Vcbl Sn.C Fr.L F.L. Sffx Sentences 1.00 0.87 0.82 0.54 0.54 0.55 Vocabulary 0.78 1.00 0.83 0.59 0.56 0.58 Sent.Completion 0.72 0.72 1.00 0.56 0.53 0.54 First.Letters 0.32 0.38 0.35 1.00 0.74 0.67 Four.Letter.Words 0.32 0.35 0.31 0.59 1.00 0.63 Suffixes 0.33 0.38 0.33 0.49 0.44 1.00 > > > #confidence intervals by bootstrapping requires raw data > corCi(psychTools::bfi[1:200,1:10]) # just the first 10 variables Call:corCi(x = psychTools::bfi[1:200, 1:10]) Coefficients and bootstrapped confidence intervals A1 A2 A3 A4 A5 C1 C2 C3 C4 C5 A1 1.00 A2 -0.40 1.00 A3 -0.27 0.55 1.00 A4 -0.06 0.24 0.33 1.00 A5 -0.26 0.43 0.61 0.31 1.00 C1 -0.10 0.19 0.15 0.10 0.06 1.00 C2 -0.03 0.15 0.12 0.20 0.04 0.49 1.00 C3 -0.04 0.17 0.08 0.11 0.07 0.44 0.48 1.00 C4 0.14 -0.24 -0.15 -0.10 -0.11 -0.34 -0.34 -0.33 1.00 C5 0.08 -0.10 -0.14 -0.12 -0.12 -0.26 -0.22 -0.27 0.41 1.00 scale correlations and bootstrapped confidence intervals lower.emp lower.norm estimate upper.norm upper.emp p A1-A2 -0.55 -0.54 -0.40 -0.28 -0.29 0.00 A1-A3 -0.42 -0.41 -0.27 -0.14 -0.14 0.00 A1-A4 -0.20 -0.21 -0.06 0.10 0.08 0.47 A1-A5 -0.42 -0.41 -0.26 -0.12 -0.14 0.00 A1-C1 -0.23 -0.22 -0.10 0.04 0.02 0.17 A1-C2 -0.20 -0.18 -0.03 0.12 0.10 0.67 A1-C3 -0.16 -0.17 -0.04 0.08 0.06 0.51 A1-C4 0.02 0.00 0.14 0.27 0.27 0.05 A1-C5 -0.05 -0.06 0.08 0.20 0.21 0.26 A2-A3 0.41 0.43 0.55 0.66 0.65 0.00 A2-A4 0.09 0.09 0.24 0.37 0.37 0.00 A2-A5 0.30 0.28 0.43 0.56 0.56 0.00 A2-C1 -0.01 0.04 0.19 0.32 0.29 0.02 A2-C2 -0.02 -0.02 0.15 0.30 0.28 0.10 A2-C3 0.00 -0.01 0.17 0.34 0.33 0.06 A2-C4 -0.36 -0.37 -0.24 -0.08 -0.10 0.00 A2-C5 -0.24 -0.23 -0.10 0.05 0.05 0.21 A3-A4 0.18 0.18 0.33 0.47 0.46 0.00 A3-A5 0.52 0.50 0.61 0.71 0.71 0.00 A3-C1 0.01 0.01 0.15 0.31 0.29 0.04 A3-C2 -0.03 -0.04 0.12 0.27 0.27 0.15 A3-C3 -0.09 -0.10 0.08 0.25 0.22 0.38 A3-C4 -0.32 -0.30 -0.15 0.01 0.01 0.07 A3-C5 -0.27 -0.27 -0.14 0.00 -0.01 0.06 A4-A5 0.19 0.15 0.31 0.45 0.46 0.00 A4-C1 -0.01 -0.02 0.10 0.22 0.21 0.09 A4-C2 0.07 0.04 0.20 0.34 0.36 0.02 A4-C3 -0.01 -0.02 0.11 0.26 0.26 0.10 A4-C4 -0.23 -0.23 -0.10 0.04 0.03 0.15 A4-C5 -0.26 -0.25 -0.12 0.01 0.01 0.06 A5-C1 -0.11 -0.12 0.06 0.23 0.23 0.52 A5-C2 -0.12 -0.12 0.04 0.19 0.20 0.64 
A5-C3 -0.11 -0.08 0.07 0.22 0.22 0.36 A5-C4 -0.27 -0.26 -0.11 0.05 0.03 0.19 A5-C5 -0.25 -0.27 -0.12 0.04 0.03 0.15 C1-C2 0.39 0.36 0.49 0.61 0.61 0.00 C1-C3 0.30 0.30 0.44 0.57 0.55 0.00 C1-C4 -0.50 -0.47 -0.34 -0.18 -0.20 0.00 C1-C5 -0.42 -0.41 -0.26 -0.10 -0.13 0.00 C2-C3 0.37 0.36 0.48 0.60 0.60 0.00 C2-C4 -0.47 -0.48 -0.34 -0.18 -0.21 0.00 C2-C5 -0.35 -0.34 -0.22 -0.09 -0.09 0.00 C3-C4 -0.47 -0.46 -0.33 -0.20 -0.20 0.00 C3-C5 -0.40 -0.40 -0.27 -0.13 -0.13 0.00 C4-C5 0.27 0.27 0.41 0.52 0.52 0.00 > #The keys have overlapping scales > keys <- list(agree=c("-A1","A2","A3","A4","A5"), conscientious= c("C1", + "C2","C3","-C4","-C5"),extraversion=c("-E1","-E2","E3","E4","E5"), neuroticism= + c("N1", "N2", "N3","N4","N5"), openness = c("O1","-O2","O3","O4","-O5"), + alpha=c("-A1","A2","A3","A4","A5","C1","C2","C3","-C4","-C5","N1","N2","N3","N4","N5"), + beta = c("-E1","-E2","E3","E4","E5","O1","-O2","O3","O4","-O5") ) > > > #do not correct for item overlap > rci <- corCi(psychTools::bfi[1:200,],keys,n.iter=10,main="correlation with overlapping scales") > #also shows the graphic -note the overlap > #correct for overlap > rci <- cor.ci(psychTools::bfi[1:200,],keys,overlap=TRUE, n.iter=10,main="Correct for overlap") > #show the confidence intervals > ci <- cor.plot.upperLowerCi(rci) #to show the upper and lower confidence intervals > ci #print the confidence intervals in matrix form High and low confidence intervals agre cnsc extr nrtc opnn alph beta agree 1.00 0.42 0.55 -0.28 0.35 0.54 0.52 conscientious 0.03 1.00 0.37 -0.15 0.33 0.61 0.40 extraversion 0.31 0.17 1.00 -0.34 0.35 0.37 0.72 neuroticism 0.05 0.11 -0.13 1.00 -0.20 0.47 -0.28 openness 0.06 0.11 0.13 0.04 1.00 0.33 0.63 alpha 0.25 0.33 0.11 0.26 0.04 1.00 0.34 beta 0.30 0.23 0.57 -0.12 0.49 0.21 1.00 > > > > cleanEx() > nameEx("cor.plot") > ### * cor.plot > > flush(stderr()); flush(stdout()) > > ### Name: cor.plot > ### Title: Create an image plot for a correlation or factor matrix > ### Aliases: cor.plot corPlot cor.plot.upperLowerCi corPlotUpperLowerCi > ### Keywords: multivariate hplot > > ### ** Examples > > corPlot(Thurstone,main="9 cognitive variables from Thurstone") > #just blue implies positive manifold > #select just some variables to plot > corPlot(Thurstone, zlim=c(0,1),main="9 cognitive variables from Thurstone",select=c(1:3,7:9)) > #now show a non-symmetric plot > corPlot(Thurstone[4:9,1:3], zlim=c(0,1),main="9 cognitive variables + from Thurstone",numbers=TRUE,symmetric=FALSE) > > #Two ways of including stars to show significance > #From the raw data > corPlot(sat.act,numbers=TRUE,stars=TRUE) > #from a correlation matrix with pvals > cp <- corr.test(sat.act) #find the correlations and pvals > r<- cp$r > p <- cp$p > corPlot(r,numbers=TRUE,diag=FALSE,stars=TRUE, pval = p,main="Correlation plot + with Holm corrected 'significance'") > > #now red means less than .5 > corPlot(mat.sort(Thurstone),TRUE,zlim=c(0,1), + main="9 cognitive variables from Thurstone (sorted by factor loading) ") > simp <- sim.circ(24) > corPlot(cor(simp),main="24 variables in a circumplex") > > #scale by raw and adjusted probabilities > rs <- corr.test(sat.act[1:200,] ) #find the probabilities of the correlations > corPlot(r=rs$r,numbers=TRUE,pval=rs$p,main="Correlations scaled by probability values") > #Show the upper and lower confidence intervals > cor.plot.upperLowerCi(R=rs,numbers=TRUE) > > #now do this again, but with lighter colors > gr <- colorRampPalette(c("#B52127", "white", "#2171B5")) > 
corPlot(r=rs$r,numbers=TRUE,pval=rs$p,main="Correlations scaled by probability values",gr=gr) > cor.plot.upperLowerCi(R=rs,numbers=TRUE,gr=gr) > > > > #do multiple plots > #Also show the xaxis option > op <- par(mfrow=c(2,2)) > corPlot(psychTools::ability,show.legend=FALSE,keep.par=FALSE,upper=FALSE) > f4 <- fa(psychTools::ability,4) > corPlot(f4,show.legend=FALSE,keep.par=FALSE,numbers=TRUE,xlas=3) > om <- omega(psychTools::ability,4) > corPlot(om,show.legend=FALSE,keep.par=FALSE,numbers=TRUE,xaxis=3) > par(op) > > > corPlotUpperLowerCi(rs,adjust=TRUE,main="Holm adjusted confidence intervals",gr=gr) > > > > > graphics::par(get("par.postscript", pos = 'CheckExEnv')) > cleanEx() > nameEx("cor.smooth") > ### * cor.smooth > > flush(stderr()); flush(stdout()) > > ### Name: cor.smooth > ### Title: Smooth a non-positive definite correlation matrix to make it > ### positive definite > ### Aliases: cor.smooth cor.smoother > ### Keywords: multivariate models > > ### ** Examples > > burt <- psychTools::burt > bs <- cor.smooth(psychTools::burt) #burt data set is not positive definite Warning in cor.smooth(psychTools::burt) : Matrix was not positive definite, smoothing was done > plot(burt[lower.tri(burt)],bs[lower.tri(bs)],ylab="smoothed values",xlab="original values") > abline(0,1,lty="dashed") > > round(burt - bs,3) Sociality Sorrow Tenderness Joy Wonder Elation Disgust Anger Sociality 0.000 0.000 0.010 0.002 0.003 0.001 0.003 0.003 Sorrow 0.000 0.000 0.016 0.004 0.005 0.002 0.006 0.006 Tenderness 0.010 0.016 0.000 0.001 -0.001 0.002 -0.002 -0.003 Joy 0.002 0.004 0.001 0.000 0.000 0.000 0.000 0.000 Wonder 0.003 0.005 -0.001 0.000 0.000 0.000 -0.001 0.000 Elation 0.001 0.002 0.002 0.000 0.000 0.000 0.000 0.001 Disgust 0.003 0.006 -0.002 0.000 -0.001 0.000 0.000 -0.001 Anger 0.003 0.006 -0.003 0.000 0.000 0.001 -0.001 0.000 Sex 0.000 -0.001 0.004 0.001 0.001 0.000 0.001 0.002 Fear 0.000 0.002 0.001 0.000 0.000 0.000 0.000 0.000 Subjection 0.000 0.001 0.002 0.000 0.000 0.000 0.000 0.000 Sex Fear Subjection Sociality 0.000 0.000 0.000 Sorrow -0.001 0.002 0.001 Tenderness 0.004 0.001 0.002 Joy 0.001 0.000 0.000 Wonder 0.001 0.000 0.000 Elation 0.000 0.000 0.000 Disgust 0.001 0.000 0.000 Anger 0.002 0.000 0.000 Sex 0.000 0.000 0.000 Fear 0.000 0.000 0.000 Subjection 0.000 0.000 0.000 > fa(burt,2) #this throws a warning that the matrix yields an improper solution Warning in cor.smooth(R) : Matrix was not positive definite, smoothing was done Warning in cor.smooth(R) : Matrix was not positive definite, smoothing was done Warning in cor.smooth(R) : Matrix was not positive definite, smoothing was done Warning in cor.smooth(r) : Matrix was not positive definite, smoothing was done Warning in fa.stats(r = r, f = f, phi = phi, n.obs = n.obs, np.obs = np.obs, : The estimated weights for the factor scores are probably incorrect. Try a different factor score estimation method. Warning in fac(r = r, nfactors = nfactors, n.obs = n.obs, rotate = rotate, : An ultra-Heywood case was detected. 
Examine the results carefully Factor Analysis using method = minres Call: fa(r = burt, nfactors = 2) Standardized loadings (pattern matrix) based upon correlation matrix MR1 MR2 h2 u2 com Sociality 0.67 0.51 1.02 -0.021 1.9 Sorrow 0.26 0.79 0.88 0.119 1.2 Tenderness 0.03 0.88 0.80 0.205 1.0 Joy 0.45 0.41 0.54 0.463 2.0 Wonder 0.67 0.13 0.55 0.450 1.1 Elation 0.61 0.16 0.48 0.523 1.1 Disgust 0.39 0.26 0.31 0.690 1.7 Anger 0.79 -0.15 0.54 0.462 1.1 Sex 0.66 -0.08 0.40 0.605 1.0 Fear -0.19 0.59 0.29 0.715 1.2 Subjection -0.43 0.60 0.31 0.690 1.8 MR1 MR2 SS loadings 3.21 2.90 Proportion Var 0.29 0.26 Cumulative Var 0.29 0.55 Proportion Explained 0.53 0.47 Cumulative Proportion 0.53 1.00 With factor correlations of MR1 MR2 MR1 1.00 0.46 MR2 0.46 1.00 Mean item complexity = 1.4 Test of the hypothesis that 2 factors are sufficient. The degrees of freedom for the null model are 55 and the objective function was 29.97 The degrees of freedom for the model are 34 and the objective function was 23.27 The root mean square of the residuals (RMSR) is 0.07 The df corrected root mean square of the residuals is 0.1 Fit based upon off diagonal values = 0.97> #Smoothing first throws a warning that the matrix was improper, > #but produces a better solution > fa(cor.smooth(burt),2) Warning in cor.smooth(burt) : Matrix was not positive definite, smoothing was done Warning in fa.stats(r = r, f = f, phi = phi, n.obs = n.obs, np.obs = np.obs, : The estimated weights for the factor scores are probably incorrect. Try a different factor score estimation method. Warning in fac(r = r, nfactors = nfactors, n.obs = n.obs, rotate = rotate, : An ultra-Heywood case was detected. Examine the results carefully Factor Analysis using method = minres Call: fa(r = cor.smooth(burt), nfactors = 2) Standardized loadings (pattern matrix) based upon correlation matrix MR1 MR2 h2 u2 com Sociality 0.68 0.50 1.02 -0.019 1.8 Sorrow 0.28 0.77 0.87 0.130 1.3 Tenderness 0.05 0.86 0.78 0.223 1.0 Joy 0.46 0.40 0.54 0.462 2.0 Wonder 0.68 0.12 0.55 0.451 1.1 Elation 0.61 0.15 0.48 0.523 1.1 Disgust 0.40 0.25 0.31 0.691 1.7 Anger 0.79 -0.16 0.53 0.465 1.1 Sex 0.67 -0.09 0.40 0.604 1.0 Fear -0.19 0.60 0.29 0.711 1.2 Subjection -0.42 0.61 0.31 0.686 1.8 MR1 MR2 SS loadings 3.25 2.82 Proportion Var 0.30 0.26 Cumulative Var 0.30 0.55 Proportion Explained 0.54 0.46 Cumulative Proportion 0.54 1.00 With factor correlations of MR1 MR2 MR1 1.00 0.45 MR2 0.45 1.00 Mean item complexity = 1.4 Test of the hypothesis that 2 factors are sufficient. The degrees of freedom for the null model are 55 and the objective function was 29.97 The degrees of freedom for the model are 34 and the objective function was 23.22 The root mean square of the residuals (RMSR) is 0.07 The df corrected root mean square of the residuals is 0.09 Fit based upon off diagonal values = 0.97> > #this next example is a correlation matrix from DeLeuw used as an example > #in Knol and ten Berge. 
> #the example is also used in the nearcor documentation > cat("pr is the example matrix used in Knol DL, ten Berge (1989)\n") pr is the example matrix used in Knol DL, ten Berge (1989) > pr <- matrix(c(1, 0.477, 0.644, 0.478, 0.651, 0.826, + 0.477, 1, 0.516, 0.233, 0.682, 0.75, + 0.644, 0.516, 1, 0.599, 0.581, 0.742, + 0.478, 0.233, 0.599, 1, 0.741, 0.8, + 0.651, 0.682, 0.581, 0.741, 1, 0.798, + 0.826, 0.75, 0.742, 0.8, 0.798, 1), + nrow = 6, ncol = 6) > > sm <- cor.smooth(pr) Warning in cor.smooth(pr) : Matrix was not positive definite, smoothing was done > resid <- pr - sm > # several goodness of fit tests > # from Knol and ten Berge > tr(resid %*% t(resid)) /2 [1] 0.003520413 > > # from nearPD > sum(resid^2)/2 [1] 0.003520413 > > > > > cleanEx() > nameEx("cor.wt") > ### * cor.wt > > flush(stderr()); flush(stdout()) > > ### Name: cor.wt > ### Title: The sample size weighted correlation may be used in correlating > ### aggregated data > ### Aliases: cor.wt > ### Keywords: multivariate > > ### ** Examples > > means.by.age <- statsBy(sat.act,"age") > wt.cors <- cor.wt(means.by.age) > lowerMat(wt.cors$r) #show the weighted correlations gendr edctn age ACT SATV SATQ gender 1.00 education 0.12 1.00 age -0.09 0.67 1.00 ACT -0.24 0.32 0.39 1.00 SATV -0.15 -0.23 -0.16 0.39 1.00 SATQ -0.33 -0.23 -0.12 0.35 0.66 1.00 > unwt <- lowerCor(means.by.age$mean) gendr edctn age ACT SATV SATQ gender 1.00 education 0.19 1.00 age 0.14 0.69 1.00 ACT -0.22 0.07 0.21 1.00 SATV 0.09 -0.14 -0.03 0.37 1.00 SATQ -0.01 -0.10 0.00 0.44 0.73 1.00 > mixed <- lowerUpper(unwt,wt.cors$r) #combine both results > cor.plot(mixed,TRUE,main="weighted versus unweighted correlations") > diff <- lowerUpper(unwt,wt.cors$r,TRUE) > cor.plot(diff,TRUE,main="differences of weighted versus unweighted correlations") > > > > cleanEx() > nameEx("corFiml") > ### * corFiml > > flush(stderr()); flush(stdout()) > > ### Name: corFiml > ### Title: Find a Full Information Maximum Likelihood (FIML) correlation or > ### covariance matrix from a data matrix with missing data > ### Aliases: corFiml > ### Keywords: multivariate models > > ### ** Examples > > rML <- corFiml(psychTools::bfi[20:27]) > rpw <- cor(psychTools::bfi[20:27],use="pairwise") > round(rML - rpw,3) N5 O1 O2 O3 O4 O5 gender education N5 0.000 0.000 -0.001 -0.001 0.000 -0.001 -0.002 0.001 O1 0.000 0.000 0.001 0.000 -0.001 0.001 0.000 0.001 O2 -0.001 0.001 0.000 0.001 0.000 -0.002 0.000 0.001 O3 -0.001 0.000 0.001 0.000 -0.001 0.000 -0.002 0.001 O4 0.000 -0.001 0.000 -0.001 0.000 0.000 0.000 0.001 O5 -0.001 0.001 -0.002 0.000 0.000 0.000 0.000 0.001 gender -0.002 0.000 0.000 -0.002 0.000 0.000 0.000 0.001 education 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.000 > mp <- corFiml(psychTools::bfi[20:27],show=TRUE) > mp N5 O1 O2 O3 O4 O5 gender education 2489 TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE 214 TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE 22 FALSE TRUE TRUE TRUE TRUE TRUE TRUE TRUE 19 TRUE TRUE TRUE FALSE TRUE TRUE TRUE TRUE 18 TRUE FALSE TRUE TRUE TRUE TRUE TRUE TRUE 13 TRUE TRUE TRUE TRUE TRUE FALSE TRUE TRUE 13 TRUE TRUE TRUE TRUE FALSE TRUE TRUE TRUE 3 TRUE TRUE TRUE FALSE TRUE TRUE TRUE FALSE 2 FALSE FALSE TRUE FALSE TRUE FALSE TRUE TRUE 2 FALSE FALSE TRUE FALSE TRUE FALSE TRUE FALSE 1 TRUE TRUE TRUE TRUE TRUE FALSE TRUE FALSE 1 TRUE TRUE TRUE TRUE FALSE TRUE TRUE FALSE 1 FALSE TRUE TRUE TRUE TRUE TRUE TRUE FALSE 1 FALSE TRUE TRUE FALSE TRUE FALSE TRUE TRUE 1 FALSE TRUE TRUE FALSE TRUE FALSE TRUE FALSE > > > > cleanEx() > nameEx("corr.test") > ### * corr.test > > 
flush(stderr()); flush(stdout()) > > ### Name: corr.test > ### Title: Find the correlations, sample sizes, and probability values > ### between elements of a matrix or data.frame. > ### Aliases: corr.test corr.p > ### Keywords: multivariate models > > ### ** Examples > > > ct <- corr.test(attitude) #find the correlations and give the probabilities > ct #show the results Call:corr.test(x = attitude) Correlation matrix rating complaints privileges learning raises critical advance rating 1.00 0.83 0.43 0.62 0.59 0.16 0.16 complaints 0.83 1.00 0.56 0.60 0.67 0.19 0.22 privileges 0.43 0.56 1.00 0.49 0.45 0.15 0.34 learning 0.62 0.60 0.49 1.00 0.64 0.12 0.53 raises 0.59 0.67 0.45 0.64 1.00 0.38 0.57 critical 0.16 0.19 0.15 0.12 0.38 1.00 0.28 advance 0.16 0.22 0.34 0.53 0.57 0.28 1.00 Sample Size [1] 30 Probability values (Entries above the diagonal are adjusted for multiple tests.) rating complaints privileges learning raises critical advance rating 0.00 0.00 0.19 0.00 0.01 1.00 1.00 complaints 0.00 0.00 0.02 0.01 0.00 1.00 1.00 privileges 0.02 0.00 0.00 0.07 0.15 1.00 0.51 learning 0.00 0.00 0.01 0.00 0.00 1.00 0.03 raises 0.00 0.00 0.01 0.00 0.00 0.36 0.01 critical 0.41 0.32 0.44 0.54 0.04 0.00 0.90 advance 0.41 0.23 0.06 0.00 0.00 0.13 0.00 To see confidence intervals of the correlations, print with the short=FALSE option > cts <- corr.test(attitude[1:3],attitude[4:6]) #reports all values corrected for multiple tests > > #corr.test(sat.act[1:3],sat.act[4:6],adjust="none") #don't adjust the probabilities > > #take correlations and show the probabilities as well as the confidence intervals > print(corr.p(cts$r,n=30),short=FALSE) Call:corr.p(r = cts$r, n = 30) Correlation matrix learning raises critical rating 0.62 0.59 0.16 complaints 0.60 0.67 0.19 privileges 0.49 0.45 0.15 Sample Size [1] 30 These are the unadjusted probability values. To see the values adjusted for multiple tests see the p.adj object. learning raises critical rating 0.00 0.00 0.96 complaints 0.00 0.00 0.96 privileges 0.03 0.05 0.96 Confidence intervals based upon normal theory. To get bootstrapped values, try cor.ci lower r upper p lrnng-ratng 0.34 0.62 0.80 0.00 lrnng-cmpln 0.30 0.60 0.79 0.00 lrnng-prvlg 0.16 0.49 0.72 0.03 raiss-ratng 0.29 0.59 0.78 0.00 raiss-cmpln 0.41 0.67 0.83 0.00 raiss-prvlg 0.10 0.45 0.69 0.05 crtcl-ratng -0.22 0.16 0.49 0.96 crtcl-cmpln -0.19 0.19 0.51 0.96 crtcl-prvlg -0.22 0.15 0.48 0.96 > > #don't adjust the probabilities > print(corr.test(sat.act[1:3],sat.act[4:6],adjust="none"),short=FALSE) Call:corr.test(x = sat.act[1:3], y = sat.act[4:6], adjust = "none") Correlation matrix ACT SATV SATQ gender -0.04 -0.02 -0.17 education 0.15 0.05 0.03 age 0.11 -0.04 -0.03 Sample Size ACT SATV SATQ gender 700 700 687 education 700 700 687 age 700 700 687 ACT SATV SATQ gender 0.33 0.62 0.00 education 0.00 0.22 0.36 age 0.00 0.26 0.37 Confidence intervals based upon normal theory. 
To get bootstrapped values, try cor.ci raw.lower raw.r raw.upper raw.p lower.adj upper.adj gendr-ACT -0.11 -0.04 0.04 0.33 -0.14 0.07 edctn-ACT 0.08 0.15 0.23 0.00 0.05 0.26 age-ACT 0.04 0.11 0.18 0.00 0.01 0.21 gendr-SATV -0.09 -0.02 0.06 0.62 -0.12 0.09 edctn-SATV -0.03 0.05 0.12 0.22 -0.06 0.15 age-SATV -0.12 -0.04 0.03 0.26 -0.15 0.06 gendr-SATQ -0.24 -0.17 -0.09 0.00 -0.27 -0.06 edctn-SATQ -0.04 0.03 0.11 0.36 -0.07 0.14 age-SATQ -0.11 -0.03 0.04 0.37 -0.14 0.07 > > #print out the stars object without showing quotes > print(corr.test(attitude)$stars,quote=FALSE) #note that the adjusted ps are given as well rating complaints privileges learning raises critical advance rating 1*** 0.83*** 0.43 0.62** 0.59** 0.16 0.16 complaints 0.83*** 1*** 0.56* 0.6** 0.67** 0.19 0.22 privileges 0.43* 0.56** 1*** 0.49 0.45 0.15 0.34 learning 0.62*** 0.6*** 0.49** 1*** 0.64** 0.12 0.53* raises 0.59*** 0.67*** 0.45* 0.64*** 1*** 0.38 0.57* critical 0.16 0.19 0.15 0.12 0.38* 1*** 0.28 advance 0.16 0.22 0.34 0.53** 0.57*** 0.28 1*** > > > kendall.r <- corr.test(bfi[1:40,4:6], method="kendall", normal=FALSE) > #compare with > cor.test(x=bfi[1:40,4],y=bfi[1:40,6],method="kendall", exact=FALSE) Kendall's rank correlation tau data: bfi[1:40, 4] and bfi[1:40, 6] z = -0.40445, p-value = 0.6859 alternative hypothesis: true tau is not equal to 0 sample estimates: tau -0.05310335 > print(kendall.r,digits=6) Call:corr.test(x = bfi[1:40, 4:6], method = "kendall", normal = FALSE) Correlation matrix A4 A5 C1 A4 1.000000 0.214257 -0.053103 A5 0.214257 1.000000 0.024117 C1 -0.053103 0.024117 1.000000 Sample Size A4 A5 C1 A4 40 40 40 A5 40 40 40 C1 40 40 40 Probability values (Entries above the diagonal are adjusted for multiple tests.) A4 A5 C1 A4 0.000000 0.302878 1 A5 0.100959 0.000000 1 C1 0.685883 0.856251 0 To see confidence intervals of the correlations, print with the short=FALSE option > > > > cleanEx() > nameEx("correct.cor") > ### * correct.cor > > flush(stderr()); flush(stdout()) > > ### Name: correct.cor > ### Title: Find dis-attenuated correlations given correlations and > ### reliabilities > ### Aliases: correct.cor > ### Keywords: models multivariate > > ### ** Examples > > > # attitude from the datasets package > #example 1 is a rather clunky way of doing things > > a1 <- attitude[,c(1:3)] > a2 <- attitude[,c(4:7)] > x1 <- rowSums(a1) #find the sum of the first 3 attitudes > x2 <- rowSums(a2) #find the sum of the last 4 attitudes > alpha1 <- alpha(a1) Number of categories should be increased in order to count frequencies. > alpha2 <- alpha(a2) Number of categories should be increased in order to count frequencies. 
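> #correct.cor disattenuates each correlation by dividing it by the square > #root of the product of the two scale reliabilities (Spearman's correction): > # r.xy.corrected = r.xy / sqrt(r.xx * r.yy) > #a minimal sketch of that arithmetic, not run; disattenuate() is an > #illustrative helper for this transcript, not a psych function: > #disattenuate <- function(rxy,rxx,ryy) rxy/sqrt(rxx*ryy) > #disattenuate(.61,.82,.75) #gives about .78, the corrected value shown below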
> x <- matrix(c(x1,x2),ncol=2) > x.cor <- cor(x) > alpha <- c(alpha1$total$raw_alpha,alpha2$total$raw_alpha) > round(correct.cor(x.cor,alpha),2) [,1] [,2] [1,] 0.82 0.78 [2,] 0.61 0.75 > # > #much better - although uses standardized alpha > clusters <- matrix(c(rep(1,3),rep(0,7),rep(1,4)),ncol=2) > cluster.loadings(clusters,cor(attitude)) Call: cluster.loadings(keys = clusters, r.mat = cor(attitude)) (Standardized) Alpha: [1] 0.82 0.74 (Standardized) G6*: [1] 0.83 0.78 Average item correlation: [1] 0.60 0.42 Number of items: [1] 3 4 Scale intercorrelations corrected for attenuation raw correlations below the diagonal, alpha on the diagonal corrected correlations above the diagonal: [,1] [,2] [1,] 0.82 0.77 [2,] 0.60 0.74 Item by scale intercorrelations corrected for item overlap and scale reliability [,1] [,2] rating 0.85 0.57 complaints 0.92 0.63 privileges 0.58 0.54 learning 0.73 0.72 raises 0.73 0.85 critical 0.21 0.36 advance 0.31 0.72 > # or > clusters <- matrix(c(rep(1,3),rep(0,7),rep(1,4)),ncol=2) > cluster.cor(clusters,cor(attitude)) Call: cluster.cor(keys = clusters, r.mat = cor(attitude)) (Standardized) Alpha: [1] 0.82 0.74 (Standardized) G6*: [1] 0.83 0.78 Average item correlation: [1] 0.60 0.42 Number of items: [1] 3 4 Signal to Noise ratio based upon average r and n [1] 4.6 2.9 Scale intercorrelations corrected for attenuation raw correlations below the diagonal, alpha on the diagonal corrected correlations above the diagonal: [,1] [,2] [1,] 0.82 0.77 [2,] 0.60 0.74 > # > #best > keys <- make.keys(attitude,list(first=1:3,second=4:7)) > scores <- scoreItems(keys,attitude) Number of categories should be increased in order to count frequencies. > scores$corrected first second first 0.8222013 0.7798835 second 0.6104550 0.7451947 > > #However, to do the more general case of correcting correlations for reliability > #corrected <- cor2cov(x.cor,1/alpha) > #diag(corrected) <- 1 > > > > > > cleanEx() > nameEx("cortest.bartlett") > ### * cortest.bartlett > > flush(stderr()); flush(stdout()) > > ### Name: cortest.bartlett > ### Title: Bartlett's test that a correlation matrix is an identity matrix > ### Aliases: cortest.bartlett > ### Keywords: multivariate > > ### ** Examples > > set.seed(42) > x <- matrix(rnorm(1000),ncol=10) > r <- cor(x) > cortest.bartlett(r) #random data don't differ from an identity matrix Warning in cortest.bartlett(r) : n not specified, 100 used $chisq [1] 33.78894 $p.value [1] 0.8897656 $df [1] 45 > #data(bfi) > cortest.bartlett(psychTools::bfi[1:200,1:10]) #not an identity matrix R was not square, finding R from data $chisq [1] 457.457 $p.value [1] 1.136106e-69 $df [1] 45 > f3 <- fa(Thurstone,3) > f3r <- f3$resid > cortest.bartlett(f3r,n=213,diag=FALSE) #incorrect $chisq [1] 2203.356 $p.value [1] 0 $df [1] 36 > > cortest.bartlett(f3r,n=213,diag=TRUE) #correct (by default) $chisq [1] 0.256498 $p.value [1] 1 $df [1] 36 > > > > > cleanEx() > nameEx("cortest.mat") > ### * cortest.mat > > flush(stderr()); flush(stdout()) > > ### Name: cortest.mat > ### Title: Chi square tests of whether a single matrix is an identity > ### matrix, or a pair of matrices are equal.
> ### Aliases: cortest.normal cortest.mat cortest.jennrich cortest > ### Keywords: multivariate > > ### ** Examples > > x <- matrix(rnorm(1000),ncol=10) > cortest.normal(x) #just test if this matrix is an identity R1 was not square, finding R from data Tests of correlation matrices Call:cortest.normal(R1 = x) Chi Square value 40.03 with df = 45 with probability < 0.68 > x <- sim.congeneric(loads =c(.9,.8,.7,.6,.5),N=1000,short=FALSE) > y <- sim.congeneric(loads =c(.9,.8,.7,.6,.5),N=1000,short=FALSE) > cortest.normal(x$r,y$r,n1=1000,n2=1000) #The Steiger test Tests of correlation matrices Call:cortest.normal(R1 = x$r, R2 = y$r, n1 = 1000, n2 = 1000) Chi Square value 6.55 with df = 10 with probability < 0.77 > cortest.jennrich(x$r,y$r,n1=100,n2=1000) # The Jennrich test $chi2 [1] 3.519158 $prob [1] 0.9664439 > cortest.mat(x$r,y$r,n1=1000,n2=1000) #twice the degrees of freedom as the Jennrich Tests of correlation matrices Call:cortest.mat(R1 = x$r, R2 = y$r, n1 = 1000, n2 = 1000) Chi Square value 31.22 with df = 20 with probability < 0.052 > > > > > cleanEx() > nameEx("cosinor") > ### * cosinor > > flush(stderr()); flush(stdout()) > > ### Name: cosinor > ### Title: Functions for analysis of circadian or diurnal data > ### Aliases: cosinor circadian.phase cosinor.plot cosinor.period > ### circadian.mean circadian.sd circadian.cor circadian.linear.cor > ### circadian.stats circadian.F circadian.reliability circular.mean > ### circular.cor > ### Keywords: multivariate > > ### ** Examples > > time <- seq(1:24) #create a 24 hour time > pure <- matrix(time,24,18) > colnames(pure) <- paste0("H",1:18) > pure <- data.frame(time,cos((pure - col(pure))*pi/12)*3 + 3) > #18 different phases but scaled to 0-6 to match mood data > matplot(pure[-1],type="l",main="Pure circadian arousal rhythms", + xlab="time of day",ylab="Arousal") > op <- par(mfrow=c(2,2)) > cosinor.plot(1,3,pure) > cosinor.plot(1,5,pure) > cosinor.plot(1,8,pure) > cosinor.plot(1,12,pure) > > p <- cosinor(pure) #find the acrophases (should match the input) > > #now, test finding the acrophases for different subjects on 3 variables > #They should match the first 3, second 3, etc.
acrophases of pure > pp <- matrix(NA,nrow=6*24,ncol=4) > pure <- as.matrix(pure) > pp[,1] <- rep(pure[,1],6) > pp[1:24,2:4] <- pure[1:24,2:4] > pp[25:48,2:4] <- pure[1:24,5:7] *2 #to test different variances > pp[49:72,2:4] <- pure[1:24,8:10] *3 > pp[73:96,2:4] <- pure[1:24,11:13] > pp[97:120,2:4] <- pure[1:24,14:16] > pp[121:144,2:4] <- pure[1:24,17:19] > pure.df <- data.frame(ID = rep(1:6,each=24),pp) > colnames(pure.df) <- c("ID","Time",paste0("V",1:3)) > cosinor("Time",3:5,"ID",pure.df) V1.phase V2.phase V3.phase V1.fit V2.fit V3.fit V1.amp V2.amp V3.amp V1.sd 1 1 2 3 1 1 1 1 1 1 2.166945 2 4 5 6 1 1 1 1 1 1 4.333891 3 7 8 9 1 1 1 1 1 1 6.500836 4 10 11 12 1 1 1 1 1 1 2.166945 5 13 14 15 1 1 1 1 1 1 2.166945 6 16 17 18 1 1 1 1 1 1 2.166945 V2.sd V3.sd V1.mean V2.mean V3.mean V1.intercept V2.intercept 1 2.166945 2.166945 3 3 3 5.0931085 4.8766297 2 4.333891 4.333891 6 6 6 8.1669454 7.1216935 3 6.500836 6.500836 9 9 9 7.3174598 5.7495820 4 2.166945 2.166945 3 3 3 1.1233703 0.9068915 5 2.166945 2.166945 3 3 3 0.9068915 1.1233703 6 2.166945 2.166945 3 3 3 1.9165273 2.4391533 V3.intercept 1 4.5322618 2 6.0000000 3 4.4032147 4 0.8330546 5 1.4677382 6 3.0000000 > > op <- par(mfrow=c(2,2)) > cosinor.plot(2,3,pure.df,IDloc=1,ID="1") > cosinor.plot(2,3,pure.df,IDloc=1,ID="2") > cosinor.plot(2,3,pure.df,IDloc=1,ID="3") > cosinor.plot(2,3,pure.df,IDloc=1,ID="4") > > #now, show those in one panel as spaghetti plots > op <- par(mfrow=c(1,1)) > cosinor.plot(2,3,pure.df,IDloc=1,ID="1",multi=TRUE,ylim=c(0,20),ylab="Modeled") > cosinor.plot(2,3,pure.df,IDloc=1,ID="2",multi=TRUE,add=TRUE,lty="dotdash") > cosinor.plot(2,3,pure.df,IDloc=1,ID="3",multi=TRUE,add=TRUE,lty="dashed") > cosinor.plot(2,3,pure.df,IDloc=1,ID="4",multi=TRUE,add=TRUE,lty="dotted") > > set.seed(42) #what else? > noisy <- pure > noisy[,2:19]<- noisy[,2:19] + rnorm(24*18,0,.2) > > n <- cosinor(time,noisy) #add a bit of noise > > small.pure <- pure[c(8,11,14,17,20,23),] > small.noisy <- noisy[c(8,11,14,17,20,23),] > small.time <- c(8,11,14,17,20,23) > > cosinor.plot(1,3,small.pure,multi=TRUE) > cosinor.plot(1,3,small.noisy,multi=TRUE,add=TRUE,lty="dashed") > > > # sp <- cosinor(small.pure) > # spo <- cosinor(small.pure,opti=TRUE) #iterative fit > # sn <- cosinor(small.noisy) #linear > # sno <- cosinor(small.noisy,opti=TRUE) #iterative > # sum.df <- data.frame(pure=p,noisy = n, small=sp,small.noise = sn, > # small.opt=spo,small.noise.opt=sno) > # round(sum.df,2) > # round(circadian.cor(sum.df[,c(1,3,5,7,9,11)]),2) #compare alternatives > # > # #now, let's form three "subjects" and show how the grouping variable works > # mixed.df <- rbind(small.pure,small.noisy,noisy) > # mixed.df <- data.frame(ID=c(rep(1,6),rep(2,6),rep(3,24)), > # time=c(rep(c(8,11,14,17,20,23),2),1:24),mixed.df) > # group.df <- cosinor(angle="time",x=2:20,code="ID",data=mixed.df) > # round(group.df,2) #compare these values to the sp,sn,and n values done separately > > > > > > graphics::par(get("par.postscript", pos = 'CheckExEnv')) > cleanEx() > nameEx("count.pairwise") > ### * count.pairwise > > flush(stderr()); flush(stdout()) > > ### Name: pairwiseCount > ### Title: Count number of pairwise cases for a data set with missing (NA) > ### data and impute values.
> ### Aliases: pairwiseCount pairwiseCountBig count.pairwise pairwiseDescribe > ### pairwiseZero pairwiseSample pairwiseReport pairwiseImpute > ### pairwisePlot > ### Keywords: models multivariate > > ### ** Examples > > > x <- matrix(rnorm(900),ncol=6) > y <- matrix(rnorm(450),ncol=3) > x[x < 0] <- NA > y[y > 1] <- NA > > pairwiseCount(x) [,1] [,2] [,3] [,4] [,5] [,6] [1,] 71 33 30 34 31 35 [2,] 33 73 36 40 29 34 [3,] 30 36 70 36 35 34 [4,] 34 40 36 73 28 38 [5,] 31 29 35 28 66 35 [6,] 35 34 34 38 35 74 > pairwiseCount(y) [,1] [,2] [,3] [1,] 134 115 110 [2,] 115 127 102 [3,] 110 102 119 > pairwiseCount(x,y) [,1] [,2] [,3] [1,] 61 58 57 [2,] 64 63 57 [3,] 59 59 56 [4,] 62 63 56 [5,] 59 53 48 [6,] 69 63 58 > pairwiseCount(x,diagonal=FALSE) [,1] [,2] [,3] [,4] [,5] [,6] [1,] NA 33 30 34 31 35 [2,] 33 NA 36 40 29 34 [3,] 30 36 NA 36 35 34 [4,] 34 40 36 NA 28 38 [5,] 31 29 35 28 NA 35 [6,] 35 34 34 38 35 NA > pairwiseDescribe(x,quant=c(.1,.25,.5,.75,.9)) vars n mean sd median trimmed mad min max range skew kurtosis se Q0.1 1 1 15 33.87 3.27 34 33.85 2.97 28 40 12 -0.14 -0.79 0.84 29.4 Q0.25 Q0.5 Q0.75 Q0.9 1 32 34 35.5 37.2 > > #examine the structure of the ability data set > keys <- list(ICAR16=colnames(psychTools::ability),reasoning = + cs(reason.4,reason.16,reason.17,reason.19), + letters=cs(letter.7, letter.33,letter.34,letter.58, letter.7), + matrix=cs(matrix.45,matrix.46,matrix.47,matrix.55), + rotate=cs(rotate.3,rotate.4,rotate.6,rotate.8)) > pairwiseImpute(keys,psychTools::ability) Correlations found from raw data Call: pairwiseImpute(keys = keys, R = psychTools::ability) Mean correlations within/between scales ICAR1 rsnng lttrs matrx rotat ICAR16 0.23 reasoning 0.24 0.31 letters 0.25 0.26 0.34 matrix 0.20 0.21 0.22 0.23 rotate 0.24 0.20 0.20 0.17 0.45 Percentage of complete correlations ICAR1 rsnng lttrs matrx rotat ICAR16 0.94 reasoning 0.94 0.75 letters 0.94 1.00 0.75 matrix 0.94 1.00 1.00 0.75 rotate 0.94 1.00 1.00 1.00 0.75 Number of complete correlations per scale ICAR1 rsnng lttrs matrx rotat ICAR16 240 reasoning 60 12 letters 60 16 12 matrix 60 16 16 12 rotate 60 16 16 16 12 Average number of pairwise observations per scale ICAR1 rsnng lttrs matrx rotat ICAR16 1426 reasoning 1424 1427 letters 1418 1413 1416 matrix 1434 1430 1423 1446 rotate 1430 1426 1419 1437 1439 Imputed correlations (if found) are in the imputed object> > > > > > cleanEx() > nameEx("cta") > ### * cta > > flush(stderr()); flush(stdout()) > > ### Name: cta > ### Title: Simulate the C(ues) T(endency) A(ction) model of motivation > ### Aliases: cta cta.15 > ### Keywords: models > > ### ** Examples > > #not run > #cta() #default values, running over time > #cta(type="state") #default values, in a state space of tendency 1 versus tendency 2 > #these next are examples without graphic output > #not run > #two introverts > #c2i <- c(.95,1.05) > #cta(n=2,t=10000,cues=c2i,type="none") > #two extraverts > #c2e <- c(3.95,4.05) > #cta(n=2,t=10000,cues=c2e,type="none") > #three introverts > #c3i <- c(.95,1,1.05) > #cta(3,t=10000,cues=c3i,type="none") > #three extraverts > #c3i <- c(3.95,4, 4.05) > #cta(3,10000,c3i,type="none") > #mixed > #c3 <- c(1,2.5,4) > #cta(3,10000,c3,type="none") > > > > cleanEx() > nameEx("densityBy") > ### * densityBy > > flush(stderr()); flush(stdout()) > > ### Name: densityBy > ### Title: Create a 'violin plot' or density plot of the distribution of a > ### set of variables > ### Aliases: densityBy violinBy violin > ### Keywords: multivariate hplot > > ### ** Examples > > 
violin(psychTools::bfi[4:8]) > violin(SATV + SATQ ~ gender, data=sat.act, grp.name =cs(MV,FV,MQ,FQ)) #formula input > violinBy(psychTools::bfi,var=4:7,grp ="gender",grp.name=c("M","F")) > densityBy(SATV ~ gender,data =sat.act,legend=1) #formula input > > > > > cleanEx() > nameEx("deprecated") > ### * deprecated > > flush(stderr()); flush(stdout()) > > ### Name: fa.poly > ### Title: Deprecated Exploratory Factor analysis functions. Please use fa > ### Aliases: factor.pa factor.minres factor.wls fa.poly > ### Keywords: multivariate models > > ### ** Examples > > #none, you should see fa > #using the Harman 24 mental tests, compare a principal factor with a principal components solution > > > > > cleanEx() > nameEx("describe") > ### * describe > > flush(stderr()); flush(stdout()) > > ### Name: describe > ### Title: Basic descriptive statistics useful for psychometrics > ### Aliases: describe describeData describeFast > ### Keywords: multivariate models univar > > ### ** Examples > > data(sat.act) > describe(sat.act) vars n mean sd median trimmed mad min max range skew gender 1 700 1.65 0.48 2 1.68 0.00 1 2 1 -0.61 education 2 700 3.16 1.43 3 3.31 1.48 0 5 5 -0.68 age 3 700 25.59 9.50 22 23.86 5.93 13 65 52 1.64 ACT 4 700 28.55 4.82 29 28.84 4.45 3 36 33 -0.66 SATV 5 700 612.23 112.90 620 619.45 118.61 200 800 600 -0.64 SATQ 6 687 610.22 115.64 620 617.25 118.61 200 800 600 -0.59 kurtosis se gender -1.62 0.02 education -0.07 0.05 age 2.42 0.36 ACT 0.53 0.18 SATV 0.33 4.27 SATQ -0.02 4.41 > describe(sat.act ~ gender) #formula mode option calls describeBy for the entire data frame Descriptive statistics by group group: 1 vars n mean sd median trimmed mad min max range skew gender 1 247 1.00 0.00 1 1.00 0.00 1 1 0 NaN education 2 247 3.00 1.54 3 3.12 1.48 0 5 5 -0.54 age 3 247 25.86 9.74 22 24.23 5.93 14 58 44 1.43 ACT 4 247 28.79 5.06 30 29.23 4.45 3 36 33 -1.06 SATV 5 247 615.11 114.16 630 622.07 118.61 200 800 600 -0.63 SATQ 6 245 635.87 116.02 660 645.53 94.89 300 800 500 -0.72 kurtosis se gender NaN 0.00 education -0.60 0.10 age 1.43 0.62 ACT 1.89 0.32 SATV 0.13 7.26 SATQ -0.12 7.41 ------------------------------------------------------------ group: 2 vars n mean sd median trimmed mad min max range skew gender 1 453 2.00 0.00 2 2.00 0.00 2 2 0 NaN education 2 453 3.26 1.35 3 3.40 1.48 0 5 5 -0.74 age 3 453 25.45 9.37 22 23.70 5.93 13 65 52 1.77 ACT 4 453 28.42 4.69 29 28.63 4.45 15 36 21 -0.39 SATV 5 453 610.66 112.31 620 617.91 103.78 200 800 600 -0.65 SATQ 6 442 596.00 113.07 600 602.21 133.43 200 800 600 -0.58 kurtosis se gender NaN 0.00 education 0.27 0.06 age 3.03 0.44 ACT -0.42 0.22 SATV 0.42 5.28 SATQ 0.13 5.38 > describe(SATV + SATQ ~ gender, data=sat.act) #formula mode specifies just two variables Descriptive statistics by group gender: 1 vars n mean sd median trimmed mad min max range skew kurtosis SATV 1 247 615.11 114.16 630 622.07 118.61 200 800 600 -0.63 0.13 SATQ 2 245 635.87 116.02 660 645.53 94.89 300 800 500 -0.72 -0.12 se SATV 7.26 SATQ 7.41 ------------------------------------------------------------ gender: 2 vars n mean sd median trimmed mad min max range skew kurtosis SATV 1 453 610.66 112.31 620 617.91 103.78 200 800 600 -0.65 0.42 SATQ 2 442 596.00 113.07 600 602.21 133.43 200 800 600 -0.58 0.13 se SATV 5.28 SATQ 5.38 > > describe(sat.act,skew=FALSE) vars n mean sd min max range se gender 1 700 1.65 0.48 1 2 1 0.02 education 2 700 3.16 1.43 0 5 5 0.05 age 3 700 25.59 9.50 13 65 52 0.36 ACT 4 700 28.55 4.82 3 36 33 0.18 SATV 5 700 612.23 112.90 200 800 600 4.27 SATQ 6 
687 610.22 115.64 200 800 600 4.41 > describe(sat.act,IQR=TRUE) #show the interquartile Range vars n mean sd median trimmed mad min max range skew gender 1 700 1.65 0.48 2 1.68 0.00 1 2 1 -0.61 education 2 700 3.16 1.43 3 3.31 1.48 0 5 5 -0.68 age 3 700 25.59 9.50 22 23.86 5.93 13 65 52 1.64 ACT 4 700 28.55 4.82 29 28.84 4.45 3 36 33 -0.66 SATV 5 700 612.23 112.90 620 619.45 118.61 200 800 600 -0.64 SATQ 6 687 610.22 115.64 620 617.25 118.61 200 800 600 -0.59 kurtosis se IQR gender -1.62 0.02 1 education -0.07 0.05 1 age 2.42 0.36 10 ACT 0.53 0.18 7 SATV 0.33 4.27 150 SATQ -0.02 4.41 170 > describe(sat.act,quant=c(.1,.25,.5,.75,.90) ) #find the 10th, 25th, 50th, vars n mean sd median trimmed mad min max range skew gender 1 700 1.65 0.48 2 1.68 0.00 1 2 1 -0.61 education 2 700 3.16 1.43 3 3.31 1.48 0 5 5 -0.68 age 3 700 25.59 9.50 22 23.86 5.93 13 65 52 1.64 ACT 4 700 28.55 4.82 29 28.84 4.45 3 36 33 -0.66 SATV 5 700 612.23 112.90 620 619.45 118.61 200 800 600 -0.64 SATQ 6 687 610.22 115.64 620 617.25 118.61 200 800 600 -0.59 kurtosis se Q0.1 Q0.25 Q0.5 Q0.75 Q0.9 gender -1.62 0.02 1 1 2 2 2 education -0.07 0.05 1 3 3 4 5 age 2.42 0.36 18 19 22 29 39 ACT 0.53 0.18 22 25 29 32 35 SATV 0.33 4.27 450 550 620 700 750 SATQ -0.02 4.41 450 530 620 700 750 > #75th and 90th percentiles > > > > describeData(sat.act) #the fast version just gives counts and head and tail n.obs = 700 of which 687 are complete cases. Number of variables = 6 of which all are numeric TRUE variable # n.obs type H1 H2 H3 H4 T1 T2 T3 T4 gender 1 700 1 2 2 2 1 1 2 1 1 education 2 700 1 3 3 3 4 4 3 4 5 age 3 700 1 19 23 20 27 40 24 35 25 ACT 4 700 1 24 35 21 26 27 31 32 25 SATV 5 700 1 500 600 480 550 613 700 700 600 SATQ 6 687 1 500 500 470 520 630 630 780 600 > > print(describeFast(sat.act),short=FALSE) #even faster is just counts (just less information) Number of observations = 700 of which 687 are complete cases. Number of variables = 6 of which 6 are numeric and 0 are factors var n.obs numeric factor logical character type gender 1 700 1 0 0 0 NA education 2 700 1 0 0 0 NA age 3 700 1 0 0 0 NA ACT 4 700 1 0 0 0 NA SATV 5 700 1 0 0 0 NA SATQ 6 687 1 0 0 0 NA > > #now show how to adjust the displayed number of digits > des <- describe(sat.act) #find the descriptive statistics. 
Keep the original accuracy > des #show the normal output, which is rounded to 2 decimals vars n mean sd median trimmed mad min max range skew gender 1 700 1.65 0.48 2 1.68 0.00 1 2 1 -0.61 education 2 700 3.16 1.43 3 3.31 1.48 0 5 5 -0.68 age 3 700 25.59 9.50 22 23.86 5.93 13 65 52 1.64 ACT 4 700 28.55 4.82 29 28.84 4.45 3 36 33 -0.66 SATV 5 700 612.23 112.90 620 619.45 118.61 200 800 600 -0.64 SATQ 6 687 610.22 115.64 620 617.25 118.61 200 800 600 -0.59 kurtosis se gender -1.62 0.02 education -0.07 0.05 age 2.42 0.36 ACT 0.53 0.18 SATV 0.33 4.27 SATQ -0.02 4.41 > print(des,digits=3) #show the output, but round to 3 (trailing) digits vars n mean sd median trimmed mad min max range skew gender 1 700 1.647 0.478 2 1.684 0.000 1 2 1 -0.615 education 2 700 3.164 1.425 3 3.307 1.483 0 5 5 -0.681 age 3 700 25.594 9.499 22 23.863 5.930 13 65 52 1.643 ACT 4 700 28.547 4.824 29 28.843 4.448 3 36 33 -0.656 SATV 5 700 612.234 112.903 620 619.454 118.608 200 800 600 -0.644 SATQ 6 687 610.217 115.639 620 617.254 118.608 200 800 600 -0.593 kurtosis se gender -1.625 0.018 education -0.075 0.054 age 2.424 0.359 ACT 0.535 0.182 SATV 0.325 4.267 SATQ -0.018 4.412 > print(des, signif=3) #round all numbers to the 3 significant digits vars n mean sd median trimmed mad min max range skew gender 1 700 1.65 0.48 2 1.68 0.00 1 2 1 -0.61 education 2 700 3.16 1.43 3 3.31 1.48 0 5 5 -0.68 age 3 700 25.60 9.50 22 23.90 5.93 13 65 52 1.64 ACT 4 700 28.50 4.82 29 28.80 4.45 3 36 33 -0.66 SATV 5 700 612.00 113.00 620 619.00 119.00 200 800 600 -0.64 SATQ 6 687 610.00 116.00 620 617.00 119.00 200 800 600 -0.59 kurtosis se gender -1.62 0.02 education -0.08 0.05 age 2.42 0.36 ACT 0.53 0.18 SATV 0.33 4.27 SATQ -0.02 4.41 > > > > > > cleanEx() > nameEx("describe.by") > ### * describe.by > > flush(stderr()); flush(stdout()) > > ### Name: describeBy > ### Title: Basic summary statistics by group > ### Aliases: describeBy describe.by > ### Keywords: models univar > > ### ** Examples > > > data(sat.act) > describeBy(sat.act,sat.act$gender) #just one grouping variable Descriptive statistics by group group: 1 vars n mean sd median trimmed mad min max range skew gender 1 247 1.00 0.00 1 1.00 0.00 1 1 0 NaN education 2 247 3.00 1.54 3 3.12 1.48 0 5 5 -0.54 age 3 247 25.86 9.74 22 24.23 5.93 14 58 44 1.43 ACT 4 247 28.79 5.06 30 29.23 4.45 3 36 33 -1.06 SATV 5 247 615.11 114.16 630 622.07 118.61 200 800 600 -0.63 SATQ 6 245 635.87 116.02 660 645.53 94.89 300 800 500 -0.72 kurtosis se gender NaN 0.00 education -0.60 0.10 age 1.43 0.62 ACT 1.89 0.32 SATV 0.13 7.26 SATQ -0.12 7.41 ------------------------------------------------------------ group: 2 vars n mean sd median trimmed mad min max range skew gender 1 453 2.00 0.00 2 2.00 0.00 2 2 0 NaN education 2 453 3.26 1.35 3 3.40 1.48 0 5 5 -0.74 age 3 453 25.45 9.37 22 23.70 5.93 13 65 52 1.77 ACT 4 453 28.42 4.69 29 28.63 4.45 15 36 21 -0.39 SATV 5 453 610.66 112.31 620 617.91 103.78 200 800 600 -0.65 SATQ 6 442 596.00 113.07 600 602.21 133.43 200 800 600 -0.58 kurtosis se gender NaN 0.00 education 0.27 0.06 age 3.03 0.44 ACT -0.42 0.22 SATV 0.42 5.28 SATQ 0.13 5.38 > describeBy(sat.act ~ gender) #describe the entire set formula input Descriptive statistics by group gender: 1 vars n mean sd median trimmed mad min max range skew gender 1 247 1.00 0.00 1 1.00 0.00 1 1 0 NaN education 2 247 3.00 1.54 3 3.12 1.48 0 5 5 -0.54 age 3 247 25.86 9.74 22 24.23 5.93 14 58 44 1.43 ACT 4 247 28.79 5.06 30 29.23 4.45 3 36 33 -1.06 SATV 5 247 615.11 114.16 630 622.07 118.61 200 800 600 -0.63 SATQ 6 245 
635.87 116.02 660 645.53 94.89 300 800 500 -0.72 kurtosis se gender NaN 0.00 education -0.60 0.10 age 1.43 0.62 ACT 1.89 0.32 SATV 0.13 7.26 SATQ -0.12 7.41 ------------------------------------------------------------ gender: 2 vars n mean sd median trimmed mad min max range skew gender 1 453 2.00 0.00 2 2.00 0.00 2 2 0 NaN education 2 453 3.26 1.35 3 3.40 1.48 0 5 5 -0.74 age 3 453 25.45 9.37 22 23.70 5.93 13 65 52 1.77 ACT 4 453 28.42 4.69 29 28.63 4.45 15 36 21 -0.39 SATV 5 453 610.66 112.31 620 617.91 103.78 200 800 600 -0.65 SATQ 6 442 596.00 113.07 600 602.21 133.43 200 800 600 -0.58 kurtosis se gender NaN 0.00 education 0.27 0.06 age 3.03 0.44 ACT -0.42 0.22 SATV 0.42 5.28 SATQ 0.13 5.38 > describeBy(SATV + SATQ ~ gender,data =sat.act) #specify the data set if using formula Descriptive statistics by group gender: 1 vars n mean sd median trimmed mad min max range skew kurtosis SATV 1 247 615.11 114.16 630 622.07 118.61 200 800 600 -0.63 0.13 SATQ 2 245 635.87 116.02 660 645.53 94.89 300 800 500 -0.72 -0.12 se SATV 7.26 SATQ 7.41 ------------------------------------------------------------ gender: 2 vars n mean sd median trimmed mad min max range skew kurtosis SATV 1 453 610.66 112.31 620 617.91 103.78 200 800 600 -0.65 0.42 SATQ 2 442 596.00 113.07 600 602.21 133.43 200 800 600 -0.58 0.13 se SATV 5.28 SATQ 5.38 > #describeBy(sat.act,list(sat.act$gender,sat.act$education)) #two grouping variables > describeBy(sat.act ~ gender + education) #two grouping variables Descriptive statistics by group gender: 1 education: 0 vars n mean sd median trimmed mad min max range skew gender 1 27 1.00 0.00 1 1.00 0.00 1 1 0 NaN education 2 27 0.00 0.00 0 0.00 0.00 0 0 0 NaN age 3 27 16.93 1.04 17 17.04 1.48 14 18 4 -0.86 ACT 4 27 29.04 5.00 29 29.22 5.93 20 36 16 -0.30 SATV 5 27 640.07 132.24 670 646.17 177.91 400 800 400 -0.29 SATQ 6 27 642.67 127.90 660 647.91 177.91 400 800 400 -0.24 kurtosis se gender NaN 0.00 education NaN 0.00 age 0.34 0.20 ACT -1.13 0.96 SATV -1.40 25.45 SATQ -1.36 24.61 ------------------------------------------------------------ gender: 2 education: 0 vars n mean sd median trimmed mad min max range skew gender 1 30 2.00 0.00 2 2.00 0.00 2 2 0 NaN education 2 30 0.00 0.00 0 0.00 0.00 0 0 0 NaN age 3 30 16.97 1.07 17 17.12 0.74 13 18 5 -1.75 ACT 4 30 26.07 5.06 26 25.92 5.93 15 36 21 0.08 SATV 5 30 595.30 123.46 595 597.08 148.26 350 800 450 -0.09 SATQ 6 29 599.72 123.20 600 600.96 148.26 333 800 467 -0.09 kurtosis se gender NaN 0.00 education NaN 0.00 age 4.13 0.19 ACT -0.56 0.92 SATV -0.81 22.54 SATQ -0.99 22.88 ------------------------------------------------------------ gender: 1 education: 1 vars n mean sd median trimmed mad min max range skew gender 1 20 1.00 0.00 1 1.00 0.00 1 1 0 NaN education 2 20 1.00 0.00 1 1.00 0.00 1 1 0 NaN age 3 20 19.65 6.12 18 18.19 0.00 17 45 28 3.55 ACT 4 20 26.70 7.11 28 27.12 8.15 15 35 20 -0.30 SATV 5 20 603.00 141.24 600 611.25 185.32 300 780 480 -0.39 SATQ 6 19 625.84 95.87 650 630.94 88.96 400 765 365 -0.66 kurtosis se gender NaN 0.00 education NaN 0.00 age 11.78 1.37 ACT -1.51 1.59 SATV -1.12 31.58 SATQ -0.47 21.99 ------------------------------------------------------------ gender: 2 education: 1 vars n mean sd median trimmed mad min max range skew gender 1 25 2.00 0.00 2 2.00 0.00 2 2 0 NaN education 2 25 1.00 0.00 1 1.00 0.00 1 1 0 NaN age 3 25 19.32 4.62 18 18.14 0.00 17 37 20 2.86 ACT 4 25 28.12 5.13 27 28.33 4.45 18 36 18 -0.21 SATV 5 25 597.00 119.38 610 600.76 133.43 350 799 449 -0.31 SATQ 6 24 592.54 140.83 625 606.60 111.19 
230 799 569 -0.93 kurtosis se gender NaN 0.00 education NaN 0.00 age 7.27 0.92 ACT -0.78 1.03 SATV -0.95 23.88 SATQ 0.20 28.75 ------------------------------------------------------------ gender: 1 education: 2 vars n mean sd median trimmed mad min max range skew gender 1 23 1.00 0.00 1 1.00 0.00 1 1 0 NaN education 2 23 2.00 0.00 2 2.00 0.00 2 2 0 NaN age 3 23 25.26 8.68 22 23.58 4.45 18 55 37 1.94 ACT 4 23 26.65 6.39 28 27.68 4.45 3 32 29 -2.14 SATV 5 23 560.00 152.29 600 570.53 148.26 200 800 600 -0.53 SATQ 6 23 569.13 160.65 600 575.79 177.91 300 800 500 -0.36 kurtosis se gender NaN 0.00 education NaN 0.00 age 3.63 1.81 ACT 5.39 1.33 SATV -0.59 31.75 SATQ -1.44 33.50 ------------------------------------------------------------ gender: 2 education: 2 vars n mean sd median trimmed mad min max range skew gender 1 21 2.00 0.00 2 2.00 0.00 2 2 0 NaN education 2 21 2.00 0.00 2 2.00 0.00 2 2 0 NaN age 3 21 30.10 12.22 26 28.41 10.38 18 57 39 1.16 ACT 4 21 27.33 5.23 28 27.53 4.45 15 36 21 -0.32 SATV 5 21 593.57 115.34 600 598.24 118.61 375 770 395 -0.44 SATQ 6 20 586.50 120.96 585 587.81 163.09 375 800 425 0.01 kurtosis se gender NaN 0.00 education NaN 0.00 age 0.01 2.67 ACT -0.34 1.14 SATV -0.91 25.17 SATQ -1.11 27.05 ------------------------------------------------------------ gender: 1 education: 3 vars n mean sd median trimmed mad min max range skew gender 1 80 1.00 0.00 1 1.00 0.00 1 1 0 NaN education 2 80 3.00 0.00 3 3.00 0.00 3 3 0 NaN age 3 80 20.81 3.06 20 20.28 1.48 17 34 17 2.00 ACT 4 80 28.56 5.03 30 28.84 5.19 17 36 19 -0.45 SATV 5 80 617.44 111.79 630 624.45 111.19 300 800 500 -0.62 SATQ 6 79 642.59 118.28 680 653.15 118.61 300 800 500 -0.81 kurtosis se gender NaN 0.00 education NaN 0.00 age 4.55 0.34 ACT -0.92 0.56 SATV -0.06 12.50 SATQ -0.17 13.31 ------------------------------------------------------------ gender: 2 education: 3 vars n mean sd median trimmed mad min max range skew gender 1 195 2.00 0.00 2 2.00 0.00 2 2 0 NaN education 2 195 3.00 0.00 3 3.00 0.00 3 3 0 NaN age 3 195 21.09 4.75 20 20.04 1.48 17 46 29 3.41 ACT 4 195 28.18 4.78 29 28.43 4.45 16 36 20 -0.46 SATV 5 195 609.96 119.78 620 619.57 118.61 200 800 600 -0.81 SATQ 6 190 590.89 114.46 600 598.94 118.61 200 800 600 -0.72 kurtosis se gender NaN 0.00 education NaN 0.00 age 12.83 0.34 ACT -0.47 0.34 SATV 0.66 8.58 SATQ 0.38 8.30 ------------------------------------------------------------ gender: 1 education: 4 vars n mean sd median trimmed mad min max range skew gender 1 51 1.00 0.00 1 1.00 0.00 1 1 0 NaN education 2 51 4.00 0.00 4 4.00 0.00 4 4 0 NaN age 3 51 32.22 9.03 29 30.78 8.90 23 57 34 1.20 ACT 4 51 28.94 4.42 29 29.34 4.45 16 36 20 -0.74 SATV 5 51 620.31 81.72 620 623.32 88.96 430 800 370 -0.26 SATQ 6 51 635.90 104.12 640 642.46 88.96 400 800 400 -0.46 kurtosis se gender NaN 0.00 education NaN 0.00 age 0.63 1.27 ACT 0.12 0.62 SATV -0.29 11.44 SATQ -0.45 14.58 ------------------------------------------------------------ gender: 2 education: 4 vars n mean sd median trimmed mad min max range skew gender 1 87 2.00 0.00 2 2.00 0.00 2 2 0 NaN education 2 87 4.00 0.00 4 4.00 0.00 4 4 0 NaN age 3 87 29.08 7.76 26 27.83 5.93 21 52 31 1.26 ACT 4 87 29.45 4.32 30 29.59 4.45 19 36 17 -0.27 SATV 5 87 614.98 106.62 620 621.39 88.96 300 800 500 -0.58 SATQ 6 86 597.59 106.24 600 605.76 118.61 300 800 500 -0.71 kurtosis se gender NaN 0.00 education NaN 0.00 age 0.70 0.83 ACT -0.67 0.46 SATV 0.28 11.43 SATQ 0.20 11.46 ------------------------------------------------------------ gender: 1 education: 5 vars n mean sd 
median trimmed mad min max range skew gender 1 46 1.00 0.00 1.0 1.00 0.00 1 1 0 NaN education 2 46 5.00 0.00 5.0 5.00 0.00 5 5 0 NaN age 3 46 35.85 10.00 35.5 35.13 11.12 22 58 36 0.47 ACT 4 46 30.83 3.11 32.0 30.95 2.97 25 36 11 -0.38 SATV 5 46 623.48 99.58 645.0 631.18 96.37 390 770 380 -0.61 SATQ 6 46 657.83 89.61 680.0 661.71 103.78 475 800 325 -0.45 kurtosis se gender NaN 0.00 education NaN 0.00 age -0.67 1.48 ACT -0.81 0.46 SATV -0.43 14.68 SATQ -0.77 13.21 ------------------------------------------------------------ gender: 2 education: 5 vars n mean sd median trimmed mad min max range skew gender 1 95 2.00 0.00 2 2.00 0.00 2 2 0 NaN education 2 95 5.00 0.00 5 5.00 0.00 5 5 0 NaN age 3 95 34.34 10.67 30 32.74 8.90 22 65 43 1.18 ACT 4 95 29.01 4.19 29 29.14 4.45 18 36 18 -0.31 SATV 5 95 620.39 95.72 620 623.61 74.13 300 800 500 -0.46 SATQ 6 93 606.72 105.55 600 608.93 148.26 350 800 450 -0.14 kurtosis se gender NaN 0.00 education NaN 0.00 age 0.61 1.09 ACT -0.73 0.43 SATV 0.43 9.82 SATQ -0.94 10.95 > des.mat <- describeBy(age ~ education,mat=TRUE,data = sat.act) #matrix (data.frame) output > des.mat <- describeBy(age ~ education + gender, data=sat.act, + mat=TRUE,digits=2) #matrix output rounded to 2 decimals > > > > > > > cleanEx() > nameEx("diagram") > ### * diagram > > flush(stderr()); flush(stdout()) > > ### Name: diagram > ### Title: Helper functions for drawing path model diagrams > ### Aliases: diagram dia.rect dia.ellipse dia.ellipse1 dia.arrow dia.curve > ### dia.curved.arrow dia.self dia.shape dia.triangle dia.cone multi.rect > ### multi.arrow multi.curved.arrow multi.self > ### Keywords: multivariate hplot > > ### ** Examples > > #first, show the primitives > xlim=c(-2,10) > ylim=c(0,10) > plot(NA,xlim=xlim,ylim=ylim,main="Demonstration of diagram functions",axes=FALSE,xlab="",ylab="") > ul <- dia.rect(1,9,labels="upper left",xlim=xlim,ylim=ylim) > ml <- dia.rect(1,6,"middle left",xlim=xlim,ylim=ylim) > ll <- dia.rect(1,3,labels="lower left",xlim=xlim,ylim=ylim) > bl <- dia.rect(1,1,"bottom left",xlim=xlim,ylim=ylim) > lr <- dia.ellipse(7,3,"lower right",xlim=xlim,ylim=ylim,e.size=.07) > ur <- dia.ellipse(7,9,"upper right",xlim=xlim,ylim=ylim,e.size=.07) > mr <- dia.ellipse(7,6,"middle right",xlim=xlim,ylim=ylim,e.size=.07) > lm <- dia.triangle(4,1,"Lower Middle",xlim=xlim,ylim=ylim) > br <- dia.rect(9,1,"bottom right",xlim=xlim,ylim=ylim) > dia.curve(from=ul$left,to=bl$left,"double headed",scale=-1) > > dia.arrow(from=lr,to=ul,labels="right to left") > dia.arrow(from=ul,to=ur,labels="left to right") > dia.curved.arrow(from=lr,to=ll,labels ="right to left") NULL > dia.curved.arrow(to=ur,from=ul,labels ="left to right") NULL > dia.curve(ll$top,ul$bottom,"right") #for rectangles, specify where to point > > dia.curve(ll$top,ul$bottom,"left",scale=-1) #for rectangles, specify where to point > dia.curve(mr,ur,"up") #but for ellipses, you may just point to it. > dia.curve(mr,lr,"down") > dia.curve(mr,ur,"up") > dia.curved.arrow(mr,ur,"up") #but for ellipses, you may just point to it. NULL > dia.curved.arrow(mr,lr,"down") #but for ellipses, you may just point to it. 
NULL > > dia.curved.arrow(ur$right,mr$right,"3") NULL > dia.curve(ml,mr,"across") > dia.curve(ur$right,lr$right,"top down",scale =2) > dia.curved.arrow(br$top,lr$right,"up") NULL > dia.curved.arrow(bl,br,"left to right") NULL > dia.curved.arrow(br,bl,"right to left",scale=-1) NULL > dia.arrow(bl,ll$bottom) > dia.curved.arrow(ml,ll$right) NULL > dia.curved.arrow(mr,lr$top) NULL > > #now, put them together in a factor analysis diagram > v9 <- sim.hierarchical() > f3 <- fa(v9,3,rotate="cluster") > fa.diagram(f3,error=TRUE,side=3) > > > > cleanEx() > nameEx("draw.tetra") > ### * draw.tetra > > flush(stderr()); flush(stdout()) > > ### Name: draw.tetra > ### Title: Draw a correlation ellipse and two normal curves to demonstrate > ### tetrachoric correlation > ### Aliases: draw.tetra draw.cor > ### Keywords: multivariate hplot > > ### ** Examples > > #if(require(mvtnorm)) { > #draw.tetra(.5,1,1) > #draw.tetra(.8,2,1)} else {print("draw.tetra requires the mvtnorm package") > #draw.cor(.5,cuts=c(0,0))} > > draw.tetra(.5,1,1) > draw.tetra(.8,2,1) > draw.cor(.5,cuts=c(0,0)) > > > > cleanEx() > nameEx("dummy.code") > ### * dummy.code > > flush(stderr()); flush(stdout()) > > ### Name: dummy.code > ### Title: Create dummy coded variables > ### Aliases: dummy.code > ### Keywords: multivariate models > > ### ** Examples > > new <- dummy.code(sat.act$education) > new.sat <- data.frame(new,sat.act) > round(cor(new.sat,use="pairwise"),2) X3 X5 X4 X0 X1 X2 gender education age ACT X3 1.00 -0.40 -0.40 -0.24 -0.21 -0.21 0.10 -0.09 -0.39 -0.04 X5 -0.40 1.00 -0.25 -0.15 -0.13 -0.13 0.03 0.65 0.49 0.11 X4 -0.40 -0.25 1.00 -0.15 -0.13 -0.13 -0.02 0.29 0.24 0.07 X0 -0.24 -0.15 -0.15 1.00 -0.08 -0.08 -0.08 -0.66 -0.27 -0.07 X1 -0.21 -0.13 -0.13 -0.08 1.00 -0.07 -0.05 -0.40 -0.17 -0.06 X2 -0.21 -0.13 -0.13 -0.08 -0.07 1.00 -0.09 -0.21 0.05 -0.08 gender 0.10 0.03 -0.02 -0.08 -0.05 -0.09 1.00 0.09 -0.02 -0.04 education -0.09 0.65 0.29 -0.66 -0.40 -0.21 0.09 1.00 0.55 0.15 age -0.39 0.49 0.24 -0.27 -0.17 0.05 -0.02 0.55 1.00 0.11 ACT -0.04 0.11 0.07 -0.07 -0.06 -0.08 -0.04 0.15 0.11 1.00 SATV 0.00 0.04 0.02 0.01 -0.03 -0.08 -0.02 0.05 -0.04 0.56 SATQ -0.03 0.06 0.01 0.03 -0.01 -0.07 -0.17 0.03 -0.03 0.59 SATV SATQ X3 0.00 -0.03 X5 0.04 0.06 X4 0.02 0.01 X0 0.01 0.03 X1 -0.03 -0.01 X2 -0.08 -0.07 gender -0.02 -0.17 education 0.05 0.03 age -0.04 -0.03 ACT 0.56 0.59 SATV 1.00 0.64 SATQ 0.64 1.00 > #dum.smoke <- dummy.code(spi$smoke,group=2:9) > #table(dum.smoke,spi$smoke) > #dum.age <- dummy.code(round(spi$age/5)*5,top=5) #the most frequent five year blocks > > > > cleanEx() > nameEx("dwyer") > ### * dwyer > > flush(stderr()); flush(stdout()) > > ### Name: Dwyer > ### Title: 8 cognitive variables used by Dwyer for an example. 
> ### Aliases: Dwyer > ### Keywords: datasets > > ### ** Examples > > data(Dwyer) > Ro <- Dwyer[1:7,1:7] > Roe <- Dwyer[1:7,8] > fo <- fa(Ro,2,rotate="none") > fa.extension(Roe,fo) Call: fa.extension(Roe = Roe, fo = fo) Standardized loadings (pattern matrix) based upon correlation matrix MR1 MR2 h2 u2 1 0.37 -0.78 0.75 0.25 MR1 MR2 SS loadings 0.14 0.61 Proportion Var 0.14 0.61 Cumulative Var 0.14 0.75 Proportion Explained 0.18 0.82 Cumulative Proportion 0.18 1.00 > > > > cleanEx() > nameEx("eigen.loadings") > ### * eigen.loadings > > flush(stderr()); flush(stdout()) > > ### Name: eigen.loadings > ### Title: Convert eigen vectors and eigen values to the more normal (for > ### psychologists) component loadings > ### Aliases: eigen.loadings > ### Keywords: models multivariate > > ### ** Examples > > x <- eigen(Harman74.cor$cov) > x$vectors[1:8,1:4] #as they appear from eigen [,1] [,2] [,3] [,4] [1,] -0.2158754 -0.003763753 -0.3287459 0.16684940 [2,] -0.1401069 -0.054849237 -0.3075102 0.16446394 [3,] -0.1558742 -0.131808462 -0.3661446 0.08629700 [4,] -0.1789834 -0.122936419 -0.2572906 0.17620811 [5,] -0.2436443 -0.221950283 0.2577381 0.04309103 [6,] -0.2420622 -0.288461492 0.2037767 -0.06579018 [7,] -0.2372958 -0.293489427 0.2731887 0.05917239 [8,] -0.2433556 -0.167723001 0.1103619 0.09493344 > y <- princomp(covmat=Harman74.cor$cov) > y$loadings[1:8,1:4] #as they appear from princomp Comp.1 Comp.2 Comp.3 Comp.4 VisualPerception 0.2158754 0.003763753 0.3287459 0.16684940 Cubes 0.1401069 0.054849237 0.3075102 0.16446394 PaperFormBoard 0.1558742 0.131808462 0.3661446 0.08629700 Flags 0.1789834 0.122936419 0.2572906 0.17620811 GeneralInformation 0.2436443 0.221950283 -0.2577381 0.04309103 PargraphComprehension 0.2420622 0.288461492 -0.2037767 -0.06579018 SentenceCompletion 0.2372958 0.293489427 -0.2731887 0.05917239 WordClassification 0.2433556 0.167723001 -0.1103619 0.09493344 > eigen.loadings(x)[1:8,1:4] # rescaled by the eigen values [,1] [,2] [,3] [,4] [1,] -0.6157349 -0.005449052 -0.4276989 0.20447285 [2,] -0.3996226 -0.079409132 -0.4000712 0.20154949 [3,] -0.4445954 -0.190828463 -0.4763546 0.10575641 [4,] -0.5105089 -0.177983777 -0.3347354 0.21594189 [5,] -0.6949395 -0.321333174 0.3353177 0.05280778 [6,] -0.6904266 -0.417626171 0.2651138 -0.08062544 [7,] -0.6768317 -0.424905469 0.3554189 0.07251538 [8,] -0.6941158 -0.242824490 0.1435811 0.11634031 > > > > cleanEx() > nameEx("ellipses") > ### * ellipses > > flush(stderr()); flush(stdout()) > > ### Name: ellipses > ### Title: Plot data and 1 and 2 sigma correlation ellipses > ### Aliases: ellipses minkowski > ### Keywords: multivariate hplot > > ### ** Examples > > data(psychTools::galton) Warning in data(psychTools::galton) : data set ‘psychTools::galton’ not found > galton <- psychTools::galton > ellipses(galton,lm=TRUE) > > ellipses(galton$parent,galton$child,xlab="Mid Parent Height", + ylab="Child Height") #input are two vectors > data(sat.act) > ellipses(sat.act) #shows the pairs.panels ellipses > minkowski(2,main="Minkowski circles") > minkowski(1,TRUE) > minkowski(4,TRUE) > > > > > cleanEx() > nameEx("error.bars") > ### * error.bars > > flush(stderr()); flush(stdout()) > > ### Name: error.bars > ### Title: Plot means and confidence intervals > ### Aliases: error.bars error.bars.tab > ### Keywords: multivariate hplot > > ### ** Examples > > set.seed(42) > x <- matrix(rnorm(1000),ncol=20) > boxplot(x,notch=TRUE,main="Notched boxplot with error bars") > error.bars(x,add=TRUE) > abline(h=0) > > #show 50% confidence regions and color each 
variable separately > error.bars(attitude,alpha=.5, + main="50 percent confidence limits",col=rainbow(ncol(attitude)) ) > > error.bars(attitude,bar=TRUE) #show the use of bar graphs > > > #combine with a strip chart and boxplot > stripchart(attitude,vertical=TRUE,method="jitter",jitter=.1,pch=19, + main="Stripchart with 95 percent confidence limits") > boxplot(attitude,add=TRUE) > error.bars(attitude,add=TRUE,arrow.len=.2) > > #use statistics from somewhere else > #by specifying n, we are using the t distribution for confidences > #The first example allows the variables to be spaced along the x axis > my.stats <- data.frame(values=c(1,2,8),mean=c(10,12,18),se=c(2,3,5),n=c(5,10,20)) > error.bars(stats=my.stats,type="b",main="data with confidence intervals") > #don't connect the groups > my.stats <- data.frame(values=c(1,2,8),mean=c(10,12,18),se=c(2,3,5),n=c(5,10,20)) > error.bars(stats=my.stats,main="data with confidence intervals") > #by not specifying value, the groups are equally spaced > my.stats <- data.frame(mean=c(10,12,18),se=c(2,3,5),n=c(5,10,20)) > rownames(my.stats) <- c("First", "Second","Third") > error.bars(stats=my.stats,xlab="Condition",ylab="Score") > > > #Consider the case where we get stats from describe > temp <- describe(attitude) > error.bars(stats=temp) > > #show these do not differ from the other way by overlaying the two > error.bars(attitude,add=TRUE,col="red") > > #n is omitted > #the error distribution is a normal distribution > my.stats <- data.frame(mean=c(2,4,8),se=c(2,1,2)) > rownames(my.stats) <- c("First", "Second","Third") > error.bars(stats=my.stats,xlab="Condition",ylab="Score") > #n is specified > #compare this with small n which shows larger confidence regions > my.stats <- data.frame(mean=c(2,4,8),se=c(2,1,2),n=c(10,10,3)) > rownames(my.stats) <- c("First", "Second","Third") > error.bars(stats=my.stats,xlab="Condition",ylab="Score") > > > #example of arrest rates (as percentage of condition) > arrest <- data.frame(Control=c(14,21),Treated =c(3,23)) > rownames(arrest) <- c("Arrested","Not Arrested") > error.bars.tab(arrest,ylab="Probability of Arrest",xlab="Control vs Treatment", + main="Probability of Arrest varies by treatment") > > > #Show the raw rates > error.bars.tab(arrest,raw=TRUE,ylab="Number Arrested",xlab="Control vs Treatment", + main="Count of Arrest varies by treatment") > > #If a grouping variable is specified, the function calls error.bars.by > #Use error.bars.by to have more control over the output. 
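> #(a rough sketch, not run, of what the 'stats' input above produces: with n given and > #the default alpha=.05, each interval is approximately mean +/- qt(1-alpha/2,n-1)*se; > #e.g. for the row with mean=12, se=3, n=10, qt(.975,9)*3 is about 6.8, so the bar runs > #from roughly 5.2 to 18.8; when n is omitted, qnorm(1-alpha/2) replaces the t quantile, > #as the comments above note)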
> #Show how to use grouping variables > error.bars(SATV + SATQ ~ gender, data=sat.act) #one grouping variable, formula input > error.bars(SATV + SATQ ~ education + gender,data=sat.act)#two grouping variables > > > > > > > cleanEx() > nameEx("error.bars.by") > ### * error.bars.by > > flush(stderr()); flush(stdout()) > > ### Name: error.bars.by > ### Title: Plot means and confidence intervals for multiple groups > ### Aliases: error.bars.by > ### Keywords: multivariate hplot > > ### ** Examples > > > data(sat.act) > #The generic plot of variables by group > error.bars.by( SATV + SATQ ~ gender,data=sat.act) #formula input > error.bars.by( SATV + SATQ ~ gender,data=sat.act,v.lab=cs(male,female)) #labels > error.bars.by(SATV + SATQ ~ education + gender, data =sat.act) #see below > error.bars.by(sat.act[1:4],sat.act$gender,legend=7) #specification of variables > error.bars.by(sat.act[1:4],sat.act$gender,legend=7,labels=cs(male,female)) > > #a bar plot > error.bars.by(sat.act[5:6],sat.act$gender,bars=TRUE,labels=c("male","female"), + main="SAT V and SAT Q by gender",ylim=c(0,800),colors=c("red","blue"), + legend=5,v.labels=c("SATV","SATQ")) #draw a barplot > #a bar plot of SAT by age -- not recommended, see the next plot > error.bars.by(SATV + SATQ ~ education,data=sat.act,bars=TRUE,xlab="Education", + main="95 percent confidence limits of Sat V and Sat Q", ylim=c(0,800), + v.labels=c("SATV","SATQ"),colors=c("red","blue") ) > #a better graph uses points not bars > #use formula input > #plot SAT V and SAT Q by education > error.bars.by(SATV + SATQ ~ education,data=sat.act,TRUE, xlab="Education", + legend=5,labels=colnames(sat.act[5:6]),ylim=c(525,700), + main="self reported SAT scores by education", + v.lab =c("HS","in coll", "< 16", "BA/BS", "in Grad", "Grad/Prof")) > #make the cats eyes semi-transparent by specifying a negative density > > error.bars.by(SATV + SATQ ~ education,data=sat.act, xlab="Education", + legend=5,labels=c("SATV","SATQ"),ylim=c(525,700), + main="self reported SAT scores by education",density=-10, + v.lab =c("HS","in coll", "< 16", "BA/BS", "in Grad", "Grad/Prof")) > > #use labels to specify the 2nd grouping variable, v.lab to specify the first > error.bars.by(SATV ~ education + gender,data=sat.act, xlab="Education", + legend=5,labels=cs(male,female),ylim=c(525,700), + main="self reported SAT scores by education",density=-10, + v.lab =c("HS","in coll", "< 16", "BA/BS", "in Grad", "Grad/Prof"), + colors=c("red","blue")) > > > #now for a more complicated example using 25 big 5 items scored into 5 scales > #and showing age trends by decade > #this shows how to convert many levels of a grouping variable (age) into more manageable levels.
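> #(a worked micro-example, not run, of the rounding trick used just below: round(age/10)*10 > #collapses age into decade bins, e.g. round(23/10)*10 = 20 and round(36/10)*10 = 40, > #so the many distinct ages become a few plottable groups)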
> data(bfi) #The Big 5 data > #first create the keys > keys.list <- list(Agree=c(-1,2:5),Conscientious=c(6:8,-9,-10), + Extraversion=c(-11,-12,13:15),Neuroticism=c(16:20),Openness = c(21,-22,23,24,-25)) > keys <- make.keys(psychTools::bfi,keys.list) > #then create the scores for those older than 10 and less than 80 > bfis <- subset(psychTools::bfi,((psychTools::bfi$age > 10) & (psychTools::bfi$age < 80))) > > scores <- scoreItems(keys,bfis,min=1,max=6) #set the right limits for item reversals > #now draw the results by age > #specify the particular colors to use > error.bars.by(scores$scores,round(bfis$age/10)*10,by.var=TRUE, + main="BFI age trends",legend=3,labels=colnames(scores$scores), + xlab="Age",ylab="Mean item score", + colors=cs(green,yellow,black,red,blue), + v.labels =cs(10-14,15-24,25-34,35-44,45-54,55-64,65-74)) > #show transparency > error.bars.by(scores$scores,round(bfis$age/10)*10,by.var=TRUE, + main="BFI age trends",legend=3,labels=colnames(scores$scores), + xlab="Age",ylab="Mean item score", density=-10, + colors=cs(green,yellow,black,red,blue), + v.labels =cs(10-14,15-24,25-34,35-44,45-54,55-64,65-74)) > > > > cleanEx() > nameEx("error.circles") > ### * error.circles > > flush(stderr()); flush(stdout()) > > ### Name: errorCircles > ### Title: Two way plots of means, error bars, and sample sizes > ### Aliases: errorCircles > ### Keywords: multivariate hplot > > ### ** Examples > > #BFI scores for males and females > errorCircles(1:25,1:25,data=psychTools::bfi,group="gender",paired=TRUE,ylab="female scores", + xlab="male scores",main="BFI scores by gender") > abline(a=0,b=1) > #drop the circles since all samples are the same size > errorCircles(1:25,1:25,data=psychTools::bfi,group="gender",paired=TRUE,circles=FALSE, + ylab="female scores",xlab="male scores",main="BFI scores by gender") > abline(a=0,b=1) > > data(psychTools::affect) Warning in data(psychTools::affect) : data set ‘psychTools::affect’ not found > colors <- c("black","red","white","blue") > films <- c("Sad","Horror","Neutral","Happy") > affect.stats <- errorCircles("EA2","TA2",data=psychTools::affect[-c(1,20)], + group="Film",labels=films, + xlab="Energetic Arousal",ylab="Tense Arousal",ylim=c(10,22),xlim=c(8,20), + pch=16,cex=2,colors=colors, main ="EA and TA pre and post affective movies") > #now, use the stats from the prior run > errorCircles("EA1","TA1",data=affect.stats,labels=films,pch=16,cex=2,colors=colors,add=TRUE) > > #show sample size with the size of the circles > errorCircles("SATV","SATQ",sat.act,group="education") > > #Can also provide error.bars.by functionality > errorCircles(2,5,group=2,data=sat.act,circles=FALSE,pch=16,colors="blue", + ylim= c(200,800),main="SATV by education",labels="") > #just do the breakdown and then show the points > # errorCircles(3,5,group=3,data=sat.act,circles=FALSE,pch=16,colors="blue", > # ylim= c(200,800),main="SATV by age",labels="",bars=FALSE) > > > > > > cleanEx() > nameEx("error.crosses") > ### * error.crosses > > flush(stderr()); flush(stdout()) > > ### Name: error.crosses > ### Title: Plot x and y error bars > ### Aliases: error.crosses > ### Keywords: multivariate hplot > > ### ** Examples > > > #just draw one pair of variables > desc <- describe(attitude) > x <- desc[1,] > y <- desc[2,] > error.crosses(x,y,xlab=rownames(x),ylab=rownames(y)) > > #now for a bit more complicated plotting > data(psychTools::bfi) Warning in data(psychTools::bfi) : data set ‘psychTools::bfi’ not found > desc <- describeBy(psychTools::bfi[1:25],psychTools::bfi$gender) #select
a high and low group > error.crosses(desc$'1',desc$'2',ylab="female scores", + xlab="male scores",main="BFI scores by gender") > abline(a=0,b=1) > > #do it from summary statistics (using standard errors) > g1.stats <- data.frame(n=c(10,20,30),mean=c(10,12,18),se=c(2,3,5)) > g2.stats <- data.frame(n=c(15,20,25),mean=c(6,14,15),se =c(1,2,3)) > error.crosses(g1.stats,g2.stats) > > #Or, if you prefer to draw +/- 1 sd. instead of 95% confidence > g1.stats <- data.frame(n=c(10,20,30),mean=c(10,12,18),sd=c(2,3,5)) > g2.stats <- data.frame(n=c(15,20,25),mean=c(6,14,15),sd =c(1,2,3)) > error.crosses(g1.stats,g2.stats,sd=TRUE) > > #and some even fancier plotting: This is taken from a study of mood > #four films were given (sad, horror, neutral, happy) > #with a pre and post test > data(psychTools::affect) Warning in data(psychTools::affect) : data set ‘psychTools::affect’ not found > colors <- c("black","red","green","blue") > films <- c("Sad","Horror","Neutral","Happy") > affect.mat <- describeBy(psychTools::affect[10:17],psychTools::affect$Film,mat=TRUE) > error.crosses(affect.mat[c(1:4,17:20),],affect.mat[c(5:8,21:24),], + labels=films[affect.mat$group1],xlab="Energetic Arousal", + ylab="Tense Arousal",colors = + colors[affect.mat$group1],pch=16,cex=2) > > > > > cleanEx() > nameEx("error.dots") > ### * error.dots > > flush(stderr()); flush(stdout()) > > ### Name: error.dots > ### Title: Show a dot.chart with error bars for different groups or > ### variables > ### Aliases: error.dots > ### Keywords: multivariate hplot > > ### ** Examples > > temp <- error.dots(psychTools::bfi[1:25],sort=TRUE, + xlab="Mean score for the item, sorted by difficulty") > error.dots(psychTools::bfi[1:25],sort=TRUE, order=temp$order, + add=TRUE, eyes=TRUE) #over plot with eyes > > error.dots(psychTools::ability,eyes=TRUE, xlab="Mean score for the item") > > cd <- cohen.d(psychTools::bfi[1:26],"gender") > temp <- error.dots(cd, select=c(1:15,21:25),head=12,tail=13, + main="Cohen d and confidence intervals of BFI by gender") > error.dots(cd,select=c(16:20),head=13,tail=12,col="blue",add=TRUE,fg="red" ,main="") > abline(v=0) > > > > cleanEx() > nameEx("esem") > ### * esem > > flush(stderr()); flush(stdout()) > > ### Name: esem > ### Title: Perform an Exploratory Structural Equation Model (ESEM) by > ### using factor extension techniques > ### Aliases: esem esem.diagram interbattery > ### Keywords: multivariate models > > ### ** Examples > > #make up a sem like problem using sim.structure > fx <-matrix(c( .9,.8,.6,rep(0,4),.6,.8,-.7),ncol=2) > fy <- matrix(c(.6,.5,.4),ncol=1) > rownames(fx) <- c("V","Q","A","nach","Anx") > rownames(fy)<- c("gpa","Pre","MA") > Phi <-matrix( c(1,0,.7,.0,1,.7,.7,.7,1),ncol=3) > gre.gpa <- sim.structural(fx,Phi,fy) > print(gre.gpa) Call: sim.structural(fx = fx, Phi = Phi, fy = fy) $model (Population correlation matrix) V Q A nach Anx gpa Pre MA V 1.00 0.72 0.54 0.00 0.00 0.38 0.32 0.25 Q 0.72 1.00 0.48 0.00 0.00 0.34 0.28 0.22 A 0.54 0.48 1.00 0.48 -0.42 0.50 0.42 0.34 nach 0.00 0.00 0.48 1.00 -0.56 0.34 0.28 0.22 Anx 0.00 0.00 -0.42 -0.56 1.00 -0.29 -0.24 -0.20 gpa 0.38 0.34 0.50 0.34 -0.29 1.00 0.30 0.24 Pre 0.32 0.28 0.42 0.28 -0.24 0.30 1.00 0.20 MA 0.25 0.22 0.34 0.22 -0.20 0.24 0.20 1.00 $reliability (population reliability) V Q A nach Anx gpa Pre MA 0.81 0.64 0.72 0.64 0.49 0.36 0.25 0.16 > > #now esem it: > example <- esem(gre.gpa$model,varsX=1:5,varsY=6:8,nfX=2,nfY=1,n.obs=1000,plot=FALSE) > example Exploratory Structural Equation Modeling Analysis using method = minres Call: esem(r =
gre.gpa$model, varsX = 1:5, varsY = 6:8, nfX = 2, nfY = 1, n.obs = 1000, plot = FALSE) For the 'X' set: X1 X2 V 0.91 -0.06 Q 0.81 -0.05 A 0.53 0.57 nach -0.10 0.81 Anx 0.08 -0.71 For the 'Y' set: Y1 gpa 0.6 Pre 0.5 MA 0.4 Correlations between the X and Y sets. X1 X2 Y1 X1 1.00 0.19 0.68 X2 0.19 1.00 0.67 Y1 0.68 0.67 1.00 The degrees of freedom for the null model are 56 and the empirical chi square function was 6930.29 The degrees of freedom for the model are 7 and the empirical chi square function was 21.83 with prob < 0.0027 The root mean square of the residuals (RMSR) is 0.02 The df corrected root mean square of the residuals is 0.04 with the empirical chi square 21.83 with prob < 0.0027 The total number of observations was 1000 with fitted Chi Square = 2175.06 with prob < 0 Empirical BIC = -26.53 ESABIC = -4.29 Fit based upon off diagonal values = 1 To see the item loadings for the X and Y sets combined, and the associated fa output, print with short=FALSE. > esem.diagram(example,simple=FALSE) > > #compare two alternative solutions to the first 2 factors of the neo. > #solution 1 is the normal 2 factor solution. > #solution 2 is an esem with 1 factor for the first 6 variables, and 1 for the second 6. > f2 <- fa(psychTools::neo[1:12,1:12],2) > es2 <- esem(psychTools::neo,1:6,7:12,1,1) > summary(f2) Factor analysis with Call: fa(r = psychTools::neo[1:12, 1:12], nfactors = 2) Test of the hypothesis that 2 factors are sufficient. The degrees of freedom for the model is 43 and the objective function was 0.63 The root mean square of the residuals (RMSA) is 0.06 The df corrected root mean square of the residuals is 0.07 With factor correlations of MR1 MR2 MR1 1.00 -0.22 MR2 -0.22 1.00 > summary(es2) Exploratory Structural Equation Modeling with Call: esem(r = psychTools::neo, varsX = 1:6, varsY = 7:12, nfX = 1, nfY = 1) Test of the hypothesis that 2 factors are sufficient. The degrees of freedom for the model is 43 and the objective function was 4.44 The root mean square of the residuals (RMSA) is 0.06 The df corrected root mean square of the residuals is 0.08 X1 Y1 X1 1.00 -0.28 Y1 -0.28 1.00 > fa.congruence(f2,es2) X1 Y1 MR1 0.98 -0.36 MR2 -0.21 0.94 > > interbattery(Thurstone.9,1:4,5:9,2,2) #compare to the solution of Tucker. We are not there yet. 
$A1 IB1 IB2 Prefixes 0.5763511 0 Suffixes 0.6322026 0 Vocabulary 0.7856401 0 Sentences 0.7289999 0 $A2 IB1 IB2 First.Last 0.4835162 0.3912559 FirstLetters 0.5223871 0.4452040 FourLetters 0.5077345 0.2413293 Completion 0.7543248 -0.3528007 SameorOpposite 0.7396321 -0.3760684 $loadings IB1 IB2 Prefixes 0.5763511 0.0000000 Suffixes 0.6322026 0.0000000 Vocabulary 0.7856401 0.0000000 Sentences 0.7289999 0.0000000 First.Last 0.4835162 0.3912559 FirstLetters 0.5223871 0.4452040 FourLetters 0.5077345 0.2413293 Completion 0.7543248 -0.3528007 SameorOpposite 0.7396321 -0.3760684 $Call interbattery(r = Thurstone.9, varsX = 1:4, varsY = 5:9, nfX = 2, nfY = 2) > > > > > cleanEx() > nameEx("fa") > ### * fa > > flush(stderr()); flush(stdout()) > > ### Name: fa > ### Title: Exploratory Factor analysis using MinRes (minimum residual) as > ### well as EFA by Principal Axis, Weighted Least Squares or Maximum > ### Likelihood > ### Aliases: fa fac fa.sapa fa.pooled > ### Keywords: multivariate models > > ### ** Examples > > #using the Harman 24 mental tests, compare a principal factor with a principal components solution > pc <- principal(Harman74.cor$cov,4,rotate="varimax") #principal components > pa <- fa(Harman74.cor$cov,4,fm="pa" ,rotate="varimax") #principal axis > uls <- fa(Harman74.cor$cov,4,rotate="varimax") #unweighted least squares is minres > wls <- fa(Harman74.cor$cov,4,fm="wls") #weighted least squares > > #to show the loadings sorted by absolute value > print(uls,sort=TRUE) Factor Analysis using method = minres Call: fa(r = Harman74.cor$cov, nfactors = 4, rotate = "varimax") Standardized loadings (pattern matrix) based upon correlation matrix item MR1 MR3 MR2 MR4 h2 u2 com SentenceCompletion 7 0.81 0.19 0.15 0.07 0.73 0.27 1.2 WordMeaning 9 0.81 0.20 0.05 0.22 0.74 0.26 1.3 PargraphComprehension 6 0.76 0.21 0.07 0.23 0.68 0.32 1.4 GeneralInformation 5 0.73 0.19 0.22 0.14 0.64 0.36 1.4 WordClassification 8 0.57 0.34 0.23 0.14 0.51 0.49 2.2 VisualPerception 1 0.15 0.68 0.20 0.15 0.55 0.45 1.4 PaperFormBoard 3 0.15 0.55 -0.01 0.11 0.34 0.66 1.2 Flags 4 0.23 0.53 0.09 0.07 0.35 0.65 1.5 SeriesCompletion 23 0.37 0.52 0.23 0.22 0.51 0.49 2.7 Cubes 2 0.11 0.45 0.08 0.08 0.23 0.77 1.3 Deduction 20 0.38 0.42 0.10 0.29 0.42 0.58 2.9 ProblemReasoning 22 0.37 0.41 0.13 0.29 0.40 0.60 3.0 Addition 10 0.17 -0.11 0.82 0.16 0.74 0.26 1.2 CountingDots 12 0.02 0.20 0.71 0.09 0.55 0.45 1.2 StraightCurvedCapitals 13 0.18 0.42 0.54 0.08 0.51 0.49 2.2 Code 11 0.18 0.11 0.54 0.37 0.47 0.53 2.1 ArithmeticProblems 24 0.36 0.19 0.49 0.29 0.49 0.51 2.9 NumericalPuzzles 21 0.18 0.40 0.43 0.21 0.42 0.58 2.8 ObjectNumber 17 0.14 0.06 0.22 0.58 0.41 0.59 1.4 WordRecognition 14 0.21 0.05 0.08 0.56 0.36 0.64 1.3 NumberRecognition 15 0.12 0.12 0.08 0.52 0.31 0.69 1.3 FigureRecognition 16 0.07 0.42 0.06 0.52 0.45 0.55 2.0 NumberFigure 18 0.02 0.31 0.34 0.45 0.41 0.59 2.7 FigureWord 19 0.15 0.25 0.18 0.35 0.23 0.77 2.8 MR1 MR3 MR2 MR4 SS loadings 3.64 2.93 2.67 2.23 Proportion Var 0.15 0.12 0.11 0.09 Cumulative Var 0.15 0.27 0.38 0.48 Proportion Explained 0.32 0.26 0.23 0.19 Cumulative Proportion 0.32 0.57 0.81 1.00 Mean item complexity = 1.9 Test of the hypothesis that 4 factors are sufficient. 
The degrees of freedom for the null model are 276 and the objective function was 11.44 The degrees of freedom for the model are 186 and the objective function was 1.72 The root mean square of the residuals (RMSR) is 0.04 The df corrected root mean square of the residuals is 0.05 Fit based upon off diagonal values = 0.98 Measures of factor score adequacy MR1 MR3 MR2 MR4 Correlation of (regression) scores with factors 0.93 0.87 0.91 0.82 Multiple R square of scores with factors 0.87 0.76 0.83 0.68 Minimum correlation of possible factor scores 0.74 0.52 0.65 0.36 > > #then compare with a maximum likelihood solution using factanal > mle <- factanal(covmat=Harman74.cor$cov,factors=4) > factor.congruence(list(mle,pa,pc,uls,wls)) Factor1 Factor2 Factor3 Factor4 PA1 PA3 PA2 PA4 RC1 RC3 RC2 RC4 Factor1 1.00 0.61 0.46 0.56 1.00 0.61 0.46 0.55 1.00 0.54 0.44 0.47 Factor2 0.61 1.00 0.50 0.61 0.61 1.00 0.50 0.60 0.60 0.99 0.49 0.52 Factor3 0.46 0.50 1.00 0.57 0.46 0.50 1.00 0.56 0.45 0.44 1.00 0.48 Factor4 0.56 0.61 0.57 1.00 0.56 0.62 0.58 1.00 0.55 0.55 0.56 0.99 PA1 1.00 0.61 0.46 0.56 1.00 0.61 0.46 0.55 1.00 0.54 0.44 0.47 PA3 0.61 1.00 0.50 0.62 0.61 1.00 0.50 0.61 0.61 0.99 0.50 0.53 PA2 0.46 0.50 1.00 0.58 0.46 0.50 1.00 0.57 0.46 0.44 1.00 0.49 PA4 0.55 0.60 0.56 1.00 0.55 0.61 0.57 1.00 0.54 0.54 0.55 0.99 RC1 1.00 0.60 0.45 0.55 1.00 0.61 0.46 0.54 1.00 0.53 0.43 0.46 RC3 0.54 0.99 0.44 0.55 0.54 0.99 0.44 0.54 0.53 1.00 0.43 0.47 RC2 0.44 0.49 1.00 0.56 0.44 0.50 1.00 0.55 0.43 0.43 1.00 0.47 RC4 0.47 0.52 0.48 0.99 0.47 0.53 0.49 0.99 0.46 0.47 0.47 1.00 MR1 1.00 0.61 0.46 0.56 1.00 0.61 0.46 0.55 1.00 0.54 0.44 0.47 MR3 0.61 1.00 0.50 0.62 0.61 1.00 0.50 0.61 0.61 0.99 0.50 0.53 MR2 0.46 0.50 1.00 0.58 0.46 0.50 1.00 0.57 0.46 0.44 1.00 0.49 MR4 0.55 0.60 0.56 1.00 0.55 0.61 0.57 1.00 0.54 0.54 0.55 0.99 WLS1 0.98 0.47 0.30 0.40 0.98 0.48 0.30 0.39 0.98 0.41 0.28 0.32 WLS3 0.36 0.95 0.41 0.41 0.36 0.95 0.41 0.39 0.35 0.97 0.41 0.32 WLS2 0.23 0.22 0.95 0.36 0.23 0.22 0.95 0.35 0.22 0.16 0.95 0.28 WLS4 0.28 0.40 0.36 0.94 0.28 0.41 0.37 0.94 0.27 0.36 0.35 0.97 MR1 MR3 MR2 MR4 WLS1 WLS3 WLS2 WLS4 Factor1 1.00 0.61 0.46 0.55 0.98 0.36 0.23 0.28 Factor2 0.61 1.00 0.50 0.60 0.47 0.95 0.22 0.40 Factor3 0.46 0.50 1.00 0.56 0.30 0.41 0.95 0.36 Factor4 0.56 0.62 0.58 1.00 0.40 0.41 0.36 0.94 PA1 1.00 0.61 0.46 0.55 0.98 0.36 0.23 0.28 PA3 0.61 1.00 0.50 0.61 0.48 0.95 0.22 0.41 PA2 0.46 0.50 1.00 0.57 0.30 0.41 0.95 0.37 PA4 0.55 0.61 0.57 1.00 0.39 0.39 0.35 0.94 RC1 1.00 0.61 0.46 0.54 0.98 0.35 0.22 0.27 RC3 0.54 0.99 0.44 0.54 0.41 0.97 0.16 0.36 RC2 0.44 0.50 1.00 0.55 0.28 0.41 0.95 0.35 RC4 0.47 0.53 0.49 0.99 0.32 0.32 0.28 0.97 MR1 1.00 0.61 0.46 0.55 0.98 0.36 0.23 0.28 MR3 0.61 1.00 0.50 0.61 0.48 0.95 0.22 0.41 MR2 0.46 0.50 1.00 0.57 0.30 0.41 0.95 0.37 MR4 0.55 0.61 0.57 1.00 0.39 0.39 0.35 0.94 WLS1 0.98 0.48 0.30 0.39 1.00 0.22 0.09 0.13 WLS3 0.36 0.95 0.41 0.39 0.22 1.00 0.17 0.23 WLS2 0.23 0.22 0.95 0.35 0.09 0.17 1.00 0.20 WLS4 0.28 0.41 0.37 0.94 0.13 0.23 0.20 1.00 > > #note that the order of factors and the sign of some of factors may differ > > #finally, compare the unrotated factor, ml, uls, and wls solutions > wls <- fa(Harman74.cor$cov,4,rotate="none",fm="wls") > pa <- fa(Harman74.cor$cov,4,rotate="none",fm="pa") > minres <- factanal(factors=4,covmat=Harman74.cor$cov,rotation="none") > mle <- fa(Harman74.cor$cov,4,rotate="none",fm="mle") > uls <- fa(Harman74.cor$cov,4,rotate="none",fm="uls") > factor.congruence(list(minres,mle,pa,wls,uls)) Factor1 Factor2 Factor3 Factor4 
ML1 ML2 ML3 ML4 PA1 PA2 PA3 Factor1 1.00 0.11 0.25 0.06 1.00 0.11 0.25 0.06 1.00 -0.04 -0.05 Factor2 0.11 1.00 0.06 0.07 0.11 1.00 0.06 0.07 0.14 0.98 -0.08 Factor3 0.25 0.06 1.00 0.01 0.25 0.06 1.00 0.01 0.30 0.10 0.95 Factor4 0.06 0.07 0.01 1.00 0.06 0.07 0.01 1.00 0.07 0.13 -0.04 ML1 1.00 0.11 0.25 0.06 1.00 0.11 0.25 0.06 1.00 -0.04 -0.05 ML2 0.11 1.00 0.06 0.07 0.11 1.00 0.06 0.07 0.14 0.98 -0.08 ML3 0.25 0.06 1.00 0.01 0.25 0.06 1.00 0.01 0.30 0.10 0.95 ML4 0.06 0.07 0.01 1.00 0.06 0.07 0.01 1.00 0.07 0.13 -0.04 PA1 1.00 0.14 0.30 0.07 1.00 0.14 0.30 0.07 1.00 0.00 0.00 PA2 -0.04 0.98 0.10 0.13 -0.04 0.98 0.10 0.13 0.00 1.00 0.00 PA3 -0.05 -0.08 0.95 -0.04 -0.05 -0.08 0.95 -0.04 0.00 0.00 1.00 PA4 -0.01 -0.08 0.02 0.99 -0.01 -0.08 0.02 0.99 0.00 0.00 0.00 WLS1 1.00 0.14 0.30 0.07 1.00 0.14 0.30 0.07 1.00 0.00 0.00 WLS2 -0.04 0.98 0.09 0.13 -0.04 0.98 0.09 0.13 0.00 1.00 -0.01 WLS3 -0.05 -0.07 0.95 -0.04 -0.05 -0.07 0.95 -0.04 0.00 0.01 1.00 WLS4 -0.01 -0.07 0.02 0.99 -0.01 -0.07 0.02 0.99 0.00 0.01 0.00 ULS1 1.00 0.14 0.30 0.07 1.00 0.14 0.30 0.07 1.00 0.00 0.00 ULS2 -0.04 0.98 0.09 0.13 -0.04 0.98 0.09 0.13 0.00 1.00 0.00 ULS3 -0.05 -0.07 0.95 -0.04 -0.05 -0.07 0.95 -0.04 0.00 0.00 1.00 ULS4 -0.01 -0.08 0.02 0.99 -0.01 -0.08 0.02 0.99 0.00 0.00 0.00 PA4 WLS1 WLS2 WLS3 WLS4 ULS1 ULS2 ULS3 ULS4 Factor1 -0.01 1.00 -0.04 -0.05 -0.01 1.00 -0.04 -0.05 -0.01 Factor2 -0.08 0.14 0.98 -0.07 -0.07 0.14 0.98 -0.07 -0.08 Factor3 0.02 0.30 0.09 0.95 0.02 0.30 0.09 0.95 0.02 Factor4 0.99 0.07 0.13 -0.04 0.99 0.07 0.13 -0.04 0.99 ML1 -0.01 1.00 -0.04 -0.05 -0.01 1.00 -0.04 -0.05 -0.01 ML2 -0.08 0.14 0.98 -0.07 -0.07 0.14 0.98 -0.07 -0.08 ML3 0.02 0.30 0.09 0.95 0.02 0.30 0.09 0.95 0.02 ML4 0.99 0.07 0.13 -0.04 0.99 0.07 0.13 -0.04 0.99 PA1 0.00 1.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 PA2 0.00 0.00 1.00 0.01 0.01 0.00 1.00 0.00 0.00 PA3 0.00 0.00 -0.01 1.00 0.00 0.00 0.00 1.00 0.00 PA4 1.00 0.00 -0.01 0.00 1.00 0.00 0.00 0.00 1.00 WLS1 0.00 1.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 WLS2 -0.01 0.00 1.00 0.00 0.00 0.00 1.00 -0.01 -0.01 WLS3 0.00 0.00 0.00 1.00 0.00 0.00 0.01 1.00 0.00 WLS4 1.00 0.00 0.00 0.00 1.00 0.00 0.01 0.00 1.00 ULS1 0.00 1.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 ULS2 0.00 0.00 1.00 0.01 0.01 0.00 1.00 0.00 0.00 ULS3 0.00 0.00 -0.01 1.00 0.00 0.00 0.00 1.00 0.00 ULS4 1.00 0.00 -0.01 0.00 1.00 0.00 0.00 0.00 1.00 > #in particular, note the similarity of the mle and min res solutions > #note that the order of factors and the sign of some of factors may differ > > > > #an example of where the ML and PA and MR models differ is found in Thurstone.33. 
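> #factor.congruence, used in these comparisons, computes the (Burt/Tucker) coefficient of > #congruence: for loading columns x and y, phi = sum(x*y)/sqrt(sum(x^2)*sum(y^2)). > #A minimal sketch of that formula (not run): > #congruence <- function(x,y) sum(x*y)/sqrt(sum(x^2)*sum(y^2))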
> #compare the first two factors with the 3 factor solution > Thurstone.33 <- as.matrix(Thurstone.33) > mle2 <- fa(Thurstone.33,2,rotate="none",fm="mle") > mle3 <- fa(Thurstone.33,3 ,rotate="none",fm="mle") > pa2 <- fa(Thurstone.33,2,rotate="none",fm="pa") > pa3 <- fa(Thurstone.33,3,rotate="none",fm="pa") > mr2 <- fa(Thurstone.33,2,rotate="none") > mr3 <- fa(Thurstone.33,3,rotate="none") > factor.congruence(list(mle2,mr2,pa2,mle3,pa3,mr3)) ML1 ML2 MR1 MR2 PA1 PA2 ML1 ML2 ML3 PA1 PA2 PA3 MR1 ML1 1.00 0.16 1.00 -0.03 1.00 -0.03 0.98 0.88 0.16 1.00 -0.02 0.00 1.00 ML2 0.16 1.00 0.19 0.97 0.19 0.98 0.03 0.51 -0.79 0.18 0.97 -0.09 0.18 MR1 1.00 0.19 1.00 0.00 1.00 0.00 0.97 0.89 0.14 1.00 0.01 0.01 1.00 MR2 -0.03 0.97 0.00 1.00 0.00 1.00 -0.14 0.32 -0.87 -0.01 0.97 -0.17 -0.01 PA1 1.00 0.19 1.00 0.00 1.00 0.00 0.97 0.89 0.15 1.00 0.01 0.01 1.00 PA2 -0.03 0.98 0.00 1.00 0.00 1.00 -0.14 0.32 -0.87 -0.01 0.98 -0.17 -0.01 ML1 0.98 0.03 0.97 -0.14 0.97 -0.14 1.00 0.76 0.19 0.98 -0.17 -0.14 0.98 ML2 0.88 0.51 0.89 0.32 0.89 0.32 0.76 1.00 -0.01 0.88 0.39 0.26 0.88 ML3 0.16 -0.79 0.14 -0.87 0.15 -0.87 0.19 -0.01 1.00 0.15 -0.78 0.61 0.15 PA1 1.00 0.18 1.00 -0.01 1.00 -0.01 0.98 0.88 0.15 1.00 0.00 0.00 1.00 PA2 -0.02 0.97 0.01 0.97 0.01 0.98 -0.17 0.39 -0.78 0.00 1.00 0.00 0.00 PA3 0.00 -0.09 0.01 -0.17 0.01 -0.17 -0.14 0.26 0.61 0.00 0.00 1.00 0.00 MR1 1.00 0.18 1.00 -0.01 1.00 -0.01 0.98 0.88 0.15 1.00 0.00 0.00 1.00 MR2 -0.02 0.97 0.01 0.96 0.01 0.97 -0.17 0.40 -0.75 0.00 1.00 0.04 0.00 MR3 0.01 -0.14 0.01 -0.22 0.02 -0.22 -0.13 0.24 0.64 0.00 -0.04 1.00 0.00 MR2 MR3 ML1 -0.02 0.01 ML2 0.97 -0.14 MR1 0.01 0.01 MR2 0.96 -0.22 PA1 0.01 0.02 PA2 0.97 -0.22 ML1 -0.17 -0.13 ML2 0.40 0.24 ML3 -0.75 0.64 PA1 0.00 0.00 PA2 1.00 -0.04 PA3 0.04 1.00 MR1 0.00 0.00 MR2 1.00 0.00 MR3 0.00 1.00 > > #f5 <- fa(psychTools::bfi[1:25],5) > #f5 #names are not in ascending numerical order (see note) > #colnames(f5$loadings) <- paste("F",1:5,sep="") > #f5 > > #Get the variance accounted for object from the print function > p <- print(mr3) Factor Analysis using method = minres Call: fa(r = Thurstone.33, nfactors = 3, rotate = "none") Standardized loadings (pattern matrix) based upon correlation matrix MR1 MR2 MR3 h2 u2 com Definitions 0.77 0.00 0.11 0.61 0.392 1.0 Arithmetical_Problems 0.64 0.39 -0.20 0.59 0.406 1.9 Classification 0.73 -0.09 0.18 0.57 0.431 1.2 Artificial_Languange 0.65 0.13 0.20 0.48 0.519 1.3 Antonyms 0.81 -0.26 0.02 0.72 0.278 1.2 Number_Series_Completion 0.58 0.52 -0.13 0.62 0.381 2.1 Analogies 0.74 -0.04 0.25 0.62 0.381 1.2 Logical_Inference 0.75 0.01 -0.04 0.56 0.438 1.0 Paragraph_Reading 0.82 -0.39 -0.39 0.98 0.019 1.9 MR1 MR2 MR3 SS loadings 4.73 0.67 0.37 Proportion Var 0.53 0.07 0.04 Cumulative Var 0.53 0.60 0.64 Proportion Explained 0.82 0.12 0.06 Cumulative Proportion 0.82 0.94 1.00 Mean item complexity = 1.4 Test of the hypothesis that 3 factors are sufficient. 
The degrees of freedom for the null model are 36 and the objective function was 4.85 The degrees of freedom for the model are 12 and the objective function was 0.07 The root mean square of the residuals (RMSR) is 0.02 The df corrected root mean square of the residuals is 0.03 Fit based upon off diagonal values = 1 Measures of factor score adequacy MR1 MR2 MR3 Correlation of (regression) scores with factors 0.97 0.86 0.83 Multiple R square of scores with factors 0.94 0.73 0.68 Minimum correlation of possible factor scores 0.88 0.46 0.37 > round(p$Vaccounted,2) MR1 MR2 MR3 SS loadings 4.73 0.67 0.37 Proportion Var 0.53 0.07 0.04 Cumulative Var 0.53 0.60 0.64 Proportion Explained 0.82 0.12 0.06 Cumulative Proportion 0.82 0.94 1.00 > > #pool data and fa the pooled result (not run) > #datasets.list <- list(bfi[1:500,1:25],bfi[501:1000,1:25], > # bfi[1001:1500,1:25],bfi[1501:2000,1:25],bfi[2001:2500,1:25]) #five different data sets > #temp <- fa.pooled(datasets.list,5) #do 5 factor analyses, pool the results > > > > cleanEx() > nameEx("fa.diagram") > ### * fa.diagram > > flush(stderr()); flush(stdout()) > > ### Name: fa.diagram > ### Title: Graph factor loading matrices > ### Aliases: fa.graph fa.rgraph fa.diagram extension.diagram het.diagram > ### Keywords: multivariate hplot > > ### ** Examples > > > test.simple <- fa(item.sim(16),2,rotate="oblimin") > #if(require(Rgraphviz)) {fa.graph(test.simple) } > fa.diagram(test.simple) > f3 <- fa(Thurstone,3,rotate="cluster") > fa.diagram(f3,cut=.4,digits=2) > f3l <- f3$loadings > fa.diagram(f3l,main="input from a matrix") > Phi <- f3$Phi > fa.diagram(f3l,Phi=Phi,main="Input from a matrix") > fa.diagram(ICLUST(Thurstone,2,title="Two cluster solution of Thurstone"),main="Input from ICLUST") > het.diagram(Thurstone,levels=list(1:4,5:8,3:7)) > > > > cleanEx() > nameEx("fa.extension") > ### * fa.extension > > flush(stderr()); flush(stdout()) > > ### Name: fa.extension > ### Title: Apply Dwyer's factor extension to find factor loadings for > ### extended variables > ### Aliases: fa.extension fa.extend > ### Keywords: multivariate > > ### ** Examples > #The Dwyer Example > Ro <- Dwyer[1:7,1:7] > Roe <- Dwyer[1:7,8] > fo <- fa(Ro,2,rotate="none") > fe <- fa.extension(Roe,fo) > > #an example from simulated data > set.seed(42) > d <- sim.item(12) #two orthogonal factors > R <- cor(d) > Ro <- R[c(1,2,4,5,7,8,10,11),c(1,2,4,5,7,8,10,11)] > Roe <- R[c(1,2,4,5,7,8,10,11),c(3,6,9,12)] > fo <- fa(Ro,2) > fe <- fa.extension(Roe,fo) > fa.diagram(fo,fe=fe) > > #alternatively just specify the original variables and the extension variables > fe = fa.extend(R, 2, ov =c(1,2,4,5,7,8,10,11), ev=c(3,6,9,12)) > fa.diagram(fe$fo, fe = fe$fe) > > #create two correlated factors > fx <- matrix(c(.9,.8,.7,.85,.75,.65,rep(0,12),.9,.8,.7,.85,.75,.65),ncol=2) > Phi <- matrix(c(1,.6,.6,1),2) > sim.data <- sim.structure(fx,Phi,n=1000,raw=TRUE) > R <- cor(sim.data$observed) > Ro <- R[c(1,2,4,5,7,8,10,11),c(1,2,4,5,7,8,10,11)] > Roe <- R[c(1,2,4,5,7,8,10,11),c(3,6,9,12)] > fo <- fa(Ro,2) > fe <- fa.extension(Roe,fo) > fa.diagram(fo,fe=fe) > > #now show how fa.extend works with the same data set > #note that we have to make sure that the variables are in the order to do the factor congruence > fe2 <- fa.extend(sim.data$observed,2,ov=c(1,2,4,5,7,8,10,11),ev=c(3,6,9,12)) > fa.diagram(fe2,main="factor analysis with extension variables") > fa2 <- fa(sim.data$observed[,c(1,2,4,5,7,8,10,11,3,6,9,12)],2) > factor.congruence(fe2,fa2) MR1 MR2 MR1 1.00 0 MR2 0.01 1 > summary(fe2) Factor extensions 
analysis with Call: fa.extend(r = sim.data$observed, nfactors = 2, ov = c(1, 2, 4, 5, 7, 8, 10, 11), ev = c(3, 6, 9, 12)) With factor correlations of MR1 MR2 MR1 1.00 0.59 MR2 0.59 1.00 > > #an example of extending an omega analysis > > > fload <- matrix(c(c(c(.9,.8,.7,.6),rep(0,20)),c(c(.9,.8,.7,.6),rep(0,20)),c(c(.9,.8,.7,.6), + rep(0,20)),c(c(c(.9,.8,.7,.6),rep(0,20)),c(.9,.8,.7,.6))),ncol=5) > gload <- matrix(rep(.7,5)) > five.factor <- sim.hierarchical(gload,fload,500,TRUE) #create sample data set > ss <- c(1,2,3,5,6,7,9,10,11,13,14,15,17,18,19) > Ro <- cor(five.factor$observed[,ss]) > Re <- cor(five.factor$observed[,ss],five.factor$observed[,-ss]) > om5 <-omega(Ro,5) #the omega analysis > om.extend <- fa.extension(Re,om5) #the extension analysis > om.extend #show it Call: fa.extension(Roe = Re, fo = om5) Standardized loadings (pattern matrix) based upon correlation matrix g F1* F2* F3* F4* F5* h2 u2 V4 0.39 -0.04 0.05 0.04 0.36 0.02 0.29 0.71 V8 0.34 0.04 0.01 0.38 -0.04 0.02 0.26 0.74 V12 0.39 0.00 0.05 0.05 0.01 0.34 0.28 0.72 V16 0.38 0.00 0.31 0.03 0.04 0.00 0.24 0.76 V20 0.40 0.35 -0.05 0.11 0.01 0.03 0.30 0.70 g F1* F2* F3* F4* F5* SS loadings 0.72 0.13 0.11 0.16 0.13 0.12 Proportion Var 0.14 0.03 0.02 0.03 0.03 0.02 Cumulative Var 0.14 0.17 0.19 0.22 0.25 0.27 Proportion Explained 0.53 0.09 0.08 0.12 0.10 0.09 Cumulative Proportion 0.53 0.62 0.70 0.81 0.91 1.00 > #now, include it in an omega diagram > combined.om <- rbind(om5$schmid$sl[,1:ncol(om.extend$loadings)],om.extend$loadings) > class(combined.om) <-c("psych","extend") > omega.diagram(combined.om,main="Extended Omega") > > > > > cleanEx() > nameEx("fa.lookup") > ### * fa.lookup > > flush(stderr()); flush(stdout()) > > ### Name: fa.lookup > ### Title: A set of functions for factorial and empirical scale > ### construction > ### Aliases: lookup lookupItems fa.lookup item.lookup keys.lookup > ### lookupFromKeys setCorLookup > ### Keywords: models multivariate > > ### ** Examples > > #The following shows how to create a dictionary > #first, copy the spreadsheet to the clipboard > # bfi.dictionary <- read.clipboard.tab() #read from the clipboard > # rownames(bfi.dictionary) <- bfi.dictionary[1] #the first column had the names > # bfi.dictionary <- bfi.dictionary[-1] #these are redundant, drop them > > f5 <- fa(psychTools::bfi,5) > m <- colMeans(psychTools::bfi,na.rm=TRUE) > item.lookup(f5,m,dictionary=psychTools::bfi.dictionary[2,drop=FALSE]) Warning in `[.data.frame`(psychTools::bfi.dictionary, 2, drop = FALSE) : 'drop' argument will be ignored MR2 MR1 MR3 MR5 MR4 means N1 0.78 0.09 0.00 -0.12 -0.03 2.93 N2 0.76 0.03 0.01 -0.09 0.02 3.51 N3 0.73 -0.06 -0.03 0.06 0.02 3.22 N5 0.53 -0.15 0.01 0.21 -0.17 2.97 N4 0.50 -0.36 -0.13 0.10 0.09 3.19 E4 0.00 0.67 0.02 0.18 -0.07 4.42 E2 0.15 -0.64 -0.02 -0.03 -0.06 3.14 E1 -0.03 -0.54 0.10 -0.07 -0.09 2.97 E3 0.08 0.49 0.00 0.16 0.29 4.00 E5 0.12 0.41 0.27 0.03 0.22 4.42 C2 0.16 -0.05 0.66 0.04 0.04 4.37 C4 0.18 0.06 -0.62 -0.01 -0.04 2.55 C3 0.04 -0.05 0.56 0.08 -0.06 4.30 C5 0.20 -0.12 -0.56 0.01 0.10 3.30 C1 0.07 -0.02 0.55 -0.04 0.15 4.50 A2 0.01 0.07 0.08 0.63 0.03 4.80 A3 0.00 0.23 0.03 0.58 0.03 4.60 A1 0.20 0.19 0.07 -0.51 -0.05 2.41 A5 -0.10 0.32 0.01 0.46 0.05 4.56 A4 -0.04 0.13 0.20 0.40 -0.14 4.70 gender 0.14 0.00 0.11 0.29 -0.16 1.67 age -0.15 -0.12 0.08 0.19 0.08 28.78 O3 0.03 0.17 0.02 0.05 0.62 4.44 O1 0.01 0.13 0.07 -0.03 0.52 4.82 O5 0.13 0.14 -0.03 -0.02 -0.52 2.49 O2 0.20 0.11 -0.08 0.10 -0.44 2.71 O4 0.16 -0.28 -0.02 0.18 0.37 4.89 education -0.10 -0.14 -0.01
0.10 0.15 3.19 Item N1 Get angry easily. N2 Get irritated easily. N3 Have frequent mood swings. N5 Panic easily. N4 Often feel blue. E4 Make friends easily. E2 Find it difficult to approach others. E1 Don't talk a lot. E3 Know how to captivate people. E5 Take charge. C2 Continue until everything is perfect. C4 Do things in a half-way manner. C3 Do things according to a plan. C5 Waste my time. C1 Am exacting in my work. A2 Inquire about others' well-being. A3 Know how to comfort others. A1 Am indifferent to the feelings of others. A5 Make people feel at ease. A4 Love children. gender males=1, females=2 age age in years O3 Carry the conversation to a higher level. O1 Am full of ideas. O5 Will not probe deeply into a subject. O2 Avoid difficult reading material. O4 Spend time reflecting on things. education in HS, fin HS, coll, coll grad , grad deg > #just show the item content, not the source of the items > fa.lookup(f5,dictionary=psychTools::bfi.dictionary[2]) MR2 MR1 MR3 MR5 MR4 com h2 N1 0.78 0.09 0.00 -0.12 -0.03 1.08 0.61 N2 0.76 0.03 0.01 -0.09 0.02 1.03 0.58 N3 0.73 -0.06 -0.03 0.06 0.02 1.03 0.56 N5 0.53 -0.15 0.01 0.21 -0.17 1.70 0.37 N4 0.50 -0.36 -0.13 0.10 0.09 2.15 0.48 E4 0.00 0.67 0.02 0.18 -0.07 1.17 0.55 E2 0.15 -0.64 -0.02 -0.03 -0.06 1.13 0.51 E1 -0.03 -0.54 0.10 -0.07 -0.09 1.17 0.32 E3 0.08 0.49 0.00 0.16 0.29 1.93 0.45 E5 0.12 0.41 0.27 0.03 0.22 2.58 0.39 C2 0.16 -0.05 0.66 0.04 0.04 1.15 0.44 C4 0.18 0.06 -0.62 -0.01 -0.04 1.20 0.46 C3 0.04 -0.05 0.56 0.08 -0.06 1.09 0.31 C5 0.20 -0.12 -0.56 0.01 0.10 1.43 0.43 C1 0.07 -0.02 0.55 -0.04 0.15 1.21 0.33 A2 0.01 0.07 0.08 0.63 0.03 1.07 0.47 A3 0.00 0.23 0.03 0.58 0.03 1.31 0.49 A1 0.20 0.19 0.07 -0.51 -0.05 1.67 0.28 A5 -0.10 0.32 0.01 0.46 0.05 1.93 0.45 A4 -0.04 0.13 0.20 0.40 -0.14 2.03 0.28 gender 0.14 0.00 0.11 0.29 -0.16 2.42 0.12 age -0.15 -0.12 0.08 0.19 0.08 3.49 0.08 O3 0.03 0.17 0.02 0.05 0.62 1.18 0.47 O1 0.01 0.13 0.07 -0.03 0.52 1.17 0.33 O5 0.13 0.14 -0.03 -0.02 -0.52 1.28 0.29 O2 0.20 0.11 -0.08 0.10 -0.44 1.77 0.24 O4 0.16 -0.28 -0.02 0.18 0.37 2.83 0.24 education -0.10 -0.14 -0.01 0.10 0.15 3.47 0.05 Item N1 Get angry easily. N2 Get irritated easily. N3 Have frequent mood swings. N5 Panic easily. N4 Often feel blue. E4 Make friends easily. E2 Find it difficult to approach others. E1 Don't talk a lot. E3 Know how to captivate people. E5 Take charge. C2 Continue until everything is perfect. C4 Do things in a half-way manner. C3 Do things according to a plan. C5 Waste my time. C1 Am exacting in my work. A2 Inquire about others' well-being. A3 Know how to comfort others. A1 Am indifferent to the feelings of others. A5 Make people feel at ease. A4 Love children. gender males=1, females=2 age age in years O3 Carry the conversation to a higher level. O1 Am full of ideas. O5 Will not probe deeply into a subject. O2 Avoid difficult reading material. O4 Spend time reflecting on things. education in HS, fin HS, coll, coll grad , grad deg > > > #show how to use lookupFromKeys > bfi.keys <- + list(agree=c("-A1","A2","A3","A4","A5"),conscientiousness=c("C1","C2","C3","-C4","-C5"), + extraversion=c("-E1","-E2","E3","E4","E5"),neuroticism=c("N1","N2","N3","N4","N5"), + openness = c("O1","-O2","O3","O4","-O5")) > bfi.over <- scoreOverlap(bfi.keys,bfi) #returns the corrected for overlap values > lookupFromKeys(bfi.keys,psychTools::bfi.dictionary,n=5, cors=bfi.over$item.cor) $agree ItemLabel Item Giant3 Big6 A3 q_1206 Know how to comfort others. Cohesion Agreeableness A2 q_1162 Inquire about others' well-being. 
Cohesion Agreeableness A5 q_1419 Make people feel at ease. Cohesion Agreeableness A4 q_1364 Love children. Cohesion Agreeableness A1- q_146 Am indifferent to the feelings of others. Cohesion Agreeableness Little12 Keying IPIP100 cors A3 Compassion 1 B5:A 0.72 A2 Compassion 1 B5:A 0.69 A5 Compassion 1 B5:A 0.64 A4 Compassion 1 B5:A 0.50 A1- Compassion -1 B5:A -0.40 $conscientiousness ItemLabel Item Giant3 Big6 C4- q_626 Do things in a half-way manner. Stability Conscientiousness C2 q_530 Continue until everything is perfect. Stability Conscientiousness C5- q_1949 Waste my time. Stability Conscientiousness C1 q_124 Am exacting in my work. Stability Conscientiousness C3 q_619 Do things according to a plan. Stability Conscientiousness Little12 Keying IPIP100 cors C4- Industriousness -1 B5:C -0.72 C2 Orderliness 1 B5:C 0.68 C5- Industriousness -1 B5:C -0.65 C1 Orderliness 1 B5:C 0.60 C3 Orderliness 1 B5:C 0.59 $extraversion ItemLabel Item Giant3 Big6 E2- q_901 Find it difficult to approach others. Plasticity Extraversion E4 q_1410 Make friends easily. Plasticity Extraversion E3 q_1205 Know how to captivate people. Plasticity Extraversion E1- q_712 Don't talk a lot. Plasticity Extraversion E5 q_1768 Take charge. Plasticity Extraversion Little12 Keying IPIP100 cors E2- Sociability -1 B5:E -0.76 E4 Sociability 1 B5:E 0.74 E3 Assertiveness 1 B5:E 0.66 E1- Sociability -1 B5:E -0.64 E5 Assertiveness 1 B5:E 0.60 $neuroticism ItemLabel Item Giant3 Big6 Little12 N1 q_952 Get angry easily. Stability Emotional Stability Balance N2 q_974 Get irritated easily. Stability Emotional Stability Balance N3 q_1099 Have frequent mood swings. Stability Emotional Stability Balance N4 q_1479 Often feel blue. Stability Emotional Stability Balance N5 q_1505 Panic easily. Stability Emotional Stability Balance Keying IPIP100 cors N1 -1 B5:N 0.76 N2 -1 B5:N 0.75 N3 -1 B5:N 0.74 N4 -1 B5:N 0.62 N5 -1 B5:N 0.55 $openness ItemLabel Item Giant3 Big6 O3 q_492 Carry the conversation to a higher level. Plasticity Openness O1 q_128 Am full of ideas. Plasticity Openness O5- q_1964 Will not probe deeply into a subject. Plasticity Openness O2- q_316 Avoid difficult reading material. Plasticity Openness O4 q_1738 Spend time reflecting on things. Plasticity Openness Little12 Keying IPIP100 cors O3 Intellect 1 B5:O 0.67 O1 Intellect 1 B5:O 0.57 O5- Openness -1 B5:O -0.57 O2- Intellect -1 B5:O -0.48 O4 Openness 1 B5:O 0.36 > #show the keying information > lookupItems("life",psychTools::spi.dictionary) #find those items with "life" in the item item_id item item_scale resp_type B5 q_1281 q_1281 Like a leisurely lifestyle. IPIP reg q_2765 q_2765 Am happy with my life. IPIP reg q_1248 q_1248 Laugh my way through life. IPIP reg q_755 q_755 Enjoy examining myself and my life. IPIP reg q_1052 q_1052 Have a slow pace to my life. IPIP reg q_1371 q_1371 Love life. 
IPIP reg L27 q_1281 EasyGoingness q_2765 WellBeing q_1248 Humor q_755 Introspection q_1052 EasyGoingness q_1371 WellBeing > > > > cleanEx() > nameEx("fa.parallel") > ### * fa.parallel > > flush(stderr()); flush(stdout()) > > ### Name: fa.parallel > ### Title: Scree plots of data or correlation matrix compared to random > ### "parallel" matrices > ### Aliases: fa.parallel fa.parallel.poly plot.poly.parallel > ### Keywords: multivariate > > ### ** Examples > > > #test.data <- Harman74.cor$cov #The 24 variable Holzinger - Harman problem > #fa.parallel(test.data,n.obs=145) > fa.parallel(Thurstone,n.obs=213) #the 9 variable Thurstone problem Parallel analysis suggests that the number of factors = 3 and the number of components = 1 > > #set.seed(123) > #minor <- sim.minor(24,4,400) #4 large and 12 minor factors > #fa.parallel(minor$observed) #shows 5 factors and 4 components -- compare with > #fa.parallel(minor$observed,SMC=FALSE) #which shows 6 factors and 4 components > #a demonstration of parallel analysis of dichotomous variables > #fp <- fa.parallel(psychTools::ability) #use the default Pearson correlation > #fpt <- fa.parallel(psychTools::ability,cor="tet") #do a tetrachoric correlation > #fpt <- fa.parallel(psychTools::ability,cor="tet",quant=.95) #do a tetrachoric correlation and > #use the 95th percentile of the simulated results > #apply(fp$values,2,function(x) quantile(x,.95)) #look at the 95th percentile of values > #apply(fpt$values,2,function(x) quantile(x,.95)) #look at the 95th percentile of values > #describe(fpt$values) #look at all the statistics of the simulated values > > > > > cleanEx() > nameEx("fa.random") > ### * fa.random > > flush(stderr()); flush(stdout()) > > ### Name: fa.random > ### Title: A first approximation to Random Effects Exploratory Factor > ### Analysis > ### Aliases: fa.random > ### Keywords: multivariate models > > ### ** Examples > > fa.ab <- fa(psychTools::ability,4,rotate="none") #normal factor analysis > fa.ab.ip <- fa.random(psychTools::ability,3,rotate="none") > fa.congruence(list(fa.ab,fa.ab.ip,fa.ab.ip$within.r)) MR1 MR2 MR3 MR4 MR1 MR2 MR3 within MR1 1.00 0.00 0.00 0.00 0.09 0.09 0.03 1.00 MR2 0.00 1.00 0.00 0.00 -0.99 -0.03 0.02 0.04 MR3 0.00 0.00 1.00 0.00 0.03 -0.98 -0.10 0.02 MR4 0.00 0.00 0.00 1.00 -0.02 0.10 -0.99 0.01 MR1 0.09 -0.99 0.03 -0.02 1.00 0.00 0.00 0.05 MR2 0.09 -0.03 -0.98 0.10 0.00 1.00 0.00 0.07 MR3 0.03 0.02 -0.10 -0.99 0.00 0.00 1.00 0.02 within 1.00 0.04 0.02 0.01 0.05 0.07 0.02 1.00 > > > > > > cleanEx() > nameEx("fa.sort") > ### * fa.sort > > flush(stderr()); flush(stdout()) > > ### Name: fa.sort > ### Title: Sort factor analysis or principal components analysis loadings > ### Aliases: fa.sort fa.organize > ### Keywords: multivariate > > ### ** Examples > > test.simple <- fa(sim.item(16),2) > fa.sort(test.simple) Factor Analysis using method = minres Call: fa(r = sim.item(16), nfactors = 2) Standardized loadings (pattern matrix) based upon correlation matrix MR1 MR2 h2 u2 com V3 -0.67 0.06 0.45 0.55 1 V12 0.66 -0.01 0.44 0.56 1 V11 0.65 0.06 0.43 0.57 1 V9 0.62 -0.01 0.39 0.61 1 V4 -0.62 0.01 0.38 0.62 1 V1 -0.61 -0.03 0.38 0.62 1 V2 -0.60 0.02 0.36 0.64 1 V10 0.57 0.04 0.33 0.67 1 V8 0.00 0.65 0.43 0.57 1 V6 -0.01 0.65 0.42 0.58 1 V7 0.04 0.62 0.39 0.61 1 V14 0.03 -0.61 0.37 0.63 1 V15 -0.03 -0.60 0.37 0.63 1 V16 0.01 -0.59 0.34 0.66 1 V5 -0.03 0.57 0.33 0.67 1 V13 -0.01 -0.53 0.28 0.72 1 MR1 MR2 SS loadings 3.15 2.92 Proportion Var 0.20 0.18 Cumulative Var 0.20 0.38 Proportion Explained 0.52 0.48 Cumulative
Proportion 0.52 1.00 With factor correlations of MR1 MR2 MR1 1.00 0.08 MR2 0.08 1.00 Mean item complexity = 1 Test of the hypothesis that 2 factors are sufficient. The degrees of freedom for the null model are 120 and the objective function was 4.36 with Chi Square of 2146.44 The degrees of freedom for the model are 89 and the objective function was 0.24 The root mean square of the residuals (RMSR) is 0.03 The df corrected root mean square of the residuals is 0.03 The harmonic number of observations is 500 with the empirical chi square 91.95 with prob < 0.39 The total number of observations was 500 with Likelihood Chi Square = 116.25 with prob < 0.028 Tucker Lewis Index of factoring reliability = 0.982 RMSEA index = 0.025 and the 90 % confidence intervals are 0.009 0.037 BIC = -436.85 Fit based upon off diagonal values = 0.99 Measures of factor score adequacy MR1 MR2 Correlation of (regression) scores with factors 0.92 0.91 Multiple R square of scores with factors 0.84 0.82 Minimum correlation of possible factor scores 0.68 0.65 > fa.organize(test.simple,c(2,1)) #the factors but not the items have been rearranged Factor Analysis using method = minres Call: fa(r = sim.item(16), nfactors = 2) Standardized loadings (pattern matrix) based upon correlation matrix MR2 MR1 h2 u2 com V1 -0.03 -0.61 0.38 0.62 1 V2 0.02 -0.60 0.36 0.64 1 V3 0.06 -0.67 0.45 0.55 1 V4 0.01 -0.62 0.38 0.62 1 V5 0.57 -0.03 0.33 0.67 1 V6 0.65 -0.01 0.42 0.58 1 V7 0.62 0.04 0.39 0.61 1 V8 0.65 0.00 0.43 0.57 1 V9 -0.01 0.62 0.39 0.61 1 V10 0.04 0.57 0.33 0.67 1 V11 0.06 0.65 0.43 0.57 1 V12 -0.01 0.66 0.44 0.56 1 V13 -0.53 -0.01 0.28 0.72 1 V14 -0.61 0.03 0.37 0.63 1 V15 -0.60 -0.03 0.37 0.63 1 V16 -0.59 0.01 0.34 0.66 1 MR2 MR1 SS loadings 2.92 3.15 Proportion Var 0.18 0.20 Cumulative Var 0.18 0.38 Proportion Explained 0.48 0.52 Cumulative Proportion 0.48 1.00 With factor correlations of MR2 MR1 MR2 1.00 0.08 MR1 0.08 1.00 Mean item complexity = 1 Test of the hypothesis that 2 factors are sufficient. 
The degrees of freedom for the null model are 120 and the objective function was 4.36 with Chi Square of 2146.44 The degrees of freedom for the model are 89 and the objective function was 0.24 The root mean square of the residuals (RMSR) is 0.03 The df corrected root mean square of the residuals is 0.03 The harmonic number of observations is 500 with the empirical chi square 91.95 with prob < 0.39 The total number of observations was 500 with Likelihood Chi Square = 116.25 with prob < 0.028 Tucker Lewis Index of factoring reliability = 0.982 RMSEA index = 0.025 and the 90 % confidence intervals are 0.009 0.037 BIC = -436.85 Fit based upon off diagonal values = 0.99 Measures of factor score adequacy MR2 MR1 Correlation of (regression) scores with factors 0.91 0.92 Multiple R square of scores with factors 0.82 0.84 Minimum correlation of possible factor scores 0.65 0.68 > > > > cleanEx() > nameEx("faCor") > ### * faCor > > flush(stderr()); flush(stdout()) > > ### Name: faCor > ### Title: Correlations between two factor analysis solutions > ### Aliases: faCor > ### Keywords: multivariate models > > ### ** Examples > > faCor(Thurstone,nfactors=c(2,3)) #compare two solutions to the Thurstone problem Call: faCor(r = Thurstone, nfactors = c(2, 3)) Factor correlations between the two solutions MR1 MR2 MR3 MR1 0.99 0.59 0.61 MR2 0.62 0.97 0.70 Factor congruence between the two solutions MR1 MR2 MR3 MR1 0.98 0.04 0.28 MR2 0.06 0.92 0.47 > faCor(psychTools::bfi[1:25],nfactors=c(5,5),fm=c("minres","pca")) #compare pca and fa solutions Call: faCor(r = psychTools::bfi[1:25], nfactors = c(5, 5), fm = c("minres", "pca")) Factor correlations between the two solutions TC2 TC1 TC3 TC5 TC4 MR2 0.9910 -0.11 -0.13 -0.092 -0.024 MR1 -0.2533 0.98 0.20 0.173 0.025 MR3 -0.1905 0.22 0.99 0.151 0.142 MR5 0.0061 0.39 0.20 0.966 0.098 MR4 0.0085 0.24 0.17 0.160 0.980 Factor congruence between the two solutions TC2 TC1 TC3 TC5 TC4 MR2 0.99 -0.03 -0.03 -0.08 -0.05 MR1 -0.16 0.98 0.06 0.09 0.00 MR3 -0.09 0.07 1.00 0.06 0.07 MR5 0.03 0.31 0.10 0.98 0.01 MR4 -0.01 0.20 0.07 0.06 0.99 > #compare two levels of factor extraction, and then find the correlations of the scores > faCor(psychTools::bfi[1:25],nfactors=c(3,5)) #based upon linear algebra Call: faCor(r = psychTools::bfi[1:25], nfactors = c(3, 5)) Factor correlations between the two solutions MR2 MR1 MR3 MR5 MR4 MR1 -0.16 0.82 0.30 0.788 0.371 MR2 0.98 -0.34 -0.25 -0.013 0.039 MR3 -0.14 0.16 0.94 0.206 0.494 Factor congruence between the two solutions MR2 MR1 MR3 MR5 MR4 MR1 -0.04 0.77 0.11 0.75 0.28 MR2 0.97 -0.22 -0.13 0.08 0.08 MR3 -0.04 -0.01 0.92 0.06 0.44 > f3 <- fa(psychTools::bfi[1:25],3,scores="tenBerge") > f5 <- fa(psychTools::bfi[1:25],5 ,scores="tenBerge") > cor2(f3$scores,f5$scores) #the correlation between the factor score estimates MR2 MR1 MR3 MR5 MR4 MR1 -0.16 0.82 0.30 0.79 0.37 MR2 0.98 -0.35 -0.25 -0.02 0.05 MR3 -0.13 0.18 0.94 0.20 0.49 > > > > > cleanEx() > nameEx("faMulti") > ### * faMulti > > flush(stderr()); flush(stdout()) > > ### Name: fa.multi > ### Title: Multi level (hierarchical) factor analysis > ### Aliases: fa.multi fa.multi.diagram > ### Keywords: multivariate models > > ### ** Examples > > f31 <- fa.multi(Thurstone,3,1) #compare with \code{omega} > f31 $f1 Factor Analysis using method = minres Call: fa(r = r, nfactors = nfactors, n.obs = n.obs, rotate = rotate, scores = scores, residuals = residuals, SMC = SMC, covar = covar, missing = missing, impute = impute, min.err = min.err, max.iter = max.iter, symmetric = symmetric, warnings = 
warnings, fm = fm, alpha = alpha, oblique.scores = oblique.scores, np.obs = np.obs, use = use, cor = cor) Standardized loadings (pattern matrix) based upon correlation matrix MR1 MR2 MR3 h2 u2 com Sentences 0.90 -0.03 0.04 0.82 0.18 1.0 Vocabulary 0.89 0.06 -0.03 0.84 0.16 1.0 Sent.Completion 0.84 0.03 0.00 0.74 0.26 1.0 First.Letters 0.00 0.85 0.00 0.73 0.27 1.0 Four.Letter.Words -0.02 0.75 0.10 0.63 0.37 1.0 Suffixes 0.18 0.63 -0.08 0.50 0.50 1.2 Letter.Series 0.03 -0.01 0.84 0.73 0.27 1.0 Pedigrees 0.38 -0.05 0.46 0.51 0.49 2.0 Letter.Group -0.06 0.21 0.63 0.52 0.48 1.2 MR1 MR2 MR3 SS loadings 2.65 1.87 1.49 Proportion Var 0.29 0.21 0.17 Cumulative Var 0.29 0.50 0.67 Proportion Explained 0.44 0.31 0.25 Cumulative Proportion 0.44 0.75 1.00 With factor correlations of MR1 MR2 MR3 MR1 1.00 0.59 0.53 MR2 0.59 1.00 0.52 MR3 0.53 0.52 1.00 Mean item complexity = 1.2 Test of the hypothesis that 3 factors are sufficient. The degrees of freedom for the null model are 36 and the objective function was 5.2 The degrees of freedom for the model are 12 and the objective function was 0.01 The root mean square of the residuals (RMSR) is 0.01 The df corrected root mean square of the residuals is 0.01 Fit based upon off diagonal values = 1 Measures of factor score adequacy MR1 MR2 MR3 Correlation of (regression) scores with factors 0.96 0.92 0.90 Multiple R square of scores with factors 0.93 0.85 0.82 Minimum correlation of possible factor scores 0.86 0.71 0.63 $f2 Factor Analysis using method = minres Call: fa(r = f1$Phi, nfactors = nfact2, rotate = rotate, fm = fm) Standardized loadings (pattern matrix) based upon correlation matrix MR1 h2 u2 com MR1 0.78 0.61 0.39 1 MR2 0.76 0.57 0.43 1 MR3 0.68 0.46 0.54 1 MR1 SS loadings 1.65 Proportion Var 0.55 Mean item complexity = 1 Test of the hypothesis that 1 factor is sufficient. 
The degrees of freedom for the null model are 3 and the objective function was 0.86 The degrees of freedom for the model are 0 and the objective function was 0 The root mean square of the residuals (RMSR) is 0 The df corrected root mean square of the residuals is NA Fit based upon off diagonal values = 1 Measures of factor score adequacy MR1 Correlation of (regression) scores with factors 0.89 Multiple R square of scores with factors 0.79 Minimum correlation of possible factor scores 0.58 > fa.multi.diagram(f31) > > > > cleanEx() > nameEx("faRotate") > ### * faRotate > > flush(stderr()); flush(stdout()) > > ### Name: faRotations > ### Title: Multiple rotations of factor loadings to find local minima > ### Aliases: faRotations > ### Keywords: multivariate models > > ### ** Examples > > f5 <- fa(psychTools::bfi[,1:25],5,rotate="none") > faRotations(f5,n.rotations=10) #note that the factor analysis needs to not do the rotation Factor Analysis using method = Call: faRotations(loadings = f5, n.rotations = 10) Standardized loadings (pattern matrix) based upon correlation matrix MR1 MR2 MR3 MR4 MR5 h2 u2 A1 -0.17 0.07 0.06 0.41 -0.21 0.19 0.81 A2 0.00 0.08 -0.03 -0.64 0.02 0.45 0.55 A3 -0.12 0.02 -0.03 -0.66 0.03 0.52 0.48 A4 -0.06 0.19 0.15 -0.43 0.06 0.28 0.72 A5 -0.23 0.01 -0.04 -0.53 0.11 0.46 0.54 C1 0.03 0.55 -0.15 0.02 -0.07 0.33 0.67 C2 0.09 0.67 -0.04 -0.08 -0.15 0.45 0.55 C3 0.06 0.57 0.07 -0.09 -0.03 0.32 0.68 C4 0.00 -0.61 0.05 -0.04 -0.17 0.45 0.55 C5 0.14 -0.55 -0.09 -0.02 -0.19 0.43 0.57 E1 0.56 0.11 0.10 0.08 0.06 0.35 0.65 E2 0.68 -0.02 0.06 0.05 -0.10 0.54 0.46 E3 -0.42 0.00 -0.28 -0.25 -0.08 0.44 0.56 E4 -0.59 0.02 0.08 -0.29 -0.01 0.53 0.47 E5 -0.42 0.27 -0.21 -0.05 -0.15 0.40 0.60 N1 -0.10 0.00 0.05 0.11 -0.81 0.65 0.35 N2 -0.04 0.01 -0.01 0.09 -0.78 0.60 0.40 N3 0.10 -0.04 -0.02 -0.08 -0.71 0.55 0.45 N4 0.39 -0.14 -0.08 -0.09 -0.47 0.49 0.51 N5 0.20 0.00 0.15 -0.21 -0.49 0.35 0.65 O1 -0.10 0.07 -0.51 -0.02 -0.02 0.31 0.69 O2 -0.06 -0.08 0.46 -0.16 -0.19 0.26 0.74 O3 -0.15 0.02 -0.61 -0.08 -0.03 0.46 0.54 O4 0.32 -0.02 -0.37 -0.17 -0.13 0.25 0.75 O5 -0.10 -0.03 0.54 -0.04 -0.13 0.30 0.70 MR1 MR2 MR3 MR4 MR5 SS loadings 2.20 2.03 1.59 1.99 2.57 Proportion Var 0.09 0.08 0.06 0.08 0.10 Cumulative Var 0.09 0.17 0.23 0.31 0.41 Proportion Explained 0.21 0.20 0.15 0.19 0.25 Cumulative Proportion 0.21 0.41 0.56 0.75 1.00 With factor correlations of MR1 MR2 MR3 MR4 MR5 MR1 1.00 -0.23 0.17 0.33 -0.21 MR2 -0.23 1.00 -0.19 -0.20 0.19 MR3 0.17 -0.19 1.00 0.19 -0.01 MR4 0.33 -0.20 0.19 1.00 -0.04 MR5 -0.21 0.19 -0.01 -0.04 1.00 > faRotations(f5$loadings) #matrix input Call: faRotations(loadings = f5$loadings) Standardized loadings (pattern matrix) based upon correlation matrix MR1 MR2 MR3 MR4 MR5 h2 u2 A1 0.21 -0.41 0.17 -0.06 0.07 0.19 0.81 A2 -0.02 0.64 0.00 0.03 0.08 0.45 0.55 A3 -0.03 0.66 0.12 0.03 0.02 0.52 0.48 A4 -0.06 0.43 0.06 -0.15 0.19 0.28 0.72 A5 -0.11 0.53 0.23 0.04 0.01 0.46 0.54 C1 0.07 -0.02 -0.03 0.15 0.55 0.33 0.67 C2 0.15 0.08 -0.09 0.04 0.67 0.45 0.55 C3 0.03 0.09 -0.06 -0.07 0.57 0.32 0.68 C4 0.17 0.04 0.00 -0.05 -0.61 0.45 0.55 C5 0.19 0.02 -0.14 0.09 -0.55 0.43 0.57 E1 -0.06 -0.08 -0.56 -0.10 0.11 0.35 0.65 E2 0.10 -0.05 -0.68 -0.06 -0.02 0.54 0.46 E3 0.08 0.25 0.42 0.28 0.00 0.44 0.56 E4 0.01 0.29 0.59 -0.08 0.02 0.53 0.47 E5 0.15 0.05 0.42 0.21 0.27 0.40 0.60 N1 0.81 -0.11 0.10 -0.05 0.00 0.65 0.35 N2 0.78 -0.09 0.04 0.01 0.01 0.60 0.40 N3 0.71 0.08 -0.10 0.02 -0.04 0.55 0.45 N4 0.47 0.09 -0.39 0.08 -0.14 0.49 0.51 N5 0.49 0.21 -0.20 -0.15 0.00 0.35 0.65 O1 0.02 
0.02 0.10 0.51 0.07 0.31 0.69 O2 0.19 0.16 0.06 -0.46 -0.08 0.26 0.74 O3 0.03 0.08 0.15 0.61 0.02 0.46 0.54 O4 0.13 0.17 -0.32 0.37 -0.02 0.25 0.75 O5 0.13 0.04 0.10 -0.54 -0.03 0.30 0.70 MR1 MR2 MR3 MR4 MR5 SS loadings 2.57 1.99 2.20 1.59 2.03 Proportion Var 0.10 0.08 0.09 0.06 0.08 Cumulative Var 0.10 0.18 0.27 0.33 0.41 Proportion Explained 0.25 0.19 0.21 0.15 0.20 Cumulative Proportion 0.25 0.44 0.65 0.80 1.00 MR1 MR2 MR3 MR4 MR5 MR1 1.00 -0.04 -0.21 -0.01 -0.19 MR2 -0.04 1.00 0.33 0.19 0.20 MR3 -0.21 0.33 1.00 0.17 0.23 MR4 -0.01 0.19 0.17 1.00 0.19 MR5 -0.19 0.20 0.23 0.19 1.00 > geo <- faRotations(f5,rotate="geominQ",n.rotation=10) > # a popular alternative, but more sensitive to local minima > describe(geo$rotation.stats[,1:3]) vars n mean sd median trimmed mad min max range skew kurtosis se hyperplane 1 10 0.61 0 0.61 0.61 0 0.61 0.61 0 Inf NaN 0 fit 2 10 0.10 0 0.10 0.10 0 0.10 0.10 0 0 -2.17 0 complexity 3 10 1.45 0 1.45 1.45 0 1.45 1.45 0 0 -2.19 0 > > > > cleanEx() > nameEx("factor.congruence") > ### * factor.congruence > > flush(stderr()); flush(stdout()) > > ### Name: factor.congruence > ### Title: Coefficient of factor congruence > ### Aliases: factor.congruence fa.congruence > ### Keywords: multivariate models > > ### ** Examples > > #factor congruence of factors and components, both rotated > #fa <- fa(Harman74.cor$cov,4) > #pc <- principal(Harman74.cor$cov,4) > #factor.congruence(fa,pc) > # RC1 RC3 RC2 RC4 > #MR1 0.98 0.41 0.28 0.32 > #MR3 0.35 0.96 0.41 0.31 > #MR2 0.23 0.16 0.95 0.28 > #MR4 0.28 0.38 0.36 0.98 > > > > #factor congruence without rotation > #fa <- fa(Harman74.cor$cov,4,rotate="none") > #pc <- principal(Harman74.cor$cov,4,rotate="none") > #factor.congruence(fa,pc) #just show the beween method congruences > # PC1 PC2 PC3 PC4 > #MR1 1.00 -0.04 -0.06 -0.01 > #MR2 0.15 0.97 -0.01 -0.15 > #MR3 0.31 0.05 0.94 0.11 > #MR4 0.07 0.21 -0.12 0.96 > > #factor.congruence(list(fa,pc)) #this shows the within method congruence as well > > # MR1 MR2 MR3 MR4 PC1 PC2 PC3 PC4 > #MR1 1.00 0.11 0.25 0.06 1.00 -0.04 -0.06 -0.01 > #MR2 0.11 1.00 0.06 0.07 0.15 0.97 -0.01 -0.15 > #MR3 0.25 0.06 1.00 0.01 0.31 0.05 0.94 0.11 > #MR4 0.06 0.07 0.01 1.00 0.07 0.21 -0.12 0.96 > #PC1 1.00 0.15 0.31 0.07 1.00 0.00 0.00 0.00 > #PC2 -0.04 0.97 0.05 0.21 0.00 1.00 0.00 0.00 > #PC3 -0.06 -0.01 0.94 -0.12 0.00 0.00 1.00 0.00 > #PC4 -0.01 -0.15 0.11 0.96 0.00 0.00 0.00 1.00 > > #pa <- fa(Harman74.cor$cov,4,fm="pa") > # factor.congruence(fa,pa) > # PA1 PA3 PA2 PA4 > #Factor1 1.00 0.61 0.46 0.55 > #Factor2 0.61 1.00 0.50 0.60 > #Factor3 0.46 0.50 1.00 0.57 > #Factor4 0.56 0.62 0.58 1.00 > > > #compare with > #round(cor(fa$loading,pc$loading),2) > # RC1 RC3 RC2 RC4 > #MR1 0.99 -0.18 -0.33 -0.34 > #MR3 -0.33 0.96 -0.16 -0.43 > #MR2 -0.29 -0.46 0.98 -0.21 > #MR4 -0.44 -0.30 -0.22 0.98 > > > > cleanEx() > nameEx("factor.fit") > ### * factor.fit > > flush(stderr()); flush(stdout()) > > ### Name: factor.fit > ### Title: How well does the factor model fit a correlation matrix. 
Part of > ### the VSS package > ### Aliases: factor.fit > ### Keywords: models models > > ### ** Examples > > ## Not run: > ##D #compare the fit of 4 to 3 factors for the Harman 24 variables > ##D fa4 <- factanal(x,4,covmat=Harman74.cor$cov) > ##D round(factor.fit(Harman74.cor$cov,fa4$loading),2) > ##D #[1] 0.9 > ##D fa3 <- factanal(x,3,covmat=Harman74.cor$cov) > ##D round(factor.fit(Harman74.cor$cov,fa3$loading),2) > ##D #[1] 0.88 > ##D > ## End(Not run) > > > > > cleanEx() > nameEx("factor.model") > ### * factor.model > > flush(stderr()); flush(stdout()) > > ### Name: factor.model > ### Title: Find R = F F' + U2 is the basic factor model > ### Aliases: factor.model > ### Keywords: multivariate models > > ### ** Examples > > > f2 <- matrix(c(.9,.8,.7,rep(0,6),.6,.7,.8),ncol=2) > mod <- factor.model(f2) > round(mod,2) [,1] [,2] [,3] [,4] [,5] [,6] [1,] 0.81 0.72 0.63 0.00 0.00 0.00 [2,] 0.72 0.64 0.56 0.00 0.00 0.00 [3,] 0.63 0.56 0.49 0.00 0.00 0.00 [4,] 0.00 0.00 0.00 0.36 0.42 0.48 [5,] 0.00 0.00 0.00 0.42 0.49 0.56 [6,] 0.00 0.00 0.00 0.48 0.56 0.64 > > > > cleanEx() > nameEx("factor.residuals") > ### * factor.residuals > > flush(stderr()); flush(stdout()) > > ### Name: factor.residuals > ### Title: R* = R- F F' > ### Aliases: factor.residuals > ### Keywords: multivariate models > > ### ** Examples > > fa2 <- fa(Harman74.cor$cov,2,rotate=TRUE) Specified rotation not found, rotate='none' used > fa2resid <- factor.residuals(Harman74.cor$cov,fa2) > fa2resid[1:4,1:4] #residuals with two factors extracted VisualPerception Cubes PaperFormBoard Flags VisualPerception 0.6533350 0.10174203 0.1656341 0.19205836 Cubes 0.1017420 0.86326270 0.1638187 0.05229164 PaperFormBoard 0.1656341 0.16381874 0.8232249 0.10052094 Flags 0.1920584 0.05229164 0.1005209 0.76340745 > fa4 <- fa(Harman74.cor$cov,4,rotate=TRUE) Specified rotation not found, rotate='none' used > fa4resid <- factor.residuals(Harman74.cor$cov,fa4) > fa4resid[1:4,1:4] #residuals with 4 factors extracted VisualPerception Cubes PaperFormBoard Flags VisualPerception 0.44982015 -0.03531402 -0.01016952 0.04145577 Cubes -0.03531402 0.77015627 0.04339997 -0.04969772 PaperFormBoard -0.01016952 0.04339997 0.66152929 -0.02945683 Flags 0.04145577 -0.04969772 -0.02945683 0.65020432 > > > > > cleanEx() > nameEx("factor.rotate") > ### * factor.rotate > > flush(stderr()); flush(stdout()) > > ### Name: factor.rotate > ### Title: "Hand" rotate a factor loading matrix > ### Aliases: factor.rotate > ### Keywords: multivariate models > > ### ** Examples > > #using the Harman 24 mental tests, rotate the 2nd and 3rd factors 45 degrees > f4<- fa(Harman74.cor$cov,4,rotate="TRUE") Specified rotation not found, rotate='none' used > f4r45 <- factor.rotate(f4,45,2,3) > f4r90 <- factor.rotate(f4r45,45,2,3) > print(factor.congruence(f4,f4r45),digits=3) #poor congruence with original [,1] [,2] [,3] [,4] MR1 1 0.00 0.00 0 MR2 0 0.76 -0.76 0 MR3 0 0.65 0.65 0 MR4 0 0.00 0.00 1 > print(factor.congruence(f4,f4r90),digits=3) #factor 2 and 3 have been exchanged and 3 flipped [,1] [,2] [,3] [,4] MR1 1 0 0 0 MR2 0 0 -1 0 MR3 0 1 0 0 MR4 0 0 0 1 > > #a graphic example > data(Harman23.cor) > f2 <- fa(Harman23.cor$cov,2,rotate="none") > op <- par(mfrow=c(1,2)) > cluster.plot(f2,xlim=c(-1,1),ylim=c(-1,1),title="Unrotated ") > f2r <- factor.rotate(f2,-33,plot=TRUE,xlim=c(-1,1),ylim=c(-1,1),title="rotated -33 degrees") > op <- par(mfrow=c(1,1)) > > > > > graphics::par(get("par.postscript", pos = 'CheckExEnv')) > cleanEx() > nameEx("factor.scores") > ### * factor.scores > > 
flush(stderr()); flush(stdout()) > > ### Name: factor.scores > ### Title: Various ways to estimate factor scores for the factor analysis > ### model > ### Aliases: factor.scores > ### Keywords: multivariate models > > ### ** Examples > > f3 <- fa(Thurstone) > f3$weights #just the scoring weights MR1 Sentences 0.18680979 Vocabulary 0.23056715 Sent.Completion 0.15762958 First.Letters 0.13887845 Four.Letter.Words 0.13244727 Suffixes 0.09233885 Letter.Series 0.10932454 Pedigrees 0.11176173 Letter.Group 0.10401145 > f5 <- fa(psychTools::bfi,5) #this does the factor analyis > my.scores <- factor.scores(psychTools::bfi,f5, method="tenBerge") > #compare the tenBerge factor score correlation to the factor correlations > cor(my.scores$scores,use="pairwise") - f5$Phi #compare to the f5$Phi values MR2 MR1 MR3 MR5 MR4 MR2 -1.110223e-16 -4.070921e-03 5.696257e-03 0.01285203 0.006792444 MR1 -4.070921e-03 -2.220446e-16 1.730549e-02 0.01624656 -0.001683079 MR3 5.696257e-03 1.730549e-02 1.110223e-16 -0.01913838 -0.012481493 MR5 1.285203e-02 1.624656e-02 -1.913838e-02 0.00000000 -0.035165966 MR4 6.792444e-03 -1.683079e-03 -1.248149e-02 -0.03516597 0.000000000 > #compare the default (regression) score correlations to the factor correlations > cor(f5$scores,use="pairwise") - f5$Phi MR2 MR1 MR3 MR5 MR4 MR2 0.000000000 -4.830220e-02 -4.253256e-02 -0.00642898 -4.510998e-03 MR1 -0.048302200 -2.220446e-16 8.547635e-02 0.13531624 7.123388e-02 MR3 -0.042532555 8.547635e-02 1.110223e-16 0.05540426 5.822093e-02 MR5 -0.006428980 1.353162e-01 5.540426e-02 0.00000000 3.003090e-02 MR4 -0.004510998 7.123388e-02 5.822093e-02 0.03003090 -1.110223e-16 > #compare to the f5 solution > > > > > cleanEx() > nameEx("factor.stats") > ### * factor.stats > > flush(stderr()); flush(stdout()) > > ### Name: factor.stats > ### Title: Find various goodness of fit statistics for factor analysis and > ### principal components > ### Aliases: factor.stats fa.stats > ### Keywords: multivariate models > > ### ** Examples > > v9 <- sim.hierarchical() > f3 <- fa(v9,3) > factor.stats(v9,f3,n.obs=500) Warning in fa.stats(r = r, f = f, phi = phi, n.obs = n.obs, np.obs = np.obs, : The estimated weights for the factor scores are probably incorrect. Try a different factor score estimation method. Call: fa.stats(r = r, f = f, phi = phi, n.obs = n.obs, np.obs = np.obs, alpha = alpha, fm = fm) Test of the hypothesis that 3 factors are sufficient. The degrees of freedom for the model is 12 and the fit was 0.5 The number of observations was 500 with Chi Square = 244.88 with prob < 1.6e-45 Measures of factor score adequacy Correlation of scores with factors 1.06 0.95 0.8 Multiple R square of scores with factors 1.13 0.9 0.63 Minimum correlation of factor score estimates 1.25 0.8 0.27 > f3o <- fa(v9,3,fm="pa",rotate="Promax") > factor.stats(v9,f3o,n.obs=500) Call: fa.stats(r = r, f = f, phi = phi, n.obs = n.obs, np.obs = np.obs, alpha = alpha, fm = fm) Test of the hypothesis that 3 factors are sufficient. 
The degrees of freedom for the model is 12 and the fit was 0.44 The number of observations was 500 with Chi Square = 216.22 with prob < 1.4e-39 Measures of factor score adequacy Correlation of scores with factors 1 0.91 0.78 Multiple R square of scores with factors 0.99 0.83 0.61 Minimum correlation of factor score estimates 0.98 0.66 0.21 > > > > > > cleanEx() > nameEx("factor2cluster") > ### * factor2cluster > > flush(stderr()); flush(stdout()) > > ### Name: factor2cluster > ### Title: Extract cluster definitions from factor loadings > ### Aliases: factor2cluster > ### Keywords: multivariate models > > ### ** Examples > > #matches the factanal example > f4 <- fa(Harman74.cor$cov,4,rotate="varimax") > factor2cluster(f4) MR1 MR3 MR2 MR4 VisualPerception 0 1 0 0 Cubes 0 1 0 0 PaperFormBoard 0 1 0 0 Flags 0 1 0 0 GeneralInformation 1 0 0 0 PargraphComprehension 1 0 0 0 SentenceCompletion 1 0 0 0 WordClassification 1 0 0 0 WordMeaning 1 0 0 0 Addition 0 0 1 0 Code 0 0 1 0 CountingDots 0 0 1 0 StraightCurvedCapitals 0 0 1 0 WordRecognition 0 0 0 1 NumberRecognition 0 0 0 1 FigureRecognition 0 0 0 1 ObjectNumber 0 0 0 1 NumberFigure 0 0 0 1 FigureWord 0 0 0 1 Deduction 0 1 0 0 NumericalPuzzles 0 0 1 0 ProblemReasoning 0 1 0 0 SeriesCompletion 0 1 0 0 ArithmeticProblems 0 0 1 0 > > > cleanEx() > nameEx("fisherz") > ### * fisherz > > flush(stderr()); flush(stdout()) > > ### Name: fisherz > ### Title: Transformations of r, d, and t including Fisher r to z and z to > ### r and confidence intervals > ### Aliases: fisherz fisherz2r r.con r2c r2t t2r g2r chi2r r2chi cor2cov > ### Keywords: multivariate models > > ### ** Examples > > > n <- 30 > r <- seq(0,.9,.1) > d <- r2d(r) > rc <- matrix(r.con(r,n),ncol=2) > t <- r*sqrt(n-2)/sqrt(1-r^2) > p <- (1-pt(t,n-2))*2 > r1 <- t2r(t,(n-2)) > r2 <- d2r(d) > chi <- r2chi(r,n) > r3 <- chi2r(chi,n) > r.rc <- data.frame(r=r,z=fisherz(r),lower=rc[,1],upper=rc[,2],t=t,p=p,d=d, + chi2=chi,d2r=r2,t2r=r1,chi2r=r3) > round(r.rc,2) r z lower upper t p d chi2 d2r t2r chi2r 1 0.0 0.00 -0.36 0.36 0.00 1.00 0.00 0.0 0.0 0.0 0.0 2 0.1 0.10 -0.27 0.44 0.53 0.60 0.20 0.3 0.1 0.1 0.1 3 0.2 0.20 -0.17 0.52 1.08 0.29 0.41 1.2 0.2 0.2 0.2 4 0.3 0.31 -0.07 0.60 1.66 0.11 0.63 2.7 0.3 0.3 0.3 5 0.4 0.42 0.05 0.66 2.31 0.03 0.87 4.8 0.4 0.4 0.4 6 0.5 0.55 0.17 0.73 3.06 0.00 1.15 7.5 0.5 0.5 0.5 7 0.6 0.69 0.31 0.79 3.97 0.00 1.50 10.8 0.6 0.6 0.6 8 0.7 0.87 0.45 0.85 5.19 0.00 1.96 14.7 0.7 0.7 0.7 9 0.8 1.10 0.62 0.90 7.06 0.00 2.67 19.2 0.8 0.8 0.8 10 0.9 1.47 0.80 0.95 10.93 0.00 4.13 24.3 0.9 0.9 0.9 > > > > > > cleanEx() > nameEx("fparse") > ### * fparse > > flush(stderr()); flush(stdout()) > > ### Name: fparse > ### Title: Parse and exten formula input from a model and return the DV, > ### IV, and associated terms. > ### Aliases: fparse > ### Keywords: utilities > > ### ** Examples > > fparse(DV ~ IV1 + IV2 * IV2*IV3 + (IV4) + I(IV5^2) ) $y [1] "DV" $x [1] "IV1" "IV2" "IV3" "IV4" $m [1] "IV4" $prod $prod[[1]] [1] "IV2" "IV3" $z NULL $ex [1] "IV5" > #somewhat more complicated > fparse(DV1 + DV2 ~ IV1 + IV2 + IV3*IV4 + I(IV5^2) + I(Iv6^2) + (IV7) + (IV8) - IV9) $y [1] "DV1" "DV2" $x [1] "IV1" "IV2" "IV3" "IV4" "IV7" "IV8" $m [1] "IV7" "IV8" $prod $prod[[1]] [1] "IV3" "IV4" $z [1] "IV9" $ex [1] "IV5" "Iv6" > > > > > cleanEx() > nameEx("geometric.mean") > ### * geometric.mean > > flush(stderr()); flush(stdout()) > > ### Name: geometric.mean > ### Title: Find the geometric mean of a vector or columns of a data.frame. 
> ### Aliases: geometric.mean > ### Keywords: multivariate > > ### ** Examples > > > x <- seq(1,5) > x2 <- x^2 > x2[2] <- NA > X <- data.frame(x,x2) > geometric.mean(x) [1] 2.605171 > geometric.mean(x2) [1] 7.745967 > geometric.mean(X) x x2 2.605171 7.745967 > geometric.mean(X,na.rm=FALSE) x x2 2.605171 NA > > > > > cleanEx() > nameEx("glb.algebraic") > ### * glb.algebraic > > flush(stderr()); flush(stdout()) > > ### Name: glb.algebraic > ### Title: Find the greatest lower bound to reliability. > ### Aliases: glb.algebraic > ### Keywords: multivariate > > ### ** Examples > > > Cv<-matrix(c(215, 64, 33, 22, + 64, 97, 57, 25, + 33, 57,103, 36, + 22, 25, 36, 77),ncol=4) > > Cv # covariance matrix of a test with 4 subtests [,1] [,2] [,3] [,4] [1,] 215 64 33 22 [2,] 64 97 57 25 [3,] 33 57 103 36 [4,] 22 25 36 77 > Cr<-cov2cor(Cv) # Correlation matrix of tests > if(!require(Rcsdp)) {print("Rcsdp must be installed to find the glb.algebraic")} else { + glb.algebraic(Cv) # glb of total score + glb.algebraic(Cr) # glb of sum of standardized scores + + w<-c(1,2,2,1) # glb of weighted total score + glb.algebraic(diag(w) %*% Cv %*% diag(w)) + alphas <- c(0.8,0,0,0) # Internal consistency of first test is known + + glb.algebraic(Cv,LoBounds=alphas*diag(Cv)) + + # Fix all diagonal elements to 1 but the first: + + lb <- glb.algebraic(Cr,LoBounds=c(0,1,1,1),UpBounds=c(1,1,1,1)) + lb$solution[1] # should be the same as the squared mult. corr. + smc(Cr)[1] + } Loading required package: Rcsdp V1 0.2013016 > > > > > cleanEx() detaching ‘package:Rcsdp’ > nameEx("guttman") > ### * guttman > > flush(stderr()); flush(stdout()) > > ### Name: splitHalf > ### Title: Alternative estimates of test reliabiity > ### Aliases: splitHalf guttman tenberge glb glb.fa > ### Keywords: multivariate > > ### ** Examples > > data(attitude) > splitHalf(attitude) Split half reliabilities Call: splitHalf(r = attitude) Maximum split half reliability (lambda 4) = 0.89 Guttman lambda 6 = 0.88 Average split half reliability = 0.85 Guttman lambda 3 (alpha) = 0.84 Guttman lambda 2 = 0.85 Minimum split half reliability (beta) = 0.74 Average interitem r = 0.43 with median = 0.45> splitHalf(attitude,covar=TRUE) #do it on the covariances Split half reliabilities Call: splitHalf(r = attitude, covar = TRUE) Maximum split half reliability (lambda 4) = 0.91 Guttman lambda 6 = 0.88 Average split half reliability = 0.86 Guttman lambda 3 (alpha) = 0.84 Guttman lambda 2 = 0.86 Minimum split half reliability (beta) = 0.76 Average interitem covariance = 57.38 with median = 61.42> temp <- splitHalf(attitude,raw=TRUE) > temp$ci #to show the confidence intervals, you need to specify that raw=TRUE 2.5% 50% 97.5% 0.7512462 0.8252375 0.8914738 > > glb(attitude) $beta [1] 0.7023614 $beta.factor [1] 0.6005961 $alpha.pc [1] 0.7309206 $glb.max [1] 0.9108864 $glb.IC [1] 0.9108864 $glb.Km [1] 0.9108864 $glb.Fa [1] 0.9011208 $r.smc [1] 0.8752151 $tenberge $tenberge$mu0 [1] 0.8390838 $tenberge$mu1 [1] 0.8518997 $tenberge$mu2 [1] 0.8544578 $tenberge$mu3 [1] 0.8549927 $keys IC1 IC2 ICr1 ICr2 K1 K2 F1 F2 f1 f2 rating 1 0 1 0 0 1 0 1 1 0 complaints 1 0 0 1 1 0 0 1 1 0 privileges 1 0 1 0 0 1 1 0 1 0 learning 1 0 1 0 0 1 1 0 1 0 raises 0 1 1 0 0 1 1 0 1 0 critical 0 1 1 0 0 1 1 0 0 1 advance 0 1 0 1 1 0 0 1 0 1 > glb.fa(attitude) $glb [1] 0.9062287 $communality rating complaints privileges learning raises critical advance 0.7733609 0.8981735 0.3500094 0.7634485 0.7666944 0.3268903 0.7836959 $numf [1] 3 $Call glb.fa(r = attitude) > if(require(Rcsdp)) 
{glb.algebraic(cor(attitude)) } Loading required package: Rcsdp $glb [1] 0.9204472 $solution rating complaints privileges learning raises critical advance 0.8381632 1.0000000 0.4528276 0.9011539 0.8384801 0.3085488 0.6775696 $status [1] 0 $Call glb.algebraic(Cov = cor(attitude)) > guttman(attitude) Warning: Guttman has been deprecated. The use of the splitHalf function is recommended Call: guttman(r = attitude) Alternative estimates of reliability Guttman bounds L1 = 0.72 L2 = 0.85 L3 (alpha) = 0.84 L4 (max) = 0.89 L5 = 0.83 L6 (smc) = 0.88 TenBerge bounds mu0 = 0.84 mu1 = 0.85 mu2 = 0.85 mu3 = 0.85 alpha of first PC = 0.85 estimated greatest lower bound based upon communalities= 0.91 beta found by splitHalf = 0.74 > > #to show the histogram of all possible splits for the ability test > #sp <- splitHalf(psychTools::ability,raw=TRUE) #this saves the results > #hist(sp$raw,breaks=101,ylab="SplitHalf reliability",main="SplitHalf > # reliabilities of a test with 16 ability items") > sp <- splitHalf(psychTools::bfi[1:10],key=c(1,9,10)) > > > > > cleanEx() detaching ‘package:Rcsdp’ > nameEx("harmonic.mean") > ### * harmonic.mean > > flush(stderr()); flush(stdout()) > > ### Name: harmonic.mean > ### Title: Find the harmonic mean of a vector, matrix, or columns of a > ### data.frame > ### Aliases: harmonic.mean > ### Keywords: multivariate > > ### ** Examples > > x <- seq(1,5) > x2 <- x^2 > x2[2] <- NA > y <- x - 1 > X <- data.frame(x,x2,y) > harmonic.mean(x) [1] 2.189781 > harmonic.mean(x2) [1] 3.295949 > harmonic.mean(X) x x2 y 2.189781 3.295949 0.000000 > harmonic.mean(X,na.rm=FALSE) x x2 y 2.189781 NA 0.000000 > harmonic.mean(X,zero=FALSE) x x2 y 2.189781 3.295949 1.920000 > > > > > cleanEx() > nameEx("headtail") > ### * headtail > > flush(stderr()); flush(stdout()) > > ### Name: headTail > ### Title: Combine calls to head and tail > ### Aliases: headtail headTail topBottom quickView > ### Keywords: multivariate > > ### ** Examples > > > headTail(psychTools::iqitems,4,8,to=6) #the first 4 and last 6 items from 1 to 6 Warning in rbind(deparse.level, ...) : number of columns of result, 6, is not a multiple of vector length 16 of arg 2 reason.4 reason.16 reason.17 reason.19 letter.7 letter.33 5 3 3 6 3 5 3 6 3 3 4 4 6 2 7 3 4 4 2 6 5 8 4 0 6 1 5 1 ... ... ... ... ... ... ... 1835 4 4 4 4 6 3 1836 4 4 4 6 6 3 1837 3 2 4 5 6 3 1838 4 4 4 6 6 3 1839 4 4 4 6 6 3 1840 2 4 4 3 2 5 1841 4 4 4 6 6 0 1843 4 4 4 4 6 3 > topBottom(psychTools::ability,from =2, to = 6) #the first and last 4 items from 2 to 6 reason.16 reason.17 reason.19 letter.7 letter.33 5 0 0 0 0 1 6 0 1 0 1 0 7 1 1 0 1 0 8 NA 0 0 0 0 1839 1 1 1 1 1 1840 1 1 0 0 0 1841 1 1 1 1 NA 1843 1 1 0 1 1 > headTail(psychTools::bfi,top=4, bottom=4,from =6,to=10) #the first and last 4 from 6 to 10 Warning in rbind(deparse.level, ...) : number of columns of result, 5, is not a multiple of vector length 28 of arg 2 C1 C2 C3 C4 C5 61617 2 3 3 4 4 61618 5 4 4 3 4 61620 4 5 4 2 5 61621 4 4 3 5 5 ... ... ... ... ... ... 
67552 2 3 4 4 3 67556 5 5 5 1 1 67559 5 5 5 2 6 67560 5 5 3 3 3 > #not shown > #quickView(ability,hlength=10,tlength=10) #this loads a spreadsheet like table > > > > cleanEx() > nameEx("iclust.diagram") > ### * iclust.diagram > > flush(stderr()); flush(stdout()) > > ### Name: iclust.diagram > ### Title: Draw an ICLUST hierarchical cluster structure diagram > ### Aliases: iclust.diagram ICLUST.diagram > ### Keywords: multivariate cluster hplot > > ### ** Examples > > v9 <- sim.hierarchical() > v9c <- ICLUST(v9) > test.data <- Harman74.cor$cov > ic.out <- ICLUST(test.data) > #now show how to relabel clusters > ic.bfi <- iclust(psychTools::bfi[1:25],beta=3) #find the clusters > cluster.names <- rownames(ic.bfi$results) #get the old names > #change the names to the desired ones > cluster.names[c(16,19,18,15,20)] <- c("Neuroticism","Extra-Open","Agreeableness", + "Conscientiousness","Open") > #now show the new names > iclust.diagram(ic.bfi,cluster.names=cluster.names,min.size=4,e.size=1.75) > > > > > cleanEx() > nameEx("interp.median") > ### * interp.median > > flush(stderr()); flush(stdout()) > > ### Name: interp.median > ### Title: Find the interpolated sample median, quartiles, or specific > ### quantiles for a vector, matrix, or data frame > ### Aliases: interp.median interp.quantiles interp.quartiles interp.boxplot > ### interp.values interp.qplot.by interp.q interp.quart > ### Keywords: univar > > ### ** Examples > > interp.median(c(1,2,3,3,3)) # compare with median = 3 [1] 2.666667 > interp.median(c(1,2,2,5)) [1] 2 > interp.quantiles(c(1,2,2,5),.25) > x <- sample(10,100,TRUE) > interp.quartiles(x) Q1 Median Q3 3.833333 6.357143 8.500000 > # > x <- c(1,1,2,2,2,3,3,3,3,4,5,1,1,1,2,2,3,3,3,3,4,5,1,1,1,2,2,3,3,3,3,4,2) > y <- c(1,2,3,3,3,3,4,4,4,4,4,1,2,3,3,3,3,4,4,4,4,5,1,5,3,3,3,3,4,4,4,4,4) > x <- x[order(x)] #sort the data by ascending order to make it clearer > y <- y[order(y)] > xv <- interp.values(x) > yv <- interp.values(y) > barplot(x,space=0,xlab="ordinal position",ylab="value") > lines(1:length(x)-.5,xv) > points(c(length(x)/4,length(x)/2,3*length(x)/4),interp.quartiles(x)) > barplot(y,space=0,xlab="ordinal position",ylab="value") > lines(1:length(y)-.5,yv) > points(c(length(y)/4,length(y)/2,3*length(y)/4),interp.quartiles(y)) > data(psychTools::galton) Warning in data(psychTools::galton) : data set ‘psychTools::galton’ not found > galton <- psychTools::galton > interp.median(galton) parent child [1,] 68.32877 68.18333 > interp.qplot.by(galton$child,galton$parent,ylab="child height" + ,xlab="Mid parent height") > > > > > > cleanEx() > nameEx("irt.fa") > ### * irt.fa > > flush(stderr()); flush(stdout()) > > ### Name: irt.fa > ### Title: Item Response Analysis by Exploratory Factor Analysis of > ### tetrachoric/polychoric correlations > ### Aliases: irt.fa irt.select fa2irt > ### Keywords: multivariate models > > ### ** Examples > > ## Not run: > ##D set.seed(17) > ##D d9 <- sim.irt(9,1000,-2.5,2.5,mod="normal") #dichotomous items > ##D test <- irt.fa(d9$items) > ##D test > ##D op <- par(mfrow=c(3,1)) > ##D plot(test,type="ICC") > ##D plot(test,type="IIC") > ##D plot(test,type="test") > ##D par(op) > ##D set.seed(17) > ##D items <- sim.congeneric(N=500,short=FALSE,categorical=TRUE) #500 responses to 4 discrete items > ##D d4 <- irt.fa(items$observed) #item response analysis of congeneric measures > ##D d4 #show just the irt output > ##D d4$fa #show just the factor analysis output > ##D > ##D > ##D op <- par(mfrow=c(2,2)) > ##D plot(d4,type="ICC") > ##D par(op) > ##D > ##D > ##D 
#using the iq data set for an example of real items > ##D #first need to convert the responses to tf > ##D data(iqitems) > ##D iq.keys <- c(4,4,4, 6, 6,3,4,4, 5,2,2,4, 3,2,6,7) > ##D > ##D iq.tf <- score.multiple.choice(iq.keys,psychTools::iqitems,score=FALSE) #just the responses > ##D iq.irt <- irt.fa(iq.tf) > ##D print(iq.irt,short=FALSE) #show the IRT as well as factor analysis output > ##D p.iq <- plot(iq.irt) #save the invisible summary table > ##D p.iq #show the summary table of information by ability level > ##D #select a subset of these variables > ##D small.iq.irt <- irt.select(iq.irt,c(1,5,9,10,11,13)) > ##D small.irt <- irt.fa(small.iq.irt) > ##D plot(small.irt) > ##D #find the information for three subset of iq items > ##D keys <- make.keys(16,list(all=1:16,some=c(1,5,9,10,11,13),others=c(1:5))) > ##D plot(iq.irt,keys=keys) > ## End(Not run) > #compare output to the ltm package or Kamata and Bauer -- these are in logistic units > ls <- irt.fa(lsat6) > #library(ltm) > # lsat.ltm <- ltm(lsat6~z1) > # round(coefficients(lsat.ltm)/1.702,3) #convert to normal (approximation) > # > # Dffclt Dscrmn > #Q1 -1.974 0.485 > #Q2 -0.805 0.425 > #Q3 -0.164 0.523 > #Q4 -1.096 0.405 > #Q5 -1.835 0.386 > > > #Normal results ("Standardized and Marginal")(from Akihito Kamata ) > #Item discrim tau > # 1 0.4169 -1.5520 > # 2 0.4333 -0.5999 > # 3 0.5373 -0.1512 > # 4 0.4044 -0.7723 > # 5 0.3587 -1.1966 > #compare to ls > > #Normal results ("Standardized and conditional") (from Akihito Kamata ) > #item discrim tau > # 1 0.3848 -1.4325 > # 2 0.3976 -0.5505 > # 3 0.4733 -0.1332 > # 4 0.3749 -0.7159 > # 5 0.3377 -1.1264 > #compare to ls$fa and ls$tau > > #Kamata and Bauer (2008) logistic estimates > #1 0.826 2.773 > #2 0.723 0.990 > #3 0.891 0.249 > #4 0.688 1.285 > #5 0.657 2.053 > > > > > > > > > > cleanEx() > nameEx("irt.responses") > ### * irt.responses > > flush(stderr()); flush(stdout()) > > ### Name: irt.responses > ### Title: Plot probability of multiple choice responses as a function of a > ### latent trait > ### Aliases: irt.responses > ### Keywords: multivariate models > > ### ** Examples > > data(psychTools::iqitems) Warning in data(psychTools::iqitems) : data set ‘psychTools::iqitems’ not found > iq.keys <- c(4,4,4, 6,6,3,4,4, 5,2,2,4, 3,2,6,7) > scores <- score.multiple.choice(iq.keys,psychTools::iqitems,score=TRUE,short=FALSE) > #note that for speed we can just do this on simple item counts rather > # than IRT based scores. 
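> #an IRT based alternative (a sketch, not run; assumes psych's scoreIrt scoring
> #function, with iq.tf recoding the multiple choice responses as right/wrong):
> #iq.tf <- score.multiple.choice(iq.keys,psychTools::iqitems,score=FALSE)
> #iq.irt <- irt.fa(iq.tf) #slower than simple item counts
> #iq.theta <- scoreIrt(iq.irt,iq.tf) #IRT based latent trait estimates
> #irt.responses(iq.theta[,1],psychTools::iqitems[1:4],breaks=11)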
> op <- par(mfrow=c(2,2)) #set this to see the output for multiple items > irt.responses(scores$scores,psychTools::iqitems[1:4],breaks=11) > op <- par(op) > > > > graphics::par(get("par.postscript", pos = 'CheckExEnv')) > cleanEx() > nameEx("kaiser") > ### * kaiser > > flush(stderr()); flush(stdout()) > > ### Name: kaiser > ### Title: Apply the Kaiser normalization when rotating factors > ### Aliases: kaiser > ### Keywords: multivariate models > > ### ** Examples > > f3 <- fa(Thurstone,3) > f3n <- kaiser(fa(Thurstone,3,rotate="none")) > f3p <- kaiser(fa(Thurstone,3,rotate="none"),rotate="Promax",m=3) > factor.congruence(list(f3,f3n,f3p)) MR1 MR2 MR3 MR1 MR2 MR3 MR1 MR2 MR3 MR1 1.00 0.06 0.09 1.00 0.10 0.10 1.00 0.07 0.06 MR2 0.06 1.00 0.08 0.04 1.00 0.11 0.03 1.00 0.09 MR3 0.09 0.08 1.00 0.10 0.07 1.00 0.05 0.03 1.00 MR1 1.00 0.04 0.10 1.00 0.08 0.10 1.00 0.06 0.06 MR2 0.10 1.00 0.07 0.08 1.00 0.09 0.07 1.00 0.08 MR3 0.10 0.11 1.00 0.10 0.09 1.00 0.06 0.05 1.00 MR1 1.00 0.03 0.05 1.00 0.07 0.06 1.00 0.05 0.02 MR2 0.07 1.00 0.03 0.06 1.00 0.05 0.05 1.00 0.04 MR3 0.06 0.09 1.00 0.06 0.08 1.00 0.02 0.04 1.00 > > > > cleanEx() > nameEx("kappa") > ### * kappa > > flush(stderr()); flush(stdout()) > > ### Name: cohen.kappa > ### Title: Find Cohen's kappa and weighted kappa coefficients for > ### correlation of two raters > ### Aliases: wkappa cohen.kappa > ### Keywords: multivariate > > ### ** Examples > > > #rating data (with thanks to Tim Bates) > rater1 = c(1,2,3,4,5,6,7,8,9) # rater one's ratings > rater2 = c(1,3,1,6,1,5,5,6,7) # rater two's ratings > cohen.kappa(x=cbind(rater1,rater2)) Call: cohen.kappa1(x = x, w = w, n.obs = n.obs, alpha = alpha, levels = levels) Cohen Kappa and Weighted Kappa correlation coefficients and confidence boundaries lower estimate upper unweighted kappa -0.18 0.00 0.18 weighted kappa 0.43 0.68 0.93 Number of subjects = 9 > > #data matrix taken from Cohen > cohen <- matrix(c( + 0.44, 0.07, 0.09, + 0.05, 0.20, 0.05, + 0.01, 0.03, 0.06),ncol=3,byrow=TRUE) > > #cohen.weights weight differences > cohen.weights <- matrix(c( + 0,1,3, + 1,0,6, + 3,6,0),ncol=3) > > > cohen.kappa(cohen,cohen.weights,n.obs=200) Warning in cohen.kappa1(x, w = w, n.obs = n.obs, alpha = alpha, levels = levels) : upper or lower confidence interval exceed abs(1) and set to +/- 1. Call: cohen.kappa1(x = x, w = w, n.obs = n.obs, alpha = alpha, levels = levels) Cohen Kappa and Weighted Kappa correlation coefficients and confidence boundaries lower estimate upper unweighted kappa 0.39 0.49 0.59 weighted kappa -0.34 0.35 1.00 Number of subjects = 200 > #cohen reports .492 and .348 > > #another set of weights > #what if the weights are non-symmetric > wc <- matrix(c( + 0,1,4, + 1,0,6, + 2,2,0),ncol=3,byrow=TRUE) > cohen.kappa(cohen,wc) Warning in cohen.kappa1(x, w = w, n.obs = n.obs, alpha = alpha, levels = levels) : upper or lower confidence interval exceed abs(1) and set to +/- 1.
Call: cohen.kappa1(x = x, w = w, n.obs = n.obs, alpha = alpha, levels = levels) Cohen Kappa and Weighted Kappa correlation coefficients and confidence boundaries lower estimate upper unweighted kappa -0.92 0.49 1 weighted kappa -1.00 0.35 1 Number of subjects = 1 > #Cohen reports kw = .353 > > cohen.kappa(cohen,n.obs=200) #this uses the squared weights Call: cohen.kappa1(x = x, w = w, n.obs = n.obs, alpha = alpha, levels = levels) Cohen Kappa and Weighted Kappa correlation coefficients and confidence boundaries lower estimate upper unweighted kappa 0.39 0.49 0.59 weighted kappa 0.33 0.45 0.58 Number of subjects = 200 > > fleiss.cohen <- 1 - cohen.weights/9 > cohen.kappa(cohen,fleiss.cohen,n.obs=200) Call: cohen.kappa1(x = x, w = w, n.obs = n.obs, alpha = alpha, levels = levels) Cohen Kappa and Weighted Kappa correlation coefficients and confidence boundaries lower estimate upper unweighted kappa 0.392 0.49 0.59 weighted kappa -0.018 0.35 0.71 Number of subjects = 200 > > #however, Fleiss, Cohen and Everitt weight similarities > fleiss <- matrix(c( + 106, 10,4, + 22,28, 10, + 2, 12, 6),ncol=3,byrow=TRUE) > > #Fleiss weights the similarities > weights <- matrix(c( + 1.0000, 0.0000, 0.4444, + 0.0000, 1.0000, 0.6667, + 0.4444, 0.6667, 1.0000),ncol=3) > > cohen.kappa(fleiss,weights,n.obs=200) Call: cohen.kappa1(x = x, w = w, n.obs = n.obs, alpha = alpha, levels = levels) Cohen Kappa and Weighted Kappa correlation coefficients and confidence boundaries lower estimate upper unweighted kappa 0.32 0.43 0.53 weighted kappa 0.40 0.51 0.61 Number of subjects = 200 > > #another example is comparing the scores of two sets of twins > #data may be a 2 column matrix > #compare weighted and unweighted > #also look at the ICC for this data set. > twins <- matrix(c( + 1, 2, + 2, 3, + 3, 4, + 5, 6, + 6, 7), ncol=2,byrow=TRUE) > cohen.kappa(twins) Warning in cohen.kappa1(x, w = w, n.obs = n.obs, alpha = alpha, levels = levels) : upper or lower confidence interval exceed abs(1) and set to +/- 1. Call: cohen.kappa1(x = x, w = w, n.obs = n.obs, alpha = alpha, levels = levels) Cohen Kappa and Weighted Kappa correlation coefficients and confidence boundaries lower estimate upper unweighted kappa -0.23 -0.14 -0.046 weighted kappa 0.24 0.87 1.000 Number of subjects = 5 > > #data may be explicitly categorical > x <- c("red","yellow","blue","red") > y <- c("red", "blue", "blue" ,"red") > xy.df <- data.frame(x,y) > ck <- cohen.kappa(xy.df) Warning in cohen.kappa1(x, w = w, n.obs = n.obs, alpha = alpha, levels = levels) : upper or lower confidence interval exceed abs(1) and set to +/- 1. > ck Call: cohen.kappa1(x = x, w = w, n.obs = n.obs, alpha = alpha, levels = levels) Cohen Kappa and Weighted Kappa correlation coefficients and confidence boundaries lower estimate upper unweighted kappa 0.098 0.6 1 weighted kappa -1.000 0.0 1 Number of subjects = 4 > ck$agree x2f x1f blue red yellow blue 0.25 0.00 0.00 red 0.00 0.50 0.00 yellow 0.25 0.00 0.00 > > #Example for specifying levels > #The problem of missing categories (from Amy Finnegan) > #We need to specify all the categories possible using the levels option > numbers <- data.frame(rater1=c(6,3,7,8,7), + rater2=c(6,1,8,5,10)) > cohen.kappa(numbers) #compare with the next analysis Warning in cohen.kappa1(x, w = w, n.obs = n.obs, alpha = alpha, levels = levels) : upper or lower confidence interval exceed abs(1) and set to +/- 1. 
Call: cohen.kappa1(x = x, w = w, n.obs = n.obs, alpha = alpha, levels = levels) Cohen Kappa and Weighted Kappa correlation coefficients and confidence boundaries lower estimate upper unweighted kappa -0.16 0.13 0.42 weighted kappa -1.00 0.53 1.00 Number of subjects = 5 > cohen.kappa(numbers,levels=1:10) #specify the number of levels Warning in cohen.kappa1(x, w = w, n.obs = n.obs, alpha = alpha, levels = levels) : upper or lower confidence interval exceed abs(1) and set to +/- 1. Call: cohen.kappa1(x = x, w = w, n.obs = n.obs, alpha = alpha, levels = levels) Cohen Kappa and Weighted Kappa correlation coefficients and confidence boundaries lower estimate upper unweighted kappa -0.16 0.13 0.42 weighted kappa -1.00 0.62 1.00 Number of subjects = 5 > # this leads to a slightly higher weighted kappa > > #finally, input can be a data.frame of ratings from more than two raters > ratings <- matrix(rep(1:5,4),ncol=4) > ratings[1,2] <- ratings[2,3] <- ratings[3,4] <- NA > ratings[2,1] <- ratings[3,2] <- ratings[4,3] <- 1 > ck <- cohen.kappa(ratings) Warning in cohen.kappa1(x1, w = w, n.obs = n.obs, alpha = alpha, levels = levels) : upper or lower confidence interval exceed abs(1) and set to +/- 1. Warning in cohen.kappa1(x1, w = w, n.obs = n.obs, alpha = alpha, levels = levels) : upper or lower confidence interval exceed abs(1) and set to +/- 1. Warning in cohen.kappa1(x1, w = w, n.obs = n.obs, alpha = alpha, levels = levels) : upper or lower confidence interval exceed abs(1) and set to +/- 1. Warning in cohen.kappa1(x1, w = w, n.obs = n.obs, alpha = alpha, levels = levels) : upper or lower confidence interval exceed abs(1) and set to +/- 1. Warning in cohen.kappa1(x1, w = w, n.obs = n.obs, alpha = alpha, levels = levels) : upper or lower confidence interval exceed abs(1) and set to +/- 1.
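> #Added aside (a sketch, not part of the original example): with more than two raters, cohen.kappa finds all pairwise kappas, and the warnings above flag pairwise confidence intervals that were truncated at +/- 1. For any single pair, unweighted kappa is (po - pe)/(1 - pe); e.g., for the 'cohen' proportions matrix defined earlier: > # po <- sum(diag(cohen)) #observed agreement = .70 > # pe <- sum(rowSums(cohen)*colSums(cohen)) #chance agreement = .41 > # (po - pe)/(1 - pe) #.49, the unweighted kappa reported above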
> ck #just show the raw and weighted kappas Cohen Kappa (below the diagonal) and Weighted Kappa (above the diagonal) For confidence intervals and detail print with all=TRUE R1 R2 R3 R4 R1 1.00 0.74 0.67 0.92 R2 0.38 1.00 0.48 1.00 R3 0.67 0.14 1.00 0.80 R4 0.67 1.00 0.50 1.00 Average Cohen kappa for all raters 0.56 Average weighted kappa for all raters 0.77> print(ck, all=TRUE) #show the confidence intervals as well $cohen.kappa R1 R2 R3 R4 R1 1.00 0.74 0.67 0.92 R2 0.38 1.00 0.48 1.00 R3 0.67 0.14 1.00 0.80 R4 0.67 1.00 0.50 1.00 $`R1 R2` Call: cohen.kappa1(x = x1, w = w, n.obs = n.obs, alpha = alpha, levels = levels) Cohen Kappa and Weighted Kappa correlation coefficients and confidence boundaries lower estimate upper unweighted kappa -0.13 0.38 0.89 weighted kappa 0.28 0.74 1.00 Number of subjects = 4 $`R1 R3` Call: cohen.kappa1(x = x1, w = w, n.obs = n.obs, alpha = alpha, levels = levels) Cohen Kappa and Weighted Kappa correlation coefficients and confidence boundaries lower estimate upper unweighted kappa 0.16 0.67 1.00 weighted kappa 0.40 0.67 0.94 Number of subjects = 4 $`R2 R3` Call: cohen.kappa1(x = x1, w = w, n.obs = n.obs, alpha = alpha, levels = levels) Cohen Kappa and Weighted Kappa correlation coefficients and confidence boundaries lower estimate upper unweighted kappa -0.347 0.14 0.63 weighted kappa -0.042 0.48 1.00 Number of subjects = 3 $`R1 R4` Call: cohen.kappa1(x = x1, w = w, n.obs = n.obs, alpha = alpha, levels = levels) Cohen Kappa and Weighted Kappa correlation coefficients and confidence boundaries lower estimate upper unweighted kappa 0.16 0.67 1 weighted kappa 0.76 0.92 1 Number of subjects = 4 $`R2 R4` Call: cohen.kappa1(x = x1, w = w, n.obs = n.obs, alpha = alpha, levels = levels) Cohen Kappa and Weighted Kappa correlation coefficients and confidence boundaries lower estimate upper unweighted kappa 1 1 1 weighted kappa 1 1 1 Number of subjects = 3 $`R3 R4` Call: cohen.kappa1(x = x1, w = w, n.obs = n.obs, alpha = alpha, levels = levels) Cohen Kappa and Weighted Kappa correlation coefficients and confidence boundaries lower estimate upper unweighted kappa -0.11 0.5 1 weighted kappa 0.41 0.8 1 Number of subjects = 3 $av.kappa [1] 0.56 $av.wt [1] 0.77 > > > #In the case of confidence intervals being artificially truncated to +/- 1, it is > #helpful to compare the results of a bootstrap resample > #ck.boot <- function(x,s=1:nrow(x)) {cohen.kappa(x[s,])$kappa} #here x is a two column matrix of ratings, e.g. cbind(rater1,rater2) > #library(boot) > #ckb <- boot(x,ck.boot,R=1000) > #hist(ckb$t) > > > > > cleanEx() > nameEx("logistic") > ### * logistic > > flush(stderr()); flush(stdout()) > > ### Name: logistic > ### Title: Logistic transform from x to p and logit transform from p to x > ### Aliases: logistic logit logistic.grm > ### Keywords: multivariate > > ### ** Examples > > curve(logistic(x,a=1.702),-3,3,ylab="Probability of x", + main="Logistic transform of x",xlab="z score units") > #logistic with a=1.702 is almost the same as pnorm > > curve(pnorm(x),add=TRUE,lty="dashed") > curve(logistic(x),add=TRUE) > text(2,.8, expression(alpha ==1)) > text(2,1.0,expression(alpha==1.7)) > curve(logistic(x),-4,4,ylab="Probability of x", + main = "Logistic transform of x in logit units",xlab="logits") > curve(logistic(x,d=-1),add=TRUE) > curve(logistic(x,d=1),add=TRUE) > curve(logistic(x,c=.2),add=TRUE,lty="dashed") > text(1.3,.5,"d=1") > text(.3,.5,"d=0") > text(-1.5,.5,"d=-1") > text(-3,.3,"c=.2") > #demo of graded response model > curve(logistic.grm(x,r=1),-4,4,ylim=c(0,1),main="Five level response scale", + ylab="Probability of
endorsement",xlab="Latent attribute on logit scale") > curve(logistic.grm(x,r=2),add=TRUE) > curve(logistic.grm(x,r=3),add=TRUE) > curve(logistic.grm(x,r=4),add=TRUE) > curve(logistic.grm(x,r=5),add=TRUE) > > text(-2.,.5,1) > text(-1.,.4,2) > text(0,.4,3) > text(1.,.4,4) > text(2.,.4,5) > > > > cleanEx() > nameEx("lowerUpper") > ### * lowerUpper > > flush(stderr()); flush(stdout()) > > ### Name: lowerUpper > ### Title: Combine two square matrices to have a lower off diagonal for > ### one, upper off diagonal for the other > ### Aliases: lowerUpper > ### Keywords: multivariate > > ### ** Examples > > b1 <- Bechtoldt.1 > b2 <- Bechtoldt.2 > b12 <- lowerUpper(b1,b2) > cor.plot(b12) > diff12 <- lowerUpper(b1,b2,diff=TRUE) > > corPlot(t(diff12),numbers=TRUE,main="Bechtoldt1 and the differences from Bechtoldt2") > > #Compare r and partial r > lower <- lowerCor(sat.act) gendr edctn age ACT SATV SATQ gender 1.00 education 0.09 1.00 age -0.02 0.55 1.00 ACT -0.04 0.15 0.11 1.00 SATV -0.02 0.05 -0.04 0.56 1.00 SATQ -0.17 0.03 -0.03 0.59 0.64 1.00 > upper <- partial.r(sat.act) > both = lowerUpper(lower,upper) > corPlot(both,numbers=TRUE,main="r and partial r for the sat.act data set") > #now show the differences > both = lowerUpper(lower,upper,diff=TRUE) > corPlot(both,numbers=TRUE,main="Differences between r and partial r for the sat.act data set") > > > > cleanEx() > nameEx("make.keys") > ### * make.keys > > flush(stderr()); flush(stdout()) > > ### Name: make.keys > ### Title: Create a keys matrix for use by score.items or cluster.cor > ### Aliases: make.keys keys2list selectFromKeys makePositiveKeys > ### Keywords: multivariate models > > ### ** Examples > > data(attitude) #specify the items by location > key.list <- list(all=c(1,2,3,4,-5,6,7), + first=c(1,2,3), + last=c(4,5,6,7)) > keys <- make.keys(7,key.list,item.labels = colnames(attitude)) > keys all first last rating 1 1 0 complaints 1 1 0 privileges 1 1 0 learning 1 0 1 raises -1 0 1 critical 1 0 1 advance 1 0 1 > #now, undo this > new.keys.list <- keys2list(keys) #note, these are now given the variable names > > select <- selectFromKeys(key.list) > > > #scores <- score.items(keys,attitude) > #scores > > # data(psychTools::bfi) > #first create the keys by location (the conventional way) > keys.list <- list(agree=c(-1,2:5),conscientious=c(6:8,-9,-10), + extraversion=c(-11,-12,13:15),neuroticism=c(16:20),openness = c(21,-22,23,24,-25)) > keys <- make.keys(25,keys.list,item.labels=colnames(psychTools::bfi)[1:25]) > new.keys.list <- keys2list(keys) #these will be in the form of variable names > > #alternatively, create by a mixture of names and locations > keys.list <- list(agree=c("-A1","A2","A3","A4","A5"), + conscientious=c("C1","C2","C2","-C4","-C5"),extraversion=c("-E1","-E2","E3","E4","E5"), + neuroticism=c(16:20),openness = c(21,-22,23,24,-25)) > keys <- make.keys(psychTools::bfi, keys.list) #specify the data file to be scored (bfi) > #or > keys <- make.keys(colnames(psychTools::bfi),keys.list) #specify the names of the variables > #to be used > #or > #specify the number of variables to be scored and their names in all cases > keys <- make.keys(28,keys.list,colnames(psychTools::bfi)) > > > scores <- scoreItems(keys,psychTools::bfi) > summary(scores) Call: scoreItems(keys = keys, items = psychTools::bfi) Scale intercorrelations corrected for attenuation raw correlations below the diagonal, (unstandardized) alpha on the diagonal corrected correlations above the diagonal: agree conscientious extraversion neuroticism openness agree 0.70 0.36 
0.63 -0.245 0.23 conscientious 0.25 0.69 0.37 -0.334 0.33 extraversion 0.46 0.27 0.76 -0.284 0.32 neuroticism -0.18 -0.25 -0.22 0.812 -0.12 openness 0.15 0.21 0.22 -0.086 0.60 > > > > > cleanEx() > nameEx("manhattan") > ### * manhattan > > flush(stderr()); flush(stdout()) > > ### Name: manhattan > ### Title: "Manhattan" plots of correlations with a set of criteria. > ### Aliases: manhattan > ### Keywords: multivariate hplot > > ### ** Examples > > op <- par(mfrow=(c(2,3))) #we want to compare two different sets of plots > manhattan(psychTools::bfi[1:25],psychTools::bfi[26:28] + ,labels=colnames(psychTools::bfi)[1:25], dictionary=psychTools::bfi.dictionary) > manhattan(psychTools::bfi[1:25],psychTools::bfi[26:28],log.p=TRUE, + dictionary=psychTools::bfi.dictionary) > > #Do it again, but now show items by the keys.list > bfi.keys <- + list(agree=c("-A1","A2","A3","A4","A5"),conscientious=c("C1","C2","C3","-C4","-C5"), + extraversion=c("-E1","-E2","E3","E4","E5"),neuroticism=c("N1","N2","N3","N4","N5"), + openness = c("O1","-O2","O3","O4","-O5")) > man <- manhattan(psychTools::bfi[1:25],psychTools::bfi[26:28],keys=bfi.keys, + dictionary=psychTools::bfi.dictionary[1:2]) > manhattan(psychTools::bfi[1:25],psychTools::bfi[26:28],keys=bfi.keys,log.p=TRUE, + dictionary=psychTools::bfi.dictionary[1:2]) > > #Alternatively, use a matrix as input > R <-cor(psychTools::bfi[1:25],psychTools::bfi[26:28],use="pairwise") > manhattan(R,cs(gender,education,age),keys=bfi.keys, + dictionary=psychTools::bfi.dictionary[1:2], raw=FALSE,abs=FALSE) > par(op) #restore the graphic parameters > > > > psychTools::dfOrder(man,1,ascending=FALSE) #print out the items sorted on gender gender education age ItemLabel Item N5 0.21 0.05 0.10 q_1505 Panic easily. A2 0.18 0.01 0.11 q_1162 Inquire about others' well-being. A1 0.16 0.14 0.16 q_146 Am indifferent to the feelings of others. A3 0.14 0.00 0.07 q_1206 Know how to comfort others. A4 0.13 0.02 0.14 q_1364 Love children. E1 0.13 0.00 0.03 q_712 Don't talk a lot. N3 0.12 0.05 0.11 q_1099 Have frequent mood swings. A5 0.10 0.01 0.13 q_1419 Make people feel at ease. N2 0.10 0.05 0.10 q_974 Get irritated easily. O1 0.10 0.03 0.05 q_128 Am full of ideas. C5 0.09 0.03 0.09 q_1949 Waste my time. C4 0.08 0.04 0.15 q_626 Do things in a half-way manner. E4 0.08 0.04 0.01 q_1410 Make friends easily. C2 0.07 0.00 0.02 q_530 Continue until everything is perfect. E5 0.07 0.06 0.11 q_1768 Take charge. C3 0.05 0.05 0.07 q_619 Do things according to a plan. E2 0.05 0.01 0.11 q_901 Find it difficult to approach others. E3 0.05 0.00 0.00 q_1205 Know how to captivate people. N1 0.04 0.05 0.09 q_952 Get angry easily. O3 0.04 0.09 0.04 q_492 Carry the conversation to a higher level. O2 0.03 0.09 0.04 q_316 Avoid difficult reading material. O5 0.02 0.06 0.10 q_1964 Will not probe deeply into a subject. C1 0.01 0.03 0.08 q_124 Am exacting in my work. N4 0.00 0.01 0.03 q_1479 Often feel blue. O4 0.00 0.05 0.01 q_1738 Spend time reflecting on things.
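> #Added illustration (a sketch, not in the original example): the 'man' object returned by manhattan() and sorted above behaves like a data frame, so it can also be filtered directly, e.g. to keep items reaching an arbitrary .1 cutoff with gender (the values are absolute correlations when the default abs=TRUE is used): > # subset(man, gender >= .1) #hypothetical threshold, chosen just for display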
> > > > > graphics::par(get("par.postscript", pos = 'CheckExEnv')) > cleanEx() > nameEx("mat.sort") > ### * mat.sort > > flush(stderr()); flush(stdout()) > > ### Name: mat.sort > ### Title: Sort the elements of a correlation matrix to reflect factor > ### loadings > ### Aliases: mat.sort matSort > ### Keywords: multivariate models > > ### ** Examples > data(Bechtoldt.1) > sorted <- mat.sort(Bechtoldt.1,fa(Bechtoldt.1,5)) > corPlot(sorted,xlas=2) #vertical xaxis names > > > > cleanEx() > nameEx("matrix.addition") > ### * matrix.addition > > flush(stderr()); flush(stdout()) > > ### Name: matrix.addition > ### Title: A function to add two vectors or matrices > ### Aliases: matrix.addition %+% > ### Keywords: multivariate > > ### ** Examples > > > x <- seq(1,4) > z <- x %+% -t(x) > x [1] 1 2 3 4 > z [,1] [,2] [,3] [,4] [1,] 0 -1 -2 -3 [2,] 1 0 -1 -2 [3,] 2 1 0 -1 [4,] 3 2 1 0 > #compare with outer(x,-x,FUN="+") > x <- matrix(seq(1,6),ncol=2) > y <- matrix(seq(1,10),nrow=2) > z <- x %+% y > x [,1] [,2] [1,] 1 4 [2,] 2 5 [3,] 3 6 > y [,1] [,2] [,3] [,4] [,5] [1,] 1 3 5 7 9 [2,] 2 4 6 8 10 > z [,1] [,2] [,3] [,4] [,5] [1,] 8 12 16 20 24 [2,] 10 14 18 22 26 [3,] 12 16 20 24 28 > #but compare this with outer(x ,y,FUN="+") > > > > cleanEx() > nameEx("mediate") > ### * mediate > > flush(stderr()); flush(stdout()) > > ### Name: mediate > ### Title: Estimate and display direct and indirect effects of mediators > ### and moderator in path models > ### Aliases: mediate mediate.diagram moderate.diagram > ### Keywords: multivariate models > > ### ** Examples > > # A simple mediation example is the Tal_Or data set (pmi for Hayes) > #The pmi data set from Hayes is available as the Tal_Or data set. > mod4 <- mediate(reaction ~ cond + (pmi), data =Tal_Or,n.iter=50) > summary(mod4) Call: mediate(y = reaction ~ cond + (pmi), data = Tal_Or, n.iter = 50) Direct effect estimates (traditional regression) (c') X + M on Y reaction se t df Prob Intercept 0.53 0.55 0.96 120 3.40e-01 cond 0.25 0.26 0.99 120 3.22e-01 pmi 0.51 0.10 5.22 120 7.66e-07 R = 0.45 R2 = 0.21 F = 15.56 on 2 and 120 DF p-value: 9.83e-07 Total effect estimates (c) (X on Y) reaction se t df Prob Intercept 3.25 0.19 17.05 121 5.68e-34 cond 0.50 0.28 1.79 121 7.66e-02 'a' effect estimates (X on M) pmi se t df Prob Intercept 5.38 0.16 33.22 121 1.16e-62 cond 0.48 0.24 2.02 121 4.54e-02 'b' effect estimates (M on Y controlling for X) reaction se t df Prob pmi 0.51 0.1 5.22 120 7.66e-07 'ab' effect estimates (through all mediators) reaction boot sd lower upper cond 0.24 0.22 0.12 0.03 0.43 > #Two mediators (from Hayes model 6 (chapter 5)) > mod6 <- mediate(reaction ~ cond + (pmi) + (import), data =Tal_Or,n.iter=50) > summary(mod6) Call: mediate(y = reaction ~ cond + (pmi) + (import), data = Tal_Or, n.iter = 50) Direct effect estimates (traditional regression) (c') X + M on Y reaction se t df Prob Intercept -0.15 0.53 -0.28 119 7.78e-01 cond 0.10 0.24 0.43 119 6.66e-01 pmi 0.40 0.09 4.26 119 4.04e-05 import 0.32 0.07 4.59 119 1.13e-05 R = 0.57 R2 = 0.33 F = 19.11 on 3 and 119 DF p-value: 3.5e-10 Total effect estimates (c) (X on Y) reaction se t df Prob Intercept 3.25 0.19 17.05 121 5.68e-34 cond 0.50 0.28 1.79 121 7.66e-02 'a' effect estimates (X on M) pmi se t df Prob Intercept 5.38 0.16 33.22 121 1.16e-62 cond 0.48 0.24 2.02 121 4.54e-02 import se t df Prob Intercept 3.91 0.21 18.37 121 8.39e-37 cond 0.63 0.31 2.02 121 4.52e-02 'b' effect estimates (M on Y controlling for X) reaction se t df Prob pmi 0.40 0.09 4.26 119 4.04e-05 import 0.32 0.07 4.59 119 
1.13e-05 'ab' effect estimates (through all mediators) reaction boot sd lower upper cond 0.39 0.45 0.19 0.12 0.76 'ab' effects estimates for each mediator for reaction boot sd lower upper cond 0.45 0.19 0.12 0.76 pmi*cond 0.21 0.12 0.02 0.47 import*cond 0.23 0.12 0.06 0.49 > > #Moderated mediation is done for the Garcia (Garcia, 2010) data set. > # (see Hayes, 2013 for the protest data set > #n.iter set to 50 (instead of default of 5000) for speed of example > #no mediation, just an interaction > mod7 <- mediate(liking ~ sexism * prot2 , data=Garcia, n.iter = 50) > summary(mod7) Call: mediate(y = liking ~ sexism * prot2, data = Garcia, n.iter = 50) No mediator specified leads to traditional regression liking se t df Prob Intercept -0.01 0.09 -0.14 125 0.88900 sexism 0.10 0.11 0.86 125 0.39100 prot2 0.49 0.19 2.63 125 0.00958 sexism*prot2 0.83 0.24 3.42 125 0.00084 R = 0.37 R2 = 0.13 F = 6.42 on 3 and 125 DF p-value: 0.000444 > data(GSBE) #The Garcia et al data set (aka GSBE) > mod11.4 <- mediate(liking ~ sexism * prot2 + (respappr), data=Garcia, + n.iter = 50,zero=FALSE) #to match Hayes > summary(mod11.4) Call: mediate(y = liking ~ sexism * prot2 + (respappr), data = Garcia, n.iter = 50, zero = FALSE) Direct effect estimates (traditional regression) (c') X + M on Y liking se t df Prob Intercept 5.35 1.06 5.04 124 1.60e-06 sexism -0.28 0.19 -1.49 124 1.39e-01 prot2 -2.81 1.16 -2.42 124 1.70e-02 sexism*prot2 0.54 0.23 2.36 124 1.97e-02 respappr 0.36 0.07 5.09 124 1.28e-06 R = 0.53 R2 = 0.28 F = 12.26 on 4 and 124 DF p-value: 1.99e-08 Total effect estimates (c) (X on Y) liking se t df Prob Intercept 7.71 1.04 7.37 125 1.99e-11 sexism -0.47 0.20 -2.32 125 2.20e-02 prot2 -3.77 1.25 -3.01 125 3.18e-03 sexism*prot2 0.83 0.24 3.42 125 8.40e-04 'a' effect estimates (X on M) respappr se t df Prob Intercept 6.57 1.21 5.43 125 2.83e-07 sexism -0.53 0.24 -2.24 125 2.67e-02 prot2 -2.69 1.45 -1.85 125 6.65e-02 sexism*prot2 0.81 0.28 2.87 125 4.78e-03 'b' effect estimates (M on Y controlling for X) liking se t df Prob respappr 0.36 0.07 5.09 124 1.28e-06 'ab' effect estimates (through all mediators) liking boot sd lower upper sexism -0.19 -0.18 0.11 -0.38 0.08 prot2 -0.97 -0.98 0.62 -0.38 0.08 sexism*prot2 0.29 0.29 0.13 -0.38 0.08 > #to see this interaction graphically, run the examples in ?Garcia > > > #data from Preacher and Hayes (2004) > sobel <- structure(list(SATIS = c(-0.59, 1.3, 0.02, 0.01, 0.79, -0.35, + -0.03, 1.75, -0.8, -1.2, -1.27, 0.7, -1.59, 0.68, -0.39, 1.33, + -1.59, 1.34, 0.1, 0.05, 0.66, 0.56, 0.85, 0.88, 0.14, -0.72, + 0.84, -1.13, -0.13, 0.2), THERAPY = structure(c(0, 1, 1, 0, 1, + 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, + 1, 1, 1, 0), value.labels = structure(c(1, 0), .Names = c("cognitive", + "standard"))), ATTRIB = c(-1.17, 0.04, 0.58, -0.23, 0.62, -0.26, + -0.28, 0.52, 0.34, -0.09, -1.09, 1.05, -1.84, -0.95, 0.15, 0.07, + -0.1, 2.35, 0.75, 0.49, 0.67, 1.21, 0.31, 1.97, -0.94, 0.11, + -0.54, -0.23, 0.05, -1.07)), .Names = c("SATIS", "THERAPY", "ATTRIB" + ), row.names = c(NA, -30L), class = "data.frame", variable.labels = structure(c("Satisfaction", + "Therapy", "Attributional Positivity"), .Names = c("SATIS", "THERAPY", + "ATTRIB"))) > #n.iter set to 50 (instead of default of 5000) for speed of example > > #There are several forms of input. 
The original specified y, x , and the mediator > #mediate(1,2,3,sobel,n.iter=50) #The example in Preacher and Hayes > #As of October, 2017 we can specify this in a formula mode > mediate (SATIS ~ THERAPY + (ATTRIB),data=sobel, n.iter=50) #specify the mediator by Mediation/Moderation Analysis Call: mediate(y = SATIS ~ THERAPY + (ATTRIB), data = sobel, n.iter = 50) The DV (Y) was SATIS . The IV (X) was THERAPY . The mediating variable(s) = ATTRIB . Total effect(c) of THERAPY on SATIS = 0.76 S.E. = 0.31 t = 2.5 df= 28 with p = 0.019 Direct effect (c') of THERAPY on SATIS removing ATTRIB = 0.43 S.E. = 0.32 t = 1.35 df= 27 with p = 0.19 Indirect effect (ab) of THERAPY on SATIS through ATTRIB = 0.33 Mean bootstrapped indirect effect = 0.3 with standard error = 0.16 Lower CI = 0.06 Upper CI = 0.63 R = 0.56 R2 = 0.31 F = 6.06 on 2 and 27 DF p-value: 0.00272 To see the longer output, specify short = FALSE in the print statement or ask for the summary> # adding parentheses > > #A.C. Kerchoff, (1974) Ambition and Attainment: A Study of Four Samples of American Boys. > #Data from sem package taken from Kerckhoff (and in turn, from Lisrel manual) > R.kerch <- structure(list(Intelligence = c(1, -0.1, 0.277, 0.25, 0.572, + 0.489, 0.335), Siblings = c(-0.1, 1, -0.152, -0.108, -0.105, + -0.213, -0.153), FatherEd = c(0.277, -0.152, 1, 0.611, 0.294, + 0.446, 0.303), FatherOcc = c(0.25, -0.108, 0.611, 1, 0.248, 0.41, + 0.331), Grades = c(0.572, -0.105, 0.294, 0.248, 1, 0.597, 0.478 + ), EducExp = c(0.489, -0.213, 0.446, 0.41, 0.597, 1, 0.651), + OccupAsp = c(0.335, -0.153, 0.303, 0.331, 0.478, 0.651, 1 + )), .Names = c("Intelligence", "Siblings", "FatherEd", "FatherOcc", + "Grades", "EducExp", "OccupAsp"), class = "data.frame", row.names = c("Intelligence", + "Siblings", "FatherEd", "FatherOcc", "Grades", "EducExp", "OccupAsp" + )) > > #n.iter set to 50 (instead of default of 5000) for speed of demo > #mod.k <- mediate("OccupAsp","Intelligence",m= c(2:5),data=R.kerch,n.obs=767,n.iter=50) > #new style > mod.k <- mediate(OccupAsp ~ Intelligence + (Siblings) + (FatherEd) + (FatherOcc) + + (Grades), data = R.kerch, n.obs=767, n.iter=50) The replication data matrices were simulated based upon the specified number of subjects and the observed correlation matrix. > > mediate.diagram(mod.k) > #print the path values > mod.k Mediation/Moderation Analysis Call: mediate(y = OccupAsp ~ Intelligence + (Siblings) + (FatherEd) + (FatherOcc) + (Grades), data = R.kerch, n.obs = 767, n.iter = 50) The DV (Y) was OccupAsp . The IV (X) was Intelligence . The mediating variable(s) = Siblings FatherEd FatherOcc Grades . Total effect(c) of Intelligence on OccupAsp = 0.34 S.E. = 0.03 t = 9.83 df= 765 with p = 1.4e-21 Direct effect (c') of Intelligence on OccupAsp removing Siblings FatherEd FatherOcc Grades = 0.05 S.E. 
= 0.04 t = 1.29 df= 761 with p = 0.2 Indirect effect (ab) of Intelligence on OccupAsp through Siblings FatherEd FatherOcc Grades = 0.29 Mean bootstrapped indirect effect = 0.27 with standard error = 0.03 Lower CI = 0.21 Upper CI = 0.32 R = 0.54 R2 = 0.29 F = 61.36 on 5 and 761 DF p-value: 4.85e-62 To see the longer output, specify short = FALSE in the print statement or ask for the summary> > #Compare the following solution to the path coefficients found by the sem package > > #mod.k2 <- mediate(y="OccupAsp",x=c("Intelligence","Siblings","FatherEd","FatherOcc"), > # m= c(5:6),data=R.kerch,n.obs=767,n.iter=50) > #new format > mod.k2 <- mediate(OccupAsp ~ Intelligence + Siblings + FatherEd + FatherOcc + (Grades) + + (EducExp),data=R.kerch, n.obs=767, n.iter=50) The replication data matrices were simulated based upon the specified number of subjects and the observed correlation matrix. > mediate.diagram(mod.k2,show.c=FALSE) #simpler output > #print the path values > mod.k2 Mediation/Moderation Analysis Call: mediate(y = OccupAsp ~ Intelligence + Siblings + FatherEd + FatherOcc + (Grades) + (EducExp), data = R.kerch, n.obs = 767, n.iter = 50) The DV (Y) was OccupAsp . The IV (X) was Intelligence Siblings FatherEd FatherOcc . The mediating variable(s) = Grades EducExp . Total effect(c) of Intelligence on OccupAsp = 0.25 S.E. = 0.03 t = 7.29 df= 762 with p = 7.6e-13 Direct effect (c') of Intelligence on OccupAsp removing Grades EducExp = -0.04 S.E. = 0.03 t = -1.16 df= 760 with p = 0.25 Indirect effect (ab) of Intelligence on OccupAsp through Grades EducExp = 0.29 Mean bootstrapped indirect effect = 0.28 with standard error = 0.02 Lower CI = 0.22 Upper CI = 0.32 Total effect(c) of Siblings on OccupAsp = -0.09 S.E. = 0.03 t = -2.78 df= 762 with p = 0.0056 Direct effect (c') of Siblings on OccupAsp removing Grades EducExp = -0.02 S.E. = 0.03 t = -0.68 df= 760 with p = 0.5 Indirect effect (ab) of Siblings on OccupAsp through Grades EducExp = -0.07 Mean bootstrapped indirect effect = -0.09 with standard error = 0.02 Lower CI = -0.13 Upper CI = -0.05 Total effect(c) of FatherEd on OccupAsp = 0.1 S.E. = 0.04 t = 2.36 df= 762 with p = 0.018 Direct effect (c') of FatherEd on OccupAsp removing Grades EducExp = -0.04 S.E. = 0.04 t = -1.16 df= 760 with p = 0.25 Indirect effect (ab) of FatherEd on OccupAsp through Grades EducExp = 0.14 Mean bootstrapped indirect effect = 0.14 with standard error = 0.03 Lower CI = 0.09 Upper CI = 0.19 Total effect(c) of FatherOcc on OccupAsp = 0.2 S.E. = 0.04 t = 4.8 df= 762 with p = 1.9e-06 Direct effect (c') of FatherOcc on OccupAsp removing Grades EducExp = 0.1 S.E. 
= 0.03 t = 2.85 df= 760 with p = 0.0044 Indirect effect (ab) of FatherOcc on OccupAsp through Grades EducExp = 0.1 Mean bootstrapped indirect effect = 0.1 with standard error = 0.02 Lower CI = 0.07 Upper CI = 0.14 R = 0.67 R2 = 0.44 F = 100.9 on 6 and 760 DF p-value: 4.88e-104 To see the longer output, specify short = FALSE in the print statement or ask for the summary> > #Several interesting test cases are taken from analyses of the Spengler data set > #This is temporarily added to psych from psychTools to help build for CRAN > #Although the sample sizes are actually very large in the first wave, I use the > #sample sizes from the last wave > #We set the n.iter to be 50 instead of the default value of 5,000 > if(require("psychTools")) { + mod1 <- mediate(Income.50 ~ IQ + Parental+ (Ed.11) ,data=Spengler, + n.obs = 1952, n.iter=50) + mod2 <- mediate(Income.50 ~ IQ + Parental+ (Ed.11) + (Income.11) + ,data=Spengler,n.obs = 1952, n.iter=50) + + #Now, compare these models + anova(mod1,mod2) + + #Current version does not support two DVs + #mod22 <- mediate(Income.50 + Educ.50 ~ IQ + Parental+ (Ed.11) + (Income.11) + # ,data=Spengler,n.obs = 1952, n.iter=50) + + } Loading required package: psychTools The replication data matrices were simulated based upon the specified number of subjects and the observed correlation matrix. The replication data matrices were simulated based upon the specified number of subjects and the observed correlation matrix. Model 1 = mediate(y = Income.50 ~ IQ + Parental + (Ed.11), data = Spengler, n.obs = 1952, n.iter = 50) Model 2 = mediate(y = Income.50 ~ IQ + Parental + (Ed.11) + (Income.11), data = Spengler, n.obs = 1952, n.iter = 50) Res Df Res SS Diff df Diff SS F Pr(F > ) 1 1948 1565.401 NA NA NA NA 2 1947 1547.155 1 18.2464 22.96198 1.777401e-06 > > > > > cleanEx() detaching ‘package:psychTools’ > nameEx("misc") > ### * misc > > flush(stderr()); flush(stdout()) > > ### Name: psych.misc > ### Title: Miscellaneous helper functions for the psych package > ### Aliases: psych.misc misc tableF lowerCor lowerMat progressBar reflect > ### shannon test.all cor2 levels2numeric char2numeric nchar2numeric > ### isCorrelation isCovariance fromTo cs acs > ### Keywords: multivariate > > ### ** Examples > > lowerMat(Thurstone) Sntnc Vcblr Snt.C Frs.L F.L.W Sffxs Ltt.S Pdgrs Ltt.G Sentences 1.00 Vocabulary 0.83 1.00 Sent.Completion 0.78 0.78 1.00 First.Letters 0.44 0.49 0.46 1.00 Four.Letter.Words 0.43 0.46 0.42 0.67 1.00 Suffixes 0.45 0.49 0.44 0.59 0.54 1.00 Letter.Series 0.45 0.43 0.40 0.38 0.40 0.29 1.00 Pedigrees 0.54 0.54 0.53 0.35 0.37 0.32 0.56 1.00 Letter.Group 0.38 0.36 0.36 0.42 0.45 0.32 0.60 0.45 1.00 > lb <- lowerCor(psychTools::bfi[1:10]) #finds and prints the lower correlation matrix, A1 A2 A3 A4 A5 C1 C2 C3 C4 C5 A1 1.00 A2 -0.34 1.00 A3 -0.27 0.49 1.00 A4 -0.15 0.34 0.36 1.00 A5 -0.18 0.39 0.50 0.31 1.00 C1 0.03 0.09 0.10 0.09 0.12 1.00 C2 0.02 0.14 0.14 0.23 0.11 0.43 1.00 C3 -0.02 0.19 0.13 0.13 0.13 0.31 0.36 1.00 C4 0.13 -0.15 -0.12 -0.15 -0.13 -0.34 -0.38 -0.34 1.00 C5 0.05 -0.12 -0.16 -0.24 -0.17 -0.25 -0.30 -0.34 0.48 1.00 > # returns the square matrix. > #fiml <- corFiml(psychTools::bfi[1:10]) #FIML correlations require lavaan package > #lowerMat(fiml) #to get pretty output > f3 <- fa(Thurstone,3) > f3r <- reflect(f3,2) #reflect the second factor > #find the complexity of the response patterns of the iqitems. 
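> #Added aside (a sketch, not part of the original example): shannon() measures each item's response diversity as H = -sum(p * log(p)), by default in base 2 (bits), so a single item can be checked by hand: > # p <- table(psychTools::iqitems[,"reason.4"]) #response frequencies > # p <- p/sum(p) #convert to proportions > # -sum(p * log2(p)) #should reproduce the reason.4 value printed below, up to the handling of missing responses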
> round(shannon(psychTools::iqitems),2) reason.4 reason.16 reason.17 reason.19 letter.7 letter.33 letter.34 letter.58 1.78 1.52 1.62 1.86 1.88 2.01 1.89 2.29 matrix.45 matrix.46 matrix.47 matrix.55 rotate.3 rotate.4 rotate.6 rotate.8 2.03 2.10 1.92 2.40 2.89 2.76 2.72 2.93 > #test.all('BinNor') #Does the BinNor package work when we are using other packages > bestItems(lb,"A3",cut=.1,dictionary=psychTools::bfi.dictionary[1:2],raw=FALSE) $A3 A3 ItemLabel Item A3 1.00 q_1206 Know how to comfort others. A5 0.50 q_1419 Make people feel at ease. A2 0.49 q_1162 Inquire about others' well-being. A4 0.36 q_1364 Love children. A1 -0.27 q_146 Am indifferent to the feelings of others. C5 -0.16 q_1949 Waste my time. C2 0.14 q_530 Continue until everything is perfect. C3 0.13 q_619 Do things according to a plan. C4 -0.12 q_626 Do things in a half-way manner. C1 0.10 q_124 Am exacting in my work. > #to make this a latex table > #df2latex(bestItems(lb,2,cut=.2)) > # > data(psychTools::bfi.dictionary) Warning in data(psychTools::bfi.dictionary) : data set ‘psychTools::bfi.dictionary’ not found > f2 <- fa(psychTools::bfi[1:10],2) > fa.lookup(f2,psychTools::bfi.dictionary) MR1 MR2 com h2 ItemLabel Item C4 -0.65 0.00 1.00 0.42 q_626 Do things in a half-way manner. C2 0.64 -0.01 1.00 0.40 q_530 Continue until everything is perfect. C1 0.57 -0.06 1.02 0.31 q_124 Am exacting in my work. C5 -0.56 -0.06 1.02 0.34 q_1949 Waste my time. C3 0.54 0.03 1.01 0.31 q_619 Do things according to a plan. A3 -0.03 0.76 1.00 0.56 q_1206 Know how to comfort others. A2 0.01 0.68 1.00 0.46 q_1162 Inquire about others' well-being. A5 0.03 0.60 1.00 0.37 q_1419 Make people feel at ease. A4 0.14 0.44 1.22 0.25 q_1364 Love children. A1 0.08 -0.41 1.08 0.15 q_146 Am indifferent to the feelings of others. 
Giant3 Big6 Little12 Keying IPIP100 C4 Stability Conscientiousness Industriousness -1 B5:C C2 Stability Conscientiousness Orderliness 1 B5:C C1 Stability Conscientiousness Orderliness 1 B5:C C5 Stability Conscientiousness Industriousness -1 B5:C C3 Stability Conscientiousness Orderliness 1 B5:C A3 Cohesion Agreeableness Compassion 1 B5:A A2 Cohesion Agreeableness Compassion 1 B5:A A5 Cohesion Agreeableness Compassion 1 B5:A A4 Cohesion Agreeableness Compassion 1 B5:A A1 Cohesion Agreeableness Compassion -1 B5:A > > sa1 <-sat.act[1:2] > sa2 <- sat.act[3:4] > sa3 <- sat.act[5:6] > cor2(sa1,sa2) age ACT gender -0.02 -0.04 education 0.55 0.15 > cor2(list(sa1,sa2)) #show within set and between set cors gendr edctn age ACT gender 1.00 education 0.09 1.00 age -0.02 0.55 1.00 ACT -0.04 0.15 0.11 1.00 > cor2(list(sa1,sa2,sa3)) gendr edctn age ACT SATV SATQ gender 1.00 education 0.09 1.00 age -0.02 0.55 1.00 ACT -0.04 0.15 0.11 1.00 SATV -0.02 0.05 -0.04 0.56 1.00 SATQ -0.17 0.03 -0.03 0.59 0.64 1.00 > lowerCor(fromTo(sat.act,"ACT","SATQ")) #show some correlations ACT SATV SATQ ACT 1.00 SATV 0.56 1.00 SATQ 0.59 0.64 1.00 > vect <- cs(ACT,SATQ) #skip the quotes > vect #they are in this vector [1] "ACT" "SATQ" > #to combine longer terms > vect <- cs("Here is a longish",vector, that, we ,"want to combine", into, several) > vect [1] "Here is a longish" "vector" "that" [4] "we" "want to combine" "into" [7] "several" > temp <- acs("Here is a longish",vector, that, we ,"want to combine", into, one) > temp [1] "Here is a longish vector that we want to combine into one" > lowerCor(fromTo(sat.act,cs(ACT,SATQ))) ACT SATV SATQ ACT 1.00 SATV 0.56 1.00 SATQ 0.59 0.64 1.00 > > > > > > > cleanEx() > nameEx("mixed.cor") > ### * mixed.cor > > flush(stderr()); flush(stdout()) > > ### Name: mixedCor > ### Title: Find correlations for mixtures of continuous, polytomous, and > ### dichotomous variables > ### Aliases: mixedCor mixed.cor > ### Keywords: multivariate models > > ### ** Examples > > data(bfi) > r <- mixedCor(data=psychTools::bfi[,c(1:5,26,28)]) > r Call: mixedCor(data = psychTools::bfi[, c(1:5, 26, 28)]) A1 A2 A3 A4 A5 gendr age A1 1.00 A2 -0.41 1.00 A3 -0.32 0.56 1.00 A4 -0.18 0.39 0.41 1.00 A5 -0.23 0.45 0.57 0.36 1.00 gender -0.23 0.25 0.20 0.20 0.14 1.00 age -0.17 0.12 0.07 0.16 0.14 0.06 1.00 > #this is the same as > r <- mixedCor(data=psychTools::bfi,p=1:5,c=28,d=26) > r #note how the variable order reflects the original order in the data Call: mixedCor(data = psychTools::bfi, c = 28, p = 1:5, d = 26) A1 A2 A3 A4 A5 gendr age A1 1.00 A2 -0.41 1.00 A3 -0.32 0.56 1.00 A4 -0.18 0.39 0.41 1.00 A5 -0.23 0.45 0.57 0.36 1.00 gender -0.23 0.25 0.20 0.20 0.14 1.00 age -0.17 0.12 0.07 0.16 0.14 0.06 1.00 > #compare to raw Pearson > #note that the biserials and polychorics are not attenuated > rp <- cor(psychTools::bfi[c(1:5,26,28)],use="pairwise") > lowerMat(rp) A1 A2 A3 A4 A5 gendr age A1 1.00 A2 -0.34 1.00 A3 -0.27 0.49 1.00 A4 -0.15 0.34 0.36 1.00 A5 -0.18 0.39 0.50 0.31 1.00 gender -0.16 0.18 0.14 0.13 0.10 1.00 age -0.16 0.11 0.07 0.14 0.13 0.05 1.00 > > > > cleanEx() > nameEx("mssd") > ### * mssd > > flush(stderr()); flush(stdout()) > > ### Name: mssd > ### Title: Find von Neumann's Mean Square of Successive Differences > ### Aliases: mssd rmssd autoR > ### Keywords: multivariate models > > ### ** Examples > > t <- seq(-pi, pi, .1) > trial <- 1: length(t) > gr <- trial %% 8 > c <- cos(t) > ts <- sample(t,length(t)) > cs <- cos(ts) > x.df <- data.frame(trial,gr,t,c,ts,cs) > rmssd(x.df) trial gr t c ts cs
1.00000000 2.53364458 0.10000000 0.07115242 2.67364678 0.92408148 > rmssd(x.df,gr) trial gr t c ts cs 0 8 0 0.8 0.6109385 2.830489 1.0000729 1 8 0 0.8 0.5769437 2.692582 1.0308382 2 8 0 0.8 0.5805655 3.895602 1.0190935 3 8 0 0.8 0.5830116 3.187251 1.0006610 4 8 0 0.8 0.5841997 1.717556 0.7914451 5 8 0 0.8 0.5840902 1.962142 1.0630566 6 8 0 0.8 0.5826866 1.966868 0.6465706 7 8 0 0.8 0.5800358 2.721869 1.0720273 > autoR(x.df,gr) Warning in cor(x[1:(n.obs - lag), ], x[(lag + 1):n.obs, ], use = use) : the standard deviation is zero Warning in cor(x[1:(n.obs - lag), ], x[(lag + 1):n.obs, ], use = use) : the standard deviation is zero Warning in cor(x[1:(n.obs - lag), ], x[(lag + 1):n.obs, ], use = use) : the standard deviation is zero Warning in cor(x[1:(n.obs - lag), ], x[(lag + 1):n.obs, ], use = use) : the standard deviation is zero Warning in cor(x[1:(n.obs - lag), ], x[(lag + 1):n.obs, ], use = use) : the standard deviation is zero Warning in cor(x[1:(n.obs - lag), ], x[(lag + 1):n.obs, ], use = use) : the standard deviation is zero Warning in cor(x[1:(n.obs - lag), ], x[(lag + 1):n.obs, ], use = use) : the standard deviation is zero Warning in cor(x[1:(n.obs - lag), ], x[(lag + 1):n.obs, ], use = use) : the standard deviation is zero Autocorrelations Call: autoR(x = x.df, group = gr) trial gr t c ts cs 0 1 NA 1 0.47 0.20 -0.71 1 1 NA 1 0.64 -0.31 0.04 2 1 NA 1 0.63 -0.45 -0.29 3 1 NA 1 0.62 -0.67 0.11 4 1 NA 1 0.62 0.36 0.14 5 1 NA 1 0.62 -0.09 -0.89 6 1 NA 1 0.62 -0.42 0.11 7 1 NA 1 0.63 -0.03 -0.09 > describe(x.df) vars n mean sd median trimmed mad min max range skew kurtosis trial 1 63 32.00 18.33 32.00 32.00 23.72 1.00 63.00 62.0 0.00 -1.26 gr 2 63 3.56 2.28 4.00 3.57 2.97 0.00 7.00 7.0 -0.01 -1.28 t 3 63 -0.04 1.83 -0.04 -0.04 2.37 -3.14 3.06 6.2 0.00 -1.26 c 4 63 0.00 0.71 0.01 0.00 1.06 -1.00 1.00 2.0 0.00 -1.55 ts 5 63 -0.04 1.83 -0.04 -0.04 2.37 -3.14 3.06 6.2 0.00 -1.26 cs 6 63 0.00 0.71 0.01 0.00 1.06 -1.00 1.00 2.0 0.00 -1.55 se trial 2.31 gr 0.29 t 0.23 c 0.09 ts 0.23 cs 0.09 > #pairs.panels(x.df) > #mlPlot(x.df,grp="gr",Time="t",items=c(4:6)) > > > > cleanEx() > nameEx("multi.hist") > ### * multi.hist > > flush(stderr()); flush(stdout()) > > ### Name: multi.hist > ### Title: Multiple histograms with density and normal fits on one page > ### Aliases: multi.hist histo.density histBy > ### Keywords: multivariate hplot > > ### ** Examples > > multi.hist(sat.act) > multi.hist(sat.act,bcol="red") > multi.hist(sat.act,dcol="blue") #make both lines blue > multi.hist(sat.act,dcol= c("blue","red"),dlty=c("dotted", "solid")) > multi.hist(sat.act,freq=TRUE) #show the frequency plot > multi.hist(sat.act,nrow=2) > histBy(sat.act,"SATQ","gender") #input by variable names > histBy(SATQ~ gender, data=sat.act) #formula input > > > > cleanEx() > nameEx("multilevel.reliability") > ### * multilevel.reliability > > flush(stderr()); flush(stdout()) > > ### Name: multilevel.reliability > ### Title: Find and plot various reliability/generalizability coefficients > ### for multilevel data > ### Aliases: mlr multilevel.reliability mlArrange mlPlot > ### Keywords: multivariate models > > ### ** Examples > > #data from Shrout and Lane, 2012.
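> #Added note (a sketch, not part of the original example): these are 5 persons measured on 3 items at each of 4 times, entered in wide form with one row per person x time; once 'shrout' is built below, the layout can be checked with, e.g., > # table(shrout$Person, shrout$Time) #a 5 x 4 table of all 1s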
> > shrout <- structure(list(Person = c(1L, 2L, 3L, 4L, 5L, 1L, 2L, 3L, 4L, + 5L, 1L, 2L, 3L, 4L, 5L, 1L, 2L, 3L, 4L, 5L), Time = c(1L, 1L, + 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, + 4L, 4L), Item1 = c(2L, 3L, 6L, 3L, 7L, 3L, 5L, 6L, 3L, 8L, 4L, + 4L, 7L, 5L, 6L, 1L, 5L, 8L, 8L, 6L), Item2 = c(3L, 4L, 6L, 4L, + 8L, 3L, 7L, 7L, 5L, 8L, 2L, 6L, 8L, 6L, 7L, 3L, 9L, 9L, 7L, 8L + ), Item3 = c(6L, 4L, 5L, 3L, 7L, 4L, 7L, 8L, 9L, 9L, 5L, 7L, + 9L, 7L, 8L, 4L, 7L, 9L, 9L, 6L)), .Names = c("Person", "Time", + "Item1", "Item2", "Item3"), class = "data.frame", row.names = c(NA, + -20L)) > > #make shrout super wide > #Xwide <- reshape(shrout,v.names=c("Item1","Item2","Item3"),timevar="Time", > #direction="wide",idvar="Person") > #add more helpful Names > #colnames(Xwide ) <- c("Person",c(paste0("Item",1:3,".T",1),paste0("Item",1:3,".T",2), > #paste0("Item",1:3,".T",3),paste0("Item",1:3,".T",4))) > #make superwide into normal form (i.e., just return it to the original shrout data) > #Xlong <- reshape(Xwide,idvar="Person",2:13) > > #Now use these data for a multilevel reliability study, use the normal wide form output > mg <- mlr(shrout,grp="Person",Time="Time",items=3:5) > #which is the same as > #mg <- multilevel.reliability(shrout,grp="Person",Time="Time",items= > # c("Item1","Item2","Item3"),plot=TRUE) > #to show the lattice plot by subjects, set plot = TRUE > > #Alternatively for long input (returned in this case from the prior run) > mlr(mg$long,grp="id",Time ="time",items="items", values="values",long=TRUE) Multilevel Generalizability analysis Call: mlr(x = mg$long, grp = "id", Time = "time", items = "items", long = TRUE, values = "values") The data had 5 observations taken over 4 time intervals for 3 items. Alternative estimates of reliability based upon Generalizability theory RkF = 0.97 Reliability of average of all ratings across all items and times (Fixed time effects) R1R = 0.6 Generalizability of a single time point across all items (Random time effects) RkR = 0.85 Generalizability of average time points across all items (Random time effects) Rc = 0.74 Generalizability of change (fixed time points, fixed items) RkRn = 0.85 Generalizability of between person differences averaged over time (time nested within people) Rcn = 0.65 Generalizability of within person variations averaged over items (time nested within people) These reliabilities are derived from the components of variance estimated by ANOVA variance Percent ID 2.34 0.44 Time 0.38 0.07 Items 0.61 0.11 ID x time 0.92 0.17 ID x items 0.12 0.02 time x items 0.05 0.01 Residual 0.96 0.18 Total 5.38 1.00 The nested components of variance estimated from lme are: variance Percent id 2.3 0.45 id(time) 1.1 0.21 residual 1.7 0.34 total 5.1 1.00 To see the ANOVA and alpha by subject, use the short = FALSE option.
To see the summaries of the ICCs by subject and time, use all=TRUE To see specific objects select from the following list: ANOVA s.lmer s.lme alpha summary.by.person summary.by.time ICC.by.person ICC.by.time lmer long Call> > #example of mlArrange > #First, add two new columns to shrout and > #then convert to long output using mlArrange > total <- rowSums(shrout[3:5]) > caseid <- rep(paste0("ID",1:5),4) > new.shrout <- cbind(shrout,total=total,case=caseid) > #now convert to long > new.long <- mlArrange(new.shrout,grp="Person",Time="Time",items =3:5,extra=6:7) > headTail(new.long,6,6) id time values items total case 1 1 1 2 Item1 11 ID1 2 1 2 3 Item1 10 ID1 3 1 3 4 Item1 11 ID1 4 1 4 1 Item1 8 ID1 5 1 1 3 Item2 11 ID1 6 1 2 3 Item2 10 ID1 ... ... ... ... ... 55 5 3 7 Item2 21 ID5 56 5 4 8 Item2 20 ID5 57 5 1 7 Item3 22 ID5 58 5 2 9 Item3 25 ID5 59 5 3 8 Item3 21 ID5 60 5 4 6 Item3 20 ID5 > > > > cleanEx() > nameEx("omega") > ### * omega > > flush(stderr()); flush(stdout()) > > ### Name: omega > ### Title: Calculate McDonald's omega estimates of general and total factor > ### saturation > ### Aliases: omega omegaSem omegaFromSem omegah omegaDirect directSl > ### Keywords: multivariate models > > ### ** Examples > > ## Not run: > ##D test.data <- Harman74.cor$cov > ##D # if(!require(GPArotation)) {message("Omega requires GPA rotation" )} else { > ##D my.omega <- omega(test.data) > ##D print(my.omega,digits=2) > ##D #} > ##D > ##D #create 9 variables with a hierarchical structure > ##D v9 <- sim.hierarchical() > ##D #with correlations of > ##D round(v9,2) > ##D #find omega > ##D v9.omega <- omega(v9,digits=2) > ##D v9.omega > ##D > ##D #create 8 items with a two factor solution, showing the use of the flip option > ##D sim2 <- item.sim(8) > ##D omega(sim2) #an example of misidentification-- remember to look at the loadings matrices. > ##D omega(sim2,2) #this shows that in fact there is no general factor > ##D omega(sim2,2,option="first") #but, if we define one of the two group factors > ##D #as a general factor, we get a falsely high omega > ##D #apply omega to analyze 6 mental ability tests > ##D data(ability.cov) #has a covariance matrix > ##D omega(ability.cov$cov) > ##D > ##D #om <- omega(Thurstone) > ##D #round(om$omega.group,2) > ##D #round(om$omega.group[2]/om$omega.group[1],2) #fraction of reliable that is general variance > ##D # round(om$omega.group[3]/om$omega.group[1],2) #fraction of reliable that is group variance > ##D > ##D #To find factor score estimates for the hierarchical model it is necessary to > ##D #do two extra steps. > ##D > ##D #Consider the case of the raw data in an object data. (An example from simulation) > ##D # set.seed(42) > ##D # gload <- matrix(c(.9,.8,.7),nrow=3) > ##D # fload <- matrix(c(.8,.7,.6,rep(0,9),.7,.6,.5,rep(0,9),.7,.6,.4), ncol=3) > ##D # data <- sim.hierarchical(gload=gload,fload=fload, n=100000, raw=TRUE) > ##D # > ##D # f3 <- fa(data$observed,3,scores="tenBerge", oblique.scores=TRUE) > ##D # f1 <- fa(f3$scores) > ##D > ##D # om <- omega(data$observed,sl=FALSE) #draw the hierarchical figure > ##D # The scores from om are based upon the Schmid-Leiman factors and although the g factor > ##D # is identical, the group factors are not. 
> ##D # This is seen in the following correlation matrix > ##D # hier.scores <- cbind(om$scores,f1$scores,f3$scores) > ##D # lowerCor(hier.scores) > ##D # > ##D #this next set of examples requires lavaan > ##D #jensen <- sim.hierarchical() #create a hierarchical structure (same as v9 above) > ##D #om.jen <- omegaSem(jensen,lavaan=TRUE) #do the exploratory omega with confirmatory as well > ##D #lav.mod <- om.jen$omegaSem$model$lavaan #get the lavaan code or create it yourself > ##D # lav.mod <- 'g =~ +V1+V2+V3+V4+V5+V6+V7+V8+V9 > ##D # F1=~ + V1 + V2 + V3 > ##D # F2=~ + V4 + V5 + V6 > ##D # F3=~ + V7 + V8 + V9 ' > ##D #lav.jen <- cfa(lav.mod,sample.cov=jensen,sample.nobs=500,orthogonal=TRUE,std.lv=TRUE) > ##D # omegaFromSem(lav.jen,jensen) > ##D #the directSl solution > ##D #direct.jen <- directSl(jensen) > ##D #direct.jen > ##D > ##D #try a one factor solution -- this is not recommended, but sometimes done > ##D #it will just give omega_total > ##D # lav.mod.1 <- 'g =~ +V1+V2+V3+V4+V5+V6+V7+V8+V9 ' > ##D #lav.jen.1<- cfa(lav.mod.1,sample.cov=jensen,sample.nobs=500,orthogonal=TRUE,std.lv=TRUE) > ##D # omegaFromSem(lav.jen.1,jensen) > ##D > ##D > ##D > ## End(Not run) > > > > cleanEx() > nameEx("omega.graph") > ### * omega.graph > > flush(stderr()); flush(stdout()) > > ### Name: omega.graph > ### Title: Graph hierarchical factor structures > ### Aliases: omega.diagram omega.graph > ### Keywords: multivariate > > ### ** Examples > > #24 mental tests from Holzinger-Swineford-Harman > if(require(GPArotation) ) {om24 <- omega(Harman74.cor$cov,4) } #run omega Loading required package: GPArotation > > # > #example hierarchical structure from Jensen and Weng > if(require(GPArotation) ) {jen.omega <- omega(make.hierarchical())} > > > > > > cleanEx() detaching ‘package:GPArotation’ > nameEx("outlier") > ### * outlier > > flush(stderr()); flush(stdout()) > > ### Name: outlier > ### Title: Find and graph Mahalanobis squared distances to detect outliers > ### Aliases: outlier > ### Keywords: multivariate models > > ### ** Examples > > #first, just find and graph the outliers > d2 <- outlier(sat.act) > #combine with the data frame and plot it with the outliers highlighted in blue > sat.d2 <- data.frame(sat.act,d2) > pairs.panels(sat.d2,bg=c("yellow","blue")[(d2 > 25)+1],pch=21) > > > > cleanEx() > nameEx("p.rep") > ### * p.rep > > flush(stderr()); flush(stdout()) > > ### Name: p.rep > ### Title: Find the probability of replication for an F, t, or r and > ### estimate effect size > ### Aliases: p.rep p.rep.f p.rep.t p.rep.r > ### Keywords: models univar > > ### ** Examples > > > p.rep(.05) #probability of replicating a result if the original study had a p = .05 [1] 0.8776029 > p.rep.f(9.0,98) #probability of replicating a result with F = 9.0 with 98 df $p.rep [1] 0.9720729 $dprime [1] 0.6060915 $prob [1] 0.003423264 $r.equiv [1] 0.2900209 > p.rep.r(.4,50) #probability of replicating a result if r =.4 with n = 50 $p.rep [1] 0.9818287 $dprime [1] 0.8728716 $prob [1] 0.003075876 > p.rep.t(3,98) #probability of replicating a result if t = 3 with df =98 $p.rep [1] 0.9720729 $dprime [1] 0.6060915 $prob [1] 0.003423264 $r.equiv [1] 0.2900209 > p.rep.t(2.14,84,14) #effect of unequal sample sizes (see Rosnow et al.)
$p.rep [1] 0.9002023 $dprime [1] 0.6054045 $prob [1] 0.0348374 $r.equiv [1] 0.2112921 > > > > > cleanEx() > nameEx("paired.r") > ### * paired.r > > flush(stderr()); flush(stdout()) > > ### Name: paired.r > ### Title: Test the difference between (un)paired correlations > ### Aliases: paired.r > ### Keywords: multivariate models > > ### ** Examples > > paired.r(.5,.3, .4, 100) #dependent correlations Call: paired.r(xy = 0.5, xz = 0.3, yz = 0.4, n = 100) [1] "test of difference between two correlated correlations" t = 2.06 With probability = 0.04> paired.r(.5,.3,NULL,100) #independent correlations same sample size Call: paired.r(xy = 0.5, xz = 0.3, yz = NULL, n = 100) [1] "test of difference between two independent correlations" z = 1.67 With probability = 0.09> paired.r(.5,.3,NULL, 100, 64) # independent correlations, different sample sizes Call: paired.r(xy = 0.5, xz = 0.3, yz = NULL, n = 100, n2 = 64) [1] "test of difference between two independent correlations" z = 1.47 With probability = 0.14> > > > cleanEx() > nameEx("pairs.panels") > ### * pairs.panels > > flush(stderr()); flush(stdout()) > > ### Name: pairs.panels > ### Title: SPLOM, histograms and correlations for a data matrix > ### Aliases: pairs.panels panel.cor panel.cor.scale panel.hist panel.lm > ### panel.lm.ellipse panel.hist.density panel.ellipse panel.smoother > ### Keywords: multivariate hplot > > ### ** Examples > > > pairs.panels(attitude) #see the graphics window > data(iris) > pairs.panels(iris[1:4],bg=c("red","yellow","blue")[iris$Species], + pch=21,main="Fisher Iris data by Species") #to show color grouping > > pairs.panels(iris[1:4],bg=c("red","yellow","blue")[iris$Species], + pch=21+as.numeric(iris$Species),main="Fisher Iris data by Species",hist.col="red") > #to show changing the diagonal > > #to show 'significance' > pairs.panels(iris[1:4],bg=c("red","yellow","blue")[iris$Species], + pch=21+as.numeric(iris$Species),main="Fisher Iris data by Species",hist.col="red",stars=TRUE) > > > > #demonstrate not showing the data points > data(sat.act) > pairs.panels(sat.act,show.points=FALSE) > #better yet is to show the points as a period > pairs.panels(sat.act,pch=".") > #show many variables with 0 gap between scatterplots > # data(bfi) > # pairs.panels(psychTools::bfi,show.points=FALSE,gap=0) > > #plot raw data points and then the weighted correlations. 
> #output from statsBy > sb <- statsBy(sat.act,"education") > pairs.panels(sb$mean,wt=sb$n) #report the weighted correlations > #compare with > pairs.panels(sb$mean) #unweighted correlations > > > > cleanEx() > nameEx("parcels") > ### * parcels > > flush(stderr()); flush(stdout()) > > ### Name: parcels > ### Title: Find miniscales (parcels) of size 2 or 3 from a set of items > ### Aliases: parcels keysort > ### Keywords: multivariate > > ### ** Examples > > parcels(Thurstone) P1 P2 P3 Sentences 1 0 0 Vocabulary 1 0 0 Sent.Completion 1 0 0 First.Letters 0 1 0 Four.Letter.Words 0 1 0 Suffixes 0 1 0 Letter.Series 0 0 1 Pedigrees 0 0 1 Letter.Group 0 0 1 > keys <- parcels(psychTools::bfi) > keys <- keysort(keys) > score.items(keys,psychTools::bfi) score.items has been replaced by scoreItems, please change your call Call: scoreItems(keys = keys, items = items, totals = totals, ilabels = ilabels, missing = missing, impute = impute, delete = delete, min = min, max = max, digits = digits, select = select) (Unstandardized) Alpha: P3 P4 P6 P2 P1 P9 P8 P5 P7 alpha 0.72 0.65 0.5 0.72 0.82 0.3 0.12 0.63 0.5 Standard errors of unstandardized Alpha: P3 P4 P6 P2 P1 P9 P8 P5 P7 ASE 0.019 0.021 0.024 0.019 0.016 0.028 0.016 0.021 0.024 Average item correlation: P3 P4 P6 P2 P1 P9 P8 P5 P7 average.r 0.46 0.38 0.25 0.46 0.6 0.12 0.044 0.36 0.25 Median item correlation: P3 P4 P6 P2 P1 P9 P8 P5 P7 0.483 0.378 0.245 0.464 0.553 0.038 0.161 0.389 0.208 Guttman 6* reliability: P3 P4 P6 P2 P1 P9 P8 P5 P7 Lambda.6 0.69 0.64 0.53 0.69 0.79 0.39 0.22 0.62 0.55 Signal/Noise based upon av.r : P3 P4 P6 P2 P1 P9 P8 P5 P7 Signal/Noise 2.5 1.8 1 2.6 4.5 0.43 0.14 1.7 0.98 Scale intercorrelations corrected for attenuation raw correlations below the diagonal, alpha on the diagonal corrected correlations above the diagonal: P3 P4 P6 P2 P1 P9 P8 P5 P7 P3 0.717 -0.328 0.519 0.630 -0.192 0.4029 0.551 0.589 -0.146 P4 -0.223 0.647 -0.971 -0.294 0.319 0.0302 -0.412 -0.315 0.436 P6 0.312 -0.554 0.502 0.466 -0.091 -0.0402 0.500 0.620 -0.256 P2 0.453 -0.200 0.280 0.720 -0.211 0.1775 0.201 0.598 -0.535 P1 -0.147 0.232 -0.059 -0.162 0.818 0.1087 -0.426 -0.090 0.853 P9 0.187 0.013 -0.016 0.082 0.054 0.2999 -0.025 -0.418 0.059 P8 0.162 -0.115 0.123 0.059 -0.134 -0.0047 0.121 0.152 -0.252 P5 0.397 -0.202 0.350 0.404 -0.065 -0.1823 0.042 0.633 -0.083 P7 -0.087 0.247 -0.128 -0.320 0.543 0.0226 -0.062 -0.047 0.496 In order to see the item by scale loadings and frequency counts of the data print with the short option = FALSE> > > > cleanEx() > nameEx("partial.r") > ### * partial.r > > flush(stderr()); flush(stdout()) > > ### Name: partial.r > ### Title: Find the partial correlations for a set (x) of variables with > ### set (y) removed. > ### Aliases: partial.r > ### Keywords: multivariate > > ### ** Examples > > jen <- make.hierarchical() #make up a correlation matrix > lowerMat(jen[1:5,1:5]) V1 V2 V3 V4 V5 V1 1.00 V2 0.56 1.00 V3 0.48 0.42 1.00 V4 0.40 0.35 0.30 1.00 V5 0.35 0.30 0.26 0.42 1.00 > par.r <- partial.r(jen,c(1,3,5),c(2,4)) > lowerMat(par.r) V1 V3 V5 V1 1.00 V3 0.29 1.00 V5 0.14 0.10 1.00 > #or > R <- jen[1:5,1:5] > par.r <- partial.r(R, y = cs(V2,V4)) > lowerMat(par.r) V1 V3 V5 V1 1.00 V3 0.29 1.00 V5 0.14 0.10 1.00 > cp <- corr.p(par.r,n=98) #assumes the jen data based upon n =100.
> print(cp,short=FALSE) #show the confidence intervals as well Call:corr.p(r = par.r, n = 98) Correlation matrix partial correlations V1 V3 V5 V1 1.00 0.29 0.14 V3 0.29 1.00 0.10 V5 0.14 0.10 1.00 Sample Size [1] 98 Probability values (Entries above the diagonal are adjusted for multiple tests.) partial correlations V1 V3 V5 V1 0.00 0.01 0.31 V3 0.00 0.00 0.34 V5 0.16 0.34 0.00 Confidence intervals based upon normal theory. To get bootstrapped values, try cor.ci lower r upper p V1-V3 0.10 0.29 0.46 0.00 V1-V5 -0.06 0.14 0.33 0.16 V3-V5 -0.10 0.10 0.29 0.34 > #partial all from all correlations. > lowerMat(partial.r(jen)) V1 V2 V3 V4 V5 V6 V7 V8 V9 V1 1.00 V2 0.36 1.00 V3 0.26 0.17 1.00 V4 0.14 0.09 0.07 1.00 V5 0.10 0.06 0.05 0.26 1.00 V6 0.07 0.05 0.03 0.19 0.14 1.00 V7 0.09 0.06 0.05 0.06 0.04 0.03 1.00 V8 0.07 0.05 0.03 0.04 0.03 0.02 0.20 1.00 V9 0.05 0.03 0.02 0.03 0.02 0.02 0.15 0.11 1.00 > > > #Consider the Tal.Or data set. > lowerCor(Tal.Or) cond pmi imprt rectn gendr age cond 1.00 pmi 0.18 1.00 import 0.18 0.28 1.00 reaction 0.16 0.45 0.46 1.00 gender -0.13 -0.02 0.03 0.01 1.00 age 0.03 0.00 0.07 -0.08 -0.32 1.00 > #partial gender and age from these relations (they hardly change) > partial.r(Tal.Or,1:4,cs(gender,age)) partial correlations cond pmi import reaction cond 1.00 0.18 0.19 0.16 pmi 0.18 1.00 0.28 0.45 import 0.19 0.28 1.00 0.48 reaction 0.16 0.45 0.48 1.00 > #find the partial correlations between the first three variables and the DV (reaction) > round(partial.r(Tal.Or[1:4])[4,1:3],2) #The partial correlations with the criterion cond pmi import 0.04 0.36 0.39 > > #Consider the eminence data set from Del Giudice. > if(require("psychTools")) { + data(eminence) + partial.r(reputation ~ works + citations - birth.year, data=eminence) + } Loading required package: psychTools partial correlations reputation works citations reputation 1.00 0.73 0.66 works 0.73 1.00 0.79 citations 0.66 0.79 1.00 > > > > > > cleanEx() detaching ‘package:psychTools’ > nameEx("phi") > ### * phi > > flush(stderr()); flush(stdout()) > > ### Name: phi > ### Title: Find the phi coefficient of correlation between two dichotomous > ### variables > ### Aliases: phi > ### Keywords: multivariate models > > ### ** Examples > > phi(c(30,20,20,30)) [1] 0.2 > phi(c(40,10,10,40)) [1] 0.6 > x <- matrix(c(40,5,20,20),ncol=2) > phi(x) [1] 0.43 > > > > > > cleanEx() > nameEx("phi.demo") > ### * phi.demo > > flush(stderr()); flush(stdout()) > > ### Name: phi.demo > ### Title: A simple demonstration of the Pearson, phi, and polychoric > ### correlation > ### Aliases: phi.demo > ### Keywords: multivariate models > > ### ** Examples > > #demo <- phi.demo() #compare the phi (lower off diagonal) and polychoric correlations > # (upper off diagonal) > #show the result from tetrachoric which corrects for zero entries by default > #round(demo$tetrachoric$rho,2) > #show the result from phi2poly > #tetrachorics above the diagonal, phi below the diagonal > #round(demo$phis,2) > > > > cleanEx() > nameEx("phi2poly") > ### * phi2poly > > flush(stderr()); flush(stdout()) > > ### Name: phi2tetra > ### Title: Convert a phi coefficient to a tetrachoric correlation > ### Aliases: phi2tetra phi2poly > ### Keywords: models models > > ### ** Examples > > phi2tetra(.3,c(.5,.5)) [1] 0.4540106 > #phi2poly(.3,.3,.7) > > > > cleanEx() > nameEx("plot.psych") > ### * plot.psych > > flush(stderr()); flush(stdout()) > > ### Name: plot.psych > ### Title: Plotting functions for the psych package of class "psych" > ### Aliases: plot.psych plot.poly
plot.irt plot.residuals > ### Keywords: multivariate > > ### ** Examples > > test.data <- Harman74.cor$cov > f4 <- fa(test.data,4) > plot(f4) > plot(resid(f4)) > plot(resid(f4),main="Residuals from a 4 factor solution",qq=FALSE) Warning in text.default(xy$x[worst[1:bad]], xy$y[worst[1:bad]], paste(rname[worstItems[, : "qq" is not a graphical parameter > #not run > #data(bfi) > #e.irt <- irt.fa(bfi[11:15]) #just the extraversion items > #plot(e.irt) #the information curves > # > ic <- iclust(test.data,3) #shows hierarchical structure > plot(ic) #plots loadings Use ICLUST.diagram to see the hierarchical structure > # > > > > > > cleanEx() > nameEx("polar") > ### * polar > > flush(stderr()); flush(stdout()) > > ### Name: polar > ### Title: Convert Cartesian factor loadings into polar coordinates > ### Aliases: polar > ### Keywords: multivariate > > ### ** Examples > > > circ.data <- circ.sim(24,500) > circ.fa <- fa(circ.data,2) > circ.polar <- round(polar(circ.fa),2) > circ.polar Var theta21 vecl21 v2 2 2.63 0.59 v3 3 17.62 0.68 v4 4 32.42 0.63 v5 5 44.85 0.59 v6 6 67.00 0.64 v7 7 83.75 0.62 v8 8 94.65 0.63 v9 9 110.86 0.60 v10 10 120.06 0.58 v11 11 135.48 0.67 v12 12 155.32 0.66 v13 13 162.20 0.60 v14 14 186.11 0.62 v15 15 194.06 0.59 v16 16 210.68 0.61 v17 17 229.94 0.64 v18 18 243.88 0.62 v19 19 260.13 0.62 v20 20 274.87 0.59 v21 21 291.23 0.61 v22 22 302.14 0.59 v23 23 323.05 0.62 v24 24 338.85 0.60 v1 1 345.26 0.61 > #compare to the graphic > cluster.plot(circ.fa) > > > > cleanEx() > nameEx("polychor.matrix") > ### * polychor.matrix > > flush(stderr()); flush(stdout()) > > ### Name: polychor.matrix > ### Title: Phi or Yule coefficient matrix to polychoric coefficient matrix > ### Aliases: polychor.matrix Yule2poly.matrix phi2poly.matrix > ### Yule2phi.matrix > ### Keywords: models multivariate > > ### ** Examples > > #demo <- phi.demo() > #compare the phi (lower off diagonal) and polychoric correlations (upper off diagonal) > #show the result from poly.mat > #round(demo$tetrachoric$rho,2) > #show the result from phi2poly > #tetrachorics above the diagonal, phi below the diagonal > #round(demo$phis,2) > > > > > > cleanEx() > nameEx("predict.psych") > ### * predict.psych > > flush(stderr()); flush(stdout()) > > ### Name: predict.psych > ### Title: Prediction function for factor analysis, principal components > ### (pca), bestScales > ### Aliases: predict.psych > ### Keywords: multivariate models > > ### ** Examples > > set.seed(42) > x <- sim.item(12,500) > f2 <- fa(x[1:250,],2,scores="regression") # a two factor solution > p2 <- principal(x[1:250,],2,scores=TRUE) # a two component solution > round(cor(f2$scores,p2$scores),2) #correlate the components and factors from the A set RC1 RC2 MR1 1 0.01 MR2 0 1.00 > #find the predicted scores (The B set) > pf2 <- predict(f2,x[251:500,],x[1:250,]) > > #use the original data for standardization values > pp2 <- predict(p2,x[251:500,],x[1:250,]) > #standardized based upon the first set > round(cor(pf2,pp2),2) #find the correlations in the B set RC1 RC2 MR1 1.00 -0.06 MR2 -0.07 1.00 > #test how well these predicted scores match the factor scores from the second set > fp2 <- fa(x[251:500,],2,scores=TRUE) > round(cor(fp2$scores,pf2),2) MR1 MR2 MR1 0.07 -0.99 MR2 -0.99 0.08 > > pf2.n <- predict(f2,x[251:500,]) #Standardized based upon the new data set > round(cor(fp2$scores,pf2.n)) MR1 MR2 MR1 0 -1 MR2 -1 0 > #predict factors of set two from factors of set 1, factor order is arbitrary > > > #note that the signs of the factors in the second set are
arbitrary > > > > cleanEx() > nameEx("predicted.validity") > ### * predicted.validity > > flush(stderr()); flush(stdout()) > > ### Name: predicted.validity > ### Title: Find the predicted validities of a set of scales based on item > ### statistics > ### Aliases: predicted.validity item.validity validityItem > ### Keywords: multivariate models > > ### ** Examples > > pred.bfi <- predicted.validity(psychTools::bfi[,1:25], psychTools::bfi[,26:28], + psychTools::bfi.keys) > pred.bfi Call: predicted.validity(x = psychTools::bfi[, 1:25], criteria = psychTools::bfi[, 26:28], keys = psychTools::bfi.keys) Predicted Asymptotic Scale Validity: gender education age agree 0.245 0.051 0.215 conscientious 0.105 0.027 0.136 extraversion 0.118 0.010 0.073 neuroticism 0.137 -0.054 -0.128 openness -0.079 0.132 0.100 For predicted scale validities, average item validities, or scale reliabilities, print the separate objects> > > > cleanEx() > nameEx("principal") > ### * principal > > flush(stderr()); flush(stdout()) > > ### Name: principal > ### Title: Principal components analysis (PCA) > ### Aliases: principal pca > ### Keywords: multivariate models > > ### ** Examples > > #Four principal components of the Harman 24 variable problem > #compare to a four factor principal axes solution using factor.congruence > pc <- principal(Harman74.cor$cov,4,rotate="varimax") > mr <- fa(Harman74.cor$cov,4,rotate="varimax") #minres factor analysis > pa <- fa(Harman74.cor$cov,4,rotate="varimax",fm="pa") # principal axis factor analysis > round(factor.congruence(list(pc,mr,pa)),2) RC1 RC3 RC2 RC4 MR1 MR3 MR2 MR4 PA1 PA3 PA2 PA4 RC1 1.00 0.53 0.43 0.46 1.00 0.61 0.46 0.54 1.00 0.61 0.46 0.54 RC3 0.53 1.00 0.43 0.47 0.54 0.99 0.44 0.54 0.54 0.99 0.44 0.54 RC2 0.43 0.43 1.00 0.47 0.44 0.50 1.00 0.55 0.44 0.50 1.00 0.55 RC4 0.46 0.47 0.47 1.00 0.47 0.53 0.49 0.99 0.47 0.53 0.49 0.99 MR1 1.00 0.54 0.44 0.47 1.00 0.61 0.46 0.55 1.00 0.61 0.46 0.55 MR3 0.61 0.99 0.50 0.53 0.61 1.00 0.50 0.61 0.61 1.00 0.50 0.61 MR2 0.46 0.44 1.00 0.49 0.46 0.50 1.00 0.57 0.46 0.50 1.00 0.57 MR4 0.54 0.54 0.55 0.99 0.55 0.61 0.57 1.00 0.55 0.61 0.57 1.00 PA1 1.00 0.54 0.44 0.47 1.00 0.61 0.46 0.55 1.00 0.61 0.46 0.55 PA3 0.61 0.99 0.50 0.53 0.61 1.00 0.50 0.61 0.61 1.00 0.50 0.61 PA2 0.46 0.44 1.00 0.49 0.46 0.50 1.00 0.57 0.46 0.50 1.00 0.57 PA4 0.54 0.54 0.55 0.99 0.55 0.61 0.57 1.00 0.55 0.61 0.57 1.00 > > pc2 <- principal(Harman.5,2,rotate="varimax") > pc2 Principal Components Analysis Call: principal(r = Harman.5, nfactors = 2, rotate = "varimax") Standardized loadings (pattern matrix) based upon correlation matrix RC1 RC2 h2 u2 com population 0.02 0.99 0.99 0.012 1.0 schooling 0.94 -0.01 0.89 0.115 1.0 employment 0.14 0.98 0.98 0.021 1.0 professional 0.83 0.45 0.88 0.120 1.5 housevalue 0.97 -0.01 0.94 0.062 1.0 RC1 RC2 SS loadings 2.52 2.15 Proportion Var 0.50 0.43 Cumulative Var 0.50 0.93 Proportion Explained 0.54 0.46 Cumulative Proportion 0.54 1.00 Mean item complexity = 1.1 Test of the hypothesis that 2 components are sufficient. 
The root mean square of the residuals (RMSR) is 0.03 with the empirical chi square 0.29 with prob < 0.59 Fit based upon off diagonal values = 1> round(cor(Harman.5,pc2$scores),2) #compare these correlations to the loadings RC1 RC2 population 0.02 0.99 schooling 0.94 -0.01 employment 0.14 0.98 professional 0.83 0.45 housevalue 0.97 -0.01 > #now do it for unstandardized scores, and transform obliquely > pc2o <- principal(Harman.5,2,rotate="promax",covar=TRUE) > pc2o Principal Components Analysis Call: principal(r = Harman.5, nfactors = 2, rotate = "promax", covar = TRUE) Standardized loadings (pattern matrix) based upon correlation matrix RC1 RC2 h2 u2 com population -0.11 1.01 0.99 0.012 1.0 schooling 0.96 -0.13 0.89 0.115 1.0 employment 0.02 0.98 0.98 0.021 1.0 professional 0.79 0.35 0.88 0.120 1.4 housevalue 0.99 -0.13 0.94 0.062 1.0 RC1 RC2 SS loadings 2.53 2.14 Proportion Var 0.51 0.43 Cumulative Var 0.51 0.93 Proportion Explained 0.54 0.46 Cumulative Proportion 0.54 1.00 With component correlations of RC1 RC2 RC1 1.00 0.24 RC2 0.24 1.00 Mean item complexity = 1.1 Test of the hypothesis that 2 components are sufficient. The root mean square of the residuals (RMSR) is 0.03 with the empirical chi square 0.29 with prob < 0.59 Fit based upon off diagonal values = 1> round(cov(Harman.5,pc2o$scores),2) RC1 RC2 population -36540.11 7460683.30 schooling 3864.00 -266.10 employment 300199.01 2619927.43 professional 221355.65 89775.59 housevalue 15934475.60 -1192838.73 > pc2o$Structure #this matches the covariances with the scores RC1 RC2 population 0.1380470 0.9884735 schooling 0.9325424 0.1044421 employment 0.2564425 0.9894333 professional 0.8735095 0.5431315 housevalue 0.9601422 0.1104942 > biplot(pc2,main="Biplot of the Harman.5 socio-economic variables",labels=paste0(1:12)) > > #For comparison with SPSS (contributed by Gottfried Helms) > pc2v <- principal(iris[1:4],2,rotate="varimax",normalize=FALSE,eps=1e-14) > print(pc2v,digits=7) Principal Components Analysis Call: principal(r = iris[1:4], nfactors = 2, rotate = "varimax", normalize = FALSE, eps = 1e-14) Standardized loadings (pattern matrix) based upon correlation matrix RC1 RC2 h2 u2 com Sepal.Length 0.9593182 0.0480331 0.9225986 0.077401362 1.005014 Sepal.Width -0.1442732 0.9849389 0.9909193 0.009080678 1.042893 Petal.Length 0.9441083 -0.3039564 0.9837300 0.016270047 1.205101 Petal.Width 0.9323563 -0.2568894 0.9352804 0.064719625 1.150960 RC1 RC2 SS loadings 2.7017349 1.1307934 Proportion Var 0.6754337 0.2826983 Cumulative Var 0.6754337 0.9581321 Proportion Explained 0.7049485 0.2950515 Cumulative Proportion 0.7049485 1.0000000 Mean item complexity = 1.1 Test of the hypothesis that 2 components are sufficient. 
The root mean square of the residuals (RMSR) is 0.030876 with the empirical chi square 1.715987 with prob < NA Fit based upon off diagonal values = 0.9978717> pc2V <- principal(iris[1:4],2,rotate="Varimax",eps=1e-7) > p <- print(pc2V,digits=7) Principal Components Analysis Call: principal(r = iris[1:4], nfactors = 2, rotate = "Varimax", eps = 1e-07) Standardized loadings (pattern matrix) based upon correlation matrix RC1 RC2 h2 u2 com Sepal.Length 0.9593182 0.0480331 0.9225986 0.077401362 1.005014 Sepal.Width -0.1442732 0.9849389 0.9909193 0.009080678 1.042893 Petal.Length 0.9441083 -0.3039563 0.9837300 0.016270047 1.205101 Petal.Width 0.9323563 -0.2568893 0.9352804 0.064719625 1.150960 RC1 RC2 SS loadings 2.7017350 1.1307933 Proportion Var 0.6754338 0.2826983 Cumulative Var 0.6754338 0.9581321 Proportion Explained 0.7049485 0.2950515 Cumulative Proportion 0.7049485 1.0000000 Mean item complexity = 1.1 Test of the hypothesis that 2 components are sufficient. The root mean square of the residuals (RMSR) is 0.030876 with the empirical chi square 1.715987 with prob < NA Fit based upon off diagonal values = 0.9978717> round(p$Vaccounted,2) # the amount of variance accounted for is returned as an object of print RC1 RC2 SS loadings 2.70 1.13 Proportion Var 0.68 0.28 Cumulative Var 0.68 0.96 Proportion Explained 0.70 0.30 Cumulative Proportion 0.70 1.00 > > > > cleanEx() > nameEx("print.psych") > ### * print.psych > > flush(stderr()); flush(stdout()) > > ### Name: print.psych > ### Title: Print and summary functions for the psych class > ### Aliases: print.psych summary.psych > ### Keywords: multivariate > > ### ** Examples > > data(bfi) > keys.list <- list(agree=c(-1,2:5),conscientious=c(6:8,-9,-10), + extraversion=c(-11,-12,13:15),neuroticism=c(16:20),openness = c(21,-22,23,24,-25)) > keys <- make.keys(25,keys.list,item.labels=colnames(psychTools::bfi[1:25])) > scores <- score.items(keys,psychTools::bfi[1:25]) score.items has been replaced by scoreItems, please change your call > scores Call: scoreItems(keys = keys, items = items, totals = totals, ilabels = ilabels, missing = missing, impute = impute, delete = delete, min = min, max = max, digits = digits, select = select) (Unstandardized) Alpha: agree conscientious extraversion neuroticism openness alpha 0.7 0.72 0.76 0.81 0.6 Standard errors of unstandardized Alpha: agree conscientious extraversion neuroticism openness ASE 0.014 0.014 0.013 0.011 0.017 Average item correlation: agree conscientious extraversion neuroticism openness average.r 0.32 0.34 0.39 0.46 0.23 Median item correlation: agree conscientious extraversion neuroticism openness 0.34 0.34 0.38 0.41 0.22 Guttman 6* reliability: agree conscientious extraversion neuroticism openness Lambda.6 0.7 0.72 0.76 0.81 0.6 Signal/Noise based upon av.r : agree conscientious extraversion neuroticism openness Signal/Noise 2.3 2.6 3.2 4.3 1.5 Scale intercorrelations corrected for attenuation raw correlations below the diagonal, alpha on the diagonal corrected correlations above the diagonal: agree conscientious extraversion neuroticism openness agree 0.70 0.36 0.63 -0.245 0.23 conscientious 0.26 0.72 0.35 -0.305 0.30 extraversion 0.46 0.26 0.76 -0.284 0.32 neuroticism -0.18 -0.23 -0.22 0.812 -0.12 openness 0.15 0.19 0.22 -0.086 0.60 In order to see the item by scale loadings and frequency counts of the data print with the short option = FALSE> summary(scores) Call: scoreItems(keys = keys, items = items, totals = totals, ilabels = ilabels, missing = missing, impute = impute, delete = delete, 
min = min, max = max, digits = digits, select = select) Scale intercorrelations corrected for attenuation raw correlations below the diagonal, (unstandardized) alpha on the diagonal corrected correlations above the diagonal: agree conscientious extraversion neuroticism openness agree 0.70 0.36 0.63 -0.245 0.23 conscientious 0.26 0.72 0.35 -0.305 0.30 extraversion 0.46 0.26 0.76 -0.284 0.32 neuroticism -0.18 -0.23 -0.22 0.812 -0.12 openness 0.15 0.19 0.22 -0.086 0.60 > > > > cleanEx() > nameEx("r.test") > ### * r.test > > flush(stderr()); flush(stdout()) > > ### Name: r.test > ### Title: Tests of significance for correlations > ### Aliases: r.test > ### Keywords: multivariate models > > ### ** Examples > > > n <- 30 > r <- seq(0,.9,.1) > rc <- matrix(r.con(r,n),ncol=2) > test <- r.test(n,r) > r.rc <- data.frame(r=r,z=fisherz(r),lower=rc[,1],upper=rc[,2],t=test$t,p=test$p) > round(r.rc,2) r z lower upper t p 1 0.0 0.00 -0.36 0.36 0.00 1.00 2 0.1 0.10 -0.27 0.44 0.53 0.60 3 0.2 0.20 -0.17 0.52 1.08 0.29 4 0.3 0.31 -0.07 0.60 1.66 0.11 5 0.4 0.42 0.05 0.66 2.31 0.03 6 0.5 0.55 0.17 0.73 3.06 0.00 7 0.6 0.69 0.31 0.79 3.97 0.00 8 0.7 0.87 0.45 0.85 5.19 0.00 9 0.8 1.10 0.62 0.90 7.06 0.00 10 0.9 1.47 0.80 0.95 10.93 0.00 > > r.test(50,r) Correlation tests Call:r.test(n = 50, r12 = r) Test of significance of a correlation t value 0 0.7 1.41 2.18 3.02 4 5.2 6.79 9.24 14.3 with probability < 1 0.49 0.16 0.034 0.004 0.00022 4.1e-06 1.5e-08 3.2e-12 0 and confidence interval -0.28 -0.18 -0.08 0.02 0.14 0.26 0.39 0.52 0.67 0.83 0.28 0.37 0.45 0.53 0.61 0.68 0.75 0.82 0.88 0.94> r.test(30,.4,.6) #test the difference between two independent correlations Correlation tests Call:r.test(n = 30, r12 = 0.4, r34 = 0.6) Test of difference between two independent correlations z value 0.99 with probability 0.32> r.test(103,.4,.5,.1) #Steiger case A of dependent correlations Correlation tests Call:[1] "r.test(n = 103 , r12 = 0.4 , r23 = 0.1 , r13 = 0.5 )" Test of difference between two correlated correlations t value -0.89 with probability < 0.37> r.test(n=103, r12=.4, r13=.5,r23=.1) Correlation tests Call:[1] "r.test(n = 103 , r12 = 0.4 , r23 = 0.1 , r13 = 0.5 )" Test of difference between two correlated correlations t value -0.89 with probability < 0.37> #for complicated tests, it is probably better to specify correlations by name > r.test(n=103,r12=.5,r34=.6,r13=.7,r23=.5,r14=.5,r24=.8) #steiger Case B Correlation tests Call:r.test(n = 103, r12 = 0.5, r34 = 0.6, r23 = 0.5, r13 = 0.7, r14 = 0.5, r24 = 0.8) Test of difference between two dependent correlations z value -1.4 with probability 0.16> > > ##By default, the precision of p values is 2 decimals > #Consider three different precisions shown by varying the requested number of digits > r12 = 0.693458895410494 > r23 = 0.988475791500198 > r13 = 0.695966022434845 > print(r.test(n = 5105 , r12 = r12 , r23 = r23 , r13 = r13 )) #probability < 0.1 Correlation tests Call:[1] "r.test(n = 5105 , r12 = 0.693458895410494 , r23 = 0.988475791500198 , r13 = 0.695966022434845 )" Test of difference between two correlated correlations t value -1.64 with probability < 0.1> print(r.test(n = 5105 , r12 = r12, r23 = r23 , r13 = r13 ),digits=4) #p < 0.1001 Correlation tests Call:[1] "r.test(n = 5105 , r12 = 0.693458895410494 , r23 = 0.988475791500198 , r13 = 0.695966022434845 )" Test of difference between two correlated correlations t value -1.6448 with probability < 0.1001> print(r.test(n = 5105 , r12 = r12, r23 = r23 , r13 = r13 ),digits=8) #p< <0.1000759 Correlation tests 
Call:[1] "r.test(n = 5105 , r12 = 0.693458895410494 , r23 = 0.988475791500198 , r13 = 0.695966022434845 )" Test of difference between two correlated correlations t value -1.644784 with probability < 0.1000759> > > > #an example of how to compare the elements of two matrices > R1 <- lowerCor(psychTools::bfi[1:200,1:5]) #find one set of Correlations A1 A2 A3 A4 A5 A1 1.00 A2 -0.40 1.00 A3 -0.27 0.55 1.00 A4 -0.06 0.24 0.33 1.00 A5 -0.26 0.43 0.61 0.31 1.00 > R2 <- lowerCor(psychTools::bfi[201:400,1:5]) #and now another set sampled A1 A2 A3 A4 A5 A1 1.00 A2 -0.39 1.00 A3 -0.18 0.41 1.00 A4 -0.20 0.39 0.32 1.00 A5 -0.08 0.28 0.43 0.34 1.00 > #from the same population > test <- r.test(n=200, r12 = R1, r34 = R2) > round(lowerUpper(R1,R2,diff=TRUE),digits=2) #show the differences between correlations A1 A2 A3 A4 A5 A1 NA -0.02 -0.09 0.14 -0.18 A2 -0.40 NA 0.14 -0.14 0.15 A3 -0.27 0.55 NA 0.01 0.18 A4 -0.06 0.24 0.33 NA -0.03 A5 -0.26 0.43 0.61 0.31 NA > #lowerMat(test$p) #show the p values of the difference between the two matrices > adjusted <- p.adjust(test$p[upper.tri(test$p)]) > both <- test$p > both[upper.tri(both)] <- adjusted > round(both,digits=2) #The lower off diagonal are the raw ps, the upper the adjusted ps A1 A2 A3 A4 A5 A1 1.00 1.00 1.00 0.83 0.57 A2 0.86 0.00 0.57 0.66 0.64 A3 0.34 0.06 1.00 1.00 0.14 A4 0.17 0.11 0.92 0.00 1.00 A5 0.07 0.09 0.01 0.74 NaN > > > > > > cleanEx() > nameEx("range.correction") > ### * range.correction > > flush(stderr()); flush(stdout()) > > ### Name: rangeCorrection > ### Title: Correct correlations for restriction of range. (Thorndike Case > ### 2) > ### Aliases: rangeCorrection > ### Keywords: multivariate models > > ### ** Examples > > rangeCorrection(.33,100.32,48.19) #example from Revelle (in prep) Chapter 4. [1] 0.5884233
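> #a quick check (not run) of the value above, using the standard Thorndike
> #Case 2 formula rc = r*k/sqrt(1 - r^2 + r^2*k^2), with k = sdu/sdr
> #k <- 100.32/48.19
> #.33*k/sqrt(1 - .33^2 + .33^2*k^2) #about 0.58842, matching the result above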
> > > > cleanEx() > nameEx("reliability") > ### * reliability > > flush(stderr()); flush(stdout()) > > ### Name: reliability > ### Title: Reports 7 different estimates of scale reliability including > ### alpha, omega, split half > ### Aliases: reliability plot.reliability > ### Keywords: multivariate models > > ### ** Examples > > reliability(psychTools::ability) #an example of finding reliability for all items keys not specified, all items will be scored Measures of reliability reliability(keys = psychTools::ability) omega_h alpha omega.tot Uni r.fit fa.fit max.split min.split mean.r All_items 0.56 0.83 0.85 0.84 0.91 0.93 0.87 0.73 0.23 med.r n.items All_items 0.21 16 > rel <- reliability(psychTools::ability.keys,psychTools::ability) #use keys to select scales > R <- cor(psychTools::ability,use="pairwise") #find the correlations to test > rel.R <- reliability(psychTools::ability.keys,R) #this should be the same as rel > plot(rel.R) #versus all and subsets > all.equal(rel$result.df,rel.R$result.df ) #should be TRUE [1] TRUE > reliability(psychTools::bfi.keys,psychTools::bfi) #reliability when items are keyed negative Measures of reliability reliability(keys = psychTools::bfi.keys, items = psychTools::bfi) omega_h alpha omega.tot Uni r.fit fa.fit max.split min.split agree 0.64 0.71 0.75 0.89 0.90 0.99 0.75 0.62 conscientious 0.53 0.73 0.77 0.95 0.97 0.98 0.73 0.64 extraversion 0.55 0.76 0.82 0.97 0.97 0.99 0.78 0.71 neuroticism 0.71 0.81 0.85 0.93 0.95 0.98 0.83 0.73 openness 0.39 0.61 0.70 0.85 0.88 0.97 0.66 0.53 mean.r med.r n.items agree 0.33 0.34 5 conscientious 0.35 0.34 5 extraversion 0.39 0.38 5 neuroticism 0.47 0.41 5 openness 0.24 0.23 5 > > ## Not run: > ##D > ##D #this takes a few seconds but shows nice graphic displays > ##D > ##D spi.rel <- reliability(psychTools::spi.keys,psychTools::spi,hist=TRUE) #graph them > ##D spi.rel #show them > ##D > ##D #plot them using plot.reliability > ##D plot(spi.rel) #draw the density distributions > ##D > ##D plot(spi.rel,split=FALSE) #don't draw the split half density distribution > ##D plot(spi.rel,omega=FALSE) # don't add omega values to the diagram > ##D #or do this without the densities > ##D > ##D #plot the first three values in a dot chart > ##D error.dots(spi.rel$result.df[,1],sort=FALSE, xlim=c(.3,1),head=16,tail=16, > ##D main = expression(paste(omega[h], ~~~~ alpha,~~~~ omega[t]))) > ##D #plot the omega_h values > ##D error.dots(spi.rel$result.df[,2],sort=FALSE,pch=2,xlim=c(.3,1),head=16,tail=16, > ##D main="",labels="",add=TRUE)#add the alpha values > ##D error.dots(spi.rel$result.df[,3],sort=FALSE, xlim=c(.3,1),head=16,tail=16, > ##D pch=3,labels="", main="",add=TRUE) #and the omega_t values > ##D > ##D #or, show the smallest and greatest split half, as well as alpha > ##D error.dots(spi.rel$result.df[,4],sort=FALSE, xlim=c(.3,1),head=16,tail=16, > ##D main = expression(paste(beta, ~~~~ alpha,~~~~ glb))) > ##D error.dots(spi.rel$result.df[,5],sort=FALSE,pch=5,xlim=c(.3,1),head=16,tail=16, > ##D main="",labels="",add=TRUE)#add the GLB values > ##D error.dots(spi.rel$result.df[,2],sort=FALSE,pch=2,xlim=c(.3,1),head=16,tail=16, > ##D main="",labels="",add=TRUE)#add the alpha values > ##D > ##D > ## End(Not run) > > > > cleanEx() > nameEx("rescale") > ### * rescale > > flush(stderr()); flush(stdout()) > > ### Name: rescale > ### Title: Function to convert scores to "conventional" metrics > ### Aliases: rescale > ### Keywords: multivariate models univar > > ### ** Examples > > T <- rescale(attitude,50,10) #all put on
same scale > describe(T) vars n mean sd median trimmed mad min max range skew rating 1 30 50 10 50.71 50.47 8.53 29.76 66.73 36.97 -0.36 complaints 2 30 50 10 48.80 50.36 11.14 27.77 67.57 39.81 -0.22 privileges 3 30 50 10 48.67 49.69 8.48 31.09 74.41 43.32 0.38 learning 4 30 50 10 50.11 50.18 12.63 30.94 65.88 34.93 -0.05 raises 5 30 50 10 48.91 49.87 10.69 29.19 72.47 43.28 0.20 critical 6 30 50 10 52.76 51.08 7.49 23.96 67.42 43.46 -0.87 advance 7 30 50 10 48.12 48.93 8.65 32.57 78.25 45.68 0.85 kurtosis se rating -0.77 1.83 complaints -0.68 1.83 privileges -0.41 1.83 learning -1.22 1.83 raises -0.60 1.83 critical 0.17 1.83 advance 0.47 1.83 > T1 <- rescale(attitude,seq(0,300,50),seq(10,70,10)) #different means and sigmas > describe(T1) vars n mean sd median trimmed mad min max range skew rating 1 30 0 10 0.71 0.47 8.53 -20.24 16.73 36.97 -0.36 complaints 2 30 50 20 47.60 50.73 22.27 5.54 85.15 79.61 -0.22 privileges 3 30 100 30 96.00 99.06 25.45 43.28 173.23 129.95 0.38 learning 4 30 150 40 150.45 150.74 50.53 73.77 213.50 139.73 -0.05 raises 5 30 200 50 194.55 199.36 53.47 95.97 312.37 216.40 0.20 critical 6 30 250 60 266.57 256.47 44.95 93.76 354.50 260.74 -0.87 advance 7 30 300 70 286.85 292.52 60.52 177.99 497.76 319.77 0.85 kurtosis se rating -0.77 1.83 complaints -0.68 3.65 privileges -0.41 5.48 learning -1.22 7.30 raises -0.60 9.13 critical 0.17 10.95 advance 0.47 12.78 > > > > cleanEx() > nameEx("residuals.psych") > ### * residuals.psych > > flush(stderr()); flush(stdout()) > > ### Name: residuals.psych > ### Title: Extract residuals from various psych objects > ### Aliases: residuals.psych resid.psych > ### Keywords: multivariate models > > ### ** Examples > > f3 <- fa(Thurstone,3) > residuals(f3) Sntnc Vcblr Snt.C Frs.L F.L.W Sffxs Ltt.S Pdgrs Ltt.G Sentences 0.18 Vocabulary 0.01 0.16 Sent.Completion 0.00 -0.01 0.26 First.Letters -0.01 0.00 0.01 0.27 Four.Letter.Words 0.01 0.00 0.00 0.00 0.37 Suffixes 0.00 0.00 0.00 0.00 0.00 0.50 Letter.Series 0.00 0.01 -0.01 0.01 -0.01 0.00 0.27 Pedigrees -0.01 0.00 0.02 0.00 0.00 0.01 0.00 0.49 Letter.Group 0.01 -0.01 0.00 0.00 0.01 0.00 0.00 0.00 0.48 > sum(residuals(f3)^2) #include diagonal [1] 1.142454 > sum(residuals(f3,diag=FALSE)^2,na.rm=TRUE) #drop diagonal [1] 0.002456811 > > > > cleanEx() > nameEx("reverse.code") > ### * reverse.code > > flush(stderr()); flush(stdout()) > > ### Name: reverse.code > ### Title: Reverse the coding of selected items prior to scale analysis > ### Aliases: reverse.code > ### Keywords: multivariate > > ### ** Examples > > original <- matrix(sample(6,50,replace=TRUE),10,5) > keys <- c(1,1,-1,-1,1) #reverse the 3rd and 4th items > new <- reverse.code(keys,original,mini=rep(1,5),maxi=rep(6,5)) > original[1:3,] [,1] [,2] [,3] [,4] [,5] [1,] 1 1 1 1 2 [2,] 4 5 1 4 4 [3,] 1 5 6 3 1 > new[1:3,] - - [1,] 1 1 6 6 2 [2,] 4 5 6 3 4 [3,] 1 5 1 4 1 > > > > cleanEx() > nameEx("sat.act") > ### * sat.act > > flush(stderr()); flush(stdout()) > > ### Name: sat.act > ### Title: 3 Measures of ability: SATV, SATQ, ACT > ### Aliases: sat.act > ### Keywords: datasets > > ### ** Examples > > data(sat.act) > describe(sat.act) vars n mean sd median trimmed mad min max range skew gender 1 700 1.65 0.48 2 1.68 0.00 1 2 1 -0.61 education 2 700 3.16 1.43 3 3.31 1.48 0 5 5 -0.68 age 3 700 25.59 9.50 22 23.86 5.93 13 65 52 1.64 ACT 4 700 28.55 4.82 29 28.84 4.45 3 36 33 -0.66 SATV 5 700 612.23 112.90 620 619.45 118.61 200 800 600 -0.64 SATQ 6 687 610.22 115.64 620 617.25 118.61 200 800 600 -0.59 kurtosis se gender -1.62 0.02 education 
-0.07 0.05 age 2.42 0.36 ACT 0.53 0.18 SATV 0.33 4.27 SATQ -0.02 4.41 > pairs.panels(sat.act) > > > > cleanEx() > nameEx("scatter.hist") > ### * scatter.hist > > flush(stderr()); flush(stdout()) > > ### Name: scatterHist > ### Title: Draw a scatter plot with associated X and Y histograms, > ### densities and correlation > ### Aliases: scatter.hist scatterHist > ### Keywords: multivariate hplot > > ### ** Examples > > data(sat.act) > with(sat.act,scatterHist(SATV,SATQ)) > scatterHist(SATV ~ SATQ,data=sat.act) #formula input > > #or for something a bit more splashy > scatter.hist(sat.act[5:6],pch=(19+sat.act$gender),col=c("blue","red")[sat.act$gender],grid=TRUE) > #better yet > scatterHist(SATV ~ SATQ + gender,data=sat.act) #formula input with a grouping variable > > > > cleanEx() > nameEx("schmid") > ### * schmid > > flush(stderr()); flush(stdout()) > > ### Name: schmid > ### Title: Apply the Schmid Leiman transformation to a correlation matrix > ### Aliases: schmid > ### Keywords: multivariate models > > ### ** Examples > > jen <- sim.hierarchical() #create a hierarchical demo > if(!require(GPArotation)) { + message("I am sorry, you must have GPArotation installed to use schmid.")} else { + p.jen <- schmid(jen,digits=2) #use the oblimin rotation + p.jen <- schmid(jen,rotate="promax") #use the promax rotation + } Loading required package: GPArotation > > > > cleanEx() detaching ‘package:GPArotation’ > nameEx("score.alpha") > ### * score.alpha > > flush(stderr()); flush(stdout()) > > ### Name: score.alpha > ### Title: Score scales and find Cronbach's alpha as well as associated > ### statistics > ### Aliases: score.alpha > ### Keywords: multivariate models > > ### ** Examples > > > y <- attitude #from the datasets package > keys <- matrix(c(rep(1,7),rep(1,4),rep(0,7),rep(-1,3)),ncol=3) > labels <- c("first","second","third") > x <- score.alpha(keys,y,labels) #deprecated Warning: score.alpha is deprecated. Please use the scoreItems function
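> #the equivalent call (not run) with the current function takes the same
> #keys matrix and data:
> #x <- scoreItems(keys,y)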
> > > > > > cleanEx() > nameEx("score.irt") > ### * score.irt > > flush(stderr()); flush(stdout()) > > ### Name: scoreIrt > ### Title: Find Item Response Theory (IRT) based scores for dichotomous or > ### polytomous items > ### Aliases: scoreIrt scoreIrt.1pl scoreIrt.2pl score.irt score.irt.poly > ### score.irt.2 irt.stats.like make.irt.stats irt.tau irt.se > ### Keywords: multivariate models > > ### ** Examples > > > > > cleanEx() > nameEx("score.items") > ### * score.items > > flush(stderr()); flush(stdout()) > > ### Name: scoreItems > ### Title: Score item composite scales and find Cronbach's alpha, Guttman > ### lambda 6 and item whole correlations > ### Aliases: scoreItems scoreFast scoreVeryFast score.items > ### response.frequencies responseFrequency > ### Keywords: multivariate models > > ### ** Examples > > > #see the example including the bfi data set > data(psychTools::bfi) Warning in data(psychTools::bfi) : data set ‘psychTools::bfi’ not found > keys.list <- list(agree=c("-A1","A2","A3","A4","A5"), + conscientious=c("C1","C2","C3","-C4","-C5"),extraversion=c("-E1","-E2","E3","E4","E5"), + neuroticism=c("N1","N2","N3","N4","N5"), openness = c("O1","-O2","O3","O4","-O5")) > keys <- make.keys(psychTools::bfi,keys.list) #no longer necessary > scores <- scoreItems(keys,psychTools::bfi,min=1,max=6) #using a keys matrix > scores <- scoreItems(keys.list,psychTools::bfi,min=1,max=6) # or just use the keys.list > summary(scores) Call: scoreItems(keys = keys.list, items = psychTools::bfi, min = 1, max = 6) Scale intercorrelations corrected for attenuation raw correlations below the diagonal, (unstandardized) alpha on the diagonal corrected correlations above the diagonal: agree conscientious extraversion neuroticism openness agree 0.70 0.36 0.63 -0.245 0.23 conscientious 0.26 0.72 0.35 -0.305 0.30 extraversion 0.46 0.26 0.76 -0.284 0.32 neuroticism -0.18 -0.23 -0.22 0.812 -0.12 openness 0.15 0.19 0.22 -0.086 0.60 > #to get the response frequencies, we need to not use the age variable > scores <- scoreItems(keys[1:25,],psychTools::bfi[1:25]) #we do not need to specify min or > #max if there are no values (such as age) outside the normal item range.
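> #if only the scores themselves are wanted, a lighter (not run) option is
> #scoreFast, which skips the reliability statistics reported by scoreItems:
> #fast.scores <- scoreFast(keys.list,psychTools::bfi,min=1,max=6)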
> scores Call: scoreItems(keys = keys[1:25, ], items = psychTools::bfi[1:25]) (Unstandardized) Alpha: agree conscientious extraversion neuroticism openness alpha 0.7 0.72 0.76 0.81 0.6 Standard errors of unstandardized Alpha: agree conscientious extraversion neuroticism openness ASE 0.014 0.014 0.013 0.011 0.017 Average item correlation: agree conscientious extraversion neuroticism openness average.r 0.32 0.34 0.39 0.46 0.23 Median item correlation: agree conscientious extraversion neuroticism openness 0.34 0.34 0.38 0.41 0.22 Guttman 6* reliability: agree conscientious extraversion neuroticism openness Lambda.6 0.7 0.72 0.76 0.81 0.6 Signal/Noise based upon av.r : agree conscientious extraversion neuroticism openness Signal/Noise 2.3 2.6 3.2 4.3 1.5 Scale intercorrelations corrected for attenuation raw correlations below the diagonal, alpha on the diagonal corrected correlations above the diagonal: agree conscientious extraversion neuroticism openness agree 0.70 0.36 0.63 -0.245 0.23 conscientious 0.26 0.72 0.35 -0.305 0.30 extraversion 0.46 0.26 0.76 -0.284 0.32 neuroticism -0.18 -0.23 -0.22 0.812 -0.12 openness 0.15 0.19 0.22 -0.086 0.60 In order to see the item by scale loadings and frequency counts of the data print with the short option = FALSE> #The scores themselves are available in the scores$scores object. I.e., > describe(scores$scores) vars n mean sd median trimmed mad min max range skew agree 1 2800 4.65 0.89 4.8 4.73 0.89 1.0 6 5.0 -0.77 conscientious 2 2800 4.27 0.95 4.4 4.31 0.89 1.0 6 5.0 -0.41 extraversion 3 2800 4.15 1.05 4.2 4.20 1.19 1.0 6 5.0 -0.48 neuroticism 4 2800 3.16 1.19 3.0 3.13 1.19 1.0 6 5.0 0.22 openness 5 2800 4.59 0.80 4.6 4.62 0.89 1.2 6 4.8 -0.34 kurtosis se agree 0.44 0.02 conscientious -0.16 0.02 extraversion -0.19 0.02 neuroticism -0.65 0.02 openness -0.28 0.02 > > > #compare this output to that for the impute="none" option for SAPA type data > #first make many of the items missing in a missing pattern way > missing.bfi <- psychTools::bfi > missing.bfi[1:1000,3:8] <- NA > missing.bfi[1001:2000,c(1:2,9:10)] <- NA > scores <- scoreItems(keys.list,missing.bfi,impute="none",min=1,max=6) > scores Call: scoreItems(keys = keys.list, items = missing.bfi, impute = "none", min = 1, max = 6) (Standardized) Alpha: agree conscientious extraversion neuroticism openness alpha 0.72 0.72 0.76 0.81 0.6 Standard errors of unstandardized Alpha: agree conscientious extraversion neuroticism openness ASE 0.014 0.014 0.013 0.011 0.017 Standardized Alpha of observed scales: agree conscientious extraversion neuroticism openness [1,] 0.62 0.62 0.76 0.81 0.6 Average item correlation: agree conscientious extraversion neuroticism openness average.r 0.34 0.34 0.39 0.47 0.23 Median item correlation: agree conscientious extraversion neuroticism openness 0.35 0.32 0.38 0.41 0.23 Guttman 6* reliability: agree conscientious extraversion neuroticism openness Lambda.6 0.73 0.72 0.77 0.81 0.61 Signal/Noise based upon av.r : agree conscientious extraversion neuroticism openness Signal/Noise 2.5 2.6 3.2 4.4 1.5 Scale intercorrelations corrected for attenuation raw correlations below the diagonal, alpha on the diagonal corrected correlations above the diagonal: Note that these are the correlations of the complete scales based on the correlation matrix, not the observed scales based on the raw items. 
agree conscientious extraversion neuroticism openness agree 0.72 0.35 0.64 -0.243 0.25 conscientious 0.25 0.72 0.33 -0.275 0.28 extraversion 0.47 0.24 0.76 -0.284 0.32 neuroticism -0.19 -0.21 -0.22 0.814 -0.12 openness 0.17 0.19 0.22 -0.086 0.60 In order to see the item by scale loadings and frequency counts of the data print with the short option = FALSE> describe(scores$scores) #the actual scores themselves vars n mean sd median trimmed mad min max range skew agree 1 2800 4.67 1.00 5.00 4.77 0.99 1.0 6 5.0 -0.82 conscientious 2 2800 4.27 1.09 4.33 4.33 1.09 1.0 6 5.0 -0.47 extraversion 3 2800 4.15 1.06 4.20 4.20 1.19 1.0 6 5.0 -0.48 neuroticism 4 2800 3.16 1.20 3.00 3.13 1.48 1.0 6 5.0 0.21 openness 5 2800 4.59 0.81 4.60 4.62 0.89 1.2 6 4.8 -0.34 kurtosis se agree 0.42 0.02 conscientious -0.24 0.02 extraversion -0.21 0.02 neuroticism -0.67 0.02 openness -0.29 0.02 > > #If we want to delete scales scores for people who did not answer some items for one > #(or more) scales, we can do the following: > > scores <- scoreItems(keys.list,missing.bfi,totals=TRUE,min=1,max=6) #find total scores > describe(scores$scores) #note that missing data were replaced with median for the item vars n mean sd median trimmed mad min max range skew agree 1 2800 23.90 3.34 25 24.24 2.97 8 30 22 -1.14 conscientious 2 2800 22.37 3.48 23 22.60 2.97 6 30 24 -0.78 extraversion 3 2800 20.73 5.27 21 21.02 5.93 5 30 25 -0.48 neuroticism 4 2800 15.81 5.93 15 15.64 5.93 5 30 25 0.22 openness 5 2800 22.96 4.02 23 23.10 4.45 6 30 24 -0.34 kurtosis se agree 1.93 0.06 conscientious 1.22 0.07 extraversion -0.19 0.10 neuroticism -0.65 0.11 openness -0.28 0.08 > scores$scores[scores$missing > 0] <- NA #get rid of cases with missing data > describe(scores$scores) vars n mean sd median trimmed mad min max range skew agree 1 776 23.17 4.56 24 23.55 4.45 8 30 22 -0.75 conscientious 2 769 21.52 4.72 22 21.76 4.45 6 30 24 -0.49 extraversion 3 2713 20.72 5.30 21 21.01 5.93 5 30 25 -0.47 neuroticism 4 2694 15.82 5.97 15 15.65 7.41 5 30 25 0.22 openness 5 2726 22.97 4.04 23 23.12 4.45 6 30 24 -0.35 kurtosis se agree 0.31 0.16 conscientious 0.01 0.17 extraversion -0.20 0.10 neuroticism -0.66 0.12 openness -0.28 0.08 > > > > > > cleanEx() > nameEx("score.multiple.choice") > ### * score.multiple.choice > > flush(stderr()); flush(stdout()) > > ### Name: score.multiple.choice > ### Title: Score multiple choice items and provide basic test statistics > ### Aliases: score.multiple.choice > ### Keywords: multivariate models > > ### ** Examples > > data(psychTools::iqitems) Warning in data(psychTools::iqitems) : data set ‘psychTools::iqitems’ not found > iq.keys <- c(4,4,4, 6,6,3,4,4, 5,2,2,4, 3,2,6,7) > score.multiple.choice(iq.keys,psychTools::iqitems) Call: score.multiple.choice(key = iq.keys, data = psychTools::iqitems) (Unstandardized) Alpha: [1] 0.84 Average item correlation: [1] 0.25 item statistics key 0 1 2 3 4 5 6 7 8 miss r n mean reason.4 4 0.05 0.05 0.11 0.10 0.64 0.03 0.02 0.00 0.00 0 0.59 1523 0.64 reason.16 4 0.04 0.06 0.08 0.10 0.70 0.01 0.00 0.00 0.00 0 0.53 1524 0.70 reason.17 4 0.05 0.03 0.05 0.03 0.70 0.03 0.11 0.00 0.00 0 0.59 1523 0.70 reason.19 6 0.04 0.02 0.13 0.03 0.06 0.10 0.62 0.00 0.00 0 0.56 1523 0.62 letter.7 6 0.05 0.01 0.05 0.03 0.11 0.14 0.60 0.00 0.00 0 0.58 1524 0.60 letter.33 3 0.06 0.10 0.13 0.57 0.04 0.09 0.02 0.00 0.00 0 0.56 1523 0.57 letter.34 4 0.04 0.09 0.07 0.11 0.61 0.05 0.02 0.00 0.00 0 0.59 1523 0.61 letter.58 4 0.06 0.14 0.09 0.09 0.44 0.16 0.01 0.00 0.00 0 0.58 1525 0.44 matrix.45 5 0.04 0.01 0.06 
0.14 0.18 0.53 0.04 0.00 0.00 0 0.51 1523 0.53 matrix.46 2 0.04 0.12 0.55 0.07 0.11 0.06 0.05 0.00 0.00 0 0.52 1524 0.55 matrix.47 2 0.04 0.05 0.61 0.07 0.11 0.06 0.06 0.00 0.00 0 0.55 1523 0.61 matrix.55 4 0.04 0.02 0.18 0.14 0.37 0.07 0.18 0.00 0.00 0 0.45 1524 0.37 rotate.3 3 0.04 0.03 0.04 0.19 0.22 0.15 0.05 0.12 0.15 0 0.51 1523 0.19 rotate.4 2 0.04 0.03 0.21 0.05 0.18 0.04 0.04 0.25 0.15 0 0.56 1523 0.21 rotate.6 6 0.04 0.22 0.02 0.05 0.14 0.05 0.30 0.04 0.14 0 0.55 1523 0.30 rotate.8 7 0.04 0.03 0.21 0.07 0.16 0.05 0.13 0.19 0.13 0 0.48 1524 0.19 sd reason.4 0.48 reason.16 0.46 reason.17 0.46 reason.19 0.49 letter.7 0.49 letter.33 0.50 letter.34 0.49 letter.58 0.50 matrix.45 0.50 matrix.46 0.50 matrix.47 0.49 matrix.55 0.48 rotate.3 0.40 rotate.4 0.41 rotate.6 0.46 rotate.8 0.39 > #just convert the items to true or false > iq.tf <- score.multiple.choice(iq.keys,psychTools::iqitems,score=FALSE) > describe(iq.tf) #compare to previous results vars n mean sd median trimmed mad min max range skew kurtosis reason.4 1 1523 0.64 0.48 1 0.68 0 0 1 1 -0.58 -1.66 reason.16 2 1524 0.70 0.46 1 0.75 0 0 1 1 -0.86 -1.26 reason.17 3 1523 0.70 0.46 1 0.75 0 0 1 1 -0.86 -1.26 reason.19 4 1523 0.62 0.49 1 0.64 0 0 1 1 -0.47 -1.78 letter.7 5 1524 0.60 0.49 1 0.62 0 0 1 1 -0.41 -1.84 letter.33 6 1523 0.57 0.50 1 0.59 0 0 1 1 -0.29 -1.92 letter.34 7 1523 0.61 0.49 1 0.64 0 0 1 1 -0.46 -1.79 letter.58 8 1525 0.44 0.50 0 0.43 0 0 1 1 0.23 -1.95 matrix.45 9 1523 0.53 0.50 1 0.53 0 0 1 1 -0.10 -1.99 matrix.46 10 1524 0.55 0.50 1 0.56 0 0 1 1 -0.20 -1.96 matrix.47 11 1523 0.61 0.49 1 0.64 0 0 1 1 -0.47 -1.78 matrix.55 12 1524 0.37 0.48 0 0.34 0 0 1 1 0.52 -1.73 rotate.3 13 1523 0.19 0.40 0 0.12 0 0 1 1 1.55 0.40 rotate.4 14 1523 0.21 0.41 0 0.14 0 0 1 1 1.40 -0.03 rotate.6 15 1523 0.30 0.46 0 0.25 0 0 1 1 0.88 -1.24 rotate.8 16 1524 0.19 0.39 0 0.11 0 0 1 1 1.62 0.63 se reason.4 0.01 reason.16 0.01 reason.17 0.01 reason.19 0.01 letter.7 0.01 letter.33 0.01 letter.34 0.01 letter.58 0.01 matrix.45 0.01 matrix.46 0.01 matrix.47 0.01 matrix.55 0.01 rotate.3 0.01 rotate.4 0.01 rotate.6 0.01 rotate.8 0.01 > > > > > cleanEx() > nameEx("scoreWtd") > ### * scoreWtd > > flush(stderr()); flush(stdout()) > > ### Name: scoreWtd > ### Title: Score items using regression or correlation based weights > ### Aliases: scoreWtd > ### Keywords: multivariate models > > ### ** Examples > > > #find the weights from a regression model and then apply them to a new set > #derivation of weights from the first 20 cases > model.lm <- lm(rating ~ complaints + privileges + learning,data=attitude[1:20,]) > #or use setCor to find the coefficents > model <- setCor(rating ~ complaints + privileges +learning,data=attitude[1:20,],std=FALSE) > > #Apply these to a different set of data (the last 10 cases) > #note that the regression coefficients need to be a matrix > scores.lm <- scoreWtd(as.matrix(model.lm$coefficients),attitude[21:30,],sums=TRUE,std=FALSE) > scores <- scoreWtd(model$coefficients,attitude[21:30,],sums=TRUE,std=FALSE) > describe(scores) vars n mean sd median trimmed mad min max range skew kurtosis X1 1 10 62.79 11.84 62.68 63.16 14.86 44.19 78.39 34.2 -0.18 -1.58 se X1 3.74 > > > > > cleanEx() > nameEx("scrub") > ### * scrub > > flush(stderr()); flush(stdout()) > > ### Name: scrub > ### Title: A utility for basic data cleaning and recoding. Changes values > ### outside of minimum and maximum limits to NA. 
> ### Aliases: scrub > ### Keywords: multivariate > > ### ** Examples > > data(attitude) > x <- scrub(attitude,isvalue=55) #make all occurrences of 55 NA > x1 <- scrub(attitude, where=c(4,5,6), isvalue =c(30,40,50), + newvalue = c(930,940,950)) #will do this for the 4th, 5th, and 6th variables > x2 <- scrub(attitude, where=c(4,4,4), isvalue =c(30,40,50), + newvalue = c(930,940,950)) #will just do it for the 4th column > new <- scrub(attitude,1:3,cuts= c(10,40,50,60,100)) #change many values to fewer > #get rid of a complicated set of cases and replace with missing values > y <- scrub(attitude,where=2:4,min=c(20,30,40),max= c(120,110,100),isvalue= c(32,43,54)) > y1 <- scrub(attitude,where="learning",isvalue=55,newvalue=999) #change a column by name > y2 <- scrub(attitude,where="learning",min=45,newvalue=999) #change a column by name > > y3 <- scrub(attitude,where="learning",isvalue=c(45,48), + newvalue=999) #change a column by name look for multiple values in that column > y4 <- scrub(attitude,where="learning",isvalue=c(45,48), + newvalue= c(999,-999)) #change values in one column to one of two different things > > > > > cleanEx() > nameEx("set.cor") > ### * set.cor > > flush(stderr()); flush(stdout()) > > ### Name: setCor > ### Title: Multiple Regression and Set Correlation from matrix or raw input > ### Aliases: setCor setCor.diagram set.cor mat.regress matReg > ### crossValidation matPlot > ### Keywords: models multivariate > > ### ** Examples > > #First compare to lm using data input > summary(lm(rating ~ complaints + privileges, data = attitude)) Call: lm(formula = rating ~ complaints + privileges, data = attitude) Residuals: Min 1Q Median 3Q Max -12.7887 -5.6893 -0.0284 6.2745 9.9726 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 15.32762 7.16023 2.141 0.0415 * complaints 0.78034 0.11939 6.536 5.22e-07 *** privileges -0.05016 0.12992 -0.386 0.7025 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 7.102 on 27 degrees of freedom Multiple R-squared: 0.6831, Adjusted R-squared: 0.6596 F-statistic: 29.1 on 2 and 27 DF, p-value: 1.833e-07 > setCor(rating ~ complaints + privileges, data = attitude, std=FALSE) #do not standardize Call: setCor(y = rating ~ complaints + privileges, data = attitude, std = FALSE) Multiple Regression from raw data DV = rating slope se t p lower.ci upper.ci VIF Vy.x (Intercept) 15.33 7.16 2.14 4.1e-02 0.64 30.02 1.00 0.00 complaints 0.78 0.12 6.54 5.2e-07 0.54 1.03 1.45 0.70 privileges -0.05 0.13 -0.39 7.0e-01 -0.32 0.22 1.45 -0.02 Residual Standard Error = 7.1 with 27 degrees of freedom Multiple Regression R R2 Ruw R2uw Shrunken R2 SE of R2 overall F df1 df2 p rating 0.83 0.68 0.62 0.38 0.66 0.08 29.1 2 27 1.83e-07 > z.attitude <- data.frame(scale(attitude)) #standardize the data before doing lm > summary(lm(rating ~ complaints + privileges, data = z.attitude)) #regressions on z scores Call: lm(formula = rating ~ complaints + privileges, data = z.attitude) Residuals: Min 1Q Median 3Q Max -1.05062 -0.46739 -0.00233 0.51546 0.81927 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -1.086e-15 1.065e-01 0.000 1.000 complaints 8.536e-01 1.306e-01 6.536 5.22e-07 *** privileges -5.042e-02 1.306e-01 -0.386 0.702 --- Signif. 
codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.5834 on 27 degrees of freedom Multiple R-squared: 0.6831, Adjusted R-squared: 0.6596 F-statistic: 29.1 on 2 and 27 DF, p-value: 1.833e-07 > setCor(rating ~ complaints + privileges, data = attitude) #by default we standardize and Call: setCor(y = rating ~ complaints + privileges, data = attitude) Multiple Regression from raw data DV = rating slope se t p lower.ci upper.ci VIF Vy.x (Intercept) 0.00 0.11 0.00 1.0e+00 -0.22 0.22 1.00 0.00 complaints 0.85 0.13 6.54 5.2e-07 0.59 1.12 1.45 0.70 privileges -0.05 0.13 -0.39 7.0e-01 -0.32 0.22 1.45 -0.02 Residual Standard Error = 0.58 with 27 degrees of freedom Multiple Regression R R2 Ruw R2uw Shrunken R2 SE of R2 overall F df1 df2 p rating 0.83 0.68 0.71 0.5 0.66 0.08 29.1 2 27 1.83e-07 > # the results are the same as the standardized lm > > > R <- cor(attitude) #find the correlations > #Do the regression on the correlations > #Note that these match the regressions on the standard scores of the data > setCor(rating ~ complaints + privileges, data =R, n.obs=30) Call: setCor(y = rating ~ complaints + privileges, data = R, n.obs = 30) Multiple Regression from matrix input DV = rating slope se t p lower.ci upper.ci VIF Vy.x complaints 0.85 0.13 6.54 5.2e-07 0.59 1.12 1.45 0.70 privileges -0.05 0.13 -0.39 7.0e-01 -0.32 0.22 1.45 -0.02 Residual Standard Error = 0.58 with 27 degrees of freedom Multiple Regression R R2 Ruw R2uw Shrunken R2 SE of R2 overall F df1 df2 p rating 0.83 0.68 0.71 0.5 0.66 0.08 29.1 2 27 1.83e-07 > > #now, partial out learning and critical > setCor(rating ~ complaints + privileges - learning - critical, data =R, n.obs=30) Call: setCor(y = rating ~ complaints + privileges - learning - critical, data = R, n.obs = 30) Multiple Regression from matrix input The following variables were partialed out: learning critical and are included in the calculation of df1 and df2 DV = rating* slope se t p lower.ci upper.ci VIF Vy.x complaints* 0.75 0.14 5.26 1.9e-05 0.45 1.04 1.16 0.55 privileges* -0.10 0.12 -0.87 3.9e-01 -0.35 0.14 1.16 -0.02 Residual Standard Error = 0.57 with 25 degrees of freedom Multiple Regression R R2 Ruw R2uw Shrunken R2 SE of R2 overall F df1 df2 p rating 0.73 0.53 0.71 0.5 0.45 0.1 13.99 2 25 8.38e-05 > #compare with the full regression: > setCor(rating ~ complaints + privileges + learning + critical, data =R, n.obs=30) Call: setCor(y = rating ~ complaints + privileges + learning + critical, data = R, n.obs = 30) Multiple Regression from matrix input DV = rating slope se t p lower.ci upper.ci VIF Vy.x complaints 0.75 0.14 5.15 2.5e-05 0.45 1.04 1.84 0.62 privileges -0.10 0.13 -0.78 4.4e-01 -0.38 0.17 1.55 -0.04 learning 0.23 0.14 1.67 1.1e-01 -0.05 0.51 1.65 0.14 critical 0.01 0.11 0.05 9.6e-01 -0.22 0.23 1.04 0.00 Residual Standard Error = 0.57 with 25 degrees of freedom Multiple Regression R R2 Ruw R2uw Shrunken R2 SE of R2 overall F df1 df2 p rating 0.85 0.72 0.71 0.5 0.67 0.07 15.68 4 25 1.52e-06 > > > > #Canonical correlations: > > #The first Kelley data set from Hotelling > kelley1 <- structure(c(1, 0.6328, 0.2412, 0.0586, 0.6328, 1, -0.0553, 0.0655, + 0.2412, -0.0553, 1, 0.4248, 0.0586, 0.0655, 0.4248, 1), .Dim = c(4L, + 4L), .Dimnames = list(c("reading.speed", "reading.power", "math.speed", + "math.power"), c("reading.speed", "reading.power", "math.speed", + "math.power"))) > lowerMat(kelley1) rdng.s rdng.p mth.s mth.p reading.speed 1.00 reading.power 0.63 1.00 math.speed 0.24 -0.06 1.00 math.power 0.06 0.07 0.42 1.00 > mod1 <- 
setCor(y = math.speed + math.power ~ reading.speed + reading.power, + data = kelley1, n.obs=140) > mod1$cancor [1] 0.39450592 0.06884787 > #Hotelling reports .3945 and .0688 we get 0.39450592 0.06884787 > > #the second Kelley data from Hotelling > kelley <- structure(list(speed = c(1, 0.4248, 0.042, 0.0215, 0.0573), power = c(0.4248, + 1, 0.1487, 0.2489, 0.2843), words = c(0.042, 0.1487, 1, 0.6693, + 0.4662), symbols = c(0.0215, 0.2489, 0.6693, 1, 0.6915), meaningless = c(0.0573, + 0.2843, 0.4662, 0.6915, 1)), .Names = c("speed", "power", "words", + "symbols", "meaningless"), class = "data.frame", row.names = c("speed", + "power", "words", "symbols", "meaningless")) > > lowerMat(kelley) speed power words symbl mnngl speed 1.00 power 0.42 1.00 words 0.04 0.15 1.00 symbols 0.02 0.25 0.67 1.00 meaningless 0.06 0.28 0.47 0.69 1.00 > > setCor(power + speed ~ words + symbols + meaningless,data=kelley) #formula mode Call: setCor(y = power + speed ~ words + symbols + meaningless, data = kelley) Multiple Regression from matrix input DV = power slope VIF Vy.x words -0.03 1.81 -0.01 symbols 0.12 2.72 0.03 meaningless 0.22 1.92 0.06 Multiple Regression R R2 Ruw R2uw power 0.29 0.09 0.26 0.07 DV = speed slope VIF Vy.x words 0.05 1.81 0 symbols -0.07 2.72 0 meaningless 0.08 1.92 0 Multiple Regression R R2 Ruw R2uw speed 0.07 0.01 0.05 0 Various estimates of between set correlations Squared Canonical Correlations [1] 0.0946 0.0035 Average squared canonical correlation = 0.05 Cohen's Set Correlation R2 = 0.1 Unweighted correlation between the two sets = 0.18> #setCor(y= 1:2,x = 3:5,data = kelley) #order of variables input > > #Hotelling reports canonical correlations of .3073 and .0583 or squared correlations of > # 0.09443329 and 0.00339889 vs. our values of cancor = 0.3076 0.0593 with squared values > #of 0.0946 0.0035, > > setCor(y=c(7:9),x=c(1:6),data=Thurstone,n.obs=213) #easier to just list variable Call: setCor(y = c(7:9), x = c(1:6), data = Thurstone, n.obs = 213) Multiple Regression from matrix input DV = Letter.Series slope se t p lower.ci upper.ci VIF Vy.x Sentences 0.23 0.12 2.02 0.044 0.01 0.46 3.71 0.10 Vocabulary 0.08 0.12 0.71 0.480 -0.15 0.32 3.94 0.04 Sent.Completion 0.04 0.10 0.39 0.700 -0.16 0.24 3.00 0.02 First.Letters 0.12 0.09 1.33 0.180 -0.06 0.29 2.20 0.04 Four.Letter.Words 0.19 0.08 2.27 0.024 0.03 0.36 2.00 0.08 Suffixes -0.05 0.08 -0.62 0.540 -0.20 0.11 1.74 -0.01 Residual Standard Error = 0.87 with 206 degrees of freedom Multiple Regression R R2 Ruw R2uw Shrunken R2 SE of R2 overall F df1 df2 Letter.Series 0.51 0.26 0.49 0.25 0.24 0.05 12.37 6 206 p Letter.Series 7.01e-12 DV = Pedigrees slope se t p lower.ci upper.ci VIF Vy.x Sentences 0.20 0.11 1.87 0.063 -0.01 0.42 3.71 0.11 Vocabulary 0.15 0.11 1.34 0.180 -0.07 0.37 3.94 0.08 Sent.Completion 0.21 0.10 2.11 0.036 0.01 0.40 3.00 0.11 First.Letters 0.02 0.08 0.26 0.790 -0.14 0.19 2.20 0.01 Four.Letter.Words 0.11 0.08 1.41 0.160 -0.04 0.27 2.00 0.04 Suffixes -0.01 0.07 -0.12 0.900 -0.16 0.14 1.74 0.00 Residual Standard Error = 0.82 with 206 degrees of freedom Multiple Regression R R2 Ruw R2uw Shrunken R2 SE of R2 overall F df1 df2 p Pedigrees 0.59 0.35 0.56 0.31 0.33 0.05 18.19 6 206 6.56e-17 DV = Letter.Group slope se t p lower.ci upper.ci VIF Vy.x Sentences 0.18 0.12 1.58 0.1200 -0.05 0.41 3.71 0.07 Vocabulary -0.05 0.12 -0.38 0.7000 -0.28 0.19 3.94 -0.02 Sent.Completion 0.07 0.10 0.68 0.5000 -0.13 0.28 3.00 0.03 First.Letters 0.17 0.09 1.89 0.0610 -0.01 0.34 2.20 0.07 Four.Letter.Words 0.24 0.08 2.87 0.0045 0.08 0.41 2.00 
0.11 Suffixes 0.00 0.08 0.05 0.9600 -0.15 0.16 1.74 0.00 Residual Standard Error = 0.87 with 206 degrees of freedom Multiple Regression R R2 Ruw R2uw Shrunken R2 SE of R2 overall F df1 df2 Letter.Group 0.51 0.26 0.48 0.23 0.24 0.05 12.01 6 206 p Letter.Group 1.47e-11 Various estimates of between set correlations Squared Canonical Correlations [1] 0.4115 0.0689 0.0069 Chisq of canonical correlations [1] 109.7 14.8 1.4 Average squared canonical correlation = 0.16 Cohen's Set Correlation R2 = 0.46 Shrunken Set Correlation R2 = 0.4 F and df of Cohen's Set Correlation 7.47 18 560.51 Unweighted correlation between the two sets = 0.62> #locations if we have long names > #now try partialling out some variables > set.cor(y=c(7:9),x=c(1:3),z=c(4:6),data=Thurstone) #compare with the previous Call: setCor(y = y, x = x, data = data, z = z, n.obs = n.obs, use = use, std = std, square = square, main = main, plot = plot, show = show, zero = zero, part = part) Multiple Regression from matrix input The following variables were partialed out: First.Letters Four.Letter.Words Suffixes and are included in the calculation of df1 and df2 DV = Letter.Series slope VIF Vy.x Sentences 0.23 2.73 0.07 Vocabulary 0.08 2.69 0.02 Sent.Completion 0.04 2.20 0.01 Multiple Regression R R2 Ruw R2uw Letter.Series 0.31 0.1 0.46 0.21 DV = Pedigrees slope VIF Vy.x Sentences 0.20 2.73 0.08 Vocabulary 0.15 2.69 0.06 Sent.Completion 0.21 2.20 0.09 Multiple Regression R R2 Ruw R2uw Pedigrees 0.47 0.22 0.58 0.33 DV = Letter.Group slope VIF Vy.x Sentences 0.18 2.73 0.03 Vocabulary -0.05 2.69 -0.01 Sent.Completion 0.07 2.20 0.01 Multiple Regression R R2 Ruw R2uw Letter.Group 0.2 0.04 0.39 0.15 Various estimates of between set correlations Squared Canonical Correlations [1] 0.2281 0.0065 0.0041 Average squared canonical correlation = 0.08 Cohen's Set Correlation R2 = 0.24 Unweighted correlation between the two sets = 0.57> #compare complete print out with summary printing > sc <- setCor(SATV + SATQ ~ gender + education,data=sat.act) # regression from raw data > sc Call: setCor(y = SATV + SATQ ~ gender + education, data = sat.act) Multiple Regression from raw data DV = SATV slope se t p lower.ci upper.ci VIF Vy.x (Intercept) 0.00 0.04 0.00 1.00 -0.07 0.07 1.00 0 gender -0.02 0.04 -0.61 0.54 -0.10 0.05 1.01 0 education 0.05 0.04 1.28 0.20 -0.03 0.12 1.01 0 Residual Standard Error = 1 with 697 degrees of freedom Multiple Regression R R2 Ruw R2uw Shrunken R2 SE of R2 overall F df1 df2 p SATV 0.05 0 0.05 0 0 0 0.94 2 697 0.391 DV = SATQ slope se t p lower.ci upper.ci VIF Vy.x (Intercept) 0.00 0.04 0.00 1.0e+00 -0.07 0.07 1.00 0.00 gender -0.17 0.04 -4.54 6.6e-06 -0.24 -0.10 1.01 0.03 education 0.05 0.04 1.32 1.9e-01 -0.02 0.12 1.01 0.00 Residual Standard Error = 0.99 with 697 degrees of freedom Multiple Regression R R2 Ruw R2uw Shrunken R2 SE of R2 overall F df1 df2 p SATQ 0.17 0.03 0.15 0.02 0.03 0.01 10.73 2 697 2.56e-05 Various estimates of between set correlations Squared Canonical Correlations [1] 0.0417 0.0021 Chisq of canonical correlations [1] 29.6 1.4 Average squared canonical correlation = 0.02 Cohen's Set Correlation R2 = 0.04 Shrunken Set Correlation R2 = 0.04 F and df of Cohen's Set Correlation 5.21 6 1384 Unweighted correlation between the two sets = 0.11> summary(sc) Multiple Regression from raw data setCor(y = SATV + SATQ ~ gender + education, data = sat.act) Multiple Regression from matrix input Beta weights SATV SATQ (Intercept) 0.000 0.000 gender -0.023 -0.170 education 0.048 0.049 Multiple R SATV SATQ 0.052 0.173 Multiple 
R2 SATV SATQ 0.0027 0.0299 Cohen's set correlation R2 [1] 0.044 Squared Canonical Correlations [1] 0.0417 0.0021 > > setCor(Pedigrees ~ Sentences + Vocabulary - First.Letters - Four.Letter.Words , + data=Thurstone) #showing formula input with two covariates Call: setCor(y = Pedigrees ~ Sentences + Vocabulary - First.Letters - Four.Letter.Words, data = Thurstone) Multiple Regression from matrix input The following variables were partialed out: First.Letters Four.Letter.Words and are included in the calculation of df1 and df2 DV = Pedigrees slope VIF Vy.x Sentences 0.29 2.48 0.12 Vocabulary 0.23 2.48 0.09 Multiple Regression R R2 Ruw R2uw Pedigrees 0.46 0.21 0.56 0.32 > > #Do some regressions with real data (rather than correlation matrices) > setCor(reaction ~ cond + pmi + import, data = Tal.Or) Call: setCor(y = reaction ~ cond + pmi + import, data = Tal.Or) Multiple Regression from raw data DV = reaction slope se t p lower.ci upper.ci VIF Vy.x (Intercept) 0.00 0.08 0.00 1.0e+00 -0.15 0.15 1.00 0.00 cond 0.03 0.08 0.43 6.7e-01 -0.12 0.19 1.05 0.01 pmi 0.34 0.08 4.26 4.0e-05 0.18 0.49 1.11 0.15 import 0.36 0.08 4.59 1.1e-05 0.21 0.52 1.11 0.17 Residual Standard Error = 0.83 with 119 degrees of freedom Multiple Regression R R2 Ruw R2uw Shrunken R2 SE of R2 overall F df1 df2 p reaction 0.57 0.33 0.52 0.27 0.31 0.07 19.11 3 119 3.5e-10 > > #partial out importance > setCor(reaction ~ cond + pmi - import, data = Tal.Or, main="Partial out importance") Call: setCor(y = reaction ~ cond + pmi - import, data = Tal.Or, main = "Partial out importance") Multiple Regression from raw data The following variables were partialed out: import and are included in the calculation of df1 and df2 DV = reaction* slope se t p lower.ci upper.ci VIF Vy.x (Intercept)* 0.00 0.08 0.00 1.00000 -0.15 0.15 1.00 0.00 cond* 0.03 0.08 0.43 0.67000 -0.12 0.19 1.02 0.00 pmi* 0.34 0.08 4.26 0.00004 0.18 0.49 1.02 0.14 Residual Standard Error = 0.83 with 119 degrees of freedom Multiple Regression R R2 Ruw R2uw Shrunken R2 SE of R2 overall F df1 df2 p reaction 0.37 0.14 0.39 0.16 0.12 0.06 9.62 2 119 0.000134 > > #compare with using lm by partialling > mod1 <- lm(reaction ~ cond + pmi + import, data = Tal.Or) > reaction.import <- lm(reaction~import,data=Tal.Or)$resid > cond.import <- lm(cond~import,data=Tal.Or)$resid > pmi.import <- lm(pmi~import,data=Tal.Or)$resid > mod.partial <- lm(reaction.import ~ cond.import + pmi.import) > summary(mod.partial) Call: lm(formula = reaction.import ~ cond.import + pmi.import) Residuals: Min 1Q Median 3Q Max -3.2502 -0.8319 -0.0356 0.8425 2.7394 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -2.196e-17 1.158e-01 0.000 1.000 cond.import 1.034e-01 2.381e-01 0.434 0.665 pmi.import 3.965e-01 9.259e-02 4.282 3.75e-05 *** --- Signif. 
codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 1.284 on 120 degrees of freedom Multiple R-squared: 0.1392, Adjusted R-squared: 0.1249 F-statistic: 9.703 on 2 and 120 DF, p-value: 0.0001242 > #lm uses raw scores, so set std = FALSE for setCor > print(setCor(y = reaction ~ cond + pmi - import, data = Tal.Or,std = FALSE, + main = "Partial out importance"),digits=4) Call: setCor(y = reaction ~ cond + pmi - import, data = Tal.Or, std = FALSE, main = "Partial out importance") Multiple Regression from raw data The following variables were partialed out: import and are included in the calculation of df1 and df2 DV = reaction* slope se t p lower.ci upper.ci VIF Vy.x (Intercept)* 0.0000 0.1163 0.0000 1.000e+00 -0.2302 0.2302 1.0000 0.0000 cond* 0.1034 0.2391 0.4324 6.662e-01 -0.3700 0.5768 1.0338 0.0132 pmi* 0.3965 0.0930 4.2645 4.038e-05 0.2124 0.5806 1.0338 0.1927 Residual Standard Error = 1.2895 with 119 degrees of freedom Multiple Regression R R2 Ruw R2uw Shrunken R2 SE of R2 overall F df1 df2 reaction 0.3731 0.1392 0.3948 0.1559 0.1175 0.0554 9.6219 2 119 p reaction 0.00013385 > #notice that the dfs of the partial approach using lm are 1 more than the setCor dfs > > #Show how to find quadratic terms > sc <- setCor(reaction ~ cond + pmi + I(import^2), data = Tal.Or) > sc Call: setCor(y = reaction ~ cond + pmi + I(import^2), data = Tal.Or) Multiple Regression from raw data DV = reaction slope se t p lower.ci upper.ci VIF Vy.x (Intercept) 0.00 0.08 0.00 1.0e+00 -0.15 0.15 1.00 0.00 cond 0.03 0.08 0.35 7.3e-01 -0.13 0.18 1.06 0.00 pmi 0.34 0.08 4.22 4.8e-05 0.18 0.49 1.11 0.15 import^2 0.36 0.08 4.45 2.0e-05 0.20 0.52 1.12 0.16 Residual Standard Error = 0.84 with 119 degrees of freedom Multiple Regression R R2 Ruw R2uw Shrunken R2 SE of R2 overall F df1 df2 p reaction 0.56 0.32 0.51 0.26 0.3 0.07 18.59 3 119 5.89e-10 > #pairs.panels(sc$data) #show the SPLOM of the data > > #Consider an example of a derivation and cross validation sample > set.seed(42) > ss <- sample(1:2800,1400) > model <- setCor(y=26:28,x=1:25,data=bfi[ss,],plot=FALSE) > original.fit <- crossValidation(model,bfi[ss,]) #the derivation set > cross.fit <- crossValidation(model,bfi[-ss,]) #the cross validation set > summary(original.fit) Cross Validation Call:crossValidation(model = model, data = bfi[ss, ]) Validities from raw items and from the correlation matrix Number of unique predictors used = 25 items mat gender 0.39 0.39 education 0.27 0.27 age 0.35 0.35 > summary(cross.fit) Cross Validation Call:crossValidation(model = model, data = bfi[-ss, ]) Validities from raw items and from the correlation matrix Number of unique predictors used = 25 items mat gender 0.35 0.35 education 0.13 0.13 age 0.27 0.27 > predicted <- predict(model,bfi[-ss,]) > cor2(predicted,bfi[-ss,26:28]) gender education age gender 0.35 0.01 0.05 education -0.11 0.11 0.12 age 0.02 0.06 0.27 > > > > > cleanEx() > nameEx("sim") > ### * sim > > flush(stderr()); flush(stdout()) > > ### Name: sim > ### Title: Functions to simulate psychological/psychometric data. 
> ### Aliases: sim sim.simplex sim.minor > ### Keywords: multivariate datagen > > ### ** Examples > > simplex <- sim.simplex() #create the default simplex structure > lowerMat(simplex) #the correlation matrix V1 V2 V3 V4 V5 V6 V7 V8 V9 V10 V11 V1 1.00 V2 0.80 1.00 V3 0.64 0.80 1.00 V4 0.51 0.64 0.80 1.00 V5 0.41 0.51 0.64 0.80 1.00 V6 0.33 0.41 0.51 0.64 0.80 1.00 V7 0.26 0.33 0.41 0.51 0.64 0.80 1.00 V8 0.21 0.26 0.33 0.41 0.51 0.64 0.80 1.00 V9 0.17 0.21 0.26 0.33 0.41 0.51 0.64 0.80 1.00 V10 0.13 0.17 0.21 0.26 0.33 0.41 0.51 0.64 0.80 1.00 V11 0.11 0.13 0.17 0.21 0.26 0.33 0.41 0.51 0.64 0.80 1.00 V12 0.09 0.11 0.13 0.17 0.21 0.26 0.33 0.41 0.51 0.64 0.80 [1] 1.00 > #create a congeneric matrix > congeneric <- sim.congeneric() > lowerMat(congeneric) V1 V2 V3 V4 V1 1.00 V2 0.56 1.00 V3 0.48 0.42 1.00 V4 0.40 0.35 0.30 1.00 > R <- sim.hierarchical() > lowerMat(R) V1 V2 V3 V4 V5 V6 V7 V8 V9 V1 1.00 V2 0.56 1.00 V3 0.48 0.42 1.00 V4 0.40 0.35 0.30 1.00 V5 0.35 0.30 0.26 0.42 1.00 V6 0.29 0.25 0.22 0.35 0.30 1.00 V7 0.30 0.26 0.23 0.24 0.20 0.17 1.00 V8 0.25 0.22 0.19 0.20 0.17 0.14 0.30 1.00 V9 0.20 0.18 0.15 0.16 0.13 0.11 0.24 0.20 1.00 > #now simulate categorical items with the hierarchical factor structure. > #Let the items be dichotomous with varying item difficulties. > marginals = matrix(c(seq(.1,.9,.1),seq(.9,.1,-.1)),byrow=TRUE,nrow=2) > X <- sim.poly.mat(R=R,m=marginals,n=1000) > lowerCor(X) #show the raw correlations C1 C2 C3 C4 C5 C6 C7 C8 C9 R1 1.00 R2 0.41 1.00 R3 0.38 0.28 1.00 R4 0.29 0.28 0.20 1.00 R5 0.24 0.16 0.21 0.23 1.00 R6 0.13 0.18 0.06 0.23 0.12 1.00 R7 0.25 0.16 0.20 0.15 0.21 0.01 1.00 R8 0.11 0.19 0.10 0.15 0.09 0.12 0.23 1.00 R9 0.15 0.06 0.16 0.06 0.14 0.02 0.15 0.12 1.00 > #lowerMat(tetrachoric(X)$rho) # show the tetrachoric correlations (not run) > #generate a structure > fx <- matrix(c(.9,.8,.7,rep(0,6),c(.8,.7,.6)),ncol=2) > fy <- c(.6,.5,.4) > Phi <- matrix(c(1,0,.5,0,1,.4,0,0,0),ncol=3) > R <- sim.structure(fx,Phi,fy) > cor.plot(R$model) #show it graphically > > simp <- sim.simplex() > #show the simplex structure using cor.plot > cor.plot(simp,colors=TRUE,main="A simplex structure") > #Show a STARS model > simp <- sim.simplex(alpha=.8,lambda=.4) > #show the simplex structure using cor.plot > cor.plot(simp,colors=TRUE,main="State Trait Auto Regressive Simplex" ) > > dichot.sim <- sim.irt() #simulate 5 dichotomous items > poly.sim <- sim.poly(theta=dichot.sim$theta) #simulate 5 polytomous items that correlate > #with the dichotomous items > > > > > cleanEx() > nameEx("sim.VSS") > ### * sim.VSS > > flush(stderr()); flush(stdout()) > > ### Name: sim.VSS > ### Title: create VSS like data > ### Aliases: sim.VSS VSS.simulate VSS.sim > ### Keywords: multivariate models datagen > > ### ** Examples > > ## Not run: > ##D simulated <- sim.VSS(1000,20,4,.6) > ##D vss <- VSS(simulated,rotate="varimax") > ##D VSS.plot(vss) > ## End(Not run) > > > > > cleanEx() > nameEx("sim.anova") > ### * sim.anova > > flush(stderr()); flush(stdout()) > > ### Name: sim.anova > ### Title: Simulate a 3 way balanced ANOVA or linear model, with or without > ### repeated measures. 
> ### Aliases: sim.anova > ### Keywords: models multivariate > > ### ** Examples > > set.seed(42) > data.df <- sim.anova(es1=1,es2=.5,es13=1) # two main effects and one interaction > describe(data.df) vars n mean sd median trimmed mad min max range skew kurtosis se IV1* 1 16 1.50 0.52 1.5 1.50 0.74 1.00 2.00 1.0 0.00 -2.12 0.13 IV2* 2 16 1.50 0.52 1.5 1.50 0.74 1.00 2.00 1.0 0.00 -2.12 0.13 IV3* 3 16 1.50 0.52 1.5 1.50 0.74 1.00 2.00 1.0 0.00 -2.12 0.13 DV 4 16 0.49 1.85 1.0 0.62 1.63 -3.78 3.03 6.8 -0.67 -0.45 0.46 > pairs.panels(data.df) #show how the design variables are orthogonal > # > summary(lm(DV~IV1*IV2*IV3,data=data.df)) Call: lm(formula = DV ~ IV1 * IV2 * IV3, data = data.df) Residuals: Min 1Q Median 3Q Max -0.8966 -0.3917 0.0000 0.3917 0.8966 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 1.1798 0.5803 2.033 0.076503 . IV11 -1.9469 0.8207 -2.372 0.045093 * IV21 0.1076 0.8207 0.131 0.898974 IV31 -4.0620 0.8207 -4.949 0.001122 ** IV11:IV21 2.6342 1.1607 2.269 0.052933 . IV11:IV31 6.0582 1.1607 5.220 0.000803 *** IV21:IV31 2.0421 1.1607 1.759 0.116556 IV11:IV21:IV31 -3.3524 1.6415 -2.042 0.075397 . --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.8207 on 8 degrees of freedom Multiple R-squared: 0.8952, Adjusted R-squared: 0.8035 F-statistic: 9.764 on 7 and 8 DF, p-value: 0.002274 > summary(aov(DV~IV1*IV2*IV3,data=data.df)) Df Sum Sq Mean Sq F value Pr(>F) IV1 1 9.749 9.749 14.473 0.005204 ** IV2 1 10.337 10.337 15.346 0.004435 ** IV3 1 2.890 2.890 4.290 0.072097 . IV1:IV2 1 0.918 0.918 1.362 0.276751 IV1:IV3 1 19.202 19.202 28.507 0.000695 *** IV2:IV3 1 0.134 0.134 0.199 0.667572 IV1:IV2:IV3 1 2.810 2.810 4.171 0.075397 . Residuals 8 5.389 0.674 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 > set.seed(42) > #demonstrate the effect of not centering the data on the regression > data.df <- sim.anova(es1=1,es2=.5,es13=1,center=FALSE) # > describe(data.df) vars n mean sd median trimmed mad min max range skew kurtosis se IV1* 1 16 1.50 0.52 1.5 1.50 0.74 1.00 2.00 1.0 0.00 -2.12 0.13 IV2* 2 16 1.50 0.52 1.5 1.50 0.74 1.00 2.00 1.0 0.00 -2.12 0.13 IV3* 3 16 1.50 0.52 1.5 1.50 0.74 1.00 2.00 1.0 0.00 -2.12 0.13 DV 4 16 0.49 1.85 1.0 0.62 1.63 -3.78 3.03 6.8 -0.67 -0.45 0.46 > # > #this one is incorrect, because the IVs are not centered > summary(lm(DV~IV1*IV2*IV3,data=data.df)) Call: lm(formula = DV ~ IV1 * IV2 * IV3, data = data.df) Residuals: Min 1Q Median 3Q Max -0.8966 -0.3917 0.0000 0.3917 0.8966 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 1.1798 0.5803 2.033 0.076503 . IV11 -1.9469 0.8207 -2.372 0.045093 * IV21 0.1076 0.8207 0.131 0.898974 IV31 -4.0620 0.8207 -4.949 0.001122 ** IV11:IV21 2.6342 1.1607 2.269 0.052933 . IV11:IV31 6.0582 1.1607 5.220 0.000803 *** IV21:IV31 2.0421 1.1607 1.759 0.116556 IV11:IV21:IV31 -3.3524 1.6415 -2.042 0.075397 . --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.8207 on 8 degrees of freedom Multiple R-squared: 0.8952, Adjusted R-squared: 0.8035 F-statistic: 9.764 on 7 and 8 DF, p-value: 0.002274 > > summary(aov(DV~IV1*IV2*IV3,data=data.df)) #compare with the lm model Df Sum Sq Mean Sq F value Pr(>F) IV1 1 9.749 9.749 14.473 0.005204 ** IV2 1 10.337 10.337 15.346 0.004435 ** IV3 1 2.890 2.890 4.290 0.072097 . IV1:IV2 1 0.918 0.918 1.362 0.276751 IV1:IV3 1 19.202 19.202 28.507 0.000695 *** IV2:IV3 1 0.134 0.134 0.199 0.667572 IV1:IV2:IV3 1 2.810 2.810 4.171 0.075397 . Residuals 8 5.389 0.674 --- Signif.
codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 > #now examine multiple levels and quadratic terms > set.seed(42) > data.df <- sim.anova(es1=1,es13=1,n2=3,n3=4,es22=1) > summary(lm(DV~IV1*IV2*IV3,data=data.df)) Call: lm(formula = DV ~ IV1 * IV2 * IV3, data = data.df) Residuals: Min 1Q Median 3Q Max -2.0018 -0.3397 0.0000 0.3397 2.0018 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 3.4260 0.7844 4.367 0.000207 *** IV11 -2.7790 1.1093 -2.505 0.019434 * IV20 -3.0489 1.1093 -2.748 0.011190 * IV21 -1.2009 1.1093 -1.083 0.289778 IV3-1 -1.5254 1.1093 -1.375 0.181825 IV31 -4.4713 1.1093 -4.031 0.000488 *** IV33 -5.1016 1.1093 -4.599 0.000115 *** IV11:IV20 1.5126 1.5689 0.964 0.344596 IV11:IV21 1.3254 1.5689 0.845 0.406549 IV11:IV3-1 3.2038 1.5689 2.042 0.052272 . IV11:IV31 6.1556 1.5689 3.924 0.000639 *** IV11:IV33 8.5233 1.5689 5.433 1.4e-05 *** IV20:IV3-1 2.1234 1.5689 1.353 0.188512 IV21:IV3-1 1.1223 1.5689 0.715 0.481280 IV20:IV31 1.3930 1.5689 0.888 0.383389 IV21:IV31 2.2484 1.5689 1.433 0.164711 IV20:IV33 1.5838 1.5689 1.010 0.322782 IV21:IV33 1.5504 1.5689 0.988 0.332899 IV11:IV20:IV3-1 -2.6968 2.2187 -1.215 0.236008 IV11:IV21:IV3-1 -1.2671 2.2187 -0.571 0.573237 IV11:IV20:IV31 -0.4246 2.2187 -0.191 0.849846 IV11:IV21:IV31 -3.3169 2.2187 -1.495 0.147959 IV11:IV20:IV33 -2.4872 2.2187 -1.121 0.273368 IV11:IV21:IV33 -0.6422 2.2187 -0.289 0.774713 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 1.109 on 24 degrees of freedom Multiple R-squared: 0.8651, Adjusted R-squared: 0.7359 F-statistic: 6.693 on 23 and 24 DF, p-value: 8.495e-06 > summary(aov(DV~IV1*IV2*IV3,data=data.df)) Df Sum Sq Mean Sq F value Pr(>F) IV1 1 36.11 36.11 29.346 1.45e-05 *** IV2 2 32.23 16.12 13.096 0.000143 *** IV3 3 10.76 3.59 2.914 0.054955 . IV1:IV2 2 0.03 0.01 0.011 0.988728 IV1:IV3 3 98.02 32.67 26.549 8.49e-08 *** IV2:IV3 6 3.71 0.62 0.502 0.800543 IV1:IV2:IV3 6 8.58 1.43 1.162 0.358682 Residuals 24 29.54 1.23 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 > pairs.panels(data.df) > # > data.df <- sim.anova(es1=1,es2=-.5,within=c(-1,0,1),n=10) > pairs.panels(data.df) > > > > > cleanEx() > nameEx("sim.congeneric") > ### * sim.congeneric > > flush(stderr()); flush(stdout()) > > ### Name: sim.congeneric > ### Title: Simulate a congeneric data set > ### Aliases: congeneric.sim sim.congeneric make.congeneric > ### Keywords: multivariate datagen > > ### ** Examples > > test <- sim.congeneric(c(.9,.8,.7,.6)) #just the population matrix > test <- sim.congeneric(c(.9,.8,.7,.6),N=100) # a sample correlation matrix > test <- sim.congeneric(short=FALSE, N=100) > round(cor(test$observed),2) # show a congeneric correlation matrix V1 V2 V3 V4 V1 1.00 0.50 0.49 0.28 V2 0.50 1.00 0.37 0.29 V3 0.49 0.37 1.00 0.42 V4 0.28 0.29 0.42 1.00 > f1=fa(test$observed,scores=TRUE) > round(cor(f1$scores,test$latent),2) theta e1 e2 e3 e4 MR1 0.84 0.29 0.14 0.47 0.18 > #factor score estimates are correlated with but not equal to the factor scores > set.seed(42) > #500 responses to 4 discrete items > items <- sim.congeneric(N=500,short=FALSE,low=-2,high=2,categorical=TRUE) > d4 <- irt.fa(items$observed) #item response analysis of congeneric measures > > > > > > > cleanEx() > nameEx("sim.hierarchical") > ### * sim.hierarchical > > flush(stderr()); flush(stdout()) > > ### Name: sim.hierarchical > ### Title: Create a population or sample correlation matrix, perhaps with > ### hierarchical structure. 
> ### Aliases: sim.hierarchical make.hierarchical sim.bonds > ### Keywords: multivariate models datagen > > ### ** Examples > > > gload <- matrix(c(.9,.8,.7),nrow=3) # a higher order factor matrix > fload <- matrix(c( #a lower order (oblique) factor matrix + .8,0,0, + .7,0,.0, + .6,0,.0, + 0,.7,.0, + 0,.6,.0, + 0,.5,0, + 0,0,.6, + 0,0,.5, + 0,0,.4), ncol=3,byrow=TRUE) > > jensen <- sim.hierarchical(gload,fload) #the test set used by omega > round(jensen,2) V1 V2 V3 V4 V5 V6 V7 V8 V9 V1 1.00 0.56 0.48 0.40 0.35 0.29 0.30 0.25 0.20 V2 0.56 1.00 0.42 0.35 0.30 0.25 0.26 0.22 0.18 V3 0.48 0.42 1.00 0.30 0.26 0.22 0.23 0.19 0.15 V4 0.40 0.35 0.30 1.00 0.42 0.35 0.24 0.20 0.16 V5 0.35 0.30 0.26 0.42 1.00 0.30 0.20 0.17 0.13 V6 0.29 0.25 0.22 0.35 0.30 1.00 0.17 0.14 0.11 V7 0.30 0.26 0.23 0.24 0.20 0.17 1.00 0.30 0.24 V8 0.25 0.22 0.19 0.20 0.17 0.14 0.30 1.00 0.20 V9 0.20 0.18 0.15 0.16 0.13 0.11 0.24 0.20 1.00 > set.seed(42) #for reproducible results > jensen <- sim.hierarchical(n=10000) #use the same gload and fload values, but produce the data > #Compare factor scores using the sl model with those that generated the data > lowerCor(jensen$theta) #the correlations of the factors g F1* F2* F3* g 1.00 F1* -0.01 1.00 F2* 0.00 0.01 1.00 F3* 0.00 0.00 -0.01 1.00 > fs <- factor.scores(jensen$observed, jensen$sl) #find factor scores from the data > lowerCor(fs$scores) #these are now correlated g F1* F2* F3* g 1.00 F1* 0.40 1.00 F2* 0.20 -0.57 1.00 F3* 0.14 -0.39 -0.20 1.00 > cor2(fs$scores,jensen$theta) #correlation with the generating factors g F1* F2* F3* g 0.86 0.25 0.17 0.12 F1* 0.35 0.45 -0.30 -0.20 F2* 0.18 -0.22 0.57 -0.12 F3* 0.11 -0.16 -0.10 0.58 > > > #compare this to a simulation of the bonds model > set.seed(42) > R <- sim.bonds() > R$R V1 V2 V3 V4 V5 V6 V7 V8 V9 F1 F2 F3 F4 V1 1.00 0.61 0.30 0.25 0.25 0.30 0.25 0.25 0.00 0.00 0.48 0.40 0.00 V2 0.61 1.00 0.30 0.25 0.25 0.30 0.25 0.25 0.00 0.00 0.48 0.40 0.00 V3 0.30 0.30 1.00 0.36 0.36 0.30 0.36 0.00 0.30 0.48 0.40 0.00 0.00 V4 0.25 0.25 0.36 1.00 0.61 0.60 0.61 0.25 0.30 0.48 0.00 0.40 0.00 V5 0.25 0.25 0.36 0.61 1.00 0.60 0.61 0.25 0.30 0.48 0.00 0.40 0.00 V6 0.30 0.30 0.30 0.60 0.60 1.00 0.60 0.30 0.25 0.40 0.00 0.48 0.00 V7 0.25 0.25 0.36 0.61 0.61 0.60 1.00 0.25 0.30 0.48 0.00 0.40 0.00 V8 0.25 0.25 0.00 0.25 0.25 0.30 0.25 1.00 0.36 0.00 0.00 0.40 0.48 V9 0.00 0.00 0.30 0.30 0.30 0.25 0.30 0.36 1.00 0.40 0.00 0.00 0.48 F1 0.00 0.00 0.48 0.48 0.48 0.40 0.48 0.00 0.40 1.00 0.00 0.00 0.00 F2 0.48 0.48 0.40 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 F3 0.40 0.40 0.00 0.40 0.40 0.48 0.40 0.40 0.00 0.00 0.00 1.00 0.00 F4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.48 0.48 0.00 0.00 0.00 1.00 > > #simulate a non-hierarchical structure > fload <- matrix(c(c(c(.9,.8,.7,.6),rep(0,20)),c(c(.9,.8,.7,.6),rep(0,20)), + c(c(.9,.8,.7,.6),rep(0,20)),c(c(c(.9,.8,.7,.6),rep(0,20)),c(.9,.8,.7,.6))),ncol=5) > gload <- matrix(rep(0,5)) > five.factor <- sim.hierarchical(gload,fload,500,TRUE) #create sample data set > #do it again with a hierarchical structure > gload <- matrix(rep(.7,5)) > five.factor.g <- sim.hierarchical(gload,fload,500,TRUE) #create sample data set > #compare these two with omega > #not run > #om.5 <- omega(five.factor$observed,5) > #om.5g <- omega(five.factor.g$observed,5) > > > > cleanEx() > nameEx("sim.irt") > ### * sim.irt > > flush(stderr()); flush(stdout()) > > ### Name: sim.irt > ### Title: Functions to simulate psychological/psychometric data.
> ### Aliases: sim.irt sim.rasch sim.npl sim.npn sim.poly sim.poly.npl > ### sim.poly.npn sim.poly.ideal sim.poly.ideal.npl sim.poly.ideal.npn > ### sim.poly.mat > ### Keywords: multivariate datagen > > ### ** Examples > > simplex <- sim.simplex() #create the default simplex structure > lowerMat(simplex) #the correlation matrix V1 V2 V3 V4 V5 V6 V7 V8 V9 V10 V11 V1 1.00 V2 0.80 1.00 V3 0.64 0.80 1.00 V4 0.51 0.64 0.80 1.00 V5 0.41 0.51 0.64 0.80 1.00 V6 0.33 0.41 0.51 0.64 0.80 1.00 V7 0.26 0.33 0.41 0.51 0.64 0.80 1.00 V8 0.21 0.26 0.33 0.41 0.51 0.64 0.80 1.00 V9 0.17 0.21 0.26 0.33 0.41 0.51 0.64 0.80 1.00 V10 0.13 0.17 0.21 0.26 0.33 0.41 0.51 0.64 0.80 1.00 V11 0.11 0.13 0.17 0.21 0.26 0.33 0.41 0.51 0.64 0.80 1.00 V12 0.09 0.11 0.13 0.17 0.21 0.26 0.33 0.41 0.51 0.64 0.80 [1] 1.00 > #create a congeneric matrix > congeneric <- sim.congeneric() > lowerMat(congeneric) V1 V2 V3 V4 V1 1.00 V2 0.56 1.00 V3 0.48 0.42 1.00 V4 0.40 0.35 0.30 1.00 > R <- sim.hierarchical() > lowerMat(R) V1 V2 V3 V4 V5 V6 V7 V8 V9 V1 1.00 V2 0.56 1.00 V3 0.48 0.42 1.00 V4 0.40 0.35 0.30 1.00 V5 0.35 0.30 0.26 0.42 1.00 V6 0.29 0.25 0.22 0.35 0.30 1.00 V7 0.30 0.26 0.23 0.24 0.20 0.17 1.00 V8 0.25 0.22 0.19 0.20 0.17 0.14 0.30 1.00 V9 0.20 0.18 0.15 0.16 0.13 0.11 0.24 0.20 1.00 > #now simulate categorical items with the hierarchical factor structure. > #Let the items be dichotomous with varying item difficulties. > marginals = matrix(c(seq(.1,.9,.1),seq(.9,.1,-.1)),byrow=TRUE,nrow=2) > X <- sim.poly.mat(R=R,m=marginals,n=1000) > lowerCor(X) #show the raw correlations C1 C2 C3 C4 C5 C6 C7 C8 C9 R1 1.00 R2 0.41 1.00 R3 0.38 0.28 1.00 R4 0.29 0.28 0.20 1.00 R5 0.24 0.16 0.21 0.23 1.00 R6 0.13 0.18 0.06 0.23 0.12 1.00 R7 0.25 0.16 0.20 0.15 0.21 0.01 1.00 R8 0.11 0.19 0.10 0.15 0.09 0.12 0.23 1.00 R9 0.15 0.06 0.16 0.06 0.14 0.02 0.15 0.12 1.00 > #lowerMat(tetrachoric(X)$rho) # show the tetrachoric correlations (not run) > #generate a structure > fx <- matrix(c(.9,.8,.7,rep(0,6),c(.8,.7,.6)),ncol=2) > fy <- c(.6,.5,.4) > Phi <- matrix(c(1,0,.5,0,1,.4,0,0,0),ncol=3) > R <- sim.structure(fx,Phi,fy) > cor.plot(R$model) #show it graphically > > simp <- sim.simplex() > #show the simplex structure using cor.plot > cor.plot(simp,colors=TRUE,main="A simplex structure") > #Show a STARS model > simp <- sim.simplex(alpha=.8,lambda=.4) > #show the simplex structure using cor.plot > cor.plot(simp,colors=TRUE,main="State Trait Auto Regressive Simplex" ) > > dichot.sim <- sim.irt() #simulate 5 dichotomous items > poly.sim <- sim.poly(theta=dichot.sim$theta) #simulate 5 polytomous items that correlate > #with the dichotomous items > > > > > cleanEx() > nameEx("sim.item") > ### * sim.item > > flush(stderr()); flush(stdout()) > > ### Name: sim.item > ### Title: Generate simulated data structures for circumplex, spherical, or > ### simple structure > ### Aliases: sim.spherical item.sim sim.item sim.dichot item.dichot > ### sim.circ circ.sim con2cat > ### Keywords: multivariate datagen > > ### ** Examples > > > round(cor(circ.sim(nvar=8,nsub=200)),2) [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [1,] 1.00 0.26 -0.04 -0.23 -0.21 -0.22 0.11 0.28 [2,] 0.26 1.00 0.21 0.00 -0.14 -0.35 -0.20 0.08 [3,] -0.04 0.21 1.00 0.35 0.08 -0.26 -0.42 -0.27 [4,] -0.23 0.00 0.35 1.00 0.23 -0.01 -0.30 -0.48 [5,] -0.21 -0.14 0.08 0.23 1.00 0.18 0.00 -0.26 [6,] -0.22 -0.35 -0.26 -0.01 0.18 1.00 0.20 -0.03 [7,] 0.11 -0.20 -0.42 -0.30 0.00 0.20 1.00 0.26 [8,] 0.28 0.08 -0.27 -0.48 -0.26 -0.03 0.26 1.00 > plot(fa(circ.sim(16,500),2)$loadings,main="Circumplex 
Structure") #circumplex structure > # > # > plot(fa(item.sim(16,500),2)$loadings,main="Simple Structure") #simple structure > # > cluster.plot(fa(item.dichot(16,low=0,high=1),2)) > > set.seed(42) > > data <- mnormt::rmnorm(1000, c(0, 0), matrix(c(1, .5, .5, 1), 2, 2)) #continuous data > new <- con2cat(data,c(-1.5,-.5,.5,1.5)) #discrete data > polychoric(new) Call: polychoric(x = new) Polychoric correlations C1 C2 R1 1.00 R2 0.49 1.00 with tau of 1 2 3 4 [1,] -1.5 -0.49 0.54 1.6 [2,] -1.5 -0.52 0.52 1.6 > #not run > #x12 <- sim.item(12,gloading=.6) > #f3 <- fa(x12,3,rotate="none") > #f3 #observe the general factor > #oblimin(f3$loadings[,2:3]) #show the 2nd and 3rd factors. > #f3 <- fa(x12,3) #now do it with oblimin rotation > #f3 # not what one would naively expect. > > > > > cleanEx() > nameEx("sim.multilevel") > ### * sim.multilevel > > flush(stderr()); flush(stdout()) > > ### Name: sim.multilevel > ### Title: Simulate multilevel data with specified within group and between > ### group correlations > ### Aliases: sim.multilevel sim.multi > ### Keywords: multivariate models > > ### ** Examples > > #First, show a few results from sim.multi > > x.df <- sim.multi() #the default is 4 subjects for two variables > # over 16 days measured 6 times/day > > #sb <- statsBy(x.df,group ="id",cors=TRUE) > #round(sb$within,2) #show the within subject correlations > > #get some parameters to simulate > data(withinBetween) > wb.stats <- statsBy(withinBetween,"Group") > rwg <- wb.stats$rwg > rbg <- wb.stats$rbg > eta <- rep(.5,9) > > #simulate them. Try this again to see how it changes > XY <- sim.multilevel(ncases=100,ngroups=10,rwg=rwg,rbg=rbg,eta=eta) > lowerCor(XY$wg) #based upon 89 df C1 C2 C3 C4 C5 C6 C7 C8 C9 R1 1.00 R2 -0.03 1.00 R3 -1.00 0.03 1.00 R4 1.00 -0.03 -1.00 1.00 R5 -0.03 1.00 0.03 -0.03 1.00 R6 -1.00 0.03 1.00 -1.00 0.03 1.00 R7 1.00 -0.03 -1.00 1.00 -0.03 -1.00 1.00 R8 -0.03 1.00 0.03 -0.03 1.00 0.03 -0.03 1.00 R9 -1.00 0.03 1.00 -1.00 0.03 1.00 -1.00 0.03 1.00 > lowerCor(XY$bg) #based upon 9 df -- C1 C2 C3 C4 C5 C6 C7 C8 C9 R1 1.00 R2 1.00 1.00 R3 1.00 1.00 1.00 R4 0.48 0.48 0.48 1.00 R5 0.48 0.48 0.48 1.00 1.00 R6 0.48 0.48 0.48 1.00 1.00 1.00 R7 -1.00 -1.00 -1.00 -0.48 -0.48 -0.48 1.00 R8 -1.00 -1.00 -1.00 -0.48 -0.48 -0.48 1.00 1.00 R9 -1.00 -1.00 -1.00 -0.48 -0.48 -0.48 1.00 1.00 1.00 > > > > cleanEx() > nameEx("sim.omega") > ### * sim.omega > > flush(stderr()); flush(stdout()) > > ### Name: sim.omega > ### Title: Further functions to simulate psychological/psychometric data.
> ### Aliases: sim.omega sim.general sim.parallel > ### Keywords: multivariate datagen > > ### ** Examples > > #test <- sim.omega() > > > > cleanEx() > nameEx("sim.structural") > ### * sim.structural > > flush(stderr()); flush(stdout()) > > ### Name: sim.structure > ### Title: Create correlation matrices or data matrices with a particular > ### measurement and structural model > ### Aliases: sim.structure sim.structural sim.correlation simCor > ### Keywords: multivariate datagen > > ### ** Examples > > #First, create a sem like model with a factor model of x and ys with correlation Phi > fx <-matrix(c( .9,.8,.6,rep(0,4),.6,.8,-.7),ncol=2) > fy <- matrix(c(.6,.5,.4),ncol=1) > rownames(fx) <- c("V","Q","A","nach","Anx") > rownames(fy)<- c("gpa","Pre","MA") > Phi <-matrix( c(1,0,.7,.0,1,.7,.7,.7,1),ncol=3) > #now create this structure > gre.gpa <- sim.structural(fx,Phi,fy) > print(gre.gpa,2) Call: sim.structural(fx = fx, Phi = Phi, fy = fy) $model (Population correlation matrix) V Q A nach Anx gpa Pre MA V 1.00 0.72 0.54 0.00 0.00 0.38 0.32 0.25 Q 0.72 1.00 0.48 0.00 0.00 0.34 0.28 0.22 A 0.54 0.48 1.00 0.48 -0.42 0.50 0.42 0.34 nach 0.00 0.00 0.48 1.00 -0.56 0.34 0.28 0.22 Anx 0.00 0.00 -0.42 -0.56 1.00 -0.29 -0.24 -0.20 gpa 0.38 0.34 0.50 0.34 -0.29 1.00 0.30 0.24 Pre 0.32 0.28 0.42 0.28 -0.24 0.30 1.00 0.20 MA 0.25 0.22 0.34 0.22 -0.20 0.24 0.20 1.00 $reliability (population reliability) V Q A nach Anx gpa Pre MA 0.81 0.64 0.72 0.64 0.49 0.36 0.25 0.16 > #correct for attenuation to see structure > #the raw correlations are below the diagonal, the adjusted above > round(correct.cor(gre.gpa$model,gre.gpa$reliability),2) V Q A nach Anx gpa Pre MA V 0.81 1.00 0.71 0.00 0.00 0.70 0.70 0.70 Q 0.72 0.64 0.71 0.00 0.00 0.70 0.70 0.70 A 0.54 0.48 0.72 0.71 -0.71 0.99 0.99 0.99 nach 0.00 0.00 0.48 0.64 -1.00 0.70 0.70 0.70 Anx 0.00 0.00 -0.42 -0.56 0.49 -0.70 -0.70 -0.70 gpa 0.38 0.34 0.50 0.34 -0.29 0.36 1.00 1.00 Pre 0.32 0.28 0.42 0.28 -0.24 0.30 0.25 1.00 MA 0.25 0.22 0.34 0.22 -0.20 0.24 0.20 0.16 > > #These are the population values, > # we can also create a correlation matrix sampled from this population > GRE.GPA <- sim.structural(fx,Phi,fy,n=250,raw=FALSE) > lowerMat(GRE.GPA$r) V Q A nach Anx gpa Pre MA V 1.00 Q 0.70 1.00 A 0.46 0.42 1.00 nach -0.05 0.01 0.49 1.00 Anx 0.13 0.08 -0.35 -0.59 1.00 gpa 0.23 0.26 0.47 0.39 -0.22 1.00 Pre 0.17 0.23 0.35 0.24 -0.23 0.18 1.00 MA 0.28 0.21 0.29 0.23 -0.22 0.26 0.14 1.00 > > #or we can show data sampled from such a population > GRE.GPA <- sim.structural(fx,Phi,fy,n=250,raw=TRUE) > lowerCor(GRE.GPA$observed) V Q A nach Anx gpa Pre MA V 1.00 Q 0.75 1.00 A 0.61 0.52 1.00 nach 0.06 0.03 0.50 1.00 Anx -0.11 -0.09 -0.48 -0.56 1.00 gpa 0.41 0.33 0.47 0.37 -0.38 1.00 Pre 0.28 0.24 0.34 0.28 -0.20 0.28 1.00 MA 0.29 0.23 0.27 0.28 -0.14 0.25 0.23 1.00 > > > > congeneric <- sim.structure(f=c(.9,.8,.7,.6)) # a congeneric model > congeneric Call: sim.structure(f = c(0.9, 0.8, 0.7, 0.6)) $model (Population correlation matrix) V1 V2 V3 V4 V1 1.00 0.72 0.63 0.54 V2 0.72 1.00 0.56 0.48 V3 0.63 0.56 1.00 0.42 V4 0.54 0.48 0.42 1.00 $reliability (population reliability) [1] 0.81 0.64 0.49 0.36 > > #now take this correlation matrix as a population value and create samples from it > example.congeneric <- sim.correlation(congeneric$model,n=200) #create a sample matrix > lowerMat(example.congeneric ) #show the correlation matrix V1 V2 V3 V4 V1 1.00 V2 0.69 1.00 V3 0.65 0.56 1.00 V4 0.54 0.44 0.35 1.00 > #or create another sample and show the data > example.congeneric.data 
<- simCor(congeneric$model,n=200,data=TRUE) > describe(example.congeneric.data) vars n mean sd median trimmed mad min max range skew kurtosis se V1 1 200 0 1 0.01 0.00 1.09 -2.17 2.73 4.90 0.04 -0.51 0.07 V2 2 200 0 1 -0.03 0.01 1.09 -2.59 2.22 4.82 -0.14 -0.32 0.07 V3 3 200 0 1 0.04 -0.02 0.99 -2.19 2.31 4.50 0.12 -0.57 0.07 V4 4 200 0 1 0.00 -0.02 0.98 -2.56 2.48 5.05 0.10 -0.26 0.07 > lowerCor(example.congeneric.data ) V1 V2 V3 V4 V1 1.00 V2 0.71 1.00 V3 0.65 0.59 1.00 V4 0.52 0.48 0.39 1.00 > example.skewed <- simCor(congeneric$model,n=200,vars=c(1,2),data=TRUE,skew="log") > describe(example.skewed) vars n mean sd median trimmed mad min max range skew kurtosis se V1 1 200 0 1 0.14 0.13 0.73 -6.57 1.35 7.92 -3.00 14.72 0.07 V2 2 200 0 1 0.16 0.11 0.75 -9.32 1.35 10.67 -4.30 35.79 0.07 V3 3 200 0 1 0.01 0.00 0.97 -2.41 2.78 5.19 0.04 -0.07 0.07 V4 4 200 0 1 0.03 0.01 1.03 -2.67 2.60 5.27 -0.05 -0.39 0.07 > > > > > cleanEx() > nameEx("simulation.circ") > ### * simulation.circ > > flush(stderr()); flush(stdout()) > > ### Name: simulation.circ > ### Title: Simulations of circumplex and simple structure > ### Aliases: simulation.circ circ.simulation circ.sim.plot > ### Keywords: multivariate datagen > > ### ** Examples > > #not run > demo <- simulation.circ() > boxplot(demo[3:14]) > title("4 tests of Circumplex Structure",sub="Circumplex, Ellipsoid, Simple Structure") > circ.sim.plot(demo[3:14]) #compare these results to real data > > > > cleanEx() > nameEx("skew") > ### * skew > > flush(stderr()); flush(stdout()) > > ### Name: mardia > ### Title: Calculate univariate or multivariate (Mardia's test) skew and > ### kurtosis for a vector, matrix, or data.frame > ### Aliases: mardia skew kurtosi > ### Keywords: multivariate models > > ### ** Examples > > round(skew(attitude),2) #type 3 (default) [1] -0.36 -0.22 0.38 -0.05 0.20 -0.87 0.85 > round(kurtosi(attitude),2) #type 3 (default) rating complaints privileges learning raises critical advance -0.77 -0.68 -0.41 -1.22 -0.60 0.17 0.47 > #for the differences between the three types of skew and kurtosis: > round(skew(attitude,type=1),2) #type 1 [1] -0.38 -0.23 0.40 -0.06 0.21 -0.91 0.89 > round(skew(attitude,type=2),2) #type 2 [1] -0.40 -0.24 0.42 -0.06 0.22 -0.96 0.94 > mardia(attitude) Call: mardia(x = attitude) Mardia tests of multivariate skew and kurtosis Use describe(x) the to get univariate tests n.obs = 30 num.vars = 7 b1p = 20.09 skew = 100.45 with probability <= 0.11 small sample skew = 113.23 with probability <= 0.018 b2p = 61.91 kurtosis = -0.27 with probability <= 0.79> x <- matrix(rnorm(1000),ncol=10) > describe(x) vars n mean sd median trimmed mad min max range skew kurtosis X1 1 100 0.11 0.90 0.11 0.12 0.87 -2.21 2.40 4.62 -0.07 -0.05 X2 2 100 -0.04 0.96 -0.18 -0.08 0.87 -1.91 2.31 4.22 0.44 -0.31 X3 3 100 0.03 1.03 0.00 0.06 0.94 -2.89 2.65 5.54 -0.24 0.26 X4 4 100 0.05 0.99 0.02 0.09 0.97 -2.59 1.97 4.56 -0.32 -0.29 X5 5 100 -0.04 1.17 -0.14 -0.07 1.03 -3.01 3.81 6.82 0.35 0.36 X6 6 100 -0.04 0.97 -0.06 -0.02 1.01 -2.53 2.02 4.55 -0.22 -0.31 X7 7 100 -0.20 1.08 -0.25 -0.22 1.17 -2.94 2.68 5.62 0.11 -0.16 X8 8 100 0.00 1.10 -0.04 0.01 1.38 -2.60 2.02 4.62 -0.11 -0.85 X9 9 100 0.01 1.09 0.03 0.01 1.00 -2.42 3.06 5.48 0.08 -0.06 X10 10 100 0.00 1.05 0.11 0.02 0.93 -3.00 2.32 5.32 -0.22 0.24 se X1 0.09 X2 0.10 X3 0.10 X4 0.10 X5 0.12 X6 0.10 X7 0.11 X8 0.11 X9 0.11 X10 0.11 > mardia(x) Call: mardia(x = x) Mardia tests of multivariate skew and kurtosis Use describe(x) the to get univariate tests n.obs = 100 num.vars = 10 b1p = 10.93 
skew = 182.16 with probability <= 0.97 small sample skew = 188.64 with probability <= 0.94 b2p = 115.25 kurtosis = -1.53 with probability <= 0.13> > > > cleanEx() > nameEx("smc") > ### * smc > > flush(stderr()); flush(stdout()) > > ### Name: smc > ### Title: Find the Squared Multiple Correlation (SMC) of each variable > ### with the remaining variables in a matrix > ### Aliases: smc > ### Keywords: multivariate > > ### ** Examples > > R <- make.hierarchical() > round(smc(R),2) V1 V2 V3 V4 V5 V6 V7 V8 V9 0.44 0.37 0.28 0.30 0.24 0.17 0.18 0.14 0.09 > > > > cleanEx() > nameEx("spider") > ### * spider > > flush(stderr()); flush(stdout()) > > ### Name: spider > ### Title: Make "radar" or "spider" plots. > ### Aliases: spider radar > ### Keywords: multivariate hplot > > ### ** Examples > > op <- par(mfrow=c(3,2)) > spider(y=1,x=2:9,data=Thurstone,connect=FALSE) #a radar plot > spider(y=1,x=2:9,data=Thurstone) #same plot as a spider plot > spider(y=1:3,x=4:9,data=Thurstone,overlay=TRUE) > #make a somewhat oversized plot > spider(y=26:28,x=1:25,data=cor(psychTools::bfi,use="pairwise"),fill=TRUE,scale=2) > par(op) > > #another example taken from Lippa (2001, page 193) > lippa.df <- + structure(list(labels = c("Assured - Dominant", "Gregarious\nExtraverted", + "Warm\nAgreeable", "Unassuming\nIngenuous", "Unassured - Submissive", + "Aloof\nIntroverted", "Cold\nHearted", "Arrogant\nCalculating" + ), pos = c(0.8, 0.85, 0.83, 0.8, 0.75, 0.83, 0.85, 0.85), values = c(0.41, + -0.29, -0.53, -0.61, -0.38, 0.14, 0.59, 0.6), delta = c(1.1, + 1.2, 1.2, 1.1, 1.1, 1.5, 1.2, 1.1)), row.names = c(NA, -8L), class = "data.frame") > > radar(lippa.df$values,abs=TRUE,labels=lippa.df$labels,angle=90,clockwise=TRUE,lwd=3, + label.pos=lippa.df$pos,main="Data from Lippa (2001)",scale=.9,circles=FALSE, + cut=0,delta=lippa.df$delta) > segments(-1,0,1,0,lwd=.2) # Add hairline axes > segments(0,-1,0,1,lwd=.2) > text(0,1.05,expression(italic("Masculine Instrumentality"))) > text(1.05,0,expression(italic("Feminine Communion")),srt=270) > > #show how to draw a hexagon > RIASEC.df <- structure(list(labels = c("Realistic", "Investigative", "Artistic", + "Social", "Enterprising", "Conventional"), Su = c(0.84, 0.26, + -0.35, -0.68, 0.04, -0.33), Morris = c(1.14, 0.32, -0.19, -0.38, + 0.22, 0.23)), row.names = c(NA, -6L), class = "data.frame") > > radar(RIASEC.df$Morris,RIASEC.df$labels,clockwise=TRUE,angle=0,absolute=TRUE,circles=FALSE,scale=.7, + position=c(1,0,0,0,0,0), lwd=4,label.pos=rep(.80,6),main="",cut=0, shape=TRUE, + delta =c(1.1,1.25,1.25, 1.25, 1.45,1.45) ) > text(-1.04,0,expression(italic("People")),srt=90) > text(1.04,0,expression(italic("Things")),srt=270) > text(0,.91,expression(italic("Data"))) > text(0,-.91 ,expression(italic("Ideas"))) > segments(-1,0,1,0,lwd=.2) #add hairline axes > segments(0,-.86,0,.86,lwd=.2) > text(0,1.2, "Data from Su") > > > > > > > graphics::par(get("par.postscript", pos = 'CheckExEnv')) > cleanEx() > nameEx("statsBy") > ### * statsBy > > flush(stderr()); flush(stdout()) > > ### Name: statsBy > ### Title: Find statistics (including correlations) within and between > ### groups for basic multilevel analyses > ### Aliases: statsBy statsBy.boot statsBy.boot.summary faBy > ### Keywords: multivariate models > > ### ** Examples > > #Taken from Pedhazur, 1997 > pedhazur <- structure(list(Group = c(1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, + 2L), X = c(5L, 2L, 4L, 6L, 3L, 8L, 5L, 7L, 9L, 6L), Y = 1:10), .Names = c("Group", + "X", "Y"), class = "data.frame", row.names = c(NA, -10L)) > pedhazur Group X Y 1
1 5 1 2 1 2 2 3 1 4 3 4 1 6 4 5 1 3 5 6 2 8 6 7 2 5 7 8 2 7 8 9 2 9 9 10 2 6 10 > ped.stats <- statsBy(pedhazur,"Group") > ped.stats Statistics within and between groups Call: statsBy(data = pedhazur, group = "Group") Intraclass Correlation 1 (Percentage of variance due to groups) Group X Y 1.00 0.62 0.83 Intraclass Correlation 2 (Reliability of group differences) Group X Y 1.00 0.89 0.96 eta^2 between groups X.bg Y.bg 0.53 0.76 To see the correlations between and within groups, use the short=FALSE option in your print statement. Many results are not shown directly. To see specific objects select from the following list: mean sd n F ICC1 ICC2 ci1 ci2 raw rbg pbg rwg nw ci.wg pwg etabg etawg nwg nG Call> > > #Now do this for the sat.act data set > sat.stats <- statsBy(sat.act,c("education","gender"),cors=TRUE) #group by two grouping variables > print(sat.stats,short=FALSE) Statistics within and between groups Call: statsBy(data = sat.act, group = c("education", "gender"), cors = TRUE) Intraclass Correlation 1 (Percentage of variance due to groups) gender education age ACT SATV SATQ 1.00 1.00 0.45 0.03 0.00 0.03 Intraclass Correlation 2 (Reliability of group differences) gender education age ACT SATV SATQ 1.00 1.00 0.98 0.67 -0.11 0.64 eta^2 between groups age.bg ACT.bg SATV.bg SATQ.bg 0.44 0.05 0.01 0.04 Correlation between groups ag.bg ACT.b SATV. SATQ. age.bg 1.00 ACT.bg 0.64 1.00 SATV.bg 0.26 0.72 1.00 SATQ.bg 0.22 0.52 0.64 1.00 Correlation within groups ag.wg ACT.w SATV. SATQ. age.wg 1.00 ACT.wg 0.03 1.00 SATV.wg -0.08 0.56 1.00 SATQ.wg -0.09 0.59 0.65 1.00 Many results are not shown directly. To see specific objects select from the following list: mean sd n F ICC1 ICC2 ci1 ci2 r r.ci within pooled sd.r raw rbg ci.bg pbg rwg nw ci.wg pwg etabg etawg nWg nwg nG Call> lowerMat(sat.stats$pbg) #get the probability values ag.bg ACT.b SATV. SATQ. 
age.bg 0.00 ACT.bg 0.02 0.00 SATV.bg 0.41 0.01 0.00 SATQ.bg 0.49 0.08 0.03 0.00 > > #show means by groups > round(sat.stats$mean) gender education age ACT SATV SATQ 0-1 1 0 17 29 640 643 0-2 1 1 20 27 603 626 1-1 1 2 25 27 560 569 1-2 1 3 21 29 617 643 2-1 1 4 32 29 620 636 2-2 1 5 36 31 623 658 3-1 2 0 17 26 595 600 3-2 2 1 19 28 597 593 4-1 2 2 30 27 594 586 4-2 2 3 21 28 610 591 5-1 2 4 29 29 615 598 5-2 2 5 34 29 620 607 > > #Do separate factor analyses for each group > sb.bfi <- statsBy(psychTools::bfi[1:10], group=psychTools::bfi$gender,cors=TRUE) > faBy(sb.bfi,1) #one factor per group Call: faBy(stats = sb.bfi, nfactors = 1) Factor analysis by Groups Average standardized loadings (pattern matrix) based upon correlation matrix for all cases as well as each group low and high 0.1 % quantiles Pooled mean sd low high A1 -0.23 -0.23 0.0060 -0.23 -0.22 A2 0.50 0.50 0.0197 0.49 0.51 A3 0.52 0.51 0.0064 0.51 0.52 A4 0.46 0.48 0.0716 0.44 0.52 A5 0.48 0.49 0.0623 0.45 0.52 C1 0.43 0.43 0.0475 0.41 0.46 C2 0.51 0.51 0.0143 0.50 0.52 C3 0.48 0.47 0.0127 0.47 0.48 C4 -0.53 -0.51 0.0625 -0.55 -0.48 C5 -0.52 -0.50 0.0603 -0.53 -0.46 > faBy(sb.bfi,2) #two factors per group Call: faBy(stats = sb.bfi, nfactors = 2) Factor analysis by Groups Average standardized loadings (pattern matrix) based upon correlation matrix for all cases as well as each group low and high 0.1 % quantiles Pooled mean sd low high A1 0.0763 0.07826 0.02392 0.0647 0.0918 A2 0.0101 0.01279 0.01655 0.0034 0.0222 A3 -0.0300 -0.03553 0.04009 -0.0582 -0.0129 A4 0.1471 0.16334 0.08211 0.1169 0.2098 A5 0.0272 0.03672 0.02481 0.0227 0.0508 C1 0.5740 0.57006 0.00196 0.5690 0.5712 C2 0.6435 0.62855 0.02590 0.6139 0.6432 C3 0.5446 0.53674 0.00820 0.5321 0.5414 C4 -0.6648 -0.64851 0.00660 -0.6522 -0.6448 C5 -0.5729 -0.56673 0.03256 -0.5851 -0.5483 A1 -0.3781 -0.37665 0.03976 -0.3991 -0.3542 A2 0.6706 0.65987 0.00051 0.6596 0.6602 A3 0.7849 0.76638 0.04245 0.7424 0.7904 A4 0.4284 0.42860 0.00280 0.4270 0.4302 A5 0.6142 0.60858 0.05974 0.5748 0.6424 C1 -0.0503 -0.03440 0.07011 -0.0741 0.0053 C2 -0.0133 0.00084 0.05349 -0.0294 0.0311 C3 0.0336 0.03819 0.00727 0.0341 0.0423 C4 0.0027 0.01059 0.05822 -0.0223 0.0435 C5 -0.0506 -0.03728 0.07911 -0.0820 0.0075 > > > > > > > cleanEx() > nameEx("structure.diagram") > ### * structure.diagram > > flush(stderr()); flush(stdout()) > > ### Name: structure.diagram > ### Title: Draw a structural equation model specified by two measurement > ### models and a structural model > ### Aliases: structure.diagram structure.graph structure.sem lavaan.diagram > ### sem.diagram sem.graph > ### Keywords: multivariate hplot > > ### ** Examples > > > #A set of measurement and structural models > #First set up the various matrices > fx <- matrix(c(.9,.8,.7,rep(0,9), .6,.7,-.8,rep(0,9),.5,.6,.4),ncol=3) > fy <- matrix(c(.9,.8,.6,rep(0,4),.6,.8,-.7),ncol=2) > Phi <- matrix(c(1,.35,0,0,0, + .35,1,.5,0,0, + 0,.5, 1,0,0, + .7,-.6, 0, 1,0, + .0, 0, .4,0,1 ),ncol=5,byrow=TRUE) > #now draw a number of models > f1 <- structure.diagram(fx,main = "A measurement model for x") > f2 <- structure.diagram(fx,Phi, main = "A measurement model for x") > f3 <- structure.diagram(fy=fy, main = "A measurement model for y") > f4 <- structure.diagram(fx,Phi,fy,main="A structural path diagram") > f5 <- structure.diagram(fx,Phi,fy,main="A structural path diagram",errors=TRUE) > > #a mimic model > fy <- matrix(c(.9,.8,.6,rep(0,4),.6,.8,-.7),ncol=2) > fx <- matrix(c(.6,.5,0,.4),ncol=2) > mimic <- structure.diagram(fx,fy=fy,simple=FALSE,errors=TRUE, 
main="A mimic diagram") > > fy <- matrix(c(rep(.9,8),rep(0,16),rep(.8,8)),ncol=2) > structure.diagram(fx,fy=fy) > > #symbolic input > X2 <- matrix(c("a",0,0,"b","e1",0,0,"e2"),ncol=4) > colnames(X2) <- c("X1","X2","E1","E2") > phi2 <- diag(1,4,4) > phi2[2,1] <- phi2[1,2] <- "r" > f2 <- structure.diagram(X2,Phi=phi2,errors=FALSE,main="A symbolic model") > > #symbolic input with error > X2 <- matrix(c("a",0,0,"b"),ncol=2) > colnames(X2) <- c("X1","X2") > phi2 <- diag(1,2,2) > phi2[2,1] <- phi2[1,2] <- "r" > f3 <- structure.diagram(X2,Phi=phi2,main="an alternative representation",e.size=.4) > > #and yet another one > X6 <- matrix(c("a","b","c",rep(0,6),"d","e","f"),nrow=6) > colnames(X6) <- c("L1","L2") > rownames(X6) <- c("x1","x2","x3","x4","x5","x6") > Y3 <- matrix(c("u","w","z"),ncol=1) > colnames(Y3) <- "Y" > rownames(Y3) <- c("y1","y2","y3") > phi21 <- matrix(c(1,0,"r1",0,1,"r2",0,0,1),ncol=3) > colnames(phi21) <- rownames(phi21) <- c("L1","L2","Y") > f4 <- structure.diagram(X6,phi21,Y3) > > ###the following example is not run but is included to show how to work with lavaan > # and finally, a regression model > X7 <- matrix(c("a","b","c","d","e","f"),nrow=6) > f5 <- structure.diagram(X7,regression=TRUE,main = "Regression model") > > #and a really messy regression model > x8 <- c("b1","b2","b3") > r8 <- matrix(c(1,"r12","r13","r12",1,"r23","r13","r23",1),ncol=3) > f6 <- structure.diagram(x8,Phi=r8,regression=TRUE,main="Regression model") > > > > cleanEx() > nameEx("structure.list") > ### * structure.list > > flush(stderr()); flush(stdout()) > > ### Name: structure.list > ### Title: Create factor model matrices from an input list > ### Aliases: structure.list phi.list > ### Keywords: multivariate models > > ### ** Examples > > fx <- structure.list(9,list(F1=c(1,2,3),F2=c(4,5,6),F3=c(7,8,9))) > fy <- structure.list(3,list(Y=c(1,2,3)),"Y") > phi <- phi.list(4,list(F1=c(4),F2=c(1,4),F3=c(2),F4=c(1,2,3))) > fx F1 F2 F3 [1,] "a1" "0" "0" [2,] "a2" "0" "0" [3,] "a3" "0" "0" [4,] "0" "b4" "0" [5,] "0" "b5" "0" [6,] "0" "b6" "0" [7,] "0" "0" "c7" [8,] "0" "0" "c8" [9,] "0" "0" "c9" > phi F1 F2 F3 F4 F1 "1" "rba" "0" "rda" F2 "0" "1" "rcb" "rdb" F3 "0" "0" "1" "rdc" F4 "rad" "rbd" "0" "1" > fy Y [1,] "Ya1" [2,] "Ya2" [3,] "Ya3" > > > > > > cleanEx() > nameEx("super.matrix") > ### * super.matrix > > flush(stderr()); flush(stdout()) > > ### Name: superMatrix > ### Title: Form a super matrix from two sub matrices.
> ### Aliases: superMatrix superCor super.matrix > ### Keywords: multivariate > > ### ** Examples > > mx <- matrix(c(.9,.8,.7,rep(0,4),.8,.7,.6),ncol=2) > my <- matrix(c(.6,.5,.4)) > > colnames(mx) <- paste("X",1:dim(mx)[2],sep="") > rownames(mx) <- paste("Xv",1:dim(mx)[1],sep="") > colnames(my) <- "Y" > rownames(my) <- paste("Yv",1:3,sep="") > mxy <- superMatrix(mx,my) > #show the use of a list to do this as well > key1 <- make.keys(6,list(first=c(1,-2,3),second=4:6,all=1:6)) #make a scoring key > key2 <- make.keys(4,list(EA=c(1,2),TA=c(3,4))) > superMatrix(list(key1,key2)) first second all EA TA Vx1 1 0 1 0 0 Vx2 -1 0 1 0 0 Vx3 1 0 1 0 0 Vx4 0 1 1 0 0 Vx5 0 1 1 0 0 Vx6 0 1 1 0 0 Vy1 0 0 0 1 0 Vy2 0 0 0 1 0 Vy3 0 0 0 0 1 Vy4 0 0 0 0 1 > > r <- cor(bfi[1:15],use="pairwise") > bfi.scores <- scoreOverlap(bfi.keys[1:2], r,select=FALSE) #note the select = FALSE > R <- superCor(bfi.scores,r) > lowerMat(R) agree cnscn A1 A2 A3 A4 A5 C1 C2 C3 C4 agree 1.00 conscientious 0.25 1.00 A1 -0.39 -0.06 1.00 A2 0.68 0.26 -0.34 1.00 A3 0.72 0.24 -0.27 0.49 1.00 A4 0.49 0.32 -0.15 0.34 0.36 1.00 A5 0.63 0.25 -0.18 0.39 0.50 0.31 1.00 C1 0.13 0.59 0.03 0.09 0.10 0.09 0.12 1.00 C2 0.22 0.67 0.02 0.14 0.14 0.23 0.11 0.43 1.00 C3 0.22 0.59 -0.02 0.19 0.13 0.13 0.13 0.31 0.36 1.00 C4 -0.24 -0.70 0.13 -0.15 -0.12 -0.15 -0.13 -0.34 -0.38 -0.34 1.00 C5 -0.26 -0.63 0.05 -0.12 -0.16 -0.24 -0.17 -0.25 -0.30 -0.34 0.48 E1 -0.32 -0.06 0.11 -0.21 -0.21 -0.11 -0.25 -0.02 0.02 0.00 0.09 E2 -0.40 -0.26 0.09 -0.23 -0.29 -0.19 -0.33 -0.09 -0.06 -0.08 0.20 E3 0.46 0.22 -0.05 0.25 0.39 0.19 0.42 0.12 0.15 0.09 -0.08 E4 0.53 0.25 -0.06 0.28 0.38 0.30 0.47 0.14 0.12 0.09 -0.11 E5 0.36 0.44 -0.02 0.29 0.25 0.16 0.27 0.25 0.25 0.21 -0.24 C5 E1 E2 E3 E4 E5 C5 1.00 E1 0.06 1.00 E2 0.26 0.47 1.00 E3 -0.16 -0.33 -0.38 1.00 E4 -0.20 -0.42 -0.51 0.42 1.00 E5 -0.23 -0.30 -0.37 0.38 0.32 1.00 > #or to just get the scale correlations with the items > R <- superCor(bfi.scores) > round(R,2) agree conscientious agree 1.00 0.25 conscientious 0.25 1.00 A1 -0.39 -0.06 A2 0.68 0.26 A3 0.72 0.24 A4 0.49 0.32 A5 0.63 0.25 C1 0.13 0.59 C2 0.22 0.67 C3 0.22 0.59 C4 -0.24 -0.70 C5 -0.26 -0.63 E1 -0.32 -0.06 E2 -0.40 -0.26 E3 0.46 0.22 E4 0.53 0.25 E5 0.36 0.44 > > > > cleanEx() > nameEx("table2matrix") > ### * table2matrix > > flush(stderr()); flush(stdout()) > > ### Name: table2matrix > ### Title: Convert a table with counts to a matrix or data.frame > ### representing those counts. 
> ### Aliases: table2matrix table2df > ### Keywords: models > > ### ** Examples > > data(cubits) Warning in data(cubits) : data set ‘cubits’ not found > cubit <- table2matrix(psychTools::cubits,labs=c("height","cubit")) > describe(cubit) vars n mean sd median trimmed mad min max range skew kurtosis height 1 348 67.07 2.36 67.00 67.11 2.97 63.0 71.00 8.00 -0.09 -0.92 cubit 2 348 18.10 0.78 18.25 18.11 0.74 16.5 19.75 3.25 -0.09 -0.60 se height 0.13 cubit 0.04 > ellipses(cubit,n=1) > data(bock) > responses <- table2df(bock.table[,2:6],count=bock.table[,7],labs= paste("lsat6.",1:5,sep="")) > describe(responses) vars n mean sd median trimmed mad min max range skew kurtosis lsat6.1 1 1000 0.92 0.27 1 1.00 0 0 1 1 -3.20 8.22 lsat6.2 2 1000 0.71 0.45 1 0.76 0 0 1 1 -0.92 -1.16 lsat6.3 3 1000 0.55 0.50 1 0.57 0 0 1 1 -0.22 -1.95 lsat6.4 4 1000 0.76 0.43 1 0.83 0 0 1 1 -1.24 -0.48 lsat6.5 5 1000 0.87 0.34 1 0.96 0 0 1 1 -2.17 2.72 se lsat6.1 0.01 lsat6.2 0.01 lsat6.3 0.02 lsat6.4 0.01 lsat6.5 0.01 > > > > cleanEx() > nameEx("tal_or") > ### * tal_or > > flush(stderr()); flush(stdout()) > > ### Name: Tal_Or > ### Title: Data set testing causal direction in presumed media influence > ### Aliases: Tal_Or Tal.Or pmi tctg > ### Keywords: datasets > > ### ** Examples > > data(Tal.Or) > mediate(reaction ~ cond + (pmi), data =Tal.Or,n.iter=50) Mediation/Moderation Analysis Call: mediate(y = reaction ~ cond + (pmi), data = Tal.Or, n.iter = 50) The DV (Y) was reaction . The IV (X) was cond . The mediating variable(s) = pmi . Total effect(c) of cond on reaction = 0.5 S.E. = 0.28 t = 1.79 df= 121 with p = 0.077 Direct effect (c') of cond on reaction removing pmi = 0.25 S.E. = 0.26 t = 0.99 df= 120 with p = 0.32 Indirect effect (ab) of cond on reaction through pmi = 0.24 Mean bootstrapped indirect effect = 0.25 with standard error = 0.14 Lower CI = 0.03 Upper CI = 0.5 R = 0.45 R2 = 0.21 F = 15.56 on 2 and 120 DF p-value: 1.31e-08 To see the longer output, specify short = FALSE in the print statement or ask for the summary> > > > > cleanEx() > nameEx("test.irt") > ### * test.irt > > flush(stderr()); flush(stdout()) > > ### Name: test.irt > ### Title: A simple demonstration (and test) of various IRT scoring > ### algorithms.
> ### Aliases: test.irt > ### Keywords: multivariate models > > ### ** Examples > > #not run > #test.irt(9,1000) > > > > cleanEx() > nameEx("test.psych") > ### * test.psych > > flush(stderr()); flush(stdout()) > > ### Name: test.psych > ### Title: Testing of functions in the psych package > ### Aliases: test.psych > ### Keywords: multivariate > > ### ** Examples > > #test <- test.psych() > #not run > #test.psych(all=TRUE) > # f3 <- fa(bfi[1:15],3,n.iter=5) > # f3 <- fa(bfi[1:15],3,n.iter=5,rotate="Varimax") > # f3 <- fa(bfi[1:15],3,n.iter=5,rotate="varimax") > # f3 <- fa(bfi[1:15],3,n.iter=5,rotate="bifactor") > # f3 <- fa(bfi[1:15],3,n.iter=5,rotate="varimin") > # f3 <- fa(bfi[1:15],3,n.iter=5,rotate="bentlerT") > # f3 <- fa(bfi[1:15],3,n.iter=5,rotate="geominT") > # f3 <- fa(bfi[1:15],3,n.iter=5,rotate="equamax") > # f3 <- fa(bfi[1:15],3,n.iter=5,rotate="Promax") > # f3 <- fa(bfi[1:15],3,n.iter=5,rotate="cluster") > # f3 <- fa(bfi[1:15],3,n.iter=5,rotate="biquartimin") > # f3 <- fa(bfi[1:15],3,n.iter=5,rotate="equamax") > # f3 <- fa(bfi[1:15],3,n.iter=5,rotate="Promax") > # > # fpoly <- fa(bfi[1:10],2,n.iter=5,cor="poly") > # f1 <- fa(psychTools::ability,n.iter=4) > # f1p <- fa(psychTools::ability,n.iter=4,cor="tet") > > > > > > cleanEx() > nameEx("testRetest") > ### * testRetest > > flush(stderr()); flush(stdout()) > > ### Name: testRetest > ### Title: Find various test-retest statistics, including test, person and > ### item reliability > ### Aliases: testRetest testReliability > ### Keywords: multivariate models > > ### ** Examples > > > > > cleanEx() > nameEx("tetrachor") > ### * tetrachor > > flush(stderr()); flush(stdout()) > > ### Name: tetrachoric > ### Title: Tetrachoric, polychoric, biserial and polyserial correlations > ### from various types of input > ### Aliases: tetrachoric tetrachor polychoric biserial polydi polyserial > ### poly.mat > ### Keywords: multivariate > > ### ** Examples > > #if(require(mnormt)) { > data(bock) > tetrachoric(lsat6) Call: tetrachoric(x = lsat6) tetrachoric correlation Q1 Q2 Q3 Q4 Q5 Q1 1.00 Q2 0.17 1.00 Q3 0.23 0.19 1.00 Q4 0.11 0.11 0.19 1.00 Q5 0.07 0.17 0.11 0.20 1.00 with tau of Q1 Q2 Q3 Q4 Q5 -1.43 -0.55 -0.13 -0.72 -1.13 > polychoric(lsat6) #values should be the same Call: polychoric(x = lsat6) Polychoric correlations Q1 Q2 Q3 Q4 Q5 Q1 1.00 Q2 0.17 1.00 Q3 0.23 0.19 1.00 Q4 0.11 0.11 0.19 1.00 Q5 0.07 0.17 0.11 0.20 1.00 with tau of 1 Q1 -1.43 Q2 -0.55 Q3 -0.13 Q4 -0.72 Q5 -1.13 > tetrachoric(matrix(c(44268,193,14,0),2,2)) #MPLUS reports.24 For i = 1 j = 1 A cell entry of 0 was replaced with correct = 0.5. Check your data! Call: tetrachoric(x = matrix(c(44268, 193, 14, 0), 2, 2)) tetrachoric correlation [1] 0.23 with tau of [1] 2.6 3.4 > > #Do not apply continuity correction -- compare with previous analysis! > tetrachoric(matrix(c(44268,193,14,0),2,2),correct=0) Warning in optimize(tetraF, interval = c(-1, 1), rc = rc, cc = cc, tab = tab) : NA/Inf replaced by maximum positive value Call: tetrachoric(x = matrix(c(44268, 193, 14, 0), 2, 2), correct = 0) tetrachoric correlation [1] -0.7 with tau of [1] 2.6 3.4 > > #the default is to add correct=.5 to 0 cells > tetrachoric(matrix(c(61661,1610,85,20),2,2)) #Mplus reports .35 Call: tetrachoric(x = matrix(c(61661, 1610, 85, 20), 2, 2)) tetrachoric correlation [1] 0.35 with tau of [1] 1.9 2.9 > tetrachoric(matrix(c(62503,105,768,0),2,2)) #Mplus reports -.10 For i = 1 j = 1 A cell entry of 0 was replaced with correct = 0.5. Check your data! 
Call: tetrachoric(x = matrix(c(62503, 105, 768, 0), 2, 2)) tetrachoric correlation [1] -0.1 with tau of [1] 2.9 2.3 > tetrachoric(matrix(c(24875,265,47,0),2,2)) #Mplus reports 0 For i = 1 j = 1 A cell entry of 0 was replaced with correct = 0.5. Check your data! Call: tetrachoric(x = matrix(c(24875, 265, 47, 0), 2, 2)) tetrachoric correlation [1] -0.00016 with tau of [1] 2.3 2.9 > > polychoric(matrix(c(61661,1610,85,20),2,2)) #Mplus reports .35 [1] "You seem to have a table, I will return just one correlation." $rho [1] 0.3480731 $objective [1] 0.131411 $tau.row [1] 1.947799 $tau.col [1] 2.937045 > polychoric(matrix(c(62503,105,768,0),2,2)) #Mplus reports -.10 [1] "You seem to have a table, I will return just one correlation." $rho [1] -0.101916 $objective [1] 0.0778664 $tau.row [1] 2.937045 $tau.col [1] 2.253362 > polychoric(matrix(c(24875,265,47,0),2,2)) #Mplus reports 0 [1] "You seem to have a table, I will return just one correlation." $rho [1] -0.000161322 $objective [1] 0.07218902 $tau.row [1] 2.307219 $tau.col [1] 2.899962 > > #Do not apply continuity correction -- compare with previous analysis > tetrachoric(matrix(c(24875,265,47,0),2,2), correct=0) Warning in optimize(tetraF, interval = c(-1, 1), rc = rc, cc = cc, tab = tab) : NA/Inf replaced by maximum positive value Warning in optimize(tetraF, interval = c(-1, 1), rc = rc, cc = cc, tab = tab) : NA/Inf replaced by maximum positive value Warning in optimize(tetraF, interval = c(-1, 1), rc = rc, cc = cc, tab = tab) : NA/Inf replaced by maximum positive value Call: tetrachoric(x = matrix(c(24875, 265, 47, 0), 2, 2), correct = 0) tetrachoric correlation [1] -0.78 with tau of [1] 2.3 2.9 > polychoric(matrix(c(24875,265,47,0),2,2), correct=0) #the same result [1] "You seem to have a table, I will return just one correlation." $rho [1] -0.8263137 $objective [1] 0.07195411 $tau.row [1] 2.307219 $tau.col [1] 2.899962 > > > #examples from Kirk 1973 > #note that Kirk's tables have joint probability followed by marginals, but > #tetrachoric needs marginals followed by joint probability > > tetrachoric(c(.5,.5,.333333)) #should be .5 Call: tetrachoric(x = c(0.5, 0.5, 0.333333)) tetrachoric correlation [1] 0.5 with tau of [1] 0 0 > tetrachoric(c(.5,.5,.1150267)) #should be -.75 Call: tetrachoric(x = c(0.5, 0.5, 0.1150267)) tetrachoric correlation [1] -0.75 with tau of [1] 0 0 > tetrachoric(c(.5,.5,.397584)) #should be .8 Call: tetrachoric(x = c(0.5, 0.5, 0.397584)) tetrachoric correlation [1] 0.8 with tau of [1] 0 0 > tetrachoric(c(.158655254,.158655254,.145003)) #should be .99 Call: tetrachoric(x = c(0.158655254, 0.158655254, 0.145003)) tetrachoric correlation [1] 0.99 with tau of [1] -1 -1 > > > #the example from Olsson, 1979 > x <- as.table(matrix(c(13,69,41,6,113,132,0,22,104),3,3)) > polychoric(x,correct=FALSE) [1] "You seem to have a table, I will return just one correlation."
$rho [1] 0.4913712 $objective [1] 1.781665 $tau.row A B -1.7743819 -0.1357739 $tau.col A B -0.6871313 0.6682093 > #Olsson reports rho = .49, tau row = -1.77, -.14 and tau col = -.69, .67 > > #give a vector of two marginals and the comorbidity > tetrachoric(c(.2, .15, .1)) Call: tetrachoric(x = c(0.2, 0.15, 0.1)) tetrachoric correlation [1] 0.75 with tau of [1] -1.04 -0.84 > tetrachoric(c(.2, .1001, .1)) Call: tetrachoric(x = c(0.2, 0.1001, 0.1)) tetrachoric correlation [1] 0.98 with tau of [1] -1.28 -0.84 > #} else { > # message("Sorry, you must have mnormt installed")} > > # 4 plots comparing biserial to point biserial and latent Pearson correlation > set.seed(42) > x.4 <- sim.congeneric(loads =c(.9,.6,.3,0),N=1000,short=FALSE) > y <- x.4$latent[,1] > for(i in 1:4) { + x <- x.4$observed[,i] + r <- round(cor(x,y),1) + ylow <- y[x<= 0] + yhigh <- y[x > 0] + yc <- c(ylow,yhigh) + rpb <- round(cor((x>=0),y),2) + rbis <- round(biserial(y,(x>=0)),2) + ellipses(x,y,ylim=c(-3,3),xlim=c(-4,3),pch=21 - (x>0), + main =paste("r = ",r,"rpb = ",rpb,"rbis =",rbis)) + + dlow <- density(ylow) + dhigh <- density(yhigh) + points(dlow$y*5-4,dlow$x,typ="l",lty="dashed") + lines(dhigh$y*5-4,dhigh$x,typ="l") + } > > #show non-symmetric results > test1 <- tetrachoric(psychTools::ability[,1:4],psychTools::ability[,5:10]) > test2 <- polychoric(psychTools::ability[,1:4],psychTools::ability[,5:10]) > all <- tetrachoric(psychTools::ability[,1:10]) > > > > > > cleanEx() > nameEx("thurstone") > ### * thurstone > > flush(stderr()); flush(stdout()) > > ### Name: thurstone > ### Title: Thurstone Case V scaling > ### Aliases: thurstone > ### Keywords: models > > ### ** Examples > > data(psychTools::vegetables) Warning in data(psychTools::vegetables) : data set ‘psychTools::vegetables’ not found > thurstone(psychTools::veg) Thurstonian scale (case 5) scale values Call: thurstone(x = psychTools::veg) Turn Cab Beet Asp Car Spin S.Beans Peas Corn 0.00 0.52 0.65 0.98 1.12 1.14 1.40 1.44 1.63 Goodness of fit of model 0.99> #But consider the case of 100 random orders > set.seed(42) > ranks <- matrix(NA,nrow=100,ncol=5) > for(i in 1:100) ranks[i,] <- sample(5,5) > t <- thurstone(ranks,TRUE) > t #show the fits Thurstonian scale (case 5) scale values Call: thurstone(x = ranks, ranks = TRUE) [1] 0.17 0.13 0.17 0.06 0.00 Goodness of fit of model 1> t$choice #show the choice matrix > > > > > > cleanEx() > nameEx("tr") > ### * tr > > flush(stderr()); flush(stdout()) > > ### Name: tr > ### Title: Find the trace of a square matrix > ### Aliases: tr > ### Keywords: multivariate > > ### ** Examples > > m <- matrix(1:16,ncol=4) > m [,1] [,2] [,3] [,4] [1,] 1 5 9 13 [2,] 2 6 10 14 [3,] 3 7 11 15 [4,] 4 8 12 16 > tr(m) [1] 34 > > > > > cleanEx() > nameEx("unidim") > ### * unidim > > flush(stderr()); flush(stdout()) > > ### Name: unidim > ### Title: Several indices of the unidimensionality of a set of variables. > ### Aliases: unidim > ### Keywords: models multivariate > > ### ** Examples > > #test the unidimensionality of the five factors of the bfi data set.
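> #A rough sketch (not run) of the logic behind the fa.fit column reported below:
> #fit a single factor and ask how well it reproduces the off-diagonal correlations.
> #fit.off is the element of the fa output object holding that statistic; unidim
> #combines several such indices, so this mirrors the idea, not the exact computation.
> #f1 <- fa(psychTools::bfi[1:5]) #one factor for the five Agreeableness items
> #f1$fit.off #how well one factor reproduces the off-diagonal correlations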
> > > unidim(psychTools::bfi,psychTools::bfi.keys) A measure of unidimensionality Call: unidim(x = psychTools::bfi, keys.list = psychTools::bfi.keys) Unidimensionality index = u av.r fit fa.fit alpha av.r median.r Unidim.A agree 0.89 0.90 0.99 0.71 0.33 0.34 1 conscientious 0.95 0.97 0.98 0.73 0.35 0.34 1 extraversion 0.97 0.97 0.99 0.76 0.39 0.38 1 neuroticism 0.93 0.95 0.98 0.81 0.47 0.41 1 openness 0.85 0.88 0.97 0.61 0.24 0.23 1 unidim adjusted index reverses negatively scored items. alpha Based upon reverse scoring some items. average and median correlations are based upon reversed scored items> unidim(psychTools::ability,psychTools::ability.keys) A measure of unidimensionality Call: unidim(x = psychTools::ability, keys.list = psychTools::ability.keys) Unidimensionality index = u av.r fit fa.fit alpha av.r median.r Unidim.A ICAR16 0.84 0.91 0.93 0.83 0.23 0.21 1 reasoning 0.98 0.98 1.00 0.65 0.31 0.31 1 letters 0.99 0.99 1.00 0.67 0.34 0.33 1 matrix 0.92 0.94 0.98 0.54 0.23 0.22 1 rotate 0.99 0.99 1.00 0.77 0.45 0.44 1 unidim adjusted index reverses negatively scored items. alpha Based upon reverse scoring some items. average and median correlations are based upon reversed scored items> #Try a known 3 factor structure > x <- sim.minor(nfact=3,bipolar=FALSE) #this makes all the items positive > unidim(x$model) A measure of unidimensionality Call: unidim(x = x$model) Unidimensionality index = u av.r fit fa.fit alpha av.r median.r Unidim.A 0.04 0.11 0.37 0.58 0.10 0.04 0.80 unidim adjusted index reverses negatively scored items. alpha Based upon reverse scoring some items. average and median correlations are based upon reversed scored items> keys.list <- list(first =c(1:4),second = 5:8,third=9:12,all=1:12) > unidim(x$model,keys.list) A measure of unidimensionality Call: unidim(x = x$model, keys.list = keys.list) Unidimensionality index = u av.r fit fa.fit alpha av.r median.r Unidim.A first 0.96 0.96 1.00 0.86 0.61 0.60 1.0 second 0.98 0.98 1.00 0.84 0.57 0.56 1.0 third 0.98 0.98 1.00 0.85 0.58 0.56 1.0 all 0.04 0.11 0.37 0.58 0.10 0.04 0.8 unidim adjusted index reverses negatively scored items. alpha Based upon reverse scoring some items. average and median correlations are based upon reversed scored items> > x <- sim.minor(nfact=3) > unidim(x$model,keys.list) #we flip the negative items A measure of unidimensionality Call: unidim(x = x$model, keys.list = keys.list) Unidimensionality index = u av.r fit fa.fit alpha av.r median.r Unidim.A first 0.98 0.98 1.00 0.79 0.49 0.48 1.00 second 0.97 0.98 1.00 0.80 0.50 0.52 1.00 third 0.96 0.97 1.00 0.79 0.48 0.48 1.00 all 0.01 0.04 0.34 0.39 0.05 0.00 1.06 unidim adjusted index reverses negatively scored items. alpha Based upon reverse scoring some items. average and median correlations are based upon reversed scored items> > #what about a hierarchical model? > H <- sim.hierarchical() # by default, a nice hierarchical model > H.keys <- list(First = paste0("V",1:3),Second=paste0("V",4:6),Third=paste0("V",7:9), + All = paste0("V",1:9)) > unidim(H,H.keys) A measure of unidimensionality Call: unidim(x = H, keys.list = H.keys) Unidimensionality index = u av.r fit fa.fit alpha av.r median.r Unidim.A First 0.99 0.99 1.00 0.74 0.49 0.48 1 Second 0.98 0.98 1.00 0.62 0.36 0.35 1 Third 0.97 0.97 1.00 0.50 0.25 0.24 1 All 0.84 0.87 0.97 0.76 0.26 0.25 1 unidim adjusted index reverses negatively scored items. alpha Based upon reverse scoring some items. 
average and median correlations are based upon reversed scored items> > > > > > > cleanEx() > nameEx("winsor") > ### * winsor > > flush(stderr()); flush(stdout()) > > ### Name: winsor > ### Title: Find the Winsorized scores, means, sds or variances for a > ### vector, matrix, or data.frame > ### Aliases: winsor winsor.mean winsor.means winsor.sd winsor.var > ### Keywords: univar > > ### ** Examples > > data(sat.act) > winsor.means(sat.act) #compare with the means of the winsorized scores gender education age ACT SATV SATQ 1.647143 3.391429 23.954286 28.957143 615.570000 614.521106 > y <- winsor(sat.act) > describe(y) vars n mean sd median trimmed mad min max range skew gender 1 700 1.65 0.48 2 1.68 0.00 1.0 2 1.0 -0.61 education 2 700 3.39 1.03 3 3.36 1.48 2.0 5 3.0 0.27 age 3 700 23.95 5.11 22 23.57 4.45 19.0 32 13.0 0.56 ACT 4 700 28.96 3.18 29 28.97 4.45 24.8 33 8.2 -0.06 SATV 5 700 615.57 72.79 620 618.21 118.61 510.0 700 190.0 -0.24 SATQ 6 687 614.52 80.88 620 616.87 118.61 500.0 710 210.0 -0.24 kurtosis se gender -1.62 0.02 education -1.07 0.04 age -1.30 0.19 ACT -1.56 0.12 SATV -1.43 2.75 SATQ -1.47 3.09 > xy <- data.frame(sat.act,y) > #pairs.panels(xy) #to see the effect of winsorizing > x <- matrix(1:100,ncol=5) > winsor(x) [,1] [,2] [,3] [,4] [,5] [1,] 4.8 24.8 44.8 64.8 84.8 [2,] 4.8 24.8 44.8 64.8 84.8 [3,] 4.8 24.8 44.8 64.8 84.8 [4,] 4.8 24.8 44.8 64.8 84.8 [5,] 5.0 25.0 45.0 65.0 85.0 [6,] 6.0 26.0 46.0 66.0 86.0 [7,] 7.0 27.0 47.0 67.0 87.0 [8,] 8.0 28.0 48.0 68.0 88.0 [9,] 9.0 29.0 49.0 69.0 89.0 [10,] 10.0 30.0 50.0 70.0 90.0 [11,] 11.0 31.0 51.0 71.0 91.0 [12,] 12.0 32.0 52.0 72.0 92.0 [13,] 13.0 33.0 53.0 73.0 93.0 [14,] 14.0 34.0 54.0 74.0 94.0 [15,] 15.0 35.0 55.0 75.0 95.0 [16,] 16.0 36.0 56.0 76.0 96.0 [17,] 16.2 36.2 56.2 76.2 96.2 [18,] 16.2 36.2 56.2 76.2 96.2 [19,] 16.2 36.2 56.2 76.2 96.2 [20,] 16.2 36.2 56.2 76.2 96.2 > winsor.means(x) [1] 10.5 30.5 50.5 70.5 90.5 > y <- 1:11 > winsor(y,trim=.5) [1] 6 6 6 6 6 6 6 6 6 6 6 > > > > cleanEx() > nameEx("withinBetween") > ### * withinBetween > > flush(stderr()); flush(stdout()) > > ### Name: withinBetween > ### Title: An example of the distinction between within group and between > ### group correlations > ### Aliases: withinBetween > ### Keywords: datasets > > ### ** Examples > > data(withinBetween) > pairs.panels(withinBetween,bg=c("red","blue","white","black")[withinBetween[,1]], + pch=21,ellipses=FALSE,lm=TRUE) > stats <- statsBy(withinBetween,'Group') > print(stats,short=FALSE) Statistics within and between groups Call: statsBy(data = withinBetween, group = "Group") Intraclass Correlation 1 (Percentage of variance due to groups) Group V1 V2 V3 V4 V5 V6 V7 V8 V9 1.00 0.43 0.43 0.43 0.43 0.43 0.43 0.43 0.43 0.43 Intraclass Correlation 2 (Reliability of group differences) Group V1 V2 V3 V4 V5 V6 V7 V8 V9 1.00 0.75 0.75 0.75 0.75 0.75 0.75 0.75 0.75 0.75 eta^2 between groups V1.bg V2.bg V3.bg V4.bg V5.bg V6.bg V7.bg V8.bg V9.bg 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 Correlation between groups V1.bg V2.bg V3.bg V4.bg V5.bg V6.bg V7.bg V8.bg V9.bg V1.bg 1 V2.bg 1 1 V3.bg 1 1 1 V4.bg 0 0 0 1 V5.bg 0 0 0 1 1 V6.bg 0 0 0 1 1 1 V7.bg -1 -1 -1 0 0 0 1 V8.bg -1 -1 -1 0 0 0 1 1 V9.bg -1 -1 -1 0 0 0 1 1 1 Correlation within groups V1.wg V2.wg V3.wg V4.wg V5.wg V6.wg V7.wg V8.wg V9.wg V1.wg 1 V2.wg 0 1 V3.wg -1 0 1 V4.wg 1 0 -1 1 V5.wg 0 1 0 0 1 V6.wg -1 0 1 -1 0 1 V7.wg 1 0 -1 1 0 -1 1 V8.wg 0 1 0 0 1 0 0 1 V9.wg -1 0 1 -1 0 1 -1 0 1 Many results are not shown directly. 
To see specific objects select from the following list: mean sd n F ICC1 ICC2 ci1 ci2 raw rbg ci.bg pbg rwg nw ci.wg pwg etabg etawg nwg nG Call> > > > ### *