Overview

Here I show a number of functions that are useful for organizing and analyzing data.

These include standard R functions (dim, colnames, names, table, subset, list, summary, cor) and some specialized functions from the psych package (testRetest, alpha, scoreItems, scoreOverlap, cor2, corr.test, splitHalf, omega, omegaSem).

I also show how to label the code chunks so you can see your progress when knitting an R Markdown document.

From Revelle and Condon (2019) Reliability: from Alpha to Omega

Here are the code snippets from Revelle and Condon (2019), as discussed in the appendix. For more details on reliability, please read those two articles.

Preliminaries

#install.packages("psych",dependencies = TRUE) #Just need to do this once 
#install.packages("psychTools") #Just do this once as well
library(psych) #make the psych package active -- need to do this every time you start R 
library(psychTools) #if you want to use the example data sets and some convenient tools
sessionInfo() #to see what version you are using
## R version 4.4.0 beta (2024-04-12 r86412)
## Platform: aarch64-apple-darwin20
## Running under: macOS Sonoma 14.4.1
## 
## Matrix products: default
## BLAS:   /Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/lib/libRblas.0.dylib 
## LAPACK: /Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/lib/libRlapack.dylib;  LAPACK version 3.12.0
## 
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
## 
## time zone: America/Chicago
## tzcode source: internal
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
## [1] psychTools_2.4.4 psych_2.4.4     
## 
## loaded via a namespace (and not attached):
##  [1] nlme_3.1-164      cli_3.6.1         knitr_1.43        rlang_1.1.1       xfun_0.39        
##  [6] jsonlite_1.8.5    rtf_0.4-14.1      htmltools_0.5.8.1 sass_0.4.9        rmarkdown_2.26   
## [11] grid_4.4.0        evaluate_0.21     jquerylib_0.1.4   fastmap_1.1.1     yaml_2.3.7       
## [16] lifecycle_1.0.3   compiler_4.4.0    rstudioapi_0.16.0 R.oo_1.26.0       lattice_0.22-6   
## [21] digest_0.6.31     R6_2.5.1          foreign_0.8-86    mnormt_2.1.1      parallel_4.4.0   
## [26] bslib_0.7.0       R.methodsS3_1.8.2 tools_4.4.0       cachem_1.0.8

Getting Help

This just allows you to see what to do. When in doubt about any function, always ask for help with ? (the help operator).

#Getting help
help("psych") #opens a help window overview of the package
help("psychTools") #opens a help window listing the various data sets in psychTools
vignette(topic="intro",package="psychTools") #opens an extensive pdf document 
vignette(topic="overview",package="psychTools") #opens the second part of this vignette 
?omega #opens the specific help page for e.g., the omega function

Other vignettes are cross-linked from those (look at the “this and related documents” section of each vignette).

Reading in the data

(This is for new data; here we will use data already available in psychTools, so we do not need to execute any of this code.)

# my.data <- read.file() #opens an OS dependent search window and reads data according to the suffix 
 #or first copy your data to the clipboard and then
#my.data <- read.clipboard() #assumes that header information is on the first line
#my.data <- read.clipboard(header=FALSE) #no header information: the data start on the first line
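If your data live in a text file rather than on the clipboard, base R's read.csv (or read.table) does the same job as read.file. A minimal self-contained sketch — the file here is a temporary one created just for illustration; with your own data you would supply the real path:

```r
# Create a tiny temporary CSV so this example runs anywhere
tmp <- tempfile(fileext = ".csv")
writeLines(c("id,calm,tense", "1,3,2", "2,4,1"), tmp)

my.data <- read.csv(tmp)   # header = TRUE is the default
str(my.data)               # check variable names and types
dim(my.data)               # 2 rows (subjects), 3 columns (variables)
```

If your file has no header row, add header = FALSE, just as with read.clipboard.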

Use the sai and msqR data sets in psychTools

dim(sai)   #how many rows (subjects) and columns (variables) in the built in data set sai
## [1] 5378   23
dim(msqR) #how many rows (subjects) and columns (variables) in the msqR data set
## [1] 6411   88
headTail(sai) # show the first and last 3 lines of the sai data set
##      study time  id calm secure tense regretful at.ease upset worrying rested anxious comfortable
## 1     AGES    1   1    3      3     2         1       2     1        1      3       2           2
## 2     AGES    1   2    3      3     2         2       3     2        1      1       2           3
## 3     AGES    1   3    3      3     2         1       3     1        2      1       1           3
## 4     AGES    1   4    3      3     1         1       3     1        1      2       1           3
## ...   <NA>  ... ...  ...    ...   ...       ...     ...   ...      ...    ...     ...         ...
## 5375  XRAY    2 197    2      2     3      <NA>    <NA>  <NA>     <NA>   <NA>    <NA>        <NA>
## 5376  XRAY    2 198    4      4     1         1       4     1        1      1       1           4
## 5377  XRAY    2 199    4      4     1         1       4     1        1      1       1           4
## 5378  XRAY    2 200    3      3     2         2       3     2        2      2       2           4
##      confident nervous jittery high.strung relaxed content worried rattled joyful pleasant
## 1            3       2       2           2       2       3       1       1      3        3
## 2            2       2       1           1       2       3       2       1      1        2
## 3            3       2       2           1       2       3       1       1      3        3
## 4            4       1       2           1       3       4       1       1      2        3
## ...        ...     ...     ...         ...     ...     ...     ...     ...    ...      ...
## 5375      <NA>    <NA>    <NA>        <NA>    <NA>    <NA>    <NA>    <NA>   <NA>     <NA>
## 5376         3       1       1           1       4       3       2       1      3        4
## 5377         4       1       1           1       3       4       1       1      3        4
## 5378         2       2       2           2       2       3       4       1      3        3
colnames(msqR) #what are the variables in the msqR data set
##  [1] "active"       "afraid"       "alert"        "angry"        "aroused"      "ashamed"     
##  [7] "astonished"   "at.ease"      "at.rest"      "attentive"    "blue"         "bored"       
## [13] "calm"         "clutched.up"  "confident"    "content"      "delighted"    "depressed"   
## [19] "determined"   "distressed"   "drowsy"       "dull"         "elated"       "energetic"   
## [25] "enthusiastic" "excited"      "fearful"      "frustrated"   "full.of.pep"  "gloomy"      
## [31] "grouchy"      "guilty"       "happy"        "hostile"      "inspired"     "intense"     
## [37] "interested"   "irritable"    "jittery"      "lively"       "lonely"       "nervous"     
## [43] "placid"       "pleased"      "proud"        "quiescent"    "quiet"        "relaxed"     
## [49] "sad"          "satisfied"    "scared"       "serene"       "sleepy"       "sluggish"    
## [55] "sociable"     "sorry"        "still"        "strong"       "surprised"    "tense"       
## [61] "tired"        "unhappy"      "upset"        "vigorous"     "wakeful"      "warmhearted" 
## [67] "wide.awake"   "anxious"      "cheerful"     "idle"         "inactive"     "tranquil"    
## [73] "alone"        "kindly"       "scornful"     "Extraversion" "Neuroticism"  "Lie"         
## [79] "Sociability"  "Impulsivity"  "gender"       "TOD"          "drug"         "film"        
## [85] "time"         "id"           "form"         "study"

The msqR data set has lots of data; we choose just some of it

Because the entire data set includes 6,411 rows for 3,032 unique subjects (some studies included multiple administrations), we will select just the subjects from studies that meet particular criteria: for short-term dependability, the studies where the SAI and MSQ were given twice in the same session (time = 1 and 2); for longer-term stability (over 1-2 days), the studies where the SAI and MSQ were given on different days (time = 1 and 3). We use the subset function to choose just those subjects who meet certain conditions (e.g., the first occasion data), with “==” representing equality.

#?msqR #ask for information about the sai data set
table(msqR$study,msqR$time) #show the study names and sample sizes
##           
##              1   2   3   4
##   AGES      68  68   0   0
##   Cart      63  63   0   0
##   CITY     157 157   0   0
##   EMIT      71  71   0   0
##   Fast      94  94   0   0
##   FIAT      70  70   0   0
##   FILM      95  95  95   0
##   FLAT     170 170 170   0
##   GRAY     107 107   0   0
##   HOME      67  67   0   0
##   IMPS     102 102   0   0
##   ITEM      49  49   0   0
##   Maps     160 160   0   0
##   MITE      49  49   0   0
##   MIXX      71  71   0   0
##   PAT       65  65  65  65
##   PATS     132   0   0   0
##   RAFT      40  40   0   0
##   RIM      342   0 342   0
##   ROB       51  51  46  46
##   SALT     104 104   0   0
##   SAM      324   0 324   0
##   SHED      58  58   0   0
##   SHOP      98  98   0   0
##   SWAM.one  94   0   0   0
##   SWAM.two  54   0   0   0
##   VALE      77  77  70  70
##   XRAY     200 200   0   0
#Now, select some subsets for analysis using the subset function.
#the short term consistency sets
sai.control <- subset(sai,is.element(sai$study,c("Cart", "Fast", "SHED", "SHOP")) )
#pre and post drug studies
sai.drug <- subset(sai,is.element(sai$study, c("AGES","SALT","VALE","XRAY")))
#pre and post film studies
sai.film <- subset(sai,is.element(sai$study, c("FIAT","FLAT", "XRAY") ))
msq.control <- subset(msqR,is.element(msqR$study,c("Cart", "Fast", "SHED", "SHOP")) )
#pre and post drug studies
msq.drug <- subset(msqR,is.element(msqR$study, c("AGES","CITY","EMIT","SALT","VALE","XRAY"))) #pre and post film studies
msq.film <- subset(msqR,is.element(msqR$study, c("FIAT","FLAT", "MAPS", "MIXX","XRAY") )) 
msq.films4 <- subset(msqR, is.element(msqR$study, c("FLAT", "MAPS", "XRAY") ))
msq1 <- subset(msqR,msqR$time == 1) #just the first day measures
sai1 <- subset(sai,sai$time==1) #just the first set of observations for the SAI
sam.rim <- subset(sai,(sai$study %in% c("SAM" ,"RIM")))#choose SAM and RIM for 2 day test retest 
vale <- subset(sai,sai$study=="VALE") #choose the VALE study for multilevel analysis

Using dim, colnames, and table to explore data sets

Two basic R commands (dim and table) allow us to see the size (dimensions) of these data sets, and then to count the cases meeting certain conditions.

dim(msq.control) #how many subjects and how many items?
## [1] 626  88
dim(sai.control) #show the number of subjects and items for the second subset 
## [1] 626  23
table(sam.rim$time) #how many were in each time point
## 
##   1   3 
## 666 666
table(vale$time) #how many were repeated twice on day 1 and then on day 2
## 
##  1  2  3  4 
## 77 77 70 70
table(msq.control$study,msq.control$time)
##       
##         1  2
##   Cart 63 63
##   Fast 94 94
##   SHED 58 58
##   SHOP 98 98

Analysis on particular sets of data

We want to do analyses on the items that overlap between the sai and msqR data sets. The items, although all thought to measure anxiety, reflect two subdimensions: positive and negative affect/tension. We can score them for positive affect, negative affect, and total anxiety on one subset of sai items. Of the 20 items, 10 overlap with the msqR items, and we can use the other 10 as a logical alternate form. We indicate reverse keyed items by a negative sign. We specify the scoring keys for all seven of these scales. Note that these keys include overlapping items, which will artificially inflate the correlations of these scales. We form a list of the seven different keys, where each key is given a name and is formed by concatenating (using the c command) the separate elements of the key.

#create keying information for several analyses
sai.alternate.forms <- list(
sai=c( "anxious", "jittery", "nervous" ,"tense", "upset", "-at.ease" , "-calm" , "-confident", "-content","-relaxed", "regretful", "worrying", "high.strung",
"worried", "rattled", "-secure", "-rested", "-comfortable", "-joyful", "-pleasant"),
anx1 = c("anxious", "jittery", "nervous" ,"tense", "upset","-at.ease" , "-calm" ,"-confident", "-content","-relaxed"),
pos1 =c( "at.ease","calm","confident","content","relaxed"),
neg1 = c("anxious", "jittery", "nervous" ,"tense" , "upset"),
anx2 = c("regretful","worrying", "high.strung","worried", "rattled", "-secure",
"-rested", "-comfortable", "-joyful", "-pleasant" ),
pos2=c( "secure","rested","comfortable" ,"joyful" , "pleasant" ),
neg2=c("regretful","worrying", "high.strung","worried", "rattled" )
)
anx.keys <- sai.alternate.forms$anx1 #the overlapping keys for scoring sai and msq 
select <- selectFromKeys(anx.keys) #to be used later in alpha and multilevel.reliability
select  #show the items
##  [1] "anxious"   "jittery"   "nervous"   "tense"     "upset"     "at.ease"   "calm"      "confident"
##  [9] "content"   "relaxed"
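Conceptually, selectFromKeys just returns the item names with the reversal signs stripped off. A base-R sketch of the idea (not the psych implementation itself):

```r
anx1 <- c("anxious", "jittery", "nervous", "tense", "upset",
          "-at.ease", "-calm", "-confident", "-content", "-relaxed")

# Remove the leading "-" that marks reverse-keyed items, keeping the order
selected <- sub("^-", "", anx1)
selected
```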

Three measures of test-retest reliability using testRetest function

In some studies, the STAI and MSQ were given before and after a control, drug, or film manipulation. This allows us to test the short term dependability of these measures. Here we show the commands to do this, but give the full output for just one set. To run the testRetest function, the data need to be in one of two forms: one object in which the subjects are identified with an identification number and the time (1 or 2) of testing is specified, or two data objects with an equal number of rows. Here we show the first way of finding test-retest measures.

As is true of most R functions, testRetest returns many different objects (results). The print function is called automatically just by specifying the name of the resulting object; print gives the output that the author of the function thinks is most useful, in a somewhat neat manner. Some functions also have an associated summary function that gives even less output. To see all of the output, you need to inspect the various objects returned by the function.
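This pattern holds throughout R: a fitted object is a list of named components, print and summary show a digest, and names (or str) reveals everything available. A quick base-R illustration with lm and the built-in mtcars data:

```r
fit <- lm(mpg ~ wt, data = mtcars)  # any model-fitting function returns an object
names(fit)                          # the components you can inspect directly
fit$coefficients                    # pull out one piece by name
summary(fit)                        # a condensed, formatted digest
```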

sai.test.retest.control     <-     testRetest(sai.control, keys=anx.keys)
sai.test.retest.drug        <-     testRetest(sai.drug, keys=anx.keys)
## Warning in cor(tx[, i], ty[, i], use = "pairwise"): the standard deviation is zero
sai.test.retest.film        <-     testRetest(sai.film, keys=anx.keys)
## Warning in cor(tx[, i], ty[, i], use = "pairwise"): the standard deviation is zero
msq.test.retest.control     <-     testRetest(msq.control, keys=anx.keys)
msq.test.retest.drug        <-     testRetest(msq.drug, keys=anx.keys)
## Warning in cor(tx[, i], ty[, i], use = "pairwise"): the standard deviation is zero

## Warning in cor(tx[, i], ty[, i], use = "pairwise"): the standard deviation is zero
msq.test.retest.film        <-     testRetest(msq.film, keys=anx.keys)

sai.test.retest.control        #show the complete output 
## 
## Test Retest reliability 
## Call: testRetest(t1 = sai.control, keys = anx.keys)
## 
## Number of subjects =  313  Number of items =  10
##  Correlation of scale scores over time 0.76
##  Alpha reliability statistics for time 1 and time 2 
##        raw G3 std G3   G6 av.r  S/N   se lower upper var.r
## Time 1   0.86   0.86 0.88 0.38 6.24 0.04  0.76  0.93  0.03
## Time 2   0.87   0.87 0.89 0.40 6.72 0.04  0.79  0.94  0.02
## 
##  Mean between person, across item reliability =  0.6
##  Mean within person, across item reliability =  0.67
## with standard deviation of  0.27 
## 
##  Mean within person, across item d2 =  0.54
## R1F  =  0.92 Reliability of average of all items for one  time (Random time effects)
## RkF  =  0.96 Reliability of average of all items and both times (Fixed time effects)
## R1R  =  0.73 Generalizability of a single time point across all items (Random time effects)
## Rc   =  0.72 Generalizability of change (fixed time points, fixed items) 
## Multilevel components of variance
##              variance Percent
## ID               0.21    0.22
## Time             0.01    0.01
## Items            0.28    0.29
## ID x time        0.05    0.05
## ID x items       0.19    0.20
## time x items     0.01    0.01
## Residual         0.20    0.21
## Total            0.95    1.00
## 
##  To see the item.stats, print with short=FALSE. 
## To see the subject reliabilities and differences, examine the 'scores' object.
summary( sai.test.retest.drug) #give just a brief summary of the output
## Call: testRetest(t1 = sai.drug, keys = anx.keys)
## Test-retest correlations and reliabilities
## Test retest correlation =  0.73
##  Alpha reliabilities for both time points 
##   raw G3 std G3   G6 av.r  S/N   se lower upper var.r
## x   0.88   0.88 0.90 0.43 7.49 0.03  0.81  0.93  0.03
## y   0.89   0.89 0.91 0.45 8.05 0.03  0.81  0.91  0.03
## 
## 
summary( sai.test.retest.film)  #many functions can be summarized
## Call: testRetest(t1 = sai.film, keys = anx.keys)
## Test-retest correlations and reliabilities
## Test retest correlation =  0.57
##  Alpha reliabilities for both time points 
##   raw G3 std G3   G6 av.r  S/N   se lower upper var.r
## x   0.88   0.88 0.91 0.43 7.66 0.03  0.81  0.93  0.02
## y   0.88   0.88 0.91 0.42 7.25 0.03  0.78  0.91  0.03
## 
## 
summary(msq.test.retest.control)
## Call: testRetest(t1 = msq.control, keys = anx.keys)
## Test-retest correlations and reliabilities
## Test retest correlation =  0.74
##  Alpha reliabilities for both time points 
##   raw G3 std G3   G6 av.r  S/N   se lower upper var.r
## x    0.8    0.8 0.84 0.29 4.10 0.09  0.62  0.95  0.03
## y    0.8    0.8 0.84 0.28 3.92 0.09  0.59  0.93  0.04
## 
## 
summary( msq.test.retest.drug)
## Call: testRetest(t1 = msq.drug, keys = anx.keys)
## Test-retest correlations and reliabilities
## Test retest correlation =  0.2
##  Alpha reliabilities for both time points 
##   raw G3 std G3   G6 av.r  S/N   se lower upper var.r
## x   0.83   0.83 0.87 0.33 5.03 0.06  0.69  0.94  0.03
## y   0.84   0.84 0.88 0.35 5.44 0.05  0.69  0.92  0.04
## 
## 
summary(msq.test.retest.film )
## Call: testRetest(t1 = msq.film, keys = anx.keys)
## Test-retest correlations and reliabilities
## Test retest correlation =  0.55
##  Alpha reliabilities for both time points 
##   raw G3 std G3   G6 av.r  S/N   se lower upper var.r
## x   0.85   0.85 0.88 0.37 5.87 0.05  0.75  0.94  0.03
## y   0.83   0.83 0.87 0.33 4.96 0.06  0.66  0.92  0.04
## 
## 

Short term: dependability

Short term dependability is just the test-retest correlation after a short delay. For the 313 subjects in the control condition, the complete output of sai.test.retest.control (shown above) gives a correlation of scale scores over time of 0.76.
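Conceptually, this correlation can be computed by hand: split the long-format data by time, form scale scores for each occasion, and correlate them. A toy base-R sketch with made-up scale scores (not the sai data):

```r
d <- data.frame(id    = rep(1:4, times = 2),
                time  = rep(1:2, each  = 4),
                score = c(1, 2, 3, 4,    # occasion 1 scale scores
                          2, 2, 4, 5))   # occasion 2 scale scores

t1 <- d$score[d$time == 1]
t2 <- d$score[d$time == 2]   # rows are matched by id within each time
retest.r <- cor(t1, t2)
round(retest.r, 2)           # 0.95
```

testRetest does this matching (and the item-level work) for you, including alignment by the id variable.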


Many functions show too much output, use summary to get the highlights

The summary command is tailored for each class of object. This is one of the joys of object oriented programming: summary discovers what it is supposed to summarize, so the output differs for each type of object. The brief summaries of the drug, film, and control analyses were shown above.
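The mechanism behind this is R's S3 class system: summary(x) looks at class(x) and calls the matching method. A minimal sketch with a made-up class (hypothetical, just to show the dispatch mechanics):

```r
# A tiny object carrying results, tagged with its own class
res <- structure(list(r = 0.76, n = 313), class = "toyRetest")

# A summary method for that class: summary() finds it by name
summary.toyRetest <- function(object, ...) {
  cat("Test retest correlation =", object$r, "based on n =", object$n, "\n")
  invisible(object)
}

summary(res)   # dispatches to summary.toyRetest
```

This is why summary(sai.test.retest.drug) prints reliability output while summary(fit) for a regression prints coefficients: each package author writes the method for their own class.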


Test dependability by examining duplicated items

As suggested by Dustin Wood and his colleagues, it is possible to repeat the same item in an inventory to get an estimate of the reliability of a single item.

Ten of the sai items were given immediately afterwards as part of the msqR. This gives us a measure of immediate dependability (see the generalizability table in Revelle and Condon, 2019). An alternative way to use testRetest is to include two data sets with the same number of rows in each set.

The code for this is taken from the example in the help page for testRetest in the psych package. We find the overlapping items using the is.element function, which is identical to the %in% operator.
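is.element(x, table) is defined as x %in% table, so the two forms are interchangeable; a quick base-R check:

```r
studies <- c("AGES", "XRAY", "RIM", "SAM")
chosen  <- c("RIM", "SAM")

a <- is.element(studies, chosen)
b <- studies %in% chosen
identical(a, b)   # TRUE: both give the same logical vector
```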

sai.xray1 <- subset(sai, (sai$time==1) & (sai$study=="XRAY"))
msq.xray <- subset(psychTools::msqR,
 (psychTools::msqR$study=="XRAY") & (psychTools::msqR$time==1))
select <- colnames(sai.xray1)[is.element(colnames(sai.xray1 ),colnames(psychTools::msqR))] 

select <-select[-c(1:3)]  #get rid of the id information
#The case where the two times are in the form x, y
#show the items we are including
select
##  [1] "calm"      "tense"     "at.ease"   "upset"     "anxious"   "confident" "nervous"   "jittery"  
##  [9] "relaxed"   "content"
dependability <-  testRetest(sai.xray1,msq.xray,keys=select)
## Some items were negatively correlated with total scale and were automatically reversed.
##  This is indicated by a negative sign for the variable name.
names(dependability)  #what are the objects included that could be examined
##  [1] "r12"        "alpha"      "rqq"        "dxy"        "item.stats" "scores"     "xy.df"     
##  [8] "key"        "ml"         "Call"
dependability  #print the main results by specifying the name of the object to be printed.
## 
## Test Retest reliability 
## Call: testRetest(t1 = sai.xray1, t2 = msq.xray, keys = select)
## 
## Number of subjects =  200  Number of items =  10
##  Correlation of scale scores over time 0.92
##  Alpha reliability statistics for time 1 and time 2 
##        raw G3 std G3   G6 av.r  S/N   se lower upper var.r
## Time 1   0.90   0.90 0.91 0.46 8.54 0.02  0.83  0.92  0.03
## Time 2   0.87   0.87 0.89 0.40 6.54 0.04  0.78  0.94  0.02
## 
##  Mean between person, across item reliability =  0.71
##  Mean within person, across item reliability =  0.6
## with standard deviation of  0.3 
## 
##  Mean within person, across item d2 =  1.38
## R1F  =  0.94 Reliability of average of all items for one  time (Random time effects)
## RkF  =  0.97 Reliability of average of all items and both times (Fixed time effects)
## R1R  =  0.44 Generalizability of a single time point across all items (Random time effects)
## Rc   =  0.29 Generalizability of change (fixed time points, fixed items) 
## Multilevel components of variance
##              variance Percent
## ID               0.34    0.24
## Time             0.44    0.31
## Items            0.17    0.12
## ID x time        0.01    0.01
## ID x items       0.24    0.17
## time x items     0.01    0.01
## Residual         0.23    0.16
## Total            1.45    1.00
## 
##  To see the item.stats, print with short=FALSE. 
## To see the subject reliabilities and differences, examine the 'scores' object.

Long Term – Stability

Stability can be taken over days, months, or even years (Deary et al.). To show the stability of mood over a few days, we use two data sets from psychTools.

There are several data sets that allow us to examine temporal stability over several days or weeks. The msqR and sai were given with a one to two day delay in two studies; 666 subjects were available with within-study repeated measures over two days.

These are mood items. Should they be stable? What does it mean if they are?

 #select the two day subjects
 sai.2 <- subset(sai, sai$study %in% cs(RIM,SAM))
 msqR.2 <- subset(msqR, msqR$study %in% cs(RIM,SAM))
 sai.stability <- testRetest(sai.2 ,keys = select)
## Some items were negatively correlated with total scale and were automatically reversed.
##  This is indicated by a negative sign for the variable name.
## Warning in cor(tx[, i], ty[, i], use = "pairwise"): the standard deviation is zero

## Warning in cor(tx[, i], ty[, i], use = "pairwise"): the standard deviation is zero
## boundary (singular) fit: see help('isSingular')
 msqR.stability <- testRetest(msqR.2, keys = select) 
## Some items were negatively correlated with total scale and were automatically reversed.
##  This is indicated by a negative sign for the variable name.
## boundary (singular) fit: see help('isSingular')
 summary(sai.stability)
## Call: testRetest(t1 = sai.2, keys = select)
## Test-retest correlations and reliabilities
## Test retest correlation =  0.36
##  Alpha reliabilities for both time points 
##   raw G3 std G3   G6 av.r  S/N   se lower upper var.r
## x   0.86   0.86 0.89 0.38 6.07 0.04  0.73  0.92  0.03
## y   0.87   0.87 0.90 0.41 6.85 0.04  0.78  0.93  0.03
## 
## 
 summary(msqR.stability)
## Call: testRetest(t1 = msqR.2, keys = select)
## Test-retest correlations and reliabilities
## Test retest correlation =  0.39
##  Alpha reliabilities for both time points 
##   raw G3 std G3   G6 av.r  S/N   se lower upper var.r
## x   0.81   0.82 0.86 0.31 4.44 0.08  0.63  0.93  0.04
## y   0.82   0.83 0.86 0.32 4.72 0.07  0.65  0.93  0.04
## 
## 

Trait and State Measures

Just as we can find the 1-2 day stability of the state scores of the sai and msqR data sets, so we can find the correlation between the trait measures on day 1 and the state measures at the same time, as well as after a delay.

To do this, we first need to find scores for the state measures and the trait measures, and then correlate these measures. The psych package includes multiple ways of finding scale scores from items; details may be found in the online vignette included in the package. Here we show one way of obtaining scores based upon the items, using the scoreItems function. When we discuss the alpha function, we will show how to use that function to obtain scores as well.

There are two traditions in converting items into scales (this is known as forming aggregates of items): one is to find the average item response, the other is to total the item responses. Although both options are available in scoreItems, the default is to find item averages. This has the advantage that the scores are in the same metric as the items. When scoring items, it is important to recognize that some items need to be reverse keyed. Although it is possible to do this by manually recoding items, it is easier to let the function do it for you by specifying the direction of the items in a keys list. We will use the keys we created before.
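The arithmetic behind reverse keying and averaging is simple: a reversed item is scored as (max + min − response), and the scale score is the mean across the (possibly reversed) items, which keeps the score in the item metric. A base-R sketch with two toy items on an assumed 1-4 response scale:

```r
responses <- data.frame(tense = c(1, 4, 2),   # positively keyed
                        calm  = c(4, 1, 3))   # "-calm": reverse keyed

max.r <- 4; min.r <- 1                        # assumed response range
reversed.calm <- max.r + min.r - responses$calm   # 5 - x on a 1-4 scale

anxiety <- rowMeans(cbind(responses$tense, reversed.calm))
anxiety    # average-item metric: each score stays between 1 and 4
```

scoreItems does this (plus missing-data handling and reliability statistics) for every scale in the keys list at once.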

sai.1 <- sai[sai$time == 1,]   #get just first measures for the sai
dim(sai.1)
## [1] 3032   23
dim(tai)  #note that these are the same
## [1] 3032   23
sai.1.scores <- scoreItems(keys=sai.alternate.forms , items = sai.1)
#now, show the correlations correcting for overlap.
sai.1.overlap <- scoreOverlap(keys = sai.alternate.forms, r = sai.1) 
sai.1.overlap
## Call: scoreOverlap(keys = sai.alternate.forms, r = sai.1)
## 
## (Standardized) Alpha:
##  sai anx1 pos1 neg1 anx2 pos2 neg2 
## 0.91 0.87 0.86 0.82 0.80 0.83 0.73 
## 
## (Standardized) G6*:
##  sai anx1 pos1 neg1 anx2 pos2 neg2 
## 0.86 0.78 0.87 0.84 0.72 0.84 0.79 
## 
## Average item correlation:
##  sai anx1 pos1 neg1 anx2 pos2 neg2 
## 0.34 0.40 0.54 0.48 0.28 0.50 0.36 
## 
## Median item correlation:
##  sai anx1 pos1 neg1 anx2 pos2 neg2 
## 0.32 0.39 0.56 0.55 0.24 0.47 0.28 
## 
## Number of items:
##  sai anx1 pos1 neg1 anx2 pos2 neg2 
##   20   10    5    5   10    5    5 
## 
## Signal to Noise ratio based upon average r and n 
##  sai anx1 pos1 neg1 anx2 pos2 neg2 
## 10.2  6.6  5.9  4.7  4.0  4.9  2.8 
## 
## Scale intercorrelations corrected for item overlap and attenuation 
##  adjusted for overlap correlations below the diagonal, alpha on the diagonal 
##  corrected correlations above the diagonal:
##        sai  anx1  pos1  neg1  anx2  pos2  neg2
## sai   0.91  1.00 -0.92  0.85  1.00 -0.82  0.85
## anx1  0.89  0.87 -0.90  0.89  1.00 -0.77  0.87
## pos1 -0.81 -0.77  0.86 -0.60 -0.95  0.95 -0.57
## neg1  0.74  0.75 -0.50  0.82  0.81 -0.40  0.98
## anx2  0.86  0.83 -0.78  0.66  0.80 -0.86  0.82
## pos2 -0.71 -0.66  0.80 -0.33 -0.70  0.83 -0.41
## neg2  0.70  0.70 -0.45  0.76  0.63 -0.32  0.73
## 
##  Percentage of keyed items with highest absolute correlation with scale  (scale quality)
##  sai anx1 pos1 neg1 anx2 pos2 neg2 
##  0.0  0.3  0.0  0.8  0.0  0.6  0.6 
## 
##  Average adjusted correlations within and between scales (MIMS)
##      sai   anx1  pos1  neg1  anx2  pos2  neg2 
## sai   0.34                                    
## anx1  0.37  0.40                              
## pos1 -0.39 -0.42  0.54                        
## neg1  0.34  0.39 -0.31  0.48                  
## anx2  0.31  0.34 -0.37  0.30  0.28            
## pos2 -0.33 -0.34  0.49 -0.19 -0.32  0.50      
## neg2  0.30  0.33 -0.25  0.41  0.26 -0.17  0.36
## 
##  Average adjusted item x scale correlations within and between scales (MIMT)
##      sai   anx1  pos1  neg1  anx2  pos2  neg2 
## sai  -0.04                                    
## anx1 -0.04 -0.02                              
## pos1 -0.71 -0.72  0.74                        
## neg1  0.62  0.67 -0.41  0.70                  
## anx2 -0.03 -0.01  0.16  0.15 -0.06            
## pos2 -0.60 -0.58  0.66 -0.28 -0.66  0.71      
## neg2  0.54  0.55 -0.34  0.58  0.55 -0.24  0.62
## 
##  In order to see the item by scale loadings and frequency counts of the data
##  print with the short option = FALSE

We then score the trait items (20 items are keyed) for the tai. Once again, we form positively and negatively keyed subscales, as well as an overall scale.

tai.keys <- list(tai =c("-pleasant" ,  "nervous" , "not.satisfied" , "wish.happy" ,
   "failure"  ,  "-rested" ,  "-calm" , "difficulties", "worry",   "-happy",  
   "disturbing.thoughts",    "lack.self.confidence", "-secure", "decisive"  , 
    "inadequate", "-content",  "thoughts.bother" , "disappointments" ,    
    "-steady",  "tension"  ) ,        
tai.pos = c( "pleasant" , "-wish.happy" ,"rested" ,"calm"  ,  "happy", "secure" ,
        "content"  , "steady"  ) ,
tai.neg =  c( "nervous" , "not.satisfied" , "failure"  , "difficulties","worry",
    "disturbing.thoughts" , "lack.self.confidence", "decisive","inadequate",
    "thoughts.bother","disappointments","tension"  )   )        

tai.scores <- scoreItems(keys=tai.keys, items =tai)   #find the scores 
tai.scores   #show the output
## Call: scoreItems(keys = tai.keys, items = tai)
## 
## (Unstandardized) Alpha:
##       tai tai.pos tai.neg
## alpha 0.9    0.87    0.83
## 
## Standard errors of unstandardized Alpha:
##          tai tai.pos tai.neg
## ASE   0.0039  0.0069  0.0067
## 
## Average item correlation:
##            tai tai.pos tai.neg
## average.r 0.31    0.46    0.29
## 
## Median item correlation:
##     tai tai.pos tai.neg 
##    0.30    0.46    0.29 
## 
##  Guttman 6* reliability: 
##           tai tai.pos tai.neg
## Lambda.6 0.91    0.88    0.84
## 
## Signal/Noise based upon av.r : 
##              tai tai.pos tai.neg
## Signal/Noise 9.1     6.8     4.9
## 
## Scale intercorrelations corrected for attenuation 
##  raw correlations below the diagonal, alpha on the diagonal 
##  corrected correlations above the diagonal:
##           tai tai.pos tai.neg
## tai      0.90   -1.01    1.07
## tai.pos -0.89    0.87   -0.77
## tai.neg  0.93   -0.66    0.83
## 
##  Average adjusted correlations within and between scales (MIMS)
##         tai   ta.ps ta.ng
## tai      0.31            
## tai.pos -0.21  0.46      
## tai.neg  0.17 -0.16  0.29
## 
##  Average adjusted item x scale correlations within and between scales (MIMT)
##         tai   ta.ps ta.ng
## tai      0.59            
## tai.pos -0.65  0.73      
## tai.neg  0.55 -0.39  0.59
## 
##  In order to see the item by scale loadings and frequency counts of the data
##  print with the short option = FALSE

Examining just part of the output

As mentioned before, psych functions return a great deal of output, most of which is not shown to the user unless requested. In the case of scoreItems, one of the objects returned is a matrix of scores for each subject on each scale. We can then correlate the state scores at time 1 with the trait scores at time 1. If we do this for state scores taken several days after the trait measures, this tells us how much of the state measure is actually trait. Although there are many functions that can find the correlations between two data sets, we use cor2, which automatically uses the "pairwise.complete" option in cor and rounds to 2 decimal places. corr.test would also give the correlations, as well as their confidence intervals, but given the sample size of 3,032, this is not necessary.

cor2(sai.1.scores$scores,tai.scores$scores)  
##        tai tai.pos tai.neg
## sai   0.53   -0.52    0.46
## anx1  0.47   -0.47    0.40
## pos1 -0.50    0.55   -0.38
## neg1  0.30   -0.23    0.31
## anx2  0.55   -0.53    0.47
## pos2 -0.48    0.54   -0.36
## neg2  0.41   -0.31    0.42
#this is the same as
 R <- cor(sai.1.scores$scores,tai.scores$scores, use="pairwise")
 #but with useful defaults.
 #to find the confidence intervals of the correlations, we can use corr.test 
ci <- corr.test(sai.1.scores$scores,tai.scores$scores) 

Now, do this again, but find the correlation of the trait scores from day 1 with the state scores measured on days 2 or 3. We choose the subjects from time 3 in the SAM and RIM experiments.

sai.sam.rim.time3 <- sam.rim[sam.rim$time==3,]
tai.sam.rim <- tai[tai$study %in% cs(SAM, RIM),]
sai.scores <- scoreItems(keys=  sai.alternate.forms, items = sai.sam.rim.time3)
tai.scores.sam.rim <- scoreItems(keys = tai.keys, items = tai.sam.rim)
cor2(sai.scores$scores,tai.scores.sam.rim$scores)
##        tai tai.pos tai.neg
## sai   0.48   -0.47    0.42
## anx1  0.43   -0.43    0.36
## pos1 -0.45    0.48   -0.35
## neg1  0.28   -0.24    0.28
## anx2  0.50   -0.48    0.44
## pos2 -0.44    0.46   -0.35
## neg2  0.37   -0.30    0.38
#Compare these delayed values to the immediate values.  


#cor2(sai.scores$scores,tai.scores$scores)     #we need to fix this line

Consistency using the testRetest function

sam.rim.test.retest <-  testRetest(sam.rim,keys=anx.keys)  #do the analysis
## Warning in cor(tx[, i], ty[, i], use = "pairwise"): the standard deviation is zero

## Warning in cor(tx[, i], ty[, i], use = "pairwise"): the standard deviation is zero
## boundary (singular) fit: see help('isSingular')
sam.rim.test.retest    #show the results
## 
## Test Retest reliability 
## Call: testRetest(t1 = sam.rim, keys = anx.keys)
## 
## Number of subjects =  666  Number of items =  10
##  Correlation of scale scores over time 0.36
##  Alpha reliability statistics for time 1 and time 2 
##        raw G3 std G3   G6 av.r  S/N   se lower upper var.r
## Time 1   0.86   0.86 0.89 0.38 6.07 0.04  0.73  0.92  0.03
## Time 2   0.87   0.87 0.90 0.41 6.85 0.04  0.78  0.93  0.03
## 
##  Mean between person, across item reliability =  0.29
##  Mean within person, across item reliability =  0.48
## with standard deviation of  0.37 
## 
##  Mean within person, across item d2 =  0.99
## R1F  =  0.78 Reliability of average of all items for one  time (Random time effects)
## RkF  =  0.88 Reliability of average of all items and both times (Fixed time effects)
## R1R  =  0.36 Generalizability of a single time point across all items (Random time effects)
## Rc   =  0.84 Generalizability of change (fixed time points, fixed items) 
## Multilevel components of variance
##              variance Percent
## ID               0.10    0.11
## Time             0.00    0.00
## Items            0.21    0.23
## ID x time        0.17    0.19
## ID x items       0.11    0.12
## time x items     0.00    0.00
## Residual         0.32    0.35
## Total            0.92    1.00
## 
##  To see the item.stats, print with short=FALSE. 
## To see the subject reliabilities and differences, examine the 'scores' object.

Reliability using split halves

To find split half reliabilities and to graph the distribution of the split halves (see the histogram below) requires three lines. Here we use the built-in ability data set of 16 items for 1,525 participants from psychTools. The data were taken from the Synthetic Aperture Personality Assessment (SAPA) project.

sp <- splitHalf(ability,raw=TRUE, brute=TRUE)
sp #show the results
## Split half reliabilities  
## Call: splitHalf(r = ability, raw = TRUE, brute = TRUE)
## 
## Maximum split half reliability (lambda 4) =  0.87
## Guttman lambda 6                          =  0.84
## Average split half reliability            =  0.83
## Guttman lambda 3 (alpha)                  =  0.83
## Guttman lambda 2                          =  0.83
## Minimum split half reliability  (beta)    =  0.73
## Average interitem r =  0.23  with median =  0.21
##                                              2.5% 50% 97.5%
##  Quantiles of split half reliability      =  0.77 0.83 0.86
hist(sp$raw,breaks=101, xlab="Split half reliability",
      main="Split half reliabilities of 16 ICAR ability items")

Internal consistency using the alpha and omega functions

Although we do not recommend \(\alpha\) as a measure of consistency, many researchers want to report it. The alpha function will do that. Confidence intervals from normal theory as well as from the bootstrap are reported. We use 10 items from the anxiety inventory as an example, taking all the cases from the msqR data set. By default, items that are negatively correlated with the total score are not reversed; if we specify check.keys=TRUE, such items are automatically reverse keyed, and a warning is produced.

select  #show the items we want to score
##  [1] "calm"      "tense"     "at.ease"   "upset"     "anxious"   "confident" "nervous"   "jittery"  
##  [9] "relaxed"   "content"
alpha(msq1[select],check.keys=TRUE)  #find  alpha -- reverse code some items automatically
## Warning in alpha(msq1[select], check.keys = TRUE): Some items were negatively correlated with the first principal component and were automatically reversed.
##  This is indicated by a negative sign for the variable name.
## 
## Reliability analysis   
## Call: alpha(x = msq1[select], check.keys = TRUE)
## 
##   raw_alpha std.alpha G6(smc) average_r  S/N    ase mean   sd median_r
##       0.83      0.48    0.68     0.084 0.92 0.0046    2 0.53   -0.044
## 
##     95% confidence boundaries 
##          lower alpha upper
## Feldt     0.82  0.83  0.84
## Duhachek  0.82  0.83  0.84
## 
##  Reliability if an item is dropped:
##           raw_alpha std.alpha G6(smc) average_r  S/N alpha se var.r  med.r
## calm           0.80      0.48    0.67     0.093 0.92   0.0054  0.13 -0.017
## tense-         0.81      0.46    0.65     0.086 0.85   0.0052  0.12 -0.017
## at.ease        0.80      0.45    0.65     0.083 0.82   0.0056  0.12 -0.017
## upset-         0.82      0.52    0.70     0.106 1.07   0.0049  0.14 -0.017
## anxious-       0.82      0.43    0.65     0.077 0.75   0.0048  0.14 -0.057
## confident      0.83      0.39    0.63     0.066 0.63   0.0046  0.15 -0.102
## nervous-       0.82      0.45    0.66     0.082 0.81   0.0050  0.13 -0.017
## jittery-       0.82      0.46    0.67     0.087 0.86   0.0048  0.14 -0.070
## relaxed        0.80      0.49    0.68     0.095 0.95   0.0055  0.12 -0.017
## content        0.82      0.40    0.62     0.068 0.66   0.0050  0.14 -0.030
## 
##  Item statistics 
##              n raw.r std.r r.cor r.drop mean   sd
## calm      3020  0.73  0.34 0.264   0.62  1.6 0.90
## tense-    3017  0.66  0.41 0.352   0.59  2.5 0.77
## at.ease   3018  0.77  0.43 0.390   0.66  1.6 0.93
## upset-    3020  0.52  0.23 0.074   0.43  2.7 0.66
## anxious-  1871  0.58  0.48 0.417   0.45  2.3 0.86
## confident 3021  0.52  0.58 0.527   0.36  1.5 0.91
## nervous-  3017  0.59  0.44 0.368   0.52  2.6 0.67
## jittery-  3026  0.53  0.40 0.300   0.42  2.3 0.83
## relaxed   3024  0.75  0.33 0.241   0.64  1.7 0.89
## content   3010  0.65  0.56 0.535   0.51  1.5 0.92
## 
## Non missing response frequency for each item
##              0    1    2    3 miss
## calm      0.12 0.34 0.37 0.17 0.00
## tense     0.61 0.28 0.09 0.03 0.00
## at.ease   0.13 0.32 0.36 0.18 0.00
## upset     0.76 0.18 0.04 0.02 0.00
## anxious   0.53 0.29 0.13 0.04 0.38
## confident 0.14 0.33 0.38 0.15 0.00
## nervous   0.71 0.22 0.06 0.02 0.00
## jittery   0.54 0.31 0.12 0.04 0.00
## relaxed   0.10 0.30 0.41 0.19 0.00
## content   0.17 0.35 0.35 0.13 0.01

Now do it again, using the omegaSem function, which calls the lavaan package to do a SEM analysis. omega reports just the EFA solution; omegaSem reports both the EFA and the CFA solutions. Note that they differ somewhat, with a lower \(\omega_h\) estimate from the EFA solution (.42) than from the CFA solution (.55). By forcing cross loadings to 0 in the CFA, more variance is accounted for by the general factor.

You might have to install lavaan for this to work.

omegaSem(msq1[select],nfactors = 2)   #specify a two factor solution
## Loading required namespace: lavaan
## Loading required namespace: GPArotation
## 
## Three factors are required for identification -- general factor loadings set to be equal. 
## Proceed with caution. 
## Think about redoing the analysis with alternative values of the 'option' setting.

##  
## Call: omegaSem(m = msq1[select], nfactors = 2)
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.83 
## G.6:                   0.86 
## Omega Hierarchical:    0.42 
## Omega H asymptotic:    0.48 
## Omega Total            0.87 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##               g   F1*   F2*   h2   u2   p2
## calm-      0.47  0.22 -0.47 0.49 0.51 0.45
## tense      0.48  0.63       0.63 0.37 0.36
## at.ease-   0.51       -0.60 0.64 0.36 0.40
## upset      0.32  0.29       0.22 0.78 0.47
## anxious    0.38  0.62       0.53 0.47 0.27
## confident- 0.29       -0.58 0.45 0.55 0.18
## nervous    0.42  0.59       0.53 0.47 0.33
## jittery    0.35  0.53       0.41 0.59 0.30
## relaxed-   0.48  0.23 -0.47 0.51 0.49 0.46
## content-   0.40       -0.68 0.64 0.36 0.25
## 
## With Sums of squares  of:
##   g F1* F2* 
## 1.7 1.7 1.6 
## 
## general/max  1.04   max/min =   1.01
## mean percent general =  0.35    with sd =  0.1 and cv of  0.28 
## Explained Common Variance of the general factor =  0.34 
## 
## The degrees of freedom are 26  and the fit is  0.25 
## The number of observations was  3032  with Chi Square =  756.57  with prob <  9.5e-143
## The root mean square of the residuals is  0.04 
## The df corrected root mean square of the residuals is  0.05
## RMSEA index =  0.096  and the 10 % confidence intervals are  0.09 0.102
## BIC =  548.13
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 35  and the fit is  1.82 
## The number of observations was  3032  with Chi Square =  5493.51  with prob <  0
## The root mean square of the residuals is  0.22 
## The df corrected root mean square of the residuals is  0.25 
## 
## RMSEA index =  0.227  and the 10 % confidence intervals are  0.222 0.232
## BIC =  5212.91 
## 
## Measures of factor score adequacy             
##                                                   g  F1*  F2*
## Correlation of scores with factors             0.66 0.78 0.79
## Multiple R square of scores with factors       0.43 0.61 0.62
## Minimum correlation of factor score estimates -0.14 0.22 0.23
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*  F2*
## Omega total for total scores and subscales    0.87 0.79 0.84
## Omega general for total scores and subscales  0.42 0.28 0.31
## Omega group for total scores and subscales    0.38 0.52 0.53
## 
##  The following analyses were done using the  lavaan  package 
## 
##  Omega Hierarchical from a confirmatory model using sem =  0.55
##  Omega Total  from a confirmatory model using sem =  0.88 
## With loadings of 
##              g  F1*  F2*   h2   u2   p2
## calm      0.69      0.31 0.58 0.42 0.82
## tense-    0.54 0.58      0.63 0.37 0.46
## at.ease   0.66      0.47 0.66 0.34 0.66
## upset-    0.34 0.32      0.22 0.78 0.53
## anxious-  0.41 0.60      0.53 0.47 0.32
## confident           0.77 0.61 0.39 0.02
## nervous-  0.44 0.60      0.55 0.45 0.35
## jittery-  0.47 0.41      0.39 0.61 0.57
## relaxed   0.68      0.33 0.57 0.43 0.81
## content   0.31      0.74 0.65 0.35 0.15
## 
## With sum of squared loadings of:
##   g F1* F2* 
## 2.5 1.3 1.6 
## 
## The degrees of freedom of the confirmatory model are  25  and the fit is  506.0981  with p =  0
## general/max  1.57   max/min =   1.18
## mean percent general =  0.47    with sd =  0.26 and cv of  0.57 
## Explained Common Variance of the general factor =  0.46 
## 
## Measures of factor score adequacy             
##                                                  g  F1*  F2*
## Correlation of scores with factors            0.86 0.80 0.88
## Multiple R square of scores with factors      0.74 0.65 0.77
## Minimum correlation of factor score estimates 0.48 0.29 0.55
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*  F2*
## Omega total for total scores and subscales    0.88 0.81 0.87
## Omega general for total scores and subscales  0.55 0.35 0.41
## Omega group for total scores and subscales    0.33 0.46 0.46
## 
## To get the standard sem fit statistics, ask for summary on the fitted object

It is helpful to examine the graphic output to try to understand what is happening.
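The diagram is drawn automatically when omega or omegaSem is run; to redraw it from a saved fit, a minimal sketch (assuming the fit is stored in an object first):

```r
om <- omega(msq1[select], nfactors = 2)  #draws the Schmid-Leiman diagram by default
omega.diagram(om, sl = FALSE)            #redraw as the hierarchical (higher-order) structure
```

omega.diagram also takes a cut argument to suppress small loadings if the figure is cluttered.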

Parallel Forms

Reliability is the correlation of a test with another test just like it. One way of finding such a test is to create a second test with similar content.

The sai data set includes 20 items; 10 of them overlap with the msqR data set and are used for most examples. But we may also score anxiety from the second set of items. We can use either the scoreItems or the scoreOverlap function. The latter corrects for the fact that the positive and negative subscales of the anxiety scale overlap with the total scale.

sai.parallel <- scoreOverlap(sai.alternate.forms,sai1)
sai.parallel
## Call: scoreOverlap(keys = sai.alternate.forms, r = sai1)
## 
## (Standardized) Alpha:
##  sai anx1 pos1 neg1 anx2 pos2 neg2 
## 0.91 0.87 0.86 0.82 0.80 0.83 0.73 
## 
## (Standardized) G6*:
##  sai anx1 pos1 neg1 anx2 pos2 neg2 
## 0.86 0.78 0.87 0.84 0.72 0.84 0.79 
## 
## Average item correlation:
##  sai anx1 pos1 neg1 anx2 pos2 neg2 
## 0.34 0.40 0.54 0.48 0.28 0.50 0.36 
## 
## Median item correlation:
##  sai anx1 pos1 neg1 anx2 pos2 neg2 
## 0.32 0.39 0.56 0.55 0.24 0.47 0.28 
## 
## Number of items:
##  sai anx1 pos1 neg1 anx2 pos2 neg2 
##   20   10    5    5   10    5    5 
## 
## Signal to Noise ratio based upon average r and n 
##  sai anx1 pos1 neg1 anx2 pos2 neg2 
## 10.2  6.6  5.9  4.7  4.0  4.9  2.8 
## 
## Scale intercorrelations corrected for item overlap and attenuation 
##  adjusted for overlap correlations below the diagonal, alpha on the diagonal 
##  corrected correlations above the diagonal:
##        sai  anx1  pos1  neg1  anx2  pos2  neg2
## sai   0.91  1.00 -0.92  0.85  1.00 -0.82  0.85
## anx1  0.89  0.87 -0.90  0.89  1.00 -0.77  0.87
## pos1 -0.81 -0.77  0.86 -0.60 -0.95  0.95 -0.57
## neg1  0.74  0.75 -0.50  0.82  0.81 -0.40  0.98
## anx2  0.86  0.83 -0.78  0.66  0.80 -0.86  0.82
## pos2 -0.71 -0.66  0.80 -0.33 -0.70  0.83 -0.41
## neg2  0.70  0.70 -0.45  0.76  0.63 -0.32  0.73
## 
##  Percentage of keyed items with highest absolute correlation with scale  (scale quality)
##  sai anx1 pos1 neg1 anx2 pos2 neg2 
##  0.0  0.3  0.0  0.8  0.0  0.6  0.6 
## 
##  Average adjusted correlations within and between scales (MIMS)
##      sai   anx1  pos1  neg1  anx2  pos2  neg2 
## sai   0.34                                    
## anx1  0.37  0.40                              
## pos1 -0.39 -0.42  0.54                        
## neg1  0.34  0.39 -0.31  0.48                  
## anx2  0.31  0.34 -0.37  0.30  0.28            
## pos2 -0.33 -0.34  0.49 -0.19 -0.32  0.50      
## neg2  0.30  0.33 -0.25  0.41  0.26 -0.17  0.36
## 
##  Average adjusted item x scale correlations within and between scales (MIMT)
##      sai   anx1  pos1  neg1  anx2  pos2  neg2 
## sai  -0.04                                    
## anx1 -0.04 -0.02                              
## pos1 -0.71 -0.72  0.74                        
## neg1  0.62  0.67 -0.41  0.70                  
## anx2 -0.03 -0.01  0.16  0.15 -0.06            
## pos2 -0.60 -0.58  0.66 -0.28 -0.66  0.71      
## neg2  0.54  0.55 -0.34  0.58  0.55 -0.24  0.62
## 
##  In order to see the item by scale loadings and frequency counts of the data
##  print with the short option = FALSE

Test retest over several weeks as a measure of stability

The prior measures were mood measures ("How do you feel right now?"), which should not show much stability. Trait measures ("What do you normally do?"), however, should show more stability.
The Eysenck Personality Inventory (EPI) was given twice to 666 participants. The first time was during group testing for all students enrolled in introductory psychology. The second time, several weeks later, it was given during the normal PMC lab experiment.

These data are particularly useful because, although the internal consistency (alpha) of the Impulsivity scale is very low at both time points (.52 and .49), its test-retest reliability is substantially higher (.70). This is a nice example of how internal consistency is not the same as test-retest reliability.

This is also a nice example of creating and running a small script.

Let's see how we found these results.

First, select the time 1 and time 2 data from the epiR data set. Then score both time points using scoreItems, and correlate these scores.

epi1 <- subset(epiR,epiR$time==1)
epi2 <- subset(epiR, epiR$time==2)
scores1 <- scoreItems(epi.keys,epi1)
scores2 <- scoreItems(epi.keys,epi2)
summary(scores1)
## Call: scoreItems(keys = epi.keys, items = epi1)
## 
## Scale intercorrelations corrected for attenuation 
##  raw correlations below the diagonal, (unstandardized) alpha on the diagonal 
##  corrected correlations above the diagonal:
##         E       N     L    Imp   Soc
## E    0.77 -0.2083 -0.36  1.219  1.14
## N   -0.16  0.8135 -0.39 -0.015 -0.28
## L   -0.19 -0.2185  0.39 -0.432 -0.15
## Imp  0.77 -0.0099 -0.19  0.519  0.66
## Soc  0.87 -0.2216 -0.08  0.417  0.76
summary(scores2)
## Call: scoreItems(keys = epi.keys, items = epi2)
## 
## Scale intercorrelations corrected for attenuation 
##  raw correlations below the diagonal, (unstandardized) alpha on the diagonal 
##  corrected correlations above the diagonal:
##         E      N     L    Imp   Soc
## E    0.74 -0.271 -0.46  1.209  1.18
## N   -0.21  0.796 -0.30 -0.074 -0.32
## L   -0.25 -0.167  0.40 -0.593 -0.26
## Imp  0.73 -0.046 -0.26  0.488  0.62
## Soc  0.88 -0.251 -0.14  0.379  0.75
r12 <- cor2(scores1$scores,scores2$scores)
##         E     N     L   Imp   Soc
## E    0.81 -0.16 -0.21  0.58  0.73
## N   -0.14  0.80 -0.18  0.00 -0.18
## L   -0.22 -0.16  0.65 -0.22 -0.11
## Imp  0.60 -0.02 -0.21  0.70  0.38
## Soc  0.73 -0.19 -0.11  0.35  0.81
round(scores1$alpha,2) 
##          E    N    L  Imp  Soc
## alpha 0.77 0.81 0.39 0.52 0.76
round(scores2$alpha,2)
##          E   N   L  Imp  Soc
## alpha 0.74 0.8 0.4 0.49 0.75
round(diag(r12),2)
##    E    N    L  Imp  Soc 
## 0.81 0.80 0.65 0.70 0.81
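To put the internal consistencies and stabilities side by side, a minimal sketch that reuses the objects created above (scores1, scores2, r12):

```r
#one row per statistic, one column per scale (E, N, L, Imp, Soc)
stability <- rbind(scores1$alpha, scores2$alpha, diag(r12))
rownames(stability) <- c("alpha.t1", "alpha.t2", "retest")
round(stability, 2)
```

For the L and Imp scales, the retest row exceeds both alpha rows, which is the pattern discussed above.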

Showing the items from these questionnaires using lookupFromKeys and the appropriate dictionary.

lookupFromKeys(epi.keys, epi.dictionary)
## $E
##                                                                                                                Content
## V1                                                                                   Do you often long for excitement?
## V3                                                                                           Are you usually carefree?
## V8                                               Do you generally do and say things quickly without stopping to think?
## V10                                                                           Would you do almost anything for a dare?
## V13                                                                  Do you often do things on the spur of the moment?
## V17                                                                                       Do you like going out a lot?
## V22                                                                        When people shout at you do you shout back?
## V25                                        Can you usually let yourself go and enjoy yourself a lot at a lively party?
## V27                                                                 Do other people think of you as being very lively?
## V39                                                         Do you like doing things in which you have to act quickly?
## V44                       Do you like talking to people so much that you never miss a chance of talking to a stranger?
## V46                                    Would you be very unhappy if you could not see lots of people most of the time?
## V49                                                                 Would you say that you were fairly self-confident?
## V53                                                                    Can you easily get some life into a dull party?
## V56                                                                              Do you like playing pranks on others?
## V5-                                                           Do you stop and think things over before doing anything?
## V15-                                                                Generally do you prefer reading to meeting people?
## V20-                                                                    Do you prefer to have few but special friends?
## V29-                                                              Are you mostly quiet when you are with other people?
## V32- If there is something you want to know about, would you rather look it upin a book than talk to someone about it?
## 
## $N
##                                                                                          Content
## V2                                      Do you often need understanding friends to cheer you up?
## V4                                            Do you find it very hard to take no for an answer?
## V7                                                                 Do your moods go up and down?
## V9                                           Do you ever feel just miserable for no good reason?
## V11                    Do you suddenly feel shy when you want to talk to an attractive stranger?
## V14                                Do you often worry about things you should have done or said?
## V16                                                        Are your feelings rather easily hurt?
## V19                     Are you sometimes bubbling over with energy and sometimes very sluggish?
## V21                                                                       Do you daydream a lot?
## V23                                              Are you often troubled about feelings of guilt?
## V26                                              Would you call yourself tense or highly strung?
## V28 After you have done something important, do you come away feelingyou could have done better?
## V31                                     Do ideas run through your head so that you cannot sleep?
## V33                                            Do you get palpitations or thumping in your hear?
## V35                                                  Do you get attacks of shaking or trembling?
## V38                                                                 Are you an irritable person?
## V40                                           Do you worry about awful things that might happen?
## V43                                                                 Do you have many nightmares?
## V45                                                         Are you troubled by aches and pains?
## V47                                                    Would you call yourself a nervous person?
## 
## $L
##                                                                                                                Content
## V6   If you say you will do something do you always keep your promise,no matter how inconvenient it might be to do so?
## V24                                                                       Are all your habits good and desirable ones?
## V36                      Would you always declare everything at customs, even if you knewyou could never be found out?
## V12-                                                            Once in a while do you lose your temper and get angry?
## V18-                    Do you occasionally have thoughts and ideas that you would not like otherpeople to know about?
## V30-                                                                                          Do you sometimes gossip?
## V42-                                                               Have you ever been late for an appointment or work?
## V48-                                       Of all the people you know, are there some whom you definitely do not like?
## V54-                                                        Do you sometimes talk about things you know nothing about?
## 
## $Imp
##                                                                    Content
## V1                                       Do you often long for excitement?
## V3                                               Are you usually carefree?
## V8   Do you generally do and say things quickly without stopping to think?
## V10                               Would you do almost anything for a dare?
## V13                      Do you often do things on the spur of the moment?
## V22                            When people shout at you do you shout back?
## V39             Do you like doing things in which you have to act quickly?
## V5-               Do you stop and think things over before doing anything?
## V41-                       Are you slow and unhurried in the way you move?
## 
## $Soc
##                                                                                                                Content
## V17                                                                                       Do you like going out a lot?
## V25                                        Can you usually let yourself go and enjoy yourself a lot at a lively party?
## V27                                                                 Do other people think of you as being very lively?
## V44                       Do you like talking to people so much that you never miss a chance of talking to a stranger?
## V46                                    Would you be very unhappy if you could not see lots of people most of the time?
## V53                                                                    Can you easily get some life into a dull party?
## V11-                                         Do you suddenly feel shy when you want to talk to an attractive stranger?
## V15-                                                                Generally do you prefer reading to meeting people?
## V20-                                                                    Do you prefer to have few but special friends?
## V29-                                                              Are you mostly quiet when you are with other people?
## V32- If there is something you want to know about, would you rather look it upin a book than talk to someone about it?
## V37-                                                     Do you hate being with a crowd who play jokes on one another?
## V51-                                                   Do you find it hard to really enjoy yourself at a lively party?