score.alpha {psych}    R Documentation

Score scales and find Cronbach's alpha as well as associated statistics

Description

Given a matrix or data.frame of keys (-1, 0, 1 weights) for k scales built from m items, and a matrix or data.frame of item scores for m items and n people, find the sum scores or average scores for each person on each scale. In addition, report Cronbach's alpha, the average interitem correlation, the scale intercorrelations, and the item by scale correlations. (Superseded by score.items).

Usage

score.alpha(keys, items, labels = NULL, totals = TRUE, digits = 2) #deprecated

Arguments

keys

A matrix or dataframe of -1, 0, or 1 weights for each item on each scale

items

Data frame or matrix of raw item scores

labels

column names for the resulting scales

totals

Find sum scores (the default) or average scores

digits

Number of digits to report (default = 2)

Details

This function has been replaced by scoreItems (for multiple scales) and alpha (for single scales).
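
The replacement calls take the same keys and items arguments. As a minimal sketch (assuming the psych package is attached; not run here):

scoreItems(keys, items)            #score several scales at once and report their alphas
alpha(items[, keys[, 1] != 0])     #item analysis of a single scale (the items keyed on scale 1)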

The process of finding sum or average scores for a set of scales, given a larger set of items, is a typical problem in psychometric research. Although the structure of the scales can be determined from the item intercorrelations, finding scale means and variances and doing further analyses requires the sum or the average scale score for each person.
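
Conceptually, the scoring step is a matrix product of the item scores with the keys. The fragment below is only an illustrative sketch of that idea (it ignores reversal of negatively keyed items and missing data, and is not the package's internal code):

items.m <- as.matrix(items)                       #n subjects by m items
keys.m  <- as.matrix(keys)                        #m items by k scales of -1, 0, 1 weights
raw.sum <- items.m %*% keys.m                     #n subjects by k scales of sum scores
n.per.scale <- colSums(abs(keys.m))               #number of items keyed on each scale
raw.mean <- sweep(raw.sum, 2, n.per.scale, "/")   #average rather than sum scores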

Various estimates of scale reliability include "Cronbach's alpha" and the average interitem correlation. For k = the number of items in a scale and av.r = the average correlation between the items in the scale, alpha = k * av.r / (1 + (k - 1) * av.r). Thus, alpha is an increasing function of test length as well as of test homogeneity.
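
As a worked instance of that formula (with arbitrary illustrative numbers, not output of the function):

k <- 10                            #number of items in the scale
av.r <- .30                        #average interitem correlation
k * av.r / (1 + (k - 1) * av.r)    #standardized alpha, approximately .81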

Alpha is a poor estimate of the general factor saturation of a test (see Zinbarg et al., 2005), for it can seriously overestimate the size of a general factor, and it is a better, but not perfect, estimate of total test reliability, which it underestimates. Nonetheless, it is a useful statistic to report.

Value

scores

Sum or average scores for each subject on the k scales

alpha

Cronbach's coefficient alpha. A simple (but non-optimal) measure of the internal consistency of a test. See also beta and omega.

av.r

The average correlation within a scale, also known as alpha 1, is a useful index of the internal consistency of a domain.

n.items

Number of items on each scale

cor

The intercorrelation of all the scales

item.cor

The correlation of each item with each scale. Because this is not corrected for item overlap, it will overestimate the amount that an item correlates with the other items in a scale.

Author(s)

William Revelle

References

Revelle, W. An introduction to psychometric theory with applications in R (in preparation). https://personality-project.org/r/book/

Zinbarg, R. E., Revelle, W., Yovel, I. and Li, W. (2005) Cronbach's alpha, Revelle's beta, and McDonald's omega_h: Their relations with each other and two alternative conceptualizations of reliability. Psychometrika, 70, 123-133.

See Also

score.items, alpha, correct.cor, cluster.loadings, omega

Examples


y <- attitude     #7 numeric variables from the datasets package
#keys: 7 items by 3 scales; scale 1 keys all 7 items, scale 2 the first 4,
#and scale 3 reverse-keys the last 3
keys <- matrix(c(rep(1,7),rep(1,4),rep(0,7),rep(-1,3)),ncol=3)
labels <- c("first","second","third")
x <- score.alpha(keys,y,labels) #deprecated: use scoreItems instead
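
#Assuming the result is returned as a list with the components listed under Value,
#it can be inspected along these lines:
x$alpha          #coefficient alpha for each of the three scales
x$av.r           #average interitem correlation within each scale
round(x$cor,2)   #scale intercorrelations
head(x$scores)   #sum scores (totals=TRUE by default) for the first few subjects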



[Package psych version 1.9.11]