<P> Velicer's (1976) MAP test "involves a complete principal components analysis followed by the examination of a series of matrices of partial correlations" (p. 397). The squared correlation for Step "0" (see Figure 4) is the average squared off-diagonal correlation for the unpartialed correlation matrix. On Step 1, the first principal component and its associated items are partialed out, and the average squared off-diagonal correlation of the resulting partial correlation matrix is computed. On Step 2, the first two principal components are partialed out and the resultant average squared off-diagonal correlation is again computed. The computations are carried out for k minus one steps (k representing the total number of variables in the matrix). All of the average squared correlations for each step are then lined up, and the step number that produced the lowest average squared partial correlation determines the number of components or factors to retain (Velicer, 1976). By this method, components are retained as long as the variance in the correlation matrix represents systematic variance, as opposed to residual or error variance. Although methodologically akin to principal components analysis, the MAP technique has been shown to perform quite well in determining the number of factors to retain in multiple simulation studies. This procedure is made available through SPSS's user interface. See Courtney (2013) for guidance. </P> <P> Kaiser criterion: The Kaiser rule is to drop all components with eigenvalues under 1.0, this being the eigenvalue equal to the information accounted for by an average single item. The Kaiser criterion is the default in SPSS and most statistical software, but it is not recommended as the sole cut-off criterion for estimating the number of factors because it tends to over-extract factors.
A variation of this method has been created in which a researcher calculates confidence intervals for each eigenvalue and retains only those factors whose entire confidence interval is greater than 1.0. </P> <P> Scree plot: The Cattell scree test plots the components on the X-axis and the corresponding eigenvalues on the Y-axis. As one moves to the right, toward later components, the eigenvalues drop. When the drop ceases and the curve makes an elbow toward a less steep decline, Cattell's scree test says to drop all further components after the one starting the elbow. This rule is sometimes criticised for being amenable to researcher-controlled "fudging": because picking the "elbow" can be subjective when the curve has multiple elbows or is a smooth curve, the researcher may be tempted to set the cut-off at the number of factors desired by their research agenda. </P> <P> Variance explained criteria: Some researchers simply keep enough factors to account for 90% (sometimes 80%) of the variation. Where the researcher's goal emphasizes parsimony (explaining variance with as few factors as possible), the criterion could be as low as 50%. </P>
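The MAP procedure described above can be sketched in Python with NumPy. This is a minimal illustration, not Velicer's or SPSS's implementation: it partials successive principal components out of the correlation matrix and returns the step with the lowest average squared off-diagonal partial correlation. The function name and the small-diagonal guard are this sketch's own choices.

```python
import numpy as np

def velicer_map(R):
    """Sketch of Velicer's (1976) Minimum Average Partial (MAP) test.

    R : (p, p) correlation matrix.
    Returns the step number with the lowest average squared
    off-diagonal partial correlation, i.e. the number of
    components/factors to retain.
    """
    p = R.shape[0]
    eigvals, eigvecs = np.linalg.eigh(R)
    # eigh returns ascending order; reverse to descending and
    # clip tiny negative eigenvalues from rounding error
    eigvals = np.clip(eigvals[::-1], 0.0, None)
    eigvecs = eigvecs[:, ::-1]
    loadings = eigvecs * np.sqrt(eigvals)      # principal component loadings

    off = ~np.eye(p, dtype=bool)               # off-diagonal mask
    avg_sq = [np.mean(R[off] ** 2)]            # Step 0: unpartialed matrix

    for m in range(1, p):                      # Steps 1 .. p-1
        A = loadings[:, :m]
        C = R - A @ A.T                        # partial out first m components
        d = np.sqrt(np.diag(C))
        if np.any(d < 1e-12):                  # residual variance exhausted
            break
        partial = C / np.outer(d, d)           # rescale to partial correlations
        avg_sq.append(np.mean(partial[off] ** 2))

    return int(np.argmin(avg_sq))
```

On a correlation matrix with two clear blocks of correlated items, the average squared partial correlation drops until both components are partialed out and rises afterwards, so the minimum falls at step 2.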
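The Kaiser rule and the variance-explained criterion can both be read directly off the eigenvalues of the correlation matrix. A minimal sketch of the two counts (function names are this sketch's own, not any package's API):

```python
import numpy as np

def kaiser_count(R):
    """Kaiser rule: count components with eigenvalue greater than 1.0,
    i.e. accounting for more information than an average single item."""
    return int(np.sum(np.linalg.eigvalsh(R) > 1.0))

def variance_explained_count(R, threshold=0.80):
    """Smallest number of components whose eigenvalues account for at
    least `threshold` of the total variance (e.g. 0.80 or 0.90)."""
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]   # descending
    cum = np.cumsum(eigvals) / np.sum(eigvals)       # cumulative proportion
    return int(np.searchsorted(cum, threshold) + 1)
```

For a six-variable correlation matrix with two strong blocks, both rules agree on two factors; on real data they often disagree, which is why the Kaiser rule is discouraged as a sole criterion.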
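The confidence-interval variation of the Kaiser rule can be illustrated with a bootstrap over the raw data: resample rows, recompute the eigenvalues of the correlation matrix each time, and retain only components whose entire interval lies above 1.0. This is one way to construct such intervals, assumed here for illustration; the cited variation does not prescribe the bootstrap specifically.

```python
import numpy as np

def eigen_ci_retention(X, n_boot=200, alpha=0.05, seed=0):
    """Retain components whose bootstrap confidence interval for the
    eigenvalue lies entirely above 1.0 (a variation on the Kaiser rule).

    X : (n, p) raw data matrix, rows = observations.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    boot_eigs = np.empty((n_boot, p))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)            # resample rows with replacement
        R = np.corrcoef(X[idx], rowvar=False)       # correlation of the resample
        boot_eigs[b] = np.sort(np.linalg.eigvalsh(R))[::-1]
    # the whole interval exceeds 1.0 iff its lower bound does
    lower = np.percentile(boot_eigs, 100 * alpha / 2, axis=0)
    return int(np.sum(lower > 1.0))
```

Because the decision uses the lower confidence bound rather than the point estimate, this variant is more conservative than the plain Kaiser rule: an eigenvalue hovering just above 1.0 will generally not be retained.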

How to measure exploratory factor analysis