The developers of the Wechsler Intelligence Scale for Children—Fifth Edition (WISC-V; Wechsler, 2014a) stated that they not only used current cognitive, intellectual, and neuropsychological theories (Carroll, 1993, 2003; Cattell & Horn, 1978; Horn, 1991; Horn & Blankson, 2012; Horn & Cattell, 1966; McCloskey, Whitaker, Murphy, & Rogers, 2012; Miller & Maricle, 2012) to guide its creation, but also retained its longstanding linkage to Spearman’s (1904) notion of general intelligence. Evidence of structural validity was established via confirmatory factor analyses (CFA) and reported in the WISC-V Technical and Interpretive Manual (Wechsler, 2014b), which included the specification of higher-order factor models with a single second-order general intelligence (g) factor indirectly influencing subtests through five first-order factors. However, scholars have raised a number of concerns regarding that structure (Canivez & Watkins, in press; Canivez, Watkins, & Dombrowski, 2015). Canivez and Watkins (in press) and Canivez, Watkins, James, James, and Good (2014) noted that there was insufficient detail in describing how the factors were defined and why weighted least squares (WLS) estimation was used. For example, WLS estimation is typically used with categorical or non-normal data, requires much larger sample sizes, and can lead to model misspecification more readily than maximum likelihood estimation (Hu, Bentler, & Kano, 1992; Olsson, Foss, Troye, & Howell, 2000; Yuan & Chan, 2005). Canivez and colleagues also indicated that the preferred CFA model abandoned the parsimony of simple structure by allowing cross-loadings of the Arithmetic subtest. Further, there was a standardized path coefficient of 1.00 between the higher-order general intelligence factor and the first-order Fluid Reasoning (FR) factor, suggesting that g and FR were empirically redundant (Le, Schmidt, Harter, & Lauver, 2010). Canivez et al. 
also expressed concern about the use of chi-square difference tests of nested models to identify the five-factor model because this approach has been shown to be misleading when the base model is misspecified (Yuan & Bentler, 2004) and is overly powerful with large sample sizes (Millsap, 2007). There are five additional issues with the test publisher’s approach to documenting the WISC-V structure. First, the test publisher did not examine rival models, such as a bifactor model. Bifactor models are sometimes preferred over higher-order models (Canivez, in press; Reise, 2012) and have been recommended for tests of cognitive ability because they allow for partitioning of general and group factor variance (Beaujean, Parkin, & Parker, 2014; Canivez, 2014b; Canivez et al., 2015, 2014; Carroll, 1997; Gignac, 2005, 2006; Gignac & Watkins, 2013; Nelson, Canivez, & Watkins, 2013; Watkins, 2010; Watkins & Beaujean, 2014; Watkins, Canivez, James, James, & Good, 2013; Brunner, Nagy, & Wilhelm, 2012) and are more in line with Carroll’s three-stratum theory of cognitive ability (Beaujean, 2015). This inclusion would aid clinicians and researchers in determining the interpretability of group factors (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 2014; Gustafsson & Aberg-Bengtsson, 2010). Second, model-based reliability estimates including omega-hierarchical (ωh) and omega-subscale (ωs) (Gignac & Watkins, 2013; Reise, 2012; Reise, Bonifay, & Haviland, 2013; Shrout & Lane, 2012; Zinbarg, Revelle, Yovel, & Li, 2005, 2009) were not included in the Technical and Interpretive Manual. Several researchers (e.g. 
Canivez, 2010; Canivez, 2014a; Canivez & Kush, 2013) as well as the Standards for Educational and Psychological Testing (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 2014), have emphasized the need for these statistics in IQ test manuals that recommend the interpretation of subscores. Along with measures of the total and common variance attributable to general and group/specific factors, ω estimates can aid in determining how much interpretive emphasis should be placed upon scores designed to measure primary and secondary factors. Third, the WISC-V authors did not furnish EFA results; instead they relied exclusively upon CFA procedures when providing structural validity evidence. Gorsuch (1983) and others (e.g. Brown, 2015; Carroll, 1993; Reise, 2012) indicated that EFA and CFA are complementary, and test users can have greater confidence in an instrument’s structure when both procedures are in agreement, particularly when an instrument has been revised and reformulated. For instance, elimination of the Word Reasoning and Picture Completion subtests and addition of the Visual Puzzles, Figure Weights, and Picture Span subtests could have caused unexpected changes to the WISC-V factor structure that would benefit from EFA prior to the use of CFA (Strauss, Spreen, & Hunter, 2000). Fourth, previous independent investigations of intelligence test factor structures using EFA methods have produced divergent results from those offered by CFA-based models of extant IQ subtests (e.g. Canivez, 2008; Canivez & Watkins, 2010a, 2010b; DiStefano & Dombrowski, 2006; Dombrowski, 2013; Dombrowski, 2014a, 2014b; Dombrowski & Watkins, 2013; Dombrowski, Watkins, & Brogan, 2009; Watkins, 2006). In fact, some researchers contend that present day IQ tests are overfactored (Frazier & Youngstrom, 2007). Finally, Canivez et al. 
(2015, 2014) recently subjected the WISC-V total sample correlation matrix to EFA using the Schmid–Leiman (SL) procedure. The SL procedure mathematically transforms a second-order factor solution into an orthogonal first-order structure where general and group factors both directly influence indicator variables. Schmid and Leiman (1957) argued that this process “preserves the desired characteristics of the oblique solution” and “discloses the hierarchical structure of the variables” (p. 53). Carroll (1995) also emphasized that orthogonal factors are appropriate only when produced in the context of a Schmid–Leiman solution. Canivez et al.’s SL analysis resulted in a four-factor solution where the fluid reasoning and visual spatial subtests combined to form the WISC-IV’s previously identified perceptual reasoning factor. Additionally, their results revealed the preeminence of the higher-order g factor and prompted them to recommend that primary interpretive emphasis should be placed on the FSIQ with possible secondary interpretive emphasis on the processing speed index score. Although useful, the SL procedure is simply a re-parameterization of the higher-order model to show how the measured variables relate to the second-order factor and residualized versions of the first-order factors. As with higher-order models in general, loading values from the SL transformation may be biased if there are cross-loadings (Reise, 2012). Likewise, the loadings of all measured variables on a group factor are constrained to be proportional (Schmiedek & Li, 2004). Given these issues, Jennrich and Bentler (2011) developed an alternative to the SL procedure for EFA: exploratory bifactor analysis (EBFA). They described EBFA as “simply exploratory factor analysis using a bi-factor rotation criterion” (p. 2). EBFA is designed to estimate loadings from bifactor models directly, which Jennrich and Bentler contend can be better than the SL transformation in some cases. 
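The SL transformation described above reduces to simple linear algebra: each indicator's general-factor loading is its indirect path through g (the first-order loading times that factor's second-order loading), and its group-factor loading is rescaled by the first-order factor's residual standard deviation. The following is a minimal sketch in Python with NumPy; the loading values are made up for illustration and are not the actual WISC-V estimates.

```python
import numpy as np

# Hypothetical first-order pattern loadings (8 subtests x 2 oblique factors)
# and second-order loadings of those factors on g -- illustrative values only.
lambda1 = np.array([
    [0.80, 0.00],
    [0.75, 0.00],
    [0.70, 0.00],
    [0.65, 0.00],
    [0.00, 0.78],
    [0.00, 0.72],
    [0.00, 0.68],
    [0.00, 0.60],
])
gamma = np.array([0.85, 0.75])  # first-order factors' loadings on g

# Schmid-Leiman: general loadings are the indirect paths through g;
# group loadings are scaled by each factor's residual standard deviation.
g_loadings = lambda1 @ gamma             # subtest -> g
residual_sd = np.sqrt(1.0 - gamma ** 2)  # sqrt of first-order factor uniqueness
group_loadings = lambda1 * residual_sd   # subtest -> residualized group factors

sl_matrix = np.column_stack([g_loadings, group_loadings])
```

Because (λΓ)² + λ²(1 − Γ²) = λ², each subtest's communality is preserved by the transformation, which is why the SL solution is a re-parameterization of the higher-order model rather than a different model.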
The only independently published article comparing the two procedures on cognitive ability data, however, found consistent results between EBFA and the SL procedure (Dombrowski, 2014b).
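The model-based reliability estimates discussed earlier (ωh and ωs) follow directly from an orthogonalized loading matrix of the kind these procedures produce: ωh is the proportion of a composite's variance attributable to the general factor, and ωs is the analogous group-factor proportion for a subscale, with the general factor removed from the numerator. The sketch below uses hypothetical bifactor loadings (column 0 = g, columns 1–2 = group factors), not actual WISC-V results.

```python
import numpy as np

# Hypothetical orthogonal bifactor loadings: col 0 = general factor,
# cols 1-2 = group factors. Illustrative values only.
loadings = np.array([
    [0.68, 0.42, 0.00],
    [0.64, 0.40, 0.00],
    [0.60, 0.37, 0.00],
    [0.55, 0.34, 0.00],
    [0.59, 0.00, 0.52],
    [0.54, 0.00, 0.48],
    [0.51, 0.00, 0.45],
    [0.45, 0.00, 0.40],
])
uniqueness = 1.0 - (loadings ** 2).sum(axis=1)

def omega_hierarchical(L, u):
    """omega-h: general-factor variance over total composite variance."""
    total_var = (L.sum(axis=0) ** 2).sum() + u.sum()
    return L[:, 0].sum() ** 2 / total_var

def omega_subscale(L, u, items, factor):
    """omega-s: group-factor variance in a subscale composite, g excluded
    from the numerator but included in the denominator."""
    Ls, us = L[items], u[items]
    total_var = (Ls.sum(axis=0) ** 2).sum() + us.sum()
    return Ls[:, factor].sum() ** 2 / total_var
```

With loadings of this typical shape, ωh for the total composite is substantially larger than ωs for either subscale, which is the pattern that leads researchers such as Canivez et al. to recommend primary interpretive emphasis on the general-factor composite.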