
Article Detail

引入眼动注视点的联合-交叉负载多模态认知诊断建模

Joint-Cross-Loading Multimodal Cognitive Diagnostic Modeling Incorporating Visual Fixation Counts

Abstract

Multimodal data make it possible to diagnose cognitive structures precisely and to provide comprehensive feedback on other cognitive characteristics (e.g., cognitive style). To enable joint analysis of item response accuracy, response time (RT), and visual fixation counts (FC), this paper proposes three multimodal cognitive diagnosis models based on the joint-cross-loading modeling approach. The results of an empirical study and simulation studies show that (1) joint analysis is more suitable for multimodal data than separate analysis; (2) the new models can directly use the information in RT and FC data to improve the estimation accuracy of latent ability or latent attributes; (3) the parameter recovery of the new models is good; and (4) the adverse consequences of ignoring cross-loadings are more severe than those of redundantly considering them.
[English Abstract] Students' observed behavior (e.g., learning behavior and problem-solving behavior) comprises activities that reflect complicated cognitive processes and latent conceptions that are frequently systematically related to one another. Cognitive characteristics such as cognitive styles and fluency may differ between students with the same cognitive/knowledge structure. However, practically all current cognitive diagnosis models (CDMs), which analyze only item response accuracy (RA) data, are incapable of estimating or inferring individual differences in such cognitive traits. With advances in technology-enhanced assessments, it is now possible to capture multimodal data, such as outcome data (e.g., response accuracy), process data (e.g., response times (RTs)), and biometric data (e.g., visual fixation counts (FCs)), automatically and simultaneously during problem-solving activity. Multimodal data allow for precise cognitive structure diagnosis as well as comprehensive feedback on various cognitive characteristics. First, using joint analysis of RA, RT, and FC data as an example, this study elaborated three multimodal data analysis methods and their models: separate modeling (denoted S-MCDM), joint-hierarchical modeling (denoted H-MCDM) (Zhan et al., 2021), and joint-cross-loading modeling (denoted C-MCDM). Then, three C-MCDMs with distinct hypotheses were presented based on joint-cross-loading modeling, namely the C-MCDM-θ, C-MCDM-D, and C-MCDM-C. Compared with the H-MCDM, the three C-MCDMs introduce two item-level weight parameters (i.e., φ_i and λ_i) into the RT and FC measurement models, respectively, to quantify the impact of latent ability or latent attributes on RT and FC. The Markov chain Monte Carlo method was used to estimate model parameters in a full Bayesian approach.
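The page does not give the measurement equations, so the following is only a plausible sketch of the cross-loading idea in the C-MCDM-θ, assuming a lognormal RT model and an analogous lognormal FC model. Only θ_p (latent ability), latent processing speed, latent concentration, and the weights φ_i and λ_i come from the abstract; β_i, δ_i, and the variance terms are assumed item parameters.

```latex
% Hedged sketch; the exact parameterization is not stated on this page.
% \theta_p: latent ability; \tau_p: latent processing speed;
% \kappa_p: latent concentration; \varphi_i, \lambda_i: item-level
% cross-loading weights; \beta_i, \delta_i, \sigma^2_{Ti}, \sigma^2_{Fi}:
% assumed item intensity and dispersion parameters.
\log T_{pi} \sim \mathcal{N}\!\left(\beta_i - \tau_p - \varphi_i\,\theta_p,\ \sigma^2_{Ti}\right),
\qquad
\log F_{pi} \sim \mathcal{N}\!\left(\delta_i - \kappa_p + \lambda_i\,\theta_p,\ \sigma^2_{Fi}\right).
```

Under these sign conventions, a negative φ_i makes higher-ability examinees slower on item i, and a positive λ_i gives them more fixations, consistent with the empirical pattern the abstract reports.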
To illustrate the three proposed models' application and compare them with the S-MCDM and H-MCDM, multimodal data from a real-world mathematics test were used. The data were gathered in an eye-tracking lab at a prominent university on the East Coast of the United States. A test of I = 10 mathematics items was given to N = 93 university students with normal or corrected vision. The test measured K = 4 attributes, and the related Q-matrix is shown in Figure 3. The data comprise three modalities, RA, RT, and FC, all collected at the same time, and were fitted with all five multimodal models. In addition, two simulation studies were conducted to further explore the psychometric performance of the proposed models. Simulation study 1 explored whether the parameter estimates of the proposed models converge effectively and how well the parameters are recovered under different simulated test situations. Simulation study 2 explored the relative merits of the C-MCDMs and the H-MCDM, that is, the necessity of considering cross-loading in multimodal data analysis. The results of the empirical study showed that (1) the C-MCDM-θ had the best model-data fit, followed by the H-MCDM and the S-MCDM; although the DIC indicated that the C-MCDM-D and C-MCDM-C also fitted the data well, those results are only for reference because some parameter estimates in these two models did not converge; and that (2) the correlation between latent ability and latent processing speed and that between latent ability and latent concentration were weak, making it difficult to fully exploit the theoretical advantages of the H-MCDM over the S-MCDM (Ranger, 2013).
By contrast, since the C-MCDM-θ can directly utilize the information in the RT and FC data, the standard errors of its latent ability estimates were significantly lower than those of the two competing models; and (3) the median of the estimates of φ_i was less than 0, indicating that for most items, the higher a participant's latent ability, the longer it took to solve the item; the median of the estimates of λ_i was greater than 0, indicating that for most items, the higher a participant's latent ability, the more fixation counts he or she showed during problem solving. It should also be noted that the estimates of φ_i and λ_i do not always have the same sign across items, indicating that the influence of latent ability on RT and FC can run in different directions (i.e., facilitation or inhibition) for different items. Furthermore, simulation study 1 indicated that the parameter estimation of the three proposed models converged effectively and that model parameters were recovered well under different simulated test situations. Simulation study 2 indicated that the adverse effects of ignoring possible cross-loadings are more severe than those of redundantly considering them. Overall, the results of this study indicate that (1) fusion analysis is more suitable than separate analysis for multimodal data that provide parallel information; (2) through cross-loading, the proposed models can directly use information from RT and FC data to improve the estimation accuracy of latent ability or latent attributes; (3) the results of the proposed models can be used to diagnose cognitive structure and to infer other cognitive characteristics such as cognitive styles and fluency; and (4) the proposed models are more compatible with different test situations than the H-MCDM.
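The cross-loading mechanism can be illustrated with a small simulation. This is a hedged sketch, not the authors' code: the lognormal forms and every numeric value below are assumptions; only latent ability (θ), latent processing speed, latent concentration, and the item-level weights φ_i and λ_i correspond to quantities named in the abstract.

```python
# Hedged simulation sketch: item-level cross-loading weights let RT and FC
# carry information about latent ability. All distributions and values are
# illustrative assumptions, not the authors' specification.
import numpy as np

rng = np.random.default_rng(2021)
N, I = 2000, 10                                 # persons, items

theta = rng.normal(0.0, 1.0, N)                 # latent ability
tau   = rng.normal(0.0, 0.5, N)                 # latent processing speed
kappa = rng.normal(0.0, 0.5, N)                 # latent concentration

beta  = rng.normal(0.0, 0.3, I)                 # item time intensity (assumed)
delta = rng.normal(1.0, 0.3, I)                 # item fixation intensity (assumed)
phi   = -np.abs(rng.normal(0.25, 0.05, I))      # cross-loadings on log-RT (< 0)
lam   =  np.abs(rng.normal(0.25, 0.05, I))      # cross-loadings on log-FC (> 0)

# Cross-loading measurement parts: with phi_i < 0 higher ability lengthens
# log-RT, and with lambda_i > 0 it raises log-FC, mirroring the reported
# empirical pattern for most items.
log_rt = beta[None, :] - tau[:, None] - phi[None, :] * theta[:, None] \
         + rng.normal(0.0, 0.3, (N, I))
log_fc = delta[None, :] - kappa[:, None] + lam[None, :] * theta[:, None] \
         + rng.normal(0.0, 0.3, (N, I))

# Ability correlates positively with both log-RT and log-FC on each item here;
# this is the extra information a joint cross-loading model can exploit.
r_rt = np.corrcoef(theta, log_rt[:, 0])[0, 1]
r_fc = np.corrcoef(theta, log_fc[:, 0])[0, 1]
```

In a separate analysis these correlations are discarded, whereas a joint cross-loading model uses them to sharpen the latent ability estimates, which is the abstract's point (2).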
From: 詹沛达
DOI:10.12074/202106.00029
Recommended reference: 詹沛达. (2021). 引入眼动注视点的联合-交叉负载多模态认知诊断建模. [ChinaXiv:202106.00029]
Version History
[V3] 2021-11-30 09:31:51 chinaXiv:202106.00029V3
[V2] 2021-06-15 11:54:18 chinaXiv:202106.00029v2
[V1] 2021-06-08 23:07:59 chinaXiv:202106.00029v1
