Search results for author: 张伟霞
  • Prediction of depression onset and development based on network analysis

    Subjects: Other Disciplines >> Synthetic discipline | Submitted: 2023-10-09 | Cooperative journal: 《心理科学进展》

    Abstract: Depression is a public health issue that urgently needs to be addressed in modern society, and prevention is one of the most effective ways to tackle it. The key to effective prevention is to accurately identify individuals at risk of depression, capture warning signals of changes in depressive states, and take preventive measures in a timely manner. Traditional common cause models treat depression as a latent factor that gives rise to its various symptom manifestations, neglecting the dynamic relationships among the symptoms themselves. From the perspective of complex systems, depression is a network of multiple symptoms that interact with one another, and the structural and dynamic characteristics of this network can provide new theoretical perspectives and measurable indicators for predicting the occurrence and evolution of depression. Structural characteristics refer to the topological properties of the symptom network, while dynamic characteristics refer to the patterns the network system exhibits as it evolves. Starting from the key issue of predicting the occurrence and evolution of depression, this paper discusses the relationship between symptom networks and depression from a theoretical perspective, and further examines how well topological features of depression symptom networks and indicators related to critical phenomena predict depression onset and sudden transitions. With respect to structural features, connectivity, density, centrality, and hubs are able to predict the onset of depression. With respect to dynamic features, the presence of critical slowing down and critical fluctuations provides a basis for predicting phase transitions of the depression system. However, current research faces several unresolved problems. (1) In the construction of symptom networks, the selection of node content is relatively narrow, often limited to emotional symptoms, while other manifestations of depression are ignored. (2) The biggest challenge for research on critical phenomena in depression is that the appearance of the relevant indicators is not synchronized with changes in symptoms; that is, there is no clear mapping between clinical manifestations and warning indicators. In empirical research, the relationship between critical phenomena and phase transitions in the depression system is complex: the occurrence of critical phenomena does not necessarily mean that a phase transition will occur, and a phase transition is not necessarily accompanied by critical phenomena. (3) The dynamics of the system include self-dynamics and interaction dynamics, and dynamic analysis should build on network structure analysis. In current empirical research, however, predictions based on network structure and predictions based on critical phenomena are artificially separated, and no studies have grounded dynamic analysis in network structure analysis. To increase the accuracy of early warning signals in predicting depression, future research should construct more systematic and comprehensive networks, and optimize the determination of depression states by using integrated or machine learning-based warning indicators.
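
    The indicators named in this abstract are straightforward to compute once repeated symptom measurements are available. Below is a minimal Python sketch of how the structural indicators (density, strength centrality, hubs) and the dynamic early-warning indicators associated with critical slowing down (rolling lag-1 autocorrelation and variance) could be derived. The symptom names, the correlation-based network estimator, and the edge threshold are illustrative assumptions, not the methods of any study cited in this abstract.

      import numpy as np
      import pandas as pd
      import networkx as nx

      rng = np.random.default_rng(0)
      symptoms = ["sad_mood", "anhedonia", "fatigue", "insomnia", "worry"]
      # Placeholder data: 120 daily self-ratings of five symptoms (0-100 scale).
      data = pd.DataFrame(rng.normal(50, 10, size=(120, 5)), columns=symptoms)

      # Structural features: build an undirected network from pairwise correlations.
      corr = data.corr().to_numpy()
      G = nx.Graph()
      G.add_nodes_from(symptoms)
      for i in range(len(symptoms)):
          for j in range(i + 1, len(symptoms)):
              if abs(corr[i, j]) > 0.1:            # arbitrary threshold for the sketch
                  G.add_edge(symptoms[i], symptoms[j], weight=abs(corr[i, j]))

      density = nx.density(G)                      # overall connectivity of the network
      strength = dict(G.degree(weight="weight"))   # node strength as a centrality proxy
      hub = max(strength, key=strength.get)        # the most strongly connected symptom
      print(f"density = {density:.2f}, hub symptom = {hub}")

      # Dynamic features: rolling early-warning indicators for a single symptom.
      window = 30
      x = data["sad_mood"]
      rolling_var = x.rolling(window).var()        # rising variance ~ critical fluctuations
      rolling_ac1 = x.rolling(window).apply(       # rising lag-1 autocorrelation
          lambda w: pd.Series(w).autocorr(lag=1))  # ~ critical slowing down
      print(rolling_var.iloc[-1].round(2), rolling_ac1.iloc[-1].round(2))

    In practice, a sustained upward trend in the rolling variance and autocorrelation, rather than any single value, would be read as a warning signal of an approaching transition.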

  • An ERP study of emotion processing in vocal and instrumental music

    Subjects: Psychology >> Developmental Psychology | Submitted: 2023-03-28 | Cooperative journal: 《心理科学进展》

    Abstract: The event-related potential (ERP) technique was used to investigate whether there are different neural responses to musical emotion when the same melodies are presented in a vocal timbre and in an instrumental timbre such as the violin. With a cross-modal affective priming paradigm, target faces were primed by affectively congruent or incongruent vocal and instrumental music. Participants were asked to judge whether each prime-target pair was affectively congruent or incongruent. The results revealed a larger late positive component (LPC) in the 473~677 ms time window in response to affectively incongruent versus congruent trials in the vocal version, whereas a larger N400 effect in the 281~471 ms time window was observed in the instrumental version. These results indicate differential patterns of neurophysiological responses to the emotion processing of vocal and instrumental music.
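
    As an illustration of the kind of analysis behind such time-window effects, the sketch below compares mean ERP amplitudes between congruent and incongruent trials in the LPC window reported for the vocal version (473~677 ms). The data arrays, channel selection, and sampling rate are placeholder assumptions; this is not the analysis pipeline of the study above.

      import numpy as np
      from scipy.stats import ttest_rel

      fs = 500                                  # assumed sampling rate (Hz)
      times = np.arange(-0.2, 0.8, 1 / fs)      # epoch from -200 to 800 ms around target onset
      n_subj, n_chan = 20, 32                   # assumed participants and electrodes

      rng = np.random.default_rng(1)
      # Placeholder per-participant ERP averages: (participants, channels, time points).
      erp_congruent = rng.normal(0.0, 1.0, (n_subj, n_chan, times.size))
      erp_incongruent = rng.normal(0.3, 1.0, (n_subj, n_chan, times.size))

      # LPC window reported for the vocal version: 473~677 ms after target onset.
      win = (times >= 0.473) & (times <= 0.677)
      chans = slice(10, 16)                     # hypothetical centro-parietal channel set

      # Mean amplitude per participant, averaged over the window and channel set.
      lpc_cong = erp_congruent[:, chans, :][:, :, win].mean(axis=(1, 2))
      lpc_incong = erp_incongruent[:, chans, :][:, :, win].mean(axis=(1, 2))

      t, p = ttest_rel(lpc_incong, lpc_cong)    # paired comparison across participants
      print(f"LPC congruency effect: t({n_subj - 1}) = {t:.2f}, p = {p:.3f}")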

  • The influence of lyrics on the processing of musical emotion: a behavioural and ERP study

    Subjects: Psychology >> Social Psychology | Submitted: 2023-03-27 | Cooperative journal: 《心理学报》

    Abstract: Music and language are unique to human beings. It has been suggested that they share a common origin as an emotional protolanguage. With socialisation, language has developed into a symbolic communication system with explicit semantics, whereas music has become an important means of emotional expression. However, whether language with explicit semantics affects the emotional processing of music remains uncertain. Given that songs contain both melody and lyrics, previous behavioural studies have used songs to analyse the influence of lyrics on the processing of musical emotion; several studies have reported such an influence, but the findings are relatively contradictory. Thus, the current study used behavioural and electrophysiological measurements to investigate the impact of lyrics on the processing of musical emotion. Experiment 1 analysed whether the emotional connotations in music with and without lyrics could be perceived by listeners at the behavioural level. Experiment 2 further investigated whether there are different neural responses to emotions conveyed by melodies with and without lyrics. A cross-modal affective priming paradigm was used in Experiments 1 and 2, in which musical excerpts served as the primes and emotional faces as the targets. To avoid the impact of familiarity, 120 musical stimuli were selected from European opera. Each was sung by a vocalist both with and without lyrics, resulting in 240 musical stimuli in two versions as potential prime stimuli. A total of 160 facial expressions affectively congruent or incongruent with the preceding musical stimuli were selected as potential target stimuli. Three pre-tests were conducted to ensure the validity of the stimuli. Eventually, 60 musical stimuli per music version were selected as primes and 120 images as targets, resulting in 240 music-image pairs. To ensure that each stimulus appeared only once for each participant, two lists were prepared using a Latin square design (a minimal counterbalancing sketch follows this abstract). Each prime and target was presented in either the congruent or the incongruent condition within each list; thus, each list comprised 120 trials, with 30 trials per condition. During the experiment, the two lists were equally distributed across participants. A total of 40 healthy adults participated in Experiment 1. They were asked to judge as quickly and accurately as possible whether the emotion of the target was happy or sad, and accuracy and reaction time were recorded. A further 20 healthy adults participated in Experiment 2. They were required to judge whether the emotion of the music and the image was congruent or incongruent whilst their EEG was recorded. ERPs were analysed and compared between conditions in the 250~450 ms and 500~700 ms time windows after target onset. The results of Experiment 1 showed that when faces were primed by music either with or without lyrics, participants responded faster and more accurately in the affectively congruent condition than in the incongruent condition, indicating that the emotional connotations of music with and without lyrics could both be perceived. The ERP results of Experiment 2 showed that distinct neural mechanisms were activated by music with and without lyrics. Specifically, when faces were primed by music without lyrics, a larger N400 was elicited in response to affectively incongruent than congruent pairs in the 250~450 ms time window; however, when faces were primed by music with lyrics, a more positive LPC was observed in response to affectively incongruent than congruent pairs at 500~700 ms. This finding confirms the results of Experiment 1, suggesting that the emotion conveyed by music with and without lyrics could be perceived by listeners. Moreover, the emotional processing of music with and without lyrics differs in the time course of neural processing; that is, the emotional processing of music with lyrics lagged behind that of music without lyrics. In conclusion, the present results suggest that the neural processing of emotional connotations in music without lyrics preceded that of music with lyrics, although the emotional connotations conveyed by both could be perceived. These findings also support views from the philosophy of music which hold that music without lyrics can express emotion more immediately and directly than music with lyrics, owing to the lack of "translation" from the propositional system. On the other hand, considering that lyrics influenced the time course of emotional processing, our results also provide evidence that the emotional processing of music and language may share neural resources to some extent.
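
    The list construction described in the abstract can be expressed compactly in code. The following is a minimal sketch, assuming 60 primes per music version and one congruent plus one incongruent face pairing per prime; the identifiers are placeholders rather than the actual stimulus set.

      # Two counterbalanced lists: each prime appears once per list, and each list
      # contains 30 congruent and 30 incongruent trials per music version.
      versions = ["with_lyrics", "without_lyrics"]
      primes = {v: [f"{v}_prime_{i:02d}" for i in range(60)] for v in versions}

      list_a, list_b = [], []
      for version in versions:
          for i, prime in enumerate(primes[version]):
              congruent = (prime, f"{prime}_congruent_face", "congruent")
              incongruent = (prime, f"{prime}_incongruent_face", "incongruent")
              # Alternate which list receives the congruent pairing so that
              # conditions are balanced within each list across the 60 primes
              # of this music version.
              if i % 2 == 0:
                  list_a.append(congruent)
                  list_b.append(incongruent)
              else:
                  list_a.append(incongruent)
                  list_b.append(congruent)

      assert len(list_a) == len(list_b) == 120
      assert sum(t[2] == "congruent" for t in list_a) == 60   # 30 per music version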

  • An ERP study of emotion processing in vocal and instrumental music

    Subjects: Psychology >> Other Disciplines of Psychology | Submitted: 2020-03-01

    Abstract: The event-related potential (ERP) technique was used to investigate whether there are different neural responses to musical emotion when the same melodies are presented in a vocal timbre and in an instrumental timbre such as the violin. With a cross-modal affective priming paradigm, target faces were primed by affectively congruent or incongruent vocal and instrumental music. Participants were asked to judge whether each prime-target pair was affectively congruent or incongruent. The results revealed a larger late positive component (LPC) in the 473~677 ms time window in response to affectively incongruent versus congruent trials in the vocal version, whereas a larger N400 effect in the 281~471 ms time window was observed in the instrumental version. These results indicate differential patterns of neurophysiological responses to the emotion processing of vocal and instrumental music.

  • The influence of lyrics on the processing of musical emotion: a behavioural and ERP study

    Subjects: Psychology >> Other Disciplines of Psychology | Submitted: 2018-10-26 | Cooperative journal: 《心理学报》

    Abstract: This study investigated the influence of lyrics on the processing of musical emotion. Experiment 1 used an affective priming paradigm in which musical excerpts with or without lyrics served as primes, and face images whose emotion was congruent or incongruent with the music served as targets; participants were asked to judge the emotion of the target face as quickly and accurately as possible. The results showed that, regardless of whether the music carried lyrics, listeners responded faster and more accurately in the congruent condition than in the incongruent condition, indicating that listeners can process the emotional information conveyed by the music. Experiment 2 further used electrophysiological measures to examine the neural mechanisms by which lyrics influence the processing of musical emotion. The results showed that although priming effects were obtained for music both with and without lyrics, the no-lyrics condition elicited an N400 effect in the 250~450 ms time window, whereas the with-lyrics condition elicited an LPC effect in the 500~700 ms time window. These results indicate that lyrics affect the time course of the brain's processing of musical emotion. To some extent, the present findings provide evidence for exploring the relationship between music and language.