Your conditions: 张光耀
  • Does editorial board effect exist in Chinese academic journals: An empirical study based on management journals

    Subjects: Digital Publishing >> Internet Journals submitted time 2024-04-27

    Abstract: Purposes: This paper explores whether an editorial board effect exists in academic journals by analyzing the quantity and quality of articles authored by editorial board members in Chinese management journals, and aims to provide a reference for improving journal quality. Methods: Taking CSSCI source journals in management as the research object, the paper uses regression analysis and causal inference to examine the relationship between the editorial-board role of a paper's authors and the paper's influence. Findings: Editorial board members are not very active authors in their own journals, and most of them published few papers there. A paper's influence is significantly related to the editorial-board role of its authors, and the editorial board effect exists conditionally: the influence of papers on which an editorial board member is an important author is significantly higher than that of papers by non-board authors, whereas the influence of papers on which board members are not important authors is significantly lower than that of papers by non-board authors. Conclusions: It is suggested that editorial offices enhance the transparency of journal operations and formulate editorial-board submission policies to reduce the potential editorial board effect.
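
    A minimal illustrative sketch (not the authors' code) of how such a regression could be set up, assuming a hypothetical per-paper dataset with a citation-based influence measure and dummy variables for the authors' editorial-board roles; the file name, column names, and controls below are assumptions for illustration only:

      # Illustrative regression sketch in the spirit of the study's analysis:
      # paper influence regressed on editorial-board authorship dummies.
      # File and column names are hypothetical placeholders.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      papers = pd.read_csv("management_journal_papers.csv")  # hypothetical input

      # Log-transform citation counts to reduce skew (a common choice, not necessarily the authors').
      papers["log_cites"] = np.log1p(papers["citations"])

      # Dummies: board member as an important (e.g., first/corresponding) author,
      # board member in any other author role; non-board authors are the baseline.
      model = smf.ols(
          "log_cites ~ board_important_author + board_other_author + C(year) + C(journal)",
          data=papers,
      ).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

      print(model.summary())

    Under this hypothetical specification, the conditional editorial board effect described in the Findings would show up as a positive coefficient on board_important_author and a negative coefficient on board_other_author.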

  • 基于词嵌入技术的心理学研究:方法及应用 (Using word embeddings to investigate human psychology: Methods and applications)

    Subjects: Psychology >> Social Psychology submitted time 2023-03-28 Cooperative journals: 《心理科学进展》

    Abstract: As a fundamental technique in natural language processing (NLP), word embedding quantifies a word as a low-dimensional, dense, and continuous numeric vector (i.e., word vector). This process is based on machine learning algorithms such as neural networks, through which semantic features of a word can be extracted automatically. There are two types of word embeddings: static and dynamic. Static word embeddings aggregate all contextual information of a word in an entire corpus into a fixed vectorized representation. Static word embeddings can be obtained by predicting the surrounding words given a word or vice versa (Word2Vec and FastText) or by predicting the probability of co-occurrence of multiple words (GloVe) in large-scale text corpora. Dynamic or contextualized word embeddings, in contrast, derive a word vector based on a specific context, which can be generated through pre-trained language models such as ELMo, GPT, and BERT. Theoretically, the dimensions of a word vector reflect the pattern of how the word can be predicted in contexts; however, they also connote substantial semantic information of the word. Therefore, word embeddings can be used to analyze semantic meanings of text. In recent years, word embeddings have been increasingly applied to study human psychology. To this end, word embeddings have been used in various ways, including the raw vectors of word embeddings, vector sums or differences, and absolute or relative semantic similarity and distance. So far, the Word Embedding Association Test (WEAT) has received the most attention. Based on word embeddings, psychologists have explored a wide range of topics, including human semantic processing, cognitive judgment, divergent thinking, social biases and stereotypes, and sociocultural changes at the societal or population level. In particular, the WEAT has been widely used to investigate attitudes, stereotypes, social biases, and the relationship between culture and psychology, as well as their origin, development, and cross-temporal changes. As a novel methodology, word embeddings offer several unique advantages over traditional approaches in psychology, including lower research costs, higher sample representativeness, stronger objectivity of analysis, and more replicable results. Nonetheless, word embeddings also have limitations, such as their inability to capture deeper psychological processes, limited generalizability of conclusions, and dubious reliability and validity. Future research using word embeddings should address these limitations by (1) distinguishing between implicit and explicit components of social cognition, (2) training fine-grained word vectors in terms of time and region to facilitate cross-temporal and cross-cultural research, and (3) applying contextualized word embeddings and large pre-trained language models such as GPT and BERT. To enhance the application of word embeddings in psychological research, we have developed the R package “PsychWordVec”, an integrated word embedding toolkit for researchers to study human psychology in natural language.
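
    Because the abstract singles out the Word Embedding Association Test (WEAT), a minimal sketch of the commonly used WEAT effect size, computed from cosine similarities over pretrained GloVe vectors, may help; the model name and the abbreviated word lists below are illustrative assumptions, not materials from the paper:

      # Minimal WEAT sketch: standardized difference of mean cosine-similarity
      # associations between two target sets (X, Y) and two attribute sets (A, B).
      import numpy as np
      import gensim.downloader as api

      vecs = api.load("glove-wiki-gigaword-50")  # small pretrained GloVe model

      def cos(a, b):
          return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

      def assoc(w, A, B):
          # Differential association of word w with attribute sets A and B.
          return (np.mean([cos(vecs[w], vecs[a]) for a in A])
                  - np.mean([cos(vecs[w], vecs[b]) for b in B]))

      def weat_effect_size(X, Y, A, B):
          x_assoc = [assoc(x, A, B) for x in X]
          y_assoc = [assoc(y, A, B) for y in Y]
          return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc, ddof=1)

      # Classic flowers/insects vs. pleasant/unpleasant test (abbreviated word lists).
      X = ["rose", "tulip", "daisy", "lily"]
      Y = ["ant", "spider", "wasp", "moth"]
      A = ["love", "peace", "pleasure", "happy"]
      B = ["hatred", "pain", "ugly", "awful"]
      print(weat_effect_size(X, Y, A, B))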

  • Using word embeddings to investigate human psychology: Methods and applications

    Subjects: Psychology >> Social Psychology; Psychology >> Cognitive Psychology; Psychology >> Psychological Measurement; Computer Science >> Natural Language Understanding and Machine Translation submitted time 2023-01-30

    Abstract: As a basic technique in natural language processing (NLP), word embedding represents a word with a low-dimensional, dense, and continuous numeric vector (i.e., word vector). Word embeddings can be obtained by using neural network algorithms to predict words from the surrounding words or vice versa (Word2Vec and FastText) or words’ probability of co-occurrence (GloVe) in large-scale text corpora. In this case, the values of dimensions of a word vector denote the pattern of how a word can be predicted in a context, substantially connoting its semantic information. Therefore, word embeddings can be utilized for semantic analyses of text. In recent years, word embeddings have been rapidly employed to study human psychology, including human semantic processing, cognitive judgment, individual divergent thinking (creativity), group-level social cognition, sociocultural changes, and so forth. We have developed the R package “PsychWordVec” to help researchers utilize and analyze word embeddings in a tidy approach. Future research using word embeddings should (1) distinguish between implicit and explicit components of social cognition, (2) train fine-grained word vectors in terms of time and region to facilitate cross-temporal and cross-cultural research, and (3) deepen and expand the application of contextualized word embeddings and large pre-trained language models such as GPT and BERT.
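
    As a small, self-contained illustration of the "predict words from the surrounding words" idea behind static embeddings such as Word2Vec, here is a toy sketch using the gensim library; the mini-corpus and hyperparameters are made up for demonstration and are far too small for real analyses, which rely on large corpora or published pretrained vectors:

      # Toy sketch: training skip-gram Word2Vec embeddings and querying similarity.
      from gensim.models import Word2Vec

      corpus = [  # illustrative mini-corpus; real studies use large text corpora
          ["participants", "rated", "the", "faces", "as", "trustworthy"],
          ["participants", "judged", "the", "faces", "as", "competent"],
          ["the", "survey", "measured", "attitudes", "and", "stereotypes"],
      ]

      model = Word2Vec(
          sentences=corpus,
          vector_size=50,   # dimensionality of the word vectors
          window=3,         # context window size
          min_count=1,      # keep every word in this toy corpus
          sg=1,             # 1 = skip-gram (predict context from target word); 0 = CBOW
          epochs=50,
      )

      vec = model.wv["faces"]                                  # raw word vector
      print(model.wv.similarity("trustworthy", "competent"))   # cosine similarity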