Management Review ›› 2023, Vol. 35 ›› Issue (9): 142-154.

• E-business and Information Management •

Measuring Task Complexity in Tournament-based Crowdsourcing: A Topic Modeling Approach

Liu Zhongzhi1,2, Zhao Ming1   

  1. Dongwu Business School, Soochow University, Suzhou 215021;
    2. Research Center for Smarter Supply Chain, Soochow University, Suzhou 215021
  • Received: 2021-10-06  Online: 2023-09-28  Published: 2023-10-31

Abstract: In tournament-based crowdsourcing, task complexity significantly affects the crowd size of a contest and the quality of solutions. The existing literature mainly relies on participants' behaviors as an indirect measure of task complexity. Such a measure therefore contains much subjective information, which leads to measurement errors and inconsistent research conclusions. Lacking an objective and effective measure, firms and participants find it difficult to match task complexity with other contest parameters and personal characteristics. Effectively measuring task complexity thus remains a challenging problem in empirical crowdsourcing research. This study takes 3,205 task samples from the Topcoder platform and applies Latent Dirichlet Allocation (LDA) to extract 38 topics, then constructs three objective task complexity measures from the corresponding topic characteristics. Factor analysis and negative binomial regression are used to refine and validate the three measures. The study finds that the topic-modeling-based measures exhibit acceptable discriminant validity against other measures and reliable convergent validity among themselves. Regression results show that technology modules, contest readability, modularity, and dynamic complexity negatively affect crowd size (i.e., registrants and submitters), which is consistent with theoretical predictions. Moreover, this study validates the measures against domain experts' ratings, showing a strongly consistent relationship. This paper provides an automated method for the challenging problem of measuring task complexity. It not only broadens crowdsourcing research on task characteristics and participant behavior, but also offers platform managers a novel perspective for conducting multi-level analyses of task complexity and optimizing resource allocation between task complexity and other task characteristics.
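The core of the method is fitting LDA to task descriptions and reading off each task's topic distribution, from which topic-based complexity measures can be derived. The following is a minimal illustrative sketch using scikit-learn on a toy corpus; the paper's actual preprocessing, corpus of 3,205 Topcoder tasks, 38-topic setting, and the precise definitions of its three complexity measures are not specified in this abstract, so everything here is an assumed stand-in.

```python
# Illustrative sketch only: LDA topic extraction on toy task descriptions.
# The authors' actual pipeline, preprocessing, and hyperparameters are not
# given in the abstract; this simply shows the general technique.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy task descriptions standing in for the 3,205 Topcoder contest texts.
docs = [
    "design a rest api module for payment processing",
    "fix a ui bug in the mobile dashboard layout",
    "implement a machine learning model for fraud detection",
    "refactor the database schema and migration scripts",
]

# Bag-of-words representation of the task texts.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# Fit LDA. The paper extracts 38 topics; 2 suffice for this toy corpus.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # per-document topic distributions

# Each row is a task's probability mixture over topics (rows sum to 1);
# topic-based complexity measures can then be built from these mixtures.
print(doc_topics.shape)  # (4, 2)
```

From such per-task topic distributions one could, for example, compute a diversity score (how spread a task is across topics) as one candidate complexity proxy, before refining candidates with factor analysis as the paper does.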

Key words: topic model, tournament-based crowdsourcing, task complexity, text mining, machine learning