Abstract
When selecting workers on microtask crowdsourcing platforms, requesters identify qualified workers by examining their past evaluation results or by administering qualifying tests, and thus choose workers whose skill levels exceed some threshold. This practice can limit the number of workers who perform the tasks, which negatively affects both requesters and workers. In this paper, we explore an approach to increasing work opportunities for more workers by computing task assignments based on the estimated skill levels of workers and the difficulty levels of tasks. We report the results of a preliminary experiment and discuss the potential and limitations of this approach.