Abstract
Our requirements research at City University London has used a range of empirical methods. Although we have undertaken experiments, we have done so only rarely, when seeking to collect codified expert knowledge from requirements practitioners. Notable examples include using formal card sorts with experienced requirements engineers to validate domain abstractions designed to support the reuse of domain knowledge, and using novelty rankings of requirements to validate creativity tools designed to support requirements work. However, most of the time, experiments in requirements research are inappropriate, a comment that applies to much empirical research in requirements engineering. In contrast, much of our empirical requirements research has used action research to undertake case studies that we seek to repeat and from which we draw lessons learned. There are several reasons for this. One is the need to demonstrate the scalability of the technique or tool being applied. Another is that such lessons often yield new research directions that more constrained experimental findings simply do not. And a third is, well, that most other empirical methods simply do not apply. Examples of this research include the rollout and evaluation of creativity and scenario-based requirements techniques in a sequence of major European air traffic management systems, and the investigation of a large-scale goal modelling technique applied to projects ranging from food information traceability to health protection. Our research has sought to use industry-as-lab rather than classroom-as-lab. More difficult, granted, but far more valuable to our research and to the requirements engineering community. In this panel I will urge requirements researchers to take up this challenge rather than follow the path of least resistance.