EASE 2000 Conference Paper Extracts

 

Practitioners' Views on the Use of Formal Methods: An Industrial Survey by Structured Interview

Colin Snook & Rachel Harrison

There is a recognised deficiency in the level of empirical investigation of software engineering methods. This deficiency is particularly acute in the area of formal methods, where reports about the usefulness of such methods vary widely. We interviewed several experienced formal method users about many aspects of software engineering including the effects on the company, its products and its development processes as well as pragmatic issues such as scalability, understandability and tool support. The interviews are a first stage of empirical assessment. Future work will investigate some of the issues raised using formal experimentation and case studies.

 

Fault Content Estimations using Extended Curve Fitting Models and Model Selection

Thomas Thelin & Per Runeson

Estimations of fault content after inspection of a software artefact constitute a basis for monitoring and controlling the quality of the product throughout the software development process. Two estimation techniques, capture-recapture and curve fitting models, have been used to estimate the fault content after inspections. This paper extends two existing curve fitting models by adding more parameters and investigates whether this improves the estimation capabilities. Furthermore, a model selection criterion based on information-theoretic approaches is introduced, in order to choose the best estimator for a specific inspection data set. The Jackknife capture-recapture model is used as a reference in the investigation. The estimators and the model selection criterion are analysed using 30 different data sets from inspections. The results indicate that the extended models can improve the estimation results. In addition, the model selection criterion works well for some data sets, but is not better than choosing one model in each case. These results in combination with existing results are discussed and important future research is pinpointed.
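The general idea behind curve-fitting estimation with information-theoretic model selection can be illustrated with a minimal sketch. The data, the two candidate detection-profile curves, the Gaussian least-squares AIC, and the 0.5 cut-off rule below are assumptions for illustration only; they are not the paper's extended models or its exact selection criterion.

```python
# Minimal sketch on invented data; not the paper's extended models or its
# exact selection criterion.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical inspection data: for each detected defect (sorted in
# decreasing order), the number of reviewers who found it.
reviewers_per_defect = np.array([5, 4, 4, 3, 2, 2, 1, 1, 1, 1], dtype=float)
index = np.arange(1, len(reviewers_per_defect) + 1, dtype=float)

def exponential(x, a, b):
    return a * np.exp(-b * x)

def power_law(x, a, b):
    return a * x ** (-b)

def fit_and_aic(model, x, y, n_params=2):
    """Least-squares fit plus a Gaussian-likelihood AIC for model selection."""
    params, _ = curve_fit(model, x, y, p0=[y[0], 0.5], maxfev=10000)
    rss = float(np.sum((y - model(x, *params)) ** 2))
    n = len(y)
    return params, n * np.log(rss / n) + 2 * n_params

def estimate_total(model, params, limit=200):
    """Largest defect index where the fitted curve still predicts at least
    0.5 reviewers (a simple cut-off rule for reading total fault content)."""
    x = np.arange(1, limit + 1, dtype=float)
    above = model(x, *params) >= 0.5
    return int(x[above][-1]) if above.any() else 0

found = len(reviewers_per_defect)
for name, model in [("exponential", exponential), ("power-law", power_law)]:
    params, aic = fit_and_aic(model, index, reviewers_per_defect)
    total = estimate_total(model, params)
    print(f"{name}: AIC={aic:.1f}, estimated total={total}, "
          f"estimated remaining={total - found}")
```

The model with the lower AIC would be selected for the inspection data set at hand, which is the role the selection criterion plays above.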

Using Students as Subjects: A Comparative Study of Students & Professionals in Lead-Time Impact Assessment

Martin Höst, Björn Regnell, Claes Wohlin

In many studies in software engineering, students are used instead of professional software developers, although the objective is to draw conclusions valid for professional software developers. This paper presents a study where the difference between the two groups is evaluated. People from the two groups have individually carried out a non-trivial software engineering judgement task involving the assessment of how ten different factors affect the lead-time of software development projects. It is found that the differences are only minor, and it is concluded that software engineering students may be used instead of professional software developers under certain conditions. These conditions are identified and described based on generally accepted criteria for validity evaluation of empirical studies.
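As an illustration only, and not the analysis procedure reported in the paper, the sketch below compares two groups' per-factor ratings with a Mann-Whitney test. The rating scale, group sizes, and data are invented.

```python
# Minimal sketch on invented data; the rating scale, group sizes and test
# are assumptions, not the analysis reported in the paper.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
factors = [f"factor_{i}" for i in range(1, 11)]  # ten hypothetical lead-time factors

# Hypothetical impact ratings (1 = no impact ... 5 = very large impact).
students = {f: rng.integers(1, 6, size=25) for f in factors}
professionals = {f: rng.integers(1, 6, size=15) for f in factors}

for f in factors:
    stat, p = mannwhitneyu(students[f], professionals[f], alternative="two-sided")
    verdict = "difference" if p < 0.05 else "no significant difference"
    print(f"{f}: U={stat:.1f}, p={p:.3f} -> {verdict}")
```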

 

An Instrument for Assessing Software Measurement Programs

Michael Berry, Ross Jeffery
 

This paper reports on the development and validation of an instrument for the collection of empirical data on the establishment and conduct of software measurement programs. The instrument is distinguished by a novel emphasis on defining the context in which a software measurement program operates. This emphasis is perceived to be the key to 1) generating knowledge about measurement programs that can be generalised to various contexts, and 2) supporting a contingency approach to the conduct of measurement programs. A pilot study of thirteen measurement programs was carried out to trial the instrument. Analysis of this data suggests that collecting observations of software measurement programs with the instrument will lead to more complete knowledge of program success factors that will provide assistance to practitioners in an area that has proved notoriously difficult.

 

A Meta-analysis of Multidisciplinary Research

Rachel Harrison and Moira Wells

The need for software engineers and information systems practitioners to collaborate is becoming increasingly urgent. Software development environments change so rapidly that it is difficult for business organisations to keep pace with these changes and to understand not only what is available, but also what software or system would satisfy the varied requirements of the organisation. Similarly, software engineers need to be aware of and understand the social and organisational processes intrinsic to the environment in which an information system is to be implemented. Because of the rapid changes experienced in both software engineering and information systems, methods to facilitate systems and organisation co-evolution need to be investigated. Throughout systems evolution (from requirements capture to product and process evaluation) both disciplines may utilise a number of empirical research methods. The meta-analysis in this paper was undertaken to gain some understanding of the research methods used by each discipline and to identify areas where one discipline could inform the other to assist the process of systems evolution.
 

Requirements-based Estimation of Change Costs

Luigi Lavazza and Giuseppe Valetto

We present a case study that aims at quantitative assessment of the impact of requirements changes, and quantitative estimation of costs of the development activities that must be carried out to accomplish those changes.
Our approach is based on enhanced traceability and an integrated view of the process and product models. The elements in the process and product models are quantitatively characterised through proper measurement, thus achieving a sound basis for different kinds of sophisticated analysis concerning the impact of requirements changes and their costs.
We present the results of the application of modelling and measurement to an industrial project dealing with real-time software development. The ability to predict the impact of changes in requirements and the cost of related activities is shown.
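As a rough illustration of the underlying idea, and not the actual process/product models or measures of the paper, the sketch below follows traceability links from a changed requirement to the artefacts it reaches and sums per-artefact rework cost estimates. All identifiers and costs are invented.

```python
# Minimal sketch on invented identifiers and costs; the paper's process and
# product models and its measures are considerably richer than this.
from collections import deque

# Hypothetical traceability links: requirement -> design -> code/test artefacts.
traces = {
    "REQ-12": ["DES-3", "DES-7"],
    "DES-3": ["MOD-A", "TEST-3"],
    "DES-7": ["MOD-B"],
    "MOD-A": ["TEST-A"],
}

# Hypothetical per-artefact rework estimates (person-hours), e.g. derived
# from artefact size and historical change effort.
rework_cost = {"DES-3": 4, "DES-7": 6, "MOD-A": 10,
               "MOD-B": 14, "TEST-3": 3, "TEST-A": 2}

def impact_set(changed_requirement):
    """Breadth-first traversal of the traceability graph from the change."""
    impacted, queue = set(), deque([changed_requirement])
    while queue:
        node = queue.popleft()
        for successor in traces.get(node, []):
            if successor not in impacted:
                impacted.add(successor)
                queue.append(successor)
    return impacted

impacted = impact_set("REQ-12")
total = sum(rework_cost.get(artefact, 0) for artefact in impacted)
print(f"Impacted artefacts: {sorted(impacted)}")
print(f"Estimated change cost: {total} person-hours")
```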

 

Empirical Evaluation and Prediction of Web Applications' Development Effort

Rachel Fewster & Emilia Mendes

Predicting the effort required in developing Web applications, although a difficult task, plays an important role in the successful management of Web development projects.
This paper has two objectives: i) to describe a quantitative case study which measured a set of proposed effort metrics and the development effort involved in authoring Web (World Wide Web) applications; and ii) to suggest a Generalised Linear Model (GLM) for effort prediction of Web applications' development.
Results have shown that, based on the dataset used, three (half) of the proposed metrics should be included in the effort model generated.
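A minimal sketch of the GLM-based prediction step is shown below on simulated data. The metric names, the simulated relationship, and the choice of a Gaussian family are assumptions for illustration only; they are not the metrics or model family reported in the paper.

```python
# Minimal sketch on simulated data; the metric names, the data and the
# Gaussian family are assumptions, not the model reported in the paper.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 30  # hypothetical number of Web applications in the dataset

# Hypothetical candidate effort metrics (e.g. page, media and link counts).
page_count = rng.integers(5, 60, size=n).astype(float)
media_count = rng.integers(0, 40, size=n).astype(float)
link_count = rng.integers(10, 200, size=n).astype(float)
effort = 2.0 * page_count + 0.8 * media_count + rng.normal(0, 5, size=n)

X = sm.add_constant(np.column_stack([page_count, media_count, link_count]))
model = sm.GLM(effort, X, family=sm.families.Gaussian())
result = model.fit()
print(result.summary())
# Metrics whose coefficients are not significant would be dropped from the
# final effort model, mirroring the metric-selection step described above.
```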

 

Use Case Authoring: Replicating the CREWS Guidelines Experiment

Karl Cox and Keith Phalp

This paper describes the replication of a CREWS project experiment that suggests CREWS use-case authoring guidelines improve use-case descriptions. Our results show that the CREWS guidelines do not necessarily improve use-case descriptions; rather, the subjects applied the guidelines with varying degrees of success. Subjects in the control group implemented a significant percentage of the guidelines by chance. To further validate our results, we also apply a different marking scheme to compare with the CREWS approach. The results from the alternative marking approach show that there was virtually no significant difference in the quality of the use cases across the various groups.

 

Picking the Right Problem Frame - An Empirical Study

Keith Phalp & Karl Cox

Problem frames are a relatively new approach to requirements engineering, promising benefits not only in elicitation but also in subsequent design, by allowing their users to select methods and techniques appropriate to their given problem domain.
In order to be effective this approach relies upon the correct identification of relevant problem frames for a given problem or scenario. Hence, we examine whether people are able to identify the correct (relevant) frames for a given set of problem descriptions, and whether they can also correctly gauge the relative contribution of each identified frame to the given problem.
We note the Euclidean distance of (individual and group) answers from an expert solution, considering each problem frame as a separate dimension. Examination of this distance (or magnitude of error) allows us to gauge the accuracy with which people can assign problem frames. We compare the performance of individuals within groups, and the performance where groups work together to provide a collective solution, comparing both of these with a fair-allocation strategy.
In brief, we find that people can choose the relevant frames with a reasonable degree of accuracy, but that this is improved where they work to provide a collective solution. We also note differences among groups, for example, that experience appears to improve the accuracy with which groups can choose relevant frames.
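The error measure can be made concrete with a small sketch: treating each problem frame as a dimension, the Euclidean distance of an answer from the expert allocation gives the magnitude of error. The allocations below are invented and the expert solution is hypothetical; only the distance calculation mirrors the measure described above.

```python
# Minimal sketch with invented allocations and a hypothetical expert solution;
# only the distance calculation mirrors the error measure described above.
import numpy as np

frames = ["required behaviour", "commanded behaviour", "information display",
          "simple workpieces", "transformation"]

# Relative contribution of each frame to the problem (percentages).
expert = np.array([40, 20, 20, 10, 10], dtype=float)
individual = np.array([30, 30, 20, 10, 10], dtype=float)
group = np.array([35, 25, 20, 10, 10], dtype=float)
fair_allocation = np.full(len(frames), 100.0 / len(frames))  # naive baseline

def error(answer):
    """Magnitude of error: Euclidean distance from the expert allocation."""
    return float(np.linalg.norm(answer - expert))

for label, answer in [("individual", individual), ("group", group),
                      ("fair allocation", fair_allocation)]:
    print(f"{label}: distance from expert = {error(answer):.1f}")
```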

 

Experimental Evaluation of the Stimulus Response Requirements Specification Notation

Kendra Cooper and Mabo Ito

In order to transfer a new tool or technique to industry, project managers need some kind of evaluation to determine if its benefits outweigh the costs. There are several ways to evaluate a new tool or technique, including case studies, pilot projects, or experiments. In this work, an experimental evaluation is selected to objectively evaluate a new, formal requirements specification technique, Stimulus Response Requirements Specification (SRRS). The SRRS notation is designed for the specification of large, software-intensive systems with complex data requirements. This paper presents an overview of the experimental design, the results, and the conclusions based on the experimental data. The results are very encouraging and indicate the SRRS technique is a cost-effective way to develop requirements specifications. The time to write, review, and correct the specification is reduced, as is the number of defects detected in a peer review process. There is, however, an increase in the training time for the authors.

 

Experiences Using Case-Based Reasoning to Predict Software Project Effort

Gada Kadoda, Michelle Cartwright, Liguang Chen, and Martin Shepperd

This paper explores some of the practical issues associated with the use of case-based reasoning (CBR), or estimation by analogy. We note that different research teams have reported widely differing results with this technology. Whilst we accept that the underlying characteristics of the datasets being used play a major role, we also argue that the configuration of a CBR system can have an impact. We examine the impact of the choice of the number of analogies when making predictions; we also look at different adaptation strategies. Our analysis is based on a dataset of software projects collected by a Canadian software house. Our results show that choosing analogies is important but adaptation strategy appears to be less so. These findings must be tempered, however, with the finding that it was difficult to show statistical significance for smaller datasets even when the accuracy indicators differed quite substantially. For this reason we urge some degree of caution when comparing competing prediction systems using only modest numbers of cases.
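To make the configuration choices concrete, the sketch below runs estimation by analogy on invented project data, varying the number of analogies (k) and contrasting two simple adaptation strategies: an unweighted mean and an inverse-distance weighted mean. The features, normalisation, and strategies are assumptions for illustration, not the tool or dataset used in the study.

```python
# Minimal sketch on invented projects; the features, normalisation and
# adaptation strategies are assumptions, not the tool or dataset of the study.
import numpy as np

# Hypothetical historical projects: [size in function points, team experience].
features = np.array([[120, 3], [200, 5], [90, 2], [310, 4], [150, 6]], dtype=float)
effort = np.array([900, 1500, 700, 2600, 1000], dtype=float)  # person-hours
target = np.array([180, 4], dtype=float)  # new project to be estimated

# Normalise features so each dimension contributes comparably to the distance.
lo, hi = features.min(axis=0), features.max(axis=0)
distances = np.linalg.norm((features - lo) / (hi - lo) - (target - lo) / (hi - lo), axis=1)

def predict(k, adaptation="mean"):
    """Estimate effort from the k nearest analogies with a chosen adaptation."""
    nearest = np.argsort(distances)[:k]
    if adaptation == "mean":
        return float(effort[nearest].mean())
    weights = 1.0 / (distances[nearest] + 1e-9)   # inverse-distance weighting
    return float(np.average(effort[nearest], weights=weights))

for k in (1, 2, 3):
    print(f"k={k}: mean adaptation={predict(k):.0f}h, "
          f"weighted adaptation={predict(k, 'weighted'):.0f}h")
```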

 

Early Lifecycle Work: Influence of Individual Characteristics, Methodological Constraints, and Interface Constraints

Andrew Brooks, Fredrik Utbult, Catherine Mulligan & Ross Jeffery

This paper reports the results of an experiment undertaken for the CADPRO (Constraints And the Decision PROject) project. Subjects with varied experience produced data flow diagrams using a data flow diagramming (DFD) tool generated by CASEMaker, a meta-CASE tool. Half the subjects received routine notice of instances of internal (as opposed to hierarchical) methodological constraint violations via an unobtrusive window, whilst the other half did not. Detailed results of subjects' performances are presented. The CASEMaker DFD tool automatically recorded subjects' delivery and constraint profiles. Video records, observer notes, and subject debriefings were also used to yield other performance data. Evidence was found in support of the research model underpinning the CADPRO project. The model, however, needs to be revised to take into account the effects of human-computer interface constraints and the different speeds with which people work.

Evaluating Defect Content Estimation Rules in Software Inspections

Håkan Petersson and Claes Wohlin

This paper evaluates two different improvements of an existing defect content estimation model. The model is a curve-fitting model used to estimate the number of remaining defects after an inspection. Two new estimation rules are evaluated and compared with the original model. Further, the new estimation rules and the original model are evaluated against one of the most successful defect content estimation models, which is a capture-recapture model. It is concluded that one of the new estimation rules for the curve-fitting model could be a good complement to the capture-recapture model. Moreover, it is concluded that the results support previously published results and hence show strong evidence that the studied models are mature enough to be transferred to industrial use to support continuous quality assessment and control.
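For readers unfamiliar with the capture-recapture side of this comparison, the sketch below shows the simplest two-reviewer (Lincoln-Petersen) estimator on invented data. The estimators evaluated in the paper are more elaborate; this only illustrates the basic idea of using the overlap between reviewers' findings to estimate remaining defect content.

```python
# Minimal sketch on invented data; the paper evaluates more elaborate
# estimators, and this only illustrates the basic capture-recapture idea.
# Defect identifiers found by each of two reviewers during an inspection.
reviewer_1 = {"D1", "D2", "D3", "D5", "D8", "D9"}
reviewer_2 = {"D2", "D3", "D4", "D8", "D10"}

n1, n2 = len(reviewer_1), len(reviewer_2)
overlap = len(reviewer_1 & reviewer_2)     # defects found by both reviewers
estimated_total = n1 * n2 / overlap        # Lincoln-Petersen estimator
found = len(reviewer_1 | reviewer_2)       # distinct defects found so far
estimated_remaining = max(estimated_total - found, 0)

print(f"found={found}, estimated total={estimated_total:.1f}, "
      f"estimated remaining={estimated_remaining:.1f}")
```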