The problem

This project is aimed at the gap between usability engineering, as envisaged by the Human Computer Interface (HCI) profession, and software engineering (SE). Specifically, it examines the historical gap between usability testing and formal software testing, the reasons for that gap, the prospects for improving the situation, and the means by which software test managers can improve matters on individual projects.
Methodology

The project was carried out in three phases: a literature review, company visits, and analysis of the research.
Major findings

In the 1960s and 1970s SE evolved following dysfunctional models, initially the Waterfall Model and then also Structured Methods. These were implicitly built on the mistaken assumption that SE was an orderly process, akin to civil engineering. Usability was a major victim of this failure.
The HCI profession lacked sufficient understanding of the reality of software development, thus contributing to HCI being marginalised by SE.
The problem is not that usability testing needs to be incorporated into SE testing. HCI needs to penetrate the whole development lifecycle, rather than be confined to performing summative evaluation at the end of the project when it is too late to make a difference.
It is widely recognised that HCI has not had sufficient penetration into SE. This dissertation argues that the situation is even worse than most academics have acknowledged. It is still common to find highly experienced and competent SEs with no knowledge of HCI or usability.
Conclusion

Traditional development models and practices may not prevent the development of usable applications, but they make the job very much harder.
Promising possibilities have been offered by Object-Oriented and Agile development techniques. However, neither was specifically geared towards usability, and the key issue remains whether the organisation is committed to developing usable applications. Without that commitment, new techniques and technology will just be used to develop bad systems more efficiently.
However, even if the organisation is not committed to usability, forceful test managers can make a significant difference to the usability of applications, provided they have the knowledge and skills required to influence fellow managers.
The following text has been lifted verbatim from my Master's dissertation. I still think the argument is valid and well put. However, the claim that there is no such thing as a software testing model is verging on the tendentious, though useful for the purposes of developing my argument and illustrating the problem.
I believe that TMap might satisfy reasonable criteria for being a free-standing model, independent of development models, but that does not necessarily make it valuable! TMap is, however, little used in the UK, and I believe that the great majority of testers and developers have little or no exposure to anything that I would consider a genuine testing model.
However, I don't believe the debate about whether there really can be a true, free-standing testing model is one worth spending much time on.
The introduction follows ...
The aims of this dissertation are as follows:
- to review the historical and current relationship between the Human Computer Interface (HCI) and Software Engineering (SE) professions, with specific regard to testing,
- to identify further work that is required to improve the relationship, specifically to incorporate usability testing effectively into SE testing models,
- to identify practical means by which SE test managers can improve the situation on projects that are threatened by inadequate communication between the HCI and SE disciplines.
The research soon revealed that the second aim was misconceived. There are two aspects to the problem.
Firstly, there is no such thing as a software testing model. There are differing techniques, and differing philosophies, but when one considers what is done in practice there are no testing models, only adjuncts of development models. Testing is essentially a response to development methods and is dictated by them.
Secondly, the focus on usability testing in the aims of this dissertation, and also historically within the HCI profession was based on a rather narrow assumption of how usability testing can make a significant difference to the usability of applications.
As will be explained, usability testing, where it has taken place within SE, has been forced back into the final stages of projects, where it is summative evaluation, i.e. checking the application against its requirements. In practice, this often amounts to no more than documenting the usability problems, with no hope that they will be fixed.
If usability testing is to be effective, it has to entail formative evaluation, i.e. shaping the design before it is finalised. This requires different tactics and techniques from testing alone. Usability requires the adoption of user-centred design principles throughout the development process.
These are separate issues, but they are related. In particular, the lack of independence of testing as a discipline has meant that trust in usability testing has been misplaced. SE testers have often been isolated and ineffective, a little-respected community within SE. HCI inadvertently joined them in their ghetto. Also, misconceptions about the effectiveness of usability testing have perhaps made HCI too passive in the face of the dominance of the SEs in the software development process.
The two issues run as a theme through this dissertation. Sometimes they appear to be the cause of problems, but it would be more accurate to regard them as symptoms of a malaise afflicting the way that software applications have been developed. To understand this it is necessary to go back to the early days of IT and explain the historical development of SE and HCI.
This dissertation will not discuss in any detail the techniques of usability testing. These are described in outline in Appendix E.