What happens to usability when development goes offshore?
Two of the most important trends in software development over the last 20 years have been the increasing number of companies sending development work to cheaper labour markets, and the increasing attention that is paid to the usability of applications.
Developers in Europe and North America cannot have failed to notice the trend towards offshoring development work, and many worry about its long-term implications.
Usability, however, has had a mixed history. Many organizations and IT professionals have been deeply influenced by the need to ensure that their products and applications are usable; many more are only vaguely aware of this trend and do not take it seriously.
As a result, many developers and testers have missed the significant implications that offshoring has for building usable applications, and underestimate the problems of testing for usability. I will try to explain these problems, and suggest possible remedial actions that testers can take if they find themselves working with offshore developers. I will be looking mainly at India, the largest offshoring destination, because information about it has been more readily available. However, the problems and lessons apply equally to other offshoring countries.
According to NASSCOM, the main trade body representing the Indian IT industry, the number of IT professionals employed in India rose from 430,000 in 2001 to 2,010,000 in 2008. The number employed by offshore IT companies rose tenfold, from 70,000 to 700,000.
It is hard to say how many of these are usability professionals. Certainly at the start of the decade there were only a handful in India. Now, according to Jhumkee Iyengar, of User in Design, "a guesstimate would be around 5,000 to 8,000". At most that's about 0.4% of the people working in IT. Even if they were all involved in offshore work they would represent only around 1% of the total.
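Those proportions are easy to check against the NASSCOM and Iyengar figures quoted above. A back-of-the-envelope calculation (the numbers are the estimates cited in this article, not precise statistics):

```python
# Rough check of the staffing figures quoted above.
# All numbers are the article's estimates, not official statistics.
it_professionals = 2_010_000                  # NASSCOM figure for India, 2008
offshore_staff = 700_000                      # employed by offshore IT companies, 2008
usability_low, usability_high = 5_000, 8_000  # Iyengar's "guesstimate"

# Upper-bound shares, using the high end of the guesstimate
share_of_it = usability_high / it_professionals
share_of_offshore = usability_high / offshore_staff

print(f"Usability share of all IT staff: {share_of_it:.1%}")        # prints 0.4%
print(f"Upper-bound share of offshore staff: {share_of_offshore:.1%}")  # prints 1.1%
```

Even on the most generous reading, usability professionals are a rounding error against Nielsen's 10% rule of thumb discussed below.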
Does that matter? Jakob Nielsen, the leading usability expert, would argue that it does. His rule of thumb is that "10% of project resources must be devoted to usability to ensure a good quality product".
Clearly India is nowhere near capable of meeting that figure. To be fair, the same can be said of the rest of the world given that 10% represents Nielsen's idea of good practice, and most organizations have not reached that stage. Further, India traditionally provided development of "back-office" applications, which are less likely to have significant user interaction.
Nevertheless, the shortage of usability professionals in the offshoring destinations does matter. Increasingly offshore developments have involved greater levels of user interaction, and any shortage of usability expertise in India will damage the quality of products.
Sending development work offshore always introduces management and communication problems. Outsourcing development, even within the same country, poses problems for usability. When the development is both offshored and outsourced, the difficulties in producing a usable application multiply. If there are no usability professionals on hand, the danger is that the developers will not only fail to resolve those problems - they will probably not even recognize that they exist.
Why can outsourcing be a problem for usability?
External software developers are subject to pressures different from those on internal developers, and this can lead to poorer usability. I believe that external suppliers are less likely to be able to respond to the users' real needs, and research supports this [3, 4, 5, 6, 7].
Obviously external suppliers have different goals from internal developers. Their job is to deliver an application that meets the terms of the contract and makes a profit in doing so. Requirements that are missed or are vague are unlikely to be met, and usability requirements all too often fall into one of these categories. This is not simply a matter of a lack of awareness. Usability is a subjective matter, and it is difficult to specify precise, objective, measurable and testable requirements. Indeed, trying too hard to do so can be counter-productive if the resulting requirements are too prescriptive and constrain the design.
A further problem is that the formal nature of contractual relationships tends to push clients towards more traditional, less responsive and less iterative development processes, with damaging results for usability. If users and developers are based in different offices, working for different employers, then rapid informal feedback becomes difficult.
Some of the studies that found these problems date back to the mid 90s. However, they contain lessons that remain relevant now. Many organizations have still not taken these lessons on board, and they are therefore facing the same problems that others confronted 10 or even 20 years ago.
How can offshoring make usability problems worse?
So, if simple outsourcing to a supplier in the same country can be fraught with difficulty, what are the usability problems that organizations face when they offshore?
Much of the debate harks back to an article by Jakob Nielsen in 2002, which stirred up plenty of discussion about the problem, much of it critical.
"Offshore design raises the deeper problem of separating interaction designers and usability professionals from the users. User-centered design requires frequent access to users: the more frequent the better."
If the usability professionals need to be close to the users, can they stay onshore and concentrate on the design while the developers build offshore? Nielsen was emphatic on that point.
"It is obviously not a solution to separate design and implementation since all experience shows that design, usability, and implementers need to proceed in tight co-ordination. Even separating teams across different floors in the same building lowers the quality of the resulting product (for example, because there will be less informal discussions about interpretations of the design spec)."
So, according to Nielsen, the usability professionals have to follow the developers offshore. However, as we've seen, the offshore countries have nowhere near enough trained professionals to cover the work. Numbers are increasing, but not by enough to keep pace with the growth in offshore development, never mind the demands of local commerce.
This apparent conundrum has been dismissed by many people who have pointed out, correctly, that offshoring is not an "all or nothing" choice. Usability does not have to follow development. If usability is a concern, then user design work can be retained onshore, and usability expertise can be deployed in both countries. This is true, but it is a rather unfair criticism of Nielsen's arguments. The problem he describes is real enough. The fact that it can be mitigated by careful planning certainly does not mean the problem can be ignored.
User-centred design assumes that developers, usability analysts and users will be working closely together. Offshoring the developers forces organizations to make a choice between two unattractive options; separating usability professionals from the users, or separating them from the developers.
It is important that organizations acknowledge this dilemma and make the choice explicitly, based on their needs and their market. Every responsible usability professional is keenly aware that geographical separation from the users is a problem, and so those organizations that hire usability expertise offshore are at least implicitly acknowledging the problems caused by offshoring. My concern is for those organizations who keep all the usability professionals onshore and either ignore the problems, or assume that they don't apply in their case.
How not to tackle the problems
Jhumkee Iyengar has studied the responses of organizations wanting to ensure that offshore development will give them usable applications. Typically they have tried to do so without offshore usability expertise. They have used two techniques sadly familiar to those who have studied usability problems: defining the user interaction requirements up-front and sending a final, frozen specification to the offshore developers, or adopting the flawed and fallacious layering approach.
Attempting to define detailed up-front requirements is anathema to good user-centred design. It is consistent with the Waterfall model and is attractive because it is neat and easy to manage (as I discussed in my article on the V Model in Testing Experience, issue 4).
Building a usable application that allows users and customers to achieve their personal and corporate goals painlessly and efficiently requires iteration, prototyping and user involvement that is both early in the lifecycle and repeated throughout it.
The layering approach was based on the fallacy that the user interface could be separated from the functionality of the application, and that each could be developed separately. This fallacy was very popular in the 80s and 90s. Its influence has persisted, not because it is valid, but because it lends an air of spurious respectability to what people want to do anyway.
Academics expended a huge amount of effort trying to justify this separability. Their arguments, their motives and the consequences of their mistake are worth a full article in their own right. I'll restrict myself to saying that the notion of separability was flawed on three counts.
It was flawed conceptually because usability is a product of the experience of the user with the whole application, not just the interface.
It was flawed architecturally, because design decisions taken by system architects can have a huge impact on the user experience.
Finally, it was flawed in political and organizational terms because it encouraged usability professionals to retreat into a ghetto, isolated from the hubbub of the developers, where they would work away on an interface that could be bolted onto the application in time for user testing.
Lewis & Rieman memorably savaged the idea that usability professionals could hold themselves aloof from the application design, calling it "the peanut butter theory of usability, in which usability is seen as a spread that can be smeared over any design, however dreadful, with good results if the spread is thick enough. If the underlying functionality is confusing, then spread a graphical user interface on it. ... If the user interface still has some problems, smear some manuals over it. If the manuals are still deficient, smear on some training which you force users to take."
If the usability professionals stay onshore, and adopt either the separability or the peanut butter approach, the almost inescapable result is that they will be delegating decisions about usability to the offshore developers.
Developers are just about the worst people to take these decisions. They are too close to the application, and instinctively see workarounds to problems that might appear insoluble to real users.
Developers also have a different mindset when approaching technology. Even if they understand the business context of the applications they can't unlearn their technical knowledge and experience to see the application as a user would; and this is if developers and users are in the same country. The cultural differences are magnified if they are based in different continents.
The relative lack of maturity of usability in the offshoring destinations means that developers often have an even less sophisticated approach than developers in the client countries. User interaction is regarded as an aesthetic matter restricted to the interface, with the developers more interested in the guts of the application.
Pradeep Henry reported in 2003 that most user interfaces at Indian companies were being designed by programmers, and that in his opinion they had great difficulty switching from their normal technical, system-focused mindset to that of a user. 
They also had very little knowledge of user-centred design techniques. This is partly a matter of education, but there is more to it. In explaining the shortage of usability expertise in India, Jhumkee Iyengar told me that she believes important factors are the "phenomenal success of Indian IT industry, which leads people to question the need for usability, and the offshore culture, which has traditionally been a 'back office culture' not conducive to usability".
The situation is, however, certainly improving. Although the explosive growth in development work in India, China and Eastern Europe has left the usability profession struggling to keep up, the number of usability experts has grown enormously over the last 10 years. There are nowhere near enough, but there are firms offering this specialist service keen to work with offshoring clients.
This trend is certain to continue because usability is a high value service. It is a hugely attractive route to follow for these offshore destinations, complementing and enhancing the traditional offshore development service.
Testers must warn of the dangers
The significance of all this from the tester's perspective is that even though usability faces significant threats when development is offshored, there are ways to reduce the dangers and the problems. They cannot be removed entirely, but offshoring offers such cost savings that it will continue to grow, so it is important that testers working for client companies understand these problems and can anticipate them.
Testers may not always, or often, be in a position to influence whether usability expertise is hired locally or offshore. However, they can flag up the risks of whatever approach is used, and adopt an appropriate test strategy.
The most obvious danger is if an application has significant interaction with the user and there is no specialist usability expertise on the project. As I said earlier, this could mean that the project abdicates responsibility for crucial usability decisions to the offshore developers.
Testers should try to prevent a scenario where the interface and user interaction are pieced together offshore, and thrown "over the wall" to the users back in the client's country for acceptance testing when it may be too late to fix even serious usability defects.
Is it outside the traditional role of a tester to lobby project management to try and change the structure of the project? Possibly, but if testers can see that the project is going to be run in a way that makes it hard to do their job effectively then I believe they have a responsibility to speak out.
I'm not aware of any studies looking at whether outsourcing contracts (or managed service agreements) are written in such prescriptive detail that they restrict the ability of test managers to tailor their testing strategy to the risks they identify. However, going by my experience and the anecdotal evidence I've heard, this is not an issue. Testing is not usually covered in detail in contracts, thus leaving considerable scope to test managers who are prepared to take the initiative.
Although I've expressed concern about the dangers of relying on a detailed up-front specification, there is no doubt that if the build is being carried out by offshore developers then they have to be given clear, detailed, unambiguous instructions.
The test manager should therefore set a test strategy that allows for significant static testing of the requirements documents. These should be shaped by walkthroughs and inspections to check that the usability requirements are present, complete, stated in sufficient detail to be testable, yet not defined so precisely that they constrain the design and rule out what might have been perfectly acceptable solutions to the requirements.
Once the offshore developers have been set to work on the specification it is important that there is constant communication with them and ongoing static testing as the design is fleshed out.
Hienadz Drahun leads an offshore interaction design team in Belarus. He stresses the importance of communication. He told me that "communication becomes a crucial point. You need to maintain frequent and direct communication with your development team."
Dave Cronin of the US usability design consultancy Cooper wrote an interesting article about this in 2004.
"We already know that spending the time to holistically define and design a software product dramatically increases the likelihood that you will deliver an effective and pleasurable experience to your customers, and that communication is one of the critical ingredients to this design process. All this appears to be even more true if you decide to have the product built in India or Eastern Europe."
"To be absolutely clear, to successfully outsource product development, it is crucial that every aspect of the product be specifically defined, designed and documented. The kind of hand-waving you may be accustomed to when working with familiar and well-informed developers will no longer suffice."
Significantly Cronin did not mention testing anywhere in his article, though he does mention "feedback" during the design process.
The limits of usability testing
One of the classic usability mistakes is to place too much reliance on usability testing. In fact, I've heard it argued that there is no such thing as usability testing. It's a provocative argument, but it has some merit.
If usability is dependent only on testing, then it will be left to the end of the development, and serious defects will be discovered too late in the project for them to be fixed.
"They're only usability problems, the users can work around them" is the cry from managers under pressure to implement on time. Usability must be incorporated into the design stages, with possible solutions being evaluated and refined. Usability is therefore produced not by testing, but by good design practices.
Pradeep Henry called his new team "Usability Lab" when he introduced usability to Cognizant, the offshore outsourcing company, in India. However, the name and the sight of the testing equipment in the lab encouraged people to equate usability with testing. As Henry explained:
"Unfortunately, equating usability with testing leads people to believe that programmers or graphic designers should continue to design the user interface and that usability specialists should be consulted only later for testing." Henry renamed his team the Cognizant Usability Group (now the Cognizant Usability Center of Excellence). 
Tactical improvements testers can make
So if usability evaluation has to be integrated into the whole development process then what can testers actually do in the absence of usability professionals? Obviously it will be difficult, but if iteration is possible during design, and if you can persuade management that there is a real threat to quality then you can certainly make the situation a lot better.
There is a lot of material readily available to guide you. I would suggest the following.
Firstly, Jakob Nielsen's Discount Usability Engineering consists of cheap prototyping (maybe just paper-based), heuristic (rule-based) evaluation, and getting users to talk their way through the application, thinking out loud as they work through a scenario.
Steve Krug's "lost our lease" usability testing says that any usability testing is better than none, and that quick and crude testing can be both cheap and effective. Krug's focus is more on the management of this approach than on the testing techniques themselves, so in my opinion it complements Nielsen's DUE rather than offering an alternative. It's all in his book "Don't Make Me Think".
Lockwood & Constantine's Collaborative Usability Inspections offer a far more formal and rigorous approach, though you may be stretching yourself to take this on without usability professionals. It entails setting up formal walk-throughs of the proposed design, then iteration to remove defects and improve the product. [13, 14, 15]
On a lighter note, Alan Cooper's book "The Inmates Are Running the Asylum" is an entertaining rant on the subject. Cooper's solution to the problem is his Interaction Design approach. The essence of this is that software development must include a form of functional analysis that seeks to understand the business problem from the perspective of the users, based on their personal and corporate goals, working through scenarios to understand what they will want to do.
Cooper's Interaction Design strikes a balance between the old, flawed extremes of structured methods (which ignored the individual) and traditional usability (which often paid insufficient attention to the needs of the organization). I recommend this book not because I think that a novice could take this technique on board and make it work, but because it is very readable and might make you question your preconceptions and think about what is desirable and possible.
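Of the approaches above, heuristic evaluation is probably the easiest for a test team to adopt without usability specialists on hand. A minimal sketch of how such an evaluation might be recorded and summarized follows; the heuristic names are Nielsen's well-known ten, but the findings, the 0-4 severity scale and the function are illustrative assumptions, not part of any published method.

```python
# Sketch of a heuristic evaluation record, after Nielsen's ten
# usability heuristics. Findings and severity scale are illustrative.

NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

# Each finding: (heuristic index, description, severity 0-4, 4 = catastrophic)
findings = [
    (0, "No progress indicator during batch upload", 3),
    (4, "Date field accepts impossible values", 4),
    (8, "Error messages show internal codes only", 3),
]

def summarize(findings):
    """Group findings by heuristic, keeping (severity, description) pairs."""
    by_heuristic = {}
    for idx, desc, severity in findings:
        by_heuristic.setdefault(NIELSEN_HEURISTICS[idx], []).append((severity, desc))
    return by_heuristic

for heuristic, issues in summarize(findings).items():
    worst = max(severity for severity, _ in issues)
    print(f"{heuristic}: {len(issues)} issue(s), worst severity {worst}")
```

Even a crude record like this gives the team something concrete to walk through with the offshore developers, rather than leaving usability judgements implicit.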
Longer term improvements
Of course it's possible that you are working for a company that is still in the process of offshoring and where it is still possible to influence the outcome. It is important that the invitation to tender includes a requirement that suppliers can prove expertise and experience in usability engineering. Additionally, the client should expect potential suppliers to show they can satisfy the following three criteria.
Firstly, the supplier should have a process or lifecycle model that not only has usability engineering embedded within it but also demonstrates how the onshore and offshore teams will work together to achieve usability. The process must involve both sides.
Offshore suppliers have put considerable effort into developing such frameworks. Three examples are Cognizant's "End-to-End UI Process", HFI's "Schaffer-Weinschenk Method™" and Persistent's "Overlap Usability".
Secondly, suppliers should carry out user testing with users from the country where the application will be used. The cultural differences are too great to use people who happen to be easily available to the offshore developers.
Remote testing entails usability experts based in one location conducting tests with users based elsewhere, possibly on another continent. It would probably not be the first choice of most usability professionals, but it is becoming increasingly important. As Jhumkee Iyengar told me, it "may not be the best, but it works and we have had good results. A far cry above no testing."
Finally, suppliers should be willing to send usability professionals to the onshore country for the requirements gathering. This is partly a matter of ensuring that the requirements gathering takes full account of usability principles. It is also necessary so that these usability experts can fully understand the client's business problem and can communicate it to the developers when they return offshore.
It's possible that there may still be people in your organization who are sceptical about the value of usability. There has been a lot of work done on the return on investment that user-centred design can bring. It's too big a topic for this article, but a simple internet search on "usability" and "roi" will give you plenty of material.
What about the future?
There seems no reason to expect any significant changes in the trends we've seen over the last 10 years. The offshoring countries will continue to produce large numbers of well-educated, technically expert IT professionals. The cost advantage of developing in these countries will continue to attract work there.
Proactive test managers can head off some of the usability problems associated with offshoring. They can help bring about higher quality products even if their employers have not allowed for usability expertise on their projects. However, we should not have unrealistic expectations about what they can achieve. High quality, usable products can only be delivered consistently by organizations that have a commitment to usability and who integrate usability throughout the design process.
Offshoring suppliers will have a huge incentive to keep advancing into user centered design and usability consultancy. The increase in offshore development work creates the need for such services, whilst the specialist and advanced nature of the work gives these suppliers a highly attractive opportunity to move up the value chain, selling more expensive services on the basis of quality rather than cost.
The techniques I have suggested are certainly worthwhile, but they may prove no more than damage limitation. As Hienadz Drahun put it to me; "to get high design quality you need to have both a good initial design and a good amount of iterative usability evaluation. Iterative testing alone is not able to turn a bad product design into a good one. You need both." Testers alone cannot build usability into an application, any more than they can build in quality.
Testers in the client countries will increasingly have to cope with the problems of working with offshore development. It is important that they learn how to work successfully with offshoring and adapt to it.
They will therefore have to be vigilant about the risks to usability of offshoring, and advise their employers and clients how testing can best mitigate these risks, both on a short term tactical level, i.e. on specific projects where there is no established usability framework, and in the longer term, where there is the opportunity to shape the contracts signed with offshore suppliers.
There will always be a need for test management expertise based in client countries, working with the offshore teams, but it will not be the same job we knew in the 90s.
NASSCOM (2009). "Industry Trends, IT-BPO Sector-Overview". Accessed 26th July 2011. NB the page and figures have been updated since the article was written.
Nielsen, J. (2002). "Offshore Usability". Jakob Nielsen's website. Accessed 26th July 2011.
Lewis, C., Rieman, J. (1994). "Task-Centered User Interface Design: A Practical Introduction". University of Colorado e-book. Accessed 26th July 2011.
Artman, H. (2002). "Procurer Usability Requirements: Negotiations in Contract Development" (PDF download). Proceedings of the Second Nordic Conference on Human-Computer Interaction (NordiCHI 2002). Accessed 26th July 2011.
Holmlid, S., Artman, H. (2003). "A Tentative Model for Procuring Usable Systems" (PDF download). 10th International Conference on Human-Computer Interaction, 2003. Accessed 26th July 2011.
Grudin, J. (1991). "Interactive Systems: Bridging the Gaps Between Developers and Users". Computer, Vol. 24, No. 4, April 1991 (subscription required). Accessed 26th July 2011.
Grudin, J. (1996). "The Organizational Contexts of Development and Use". ACM Computing Surveys, Vol. 28, No. 1, March 1996, pp 169-171 (subscription required). Accessed 26th July 2011.
Iyengar, J. (2007). "Usability Issues in Offshore Development: an Indian Perspective". Usability Professionals Association Conference, 2007 (UPA membership required). Accessed 26th July 2011.
Henry, P. (2003). "Advancing UCD While Facing Challenges Working from Offshore". ACM Interactions, March/April 2003 (subscription required). Accessed 26th July 2011.
Cronin, D. (2004). "Designing for Offshore Development". Cooper Journal blog (no longer available online; checked 26th July 2011).
Nielsen, J. (1994). "Guerrilla HCI: Using Discount Usability Engineering to Penetrate the Intimidation Barrier". Jakob Nielsen's website. Accessed 26th July 2011.
Krug, S. (2006). "Don't Make Me Think!: A Common Sense Approach to Web Usability", 2nd edition. New Riders.
Constantine, L., Lockwood, L. (1999). "Software for Use". Addison Wesley.
Lockwood, L. (1999). "Collaborative Usability Inspecting - more usable software and sites" (PDF download). Constantine and Lockwood website. Accessed 26th July 2011.
Cooper, A. (2004). "The Inmates Are Running the Asylum: Why High-tech Products Drive Us Crazy and How to Restore the Sanity". Sams.