FLOSS Development Model and Usability - Some Thoughts
The popularity and increased usage of a piece of software (or a software component) is related to the number of ‘user frustration’ features it avoids. Software (and thus software technology) is aimed at facilitating tasks. In recent years, a large number of software packages and components have been released under the FLOSS development model. However, the success ratio (as empirically gauged by the number of ‘continuing users’) for a majority of these has been marginal. This article proposes to look into the causal agents and thus propose a few ‘simple’ solutions to be implemented at various levels to address the issue of Usability and Human Computer Interaction (and interfacing).

The Problem
The Free/Libre Open Source Software models of development have been characterised by the iterative development cycle of ‘release often’. Another overriding feature of these models has been the insistence on development as a means of scratching a ‘personal itch’. Thus the main basis of the FLOSS models has always been functionality and power. The average user, who relies mainly on intuitive interfaces, soon finds it difficult to grasp the computing paradigm proposed by the model.
Usability can be ranked on the following aspects:
* Ease of learning
* Efficiency of use
* Error frequency and severity
* Subjective satisfaction
The nature of the ‘intangibles’ present in this ranking creates the need for implementing and utilising frameworks and models to capture data and end-user feedback. The main point of the exercise is to understand whether the product is ‘desirable’.
The crux of the problem lies in the fact that the average software developer cannot ‘create and ideate’ for the average user.
If this [desktop and application design] were primarily a technical problem, the outcome would hardly be in doubt. But it isn’t; it’s a problem in ergonomic design and interface psychology, and hackers have historically been poor at it. That is, while hackers can be very good at designing interfaces for other hackers, they tend to be poor at modeling the thought processes of the other 95% of the population well enough to write interfaces that J. Random End-User and his Aunt Tillie will pay to buy. (Raymond, 1999)
This has been more due to tradition than convention. Free and Open Source software users were ‘power users’. For these groups of early adopters and pioneers, the scope and utility of any software lay in the functionality offered.
Thus the problems can be listed as:
The end-user issues are not addressed by the developers. Software development is carried out according to the personal equations of the developers, not those of the clueless (and technically challenged) newbie. This leads to a situation where esoteric and non-standard interfaces and layouts dominate.
Domain experts in Usability are not part of the development team. The strength, and probably the unique proposition, of the FLOSS model is the emphasis on ‘hacker culture’. Thus it is not surprising that anecdotal evidence suggests few people with usability experience are involved in OSS projects; one of the “lessons learned” in the Mozilla project (Mozilla, 2002) is to ‘ensure that UI [user interface] designers engage the Open Source community’ (Trudelle, 2002). Thus the ‘end-user’ perspective is lost.
Usability encompasses a wide range of disciplines, both applied and theoretical, e.g. psychology, sociology, graphic design and even theatre studies. This makes it mandatory that a cross-functional, matrix-oriented, multidisciplinary design team be put in place to effectively leverage skills and competencies. These are required to create, initiate and sustain the usability momentum. It is more the norm than the exception that existing OSS teams lack the skills to solve usability problems, and even the skills to bring in “outsiders” to help.
Possible explanations for the absence (and sometimes ‘active’ non-participation) of HCI and usability people in OSS projects:
* A smaller pool of ‘reliable and capable’ Usability experts.
* Lack of incentives for experts to participate.
* Usability experts do not feel welcomed into OSS projects.
* Inertia: traditionally projects haven’t needed usability experts.
[There is not a critical mass of usability experts involved for the incentives of peer acclaim and recruitment opportunities to operate.]
The major motivating factor for the FLOSS model is the ‘personal itch’. Commercial software, being targeted towards establishing and capturing a user base, is more about being attuned to the needs of the clientele. The requirements analysis and requirements capture phases thus go through iterative review cycles. By contrast, many OSS projects lack formal requirements capture processes and even formal specifications (Scacchi, 2002). Instead they rely on the understood requirements of, initially, individuals or tight-knit communities. A personal itch implies designing software for one’s own needs; explicit requirements are consequently less necessary. Within OSS this is then shared with a like-minded community, and the individual tool is refined and improved for the benefit of all within that community.
Usability problems are, by the very nature of their intangible aspects, harder to specify and distribute than functionality problems (which are easier both to specify and to evaluate). Thus there is an increased tendency towards design inconsistency, and hence lower overall usability. The modularity of OSS projects contributes to the effectiveness of the approach (O’Reilly, 1999), enabling them to side-step Brooks’ Law. This effectively translates into the ability to swap the ‘offending’ parts out, to be replaced by more ‘user-oriented’ modules. Yet one major success criterion for usability is consistency of design. Slight variations in the interface between modules, and between different versions of modules, can irritate and confuse, marring the overall user experience. If there is no collective advance planning even for the coding, there is no opportunity to factor in interface issues in the early design phases. FLOSS planning is usually done by the project initiator and/or the designated project leader. Thus, unless this person is fortunate enough to possess significant interaction design skills, important aspects of usability tend to be overlooked until it is too late.
Free/Libre Open Source projects, as a matter of tradition, have lacked the resources to undertake high quality (and consistent) usability initiatives. The majority of such projects are voluntary, and thus the allocated financial capital is small (and sometimes non-existent in practice). The question of involving Subject Matter Experts (technical authors and graphic designers) thus simply does not arise. Research and development oriented large-scale, long-running experiments coupled with Usability Laboratories are simply economically unviable for most projects.
One approach by which the FLOSS model does address the issue of Usability is software internationalisation (or, to be more precise, software localisation), where the language of the interface (and any culture-specific icons) is translated. This approach incorporates the ‘best practices’ of the modular OSS approach.
Provisioning for usability should ideally take place in advance of any coding. Thus the requirements analysis and design analysis phases of system design should incorporate joint sessions with the various stakeholders of the system. The fallacy here is that such a scenario suits a ‘close-knit’ software development model. For the globally distributed, community-based approach followed by FLOSS, it is surprising that the model works in spite of probably violating (or otherwise stress testing) every single known principle of Software Engineering.
In a traditional setup, projects are expected to be carefully planned and strategy sessions initiated to find an optimal plan. In contrast, FLOSS projects seem to hurry forward into the ‘Coding Stage’ to react to the ‘personal itch’. The iterative nature of the development model relies mainly on the ‘many eyeballs’ concept to review and restructure the code alongside improvement of the original design (more often than not based on performance optimisation parameters). Writing about Mozilla’s experience with the Usability paradigm, Trudelle (2002) states (quite blandly and truthfully) that skipping much of the design stage resulted in design and requirements work occurring in bug reports, after the distribution of early versions.
Taking into account the known problems of the FLOSS model in integrating and embracing Usability concepts, it becomes imperative that at least a semblance of planning is done at the early stages. In a fully set up laboratory, formal usability studies provide complete toolsets to help evaluate the task fulfillment ability of end-users. In the case of FLOSS products aiming at commercial success, or a modicum of it, the main index of measurement should however be ‘Desirability’ (defined as the need/urge to possess a commercial copy of the software under test). Desirability involves measuring intangible aspects (again) that aim to rationalise the satisfaction gained from using the product. Traditionally two approaches are prescribed:
[i] usage of Likert scales (the disadvantage being that the level of understanding of the user, as well as the perceptions of the practitioner, are obvious points of bias creep);
[ii] face-to-face interviews of the users (the cons in this case being the timespan of such a project as well as the manpower required to execute it).
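As a sketch of the first approach (the aspect names echo the ranking list above, but the response data and scale are purely illustrative), Likert responses on a 1-5 scale can be aggregated into a per-aspect score; the aggregation itself is trivial, which is why the bias lies in the questions and the respondents, not the arithmetic:

```python
from statistics import mean

# Hypothetical 1-5 Likert responses per usability aspect
# (1 = strongly disagree, 5 = strongly agree), four respondents each.
responses = {
    "ease_of_learning":        [4, 5, 3, 4],
    "efficiency_of_use":       [3, 3, 4, 2],
    "subjective_satisfaction": [5, 4, 4, 5],
}

# Mean score per aspect; lowest-scoring aspects surface first,
# flagging where the 'desirability' of the product is weakest.
scores = {aspect: mean(vals) for aspect, vals in responses.items()}
for aspect, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{aspect}: {score:.2f}")
```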
To facilitate the design process as well as ensure adequate end-user feedback, the alpha release(s) could be taken as a test bed in a real-life environment. Given a ‘stable’ release with an optimal feature set that encapsulates the objective of the software, field studies, when captured on film and re-evaluated, provide some depth of information as to the ‘Desirability and Usability’ of the software.
The issues to be considered are:
1. An understanding and assessment of the (expected and desired) level(s) of cognitive behavior of the ’subjects’
2. The basic concept that there is really no requirement to provide lengthy and detailed explanations of the requirements at hand or the objective - such explanations create a chimera that becomes self-fulfilling
3. Since a majority of the tests will be based on field work and field-level setups, as opposed to typical research lab surroundings, keeping an eye on the setup to ensure that basic conveniences are at hand
4. Providing an ease-of-use ambience to the subject(s)
5. Ensuring that the setup is as non-intrusive as possible, and arranging furniture/settings such that the camera is not directly focussed on the subjects
6. Objectively assessing the level of concentration as well as awareness, and timing the test sessions accordingly
7. Separating the more ‘active’ and ‘aware’ from the rest for other spatial and cognitive sessions
8. Scripting a test session so as to make the interactions more suited to the audience and thus engaging
9. Providing continuous re-assurance, but ceasing from being patronising (either overtly or in an implied manner)
10. At the initial stage, providing some lessons if needed on handling the mouse, cursors, etc.
11. Encouraging questions and ensuring that they are recorded
12. Using simplified and illustrated/illustrative instructions so as to keep in mind the (often) limited computing term vocabulary
13. Rewarding subjects (providing an incentive to participate) with a token of appreciation