Computer user satisfaction

Computer user satisfaction (and closely related concepts such as system satisfaction, user satisfaction, computer system satisfaction, and end user computing satisfaction) is the attitude of a user to the computer system they employ in the context of their work environment. Doll and Torkzadeh (1988) define user satisfaction as the opinion of the user about a specific computer application which they use. In a broader sense, the definition can be extended to user satisfaction with any computer-based electronic appliance. However, scholars distinguish between user satisfaction and usability as part of human-computer interaction. Successful organisations have systems in place which they believe help maximise profits and minimise overheads. It is therefore desirable that all their systems succeed and remain successful, and this includes their computer-based systems. According to key scholars such as DeLone and McLean (2002), user satisfaction is a key measure of computer system success, if not synonymous with it. However, the development of techniques for defining and measuring user satisfaction has been ad hoc and open to question. The term computer user satisfaction is abbreviated to user satisfaction in this article.


The Computer User Satisfaction Questionnaire and its reduced version, the User Information Satisfaction Short-form

Bailey and Pearson's (1983) 39-factor Computer User Satisfaction (CUS) questionnaire and its derivative, the User Information Satisfaction (UIS) short-form of Baroudi, Olson and Ives, are typical of instruments which one might term 'factor-based'. They consist of lists of factors, each of which the respondent is asked to rate on one or more multiple-point scales. Bailey and Pearson's CUS asked for five ratings for each of 39 factors. The first four scales were for quality ratings and the fifth was an importance rating. From the importance ratings, they found that their sample of users rated as most important: accuracy, reliability, timeliness, relevancy and confidence in the system. The factors of least importance were found to be feelings of control, volume of output, vendor support, degree of training, and organisational position of EDP (the electronic data processing, or computing, department). However, the CUS requires 39 x 5 = 195 individual seven-point scale responses. Ives, Olson and Baroudi (1983), amongst others, thought that so many responses could result in errors of attrition, that is, the respondent's failure to return the questionnaire, or increasing carelessness as the respondent fills in a long form. In psychometrics, such errors not only reduce sample sizes but can also distort the results, as those who return long questionnaires, properly completed, may have different psychological traits from those who do not. Ives et al. thus developed the UIS, which requires the respondent to rate only 13 factors, and so remains in significant use at the present time. Two seven-point scales are provided per factor (each for a quality), requiring 26 individual responses in all. In a more recent article, Islam, Mervi and Käköla (2010) argued that user satisfaction is difficult to measure in industry settings, as response rates often remain low, and that a simpler user satisfaction instrument is therefore necessary.
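
To make the arithmetic of such factor-based instruments concrete, the following sketch scores one respondent's CUS-style answers. The importance-weighted averaging shown is an assumption for illustration only, not Bailey and Pearson's published scoring procedure, and the function names and sample ratings are hypothetical.

    # Illustrative scoring of a factor-based satisfaction instrument.
    # Each factor receives four quality ratings on a seven-point scale
    # (coded here as -3 to +3) plus one importance rating, as in the CUS;
    # 39 factors x 5 scales gives the 195 responses mentioned above.

    def factor_score(quality_ratings, importance):
        """Mean of the four quality ratings, weighted by importance."""
        return (sum(quality_ratings) / len(quality_ratings)) * importance

    def overall_satisfaction(responses):
        """Importance-weighted mean quality across all rated factors."""
        total_importance = sum(importance for _, importance in responses)
        weighted = sum(factor_score(q, imp) for q, imp in responses)
        return weighted / total_importance

    # Two of the 39 CUS factors, shown for brevity:
    responses = [
        ([2, 3, 2, 2], 3),   # e.g. accuracy: four quality ratings, importance 3
        ([1, 0, 1, -1], 2),  # e.g. vendor support
    ]
    print(overall_satisfaction(responses))  # 1.45; positive = net satisfaction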

The problem with the dating of factors

An early criticism of these measures was that the factors date as computer technology evolves and changes. This suggested the need for updates and led to a sequence of other factor-based instruments. Doll and Torkzadeh (1988), for example, produced a factor-based instrument for a new type of user emerging at the time, the end-user. They identified end-users as users who tend to interact with a computer interface only, whereas previously users had also interacted with developers and operational staff. McKinney, Yoon and Zahedi (2002) developed a model and instruments for measuring web-customer satisfaction during the information phase. Cheung and Lee (2005), in developing an instrument to measure user satisfaction with e-portals, based it on that of McKinney, Yoon and Zahedi (2002), which in turn was based primarily on instruments from prior studies.

The problem of defining user satisfaction

As none of the instruments in common use rigorously defines its construct of user satisfaction, some scholars, such as Cheney, Mann and Amoroso (1986), have called for more research on the factors which influence the success of end-user computing. Little subsequent research, however, has shed new light on the matter. All factor-based instruments run the risk of including factors irrelevant to the respondent while omitting some that may be highly significant to them. This problem is only exacerbated by ongoing changes in information technology.

Two definitions appear in the literature, and the terms 'user satisfaction' and 'user information satisfaction' are used interchangeably. According to Doll and Torkzadeh (1988), 'user satisfaction' is the opinion of the user about a specific computer application which they use. Ives et al. (1983) defined 'user information satisfaction' as "the extent to which users believe the information system available to them meets their information requirements." Other terms for user information satisfaction are "system acceptance" (Igersheim, 1976), "perceived usefulness" (Larcker and Lessig, 1980), "MIS appreciation" (Swanson, 1974) and "feelings about information system" (Maish, 1979). Ang and Soh (1997) have described user information satisfaction (UIS) as "a perceptual or subjective measure of system success". This means that user information satisfaction will differ in meaning and significance from person to person: users who are equally satisfied with the same system according to one definition and measure may not be equally satisfied according to another.

Several studies have investigated whether or not certain factors influence UIS; for example, those by Yaverbaum (1988) and Ang and Soh (1997). Yaverbaum's (1988) study found that people who use their computer irregularly tend to be more satisfied than regular users. Ang and Soh's (1997) research, on the other hand, could find no evidence that computer background affects UIS.

Mullany, Tan and Gallupe (2006) do offer a definition of user satisfaction, claiming that it is based on memories of the past use of a system; motivation, they suggest, is conversely based on beliefs about the future use of the system (Mullany et al., 2006).

The large number of studies over the past few decades, as cited in this article, shows that user information satisfaction remains an important research topic despite somewhat contradictory results.

A lack of theoretical underpinning

Another difficulty with most of these instruments is their lack of underpinning by psychological or managerial theory. Exceptions are the model of web site design success developed by Zhang and von Dran (2000) and the measure of user satisfaction with e-portals developed by Cheung and Lee (2005). Both of these models drew upon Herzberg's two-factor theory of motivation, and their factors were consequently designed to measure both 'satisfiers' and 'hygiene factors'. However, Herzberg's theory is itself criticised for failing to distinguish adequately between terms such as motivation, job motivation and job satisfaction. Islam (2011), in a more recent study, found that the sources of dissatisfaction differ from the sources of satisfaction: environmental factors (e.g., system quality) were more critical in causing dissatisfaction, while outcome-specific factors (e.g., perceived usefulness) were more critical in causing satisfaction.

Computer User Satisfaction and Cognitive Style

A study by Mullany (2006) showed that during the life of a system, user satisfaction on average increases over time as users gain experience with the system. Whilst the overall findings showed only a weak link between the gap in the users' and analysts' cognitive styles (measured using the KAI scales) and user satisfaction, a more significant link was found in the regions of 85 and 652 days into the systems' usage. This link shows that a large absolute gap between user and analyst cognitive styles often yields a higher rate of user dissatisfaction than a smaller gap. Furthermore, an analyst with a more adaptive cognitive style than the user at the early and late stages of system usage (approximately days 85 and 652) tends to reduce user dissatisfaction.
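
As a concrete illustration, the quantity examined in such studies is the absolute difference between the two KAI scores. The sketch below assumes the conventional KAI score range of roughly 32 to 160, with lower scores indicating a more adaptive style; the function names and sample scores are hypothetical, not values from Mullany's study.

    # Sketch of the cognitive-style gap discussed above. KAI scores
    # conventionally fall between about 32 and 160; lower scores indicate
    # a more adaptive style, higher scores a more innovative one.

    def cognitive_style_gap(user_kai, analyst_kai):
        """Absolute gap between user and analyst KAI scores."""
        return abs(user_kai - analyst_kai)

    def analyst_more_adaptive(user_kai, analyst_kai):
        """True if the analyst's style is more adaptive (lower KAI) than the user's."""
        return analyst_kai < user_kai

    # A user scoring 110 and an analyst scoring 85 give a gap of 25,
    # with the analyst the more adaptive of the pair:
    print(cognitive_style_gap(110, 85))    # 25
    print(analyst_more_adaptive(110, 85))  # True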

Mullany, Tan and Gallupe (2006) devised an instrument, the System Satisfaction Schedule (SSS), which utilizes almost exclusively user-generated factors and so avoids the problem of the dating of factors. Also aligning themselves with Herzberg, these authors argue that the perceived usefulness (or otherwise) of tools of the trade is contextually related, making such perceptions special cases of hygiene factors. They consequently define user satisfaction as the absence of user dissatisfaction and complaint, as assessed by users who have had at least some experience of using the system. In other words, satisfaction is based on memories of the past use of a system; motivation, conversely, is based on beliefs about the future use of the system (Mullany et al., 2007, p. 464).

Future developments

Currently, some scholars and practitioners are experimenting with other measurement methods and further refinements of the definitions of satisfaction and user satisfaction. Others are replacing structured questionnaires with unstructured ones, where the respondent is asked simply to write down or dictate all the factors about a system which either satisfy or dissatisfy them. One problem with this approach, however, is that such instruments tend not to yield quantitative results, making comparisons and statistical analysis difficult. Moreover, if scholars cannot agree on the precise meaning of the term satisfaction, respondents are highly unlikely to respond consistently to such instruments. Some newer instruments contain a mix of structured and unstructured items.
