CHAPTER 5. FINDINGS AND DISCUSSION
5.2 Summary of Findings in Papers 1-4
In order to answer the overall research question addressed by this thesis (What is website quality and success in public sector websites?), four sub-questions were developed (presented in Section 1.3). Table 6 presents an overview by summing up the main findings of each of the four papers.
Paper 1: Explanation of website quality concerns issues such as:
- User-friendliness of websites in general and usability issues that cover a broad definition of use
- Content quality with regard to correct, relevant and trustworthy information
- Accessibility requirements based upon the WAI principles and standards
- Overall service quality with regard to online information and digital services

Paper 2: Assessment of website quality and user satisfaction:
- Website quality covers highly technical issues
- Users are replaced by experts for quality assessment purposes
- No positive significant correlation between the quality of websites and user satisfaction
- More focus should be put on actual use and user performance

Paper 3: Perceptions of website quality:
- In general, websites in the public sector are perceived to be of high quality
- Potential improvements regarding usability issues, system integration and use of technologies
- Empathy towards website users and their needs and requirements

Paper 4: User testing and constructs of success:
- Low extent of user testing conducted in relation to website quality improvements
- No advanced or sophisticated methods applied for testing
- Frequency of user testing positively affects constructs of website success

Table 6. Main findings in Papers 1-4.
Before the cross-paper analysis carried out in this thesis is offered, each of the papers is presented in turn:
Paper 1: Hanne Sørum, Kim Normann Andersen and Torkil Clemmensen,
“Website quality in government: Exploring webmasters’ perception and explanation of website quality”, Transforming Government: People, Process and Policy, 7(3), 2013, pp. 322-341
Research question addressed: What is a webmaster’s explanation of website quality?
The aim of Paper 1 was to investigate webmasters’ perceptions and explanations of website quality. It offered the opportunity to understand how practitioners (i.e. webmasters) facilitate website quality, grounded in their perceptions and explanations of the aspects of quality they considered important. The findings appeared to be consistent with other studies of webmasters’ views of website quality, although a key contribution of this study was a more detailed and comprehensive list of quality aspects of websites. An analysis of the webmasters’ explanations of website quality revealed 15 different explanations or aspects of website quality. The keywords used to describe website quality were predominantly related to user-friendliness, effective website usage, content-related issues and accessibility (the WAI principles). Moreover, the findings revealed that usability was an important dimension of a broader concept of website quality. In this regard, system quality was important to the webmasters, but quite different aspects of system quality mattered in different types of websites. Information quality was also explicitly mentioned by the webmasters as significant. The eight webmasters stated that information should be easily accessible and locatable; it was therefore imperative not only that users could find what they were looking for, but also that they could understand the information and accept it as beneficial and trustworthy.
Comparing the DeLone and McLean model with the grounded theory model of webmasters’ explanations of website quality, consisting of 15 website quality aspects across different categories of websites, the findings show an overlap with the three quality dimensions of the DeLone and McLean model. The webmasters explained and elaborated upon information quality, system quality and service quality with varying levels of detail and comprehensiveness; all three quality dimensions of the DeLone and McLean model were reflected in each webmaster’s explanation of what was relevant and important in order to offer a high-quality website to the public. We can speculate that these explanations were independent of business domain and user group. Furthermore, compared to the DeLone and McLean model, the webmasters tended to focus their explanations on system quality.
Paper 2: Hanne Sørum, Kim Normann Andersen and Ravi Vatrapu, “Public websites and human-computer interaction: an empirical study of measurement of website quality and user satisfaction”, Behaviour & Information Technology, 31(7), 2012, pp. 697-706
Research question addressed: Are website users more satisfied with high-quality websites than with low-quality websites?
The aim of Paper 2 was to investigate the measurement of website quality in government bodies, and the manner in which the quality of websites affects perceptions of user satisfaction. With respect to the quality criteria and the evaluation process used in yearly rankings of public sector websites, the findings showed that the criteria applied in such evaluations were largely technical, mainly driven by standardised objective criteria connected to technical subjects and aspects linked to system quality. Design issues, traditional usability testing and user experience were essentially omitted from these evaluations as criteria. Users were replaced by experts (consultants), who aimed to take care of the users’ interests through the use of quality criteria. In addition, the findings showed that user involvement was not generally part of the evaluation process, and to the extent that it was, no very sophisticated testing methods were applied. Most of the criteria related to system quality aspects that were easy and uncomplicated to measure objectively, and did not necessarily reflect the aspects of websites that users relate to. Although the content of a website was emphasised in the evaluation process and perceived as important, whether the information presented was accurate and relevant to the users was assessed to a much lower degree. Moreover, on the basis of the annual evaluations performed, user involvement was included in the evaluation process only to a limited extent, although it was perceived to be of great significance for user satisfaction.
Applying an HCI perspective to the evaluation process and the use of quality criteria, the findings revealed considerable potential for future improvement. Given the methods by which quality assessments were conducted within these evaluations, in relation to both the quality criteria and the methods of evaluation, the findings show that users were not necessarily more satisfied with high-quality websites than with low-quality websites. Accordingly, no positive correlation between website quality and actual user satisfaction was found in the present study. This was a surprising, interesting and thought-provoking conclusion, mainly because these criteria were frequently established as essential quality indicators in public sector organisations. Yet, in another sense, the conclusion was not so surprising, considering that the webmasters’ own explanations of website quality were to a higher degree geared toward content, design issues and user-friendliness than the technically oriented evaluation criteria were.
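The kind of association tested in Paper 2 can be illustrated with a simple product-moment correlation. The sketch below uses entirely hypothetical scores (no data from the thesis is reproduced); the function and variable names are illustrative assumptions, not the study's actual instruments:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: expert-assessed quality scores (0-100) for eight sites,
# paired with mean user satisfaction ratings (1-7 scale) for the same sites.
quality = [92, 85, 78, 88, 70, 95, 60, 82]
satisfaction = [4.1, 5.2, 4.8, 3.9, 5.0, 4.2, 4.6, 4.4]

r = pearson(quality, satisfaction)
```

A coefficient near zero (or negative), together with a non-significant test, would mirror the paper's conclusion that expert-assessed quality did not predict user satisfaction.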
Paper 3: Hanne Sørum, “Dressed for Success? Perception of Website Quality Among Webmasters in Government Bodies”, Proceedings of NOKOBIT (Norsk konferanse for organisasjoners bruk av IT), 2012, pp. 63-75
Research question addressed: What is a webmaster’s perception of website quality within government bodies?
The aim of Paper 3 was to investigate perceptions of website quality in government bodies from a webmaster’s perspective. The findings showed that the scores for the different dimensions of website quality (information quality, system quality and service quality) varied. In addition, the results indicated that websites in government bodies were generally perceived to display a relatively high level of quality with regard to information quality, system quality and service quality. With regard to information quality, trustworthy information scored highly in both countries; this related to the extent to which users could rely on the information presented. In addition, the webmasters themselves constantly endeavoured to present updated, current and relevant information to the users. In both countries, however, the adaptation of information to the users’ needs and the level of detail presented were rated as comparatively weak.
The pattern also revealed that, in general, quality dimensions regarding information (content) quality were perceived to be superior to aspects concerning system and service quality, except for issues of trust and security. The overall findings disclosed substantial potential for improvement in system quality with reference to usability, accessibility, and the use and integration of web technologies. However, the findings also showed that public websites were perceived to be secure to use, an important factor in conducting online transactions and handling personal information. For public websites, accessibility requirements were also of particular significance, in order to ensure that all participants (users) in the digital community possessed equal access to online information and services. Thus, it was unexpected to find that the webmasters perceived that their websites did not perform suitably with respect to accessibility requirements. An obvious implication of this point could be that requirements related to accessibility (the WAI principles) were primarily dependent on the use of (website) technologies.
Usefulness and ease of use were very significant factors in ensuring efficient and effective website usage. This study measured usability by the ease with which users could locate information and services on the website and the extent to which the menu structure was considered lucid and comprehensible by the users. The findings demonstrated that there was room for improvement and that more attention should be devoted to website development and quality improvements. On the subject of service quality, users reiterated that the public sector delivered services that were reliable and dependable. Trust was noteworthy in facilitating high-quality interactions between governments and citizens. Regarding service quality, the findings revealed that organisations responded relatively quickly to both specific questions and general inquiries from users. In addition, organisations were, as a rule, helpful and genuinely attempted to solve the users’ problems. The same held true for empathy and trust, which were considered powerful aspects in delivering service quality. To deliver quality in online services, it is imperative to continually meet users’ needs and requirements, and the findings also indicated that users’ expectations were largely fulfilled by organisations.
Paper 4: Hanne Sørum, Rony Medaglia, Kim Normann Andersen, Murray Scott and William H. DeLone, “Perceptions of information system success in the public sector: Webmasters at the steering wheel?”, Transforming Government: People, Process and Policy, 6(3), 2012, pp. 239-257
Research questions addressed: (1) What are the relationships between
constructs of IS success in the public sector, as perceived by webmaster intermediaries? and (2) How does user testing affect these relationships?
The aim of Paper 4 was to investigate the extent of user involvement (testing) in website quality improvements and the relationships among constructs of success.
The findings showed that, based upon the respondents’ answers, over 50 percent of the organisations had not conducted any form of user testing. This clearly demonstrated that many organisations had little or no knowledge of their website users’ satisfaction levels, except through feedback received by mail or through other channels of communication. Less than 20 percent of the organisations confirmed that they had not conducted any user testing for over two years, while about 10 percent answered that they had conducted user testing merely once or twice during the past two years. These results revealed that about 80 percent of the organisations had not performed any testing during the last year, while less than 20 percent claimed that they had tested at least a bare minimum, i.e. one test during the last year (and some of these respondents had conducted tests more often than that).
Likewise, the results confirmed that the most frequently used method in eGovernment environments was the online user satisfaction survey. Compared to traditional usability testing, where typical users solve tasks in real user settings, online surveys were considered a fast and economical method of collecting data and were in many cases very time-effective. Online surveys could reach a large number of respondents in a short period of time, and the results could be accessed easily and immediately. However, it was necessary to be aware of some critical issues with respect to online surveys. The second most frequently used method was user testing with representative users solving realistic tasks. Focus groups and interviews were applied to a lesser extent among the respondents in this study. Acquiring user feedback through telephone interviews and in person (face-to-face) was applied in some cases, while eye-tracking (determining what the user looked at) occurred in only a very few cases. There is potential to increase user involvement in a public sector setting, both in terms of frequency and of the methods applied, while aiming for website success and website quality improvements.
Moreover, the findings revealed that little or no user testing was associated with a perception of weaker correlation between constructs of IS quality (information quality, system quality and service quality) and user satisfaction. This finding suggested that the less webmasters knew about their users (through user testing), the less they tended to see a relationship between IS quality and user satisfaction. The findings also seemed to suggest that webmasters who performed little or no user testing conveniently assumed that citizen users were satisfied, while webmasters who were evidently more knowledgeable about the user experience perceived higher levels of success. This interpretation implied that webmasters who did not conduct user testing were poor judges of user satisfaction and user benefits.
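The moderation pattern described above, where testing frequency changes the strength of the perceived quality-satisfaction link, can be illustrated by splitting respondents into groups and comparing within-group correlations. The sketch below uses entirely hypothetical records and illustrative names (not the Paper 4 data or analysis procedure):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical webmaster survey records:
# (user tests conducted last year, perceived IS quality, perceived user satisfaction)
records = [
    (0, 3.1, 4.9), (0, 4.5, 4.7), (0, 2.8, 4.8), (0, 5.0, 5.0),
    (2, 3.0, 3.2), (2, 4.8, 4.9), (3, 2.5, 2.8), (4, 4.2, 4.5),
]

# Split into "no testing" and "some testing" subgroups and compare the
# quality-satisfaction correlation within each.
no_testing = [(q, s) for t, q, s in records if t == 0]
testing = [(q, s) for t, q, s in records if t > 0]

r_none = pearson(*zip(*no_testing))
r_some = pearson(*zip(*testing))
```

In this toy data the testing subgroup shows the stronger quality-satisfaction correlation, mimicking the reported pattern; a fuller analysis would test the difference between the coefficients rather than just comparing them.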