CHAPTER 4. RESEARCH APPROACH
4.4 Empirical Components of the Thesis
4.4.2 Online Survey Questionnaire
In view of the sample selection of webmasters, one implication could be that the interviewed webmasters represent elite websites, namely websites that have received national awards. The value of the study would have been greater if webmasters from other backgrounds had also been interviewed. The analysis would likewise have been strengthened if we (the research team) had included an additional round of interviews and follow-up questions on specific issues relevant to website quality, such as the degree to which webmasters are free to explore their own design ideas, or whether they are merely expected to perform the job assigned to them. This could be explored in future research. Regarding the use of webmasters in award-winning organisations as respondents, it is also important to be aware that the evaluation criteria used in these awards do not necessarily reflect and measure quality in a satisfactory way. The webmasters might also focus on quality aspects other than those covered by the awards, aspects that are perceived as more critical from a user's point of view. We can only speculate as to whether these aspects (suggested by the webmasters) are covered by the website awards. In addition, each of the awards has its own focus, which largely shapes the evaluations (the quality criteria and the methods applied during the evaluation process).
The survey was directed at webmasters in public sector organisations. When studying the quality and success of websites, views can be elicited from different personnel in an organisation, and this study adopted a webmaster's perspective. Webmasters (or persons in similar positions) are acquainted with, and therefore knowledgeable about, the organisation's website with respect to technical aspects as well as design issues and content quality.
For the development of the survey questionnaire, various types of measurement scales were considered. Hair et al. (2010) distinguish between two types of scales: nonmetric (qualitative) and metric (quantitative) data. Nonmetric data includes nominal and ordinal scales (constant sum, rank order and sorting), and metric data includes interval and ratio level scales (Likert-type, numerical, semantic differential and graphic ratings). The design of the survey instrument in this study drew on the operationalisation of the DeLone and McLean IS success model, adjusted to an eGovernment context. Each construct was operationalised as a set of questions to be answered by the respondents. The respondents were presented with a Likert-type scale, with the following alternatives for each question: very low degree, low, medium, high, very high degree. In addition, they were given the opportunity to write comments in an open text field, in order to elaborate on their answers and provide supplementary information.
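To illustrate how responses on such a Likert-type scale can be prepared for analysis, the following minimal sketch (in Python; the function and variable names are hypothetical and not taken from the actual survey instrument) maps the five verbal alternatives to ordinal codes and treats blank or "not applicable" answers as missing values rather than forcing them onto the scale.

```python
import pandas as pd

# Mapping from the verbal Likert alternatives used in the survey to ordinal codes (1-5).
LIKERT_CODES = {
    "very low degree": 1,
    "low": 2,
    "medium": 3,
    "high": 4,
    "very high degree": 5,
}

def code_likert(responses):
    """Convert a list of verbal responses to ordinal codes.

    Anything outside the five alternatives (e.g. "not applicable" or a blank
    answer) is coded as missing (NaN) instead of being forced onto the scale.
    """
    return pd.Series(
        [LIKERT_CODES.get(str(r).strip().lower()) for r in responses],
        dtype="float",
    )

# Hypothetical example: three respondents answering one information-quality item.
item_iq1 = code_likert(["high", "not applicable", "very low degree"])
print(item_iq1.tolist())  # [4.0, nan, 1.0]
```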
A pilot test of the survey questionnaire was conducted in June 2010 among the finalists of the European eGovernment Award held in November 2009. The aim was to test the questionnaire, which investigated investments in website quality and the achievement of benefits. Since all 52 finalists could arguably be viewed as winners of the award, as they had all made considerable investments in website/service improvements, this specific population was of particular interest.
Even though the most important aim in this phase was to obtain feedback in order to improve the questionnaire, it was also instructive to investigate the extent to which the finalists invested resources in website improvements, the degree to which they were acquainted with their users and, finally, the methods by which they evaluated the outcome of the investments made in website improvements. The pilot survey contained a total of 26 questions, divided into six main categories. Each question had a number of sub-questions. Most of the questions had fixed response alternatives, while some categories also included open-ended questions. The respondents could therefore contribute their own comments and interpretations of the questions, which was essential for improving the questionnaire. No notable comments were received.
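As a rough illustration of this structure (categories containing questions with sub-questions, fixed alternatives and, for some categories, open comment fields), the sketch below represents the questionnaire as simple Python data classes. The category names and question texts are hypothetical placeholders, not the actual wording of the 26 pilot questions.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    sub_questions: list = field(default_factory=list)  # a question may have several sub-questions
    alternatives: list = field(default_factory=list)   # fixed response alternatives, if any
    open_ended: bool = False                            # open comment field, where applicable

@dataclass
class Category:
    name: str
    questions: list

# Illustrative categories only; the real pilot had six categories and 26 questions.
pilot = [
    Category("Investments in website improvements", [
        Question(
            "To what degree has your organisation invested in website improvements?",
            alternatives=["very low degree", "low", "medium", "high", "very high degree"],
        ),
    ]),
    Category("Evaluation of outcomes", [
        Question("How do you evaluate the outcome of these investments?", open_ended=True),
    ]),
]
```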
With reference to the distribution of the final survey, the e-mail addresses of the respondents were acquired by visiting each website. All of the respondents had participated in a web award contest arranged by the governments of Norway and Denmark in 2009. The survey was distributed (N = 1,237) in the second week of November 2010, and all responses had to be submitted by December 2010. The respondents received an e-mail with an introductory letter that informed them about the purpose of the study, and a link (URL) to the online questionnaire.
A fortnight later, a second e-mail was circulated as a reminder to all of the respondents. Those who had already participated were thanked for their participation, and those who had not answered the questionnaire were encouraged to complete the survey within a week. The survey was closed after four weeks with 541 usable responses, representing a response rate of 44 percent. Prior research has shown that financial incentives affect response rates (Frick et al., 2001). In this study, it was not possible to use financial incentives, as any form of private compensation to public officials is prohibited under Norwegian and Danish law. To strive for a higher response rate, respondents were instead offered a report (a summary of the survey) as compensation. This had a positive effect on the number of respondents who participated in the survey. The summary report was mailed out to the respondents during the spring of 2011. In relation to the analysis of the data, descriptive analyses were performed with regard to information quality, system quality and service quality (Paper 3). In addition, Pearson correlation analysis was conducted to explore the relationships among the constructs of website success and the impact of the frequency of user testing (Paper 4). The tools applied during the analysis were Microsoft Excel and the Statistical Package for the Social Sciences (SPSS).
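The actual analyses were carried out in Excel and SPSS; as an analogue, the sketch below shows how the same two steps (descriptive statistics and bivariate Pearson correlations) could be reproduced in Python. The data and column names are hypothetical placeholders for the success constructs, not the real dataset or variable names.

```python
import pandas as pd
from scipy import stats

# Hypothetical coded survey data: one row per respondent, constructs scored 1-5.
df = pd.DataFrame({
    "information_quality": [4, 3, 5, 2, 4],
    "system_quality":      [3, 3, 4, 2, 5],
    "service_quality":     [4, 2, 5, 3, 4],
    "user_testing_freq":   [2, 1, 4, 1, 3],
})

# Descriptive analysis: count, mean, standard deviation, min/max per construct.
print(df.describe().round(2))

# Pearson correlation between one success construct and the frequency of user testing,
# analogous to the bivariate correlations explored in Paper 4.
r, p = stats.pearsonr(df["information_quality"], df["user_testing_freq"])
print(f"r = {r:.2f}, p = {p:.3f}")

# Full Pearson correlation matrix across all constructs.
print(df.corr(method="pearson").round(2))
```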
There are several advantages of using online surveys which are worth mentioning, such as the fact that data can be collected relatively quickly and it is possible to reach numerous respondents within a short period of time (Bourque and Fielder, 1995). Most likely, the respondents also appreciate the opportunity to remain anonymous, and they can answer the questionnaire whenever they find it convenient (within a given time period).
Regarding the online survey conducted in this study, time and effort were spent following up with reminders. The reason for this was that the tool used in this study (SurveyMonkey®) did not permit sending out online surveys to respondents who had not been approved in advance. Therefore, the survey was distributed from my personal work e-mail account, with a link to the survey questionnaire included in the message. As I had no way of knowing which respondents had answered the survey and which had not, the reminder was accordingly addressed to all respondents: I thanked those who had already participated in the survey and addressed a friendly reminder to those who had not.
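As a simple illustration of this manual reminder step, the sketch below uses Python's standard smtplib and email modules to send the same message, containing the survey link, to every collected address, since completion status was unknown. The SMTP server, sender address and survey URL are placeholders, not the actual addresses used in the study.

```python
import smtplib
from email.message import EmailMessage

# Placeholder values: not the actual server, sender or survey link used in the study.
SMTP_SERVER = "smtp.example.org"
SENDER = "researcher@example.org"
SURVEY_URL = "https://www.surveymonkey.com/r/example-survey"

def send_reminder(recipients):
    """Send the same reminder to all recipients, because it is unknown
    who has already completed the questionnaire."""
    body = (
        "Thank you if you have already completed the survey.\n"
        "If not, we kindly ask you to do so within one week:\n"
        f"{SURVEY_URL}\n"
    )
    with smtplib.SMTP(SMTP_SERVER) as server:
        for address in recipients:
            msg = EmailMessage()
            msg["Subject"] = "Reminder: survey on website quality"
            msg["From"] = SENDER
            msg["To"] = address
            msg.set_content(body)
            server.send_message(msg)
```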
With regard to this survey, there was no guarantee that everyone who responded understood the questions and could relate to them, although it was possible to answer "not applicable" to each question if the respondent considered it irrelevant. To ensure that the questions were understandable and relevant to public websites, two face-to-face meetings with experienced webmasters were held before distribution. At these meetings, the questionnaire as a whole was reviewed, and the webmasters were given the opportunity to comment on each question and on the use of the measurement scale in the survey. The comments provided were primarily related to the formulation and meaning of the questions. The meetings with the webmasters did not lead to major changes or modifications in the questionnaire, apart from a few additional questions which were included in the survey. This feedback gave me confidence that the questions were easy to understand, relevant and meaningful.
In terms of the individuals who ultimately responded to the survey, it was not possible to ensure that the questionnaire was answered by the webmasters themselves (or persons in similar positions), even though this was explicitly stated in the e-mail. However, this is a challenge which must always be considered when conducting such surveys, as the researcher has no control over who actually fills out the questionnaire. This is an especially significant issue for online survey questionnaires, where the respondents are in most cases located elsewhere, separate from the researcher.
A further disadvantage of online surveys is that there is no opportunity to pose follow-up questions or to clarify questions the respondents may perceive as ambiguous. Although it was relatively easy to collect, collate and compile the quantitative data, considerable time was devoted to designing the study, developing the questions and conducting pilot tests.