
thinking and integration in the different studios as a basis for comparison. The questions thus ask about the individual's abilities within multiple technical aspects and the individual's opinion about his/her own level of holistic thinking. Conclusions about the actual level of holistic approach of the individuals are thus made possible. The survey is distributed among multiple studios, and the collected data forms a structured data set, enabling the creation of a 'profile' of each studio.

Intro

Studio A is one of the larger studios in Europe and is certainly a well-renowned brand. The brand is known for being a research company with broad knowledge of sustainability, but due to the size of the company, the working methods might differ between offices, and the brand might hence be a purely commercial factor. It is therefore interesting to compare the different Studio A offices for disparities in design decisions, as well as to isolate the working methods and knowledge of environmental factors in the Studio A offices from those of other studios. The following sections contain a description of the general questionnaire theory applied in the construction of questions and the general design of the questionnaire.

This is followed by the analysis results in the form of an architectural studio ‘profile’.

These results are compared to general tendencies among a number of larger Copenhagen-based studios, in order to provide context for the findings.

The investigation of the Studio A offices is part of a larger research project in collaboration with a PhD student at the Technical University of Denmark, Mathilde Landgren, aiming to map design processes at different architectural studios in Copenhagen.

Since the Copenhagen office can be investigated separately from the other Studio A offices, it is possible to compare the working methods of Studio A to those of other Copenhagen-based studios, with which it would normally be associated. The questionnaire is thus conducted both at the Studio A offices across Scandinavia and at larger Copenhagen-based studios. The external studios are the nearest "competitors" of the Copenhagen office, and it is therefore interesting to see similarities and differences across company borders – especially compared to the brands of the different studios. Some of the external studios are known for their integration of different disciplines, others for their traditional architectural approach.

The responses to the questionnaire will result in a studio profile, focusing on the early design phases and the implementation of environmental factors in design decisions.

The research is ongoing and will be finalised in the second half of 2017. The findings presented in the following are thus only preliminary. At the current stage, it is only possible to see tendencies in the different architectural studios; an elaborate analysis of the questionnaire responses is, however, not included in this thesis project.

The questions in the questionnaire regard the individual employee's general understanding of and experience with engineering knowledge, how they normally implement it, and in which phases of the building design.

The questionnaire is built on the following structure:

1. The employee's own evaluation of their cooperation across disciplines
2. The employee's knowledge, experience and view on engineering expertise
3. Categorisation of the employee, e.g. architect/engineer, how many years of experience, etc.

The first category concerns the employee's own view on their multidisciplinarity, both personally and at studio level. The second category investigates the subject areas individually. The focus of this thesis project is limited to the environmental factors of microclimate comfort, daylight and energy performance. However, due to the scope of the collaboration with PhD student Mathilde Landgren, questions about LCA and LCC are also included, which contribute to this study with a broader angle on sustainable design and hence more factors to consider. All disciplines are affected by the overall geometry of the building, which is why it is interesting to investigate whether there is a connection between the answers and whether some elements are favoured.

From the first category, respondents will answer based on their idea of multidisciplinary work, both personally and at studio level. From the second category, respondents will answer based on their experience and knowledge of different environmental factors, which is part of multidisciplinary work, as the factors affect the overall geometry when implemented.

The last category sorts each employee based on their profession, seniority, age etc. This category is placed last, so the first impression of the questionnaire is not about standardised information but about the individual's knowledge, showing interest in the respondents rather than in their personal information. The full questionnaire can be found in appendix H.

Theory and method

The formulation of the questions and the overall structure of a questionnaire are essential for the possible answers and thus the quality of the gathered data. If the quality of the data is low, it will form a poor foundation for the analysis and the conclusions to be drawn (Jensen & Knudsen, 2014). The construction of questions and the general design of the questionnaire influence the possible outcome. The following sections therefore explain the questionnaire design choices, and the underlying theory, based on the desired outcome.

The properties of the collected data are dependent on the type of questions asked to the respondents, and the scale on which the respondent can answer (Jensen & Knudsen, 2014).

Question types

Questions in questionnaires can be either closed or open. The most common kind of question in questionnaires is the closed question, where the respondent is given a fixed list of possible answers to the respective question.

Closed questions ensure uniformity and ease both the answering and the processing time (Jensen & Knudsen, 2014). A further strength of these kinds of questions is that the list of possible answers might help the memory of the respondent. The greatest weakness is that the list might not be adequate, so the respondent will not be able to give the answer that is most correct for her/him. This means that some aspects might be invisible in the data, as the questioner has decided beforehand what they think the respondents might answer.

Open questions do not have an obvious direction of answers, which is why the answers are more qualitative, as they are fully the respondent's own opinion. By asking open questions, the answers will be diverse and hence almost impossible to categorise and analyse. There are therefore no open questions in the questionnaire, as they cannot be quantified.

Answer types and scale

There are four different kinds of scales for closed questions in questionnaires: nominal, ordinal, interval and ratio. The nominal scale is the lowest scale level and the ratio scale the highest, in terms of the analysis possibilities they allow.

The nominal and ordinal scales are so-called categorisation scales, which allow no other analysis than summing up and finding percentages. They are not adequate for finding the average or the standard deviation, which are the building blocks for further data processing. The interval and ratio scales give more analysis possibilities, as there is more depth to the answers.

Nominal Scale

The data properties of the nominal scale are numerals or other symbols, which express the category of the measured property. Questions with nominal answers are questions such as the determination of the respondent's gender. The question is closed, and the numerical assignment of the categories is arbitrary and gives no information about the interrelationships of the categories (Jensen & Knudsen, 2014). The nominal scale is therefore not really a scale, but simply assigns labels to distinguish categories.

Ordinal Scale

The data properties of the ordinal scale are numerals, which indicate a ranking of the measured properties, but nothing about the absolute difference between the categories (Jensen & Knudsen, 2014). Many closed questions about people's opinions are measured on an ordinal scale. Ordinal answers divide observations into categories, like the nominal scale, but the categories are also given values with a ranking. This gives the same statistical opportunities as the nominal scale, but as the observations can be set up in an ordered sequence, it is possible to calculate additional statistics in the data processing, such as cumulative percentages.
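
As an illustration of the extra processing that ordered categories allow, the following is a minimal Python sketch of a cumulative-percentage calculation. The answer categories and responses are invented for illustration and are not taken from the questionnaire.

```python
from collections import Counter

# Hypothetical ordinal answers to a question such as
# 'How often do you work with daylight?' (never < rarely < often < always).
ORDER = ["never", "rarely", "often", "always"]
answers = ["often", "rarely", "often", "always", "never", "often", "rarely", "always"]

counts = Counter(answers)
total = len(answers)

cumulative = 0.0
for category in ORDER:
    share = 100 * counts[category] / total  # percentage in this category
    cumulative += share                     # running total over the ordered categories
    print(f"{category:<7} {share:5.1f}%   cumulative {cumulative:5.1f}%")
```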

Interval Scale

The data properties of the interval scale are numerical values, which express relative differences between the measured properties, i.e. the distances between numerical values are equal, but the zero point is arbitrary. This scale has all the same characteristics as the ordinal scale, but the distance between the categories is the same. This property is a prerequisite for calculating the mean and standard deviation (Jensen & Knudsen, 2014).

With the interval scale, it is possible to merge the answers into intervals and thereby achieve a smaller number of categories with more respondents in each category. This eases the analysis work and gives greater clarity.

The data from the interval scale can be processed using the mean value, standard deviation, regression and many other methods (Jensen & Knudsen, 2014). In order to process the data, ordinal-scale answers are often regarded as equally spaced, as this is required for statistical data analysis (Jensen & Knudsen, 2014).
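
As a minimal sketch of this assumption in practice, the ordinal answers below are coded as the numbers 1-5 and treated as equally spaced, so that the mean and standard deviation can be computed; the coding and data are invented for illustration.

```python
import statistics

# Hypothetical 1-5 answers to a single question, with the ordinal categories
# coded as numbers and assumed to be equally spaced for the analysis.
responses = [4, 5, 3, 4, 2, 4, 5, 3]

mean = statistics.mean(responses)    # only meaningful under the equal-distance assumption
stdev = statistics.stdev(responses)  # sample standard deviation

print(f"mean = {mean:.2f}, standard deviation = {stdev:.2f}")
```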

Ratio Scale

The data properties of the ratio scale are the same as for the interval scale, but with a fixed zero point. This makes it possible to calculate ratios and relative differences between the categories.

Yet, with the ratio scale it is not possible to categorise the answers without downgrading the data quality to the lower 'interval scale' level; there must be answer options for all possibilities, e.g. 1 year, 2 years, 3 years etc. Categorising the answers, on the other hand, shortens the answering time for the respondents as well as eases the analysis (Jensen & Knudsen, 2014).

Reliability and validity

Questionnaires can be compared to experiments: a given hypothesis must, under the same conditions, give the same answer. In experiments, it is often possible to conduct the experiment repeatedly and test the outcome. This can be done with questionnaires as well, where the same investigation is repeated on the same respondents at different times, within a timeframe where the respondents would not be expected to change their opinion. Yet, this is rarely possible due to practical and economic factors (Jensen & Knudsen, 2014). An alternative method, which is more commonly used in questionnaires, is to have the respondent answer two or more questions which are expected to produce consistent answers. Subsequently, the reliability of the measurements can be assessed based on the correspondence between the answers to these questions. Measurements based on multiple questions regarding the same measurement object are also called measurements with combined scales (Jensen & Knudsen, 2014).
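
A minimal sketch of how such a correspondence could be checked is given below, using a simple Pearson correlation between two questions that are expected to measure the same thing. The question pairing and the answers are invented for illustration and are not taken from the questionnaire.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equally long answer lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired answers (same respondents, 1-5 scale) to two questions
# intended to measure the same underlying attitude.
question_a = [4, 5, 3, 4, 2, 5, 4, 3]
question_b = [4, 4, 3, 5, 2, 5, 4, 2]

# A correlation close to 1 indicates consistent answers, i.e. a reliable combined scale.
print(f"correlation = {pearson(question_a, question_b):.2f}")
```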

Questions regarding people's gender, seniority and which phases they normally participate in are fairly easy to measure and categorise as 'truthful', as the questions are straightforward and the answers objective. But when the questions start to become abstract and not directly observable, the answers must be treated more critically. This applies, for example, to questions regarding people's opinions or work methods, which are subjective. It must be clear whether the data from these kinds of questions really reflect what they are intended to measure (Jensen & Knudsen, 2014). Reliability is a prerequisite for obtaining validity, since you cannot have accurate measurements if the measurements are influenced by coincidences.

Reliability

Reliability is an indicator of how stable the measurements are, i.e. whether they are free from random measurement errors. If the measurement is reliable, it is consistent, and the questions used in the questionnaire will yield the same type of information each time they are used under the same conditions.

The main reason for random measurement errors is misinterpretation of the question or the answer options by the respondent. This might be due to the wording in the questions or answering options, which can be ambiguous or unclearly formulated. If the question is not clear, the respondent will apply her/his own interpretation (Jensen & Knudsen, 2014).

An example of an ambiguous formulation, where the possible answering options might be on a 1-5 scale, is 'What is your knowledge of daylight?'. Two respondents might both answer that they have knowledge of daylight design, but their motives and competencies can be far from the same. One respondent might mean "I am aware that you can design with daylight", another "I design with daylight by using this and that tool", and both will mark the same value on the scale. The same applies to the answering options, where the choice of wording can lead to different perceptions. The question 'How often do you work with daylight?' can have the possible answering options never, rarely, often and always.

Some respondents will answer that they work with daylight often when they have addressed daylight quality once across many projects, while others will answer that they work with it often when they actually always work with daylight, but know of others who do it more. They might also have different understandings of what working with daylight means: do they look at shadows, at glare, or at the daylight factor? Do they use intuition or simulations? If the questionnaire is reliable, the questions combined with the answering options will not have different meanings depending on the interpretation of the respondent.

An example of how reliability is included in the questionnaire is the daylight question above. For the question to have the same meaning regardless of the respondent, it was divided into several questions such as 'How experienced are you with daylight?', 'How important is daylight in regards of quality of the design?' and 'How much does daylight influence your design decisions in the different phases?'. By dividing the question into several, the meaning is much clearer for the respondent; hence the quality of the answers is higher and therefore more suitable for further analysis.

Validity

Validity is an expression of the extent to which the measurements actually measure what they are intended to measure. A questionnaire is not necessarily valid just because it is reliable in the sense that it produces the same result over repeated measurements.

For example, when asking about people's opinions on and applications of sustainability in design, there will most likely be a tendency for respondents to overestimate or even exaggerate their responses in a direction that reflects that they are very aware of sustainability in their design decisions. The characteristic of validity problems is that the deviation from 'the truth' of what is measured follows a systematic pattern.

Respondents might give a more positive image of themselves and the studio through an overestimation. These systematic patterns of deviation are not necessarily an issue if they are identified: knowledge of the tendencies within the deviations, and where they appear, makes it possible to take them into account when interpreting the results (Jensen & Knudsen, 2014).

There are several approaches to evaluating the validity of a measurement: face validity, content validity and criterion validity.

Face validity

Does the measure look valid? This is the simplest form of validity, which is a matter of appearances. Face validity refers to the transparency and relevance of the questionnaire, and how it appears to and is perceived by the respondents. It is not a measurement concept, but the degree to which a question appears to measure what it claims to measure.

The validation is an immediate and subjective assessment of whether the content of the question actually measures what it is intended to measure. The assessment is based on whether the questionnaire looks like it will work, not on whether it has proven to work in hindsight. The respondents might recognise the type of information they are responding to and answer accordingly. The advantage of this is that the respondents might use the context to interpret the questions and thereby give more in-depth and accurate answers. The disadvantage, however, is that the respondents might also try to adjust their answers to what they think is the desired outcome. Respondents might even adjust their answers to try to make themselves appear better.

A face validity assessment is done by merely looking at the graphical appearance, the questions and the answering options. In the questionnaire, there is a DTU logo and the title is 'Architectural Engineering'. The respondents will therefore instantly have an idea that they will be 'evaluated' on their technical skills and sustainable character, and they will be biased by the presentation of the questionnaire. The respondents might try to give the answers they think the interviewer is looking for. It is assumed that this can push the answers towards a more multidisciplinary overall picture of each studio. Yet, it is a recurrent bias across all studios and will therefore not affect the comparison, even though it might give a slightly skewed result when looking at one studio in isolation. The questionnaire could have been presented in a more neutral way, without an explanation of sender or reason, but this was assessed to lower the response rate, as the output and context would not be clear (Brinkman, 2009).

Another pitfall regarding validity is that the persons who fill out the questionnaire might be more interested in the topic, and are therefore not representative of the regular employee at the studios. This would make the results more positive than if everyone from each studio had replied.

Content validity

Does the measure capture the full content of the questionnaire? This is the next step in validity, where an expert panel reviews the questionnaire to find out whether it measures what is intended and no other variables. For example, if the questions about daylight were written in complicated phrases, the questionnaire might be a test of reading comprehension rather than a measure of the respondent's knowledge of daylight (Brinkman, 2009; Greco, Walop, & McCarthy, 1987). For this questionnaire, several people with different professions (two Cand. Arch. students, one Cand. Merc., one PhD student in Architectural Engineering at DTU and the head of marketing and sustainable development at Studio A) commented on its content, wording and meaning before it was released.

Criterion validity

Does the measure agree with other valid sources? This is a way of validating the questionnaire by comparing it to other studies (Brinkman, 2009; Greco et al., 1987). This is not possible for this questionnaire, as similar research has not previously been conducted and is therefore not available. However, the results of the questionnaire are compared to the observations made during the case study, where the conclusions can support or disprove each other. The findings of the questionnaire will also be a topic in the focus group.

Questionnaire design

All questions are closed questions, except numbers 32 and 37, which are a comment box on the respondent's studio office and location and a 'general comments' box. In addition, each of the environmental topics in the questionnaire has a comment box, where the respondents can elaborate on who they work with and how they work with the different elements, if the answering options are not sufficient.

The table on the following page shows the type of scale for each question. When a question is marked with two scale types, it is because further elaboration is needed to categorise the scale.

When a question is marked with both the ordinal and the interval scale, the intervals between the possible answers might not be exactly equal, but in order to process the data they are assumed to be (Jensen & Knudsen, 2014).

When a question is marked with both the ratio and the interval scale, the question could have been on a ratio scale if the answers had not been divided into intervals. These divisions lower the quality of the data but simplify the categories. This is done to ease the answering time for the respondent as well as to give a quick and clear overview when processing the data, as the answers are already categorised.
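
A minimal sketch of such a division is given below, grouping hypothetical ratio-scale answers (years of experience) into intervals; the interval boundaries and data are invented for illustration and do not correspond to the questionnaire's actual categories.

```python
# Hypothetical ratio-scale answers: years of experience per respondent.
years = [1, 3, 7, 12, 2, 25, 9, 15]

# Grouping the exact answers into intervals downgrades the data from ratio to
# interval level, but gives fewer, pre-categorised groups to process.
bins = [(0, 4, "0-4 years"), (5, 9, "5-9 years"), (10, 19, "10-19 years"), (20, float("inf"), "20+ years")]

counts = {label: 0 for _, _, label in bins}
for y in years:
    for low, high, label in bins:
        if low <= y <= high:
            counts[label] += 1
            break

for label, count in counts.items():
    print(f"{label:<12} {count}")
```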

Multi-item scales, where two questions cross-reference each other, are used throughout the questionnaire to limit uncertainties in the answers. For example, questions 9 and 10 about "How many of your design decisions are influenced by …?" cross-reference with questions 14, 18, 22, 26 and 30 about "How much does … influence your design decisions in the different phases?".

The interval scale is the most frequent in the answering options, as it eases the process of analysing the results. Many questions regard the respondents' way of working, and by asking on a scale instead of 'yes/no', it is possible to get a much more detailed insight. For example, when asking about who the respondents collaborate with in the different phases, a 'yes/no' answer would not be sufficient, and a lot of interpretation and assumptions would go into analysing the results. Therefore, an interval scale with descriptions of the degree of collaboration is given.

See appendix H and I for a full table of questions and multi-item scales.