
5.5.2 The different rubrics

In general the students used the rubrics as a checklist when they handed in their solutions, and in general they preferred more information. When asked to rank the three different rubrics, most of the students preferred case C (see Figure 17), as one of them said:

My ranking is based on the fact that rubric C had the most detailed description

Figure 17 Which rubric did you prefer for solving the case?

From the interviews, however, it is clear that the students did not have a very precise understanding of the differences: when we asked them about the different rubrics, many of them found it difficult to remember what the differences were. In the questionnaire the students were presented with the different rubrics to help them remember.

Only a few of the students noticed the difference in the importance of the criteria in case B (see Figure 9), and it had no impact on their solutions.


We conclude that the students in general were very positive about the use of rubrics as a tool for making the criteria more precise; they used them as a checklist, and the more information the better. The rubrics did not have a big impact on their work during the case, however, neither as a guide nor as a source of discussion.

5.6 Feedback

5.6.1 Rubrics in general

When the teacher gave feedback, the rubric was used to structure the feedback. But did the students read the feedback? As can be seen from Figure 18, all students read the feedback, and 92 % read it one or several times for each case. Furthermore, the students used the feedback to improve their following cases: from Figure 19 you can see that only 7 % of the students did not use the feedback to improve.

Figure 18 Did you read the feedback?

Figure 19 Did you use the feedback to improve the following solutions?

As described in section .., the rubrics were used directly for feedback. The teacher found this to be a good way to give feedback, and the students agreed: in Figure 20 the students indicate that they received much more feedback in this course than in others, and from Figure 21 you can see that more than ¾ of the students found the quality better.

Figure 20 In relation to other courses in your study program, how did you find the amount of feedback in this course?

Figure 21 In relation to other courses in your study program, how did you find the quality of the feedback in this course?

5.6.2 The different rubrics

We experimented with different weights for different criteria to make the students focus on the most relevant things. From Figure 22 we can see that the students did notice the difference. From the interviews, however, we can conclude that the students did not use the weights in the intended way: as described above, the students used all the criteria as a checklist rather than as a guide for deciding where to spend the most time.

Figure 22 Did you notice the different points for the criteria?
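To illustrate how such weights could combine into a total score, here is a minimal sketch in Python; the criterion names, point values, and achievement fractions are hypothetical examples and are not taken from the actual rubrics used in the course:

```python
# Minimal sketch of weighted rubric scoring; names and numbers are
# hypothetical, not the actual weights used in case B.

criteria_points = {
    "model choice": 30,            # higher weight: intended focus area
    "calculation": 20,
    "units and presentation": 10,  # lower weight: less time expected
}

# Teacher's assessment: fraction of each criterion achieved (0.0 to 1.0).
achieved = {
    "model choice": 1.0,
    "calculation": 0.5,
    "units and presentation": 1.0,
}

score = sum(points * achieved[name] for name, points in criteria_points.items())
maximum = sum(criteria_points.values())
print(f"Score: {score:.0f} of {maximum} points")  # Score: 50 of 60 points
```

The intention behind such weighting is that a student seeing 30 points on one criterion and 10 on another would allocate effort accordingly; as described above, the students instead treated all criteria alike, as items on a checklist.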

When asked which rubric they liked best from a feedback perspective, there is no clear pattern; the students' preference is almost uniformly distributed across the three rubrics. When asked why they found a particular rubric the best, in all cases it had to do with the amount of individual feedback. As one student says:

There was the most feedback with case B; in case C some of the criteria were not individually commented but just "dotted out". Feedback on the first case [without rubric] was also good. The schema gives a good overview and a better understanding of the total points.

6 Discussion

Students favor detailed criteria. It is clear from this research that the rubric they liked best was the last one, where they had a more detailed description of each of the criteria. However, a university degree should also make the students more independent, so that they can solve a typical engineering problem on their own.

Giving the students a detailed rubric could be seen as a way to reduce this independence. Consequently, it is important to adjust the level of detail to their expected level of proficiency in the case they solve. The students in the study had not taken courses in thermodynamics before, so detailed rubrics were seen as appropriate. Naturally, it is debatable how detailed the rubrics should be and whether it is obvious to the students when their solution is at a given level. The most detailed rubric was inspired by the formulations in the general Danish marking scale and its descriptions of the levels of proficiency. It is important to bear this in mind; to draw more general conclusions we would need many more different rubrics and would have to investigate, e.g., whether the students prefer individual feedback over feedback based on different rubrics.

Learning outcomes define the learning goals of a given course, and when creating the rubrics it is important to align the criteria with the related learning outcomes. It is also important to choose the most important course elements, the elements where the students are most challenged. When designing the cases it is very important to consider whether the case must be a real case, drawn from real problems, or whether it can be constructed by the teacher to fit exactly what he wants the students to learn. Real problems might be more motivating, but they may also include elements that are not part of the learning outcomes. The formulation is equally important: do we supply a guide for the students' calculation, so that they do not need to go through the considerations of which model to choose and which strategy to follow, or do we present the text as an open problem?


This research did not cover the teacher's perspective. The teacher indicates, however, that the use of rubrics in this course also gave him a clearer picture of the students' challenges and strengths, as well as a more structured way to give feedback to the students.

7 Conclusion

As concluded by many others: Feedback is useful and appreciated by the students. In this course the students found the amount of feedback to be higher than what they normally experience in their study. One student put it this way:

There is much feedback in this course compared to others. Normally we do not get feedback, which we do not like, and normally we do not have so many mandatory assignments, which was good. I think it is easier to get the right overview with these matrices.

The students like to have a detailed description of their assignment. The two rubrics that supplied the most information were liked best by the students. Nonetheless, personal feedback was preferred even more: when the students were asked about feedback given by indicating the quality of their solution on a detailed scale, they preferred the individually written comments.

When they solved the cases, the students initially used the rubrics as guidelines. They did not use the levels of achievement, just the different criteria. The different weights indicating the importance of the criteria were not used either. When the students handed in a case, they used the criteria to check that everything was present in their solution.

