Lately there has been a lot of chatter about incorporating more pre- and post-surveys into our work as educators. Perhaps this is fresh in my mind after attending an assessment conference, but I found myself asking a lot of questions of presenters who argued that pre- and post-surveys were the best way to measure student learning. The 'best' way? I'm not so sure.
As I have engaged in dialogue with colleagues about why they want to use pre- and post-surveys (because I don't!), one of the most common answers seems to be that they are easy. You build a survey, administer it before the learning experience, administer it again afterward, and poof ... you can prove that your students learned more than they knew before. But that's actually not accurate. You see, a pre- and post-survey is an indirect measure of student learning because it is based entirely on student opinion, attitude and perception. Learners are not being asked to demonstrate their knowledge; we are simply asking them what they THINK they now know. It would be like asking students whether they could identify three campus resources that could help them be more academically successful in the future. They (hopefully) would report a higher level of perceived learning after attending a training session on campus resources; however, if you asked them to actually identify those resources, they might only be able to recall one or two. Clearly, there is a disconnect between what they think they know and what they can actually demonstrate.
Indirect measures seem to be used far more often than direct measures of assessment, likely for the same reason ... they are easy. They often involve surveys that ask students what they think they know or how satisfied they are, because it is far easier to administer quantitative assessments than to analyze and code qualitative ones. Especially when you are working with hundreds or thousands of students, these indirect methods probably seem better than nothing at all. And while of course it is better to do something than nothing, asking students to self-reflect and self-assess should not be reported on without direct assessment to validate the results. It is even worse when these indirect methods have low response rates: not only do the results fail to account for the majority of the population, they also say nothing about what students are directly learning (UNC, 2012). These types of surveys also provide little depth or understanding of why a learner responded the way they did. For example, if students still rate their grasp of leadership theories low after the workshop, is it because they actually didn't learn leadership theory or because they don't realize that they did? Indirect assessment can't tell us that.
When I first started in the field of assessment, I thought everyone and every program should have a pre- and post-survey. It was easy: I could use language from the workshops and from my goals for what I wanted students to know and do as a result, it would be quick to evaluate, and it would help me understand student learning. As I continued on my assessment journey, I started to realize how little we actually knew about student learning as a result, and how misinformed people probably were. Oftentimes, I was using it as a sole assessment method (Santa Rosa Junior College, 2006), which I now know was quite irresponsible and did not truly demonstrate what our students knew or the effectiveness of our learning experiences. Then I used it in combination with a program evaluation, which still only measured what students thought of a session, with common questions like: Was this engaging? Do you feel like you learned something? How can this be improved in the future? ... still all about feelings, perceptions and satisfaction. So again, no direct evidence to support any of these perceptions among learners.
The one circumstance I think is different, and where I would point a program that really wants to use a pre- and post-survey format, is self-confidence surveys. You can find an overview of this assessment technique in Angelo and Cross's (1993) book, Classroom Assessment Techniques. What makes this different, in my opinion, is that it does not ask students to consider what they think they learned; rather, it asks them to rate their confidence in a specific area. Academic self-confidence - that is, confidence in one's own ability to understand a specific topic - has been shown to be a significant predictor of learning performance (Briggs, 2014). In fact, in a study conducted at the National Institute of Education, researchers found that "confidence is a much better predictor of students' achievement than any other non-cognitive measure ... it acts in a way that overcomes everything else; so confidence is very important" (Stankov). Although the study looked specifically at English and math, the strong relationship suggests that it applies to learning across a variety of topics and disciplines. Not only does this give educators a semi-direct measure from an indirect tool, it also helps students engage in self-reflection by recognizing the areas where they need to gain more confidence, and therefore, learn more.
I think this distinction between perception and opinion vs. confidence is important in the work that we do as educators. I acknowledge that assessment can be difficult, especially when we consider administrative load and resources. However, I do think it is important that we try to think about our efforts in a way that supports the 3Ms of assessment - is it measurable? is it meaningful? and is it manageable? A pre- and post-survey may be measurable and manageable, but it may not be very meaningful if we can't get at the real learning or understand why students hold the opinions they do. Adding a piece around confidence means that we can maintain a similar setup of the form and continue to look at data quantitatively, but know that it is backed by research that draws direct relationships between a student's confidence and their ability to learn. That is a far stronger connection than we can make between a student's perceived learning and their actual learning using a single technique.
For institutions that have the resources to make a pre- and post-survey one of many techniques, with at least one of them direct, it may be a good choice. However, I would caution that a lot of correlating needs to happen to demonstrate that student learning is positively related to the pre- and post-tests, which means the direct assessment needs to be closely aligned. That is quite a bit of work, and I would argue it should be done by someone who has graduate experience in statistics and research methodology and understands how to effectively code qualitative data so it can be quantified. Otherwise, it may be easier and more effective to consider a confidence-related approach.
Finally, I think this comes down to language. The way that we talk about assessment, and education in general, includes a lot of terminology. My last post was about referring to our work as pedagogy when what most of us probably mean is andragogy. For institutions that are truly taking a curricular approach, it can be very frustrating when an institution that is actually doing programming calls itself curricular. When we send out satisfaction surveys and call it assessment, what we really mean is evaluation ... you get the point. But I think the same should be said about pre- and post-surveys. They seem to be a buzzword right now for folks who are integrating assessment into their work without actually understanding much about it. I think we need to ask ourselves: is a pre- and post-survey really what we mean? Because if we want to show valid learning over time, then that is not the "best" way to do it, especially if it is the sole method of collecting data. Furthermore, if it truly is the most measurable, meaningful and manageable way, then I think we should be talking about self-confidence surveys, not pre- and post-surveys. Although it may seem silly, language is important, and as assessment continues to become an integral part of sharing our stories, validating our work, and creating a culture of accountability to our students, we need to start getting it right.
A creative educator striving to enhance the holistic student experience and committed to exploring personal strengths and fulfillment.