Formulating the Exam
Video Transcription
In this second installment on examinations, we're going to talk about formulating the exam itself. Specifically, the objectives for this piece are to look at how we align the assessments with the course and the program learning outcomes. And I want to draw a contrast between taking various random pieces of information out of the books or out of the lectures or out of the course materials, one way or the other, and throwing them into an exam, as opposed to having a very structured and deliberate approach, a sort of prospective approach to deciding what should be on the examination. So that's what we're going to cover in this piece. I'm Dr. Michael Riker. I've had the privilege of being a professional educator for many years and serving on NBCRNA committees where I've written for the certification exam, as well as, of course, doing a lot of assessments and testing for my own students. So I'm glad to bring some of that experience here to you today. We'll just do a little review from the first module. In the first module, we talked about why we test. So just as a review of that, and to set the stage for this, remember we talked about ensuring accountability for useful, applicable knowledge for the students. And I want to bring that part up because it's an important thing to refresh in our heads: we don't want to just pull random little factoids and throw them on an exam; we want to think about what would be useful and applicable for our students. We want to ensure an understanding of the foundational concepts and knowledge that the students will then use as they move on, either into further coursework or into clinical practice. We talked about some other aspects of the importance of testing as well.
But for this module, I'm going to really focus on those first two, just to refresh those: we're trying to put together an exam that reflects useful, applicable knowledge, and to make sure that the student has achieved that knowledge or that competency. What's really good to do is to have a scheme in mind as we put together a course, and this is good for course development before we even get to exam development: what are the objectives that the person is going to achieve in each little module along the way, or in this case, in each examination that we might provide? Those should feed up to, should be subsets of, the course module or course objectives, and the course objectives should be subsets of the program objectives or program outcomes. So no matter what you're doing, you should always have a reason for doing it, a reason for teaching it. We shouldn't be in the classroom just saying, oh, this seems like an interesting thing, let's talk about it. And we shouldn't be writing an exam saying this seems like an interesting topic, or a fun thing to see if the students can answer or not. Here's an example of a course map from a business course I teach in our DNP curriculum. You'll see, for example, in the human resources module, the program objectives are very, very broad: employ strategic leadership in interprofessional teams, and apply legal and financial regulatory principles. Those are very broad. They sound like good things to know, but they don't really tell you exactly what we're going to do. The course objectives that cascade down from that include things like perform critical analysis and decision making in healthcare leadership, including thinking about the financial and regulatory pieces.
Justify the legal obligations and responsibilities of healthcare providers and institutions, and then compare and contrast management models. Those are the course objectives. And then in this individual module on human resource management, we have an objective of applying knowledge of labor law and human resource practices to a project or business plan. So by the time we get down to the module objectives, we're really getting down to, here's what you actually need to know, here's what you need to be able to do, and here's how this fits into the bigger picture of what we're going to do. You will see that this type of mapping is really important as you develop a course, and, as we'll get to, as you develop an exam. With our own national certification exam, and now the CPC exam, you will see the same thing. The CPC exam is based on four main domains, and this is just one of those. If we just look at the domain of airway management, underneath that you have this little cascade, this kind of hierarchy of anatomic and physiologic concepts and pathophysiologic concepts. And then under each one of those, we say, well, we want people to know the normal anatomic structures and then variations in anatomic structures. So if we map out the course, then we can ensure that we cover the content that is needed. And likewise, if we map that to an exam, we can ensure that we have assessed the content that is needed for the person to meet the objectives of the module, course, and program. There's a lot of literature that you can find in educational literature these days talking about authenticity in assessment. And for the most part, authenticity in assessment really refers to assessing people doing very practical, hands-on things, as opposed to, say, writing about how to make a legal argument. For example, it might involve law students arguing a case in front of a jury of peers or something like that.
But I want to think about this concept a little bit, because it's a great concept to consider when you put together an exam: shooting for authenticity, meaning that you want to put together an exam that really authentically assesses the person's ability to do something that you need them to do. In our case, when we talk about nurse anesthesia education, we need them to be able to make decisions, to perform differential diagnoses of things that they see in the patient, to make calculations, and to administer drugs and carry out techniques and procedures and things like that. So we want to think about authenticity when we put the exam together. We don't want an exam where we say, oh, this is great, the person passed this exam, but the exam asked them a whole bunch of questions about things that they really aren't going to need or use, or that are not going to be very clinically applicable or relevant to them. You have probably all seen this Bloom's taxonomy, which is a classification of different objectives and skills that educators can set for their students, both in thinking about teaching techniques and things that we teach in the classroom, but also, as we will do today, applied to examination. This is something that has been around since the 1950s; an educational psychologist by the name of Benjamin Bloom came up with it. And the taxonomy really just lays out the different levels of knowledge that you might have about a given topic. So, for example, at the very basic level, it's just remembering, just your ability to memorize and remember and regurgitate some topics. It doesn't imply that you need to be able to do anything with that information or knowledge, to apply it in any way or use it in any way, just to be able to remember it. And from there on up, we get all the way to the very top, where you're synthesizing and able to create new things out of it.
This is a great concept to think about when you're putting together an examination, because, again, to shoot for authenticity, you want to think about what it is that the student really needs to do. When I talk about a given topic that I'm going to provide some examination questions on, what's the end result that I need this student to be able to achieve as a result of the learning, and hopefully then the assessment of that learning? So I'll give you a little exercise that we can work through. I've got a number of hypothetical questions on the right side of the screen here, and it isn't important to get the right or wrong answer, but just to think about Bloom's taxonomy and how different questions can relate to different levels of cognitive thought, the different levels of Bloom's, to give you a sense of my thinking as I approach writing an exam. Well, one of the things that's got to be very obvious very early on is that it's really easy to write exam questions that deal with the remembering level, the recall level. It's very, very easy to say, here are five symptoms, tell me which three of these belong to this particular thing, or whatever it is. It's really easy to write a question like that, and you can write very valid questions about that, but where's the authenticity of that? When you think about that authenticity, you say, well, is this really the most important thing that I need my graduate to be able to do? Do I just need them to be able to recite a list of things that they can memorize? And in some cases, the answer obviously is yes. We need to know MAC levels of drugs, and we need to know drug dosages and things like that. But more importantly, the person needs to be able to synthesize those, analyze the patient scenario, and formulate a plan.
And so by the end of their educational course, the person really needs to do a lot more of the higher-level knowledge work, especially now that we're educating our anesthetists at a doctoral level. So what I'm going to do is ask you to pause the video here for a minute, take a look at these example questions that I've got, and then just see if you can link them together and say, this looks like the level of remembering, this looks like the level of understanding. Just do a little exercise for yourself, just to get in your mind what I'm talking about when I say that we can write questions that speak to different levels of cognitive competency according to Bloom's taxonomy. Now, if you've taken a minute to look over those, let's look at them together and see if we can come up with a consensus about where we think they fit. And you could probably make arguments to put some of these in more than one category. If we write a question that says, which of the following symptoms is common in malignant hyperthermia? These are all about malignant hyperthermia, by the way. If we say which of these symptoms is common, and we give the examinee a list of potential symptoms, what we're asking them to do is just basically remember. We're just saying, remember this list, and what goes on this list and what does not go on this list. If we say, draw an algorithm to make a differential diagnosis between suspected MH versus neuroleptic malignant syndrome, now we're really asking someone to do something really high level. We're asking them to create, because in order to do that, they really need to do everything underneath it, right? So in order to create something new, they also do need to remember which symptoms go with malignant hyperthermia and which with neuroleptic malignant syndrome. They do need to analyze. They need to draw a contrast and say, well, this one goes on this side of the algorithm and this other one goes on the other side of the algorithm.
And they would need to evaluate what the correct steps are, and in what order, before they get to this very highest level. So number two would probably relate to asking someone to do something at the creation level of Bloom's taxonomy. If we go to number three here, considering the reason for fever in MH, which other symptoms would you expect from the same cause? Now we're doing something where we're somewhere in between. We're probably around analyze. It might be most applicable to analyzing, because we're asking them to draw a connection. We're saying, yes, the person has a fever, but why do they have a fever? They've got a fever because they're hypermetabolic. And if you think about the underlying pathophysiology of hypermetabolism, you can draw some other conclusions from that. You could draw the conclusion that the person will probably be tachycardic and that they will create additional carbon dioxide. So that type of question would probably relate mostly to the analyzing or analysis level of Bloom's. Number four says a patient received droperidol and midazolam preoperatively; shortly after induction with propofol and rocuronium, he develops fever, tachycardia, tachypnea, and general rigidity. What is the most likely cause? Now, that's not a very black and white thing, because there's a lot of information that would still be needed before we had an absolutely correct answer to that. But this would really get to the evaluation piece, right? We're asking the person to justify their stand, to appraise the situation, to make some contrasts. Of course, they're going to have to remember, again, what are the signs and symptoms of MH? What are the signs and symptoms of serotonin syndrome and neuroleptic malignant syndrome? They're going to have to understand why certain drugs might lead to one or the other of those syndromes.
And then they're going to have to come up here and be able to justify: here's why I think that the most likely thing is whatever answer they would come up with. The last example says you provide relief on an open abdomen case. The previous anesthetist has been giving esmolol every 10 minutes for tachycardia. You find the minute ventilation is set at 12 liters per minute and the end-tidal CO2 is still 38. What is your assessment? Now, here again, this is not asking something very straightforward. It's not saying which of these symptoms goes with this, or put these things in order. It's asking the person to really make an application and/or an analysis, and we're probably in that area here. We're saying, take what you know about MH, that these patients develop a lot of CO2, put that together with some other things, drawing connections between things like what's a normal minute ventilation, and what should the CO2 be for a normal person receiving a normal minute ventilation, and put that all together to analyze. So we're going to apply or analyze in that scenario. That's an example of the different levels of questions that we can think about writing. Now, how do we get there, and how do we decide what's going to be right? We want to start with a blueprint for an exam. And if you think about the progression from a very novice educator to a very well-seasoned expert educator, one of the things that people probably skip over a whole lot as a novice educator is a lot of the front-end planning. And that's just the way it works. It's perfectly fine. It's just the way the world works.
You know, as a very novice educator, people tend to say, let me look at the book, and let me look at maybe a previous PowerPoint, and I'm going to put it together and go in and talk through it, as opposed to taking a very prospective, front-end approach, again, of saying, let me look at the course map, let me look at the course objectives and the module objectives. What do I need to achieve as I put this thing together? The same thing goes for the exam. We don't want to just say, oh, here are the PowerPoints that the students have seen, and I'm going to pick some facts off of those and make questions out of those. We really want to blueprint the exam. The term blueprinting of an exam is very similar to course mapping. We're going to consider, number 1, the purpose of the exam. Once again, we're going to do that cascade from the program objectives to the course objectives to the module objectives. The module objectives might say that after this module, the students should be able to formulate an anesthetic plan for a patient with MH. Then that tells me, okay, here's what I need to do when I write these test questions and put this exam together: I need to assess their ability to do just that. I might come up with some really great questions out of the materials that they have seen, very good, very straightforward questions and answers. But I have to ask myself first, are those questions and answers going to demonstrate to me that this person is able to achieve the module objectives, which are to formulate an anesthetic plan for a patient with suspected MH? We want to think about the purpose of that exam. Then think about the content framework. Content framework really means, out of this given exam, how much weighting, in terms of numbers of questions, for example, do I want to put on the pathophysiology? How much do I want to put on the basic physiology of calcium handling in the sarcolemma?
How much do I want to put on treatment and recognition? You want to think about the testing time available. Probably pretty standard is an hour-long exam in a lot of programs. You want to think about the types of questions that are going to go on that exam and whether they will fit within that allotted time frame, because obviously we can't be writing a 200-question exam if the students are only given an hour to take it. For most questions, that is, questions that are not heavy multiple-connection, matching, or calculation questions, about a minute per question is a pretty average time frame to assume. So in other words, if you've got a one-hour testing period, an exam that has 50 or 60 questions is probably about right. Then the last thing in terms of blueprinting the exam is thinking about the appropriate cognitive level. I've got a simplified rendition of Bloom's taxonomy here. We want to think about that: to what extent should the questions on this exam test knowledge level versus application or analysis versus synthesis? That's not going to be a standard; it's going to change as the students progress through their curriculum. So the weighting, for example, very early on, might be 50 percent knowledge, 40 percent application and analysis, and maybe only 10 percent synthesis and high-level questions in the beginning of the program, when the students are trying to learn the basics of the various topical areas. As they get further on down the road, as they get into their upper-class years, their course objectives have gotten higher, because now they're expected to do more in terms of putting together anesthetic plans and dealing with complicated patients. Now we might weight this test a little bit heavier.
Maybe it'll be 40 or 50 percent synthesis-type questions and complex planning-type questions, and maybe very little weight on the knowledge-based questions at that point, when they're in their second or third year of study, because we're really trying to assess that they're progressing in their curriculum and in their competency, and are able to make more of those analysis and synthesis decisions, not so much that they're able to memorize things anymore at that point. A blueprint of an exam might look like this. Here's a really simplistic hypothetical example that I just put together for a pediatric course. Maybe in this course, we've got a piece where we're going to talk about pediatric anatomical differences from adults. We might say, well, we spent about four hours on that topic, so that was one topic. The next topic is going to be pharmacology; maybe we spent eight hours of class time on that. Then maybe we talked about advanced pathophysiology, and then anesthetic planning, putting it all together, for 12 hours. We can think about, and it doesn't always have to be a one-to-one relationship like this, but I've drawn out a basic one-to-one relationship between the number of hours spent and roughly the number of questions that go on the exam on that given topic. It's not really hard and fast, but you wouldn't necessarily want to spend one hour on a topic that you find really important and then give 30 questions on that one topic. It's probably going to be difficult to come up with 30 diverse questions on something that only took you an hour to present.
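The blueprint arithmetic described here, sizing the exam at roughly a minute per question and allocating questions in proportion to class hours, can be sketched in a few lines. This is purely an illustrative sketch: the topic names echo the hypothetical pediatric example, the six hours for advanced pathophysiology is an assumed figure the transcript doesn't specify, and the strict proportional weighting is, as noted, not a hard and fast rule.

```python
# Illustrative sketch only: allocate exam questions to topics in
# proportion to class hours, using the ~1 minute-per-question rule of
# thumb to size the exam from the available testing time.

def blueprint(topic_hours, testing_minutes, minutes_per_question=1.0):
    """Return a rough topic -> question-count allocation."""
    total_questions = int(testing_minutes / minutes_per_question)
    total_hours = sum(topic_hours.values())
    return {
        topic: round(total_questions * hours / total_hours)
        for topic, hours in topic_hours.items()
    }

# Hours echo the hypothetical pediatric-course example; the 6 hours for
# advanced pathophysiology is an assumed figure for illustration.
topics = {
    "Pediatric anatomical differences": 4,
    "Pharmacology": 8,
    "Advanced pathophysiology": 6,
    "Anesthetic planning": 12,
}
print(blueprint(topics, testing_minutes=60))
# {'Pediatric anatomical differences': 8, 'Pharmacology': 16,
#  'Advanced pathophysiology': 12, 'Anesthetic planning': 24}
```

In practice, you would then adjust these counts by topic importance rather than holding strictly to the one-to-one relationship, exactly as the caution above suggests.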
Now, it's not that certain topics might not be more heavily weighted in terms of importance to the students, so it doesn't need to be one-to-one, but this is just one way to approach it, because the pitfall you can find yourself in is that the questions that lend themselves well to the knowledge level are very easy to come by. You can very easily find facts and figures in a textbook and say, I could write a great black and white question about this. As we talk about writing questions in the next module, we'll talk about the difficulty and the heavy lift that really is involved, especially if you're trying to write higher-level thinking questions. That notwithstanding, we don't want to fall into that pitfall, and so it's good if we can make a blueprint for the exam and say, you know what, roughly this part has this much importance and that part has that much importance, and so I want to make sure that the exam is weighted correctly. Because if the end result of this module is that I want the student to demonstrate to me that they're going to be able to do advanced anesthetic planning based on pediatric patients' pathophysiology, I don't want 60 percent of the exam to just have the student reciting or memorizing and regurgitating facts and figures to me. So if you're using an examination system, like, for example, and this is not a commercial endorsement per se, but I just happen to use ExamSoft in my program, and so that's the example I was able to pull for you here, ExamSoft won't give you the blueprint, but it'll reflect back for you the blueprint that you put together. So, for example, you'll see in this example of a gas machine exam that the questions are broken down into a number of different subcategories. There are questions on gas machine troubleshooting, on gas storage systems, on physics principles.
So we do tag the various questions to the NCE categories, so the NCE content areas, so that number one, we can tell the students on one hand, hey, in general, you're doing worse on basic principles than the advanced principles and surgical procedures. Also, we will tag the questions based on more specific content areas, that again, go back to the course content and the module content. So what this does for you is a couple of things. It'll help you to reflect back and again, make sure that you've got the correct weighting in the various areas. But also, it's a great way to give feedback to the student. If you want to have a means of telling the student, hey, in a big picture, you can take a glance at this and say, I'm really good at this, not so great at this. This student might look at their report and say, I did pretty decent on most things, but the gas calculations, I only got two-thirds of those right, and so that's probably an area that I need to spend a little bit more time. The nice aspect about this is that it also provides great exam security. So if you want to give feedback, and as I said before, the examination is not only to be used as a summative thing where you just check off that the person got it or not, but it's a great formative tool because it gives the exam taker feedback, here's where I did well, here's where I did not do well, here's where I really obviously learned very well and otherwise might not have. But to give them the exam back and say, here's your exam, take a look at it, also runs the risk that people remember things that they've seen, that they might mention to friends, things that were on that exam, they might chit-chat about that one question that asked about the three-year-old patient, so on and so on. 
That's not really good for your exam security in the big picture, because in the long run, and I'm not saying that people are trying to be malicious about it, any chit-chat about the exam questions will lead to those stories being carried down, and subsequent classes will start to be familiar with what questions are going to be on the exam. Really importantly for our students, it's not about the individual questions. That's a really tough thing to get them to understand sometimes: this is a very big-picture profession that we're training them for. It's not really about whether you got this one particular question about the three-year-old patient right or wrong, but in a bigger picture, hey, you need to work more on calculations, or you need to work more on acid-base. This is really great for exam security when you want to give feedback to the student and get them thinking in the bigger picture. It wasn't about that one individual point that you missed, which might not ever come up again on your certification exam or even in life. Rather, this student can look at their report and say, you know what, I did pretty well on acid-base, on ABG analysis. Overall, with acid-base, I was pretty decent. But when I actually had to do the application piece of the ABG analysis, I was not so good there; I only got about two-thirds right. Buffer systems, I was a little low. But blood gas solubility, I had a question on that and I did fine. Dibucaine number, I did fine on that. The other nice benefit of blueprinting the exam is that it does give you some nice exam security. As we think about putting the exam together, think about coming up with that blueprint. It encourages content validity. It gives you a nice, organized way of asking yourself, first of all, what do the students need to know as a result of the outcome of this module, or whatever time frame it is that you're testing?
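The category-level feedback idea, tag each question with a content area and report percent correct per area instead of handing back the questions themselves, can be sketched like this. To be clear, this is just the concept: the category names are hypothetical, and this is not how ExamSoft or any particular platform actually stores its tags.

```python
# Sketch of category-level exam feedback: report percent correct per
# tagged content area, so a student sees patterns without seeing the
# individual questions (which preserves exam security).
from collections import defaultdict

def category_report(results):
    """results: iterable of (category, answered_correctly) pairs, one per question."""
    tally = defaultdict(lambda: [0, 0])  # category -> [correct, total]
    for category, correct in results:
        tally[category][1] += 1
        if correct:
            tally[category][0] += 1
    return {cat: round(100 * c / n) for cat, (c, n) in tally.items()}

# Hypothetical tagged results for one student
results = [
    ("ABG application", True), ("ABG application", True), ("ABG application", False),
    ("Buffer systems", True), ("Buffer systems", False),
    ("Blood gas solubility", True),
    ("Dibucaine number", True),
]
print(category_report(results))
# {'ABG application': 67, 'Buffer systems': 50,
#  'Blood gas solubility': 100, 'Dibucaine number': 100}
```

The design point is that only the aggregate percentages reach the student, so the feedback stays formative while the question bank stays secure.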
Therefore, this is going to help me to think about what needs to go on that exam, so that I can answer the question for myself: did they actually get that stuff or not? Number 2, it reduces construct under-representation. In other words, it reduces the tendency toward finding a lot of little facts and figures that lend themselves very well and very easily to writing test questions, while leaving aside the things that are more complicated, where you say, man, it's really hard to write a question about this. We don't want to fall into that pitfall. We don't want to leave that stuff aside just because it is more difficult. Writing the blueprint ensures you can say to yourself, look, it might be difficult, but you need to include X number of questions on this topic, whatever that topic is. It ensures the appropriate range of objectives and content, but also weighting: weighting in terms of numbers of questions in a given topical area, and also weighting of the cognitive level. We're certainly always going to want students to be able to memorize and remember certain facts and figures. But more importantly, and especially as they move on in their curriculum, it becomes more and more important for them to think at a higher level. The exam blueprint also gives you a nice black and white means, right in front of you, to ensure that you have assigned the right level of cognitive work on the individual questions to the correct content areas. Again, it provides a big-picture means of feedback, not only for the students, for sure, and it provides exam security, because you don't have to actually give them the exam in order to give them feedback on the exam, but it also gives us feedback for ourselves. If we look at it and we say, wow, pretty much all the students missed a lot of these questions about ABG interpretation, that's also an opportunity for us to reflect back on our teaching approach and say, did we give enough time?
Do we have the right instructor? Do we have the right methodology and the right approach to that? In summary, as we're formulating the exam, we want to use the exams to assess the students' progress toward meeting these cascading objectives, from the module, to the course, to the program outcome, and the exam should assess: did they actually meet the objectives of whatever this piece is that we're testing them on? Secondly, we want to assess the elements that demonstrate achievement of those objectives. So not just picking random facts out of the book or out of the PowerPoints or out of whatever materials the students have seen, but rather asking, what are the elements that I want to assess that are going to tell me that the students have actually met those cascading objectives? We want to consider the cognitive level to lend authenticity. As the students get further along their educational path, it calls for a higher cognitive level of testing, because we're expecting a higher cognitive level of objectives and performance in terms of knowledge, skills, and abilities as they get further along down their path. Then finally, the easiest way to do this and be very organized about it is to make an exam blueprint. We want to plan so we have the appropriate content coverage and the appropriate weighting in terms of numbers of questions on each topic. We want to be able to give high-level student feedback in terms of how they did, where they did well, and where they did not do so well. And besides that learner feedback, it also provides the exam security. So that's the big picture as we think about putting the exam together. In the next module, we're going to get into the nuts and bolts of actually writing the exam questions. That's the real fun part.
Video Summary
In this video, Dr. Michael Riker discusses the importance of formulating exams to align with course and program learning outcomes. He contrasts random information-based exams with structured, purposeful exam development. Dr. Riker emphasizes the need for a prospective approach in designing exams to reflect useful and applicable knowledge for students to progress effectively. By creating a blueprint for exams, instructors can ensure content validity, reduce construct under-representation, and provide appropriate cognitive level assessments. The blueprint helps in organizing exam content, weighting questions based on objectives, and maintaining exam security. Overall, a well-structured exam blueprint enhances student feedback, aligns with learning objectives, and facilitates effective assessment of student progress.
Keywords
exam blueprint
course outcomes
content validity
cognitive level assessments
structured exams
student progress
AANA® is a registered trademark of the American Association of Nurse Anesthesiology. Privacy policy. Copyright © 2024 American Association of Nurse Anesthesiology. All rights reserved.