AANA Educator Series: Evaluation Methods
Evaluation for Nurse Anesthesia Educators
Video Transcription
Evaluation is a process. In a nurse anesthesia program, evaluation is more than assessment of student progress. The process involves course, instructor, clinical site, and clinical specialty rotation evaluation, as well as overall program evaluation, student self-evaluation, and so on. A true evaluation system is all-inclusive. This presentation will focus primarily on student evaluation because of time constraints, but hopefully your interest will be piqued for further study in the total evaluation process. Evaluation is an integral part of a nurse anesthesia program, and having an evaluation process in place is a requirement for accreditation by the Council on Accreditation of Nurse Anesthesia Educational Programs. It is the means by which students know how they are progressing, as well as the means by which instruction is improved and positive program growth occurs. And for those of you who love to collect and analyze data, meaningful evaluation produces a lot of information. One of the most important aspects of evaluation is assessing the knowledge about anesthesia administration which has been obtained by the student. When this assessment is mentioned, the traditional examination of many multiple-choice questions comes to mind. However, in this world of technology, a variety of testing techniques are available, many of which will be presented. Judging performance of nurse anesthesia students obviously has clinical implications and plays a role in determining competence for clinical practice. One of the goals of assessing a nurse anesthesia student's knowledge and judging his or her performance is to identify strengths and weaknesses in a timely manner. When this is done, identified weaknesses can be remediated in a manner that allows the student to progress through the program. Thus, evaluation is a means of allowing a nurse anesthesia student to achieve the goal of becoming a certified and practicing nurse anesthetist. Monitoring the quality of the educational process and the quality of the program is achieved by assessment of student and graduate outcomes. Evaluation is most valuable when it is as objective as possible. Objectivity is best achieved by basing evaluation on preset objectives which must be met by the student. The achievement of objectives, or the lack thereof, is an indication not only of student knowledge but also of weaknesses. As mentioned previously, identification of weaknesses can be used to direct an improvement plan for the student. The goals of testing didactic knowledge are shown on this slide. The first important goal is to evaluate the student's didactic or book knowledge, but even more important for a nurse anesthesia student is the assessment of the ability to apply this knowledge. This latter part is a little more difficult to measure. For nurse anesthesia program faculty, the ability of their graduates to master the national certification exam is important. The major reason is that faculty want to graduate clinicians who can be successful on this exam and practice nurse anesthesia. Not only is passing important, but passing the first time is important for the graduate's self-esteem as well as the graduate's ability to go to work as quickly after graduation as possible. Also important for the program faculty is staying within the preferred pass rate of the Council on Accreditation of Nurse Anesthesia Educational Programs. No program wants to be out of line with the standards by which it is accredited. Why would teaching and testing be considered wedded?
Teaching is the imparting of knowledge, and testing is measuring how much knowledge was imparted. Therefore, teaching and testing are indeed wedded. Maintaining this close relationship between teaching and testing is challenging. Assessment must be based on what is taught, and what is taught must be the information needed to produce a competent provider of anesthesia. For instruction to be successful and the student to acquire the necessary knowledge and pass the assessment, the student must know the expected outcomes. These outcomes must be clearly identified and defined in lecture and course objectives. The assessment must then be developed upon these learning outcomes. If this is accomplished, learning objectives will direct both the material taught and the material tested. Assessments are generally classified as norm-referenced or criterion-referenced. Norm-referenced assessment is often called grading on the curve. This type of assessment distinguishes between low and high achievers: a small portion of students would receive an A or an F, and the remainder of students tested will fall in between. The major weaknesses of this form of assessment are interference with student learning and the production of unhealthy competition for grades. Criterion-referenced assessment, which is used to determine mastery of specific concepts or skills, is not used as often in the assessment of nurse anesthesia students. The National Certification Exam is a criterion-referenced assessment, and it has the purpose of establishing the level of knowledge and skills needed to perform as an entry-level nurse anesthetist. Most agree that criterion-referenced assessment is better than norm-referenced because the intent of criterion-referenced assessment is to ensure the level of competency necessary, in the case of nurse anesthesia, to be a safe practitioner. In theory, when criterion-referenced assessment is used, all test takers could either pass or fail depending on whether the established benchmark has been met. Norm-referenced assessment could allow an individual to pass without meeting minimum standards if the remainder of the test takers also perform poorly. Since the goal of all nurse anesthesia programs is to graduate competent providers, the use of criterion-referenced assessments rather than norm-referenced assessment should be considered. This is a list of criterion-referenced assessment tools. Although true-false questions are listed separately on this slide, some consider this type of question to be a multiple-choice question. Of course, multiple-choice questions can be single answer or multiple answer. What is important is not necessarily the type of question, but the content of the question, which should be predetermined by your learning objectives. Additionally, questions should be well constructed, which we will discuss as we move along. Writing exam questions is hard, particularly if they are to test critical thinking skills. I want to spend some time on the construction of multiple-choice questions since much of the National Certification Exam is composed of these questions. As mentioned earlier, multiple-choice questions can be divided into true-false, single-best-answer, or multiple-answer questions. Both single-answer and multiple-answer multiple-choice questions are used on the National Certification Examination. A multiple-choice question is composed of a stem and responses.
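As an aside to the criterion- versus norm-referenced distinction above, the following minimal Python sketch shows how the two rules can classify the same raw scores differently. The student names, scores, 80% benchmark, and the one-standard-deviation "curve" are hypothetical illustrations, not anything taken from the presentation.

# Minimal sketch: criterion-referenced vs. norm-referenced classification
# of the same raw scores. Scores and the 80% cutoff are hypothetical.
from statistics import mean, pstdev

scores = {"A. Jones": 92, "B. Smith": 81, "C. Lee": 78, "D. Patel": 74, "E. Kim": 66}

CUTOFF = 80  # criterion-referenced benchmark (hypothetical mastery level)

def criterion_referenced(score):
    # Every student is judged against the fixed benchmark;
    # in theory, all could pass or all could fail.
    return "pass" if score >= CUTOFF else "fail"

def norm_referenced(score, all_scores):
    # Students are judged against the group (a simple "curve"):
    # anyone within one standard deviation of the mean "passes",
    # regardless of whether the benchmark was actually met.
    mu, sigma = mean(all_scores), pstdev(all_scores)
    return "pass" if score >= mu - sigma else "fail"

values = list(scores.values())
for name, score in scores.items():
    print(f"{name}: {score}  criterion={criterion_referenced(score)}  "
          f"norm={norm_referenced(score, values)}")

With these made-up numbers, the curve passes two students who fall below the 80% benchmark, which is exactly the weakness of norm-referenced grading described above.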
The stem of a multiple-choice question can be open, meaning the answer comes at the end of the stem, or the stem can be closed, with the stem in the form of a question. An example of an open stem would be, "The basal ganglia are located near the..." whereas an example of a closed stem would be, "Where are the basal ganglia located?" This slide gives some very good information about constructing multiple-choice questions. Regardless of the type of question, the answer should be based on content which has been addressed by predetermined learning objectives. These objectives are made available to the student, but almost as important as basing the question on the objectives is the construction of the question itself. Poorly constructed questions lead to missed questions, not because the student did not know the content being tested, but because the student did not understand the question. The stem should be grammatically correct and clear, whether read with the responses or without them. The test taker should not have to read through the responses in an attempt to understand the question. Minimization of extraneous information is important; otherwise, the risk of confusion is increased. In other words, leave out the window dressing. Even so, the information necessary to arrive at the correct response should be in the stem. All of the responses should fall into the same subject area. Negative words in the stem should be used sparingly and should always be capitalized, bolded, and/or underlined. Use of negative words adds unnecessary difficulty to the question. The best questions have stems expressed in the positive. This slide gives an example of both a closed and an open stem question. Making stems wordy and long is not recommended if a question is to meet all of the qualities of a good, clear question. Let's move on to what makes a good multiple-choice response. The correct response should be absolutely correct, but the other choices should be reasonable alternatives to the correct response. The grammar in the response should be correct and comparable to the stem. An example of correct grammar: if the verb in the stem is plural, then the response should be plural. If one of the responses is singular, it is usually not the correct answer. Students become attuned to test-taking strategies and soon learn that terms such as those listed in bullet 2 are usually associated with the wrong answer. The terms listed in bullet 3 are often associated with the right answer. Each multiple-choice question should be independent of the others. In other words, the answer to one question should not depend on getting the correct answer to another question. If the stem ends in "a" or "an," the answer may be obvious. For example, if the stem ends in "an" and only one response starts with a vowel, the correct answer is very clear to the test taker. Numerical answers should not overlap; if they do, a case can be made for more than one answer being correct. Instructional material has no place in a test item. For example, if one is testing for the cause of bronchospasm, there is no need to define bronchospasm. The goal of the question is to test on outcomes of learning, not to teach new material. The last bullet means to have the place for the correct answer at the end of the stem rather than at the beginning or in the middle. This makes the item easier to read for the student. The items listed on this slide have to do with cueing, which is unintentionally giving away the correct answer.
Most of these errors are self-explanatory, but convergence can take several forms. An example would be when, using numerical options, the correct answer is placed in the middle rather than at one of the extremes. Convergence also occurs when the test writer makes only slight variations to the correct answer when compared to the incorrect distractors. Before moving to how many distractors to use, we have already mentioned that distractors should be incorrect but plausible options. They should appeal to the student who lacks the knowledge for responding to the question, but should not confuse the student who does know the material being tested. If a distractor is obviously wrong, it should not be used. At least some students should choose each option. If not, the distractors should be revised if the question is to be used again. As to how many distractors, the slide is rather self-explanatory. Single-answer multiple-choice questions, as all of us know, have a right answer and alternative choices. The more options, as long as they are plausible, the more discriminating the question. Unfortunately, it is often hard to find four plausible distractors. Because of this, the usual practice is to have four options with one being the correct answer. One thing I do want to mention is that not all questions in an exam have to have the same number of possible answers. If one tries to have the same number of distractors for each question, one distractor may be clearly incorrect, may differ in substance from the other distractors, or may be inappropriate for the stem. If any of these occur, the test item may not measure the student's knowledge. Nitko and Brookhart have indicated that there is no rationale for using the same number of distractors for each item on a test. The goal is to develop plausible and functional responses that attract at least some students. It should, however, be a goal to have at least three distractors and one correct answer. If there are five good possibilities, use them. But if there are only four, do not worry about trying to find a fifth. Some of the items listed on this slide have already been discussed, but alternative choices should be appropriate for completing the stem. The distractors should be written with the same specificity as the correct answer. One important point is to count the number of words included in each distractor for consistency in length. This prevents the correct answer from being identifiable as the longest one. If the same number of words cannot be in each option, try not to vary the number by more than one or two words. Each option should have the same number of parts. For example, if the stem deals with symptoms of some abnormality and three distractors include only one symptom, but the other distractor has two symptoms, this is a huge clue as to which is the correct answer. As mentioned previously, the correct answer, the distractors, and the stem should be consistent in structure and terminology. Responses should come from the same domain; otherwise, the student can be cued that a response is clearly wrong. For example, if the stem asks which of the antihypertensives listed below has a specific side effect, and one of the distractors is a side effect of a vasopressor, this would not be consistent with the same domain. Multiple-choice questions have either one correct answer, or the student is asked to choose the best answer. Best-answer items are valuable for assessing more complex and higher learning levels.
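One practical way to act on the advice above that every distractor should attract at least some test takers is a simple option-frequency tally after the exam is given. The following is only a rough sketch with invented response data, not a reference to any particular item-analysis software.

# Tally how often each option was chosen for one item.
from collections import Counter

# Hypothetical responses to one 4-option item; "B" is the keyed answer.
responses = ["B", "B", "A", "B", "D", "B", "A", "B", "B", "D", "B", "B"]
options = ["A", "B", "C", "D"]
key = "B"

counts = Counter(responses)
for opt in options:
    n = counts.get(opt, 0)
    label = "correct answer" if opt == key else "distractor"
    flag = "  <- chosen by no one; consider revising" if n == 0 and opt != key else ""
    print(f"Option {opt} ({label}): chosen {n} times{flag}")

In this invented data set, option C attracts no one, which is the signal that the distractor should be revised before the item is reused.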
Even though best-answer items require a judgment to choose the best option, only one answer can be right, and there should be consistency in the literature and among experts as to the correct response. Questions and responses should not be designed from the opinions of the instructor. Suggestions for writing correct answers are aimed at preventing the student from identifying the correct response or eliminating distractors simply by the way the stem or the alternatives are written. Reviewing questions prior to administering the exam allows the test writer to be sure there is only one right answer. If key words are used in the stem and in the correct response, but not in the alternative responses, the student may be given a clue to the correct answer. Another pitfall not listed on this slide is favoring the same response position across questions, for example, favoring the B or C choice more often than A or D. This can be avoided by listing the alternatives in a logical or meaningful order, such as alphabetically, numerically, or chronologically. Regardless, the instructor should check the position of the correct responses on an exam to confirm they are more or less randomly distributed. Variations of multiple-choice questions can be used. For bullet 1, if a multiple-choice question is to be combined with the rationale for the answer chosen, the multiple-choice question would be followed by a statement such as, "In the space below, provide a rationale for why your chosen answer is correct and why the other alternatives are incorrect." Space should be provided for the student's response, and points should be given for the correct answer on the multiple-choice question and for the rationale. Of course, grading this type of test is more time-consuming, but this is a method for testing more of the student's knowledge, as well as higher cognitive skills. The second bullet is a type of multiple-choice question being used on the National Certification Examination. The student must choose all right answers to get the question correct. Some suggestions for writing this type of question are somewhat similar to those for writing single-answer multiple-choice questions. All alternatives must be just as plausible as the correct answers and logically combined, not randomly grouped. Logically ordered could mean alphabetically, numerically, or sequentially. Computerized testing allows the selection of multiple choices. If computerized testing is not available, an example of how this type of question can be accomplished is found in the slides following the references in this presentation. Matching questions with homogeneous stems and options are those with premises and responses related to the same content. Examples of this would be laboratory tests matched to values, or words matched to definitions. Keeping the matching question homogeneous is one of the most important goals when writing a matching question. The use of an uneven number of stems and options helps to prevent cueing. If the matching question has a large number of responses, two matching exercises should be developed. Directions should be specific about how many times a response may be used. The responses should be listed in alphabetical order or, if they are numbers, in sequential order. If the list has another logical order, such as dates or steps in a procedure, they should be listed in an organized manner. The entire matching question should be on the same page and not divided across pages.
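The earlier recommendation to confirm that correct responses are spread more or less randomly across the answer positions can be checked with a quick tally of the answer key. The 20-item key below is invented for illustration.

# Count how often each answer position carries the correct response.
from collections import Counter

# Hypothetical answer key for a 20-item exam.
answer_key = ["B", "C", "A", "B", "D", "C", "B", "A", "C", "B",
              "D", "B", "C", "A", "B", "C", "D", "B", "C", "A"]

position_counts = Counter(answer_key)
expected = len(answer_key) / 4  # an even spread would put about 5 keys on each letter
for letter in "ABCD":
    print(f"{letter}: {position_counts[letter]} (expected about {expected:.0f})")

In this made-up key, B and C carry most of the correct answers while A and D are underused, the pattern that test-wise students learn to exploit.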
For the extended matching question, the test taker must select a single answer from a list of options. The extended matching question tests for higher-order cognitive skills, such as application or analysis of knowledge. As with the matching question, clear instructions should be given to the student. Each set should address a single subject, and a relationship should be established between the options and the stems with a lead-in question. Options and stems should be short and should use positive phrasing and appropriate grammar. As an educator, I have to say that the true-false question is my least favorite type of question. The format of this question type can be divided into simple or multiple true-false items. The definitions for both are on this slide. This slide lists two disadvantages of the true-false question. This type of question lends itself to classroom exams, but not to standardized testing. Therefore, this question type is not used on the National Certification Examination. True-false questions are often viewed by students as tricky and confusing. This is particularly true if only one word change in the stem makes the question either true or false. For the item writer, it is often hard to write true-false questions which are clear and not ambiguous. The simple true-false question lends itself to guessing. Some recommendations for this type of question are to focus on a single problem or concept, be clear and concise in phrasing, and use positive language. Make the question obviously true or obviously false and try not to use absolute terms. Usually the true-false and the multiple true-false questions do not measure higher cognitive processes. Most of these questions only measure the student's ability to recall. Completion questions are often referred to as fill-in-the-blank. Completion questions usually need one to a few words to complete the statement. The advantages of this type of question are the ability to test a broader range of knowledge as well as to reduce guessing at the answer. Completion questions are good for answers to calculation problems. I know that there are calculation problems on the certification exam, but I am not sure if they are completion questions. Non-calculation completion questions can be challenging to develop because it is hard to develop a question of this type that has only one answer. Obvious disadvantages are reading the student's handwriting and the time required to grade this type of question. One of the suggestions for completion-type questions is not to take the stem directly from a textbook or lecture material.
When statements are taken directly from textbooks or lectures, the question may only test recall of meaningless facts out of context. In other words, the student has memorized content and may have no comprehension of it. For bullet 2, the goal is to develop a question with only one right answer. However, it is necessary to develop a scoring sheet with all possible correct answers because the student may provide additional correct answers that were not anticipated by the item writer. With questions dealing with calculations, a statement should be given to tell the type of answer or degree of specificity desired by the item writer. For example, round the number to the nearest decimal place or express the answer in milliequivalents. If a longer answer is needed, allow enough space even if another sheet of paper is needed. If the question is written with a blank space, the blank space should be placed at the end of the sentence, not at the beginning or in the middle. Essay questions can be short or long. They are used primarily for learning outcomes which do not lend themselves to measurement by a selection process. The biggest advantage of the essay question is assessment of higher cognitive skills. The essay also allows the student to use multiple sources of information. It must be remembered with this type of question that the student's knowledge of the content is being measured, not his or her writing ability. If the question for the essay is designed to address a broad topic, this gives the student more flexibility to generate a response. Often these questions are easy to develop but difficult to grade. A rubric of the information desired in the essay is suggested as the best process for grading. The rubric should be clear and developed in advance. This slide lists many of the disadvantages of the essay question. Not only are essays time-consuming to grade, they are time-consuming to take. Using this type of question to sample content is not as efficient as using more objective items such as multiple-choice questions. Usually only one or two essay questions can be used on an exam because of time constraints. Scoring can be unreliable, which makes the pre-created rubric very important. The rubric will allow the grader to identify the specific outcomes being assessed. Again, when scoring the essay, it is important to focus on the content addressed in the essay and not to be influenced by how the essay is written. If the essay is highly focused and structured, reliability in grading increases. However, less restrictive essays allow more freedom and creativity in the way the student answers. The carryover effect mentioned on this slide occurs when the grader develops an impression of the overall quality of a student's work after grading one essay and carries this impression over to the next response. In other words, if the grader likes the answers in the first essay item, this may influence a better score on the second item. Of course, the reverse of this is also true. If the grader gives an exam with two essay questions and reads all responses to one essay and then reads all of the answers to the second essay, this lessens the carryover effect on grading. The halo effect is the tendency to evaluate the essay item based on a general impression of the student. This can either positively or negatively affect the grading. Scoring essay questions anonymously, by asking the student to select a number or be assigned a number rather than putting a name on the essay, will avoid this issue.
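To make the rubric advice above concrete, here is a minimal sketch of rubric-based, anonymous essay scoring. The rubric criteria, point values, and essay numbers are hypothetical, and this is a sketch rather than a validated grading tool.

# A minimal sketch of rubric-based, anonymous essay scoring.
# Rubric developed in advance: criterion -> maximum points (hypothetical).
rubric = {
    "identifies the physiologic problem": 3,
    "applies relevant pharmacology": 4,
    "justifies the chosen intervention": 3,
}

# Essays are identified only by an assigned number to limit the halo effect.
scores_by_number = {
    "Essay 014": {"identifies the physiologic problem": 3,
                  "applies relevant pharmacology": 3,
                  "justifies the chosen intervention": 2},
    "Essay 022": {"identifies the physiologic problem": 2,
                  "applies relevant pharmacology": 4,
                  "justifies the chosen intervention": 3},
}

max_total = sum(rubric.values())
for essay_id, scores in scores_by_number.items():
    total = sum(min(scores[c], rubric[c]) for c in rubric)  # cap at rubric maximum
    print(f"{essay_id}: {total}/{max_total}")

Scoring against the same pre-built rubric, and only by essay number, keeps the grader's attention on the identified outcomes rather than on the writer.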
The way to avoid the effect of the student's writing ability on scoring is to grade only the content and not the grammar, sentence structure, or punctuation. Rater drift deals with the grader's fatigue and time constraints. If rater drift occurs, essays read early or first are usually graded higher than those graded later. Time constraints and fatigue can cause the grader to veer from the grading criteria. Teachers should read papers in random order and read each response twice before assigning a score. The grader should be sure the rubric is being used as the standard for grading and should stop periodically to ensure that responses read later are scored consistently with those read earlier. Unless the grader is careful, reduced test reliability can occur. It is often a good idea to have two individuals grade essay questions using the same rubric, because this tends to increase reliability. This slide lists some helpful recommendations for writing essay items. When essay questions require the student to synthesize content, mere summarization of the readings and class discussions, without thinking about the content or applying it to a new situation, is avoided. Clearly phrasing the essay question directs the learner in forming his or her response and makes the question less ambiguous. Preparing a student for an essay question may involve asking thought-provoking questions in class, engaging the students in critical discussions about the content, teaching students how to apply theories and concepts associated with the content, and helping the students compare approaches to arrive at decisions and judgments. Many times, teachers will allow students to choose which essay question they will answer. For example, the teacher may include four essay questions on an exam and ask the students to answer two of their choice. When students are allowed to choose which essay question to answer, this results in students taking different tests. The option to choose which essay to answer may also affect measurement validity. Although time-consuming, it is often very helpful for the examiner to write an ideal answer to each essay item. This can also be used in grading or in creating a rubric for grading. I am sure all of us have experienced the lab practical. As the slide states, the lab practical is usually used for anatomy classes, and lab practicals do require a lot of time for preparation. For instructors of nurse anesthesia students, an expansion of this type of testing is the Objective Structured Clinical Examination, as seen on the next slide. The Objective Structured Clinical Examination, or OSCE, not only evaluates didactic knowledge, but clinical skills as well. An OSCE is performance-based and designed around a specific skill or task, which is completed within a specific time frame. An objective scoring tool is used to grade or rate the student. The American Heart Association has used this type of testing for a long time in ACLS, BLS, and PALS. With the improvement in simulation and trainers for anesthesia, the OSCE has become an important method for evaluation of nurse anesthesia learners. When thinking about developing an OSCE, it is helpful to know that this type of exam lends itself well to a scenario format. Simulators and trainers can be used with an OSCE, and the scenario can be progressive, beginning with background information but allowing changes to occur as the student works within the scenario. For example, the EKG, heart rate, respiratory rate, blood pressure, and so on can change as the scenario progresses.
A checklist of items must be developed for scoring. The grading can be in the format of whether the student performed or did not perform each of the listed items. The OSCE goes beyond a written examination in that it can assess what cannot be assessed with a written examination, which is the performance of a student in a simulated clinical setting. An OSCE can be simple or elaborate. It can involve just the instructor and one student or a whole group of students. Students can be assigned different roles in the scenario. An OSCE can involve stations. For example, one station could evaluate preoperative assessment, another the holding area preparation, another the operating room setting, and another the postoperative assessment and sign-off to the PACU. No type of assessment comes without disadvantages. The OSCE can be labor-intensive, at least initially. Many times, once the scenarios have been developed, they can be reused with or without modification. The OSCE is time-consuming and many times must involve dividing the class into groups to come in at different times. Simulators and trainers are costly, and the more elaborate they are, the more costly. The physical environment may be limited. Of course, the best of all worlds is to have a classroom operating room with all the equipment. It is important to be as objective as possible in scoring the students. This entails developing a valid, reliable scoring tool and determining minimum competency levels. The minimum competency levels will change as the student progresses through the program. When using an OSCE, there are two types of problems the students can be asked to solve. The well-structured problem provides the information needed to solve the problem. Usually there is one correct solution and, in general, the problem is very clearly identified. The ill-structured problem is a real-life problem. These problems are authentic and often not clear to the learner. The data given to the student may suggest more than one problem, and multiple solutions may be appropriate. Well-structured problems provide practice in applying concepts and theories which have been learned in class, and the situations used are hypothetical and do not require deep thinking skills. Ill-structured problems assess the student's ability to analyze situations, interpret patient needs, and identify possible problems using the given data. Additionally, the student may identify additional data needed and identify, compare, and evaluate multiple approaches to solve the problem before arriving at an informed decision as to the actions that must be taken to resolve the problem. Critical thinking is a term used often in teaching nurse anesthesia students. Critical thinking is reflective and reasoned thinking. When a student uses critical thinking, decisions are made as to what to do or what to believe in a given situation. This type of thinking is important when problems are unclear and/or have more than one solution. Understanding concepts is the foundation for critical thinking. Memorizing concepts is not enough. It is also understanding concepts that allows the instructor to develop assessment strategies. This slide lists eight elements of critical thinking. These elements can be used to assess a student's ability to think critically. For example, is the student's purpose clear? Are the student's assumptions clear? Does the student have more than one point of view? Does the student collect data and evidence on which to base his or her thinking?
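Returning to the OSCE checklist described at the start of this section, a performed/not-performed checklist with a minimum competency cutoff can be kept as simply as the sketch below. The checklist items and the 80% threshold are hypothetical examples only; each program would set its own items and competency levels, which change as the student progresses.

# Hypothetical OSCE checklist for an induction scenario; True = performed.
checklist = {
    "verifies patient identity and consent": True,
    "checks anesthesia machine and suction": True,
    "preoxygenates adequately": True,
    "administers induction agents in appropriate doses": False,
    "confirms tube placement with capnography": True,
}

MINIMUM_COMPETENCY = 0.80  # hypothetical cutoff for this level of student

performed = sum(checklist.values())
score = performed / len(checklist)
print(f"Performed {performed}/{len(checklist)} items ({score:.0%})")
print("Meets minimum competency" if score >= MINIMUM_COMPETENCY else "Below minimum competency")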
There are many reasons why the student needs to think critically, and most are listed on this slide. The bottom line, however, is that critical thinking is essential to solving problems which arise quickly and often during anesthetic management of patients. It is essential to problem solving for the safety of the patient. Students who think critically do all of what is listed on this slide. These are actually student characteristics that suggest behaviors that should be developed by students as they progress through a nurse anesthesia program. Take a moment to look at Tanner's aspects of clinical judgment. Then, let's move to the next slide to see assessment strategies for evaluating the student's mastery of using clinical judgment. These key points are for use by instructors in evaluating the clinical judgment of students. Assessment of key points of clinical judgment can be accomplished in simulated experiences. This does not mean every program has to have a costly simulator. Students can be given case situations and asked to analyze and problem solve. In fact, it may be a good idea to start with classroom scenarios prior to placing the student in a simulator in order to relieve some of the stress of being put on the spot in the simulator experience. A basic principle of assessing higher-level skills is that test items or other assessment techniques have to introduce new or novel material for the student to analyze. One way to do this is to use a context-dependent item set. Context-dependent item sets assess cognitive skills, but in doing this, the sets must meet two criteria. The first criterion is that the set must introduce new information not encountered by the student in earlier instruction. The second is that it must provide data on the thought process used by students to arrive at an answer, rather than the student just giving an answer alone. Introducing new material keeps the student from relying on memorization of prior discussions or readings about problem solving or decision making. In a set, the instructor presents introductory material for the students to analyze and answer questions about. The introductory material can vary: it can be an anesthetic management situation, a set of patient data, research findings, or a clinical situation. The introductory material can include pictures, diagrams, tables, figures, graphs, an example EKG, capnography, or other pertinent information. The student is asked to read the introductory material, and then the instructor asks questions about the material, or the student may be asked to complete a task or multiple tasks based on the presented material. The last bullet on this slide is one of the greatest advantages of this assessment type, which is that new material can be introduced for the student to analyze. The new material does need to be geared toward clinical practice and relevant to the item set. A selected response to a context-dependent item set could be a multiple-choice question, but if so, the instructor will not be able to assess the underlying thought process used by the student to solve the problem. If the intent is to assess the thought process, an open-ended response is necessary. An open-ended response could be a short essay or short-answer questions which address the rationale for the solution to the problem.
The suggestions on this slide are for writing context-dependent items, but the scenarios or other types of introductory material can be presented through multimedia or other types of instructional technology. The object is to assess a student's skill in problem solving and critical thinking. To do this, the introductory material needs to provide sufficient information for analysis, but not enough material to direct the student's thinking in a particular direction. The suggested development of sets for this assessment begins with drafting the types of questions to be asked about the situation. To do this first may appear strange, but it allows the scenario to provide the essential information for analysis. The introductory material should be geared to the level of the student's understanding and experience. The situation should be of reasonable length to allow the student to complete the task within the given time. The question should focus on the thought process and not just the answer. However, these items can also be used to assess a student's ability to apply principles or procedures presented in class without any original thinking about them. If this is the case, well-structured problems with only one correct answer are needed, and the situations used need to be clearly stated. The instructor must tell the student prior to the assessment how the responses will be scored, whether responses are restricted in any way (for example, in the length of the response), and the criteria used for evaluation. Context-dependent items may be used as part of an examination, completed individually or as a group. They may also be a topic for class discussion or done out of class. If the context-dependent set asks the student to problem solve, the instructor may ask the student to complete the task listed on this slide. An example of a context-dependent set can be located in the example slides provided at the end of this presentation following the reference slide. Interpretive items are used on the National Certification Examination. They are often referred to as hotspots. The responses from the student relate to the item in question. For example, this could be a section of an EKG, an anatomical picture related to a regional anesthetic, or placement of a monitor, to name only a few. The student is asked to identify, using a computerized approach or, on a pen-and-paper exam, by making a mark on the item, where something is or where something might be placed. A good example would be to identify placement of a needle for an axillary block or to identify a portion of a capnography tracing. Assessment methods are numerous. This slide lists others which can be used but which will not be dwelled on in this presentation. The case method or case study works well for group analysis and discussion. Both of these can be used for students to critique each other's thinking in a given situation as well as to compare different methods of problem solving, interventions, techniques, and so on. One important learning event for this type of assessment is learning to come to a group consensus. In addition to the items listed on this slide, on any written assignment it is important for the instructor to explain clearly to the student the objective of the assignment and the grading method to be used. Again, it is important to look at the process and the quality of what the student submits, not the quantity. The student's writing ability or how many pages have been written is not what is graded.
Again, all of the points given for assigning and grading an essay should be reviewed. A portfolio is a collection of projects and materials developed by the student which documents objectives met in a clinical course or in multiple clinical courses. A portfolio documents specifically what the student has learned in clinical practice as well as the competencies which have been achieved. Portfolios can be used for grading the student because the student provides evidence in the portfolio to confirm his or her clinical competencies and new learning and skills acquired in the course. A portfolio can be for an individual course or for the student's entire clinical experience while enrolled in the program. If the portfolio is used for assessment, a written component may be added as a means of stimulating reflection and self-awareness of growth in practice skills and knowledge. The portfolio can contain supportive evidence including, but not limited to, transcripts, letters of recommendation, standardized test reports, and certificates of completion. As with any written assignment, there should be explicit instructions and expectations provided to the student throughout the time encompassed by the portfolio. A new recommendation noted on this slide is the need for standardized spelling, grammar, and format. The student should know whether the portfolio is to be used for formative or summative evaluation. Oral examination is another tool to examine a student's knowledge. Many health care professions use oral examinations in various portions of the education process, and some use the oral examination as part of certification for entry into the profession. Advantages of oral evaluation are the ability to assess various levels of cognitive processes and problem-solving ability. Additionally, communication skills are assessed in the process. Oral examination allows the student to organize knowledge, ideas, and opinions and to present responses effectively and in a confident manner. Disadvantages include concerns about objectivity during testing and grading. When using oral examination, it is critical to prepare the examinee and to provide clear instructions and guidelines about the administration of the exam. It is important to make attempts to allay the anxiety of the student by informing the student what to expect before and during the exam. It is also helpful to let the student know how many instructors will be involved in the examination. In summary of didactic assessment, a key to good assessment is making the information presented learning-objective based. These same objectives should be followed in development of the examination piece for assessing the student's knowledge and ability to use that knowledge. As this slide states, clinical evaluation, like didactic evaluation, needs established outcomes to direct student achievement. Outcomes are based on objectives, which are needed to protect the public, satisfy the student's expectations, meet the institutional requirements, and comply with the COA standards. Clinical evaluation can be more difficult when compared to didactic evaluation. Durham faculty many times find educating clinical faculty about clinical evaluation difficult because of the number of faculty to orient to the evaluation process and the occasional difficulty getting accurate evaluations. Occasionally I find it difficult to convince clinical faculty that accurate clinical assessment is for the benefit of the student.
It affords the student the chance to know both strengths and weaknesses and where improvement is needed. This type of evaluation is necessary to produce a competent, safe provider. Again, objectives geared to the outcomes for the student must be available to both the student and the faculty. Six types of clinical evaluations are documented here for use in nurse anesthesia educational programs. Of the six listed here, the 360-degree evaluation is probably the newest and the least used. The summative evaluation, as stated, is usually a composite of formative evaluations. It is usually discussed with the student at the end of the semester in which the clinical course is completed. Some programs, such as mine, give a letter grade. Other programs give a pass or fail. The summative evaluation should list strengths and weaknesses which have been noted throughout the course. In my program, the terminal evaluation is the summative evaluation for the last semester of clinical completed in the program. And as the slide states, the student should have met all of the outcome objectives of the program at this point. The Council on Accreditation requires a self-evaluation, and in my program these are completed at the end of each semester. Students are encouraged to be realistic and to take this evaluation seriously. They are encouraged to be specific about their accomplishments as well as their future goals. The objectives on the self-evaluation form are identical to the objectives listed on the formative evaluation for my program, but they may be the objectives listed on the summative evaluation if these differ from the formative evaluation objectives. The task-specific evaluation has already been addressed. In my program, we would use this evaluation more for grading students' performance on trainers than on a simulator. This evaluation also needs specific outcome objectives as well as a grading tool. The student should know the objectives as well as the grading rubric. The 360-degree evaluation is a fairly new type of evaluation, and I am not sure how much it is being used in nurse anesthesia programs. For a nurse anesthesia student, the evaluators completing this assessment would be the anesthesiologist, circulating nurses, surgeons, scrub nurses, and any other personnel who would have the opportunity to observe the student. The formative evaluation is exclusively for the clinical area, and the uses or goals for this evaluation tool are listed on this slide. Although all programs try very hard to make this evaluation as objective as possible, it still remains subjective. The judgments of the instructor evaluating the student, as well as others involved in the process, come into play. The instructor's values influence evaluation, and this occurs no matter how hard the instructor tries not to allow it to happen. Regardless of this influence, the instructor's values should not lead to unfairness in the evaluation process. This particular slide is a very important one for educating clinical instructors. Acknowledgment should be given to the fact that the evaluating clinical instructor's values, attitudes, beliefs, and biases can influence the evaluation, but it is important for the instructor to recognize these; doing so allows the instructor to assess whether they are unjustly influencing the student's evaluation.
The program's responsibility is to develop an evaluation based on predetermined outcomes and competencies and to define these in terms of objectives specific for the level of student being evaluated. Fairness is facilitated if the instructor develops a supportive clinical learning environment. This piece of the puzzle can be difficult because the operating room is not always conducive to a supportive environment. Formal evaluation involves making observations of performance, assessing the student's knowledge of why he or she is performing in a certain manner, and comparing this information to a set of standards to arrive at judgments. It is true that the formative evaluations will ultimately be used to determine a grade or other type of quantitative symbol which reflects the student's overall performance. If formative, daily evaluations are used, a grade is usually not assigned to each individual evaluation. However, a summation of the formative evaluations is used to determine the grade, whether it is pass-fail or A-F. When the formative evaluation is developed, it should allow for the evaluation of the affective, cognitive, and psychomotor domains. This slide defines the domains suggested for clinical evaluation. Of these, the affective domain may be the hardest to assess objectively. Evaluation of the affective domain is accomplished by observation and listening. Probably one of the easiest affective skills to evaluate is the professional behavior of the student toward supervisors and colleagues. Bullet 1 of this slide is critical to a fair clinical assessment. Without preset objectives and outcomes, not only do the instructor and student not know the expectations, but no one involved in the evaluation process does. Each clinical component of the program should have objectives and outcomes, and these should be progressive, indicating the higher level of performance which should be obtained by the student. This means that as the student progresses through the program, objectives should indicate a need to meet higher cognitive, affective, and psychomotor levels. Each program should have terminal outcomes toward which each clinical course promotes achievement in a progressive manner. The beginning courses should target the more basic skills, such as intubation, equipment mastery, and preoperative patient preparation, to name a few. As the student progresses, clinical knowledge, critical thinking, judgments, and decision making should be perfected. At the completion of the final clinical course, the student should demonstrate that he or she has met the minimal competencies which are the terminal outcomes. The levels of progress outlined for each step in the program should be readily available to the clinical instructor. Critique of the student's performance should be clearly stated, with the understanding that critique does not mean bad. I always tell my clinical faculty there has to be something good that happened during a day of clinical teaching, and hopefully more good things than not. Start with the positive and then move to weaknesses. Do not make statements on an evaluation personal, but make them objective. If an evaluation is hurtful, the one evaluated, no matter who he or she may be, will not get beyond the hurt to see anything worthwhile in the evaluation. The evaluation should be completed in a timely manner. The student benefits little from an evaluation that is returned a month later or not at all.
It must be remembered that the foremost reason for evaluation is to help the student achieve his or her goals. A lot has already been said about the evaluation tool, but it should be competency-based, with achievement of easier competencies early in the program and more difficult ones later. Again, expectations need to be available to the student and the instructor, as well as instructions for completing the evaluation tool appropriately for the student's level in the program. Certainly, a critical barrier is not filling out the evaluation. This often happens when the student has not performed well. No one wants to be the bad guy, but most of the time, the real harm is done by not filling out the evaluation, which in turn prevents the student from improving. It has always been my impression that when weaknesses are identified early, the ability to help the student achieve is increased. The longer problems persist, the harder habits are to break. The second bullet is called a mixed message. It is not conducive to learning and improvement. It also creates a distrustful relationship between the student and the instructor. This is a situation that needs program faculty intervention, so the clinical faculty can be educated as to the importance of being truthful to the student, and also to give the instructor guidance in how to do this without seeming to be the bad guy. The reason an instructor often gives for always giving a glowing evaluation is not wanting the student to go on probation or to be dismissed. Unfortunately, as stated above, the student cannot be helped if weaknesses have been noted and not documented. And of course, such action increases the probability of probation or dismissal. Help cannot come to a student if there is no knowledge that help is needed. Ganging up on a student is not only a barrier to effective evaluation, but it is totally unfair to the student. This occurs when the student has performed poorly on more than one occasion and it has been discussed among instructors. The instructors then decide as a group how the student performs clinical tasks. This produces evaluations from the group based on a preconceived expectation of poor performance as opposed to the actual performance of the student. Not only is this unfair to the student, but it is a violation of the student's confidentiality. The final bullet on this slide, if it is occurring consistently, needs to be addressed by program faculty. The importance of the evaluation process must be explained to these faculty, as well as the unfairness to the student. Faculty meetings are a great place to review the clinical evaluation process. This slide lists all of the things we try to do at my program. Because new clinical instructors are always being added to clinical sites, the expectations of students at different levels of their education need systematic review with clinical faculty. We have objectives, competencies, and expectations on a website for our faculty, but this of course is worthless if the faculty do not know how to find the site. Anonymous examples of formative evaluations are excellent ways to demonstrate a productive evaluation and a non-productive evaluation. The faculty always appreciate these examples and learn a lot from them. Faculty meetings are also a great place to answer clinical faculty questions and demonstrate support for these very vital instructors. It should not be forgotten that students have other stresses about the clinical area in addition to their grade.
Learning in the clinical area is a public experience for the students. Everyone in the vicinity of the student has the opportunity to see their performance. It is very important for faculty to be aware of this and to attempt to make the learning environment as anxiety-free as possible. Having said that, I know how hard it can be to make an operating room anxiety-free. It is always helpful, however, to make the student aware that you, as the instructor, know they are not a CRNA, but a learner. In that role, he or she is not expected to know everything; he or she is there to learn. This slide is a portion of the formative evaluation used by my program. Our evaluation is divided into organization skills, technical skills, clinical management skills, and professionalism. We use a scale beginning with novice and moving from there to advanced beginner and then to competent. There is also an unsatisfactory and a non-evaluated checkbox. Our evaluation is computerized, which appears to be convenient for the clinical instructors. It also allows the program to receive the evaluations in a timely manner. On the evaluation tool, each area evaluated has a place for comments. This slide gives an idea of our guidance tool for completing the evaluation. For each semester in the program, there is guidance for clinical evaluation for receiving an A or a B. Below a B is failing in our program. The portion presented on this slide is for the first semester and corresponds to the part of the evaluation tool shown on the previous slide. For an A in organization skills, we expect our first-semester students to begin as novices and move toward advanced beginner by the end of the semester. For the clinical application skills, we do not expect them to progress as quickly because the didactic knowledge has not been completely presented. We are an integrated program rather than front-loaded. We do expect the student to be competent in behavior skills. This example gives an idea of what we expect from our first-semester students for a B. Again, this information is available to our clinical instructors as well as to the students. We add a statement like this one to the bottom of each semester's guidance as added explanation for our faculty. Additionally, anytime a student receives a U, our computer program for evaluation alerts the faculty. Each evaluation receiving one or more U's is discussed with the student as soon as feasible. The following slides contain examples of the content and questions discussed within this lecture. Please note that there is no audio for the remaining slides.
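As a small illustration of the kind of alert described above, the sketch below flags any formative evaluation containing a U rating so faculty can follow up. The records and field names are hypothetical and do not represent any program's actual evaluation system.

# Hypothetical formative-evaluation records using a novice-to-competent scale
# with "U" for unsatisfactory, as described above. Data are made up.
evaluations = [
    {"student": "Student 07", "area": "Organization skills", "rating": "Advanced Beginner"},
    {"student": "Student 07", "area": "Technical skills", "rating": "U"},
    {"student": "Student 12", "area": "Professionalism", "rating": "Competent"},
]

# Flag any student whose evaluation contains one or more U ratings
# so program faculty can schedule a discussion promptly.
flagged = sorted({e["student"] for e in evaluations if e["rating"] == "U"})
for student in flagged:
    print(f"ALERT: {student} received a 'U'; discuss the evaluation as soon as feasible.")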
Video Summary
The video transcript discusses the importance of evaluation in a nurse anesthesia program, emphasizing that evaluation goes beyond assessing student progress and involves course, instructor, clinical site, and overall program evaluation. Having a comprehensive evaluation system is necessary for accreditation. The focus is primarily on student evaluation due to time constraints. Evaluations serve as a means for students to monitor progress, improve instruction, and ensure positive program growth. Various evaluation techniques such as multiple-choice questions, context-dependent item sets, and OSCE are discussed. Objectivity in evaluation is crucial, and setting clear objectives and outcomes is essential. The transcript covers the importance of assessing clinical judgment, critical thinking, and professionalism in nurse anesthesia students. It provides guidance on developing fair and effective evaluation tools, addressing common barriers in the evaluation process, and ensuring consistency in evaluation practices. Specific examples of formative evaluations and guidance for grading are also shared. The transcript highlights the importance of ongoing faculty training, maintaining an anxiety-free learning environment, and providing clear expectations to students. Overall, effective evaluation in a nurse anesthesia program is crucial for student success and program accreditation.
Keywords
evaluation
nurse anesthesia
student progress
clinical site
accreditation
evaluation techniques
clinical judgment
critical thinking
formative evaluations
faculty training