Finding the right online proctoring partner can be difficult and time-consuming. If you knew the right questions to ask, wouldn’t it be easier to make an informed decision? We think so too! That’s why we’ve outlined some questions that can help guide you in your search. In this white paper, you’ll find: 11 important questions you should be asking every company you’re considering, why the questions matter, and our responses to the questions.
In the wake of the COVID-19 pandemic, colleges and universities across the world were forced to pivot to remote education. This hasty transition resulted in many ad hoc and make-do teaching and assessment plans that were never designed for long-term use. As the pandemic continues to disrupt traditional higher education, administrators and faculty need sustainable methods to remotely teach and assess students. Jeffrey Selingo and Karin Fischer, two well-respected writers in the field of higher education, collaborated on this white paper addressing three key questions regarding online proctoring.
Finding the right online proctoring partner can be difficult and time-consuming. If you knew the right questions to ask, wouldn’t it be easier to make an informed decision? We think so too! That’s why we’ve outlined some questions that can help guide you in your search. In this white paper, you’ll find: 9 important questions you should be asking every company you’re considering, why the questions matter, and Meazure Learning’s responses to the questions.
Setting a defensible cut score through a process called standard setting is an essential component that supports exam validity. For traditional multiple-choice and other selected-response assessments, there is a wide body of literature that supports the use of established standard-setting methods such as the Angoff method or the Nedelsky method. This white paper explores some of the key considerations when setting a cut score.
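The modified Angoff method mentioned above asks each subject matter expert (judge) to estimate the probability that a minimally competent candidate would answer each item correctly; the cut score is then the sum of the mean rating per item. A minimal sketch, using purely hypothetical judge ratings:

```python
# Minimal sketch of the (modified) Angoff standard-setting method.
# Each judge rates the probability that a minimally competent candidate
# answers each item correctly; the cut score is the sum of the per-item
# mean ratings. All ratings below are hypothetical.

angoff_ratings = [
    # items 1-4: one probability estimate per judge (three judges)
    [0.60, 0.70, 0.65],
    [0.80, 0.75, 0.85],
    [0.50, 0.55, 0.45],
    [0.90, 0.85, 0.95],
]

item_means = [sum(ratings) / len(ratings) for ratings in angoff_ratings]

# Expected raw score of a minimally competent candidate on this 4-item exam
cut_score = sum(item_means)

print(round(cut_score, 2))  # 2.85 out of 4 items
```

In practice, programs typically run multiple rating rounds with discussion and may present judges with item statistics between rounds; the arithmetic above is only the core of the method.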
When developing an assessment, two major decisions a credentialing organization needs to make are: How many items will be on the exam? and How much time will test-takers be given to complete the exam? These choices can have a significant impact on fairness and validity. Often, once an exam has been administered, many test-takers will anecdotally report that they ran out of time and that the assessment was unfair. Therefore, an important question to ask is, What can credentialing organizations do to investigate and address these concerns? We explore this topic in the white paper.
As a way to capture the richness of job performance, many credentialing organizations are supplementing traditional multiple-choice questions (MCQs) with innovative item types. Although this view is not unanimous, one theory suggests that MCQs offer a somewhat artificial representation of job tasks and that innovative item types are a more refined way to assess candidate competence. This white paper explores this topic in depth.
A competency survey is a popular instrument for validating the skills, knowledge, and behaviors included on your competency profile. It allows an organization to reach numerous practitioners working in different practice settings and gather quantitative and qualitative data that lends itself to multiple methods of analysis and interpretation. This white paper explores the options for completing such a survey as well as some important considerations.
In addition to looking at an item’s p-value and discrimination index to determine how well an item is functioning, it is also important to analyze the distractor choice. The study of distractors is important for subject matter experts to better understand the performance of an item. Accordingly, distractor analyses can be used in an item’s revision process. Distractor evaluation is also helpful during key validation as it can help determine whether an item has a key error or more than one correct answer. This white paper discusses two methods for distractor evaluation: tabular and graphical.
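A tabular distractor evaluation of the kind described above can be sketched simply: split test-takers into high- and low-scoring groups and compare the proportion endorsing each response option. A distractor endorsed more often by high scorers than the keyed answer can signal a key error. The response data below are hypothetical, assuming a four-option item keyed "A":

```python
# Sketch of a tabular distractor analysis: for each response option,
# compute the proportion of test-takers choosing it in the top- and
# bottom-scoring halves. Data are hypothetical; the item's key is "A".

from collections import Counter

# (total_score, chosen_option) per test-taker -- hypothetical sample
responses = [
    (9, "A"), (8, "A"), (8, "B"), (7, "A"), (7, "A"),
    (4, "C"), (3, "B"), (3, "C"), (2, "D"), (2, "C"),
]

responses.sort(reverse=True)          # highest scorers first
half = len(responses) // 2
high, low = responses[:half], responses[half:]

def option_props(group):
    """Proportion of the group endorsing each option."""
    counts = Counter(choice for _, choice in group)
    return {opt: counts[opt] / len(group) for opt in "ABCD"}

high_props = option_props(high)
low_props = option_props(low)

for opt in "ABCD":
    print(f"{opt}: high={high_props[opt]:.1f}  low={low_props[opt]:.1f}")
# Here the key "A" attracts 80% of the high group and none of the low
# group, while the distractors draw only low scorers -- the expected
# pattern for a well-functioning item.
```

A graphical evaluation plots these same proportions against ability (often in more than two score groups) so that subject matter experts can see each option's trace line at a glance.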
Following an exam administration, Meazure Learning (formerly Yardstick) often produces an item analysis report from its proprietary software, COGs. The item analysis report provides information about each item in terms of its difficulty, discrimination, and the distribution of responses across alternatives. This white paper focuses on understanding what it means when a COGs item analysis report reads: “Item is making a limited contribution to the measurement capability of the test.”
In the testing world, mention of Item Response Theory (IRT) conjures up images of cutting-edge, state-of-the-art practices needed for a program to be seen as modern and valid. The purpose of this white paper is to provide an introduction to IRT using nontechnical language. Since IRT is such a big topic, this white paper will focus on how IRT can be used to assess test-taker ability.
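To make the idea concrete: in IRT, an item response function gives the probability that a test-taker of a given ability answers an item correctly. A minimal sketch of the common two-parameter logistic (2PL) model, with illustrative parameter values (not drawn from the white paper):

```python
# Minimal sketch of the two-parameter logistic (2PL) IRT model.
# P(theta) = probability that a test-taker with ability theta answers
# an item correctly, given discrimination a and difficulty b.
# Parameter values below are illustrative only.

import math

def p_correct(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An item of average difficulty (b = 0) and moderate discrimination (a = 1):
print(p_correct(0.0, a=1.0, b=0.0))   # 0.5 -- when ability equals difficulty
print(p_correct(1.5, a=1.0, b=0.0))   # higher ability -> higher probability
```

Under this model, a test-taker's ability estimate is the theta value that best explains their observed pattern of right and wrong answers across all items, which is what allows IRT-based programs to score test-takers on a common scale even when they see different items.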