If you’ve been following American news recently, you may have heard that the White House Office of Science and Technology Policy (OSTP) released a “Blueprint for an AI Bill of Rights.” This 73-page document provides guidelines for the ethical and equitable use of AI and automation. We’ve taken the time to read through it and briefly summarize how it affects the testing industry, specifically online exam proctoring. Modern assessment solutions have transformative potential to improve test-takers’ lives, but it’s more important than ever to develop online exam proctoring solutions ethically, transparently, and with respect for users’ privacy.
The purpose of the AI Bill of Rights is to support the development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems. The framework applies to automated systems that have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services. This guidance will be especially important in the ever-more-automated exam industry. The OSTP’s blueprint lays out the basic principles in easy-to-understand language.
Committed to Ensuring Ethical Proctoring Systems
As we move into a future dominated by AI and machine learning, it’s important to anticipate emerging trends in regulatory compliance, security protocols, tech improvements, and every other aspect of exam development and delivery. At Meazure Learning, we saw the potential for AI misuse and worked hard to replace our AI with professional, human expertise. As a result, we don’t currently use true AI or machine learning in our online proctored exams.
Let’s look at the five principles outlined in the “Blueprint for an AI Bill of Rights”—and how Meazure Learning is already following them. Feel free to jump to one of the principles below:
- Safe and Effective Systems
- Algorithmic Discrimination Protections
- Data Privacy
- Notice and Explanation
- Human Alternatives, Consideration, and Fallback
Principle #1: Safe and Effective Systems
“You should be protected from unsafe or ineffective systems.”
Keeping users protected from AI systems that may cause them real-world harm is first among the OSTP’s priorities. It recommends that software developers and designers draw on guidance from “diverse communities, stakeholders, and domain experts” to mitigate harm or bias before a program launches. The OSTP also recommends continual assessment after deployment to ensure safe, effective delivery and an ongoing commitment to reducing harm from both the intended and unintended consequences of AI.
Meazure Learning has committed to this principle and its three major components: developing and testing AI for potential harm in a “proactive and ongoing manner,” protecting against “inappropriate or irrelevant data usage,” and upholding the “safety and effectiveness of the system.” We don’t use AI in our online exam proctoring decision-making, but we strive to be sure our proctoring platform meets the OSTP’s standards nonetheless.
Reviewing existing AI and establishing a routine for developing and testing new AI help test-takers feel more comfortable, safe, and confident in the technology they use. This ultimately creates a more equitable testing experience. Such equity and comfort are essential for a positive online proctored exam experience for test-takers and proctors alike.
Reviewing Technology in a Proactive and Ongoing Manner
One way we proactively manage our technology is through consistent human involvement at every step. From our internal auditing and secret shopper program to our web monitoring and use of data analytics, we are constantly improving our proctoring platform to be sure it works in everyone’s best interest.
Preventing Inappropriate or Irrelevant Data Usage
When it comes to preventing inappropriate or irrelevant data usage, we’re committed to protecting test-takers’—and test programs’—information. Before testing, we give test-takers information on what data we will collect and how we will use that data. Our automated services only collect information relevant to verifying identity. And we only use that information within the context of online exam proctoring.
Maintaining Safety and Effectiveness of the System
Safety and effectiveness are essential to the online proctored exam process. We’re committed to safety for you and your test-takers, from ensuring your exam integrity to establishing data privacy protocols. Our online exam proctoring platform provides multiple methods for protecting your exam data against misuse and cheating, always backing up our automated solutions with a human reviewer. In addition to our own efforts to create a safe and effective system, we use third-party systems and scans to test our solutions for effectiveness, safety, and equity. We also use data encryption, application coding protections, a service to protect our critical infrastructure, and more.
Principle #2: Algorithmic Discrimination Protections
“You should not face discrimination by algorithms and systems should be used and designed in an equitable way.”
According to the OSTP, algorithmic discrimination happens when an AI solution has the effect—intended or otherwise—of disadvantaging someone based on a protected characteristic, such as race, gender, religion, national origin, or disability status. AI designers and developers have an obligation to proactively prevent algorithmic discrimination. They can do this through various means, including equity assessments, use of representative data, and disparity testing.
The effect of algorithmic discrimination on people with disabilities is of particular concern. Say a person uses a device such as a screen reader to accommodate their disability. Will an AI solution flag that as suspicious? What about behavior that involves repetitive movements or unwanted sounds (tics) that can’t be easily controlled? Will that be flagged too? The effect of AI on your test-takers’ experience with your remote proctored exam is immense.
Recognizing the Effects of Algorithmic Discrimination in Online Exam Proctoring
Because our ProctorU Proctoring Platform doesn’t use AI, we have eliminated most of these false flags. Our human proctors support our automated system in identifying potential cheating. They also review any unexpected activity that may be wrongly labeled as cheating.
“Exams can have a tremendous impact on a test-taker’s life. That’s why remote proctoring systems need to involve more than just good software. They need to be ethically designed and thoughtfully maintained by human professionals.”
Bobby Middleton, Vice President of Product Management, Meazure Learning
At Meazure Learning, we take the possibility of algorithmic discrimination very seriously. To trust the results of any exam, test-takers must have a fair and equitable environment. In early 2021, we removed nearly all AI and dynamic algorithms from our online exam proctoring system—the ProctorU Proctoring Platform. We did this because we recognized the harm AI could cause a test-taker. AI capabilities were simply not mature enough to replace the human nuance required in online testing. While AI solutions may be ready for prime time someday, they pose too much risk today.
Our technology and automation reduce administrative workload. They also help our human proctors identify activities that might violate a test provider’s integrity requirements. Automated flags can be inaccurate, however, and those inaccuracies can introduce bias. To protect against this, at least one highly trained professional proctors or reviews every exam session, analyzing it before submitting suspected and confirmed incidents to the testing organization.
Principle #3: Data Privacy
“You should be protected from abusive data practices . . . and you should have agency over how data about you is used.”
The OSTP guidelines for data privacy include the right to AI that safeguards your information from potential misuse. They also include the right to have a say, not only in what information a service collects, but in how a service uses, shares, and—potentially—sells that information. AI developers and designers have an obligation to prevent violations of your privacy.
At Meazure Learning, we strive to be transparent in our policies for handling test-taker information. We provide secure storage for test-taker information via encrypted servers. The only information we collect is relevant to determining identity, and we delete the information after a set time period to protect privacy. We never use test-taker information for any purpose other than online exam proctoring, and we never sell it to third parties. You can find more information on our data privacy policies in the “Compliance and Privacy” section of our FAQ page.
Principle #4: Notice and Explanation
“You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.”
The use of AI can have a massive impact on someone’s life (e.g., employment decisions, educational advancement). It’s vital to inform people both when an automated system is being used and what it is being used for. The OSTP lays out guidelines for such explanations to be “demonstrably clear, timely, understandable, and accessible.”
Principle #5: Human Alternatives, Consideration, and Fallback
“You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.”
The OSTP outlines several guidelines for providing human alternatives and recourse to appeal decisions whenever possible. AI systems can have a massive impact on people’s lives, from healthcare and unemployment services to voting and employee reviews. Everyone should have access to a human trained to handle their needs. It can make the difference between receiving necessary benefits and being denied them.
Human proctors have always been a part of our online exam proctoring solution. Why? Because we believe that even the most advanced technology in the world can’t replace human eyes and critical judgment. Without human oversight, you leave the future of your program and test-takers in the hands of a machine.
For the foreseeable future, human proctors are the best way forward in exam administration. That’s why we recommend that clients provide an alternative proctoring option for test-takers who aren’t comfortable with remote proctoring or who can’t complete the proctoring process. See more about how we incorporate the human element into our proctoring services.
Conclusion: Your Rights Matter in Online Exam Proctoring
The OSTP’s guidelines for using AI lay out a basic framework for keeping automated systems safe, secure, and equitable. In our online exam proctoring work at Meazure Learning, we keep safety and comfort paramount, both for our clients and for their test-takers. Our automation is as equitable as possible and led by a team of certified, professional proctors because we believe the future of assessment requires a human-centric solution with automated support—not the other way around.
With these five principles in mind, we hope you will examine your current standards when it comes to AI and automation. At the very least, they provide valuable guardrails as you scrutinize the standards of your vendors, partners, and clients. We can all work together to ensure a more fair, secure, and equitable exam experience for everyone.
Learn more about our human-led exam development and delivery in our article “Humanizing the Exam Process from Start to Finish.”