Virtual proctoring is the practice of supervising an exam or course online. Just as test takers are monitored while they take an onsite exam, virtual proctoring allows test takers to be monitored remotely by a human via their webcam to detect and prevent cheating. With the rise of online learning, and the restrictions COVID-19 placed on traditional in-person proctoring, many organizations that conduct online exams need a scalable approach to virtual proctoring.
Instead of assigning a dedicated online proctor to each testing session, one proctor can oversee a large number of test takers, with machine learning flagging the sessions that need human review. For example, machine learning can detect objects of interest, such as a mobile phone, or count the number of people in the session. It can also recognize the person taking the exam and analyze audio or video to detect the presence of multiple people. Depending on the nature of the exam, machine learning can analyze streaming video, stored video, or frames captured at random intervals during the testing session to audit the exam.
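As a rough illustration of the object-detection idea above, the sketch below calls the Amazon Rekognition `DetectLabels` API on a captured frame and scans the returned labels for objects of interest and multiple people. The `review_labels` and `review_frame` helper names, the label list, and the confidence thresholds are illustrative choices, not part of the lab; the lab notebooks contain the authoritative code.

```python
# Labels that may indicate a proctoring violation (illustrative list)
OBJECTS_OF_INTEREST = {"Mobile Phone", "Cell Phone"}

def review_labels(labels, min_confidence=80.0):
    """Scan a DetectLabels response's "Labels" list for objects of
    interest and count detected people; return the findings that
    warrant a human review of the session."""
    findings = []
    person_count = 0
    for label in labels:
        if label["Confidence"] < min_confidence:
            continue
        if label["Name"] in OBJECTS_OF_INTEREST:
            findings.append(f"{label['Name']} detected")
        if label["Name"] == "Person":
            # Each instance is one detected person bounding box
            person_count = len(label.get("Instances", []))
    if person_count > 1:
        findings.append(f"{person_count} people detected")
    return findings

def review_frame(bucket, key):
    """Run Amazon Rekognition DetectLabels on a frame stored in S3.
    Requires AWS credentials with rekognition:DetectLabels permission."""
    import boto3  # AWS SDK for Python
    rekognition = boto3.client("rekognition")
    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=80,
    )
    return review_labels(response["Labels"])
```

Keeping the review logic separate from the API call makes it easy to reuse the same check for stored video or sampled frames.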
Amazon Rekognition is a machine learning based image and video analysis service that can identify objects, people, text, scenes, and activities in images and videos, as well as inappropriate content. In this lab, we will show you how to build an online virtual proctoring solution using different Amazon Rekognition APIs.
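One of those APIs, `CompareFaces`, can support the identity-verification step mentioned earlier: matching the person on camera against the registered test taker's reference photo. The sketch below is a minimal illustration; the helper names and the similarity thresholds are assumptions for this example, not values prescribed by the lab.

```python
def is_same_person(compare_faces_response, min_similarity=90.0):
    """Given a CompareFaces response, return True if any face match
    meets the similarity threshold, i.e. the person on camera appears
    to be the registered test taker."""
    return any(
        match["Similarity"] >= min_similarity
        for match in compare_faces_response.get("FaceMatches", [])
    )

def verify_test_taker(reference_bytes, frame_bytes):
    """Call Amazon Rekognition CompareFaces to match the registered
    reference photo against a frame from the testing session.
    Requires AWS credentials with rekognition:CompareFaces permission."""
    import boto3  # AWS SDK for Python
    rekognition = boto3.client("rekognition")
    response = rekognition.compare_faces(
        SourceImage={"Bytes": reference_bytes},
        TargetImage={"Bytes": frame_bytes},
        SimilarityThreshold=80,
    )
    return is_same_person(response)
```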
To run this lab, you need to have completed the prerequisites section.
Open the “virtual-proctoring” folder inside “amazon-rekognition-code-samples”
Open each notebook in the folder one by one, starting with “0-pre-requirements.ipynb”, and follow the instructions in the notebook
To execute each cell, click the “Run” button at the top or press “Shift+Enter”.
Make sure you use the “conda_python3” kernel and execute all cells in order.