Amazon Rekognition helps you identify potentially unsafe or inappropriate content in both images and videos, and returns detailed labels so you can precisely control what to allow based on your needs. In this lab, you will learn how to detect content in images and videos that is inappropriate, unwanted, or offensive.
In the first part of the lab, we will use the Amazon Rekognition Image API, DetectModerationLabels, to detect unsafe content in an image. The second part uses the Amazon Rekognition Video API, StartContentModeration, to identify unsafe content in an MP4 video. StartContentModeration is asynchronous: once we start the detection, we periodically call GetContentModeration to monitor the job status. However, it is highly recommended to check out the Notification Channel feature, which delivers an SNS notification when the detection job completes, instead of polling for the status.
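The two calls above can be sketched with boto3 as follows. This is a minimal sketch, not the notebook's code: the bucket name `my-bucket` and the object keys `photo.jpg` and `clip.mp4` are placeholders, and `run_demo()` assumes valid AWS credentials with Rekognition and S3 permissions.

```python
import time


def summarize_image_labels(response):
    """Flatten a DetectModerationLabels response into
    (label name, parent label, rounded confidence) tuples."""
    return [
        (label["Name"], label.get("ParentName", ""), round(label["Confidence"], 1))
        for label in response.get("ModerationLabels", [])
    ]


def run_demo():
    """Illustrative only -- requires AWS credentials and real S3 objects."""
    import boto3

    rekognition = boto3.client("rekognition")

    # Part 1: synchronous image moderation.
    image_response = rekognition.detect_moderation_labels(
        Image={"S3Object": {"Bucket": "my-bucket", "Name": "photo.jpg"}},  # placeholder
        MinConfidence=60,
    )
    print(summarize_image_labels(image_response))

    # Part 2: asynchronous video moderation -- start the job, then poll.
    job = rekognition.start_content_moderation(
        Video={"S3Object": {"Bucket": "my-bucket", "Name": "clip.mp4"}}  # placeholder
    )
    while True:
        result = rekognition.get_content_moderation(JobId=job["JobId"])
        if result["JobStatus"] in ("SUCCEEDED", "FAILED"):
            break
        time.sleep(5)  # simple polling; an SNS notification channel avoids this loop

    # Video results carry a timestamp (milliseconds into the video) per label.
    for item in result.get("ModerationLabels", []):
        print(item["Timestamp"], item["ModerationLabel"]["Name"])
```

In production, pass a `NotificationChannel` (an SNS topic ARN and IAM role) to `start_content_moderation` so completion is pushed to you rather than polled for.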
To run this lab, you need to have completed the prerequisite section.
Click on the 2-content-moderation.ipynb notebook.
Follow the instructions in the notebook to understand how you can identify inappropriate content in images and videos.
To execute each cell, click the “Run” button at the top or press “Shift+Enter”.
Make sure you use the “conda_python3” kernel and execute all cells in order.