As content volumes proliferate, media companies face challenges in preparing and managing their content, which is crucial to providing a high-quality viewing experience and to better monetizing that content. Tasks such as finding where the end credits begin or choosing the right spots to insert advertisements are manual, slow, and expensive. Amazon Rekognition Video makes it easy to automate these operational media analysis tasks by providing fully managed, purpose-built APIs.
In this lab, you will learn how to detect shot segments and technical cues (black frames, color bars, and end credits) in videos. To start segment detection on a stored video, we call the StartSegmentDetection API. Once the job has started, we can periodically call GetSegmentDetection to monitor its status. However, for production use it is highly recommended to use the NotificationChannel feature, which sends an Amazon SNS notification when the detection job completes, instead of repeatedly polling for the status.
To run this lab, you need to have completed the prerequisites section.
Click on the 6-video-segment-detection.ipynb notebook.
Follow the instructions in the notebook to understand how you can detect markers (black frames or shot changes) in videos.
In order to execute each cell, click on the “Run” button at the top or press “Shift+Enter”.
Make sure you use “conda_python3” kernel and execute all cells in order.