In photos and videos, text appears very differently from neat words on a printed page. Amazon Rekognition can read skewed and distorted text to capture information like store names, forced narratives overlaid on media, street signs, and text on product packaging. In this lab, you will learn how to detect text in images and videos.
In the first part of this lab, we will use the Amazon Rekognition Image API DetectText to detect text in an input image. The second part uses the Amazon Rekognition Video API StartTextDetection to identify text within an MP4 video. This is an asynchronous API: once we start the detection job, we can periodically call GetTextDetection to monitor its status. However, it is highly recommended to check out the Notification Channel feature, which lets you receive an Amazon SNS notification when the detection job completes instead of repeatedly polling for its status.
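The two flows above can be sketched with boto3. This is a minimal sketch, not the lab notebook's code: the bucket and object names (`my-bucket`, `sample.jpg`, `sample.mp4`) are hypothetical placeholders, and the confidence threshold is an arbitrary example value.

```python
# Sketch of Rekognition text detection for images (synchronous DetectText)
# and videos (asynchronous StartTextDetection + GetTextDetection polling).
import time


def lines_at_or_above(text_detections, min_confidence=90.0):
    """Keep only LINE-level detections at or above a confidence threshold."""
    return [
        d["DetectedText"]
        for d in text_detections
        if d["Type"] == "LINE" and d["Confidence"] >= min_confidence
    ]


def detect_image_text(bucket, key):
    """Image path: a single DetectText call returns all detections."""
    import boto3  # imported here so the pure helper above works without boto3
    client = boto3.client("rekognition")
    response = client.detect_text(
        Image={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    return response["TextDetections"]


def detect_video_text(bucket, key, poll_seconds=5):
    """Video path: start an async job, then poll until it finishes.

    StartTextDetection also accepts a NotificationChannel argument
    (SNSTopicArn + RoleArn) so you can be notified via SNS instead
    of polling, as recommended above.
    """
    import boto3
    client = boto3.client("rekognition")
    job = client.start_text_detection(
        Video={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    job_id = job["JobId"]
    while True:
        result = client.get_text_detection(JobId=job_id)
        if result["JobStatus"] != "IN_PROGRESS":
            break
        time.sleep(poll_seconds)
    # Video results wrap each detection (with a Timestamp) in a TextDetection field.
    return [t["TextDetection"] for t in result.get("TextDetections", [])]


if __name__ == "__main__":
    # Hypothetical usage; requires AWS credentials and the objects to exist.
    for line in lines_at_or_above(detect_image_text("my-bucket", "sample.jpg")):
        print(line)
```

Note that video results carry a `Timestamp` for each detection, so the same text may appear many times across frames; the helper above only filters by type and confidence.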
To run this lab, you need to have completed the prerequisites section.
Click on the 5-text-detection.ipynb notebook.
Follow the instructions in the notebook to understand how you can detect text in images and videos.
In order to execute each cell, click on the “Run” button at the top or press “Shift+Enter”.
Make sure you use the “conda_python3” kernel and execute all cells in order.