Amazon Rekognition introduction and demo


What is Amazon Rekognition?

Amazon Rekognition makes it easy to add image and video analysis to your applications using proven, highly scalable deep learning technology that requires no machine learning expertise to use. With Amazon Rekognition you can identify people, text, scenes and activities in images and videos, as well as detect inappropriate content. It also provides highly accurate facial analysis and facial search capabilities that you can use to detect, analyse and compare faces for a wide variety of user verification, people counting and public safety use cases. You simply supply images of the objects or scenes you want to identify, and the service handles the rest.


1. Labels-

With Amazon Rekognition, you can identify thousands of objects such as bikes, telephones and buildings, and scenes such as parking lots, beaches and stadiums. While analysing video, you can also identify specific activities such as playing football or delivering a package.

2. Custom Labels-

You can extend the capabilities of Amazon Rekognition to extract information from images that is uniquely helpful to your business. For example, you can find your corporate logo on social media or identify your products on store shelves.
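As a rough sketch of how this could look with boto3 (the bucket, key and model ARN would be your own resources, and the helper names `names_above` and `find_custom_labels` are my own, not part of the service):

```python
def names_above(custom_labels, threshold):
    """Keep only the label names whose confidence meets the threshold."""
    return [label["Name"] for label in custom_labels
            if label["Confidence"] >= threshold]

def find_custom_labels(bucket, key, model_arn):
    """Call DetectCustomLabels on an S3 image using a trained model.
    Requires AWS credentials, the boto3 package, and a running model version."""
    import boto3
    client = boto3.client("rekognition")
    resp = client.detect_custom_labels(
        ProjectVersionArn=model_arn,
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=50,
    )
    return names_above(resp["CustomLabels"], 80)
```

Note that DetectCustomLabels needs a Custom Labels model you have trained and started beforehand; without one, the call fails.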

3. Content Moderation-

Amazon Rekognition helps you identify potentially unsafe or inappropriate content across both image and video assets, and provides you with detailed labels that allow you to accurately control what you want to allow based on your needs.
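A minimal sketch of how this could be wired up with boto3's DetectModerationLabels API (the bucket/key, threshold value and helper names here are my own illustrative choices):

```python
def is_safe(moderation_labels, threshold=60):
    """Return True when no moderation label reaches the confidence threshold."""
    return all(label["Confidence"] < threshold for label in moderation_labels)

def moderate_image(bucket, key):
    """Run DetectModerationLabels on an image stored in S3.
    Requires AWS credentials and the boto3 package."""
    import boto3
    client = boto3.client("rekognition")
    resp = client.detect_moderation_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=50,
    )
    return is_safe(resp["ModerationLabels"])
```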

4. Text Detection-

In photos, text appears very differently than neat words on a page. Amazon Rekognition can read skewed and distorted text to capture information such as store names, street signs or text on product packaging.
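Here is one way this could look with boto3's DetectText API; Rekognition returns both per-word and per-line detections, so the sketch below keeps only the lines (the bucket/key and helper names are my own):

```python
def full_lines(text_detections):
    """DetectText returns WORD and LINE entries; keep the full lines."""
    return [d["DetectedText"] for d in text_detections if d["Type"] == "LINE"]

def read_image_text(bucket, key):
    """Run DetectText on an image stored in S3.
    Requires AWS credentials and the boto3 package."""
    import boto3
    client = boto3.client("rekognition")
    resp = client.detect_text(
        Image={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    return full_lines(resp["TextDetections"])
```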

5. Face Detection and analysis-

With Amazon Rekognition you can easily detect when faces appear in images or videos, and get attributes such as gender, age range, and whether the eyes are open or closed.
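A sketch of reading those attributes with boto3's DetectFaces API; passing Attributes=["ALL"] asks for the full attribute set rather than the default subset (the bucket/key and helper names are my own):

```python
def face_summary(face_detail):
    """Pull a few readable attributes out of one FaceDetail record."""
    age = face_detail["AgeRange"]
    return {
        "age_range": (age["Low"], age["High"]),
        "gender": face_detail["Gender"]["Value"],
        "eyes_open": face_detail["EyesOpen"]["Value"],
    }

def analyse_faces(bucket, key):
    """Run DetectFaces with all attributes on an image stored in S3.
    Requires AWS credentials and the boto3 package."""
    import boto3
    client = boto3.client("rekognition")
    resp = client.detect_faces(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        Attributes=["ALL"],  # default is a smaller attribute subset
    )
    return [face_summary(face) for face in resp["FaceDetails"]]
```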

6. Face Search and verification-

Amazon Rekognition provides fast and accurate face search, allowing you to identify a person in a photo or video using your private repository of face images. You can also verify identity by analysing a face image against images you have stored for comparison.
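The one-to-one verification case can be sketched with boto3's CompareFaces API, which compares a source face against faces in a target image (the bucket, keys, threshold and helper names below are my own illustrative choices):

```python
def best_similarity(face_matches):
    """Return the highest similarity score among the matches, or None if none."""
    if not face_matches:
        return None
    return max(match["Similarity"] for match in face_matches)

def verify_face(bucket, selfie_key, id_photo_key):
    """Run CompareFaces between two images stored in S3.
    Requires AWS credentials and the boto3 package."""
    import boto3
    client = boto3.client("rekognition")
    resp = client.compare_faces(
        SourceImage={"S3Object": {"Bucket": bucket, "Name": selfie_key}},
        TargetImage={"S3Object": {"Bucket": bucket, "Name": id_photo_key}},
        SimilarityThreshold=80,
    )
    return best_similarity(resp["FaceMatches"])
```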

7. Celebrity Recognition-

You can quickly identify well-known people in your image or video libraries to catalogue footage and photos for marketing, advertising and media industry use cases.
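A brief sketch using boto3's RecognizeCelebrities API (the bucket/key and helper names are my own):

```python
def celebrity_names(celebrity_faces):
    """Extract just the names from the CelebrityFaces list."""
    return [c["Name"] for c in celebrity_faces]

def recognise_celebrities(bucket, key):
    """Run RecognizeCelebrities on an image stored in S3.
    Requires AWS credentials and the boto3 package."""
    import boto3
    client = boto3.client("rekognition")
    resp = client.recognize_celebrities(
        Image={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    return celebrity_names(resp["CelebrityFaces"])
```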

8. Pathing-

You can capture the paths of people in a scene using Amazon Rekognition. For example, you can use the movement of athletes during a game for post-game analysis.
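Pathing works on stored video through the asynchronous StartPersonTracking / GetPersonTracking APIs. A simplified sketch (the bucket/key and helper names are my own; a real application would poll or subscribe to SNS until the job finishes rather than fetch the result immediately):

```python
def person_path(persons, person_index):
    """Extract (timestamp_ms, left, top) points for one tracked person."""
    return [
        (p["Timestamp"],
         p["Person"]["BoundingBox"]["Left"],
         p["Person"]["BoundingBox"]["Top"])
        for p in persons
        if p["Person"]["Index"] == person_index and "BoundingBox" in p["Person"]
    ]

def track_people(bucket, video_key):
    """Start an asynchronous person-tracking job on a video in S3 and
    fetch its result. Requires AWS credentials and the boto3 package."""
    import boto3
    client = boto3.client("rekognition")
    job = client.start_person_tracking(
        Video={"S3Object": {"Bucket": bucket, "Name": video_key}}
    )
    # In practice, wait for the job (polling or SNS) before this call.
    result = client.get_person_tracking(JobId=job["JobId"])
    if result["JobStatus"] == "SUCCEEDED":
        return person_path(result["Persons"], person_index=0)
    return []
```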


Benefits:

1. Easy Integration

2. Use of Artificial Intelligence

3. Scalable Image Analysis

4. Completely Integrated

5. Low Costs

Use Cases:

1. Make content searchable-

Amazon Rekognition automatically extracts metadata from image and video files, capturing objects, faces, text and much more. This metadata can be used to easily search your images and videos with keywords, or to find the right assets for content syndication.

2. Flag inappropriate content-

Amazon Rekognition automatically flags inappropriate content such as nudity, graphic violence or weapons in an image or video. Using the detailed metadata returned, you can create your own rules based on what is considered appropriate for the culture and demographics of your users.

3. Enable digital identity verification-

Amazon Rekognition can create scalable authentication workflows for automated payments and other identity verification scenarios. It lets you easily perform face verification for opted-in users by comparing a photo or selfie with an identifying document such as a driving license.

4. Respond quickly to public safety challenges-

It allows you to create applications that help find missing persons in images and videos by searching for their faces against a database of missing persons. Doing so can speed up rescue operations.

5. Identify products, landmarks and brands-

App developers can use Amazon Rekognition custom labels to identify specific items in social media or photo apps.

6. Analyse shopper patterns-

You can analyse shopper behaviour and density in your retail store by studying the path each person follows. Using face analysis, you can also understand average age ranges, gender distribution and the emotions people express, without identifying them.


Here is a basic demo project in which we will upload an image, and the service will detect the objects present in it. Technically, we'll fetch an image from S3 and extract labels from it, i.e. object detection. The image I used is:

Let's get started...

Step 1. Search for IAM in the services and create a new role with Lambda as the trusted service, then attach the 'AWSLambdaExecute' and 'AmazonRekognitionFullAccess' permissions as shown below.

Step 2. Now, go to the S3 service and create a bucket, then upload an image of your choice, as shown below. Remember, the bucket name must be globally unique across all of AWS.
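Besides being globally unique, the bucket name also has to follow S3's naming rules. A simplified checker (my own helper, covering only the common rules: 3-63 characters, lowercase letters, digits and hyphens, starting and ending with a letter or digit; whether a name is actually free can only be discovered by asking AWS):

```python
import re

def looks_like_valid_bucket_name(name):
    """Simplified check of S3 bucket-naming rules: 3-63 characters,
    lowercase letters, digits and hyphens only, starting and ending
    with a letter or digit."""
    return re.fullmatch(r"[a-z0-9][a-z0-9-]{1,61}[a-z0-9]", name) is not None
```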

Step 3. The next step is to create a Lambda function and write code in it. Click on 'Create function' in the Lambda service, choose 'Author from scratch' and select Python as the runtime.

For the execution role, choose the IAM role we just created, and click on 'Create function'.

Step 4. Write the Python code in lambda_function.

import json
import boto3

def lambda_handler(event, context):
    client = boto3.client("rekognition")
    s3 = boto3.client("s3")

    # Fetch the object from S3; its bytes could also be passed to
    # detect_labels directly as {"Bytes": file_content}.
    fileObj = s3.get_object(Bucket="bucketforrekognitionservice", Key="rekog_image.jpeg")
    file_content = fileObj["Body"].read()

    # Ask Rekognition to label the image referenced in S3.
    res = client.detect_labels(
        Image={"S3Object": {"Bucket": "bucketforrekognitionservice", "Name": "rekog_image.jpeg"}},
        MaxLabels=3,
        MinConfidence=70,
    )
    print(res)  # the full response shows up in the execution logs

    return {
        'statusCode': 200,
        'body': json.dumps(res['Labels'])
    }

Apart from json, import boto3 and define the Rekognition client and the S3 client using boto3. We call get_object, passing the name of the bucket we created and the name of the image, to fetch our S3 object, and then read the content of the image file. Once we have the file content, we get the response by calling the client.detect_labels() function; out of that image, we are detecting labels here. To this function we pass the S3 object (bucket and image name), the maximum number of labels and the minimum confidence.

MaxLabels is an optional parameter. It specifies the maximum number of labels you want returned.

MinConfidence can be any number from 0-100. It is a threshold: for each detected label, Rekognition only returns it if the label's confidence is at least the threshold you specify. Use a value of 0 for MinConfidence to ensure that all labels are returned, regardless of the detection confidence.
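The effect of MinConfidence can be mimicked client-side: if you fetch all labels once (MinConfidence=0), you can re-filter them at different thresholds later without calling the API again. A small sketch with my own helper name, using confidence values like the ones in the demo result below:

```python
def apply_min_confidence(labels, min_confidence):
    """Mimic the MinConfidence filter on a label list that has already
    been returned (e.g. one fetched with MinConfidence=0)."""
    return [label for label in labels if label["Confidence"] >= min_confidence]

labels = [
    {"Name": "Car", "Confidence": 99.63},
    {"Name": "Person", "Confidence": 99.62},
    {"Name": "Bicycle", "Confidence": 97.07},
]
print(apply_min_confidence(labels, 98))  # only Car and Person survive
```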

Step 5. At last, we have to test the code and give the test event a name.

If the execution succeeds, you will get your response:

{'Labels': [{'Name': 'Car', 'Confidence': 99.63050079345703, 'Instances': [{'BoundingBox': {'Width': 0.2949617803096771, 'Height': 0.2502225935459137, 'Left': 0.6277754902839661, 'Top': 0.351685494184494}, 'Confidence': 99.63050079345703}, {'BoundingBox': {'Width': 0.20002424716949463, 'Height': 0.2419649213552475, 'Left': 0.3250146806240082, 'Top': 0.35516148805618286}, 'Confidence': 99.21725463867188}, {'BoundingBox': {'Width': 0.16251954436302185, 'Height': 0.22240367531776428, 'Left': 0.5369489192962646, 'Top': 0.3549574017524719}, 'Confidence': 98.94142150878906}, {'BoundingBox': {'Width': 0.14828883111476898, 'Height': 0.24772381782531738, 'Left': 0.8505643606185913, 'Top': 0.33840563893318176}, 'Confidence': 97.6090316772461}, {'BoundingBox': {'Width': 0.10549013316631317, 'Height': 0.33547452092170715, 'Left': 0.0, 'Top': 0.2649077773094177}, 'Confidence': 93.35880279541016}, {'BoundingBox': {'Width': 0.2535254657268524, 'Height': 0.2279699593782425, 'Left': 0.5630983710289001, 'Top': 0.35698744654655457}, 'Confidence': 77.44841003417969}, {'BoundingBox': {'Width': 0.0598486103117466, 'Height': 0.07791062444448471, 'Left': 0.48055946826934814, 'Top': 0.3815402686595917}, 'Confidence': 69.66415405273438}, {'BoundingBox': {'Width': 0.062368690967559814, 'Height': 0.13660086691379547, 'Left': 0.47936567664146423, 'Top': 0.387096643447876}, 'Confidence': 68.0121841430664}, {'BoundingBox': {'Width': 0.04459456354379654, 'Height': 0.15997092425823212, 'Left': 0.08868294209241867, 'Top': 0.366536021232605}, 'Confidence': 57.1058235168457}], 'Parents': []},

{'Name': 'Person', 'Confidence': 99.6181640625, 'Instances': [{'BoundingBox': {'Width': 0.1587383896112442, 'Height': 0.4437275528907776, 'Left': 0.1819591522216797, 'Top': 0.24159686267375946}, 'Confidence': 99.6181640625}, {'BoundingBox': {'Width': 0.013436483219265938, 'Height': 0.046163879334926605, 'Left': 0.4770947992801666, 'Top': 0.3348120450973511}, 'Confidence': 51.229793548583984}], 'Parents': []},

{'Name': 'Bicycle', 'Confidence': 97.07412719726562, 'Instances': [{'BoundingBox': {'Width': 0.09683728218078613, 'Height': 0.2784055173397064, 'Left': 0.21297605335712433, 'Top': 0.48057276010513306}, 'Confidence': 97.07412719726562}], 'Parents': []}], 'LabelModelVersion': '2.0', 'ResponseMetadata': {'RequestId': '6a2172a3-1c7f-4675-9b62-8cfbcccab534', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': '6a2172a3-1c7f-4675-9b62-8cfbcccab534', 'content-type': 'application/x-amz-json-1.1', 'content-length': '2123', 'date': 'Thu, 31 Mar 2022 07:20:02 GMT'}, 'RetryAttempts': 0}}

This was the result I got with label name and confidence level.

Thank you for your time. Happy Learning!