Amazon Rekognition AI Moderation

Last updated: Jul-10-2024

Cloudinary is a cloud-based service that provides an end-to-end asset management solution including uploads, storage, transformations, optimizations and delivery. Cloudinary offers a very rich set of image transformation and analysis capabilities and allows you to upload images to the cloud, transform them on the fly and deliver them to your users optimized and cached via a fast CDN.

Amazon Rekognition is a service that makes it easy to add image analysis to your applications. Cloudinary provides an add-on for Amazon Rekognition's image moderation service based on Deep Learning algorithms, fully integrated into Cloudinary's image management and transformation pipeline.

With the Amazon Rekognition AI Moderation add-on, you can extend Cloudinary's powerful cloud-based image transformation and delivery capabilities with automatic, AI-based moderation of your photos. Protect your users from explicit and suggestive adult content in user-uploaded images, making sure that no offensive photos are displayed to your web and mobile viewers.

Getting started

Before you can use the Amazon Rekognition AI Moderation add-on:

  • You must have a Cloudinary account. If you don't already have one, you can sign up for a free account.

  • Register for the add-on: make sure you're logged in to your account and then go to the Add-ons page. For more information about add-on registrations, see Registering for add-ons.

  • Keep in mind that many of the examples on this page use our SDKs. For SDK installation and configuration details, see the relevant SDK guide.

  • If you are new to Cloudinary, you may want to take a look at How to integrate Cloudinary in your app for a walkthrough of the basics of creating and setting up your account, working with SDKs, and then uploading, transforming, and delivering assets.

Automatic image moderation flow

The following list describes the flow of uploading and displaying moderated images using Cloudinary and the Amazon Rekognition AI Moderation add-on:

  1. Your users upload an image to Cloudinary through your application.
  2. The uploaded image is sent to Amazon Rekognition for moderation.
  3. The image is either approved or rejected by Amazon Rekognition.
  4. An optional notification callback is sent to your application with the image moderation result (see the sketch after this list).
  5. A rejected image does not appear in your media library, but is backed up, consuming storage, so that it can be restored if necessary.
  6. Moderated images can be listed programmatically using Cloudinary's Admin API or interactively using our online Media Library Web interface.
  7. You can manually override the automatic moderation using the Admin API or the Media Library.
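
As an illustration of the notification callback in step 4, the following is a minimal sketch of a handler, assuming a Flask application; the payload field names used (notification_type, moderation_status, public_id) are assumptions and should be verified against an actual callback:

    from flask import Flask, request

    app = Flask(__name__)

    # Minimal sketch of the notification callback from step 4. The payload
    # field names are assumptions; verify them against a real notification.
    @app.route("/cloudinary/moderation", methods=["POST"])
    def moderation_webhook():
        data = request.get_json(force=True)
        if data.get("notification_type") == "moderation":
            if data.get("moderation_status") == "rejected":
                # For example, hide the image in your application's database.
                print("Rejected image:", data.get("public_id"))
        return "", 200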

Moderation categorization

Amazon Rekognition assigns a moderation confidence score (0-100) indicating the likelihood that an image belongs to an offensive content category. In the upload parameters described below, the corresponding confidence level is expressed as a decimal number between 0.0 and 1.0.

There are two levels of categories for labeling unsafe content, with each top-level category containing a number of second-level categories. For example, the 'Violence' (violence) category contains the sub-category 'Physical Violence' (physical_violence). A full list of all the latest available categories and sub-categories is provided by AWS; see Amazon Rekognition categories.

The top level categories include:

  • Explicit Nudity (explicit_nudity)
  • Suggestive (suggestive)
  • Violence (violence)
  • Visually Disturbing (visually_disturbing)
  • Rude Gestures (rude_gestures)
  • Drugs (drugs)
  • Tobacco (tobacco)
  • Alcohol (alcohol)
  • Gambling (gambling)
  • Hate Symbols (hate_symbols)

Note
When referring to the categories in your code, replace spaces with underscores and uppercase letters with lowercase letters (e.g., 'Illustrated Nudity Or Sexual Activity' becomes illustrated_nudity_or_sexual_activity).
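
A trivial sketch of that conversion in Python:

    def category_code(display_name):
        # 'Illustrated Nudity Or Sexual Activity' -> 'illustrated_nudity_or_sexual_activity'
        return display_name.lower().replace(" ", "_")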

The default moderation confidence level for rejecting an image is 0.5 for all categories, unless specifically overridden (see the explanation and examples below). Any image that Amazon Rekognition scores above the moderation confidence level in any of the categories is classified as 'rejected'; otherwise, its status is set to 'approved'.

Request image moderation

To request moderation while uploading an image, with default moderation confidence levels, set the moderation upload API parameter to aws_rek:
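
For example, a minimal sketch using the Python SDK (the file name and public ID are illustrative; configuration is assumed to come from the CLOUDINARY_URL environment variable):

    import cloudinary
    import cloudinary.uploader

    # Request Rekognition moderation with the default confidence level (0.5)
    # for all categories. The file name and public ID are illustrative.
    result = cloudinary.uploader.upload(
        "local_image.jpg",
        public_id="uploaded_image",
        moderation="aws_rek")

    print(result["moderation"][0]["status"])  # 'approved' or 'rejected'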

Tip
You can use upload presets to centrally define a set of upload options including add-on operations to apply, instead of specifying them in each upload call. You can define multiple upload presets, and apply different presets in different upload scenarios. You can create new upload presets in the Upload page of the Console Settings or using the upload_presets Admin API method. From the Upload page of the Console Settings, you can also select default upload presets to use for image, video, and raw API uploads (respectively) as well as default presets for image, video, and raw uploads performed via the Media Library UI.

Learn more: Upload presets
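
For instance, a sketch of creating such a preset with the Python SDK's create_upload_preset Admin API method (the preset name is illustrative):

    import cloudinary
    import cloudinary.api

    # Create an upload preset that requests Rekognition moderation for every
    # upload that applies it. The preset name is illustrative.
    cloudinary.api.create_upload_preset(
        name="moderated_uploads",
        moderation="aws_rek")

    # Uploads can then apply the preset instead of repeating the option:
    # cloudinary.uploader.upload("local_image.jpg", upload_preset="moderated_uploads")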

You can also optionally:

  • Override the default moderation confidence level (0.5) on a per category basis by including the category name and new value as part of the moderation parameter value. You can override multiple categories, separated by colons.
    Note
    Overriding the default moderation confidence value of a top-level category will also set all its child categories to the same value, unless you specifically override one of the child categories as well.
  • Exclude a category from the moderation check by setting the category's value to ignore.
  • Return the moderation_labels array even if no offending content is found, by setting: aws_rek:min_confidence:0.0

Example

  • Set the Female Swimwear or Underwear child category to a minimum confidence level of 0.85
  • Set the Explicit Nudity top-level category to 0.7. This then becomes the confidence level for all its child categories as well.
  • Exclude the Revealing Clothes category from the check.
  • Check all other categories at the default 0.5 confidence level, as shown in the sketch below.
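
A sketch of this request using the Python SDK and the colon-separated override syntax described above (the file name is illustrative):

    import cloudinary
    import cloudinary.uploader

    # Override confidence levels per category; the values match the
    # example list above.
    result = cloudinary.uploader.upload(
        "local_image.jpg",
        moderation="aws_rek:explicit_nudity:0.7"
                   ":female_swimwear_or_underwear:0.85"
                   ":revealing_clothes:ignore")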

Notes
  • Confidence levels must be provided as a decimal number between 0.0 and 1.0.
  • Images must have a minimum height and width of 80 pixels.

Moderation response

The following snippet shows an example response to an upload API call that requested moderation; in this case, the moderation result has put the image in 'rejected' status:
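
The shape below is an abridged, illustrative sketch (standard upload response fields are omitted and all values are examples); verify the exact structure against a real response:

    {
      "public_id": "uploaded_image",
      "moderation": [
        {
          "kind": "aws_rek",
          "status": "rejected",
          "response": {
            "moderation_labels": [
              {
                "confidence": 97.5,
                "name": "Suggestive",
                "parent_name": ""
              }
            ]
          }
        }
      ]
    }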

Image moderation listing

Cloudinary's Admin API can be used to list all moderated images. You can list either the approved or the rejected images by specifying the second parameter of the resources_by_moderation API method. For example, to list all rejected images:
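
A sketch using the Python SDK:

    import cloudinary
    import cloudinary.api

    # List resources moderated by the add-on whose status is 'rejected'.
    result = cloudinary.api.resources_by_moderation("aws_rek", "rejected")
    for resource in result["resources"]:
        print(resource["public_id"])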

Manual override

As the automatic image moderation of the Amazon Rekognition AI Moderation add-on is based on a decision made by an advanced algorithm, in some cases you may want to manually override the moderation decision and either approve a previously rejected image or reject an approved one.

Overriding moderation via the Media Library

One way to manually override the moderation result is using Cloudinary's Media Library Web interface. From the left navigation menu, select Moderation. Then, from the moderation tools list in the top menu, select Rekognition, and then select the status (Rejected or Approved) of the images you want to display.

  • When displaying the images rejected by Amazon Rekognition, you can click the green Approve button to reverse the decision and recover the original rejected image.
  • When displaying the images approved by Amazon Rekognition, you can click the red Reject button to reverse the decision and prevent the image from being publicly available to your users.

Overriding moderation via the Admin API

As an alternative to the Media Library interface, you can use Cloudinary's Admin API to manually override the moderation result. The following example uses the update API method, specifying the public ID of a moderated image and setting the moderation_status parameter to rejected:
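
A sketch using the Python SDK (the public ID is illustrative):

    import cloudinary
    import cloudinary.api

    # Manually reject a previously approved image, overriding the automatic
    # moderation result. 'uploaded_image' is an illustrative public ID.
    cloudinary.api.update("uploaded_image", moderation_status="rejected")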
