Programmable Media

Image effects and enhancements

Last updated: Sep-16-2024

Cloudinary's visual effects and enhancements are a great way to easily change the way your images look within your site or application. For example, you can change the shape of your images, blur and pixelate them, apply quality improvements, make color adjustments, change the look and feel with fun effects, apply filters, and much more. You can also apply multiple effects to an image by applying each effect as a separate chained transformation.

Some transformations use fairly simple syntax, whereas others require more explanation - examples of these types of transformations are shown in the advanced syntax examples.

Besides the examples on this page, there are many more effects available and you can find a full list of them, including examples, by checking out our URL transformation reference.

Here are some popular options for using effects and artistic enhancements. Click each image to see the URL parameters applied in each case:

  • Cartoonify your images
  • Add a vignette to your images
  • Create low quality image placeholders
  • Add image outlines

Simple syntax examples

Here are some examples of effects and enhancements that use a simple transformation syntax. Click the links to see the full syntax for each transformation in the URL transformation reference.

Artistic filters

Apply an artistic filter using the art effect, specifying one of the filters shown.

Available filters

Original image:

Original image, no filter

Filters:

al_dente, athena, audrey, aurora, daguerre, eucalyptus, fes, frost, hairspray, hokusai, incognito, linen, peacock, primavera, quartz, red_rock, refresh, sizzle, sonnet, ukulele, zorro
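
For example, a delivery URL applying the al_dente filter could look like the following (the demo cloud name and sample public ID are placeholders):

https://res.cloudinary.com/demo/image/upload/e_art:al_dente/sample.jpg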

See full syntax: e_art in the Transformation Reference.

Cartoonify

Make an image look more like a cartoon using the cartoonify effect.
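
A minimal example URL (placeholder cloud name and public ID):

https://res.cloudinary.com/demo/image/upload/e_cartoonify/sample.jpg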

Image with cartoonify effect

See full syntax: e_cartoonify in the Transformation Reference.

Opacity

Adjust the opacity of an image using the opacity transformation (o in URLs). Specify a value between 0 and 100, representing the percentage of opacity, where 100 is completely opaque and 0 is completely transparent. In this case the image is delivered with 30% opacity:
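
The corresponding URL (placeholder cloud name and public ID):

https://res.cloudinary.com/demo/image/upload/o_30/sample.jpg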

Image delivered with 30% opacity

See full syntax: o (opacity) in the Transformation Reference.

Pixelate

Pixelate an image using the pixelate effect.
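
A minimal example URL (placeholder cloud name and public ID):

https://res.cloudinary.com/demo/image/upload/e_pixelate/sample.jpg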

Pixelated image

See full syntax: e_pixelate in the Transformation Reference.

Sepia

Change the colors of an image to shades of sepia using the sepia effect.
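
A minimal example URL (placeholder cloud name and public ID):

https://res.cloudinary.com/demo/image/upload/e_sepia/sample.jpg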

Sepia effect applied to an image

See full syntax: e_sepia in the Transformation Reference.

Vignette

Fade the edges of images into the background using the vignette effect.
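
A minimal example URL (placeholder cloud name and public ID):

https://res.cloudinary.com/demo/image/upload/e_vignette/sample.jpg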

Models image with vignette


See full syntax: e_vignette in the Transformation Reference.

Note
When you use the vignette effect with PNG images, it usually blends seamlessly. With formats that don't support transparency, such as JPEG, the areas that would otherwise be transparent are rendered white, so a visible white edge can appear around the vignette when the image is displayed against a non-white background.

Image enhancement options

Cloudinary offers various ways to enhance your images. The list below explains the differences between them, and the sections that follow show examples of each. You can also watch a video tutorial showing how to apply these in a React app.

Generative restore (e_gen_restore)
  • Purpose: Excels in revitalizing images affected by digital manipulation and compression.
  • Key features: Compression artifact removal: effectively eliminates JPEG blockiness and overshoot due to compression. Noise reduction: smoothens grainy images for a cleaner visual. Image sharpening: boosts clarity and detail in blurred images.
  • Main use cases: Over-compressed images; user-generated content; restoring vintage photos.
  • How it works: Utilizes generative AI to recover and refine lost image details.

Upscale (e_upscale)
  • Purpose: Increases the resolution of an image using AI, with special attention to faces.
  • Key features: Enhances clarity and detail while upscaling; specialized face detection and enhancement; preserves the natural look of faces.
  • Main use cases: Improving the quality of low resolution images, especially those with human faces.
  • How it works: Analyzes the image, with additional logic applied to faces, to predict necessary pixels.

Enhance (e_enhance)
  • Purpose: Enhances the overall appeal of images without altering content, using AI.
  • Key features: Improves exposure, color balance, and white balance; enhances the general look of an image.
  • Main use cases: Any images requiring a quality boost; user-generated content.
  • How it works: An AI model analyzes and applies various operators to enhance the image.

Improve (e_improve)
  • Purpose: Automatically improves images by adjusting colors, contrast, and lighting.
  • Key features: Enhances overall visual quality; adjusts colors, contrast, and lighting.
  • Main use cases: Enhancing user-generated content; any images requiring a quality boost.
  • How it works: Applies an automatic enhancement filter to the image.

Generative restore

This example shows how the generative restore effect can enhance the details of a highly compressed image:

Normal downscaling Restored image

Try it out: Generative restore in the Transformation Center.

Upscale

This example shows how the upscale effect can preserve the details of a low resolution image when upscaling:
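
A minimal example URL (placeholder cloud name and public ID):

https://res.cloudinary.com/demo/image/upload/e_upscale/sample.jpg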

Normal upscaling Upscale effect

Try it out: Upscale in the Transformation Center.

Enhance

This example shows how the enhance effect can improve the lighting of an under exposed image:
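
A minimal example URL (placeholder cloud name and public ID):

https://res.cloudinary.com/demo/image/upload/e_enhance/sample.jpg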

Original Enhanced image

Try it out: AI image enhancer in the Transformation Center.

Improve

This example shows how the improve effect can adjust the overall colors and contrast in an image:
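
A minimal example URL (placeholder cloud name and public ID):

https://res.cloudinary.com/demo/image/upload/e_improve/sample.jpg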

Original Improved image

Advanced syntax examples

In general, most of the visual effects and enhancements can take an additional option to tailor the effect to your liking. For some, however, you may need to provide additional syntax and use some more complex concepts. It is important to understand how these advanced transformations work when attempting to use them. The sections below outline some of the more advanced transformations and help you to use these with your own assets.

Remember, there are many more transformations available and you can find a full list of them, including examples, by checking out our URL transformation reference.

Tip
You can use MediaFlows, Cloudinary’s low-code workflow builder, to automatically generate variants of any image in different colors and styles. Learn more and sign up here.

3D LUTs

3D lookup tables (3D LUTs) are used to map one color space to another. They can be used to adjust colors, contrast, and/or saturation, so that you can correct contrast, fix a camera's inability to see a particular color shade, or give a final finished look or a particular style to your image.

After uploading a .3dl file to your product environment as a raw file, you can apply it to any image using the lut property of the layer parameter (l_lut: in URLs), followed by the LUT file name (including the .3dl extension).

Below you can see the docs/textured_handbag.jpg image file in its original color, compared to the image with different LUT files applied. Below these is the code for applying one of the LUTs.

Original | with 'iwltbap_sedona' LUT | with 'iwltbap_aspen' LUT
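
For example, applying the 'iwltbap_sedona' LUT (placeholder cloud name; the .3dl file must first be uploaded as a raw file):

https://res.cloudinary.com/demo/image/upload/l_lut:iwltbap_sedona.3dl/docs/textured_handbag.jpg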

See full syntax: l_lut in the Transformation Reference.

Background color

Use the background parameter (b in URLs) to set the background color of the image. The image background is visible when padding is added with one of the padding crop modes, when rounding corners, when adding overlays, and with semi-transparent PNGs and GIFs.

An opaque color can be set as an RGB hex triplet (e.g., b_rgb:3e2222), a 3-digit RGB hex (e.g., b_rgb:777) or a named color (e.g., b_green). Cloudinary's client libraries also support a # shortcut for RGB (e.g., setting background to #3e2222 which is then translated to rgb:3e2222).

For example, the uploaded image named mountain_scene padded to a width and height of 300 pixels with a light blue background:
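
A sketch of that URL (placeholder cloud name; mountain_scene is the public ID named above):

https://res.cloudinary.com/demo/image/upload/c_pad,w_300,h_300,b_lightblue/mountain_scene.jpg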

Image padded to a width and height of 300 pixels with light blue background

You can also use a 4-digit or 8-digit RGBA hex quadruplet for the background color, where the 4th hex value represents the alpha (opacity) value (e.g., b_rgb:3e222240 results in 25% opacity).

Note
When using the background parameter to set the background color of a text overlay, you can also set the color to predominant_contrast. This selects the strongest contrasting color to the predominant color while taking all pixels in the image into account. For example, l_text:Arial_30:foo,b_predominant_contrast.

See full syntax: b (background) in the Transformation Reference.

Try it out: Background in the Transformation Center.

Content-aware padding

You can automatically set the background color to the most prominent color in the image when applying one of the padding crop modes (pad, lpad, mpad or fill_pad) by setting the background parameter to auto (b_auto in URLs). The parameter can also accept an additional value as follows:

  • b_auto:border - selects the predominant color while taking only the image border pixels into account. This is the default option for b_auto.
  • b_auto:predominant - selects the predominant color while taking all pixels in the image into account.
  • b_auto:border_contrast - selects the strongest contrasting color to the predominant color while taking only the image border pixels into account.
  • b_auto:predominant_contrast - selects the strongest contrasting color to the predominant color while taking all pixels in the image into account.
b_auto:border | b_auto:predominant | b_auto:border_contrast | b_auto:predominant_contrast

For example, padding the purple-suit-hanky-tablet image to a width and height of 300 pixels, and with the background color set to the predominant color in the image:
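
A sketch of that URL (placeholder cloud name):

https://res.cloudinary.com/demo/image/upload/c_pad,w_300,h_300,b_auto:predominant/purple-suit-hanky-tablet.jpg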

Pad to 300x300 with the predominant color set as the background color

Tip
To use generative AI to extend the image into the padded areas, see generative fill.

See full syntax: b_auto in the Transformation Reference.

Try it out: Background in the Transformation Center.

Gradient fade

You can also apply a padding gradient fade effect with the predominant colors in the image by adjusting the value of the b_auto parameter as follows:

b_auto:[gradient_type]:[number]:[direction]

Where:

  • gradient_type - one of the following values:
    • predominant_gradient - base the gradient fade effect on the predominant colors in the image
    • predominant_gradient_contrast - base the effect on the colors that contrast the predominant colors in the image
    • border_gradient - base the gradient fade effect on the predominant colors in the border pixels of the image
    • border_gradient_contrast - base the effect on the colors that contrast the predominant colors in the border pixels of the image
  • number - the number of predominant colors to select. Possible values: 2 or 4. Default: 2
  • direction - if 2 colors are selected, this parameter specifies the direction to blend the 2 colors together (if 4 colors are selected each gets interpolated between the four corners). Possible values: horizontal, vertical, diagonal_desc, and diagonal_asc. Default: horizontal
b_auto:predominant_gradient:2:diagonal_desc | b_auto:predominant_gradient_contrast:4
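
For example (placeholder cloud name and public ID):

https://res.cloudinary.com/demo/image/upload/c_pad,w_300,h_300,b_auto:predominant_gradient:2:diagonal_desc/sample.jpg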

Custom color palette

Add a custom palette to limit the selected color to one of the colors in the palette that you provide. Once the predominant color has been calculated, the closest color from the available palette is selected. Append a colon and then the value palette followed by a list of colors, each separated by an underscore. For example, to automatically add padding and a palette that limits the possible choices to green, red and blue: b_auto:palette_red_green_blue

The palette can be used in combination with any of the various values for b_auto, and the same color in the palette can be selected more than once when requesting multiple predominant colors. For example, padding to a width and height of 300 pixels, with a 4 color gradient fade in the auto colored padding, and limiting the possible colors to red, green, blue, and orange:
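
A sketch of that URL; combining the gradient and palette options in this order is an assumption based on the descriptions above (placeholder cloud name and public ID):

https://res.cloudinary.com/demo/image/upload/c_pad,w_300,h_300,b_auto:predominant_gradient:4:palette_red_green_blue_orange/sample.jpg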

Pad to 300x300 with 4 color gradient fade from given palette
Gradient fade into padding

Fade the image into the added padding by adding the gradient_fade effect with a value of symmetric_pad (e_gradient_fade:symmetric_pad in URLs). The padding blends into the edge of the image with a strength indicated by an additional value, separated by a colon (Range: 0 to 100, Default: 20). Values for x and y can also be specified as a percentage (range: 0.0 to 1.0), or in pixels (integer values) to indicate how far into the image to apply the gradient effect. By default, the gradient is applied 30% into the image (x_0.3).

For example, padding the string image to a width and height of 300 pixels, with the background color set to the predominant color, and with a gradient fade effect, between the added padding and 50% into the image.
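
A sketch of that URL (placeholder cloud name; string is the public ID named above):

https://res.cloudinary.com/demo/image/upload/c_pad,w_300,h_300,b_auto,e_gradient_fade:symmetric_pad,x_0.5/string.jpg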

Pad to 300x300 with the predominant color set as the background color and gradient fade into padding


See full syntax: e_gradient_fade in the Transformation Reference.

Try it out: Background in the Transformation Center.

Borders

Add a solid border around images with the border parameter (bo in URLs). The parameter accepts a value with a CSS-like format: width_style_color (e.g., 3px_solid_black).

An opaque color can be set as an RGB hex triplet (e.g., rgb:3e2222), a 3-digit RGB hex (e.g., rgb:777) or a named color (e.g., green).

You can also use a 4-digit or 8-digit RGBA hex quadruplet for the color, where the 4th hex value represents the alpha (opacity) value (e.g., rgb:3e222240 results in 25% opacity).

Cloudinary's client libraries also support a # shortcut for RGB (e.g., setting color to #3e2222, which is then translated to rgb:3e2222), and they optionally let you set the border values programmatically instead of as a single string (e.g., border: { width: 4, color: 'black' }).

Note
Currently only the 'solid' border style is supported.

For example, the uploaded JPG image named blue_sweater delivered with a 5 pixel blue border:
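
A sketch of that URL (placeholder cloud name):

https://res.cloudinary.com/demo/image/upload/bo_5px_solid_blue/blue_sweater.jpg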

Image delivered with 5 pixel blue border

Borders are also useful for adding to overlays to clearly define the overlaying image, and also automatically adapt to any rounded corner transformations. For example, the base image given rounded corners with a 10 pixel grey border, and an overlay of the image of sale resized to a 100x100 thumbnail added to the northeast corner:

Base image with rounded corners + overlay

Note
When using the border parameter to set the border color of a text overlay, you can also set the color to predominant_contrast. This selects the strongest contrasting color to the predominant color while taking all pixels in the image into account. For example, l_text:Arial_30:foo,bo_3px_solid_predominant_contrast

See full syntax: bo (border) in the Transformation Reference.

Color blind effects

Cloudinary has a number of features that can help you to choose the best images as well as to transform problematic images to ones that are more accessible to color blind people. You can use Cloudinary to:

  • Simulate how an image would look to people with different color blind conditions.
  • Assist people with color blind conditions to help differentiate problematic colors.
  • Analyze images to provide color blind accessibility scores and information on which colors are the hardest to differentiate.

Tip
Watch a video tutorial that addresses color accessibility in JavaScript.

Simulate color blind conditions

You can simulate a number of different color blind conditions using the simulate_colorblind effect. For full syntax and supported conditions, see the e_simulate_colorblind parameter in the Transformation URL API Reference.

Simulate the way an image would appear to someone with deuteranopia, the most common form of color blindness:
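
For example (placeholder cloud name and public ID):

https://res.cloudinary.com/demo/image/upload/e_simulate_colorblind:deuteranopia/sample.jpg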

Original red and green image | Deuteranopia simulation

See full syntax: e_simulate_colorblind in the Transformation Reference.

Assist people with color blind conditions

Use the assist_colorblind effect (e_assist_colorblind in URLs) to help people with color blind conditions to differentiate between colors.

You can add stripes in different directions and thicknesses to different colors, making them easier to differentiate, for example:
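
For example, applying the default stripes assistance (placeholder cloud name and public ID):

https://res.cloudinary.com/demo/image/upload/e_assist_colorblind/sample.jpg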

Help a color blind user differentiate similar colors with stripes

A color blind person would see the stripes like this:

Stripe color-blind assistance with simulation

Alternatively, you can use color shifts to make colors easier to distinguish by specifying the xray assist type, for example:
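
For example (placeholder cloud name and public ID):

https://res.cloudinary.com/demo/image/upload/e_assist_colorblind:xray/sample.jpg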

Help a colorblind user differentiate similar colors with color shifts


See full syntax: e_assist_colorblind in the Transformation Reference.

Displacement maps

You can displace pixels in a source image based on the intensity of pixels in a displacement map image using the e_displace effect in conjunction with a displacement map image specified as an overlay. This can be useful to create interesting effects in a select area of an image or to warp the entire image to fit a needed design or texture. For example, to make an image wrap around a coffee cup or appear to be printed on a textured canvas.

The displace effect (e_displace in URLs) algorithm displaces the pixels in an image according to the color channels of the pixels in another specified image (a gradient map specified with the overlay parameter). The displace effect is added in the same component as the layer_apply flag. The red channel controls horizontal displacement, green controls vertical displacement, and the blue channel is ignored.

Note
The same layer transformation syntax rules apply, including for authenticated or private assets.

The final displacement of each pixel in the base image is determined by a combination of the red and green color channels, together with the configured x and/or y parameters:

x value    Red channel   Pixel displacement
Positive   0 - 127       Right
Positive   128 - 255     Left
Negative   0 - 127       Left
Negative   128 - 255     Right

y value    Green channel   Pixel displacement
Positive   0 - 127         Down
Positive   128 - 255       Up
Negative   0 - 127         Up
Negative   128 - 255       Down

The displacement of pixels is proportional to the channel values, with the extreme values giving the most displacement, and values closer to 128 giving the least displacement.

The displacement formulae are:

  • x displacement = (127-red channel)*(x parameter)/127
  • y displacement = (127-green channel)*(y parameter)/127

Positive displacement is right and down, and negative displacement is up and left.

For example, specifying an x value of 500, at red channel values of 0 and 255, the base image pixels are displaced by 500 pixels horizontally, whereas at 114 and 141 (127 - 10% and 128 + 10%) the base image pixels are displaced by 50 pixels horizontally.

x     Red channel   Pixel displacement
500   0             500 pixels right
500   114           50 pixels right
500   141           50 pixels left
500   255           500 pixels left

Note
Values of x and y must be between -999 and 999.

This is a standard displacement map algorithm used by popular image editing tools, so you can upload existing displacement maps found on the internet or created by your graphic artists to your product environment and specify them as the overlay asset, enabling you to dynamically apply the displacement effect on other images in your product environment or those uploaded by your end users.
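
A minimal URL sketch, assuming a grayscale displacement map uploaded with the hypothetical public ID distort_map; per the rules above, e_displace and its x/y values go in the same component as fl_layer_apply:

https://res.cloudinary.com/demo/image/upload/l_distort_map/e_displace,x_40,y_40,fl_layer_apply/sample.jpg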

Several sample use cases of this layer-based effect are shown in the sections below.

See full syntax: e_displace in the Transformation Reference.

Use case: Warp an image to fit a 3-dimensional product

Use a displacement map to warp the perspective of an overlay image for final placement as an overlay on a mug:

Couple cornfield Plus Displacement Equals Couple cornfield displaced

Using this overlay transformation for placement on a mug:

Warped image placed on a mug

Use case: Create a zoom effect

To displace the sample image by using a displacement map, creating a zoom effect:

Hands Plus Zoom map Equals Hands with zoom map

You could take this a step further by applying this displacement along with another overlay component that adds a magnifying glass. In this example, the same displacement map as above is used on a different base image and offset to a different location.

zoomed in image

Use case: Apply a texture to your image

Autumn woods Plus Canvas texture Equals Autumn woods with canvas texture


For more details on displacement mapping with the displace effect, see the article on Displacement Maps for Easy Image Transformations with Cloudinary. The article includes a variety of examples, as well as an interactive demo.

Distort

Using the distort effect, you can change the shape of an image, distorting its dimensions and the image itself. It works in one of two modes: you can either change the positioning of each of the corners, or you can warp the image into an arc.

To change the positioning of each of the corners, it is helpful to have in mind a picture like the one below. The solid rectangle shows the coordinates of the corners of the original image. The intended result of the distortion is represented by the dashed shape. The new corner coordinates are specified in the distort effect as x,y pairs, clockwise from top-left. For example:

Distortion coordinates

Image distorted to new shape
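
For illustration, a URL with eight arbitrary corner coordinates, listed as x,y pairs clockwise from the top-left (placeholder cloud name, public ID and values):

https://res.cloudinary.com/demo/image/upload/e_distort:10:10:280:40:280:260:20:290/sample.jpg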

For more details on perspective warping with the distort effect, see the article on How to dynamically distort images to fit your graphic design.

To curve an image, you can specify arc and the number of degrees in the distort effect, instead of the corner coordinates. If you specify a positive value for the number of degrees, the image is curved upwards, like a frown. Negative values curve the image downwards, like a smile.

You can distort text in the same way as images, for example, to add curved text to the frisbee image (e_distort:arc:-120):
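
A sketch of such a URL, assuming a base image named frisbee and an illustrative text overlay (the font, size and text are placeholders):

https://res.cloudinary.com/demo/image/upload/l_text:Arial_40_bold:Hello/e_distort:arc:-120/fl_layer_apply/frisbee.jpg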

Curved text distortion

See full syntax: e_distort in the Transformation Reference.

Text distortion demo

The CLOUDINARY text in the following demo was created using the text method of the Upload API. You can distort it by specifying different values for the corner coordinates.


Generative AI effects

Cloudinary has a number of transformations that make use of generative AI, including generative background replace, generative fill, generative recolor, generative remove, generative replace and generative restore, all described in the sections below.

You can use natural language in most of these transformations as prompts to guide the generation process.

Tip
See AI in Action for more uses of AI within Cloudinary.

Generative background replace (Beta)

Use AI to generate an alternative background for your images. The new background takes into account the foreground elements, positioning them naturally within the scene.

Important
Generative background replace is currently in Beta. There may be minor changes in functionality before the general access release. We would appreciate any feedback via our support team.

For images with transparency, the generated background replaces the transparent area. For images without transparency, the effect first determines the foreground elements and leaves those areas intact, while replacing the background.

You can use generative background replace without a prompt, and let the AI decide what to show in the background, based on the foreground elements. For example, replace the background of this image (e_gen_background_replace):

Original image | Replace the background
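
A minimal example URL (placeholder cloud name and public ID):

https://res.cloudinary.com/demo/image/upload/e_gen_background_replace/sample.jpg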

Alternatively, you can use a natural language prompt to guide the AI and describe what you want to see in the background. For example, place the model in front of an old castle (e_gen_background_replace:prompt_an%20old%20castle):

Background replaced in image of a model


You can regenerate the background with the same prompt (or no prompt) by setting the seed parameter. A different result is generated for each value you set. For example, regenerate the background for the old castle example (e_gen_background_replace:prompt_an%20old%20castle;seed_1):

Background replaced in image of a model


If you want to reproduce a background, use the same seed value, and make sure to keep any preceding transformation parameters the same. Subsequent parameters can be different, for example, scale down the same image:

Background replaced in image of a model and scaled down


In this next example, the transparent background of the original image is replaced to give context to the motorbike (e_gen_background_replace:prompt_a%20deserted%20street):

Original image | Replace the background

Notes and limitations:
  • The use of generative AI means that results may not be 100% accurate.
  • There is a special transformation count for the generative background replace effect.
  • If you get blurred results when using this feature, it is likely that the built-in NSFW (Not Safe For Work) check has detected something inappropriate. You can contact support to disable this check if you believe it is too sensitive.
  • The generative background replace effect is not supported for animated images, fetched images or incoming transformations.
  • Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.

See full syntax: e_gen_background_replace in the Transformation Reference.

Try it out: Generative background replace in the Transformation Center.

Generative fill

When resizing images using one of the padding crop modes (pad, lpad, mpad or fill_pad), rather than specifying a background color or using content-aware padding, you can seamlessly extend the existing image into the padded area.

Using generative AI, you can automatically add visually realistic pixels to either or both dimensions of the image. Optionally specify a prompt to guide the result of the generation.

To extend the width of an image, specify the aspect ratio such that the width needs padding. For example, change the following portrait image to be landscape by specifying an aspect ratio of 16:9 with a padding crop, then fill in the extended width using the gen_fill background option (b_gen_fill in URLs):

Original image of a moped in a street Original image Extended street Seamlessly fill the extended width
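
A sketch of that URL (placeholder cloud name and public ID):

https://res.cloudinary.com/demo/image/upload/ar_16:9,c_pad,b_gen_fill/sample.jpg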

Similarly, you can change a landscape image into portrait dimensions by specifying the aspect ratio such that the height needs padding:

Original image of a bench outside a house Original image Extended house Seamlessly fill the extended height

To extend both the width and the height of an image, you can use the minimum pad (mpad) crop mode, ensuring that the dimensions you specify are greater than the original image dimensions. For example, extend this 640 x 480 pixel image to fill a 1100 x 1100 pixel square:

Original image of a Gaudi mosaic Original image Extended mosaic Seamlessly fill both extended dimensions

When using padding modes, you can use the gravity parameter to position the original image within the padding. For example, to extend the first example's image only to the left, position the original image to the right by setting gravity to east:

Moped in a street - extend west

If you want to see something specific in the generated parts of the image, you can specify a prompt using natural language. For example, add a mug of coffee and cookies to the extended regions (b_gen_fill:prompt_mug%20of%20coffee%20and%20cookies):

Original image of a kid's desk Original image
Extended desktop with coffee and cookies Include coffee and cookies
Extended desktop No prompt

You can regenerate the filled background with the same prompt (or no prompt) by setting the seed parameter. A different result is generated for each value you set. For example, regenerate the background for the coffee and cookies example (b_gen_fill:prompt_mug%20of%20coffee%20and%20cookies;seed_1,c_pad,h_400,w_1500):

Extended background including coffee and cookies


To reproduce a filled background, use the same seed value, and make sure to keep any preceding transformation parameters the same. Subsequent parameters can be different, for example, scale down the same image:

Extended background including coffee and cookies


If you want to ensure that the background is extended in a natural fashion, without taking elements of the foreground into account, you can set the ignore-foreground parameter to true. This is in fact the default behavior, unless a foreground object touches the edge of the image. In the following example, the bike wheel touches the edge of the image, so the foreground would not be ignored by default; as a consequence, parts of the bike are generated in the extended image. In this case it is better to force the foreground to be ignored:

Original image of a girl with a bike Original image
Extend the image taking the foreground into account Default behavior
Extend the image without taking the foreground into account ignore-foreground_true

Notes and limitations:
  • Generative fill can only be used on non-transparent images.
  • There is a special transformation count for generative fill.
  • Generative fill is not supported for animated images, fetched images or incoming transformations.
  • If you get blurred results when using this feature, it is likely that the built-in NSFW (Not Safe For Work) check has detected something inappropriate. You can contact support to disable this check if you believe it is too sensitive.
  • Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.

See full syntax: b_gen_fill in the Transformation Reference.

Try it out: Generative fill in the Transformation Center.

Generative recolor

Recolor elements of your images using generative AI.

Use natural language to describe what you want to recolor in the image. For example, turn the jacket on the right pink (e_gen_recolor:prompt_the%20jacket%20on%20the%20right;to-color_pink):

Original image | Right jacket recolored pink
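
A sketch of that URL, using the transformation from above (placeholder cloud name and public ID):

https://res.cloudinary.com/demo/image/upload/e_gen_recolor:prompt_the%20jacket%20on%20the%20right;to-color_pink/sample.jpg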

To recolor all instances of the prompt in the image, specify multiple_true, for example, recolor all the devices in the following image to a particular orange color, with hex code EA672A:

Original image | All devices recolored orange

Tip
Consider using Replace color if you want to recolor everything of a particular color in your image, rather than specific elements.

If there are a number of different things that you want to recolor, you can specify more than one prompt. Note that when you specify more than one prompt, multiple instances of each prompt are recolored, regardless of the multiple parameter setting. For example, in this image, all devices and both people's hair are recolored:

Devices and hair recolored


Notes and limitations:
  • The generative recolor effect can only be used on non-transparent images.
  • The use of generative AI means that results may not be 100% accurate.
  • The generative recolor effect works best on simple objects that are clearly visible.
  • Very small objects and very large objects may not be detected.
  • During processing, large images are downscaled to a maximum of 2048 x 2048 pixels, then upscaled back to their original size, which may affect quality.
  • When you specify more than one prompt, all the objects specified in each of the prompts will be recolored whether or not multiple_true is specified in the URL.
  • There is a special transformation count for the generative recolor effect.
  • The generative recolor effect is not supported for animated images, fetched images or incoming transformations.
  • User-defined variables cannot be used for the prompt when more than one prompt is specified.
  • Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.

See full syntax: e_gen_recolor in the Transformation Reference.

Try it out: Generative recolor in the Transformation Center.

Generative remove

This effect uses generative AI to remove an object from an image and fill in the space with artificially generated, visually realistic pixels.

Use natural language to describe what you want to remove from the image, for example, remove the stick from this image of a dog with a stick in its mouth (e_gen_remove:prompt_the%20stick):

Original image of dog with stick Original image Dog with stick removed Remove the stick
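
A sketch of that URL, using the transformation from above (placeholder cloud name and public ID):

https://res.cloudinary.com/demo/image/upload/e_gen_remove:prompt_the%20stick/sample.jpg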

The natural language lets you be specific about what you want to remove. In the following example, specifying only 'the child' removes the child in the middle, whereas specifying 'the child in green' removes the child wearing the green jacket:

Original image of family Original image
Family with middle child removed Remove the child Family with child in green removed Remove the child in green

Remove multiple items

If there is more than one of the same item in an image, you can remove them all by setting multiple to true. For example, remove all the geese in this image (e_gen_remove:prompt_goose;multiple_true):

Original image | All the geese removed

Otherwise, only one is removed:

One goose removed from the picture


If there are a number of different things that you want to remove, you can specify more than one prompt. Note that when you specify more than one prompt, multiple instances of each prompt are removed regardless of the multiple parameter setting. For example, in this image, all phones are removed, together with the mouse and keyboard:

Original image | Specified gadgets removed

Remove items from a region

You can also specify one or more regions if you know the co-ordinates of the pixels that you want to remove. For each region, specify the x,y co-ordinates of the top left of the region, plus its width and height in pixels. For example, remove the objects from the top left and bottom right of the image:

Various objects Original image Remove two regions Remove specified regions

Remove shadows and reflections

By default, shadows and reflections cast by objects specified in the prompt are not removed. If you want to remove the shadow/reflection, set the remove-shadow parameter to true:

Family on a beach Original image
Remove the dog Remove the dog
(but not its shadow by default)
Remove the dog and its shadow Remove the dog
(and its shadow)

Remove text

You can remove all the text from an image by setting the prompt to text e.g. e_gen_remove:prompt_text, or e_gen_remove:prompt_(dog;text).

For example, remove the text and person from this store front (e_gen_remove:prompt_(text;person)):

Original image | Remove the text and the person

If you don't want to remove all the text in the image, specify the object you want to remove the text from by using the syntax text:<object> as the prompt (either as the only prompt, or together with other prompts as in the previous example).

For example, in the following image there is text in the main part of the image in addition to text on the mobile screen. You can remove the text on the mobile screen only, as follows (e_gen_remove:prompt_text:the%20mobile%20screen):

Original image | Remove all the text | Remove the text from the mobile screen only

Notes and limitations:
  • The generative remove effect can only be used on non-transparent images.
  • The use of generative AI means that results may not be 100% accurate.
  • The generative remove effect works best on simple objects that are clearly visible.
  • Very small objects and very large objects may not be detected.
  • Do not attempt to remove faces or hands.
  • During processing, large images are downscaled to a maximum of 6140 x 6140 pixels, then upscaled back to their original size, which may affect quality.
  • When you specify more than one prompt, all the objects specified in each of the prompts will be removed whether or not multiple_true is specified in the URL.
  • There is a special transformation count for the generative remove effect.
  • The generative remove effect is not supported for animated images, fetched images or incoming transformations.
  • User-defined variables cannot be used for the prompt when more than one prompt is specified.
  • Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.

See full syntax: e_gen_remove in the Transformation Reference.

Try it out: Generative remove in the Transformation Center.

Generative replace

This effect uses generative AI to replace objects in images with other objects.

Use natural language to describe what you want to replace in the image, and what to replace it with.

For example, replace "the picture" with "a mirror with a silver frame" (e_gen_replace:from_the%20picture;to_a%20mirror%20with%20a%20silver%20frame):

Original image with a picture on the wall Original image Picture replaced with mirror Picture replaced
with mirror
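
A sketch of that URL, using the transformation from above (placeholder cloud name and public ID):

https://res.cloudinary.com/demo/image/upload/e_gen_replace:from_the%20picture;to_a%20mirror%20with%20a%20silver%20frame/sample.jpg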

If you want to maintain the shape of the object you're replacing, set the preserve-geometry parameter to true. For example, below, notice the difference between the position of the sleeves and neckline of the sweater, with and without preserving the geometry when the shirt is replaced with a cable knit sweater:

Replace shirt with sweater - compare with and without preserving geometry:

Original | Geometry not preserved | Geometry preserved

Notes and limitations:
  • The generative replace effect can only be used on non-transparent images.
  • The use of generative AI means that results may not be 100% accurate.
  • The generative replace effect works best on simple objects that are clearly visible.
  • Very small objects and very large objects may not be detected.
  • Do not attempt to replace faces, hands or text.
  • During processing, large images are downscaled to a maximum of 2048 x 2048 pixels, then upscaled back to their original size, which may affect quality.
  • There is a special transformation count for the generative replace effect.
  • The generative replace effect is not supported for animated images, fetched images or incoming transformations.
  • Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.

See full syntax: e_gen_replace in the Transformation Reference.

Try it out: Generative replace in the Transformation Center.

Generative restore

Revitalize degraded and poor quality images using generative AI.

You can use the gen_restore effect (e_gen_restore in URLs) to improve images that have become degraded through repeated processing and compression, in addition to enhancing old images by improving sharpness and reducing noise.

Particularly useful for user generated content (UGC), generative restore can:

  • Remove severe compression artifacts
  • Reduce noise from grainy images
  • Sharpen blurred images

Use the slider in this example to see the difference between the original image on the left and the restored image on the right:

Original image Restored image
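
A minimal example URL (placeholder cloud name and public ID):

https://res.cloudinary.com/demo/image/upload/e_gen_restore/sample.jpg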

You can use the generative restore effect together with the improve effect for even better results. While generative restore tries to rectify compression artifacts, the improve effect addresses color, contrast and brightness.

Original image Restored and improved image

Tip
See how the generative restore effect compares to other image enhancement options.

Notes and limitations:
  • The generative restore effect can only be used on non-transparent images.
  • The use of generative AI means that results may not be 100% accurate.
  • There is a special transformation count for the generative restore effect.
  • The generative restore effect is not supported for animated images, fetched images or incoming transformations.
  • Initial transformation requests may result in a 423 error response while the transformation is being processed. You can prepare derived versions in advance using an eager transformation.

See full syntax: e_gen_restore in the Transformation Reference.

Try it out: Generative restore in the Transformation Center.

Layer blending and masking

Effects: screen, multiply, overlay, mask, anti_removal

These effects are used for blending an overlay with an image.

For example, to make each pixel of the boy_tree image brighter according to the pixel value of the overlaid cloudinary_icon_blue image:

Image made brighter according to overlay
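
A sketch of that URL (placeholder cloud name; e_screen is applied in the same component as fl_layer_apply, following the layer syntax described earlier):

https://res.cloudinary.com/demo/image/upload/l_cloudinary_icon_blue/e_screen,fl_layer_apply/boy_tree.jpg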


See full syntax: e_screen, e_multiply, e_overlay, e_mask, e_anti_removal in the Transformation Reference.

Outline

The outline effect (e_outline in URLs) enables you to add an outline to your transparent images. The parameter can also be passed additional values as follows:

  • mode - how to apply the outline effect, which can be one of the following values: inner, inner_fill, outer, fill. Default value: inner and outer.
  • width - the first integer supplied applies to the thickness of the outline in pixels. Default value: 5. Range: 1 - 100
  • blur - the second integer supplied applies to the level of blur of the outline. Default value: 0. Range: 0 - 2000
Original | e_outline | e_outline:inner | e_outline:inner_fill | e_outline:outer | e_outline:fill

Use the color parameter (co in URLs) to define a new color for the outline (the default is black). The color can be specified as an RGB hex triplet (e.g., rgb:3e2222), a 3-digit RGB hex (e.g., rgb:777) or a named color (e.g., green). For example, to add an orange outline:
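
A sketch of such a URL, assuming a transparent PNG source (the mode, width and blur values are illustrative):

https://res.cloudinary.com/demo/image/upload/e_outline:outer:5:200,co_orange/sample.png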

orange outline

You can also add a multi-colored outline by creating successive outline effect components. For example:

multiple outlines

See full syntax: e_outline in the Transformation Reference.

Replace color

You can replace a color in an image using the replace_color effect. Unless you specify otherwise, the most prominent high-saturation color in an image is selected as the color to change. By default, a tolerance of 50 is applied to this color, representing a radius in the LAB color space, so that similar shades are also replaced, achieving a more natural effect.

Tip
Consider using Generative recolor if you want to specify particular elements in your image to recolor, rather than everything with the same color.

For example, without specifying a color to change, the most prominent color is changed to the specified maroon:

Original image with blue color Original blue bag Predominant color recolored to maroon shades Predominant color recolored

Adding a tolerance value of 10 (e_replace_color:maroon:10) prevents the handle also changing color:

Original image with blue color Original blue bag Handle not recolored Handle not recolored

Specifying blue as the color to replace (to a tolerance of 80 from the color #2b38aa) replaces the blue sides with parallel shades of maroon, taking into account shadows, lighting, etc:
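
A sketch of that URL (placeholder cloud name and public ID):

https://res.cloudinary.com/demo/image/upload/e_replace_color:maroon:80:2b38aa/sample.jpg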

Original image with blue color Original blue bag Blues recolored to maroon shades Blues recolored to maroon shades

See full syntax: e_replace_color in the Transformation Reference.

Rotation

Rotate an image by any arbitrary angle in degrees with the angle parameter (a in URLs). A positive integer value rotates the image clockwise, and a negative integer value rotates the image counterclockwise. If the angle is not a multiple of 90 then a rectangular transparent bounding box is added containing the rotated image and empty space. In these cases, it's recommended to deliver the image in a transparent format if the background is not white.

Note
If either the width or height of an image exceeds 3000 pixels, the image is automatically downscaled first, and then rotated. This applies to the image that is the input to the rotation, be it the output of a chained transformation or the original image.

You can also take advantage of special angle-rotation modes, such as a_hflip / a_vflip to horizontally or vertically mirror flip an image, a_auto_right / a_auto_left to rotate an image 90 degrees only if the requested aspect ratio is different than the original image's aspect ratio, or a_ignore to prevent Cloudinary from automatically rotating images based on the image's stored EXIF details.

For details on these rotation modes, see the Transformation Reference.

Rotation examples

The following images apply various rotation options to the cutlery image:

  1. Rotate the image by 90 degrees:
    Image rotated 90 degrees clockwise
  2. Rotate the image by -20 degrees (automatically adds a transparent bounding box):
    Image rotated 20 degrees counterclockwise
  3. Vertically mirror flip the image and rotate by 45 degrees (automatically adds a transparent bounding box):
    Image vertically flipped and rotated 45 degrees clockwise
  4. Crop the image to a 200x200 circle, then rotate the image by 30 degrees (automatically adds a transparent bounding box) and finally trim the extra whitespace added:
    image cropped to a 200x200 circle, rotated 30 degrees clockwise and trimmed
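
For example (placeholder cloud name; cutlery is the public ID named above, and combining angle values with a dot, as in a_vflip.45, follows the Transformation Reference):

https://res.cloudinary.com/demo/image/upload/a_90/cutlery.jpg
https://res.cloudinary.com/demo/image/upload/a_vflip.45/cutlery.jpg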

See full syntax: a (angle) in the Transformation Reference.

Try it out: Rotate in the Transformation Center.

Rounding

Many website designs need images with rounded corners, while some websites require images with a complete circular or oval (ellipse) crop. Twitter, for example, uses rounded corners for its users' profile pictures.

Programmatically, rounded corners can be achieved using the original rectangular images combined with modern CSS properties or image masking overlays. However, it is sometimes useful to deliver images with rounded corners in the first place. This is particularly helpful when you want to embed images inside an email (most mail clients can't add CSS based rounded corners), a PDF or a mobile application. Delivering images with already rounded corners is also useful if you want to simplify your CSS and markup or when you need to support older browsers.

Transforming an image to a rounded version is done using the radius parameter (r in URLs). You can manually specify the amount to round various corners, or you can set it to automatically round to an exact ellipse or circle.

Note
To deliver a rounded image with a transparent background, deliver as PNG. Formats that do not support transparency will be delivered by default with a white background, which can be adjusted with the background transformation parameter. Keep in mind that the PNG format produces larger files than the JPEG format. For more information, see the article on PNG optimization - saving bandwidth on transparent PNGs with dynamic underlay.

Manually setting rounding values

To manually control the rounding, use the radius parameter with between 1 and 4 values defining the rounding amount (in pixels, separated by colons), following the same concept as the border-radius CSS property. When specifying multiple values, keep a corner untouched by specifying '0'.

  • One value: Symmetrical. All four corners are rounded equally according to the specified value.
  • Two values: Opposites are symmetrical. The first value applies to the top-left and bottom-right corners. The second value applies to the top-right and bottom-left corners.
  • Three values: One set of corners is symmetrical. The first value applies to the top-left. The second value applies to the top-right and bottom-left corners. The third value applies to the bottom-right.
  • Four values: The rounding for each corner is specified separately, in clockwise order, starting with the top-left.

For example:

1 value: r_20 | 2 values: r_25:0 | 3 values: r_10:40:25 | 4 values: r_30:0:30:30

Automatically rounding to an ellipse or circle

Rather than specifying specific rounding values, you can automatically crop images to the shape of an ellipse (if the requested image size is a rectangle) or a circle (if the requested image size is a square). Simply pass max as the value of the radius parameter instead of numeric values.

The following example transforms an uploaded JPEG to a 250x150 PNG with maximum radius cropping, which generates the ellipse shape with a transparent background:

250x150 ellipse image

As the following example shows, displaying pictures of your web site's users as circle headshots is very easy to achieve with Cloudinary using face gravity with max radius:
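
A minimal sketch (placeholder cloud name and public ID; delivering as PNG preserves the transparent corners):

https://res.cloudinary.com/demo/image/upload/c_thumb,g_face,w_100,h_100,r_max/sample.png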

100x100 face thumbnail with max radius

You can also overlay circular pictures of your users on other images using the layer_apply flag that tells Cloudinary to apply the rounding (and other transformations) to the overlay image and not to the base image:

Face thumbnail on base image


See full syntax: r (round corners) in the Transformation Reference.

Try it out: Round corners in the Transformation Center.

Shadow

There are two ways to add shadow to your images:

  • Use the shadow effect to apply a shadow to the edge of the image.
  • Use the dropshadow effect to apply a shadow to objects in the image.

Shadow effect

The shadow effect (e_shadow in URLs) applies a shadow to the edge of the image. You can use this effect to make it look like your image is hovering slightly above the page.

In this example, a dark blue shadow with medium blurring of its edges (co_rgb:483d8b,e_shadow:50) is added with an offset of 60 pixels to the top right of the photo (x_60,y_-60):
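
The full transformation as a URL (placeholder cloud name and public ID):

https://res.cloudinary.com/demo/image/upload/co_rgb:483d8b,e_shadow:50,x_60,y_-60/sample.jpg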

Photo of Stockholm with shadow effect


If your image has transparency, the shadow is added to the edge of the non-transparent part, for example, adding the same shadow to the lipstick in this image:

Transparent photo of lipstick with shadow effect

For a more realistic shadow, use the dropshadow effect.

See full syntax: e_shadow in the Transformation Reference.

Dropshadow effect

The dropshadow effect (e_dropshadow in URLs) uses AI to apply a realistic shadow to an object or objects in the image.

You can use this effect to apply consistent shadows across a set of product images, where background removal has been used.

To create the shadow, specify the position of the light source, using azimuth and elevation as shown in this diagram, where north (0 / 360 degrees) is behind the object:

Diagram showing azimuth and elevation

You can also specify a spread from 0 to 100, where the smaller the number, the closer the light source is to 'point' light, and larger numbers mean 'area' light.

The following example has a light source set up at an azimuth of 220 degrees, an elevation of 40 degrees above 'ground' and where the spread of the light source is 20% (e_dropshadow:azimuth_220;elevation_40;spread_20):
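
The transformation as a URL (placeholder cloud name; assuming a transparent PNG source):

https://res.cloudinary.com/demo/image/upload/e_dropshadow:azimuth_220;elevation_40;spread_20/sample.png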

Lipstick without shadow Original Lipstick with shadow With dropshadow effect

Notes
  • Either:
    • the original image must include transparency, for example where the background has already been removed and it has been stored in a format that supports transparency, such as PNG, or
    • the dropshadow effect must be chained after the background_removal effect, for example:
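
      A sketch of such a chained URL (placeholder cloud name and public ID):

      https://res.cloudinary.com/demo/image/upload/e_background_removal/e_dropshadow/sample.jpg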

See background removal and drop shadow being applied to product images on the fly in a React app.

See full syntax: e_dropshadow in the Transformation Reference.

Try it out: Drop shadow in the Transformation Center.

Dropshadow effect demo

Try out the different dropshadow effect settings on an image of a bench.


Note
It can take a few seconds to generate a new image on the fly if you've tried a combination of settings that hasn't been tried before. Once an image has been generated, though, it's cached on the CDN, so future requests for the same transformation are much faster. You can learn more about that in our Service introduction.

Shape cutouts

You can use a layer image with an opaque shape to either remove that shape from the image below that layer, leaving the shape to be transparent (e_cut_out), or conversely, use it like a cookie-cutter, to keep only that shape in the base image, and remove the rest (fl_cutter).

You can also use AI to keep or remove certain parts of an image (e_extract).

Note
The same layer transformation syntax rules apply, including for authenticated or private assets.

Remove a shape

The following example uses the cut_out effect to cut a logo shape (the overlay image) out of a base ruler image. There's a notebook photo underlay behind the cutout ruler, such that you can see the notebook paper and lines through the logo cutout:
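
A sketch of the cutout part of that URL, assuming hypothetical public IDs cloudinary_logo and ruler; the notebook underlay would be added with an additional u_ (underlay) component:

https://res.cloudinary.com/demo/image/upload/l_cloudinary_logo/e_cut_out,fl_layer_apply/ruler.png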

Image of a ruler with the Cloudinary logo cut out

Keep a shape

The following example uses the cutter flag to trim an image of a water drop based on the shape of a text layer (l_text:Unkempt_250_bold:Water/fl_cutter,fl_layer_apply). The text overlay is defined with the desired font and size of the resulting delivered image:
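
A sketch of that URL, assuming a water-drop base image with the hypothetical public ID water_drop; the layer component is taken from the snippet above:

https://res.cloudinary.com/demo/image/upload/l_text:Unkempt_250_bold:Water/fl_cutter,fl_layer_apply/water_drop.jpg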

Trim an image based on a text overlay definition

Use AI to determine what to remove or keep in an image

You can use the e_extract transformation to specify what to remove or keep in the image using a natural language prompt.

For example, start with this image of a desk with picture frames:

A desk with picture frames


You can extract the picture of the tree (e_extract:prompt_the%20picture%20of%20the%20tree):
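
A sketch of that URL (placeholder cloud name and public ID):

https://res.cloudinary.com/demo/image/upload/e_extract:prompt_the%20picture%20of%20the%20tree/sample.jpg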

The picture of the tree

Everything but the picture of the tree is now considered background, so you can then generate a new background for this picture, let's say an art gallery (e_gen_background_replace:prompt_an%20art%20gallery):

The picture of the tree in an art gallery


Or, you can invert the result of the extract transformation (invert_true), leaving everything but the picture of the tree:

Everything but the picture of the tree


And then generate a new picture in that space (e_gen_background_replace:prompt_a%20sketch%20of%20a%20tree):

Picture replaced with a new sketch of a tree


To use a pre-determined background, you can use the extract effect in a layer. In this example, the multiple parameter is used to extract all the cameras in the image, and overlay them on a colorful background.

Cut out cameras and overlay on a colorful background


And this is the inverted result:

Cut out cameras and overlay on a colorful background


Using the extract effect in mask mode, you can achieve interesting results, for example, blend the mask overlay with the colorful image using e_overlay:

Cut out cameras and overlay on a colorful background with overlay masking


See full syntax: e_extract in the Transformation Reference.

Theme

Use the theme effect to change the color theme of a screen capture, either to match or contrast with your own website. The effect applies an algorithm that intelligently adjusts the color of illustrations, such as backgrounds, designs, texts, and logos, while keeping photographic elements in their original colors. If needed, luma gets reversed (so if the original has dark text on a bright background, and the target background is dark, the text becomes bright).

In the example below, a screen capture with a predominantly light theme is converted to a dark theme by specifying black as the target background color:
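
A sketch of that URL, assuming a hypothetical public ID website_capture:

https://res.cloudinary.com/demo/image/upload/e_theme:color_black/website_capture.jpg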

Screen capture of the Cloudinary website (original, light theme)

Screen capture of the Cloudinary website (dark theme applied)


See full syntax: e_theme in the Transformation Reference.

Tint

The tint effect (e_tint in URLs) enables you to blend your images with one or more colors and specify the blend strength. Advanced users can also equalize the image for increased contrast and specify the positioning of the gradient blend for each color.

  • By default, e_tint applies a red color at 60% blend strength.

  • Specify the colors and blend strength in the format e_tint:[amount]:[color1]:...:[colorN], where amount is a value from 0-100 (0 keeps the original color and 100 blends the specified colors completely). Each color can be specified as an RGB hex triplet (e.g., rgb:3e2222), a 3-digit RGB hex (e.g., rgb:777) or a named color (e.g., green).

  • To equalize the colors in your image before tinting, set equalize to true (false by default).

  • By default, the specified colors are distributed evenly. To adjust the positioning of the gradient blend, specify a position value between 0p-100p after each color. If specifying positioning, you must specify a position value for all colors. Example URLs for each case are shown below.

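Example URL sketches for the cases above (placeholder cloud name and public ID; the option ordering follows the format described above):

https://res.cloudinary.com/demo/image/upload/e_tint:20:red/sample.jpg
https://res.cloudinary.com/demo/image/upload/e_tint:100:red:blue:yellow/sample.jpg
https://res.cloudinary.com/demo/image/upload/e_tint:equalize:80:red:30p:blue:60p:yellow:90p/sample.jpg
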
Original | default red color at 20% strength | red, blue, yellow at 100% strength | equalized, red, blue, yellow tinting at 80% strength, with adjusted gradients

Note: Equalizing colors redistributes the pixels in your image so that they are equally balanced across the entire range of brightness values, which increases the overall contrast in the image. The lightest area is remapped to pure white, and the darkest area is remapped to pure black.

See full syntax: e_tint in the Transformation Reference.

Vectorize

The vectorize effect (e_vectorize in URLs) can be used to convert a raster image to a vector format such as SVG. This can be useful for a variety of use-cases, such as:

  • Converting a logo graphic in PNG format to an SVG, allowing the graphic to scale as required.
  • Creating a low quality image placeholder that resembles the original image but with a reduced number of colors and lower file-size.
  • Vectorizing as an artistic effect.

The vectorize effect can also be controlled with additional parameters to fine tune it to your use-case.

See full syntax : e_vectorize in the Transformation Reference.

Below you can see a variety of potential outputs using these options. The top-left image is the original photo. Following it, you can see the vector graphics, output as JPG, with varying levels of detail, color, despeckling and more. Click each image to open in a new tab and see the full transformation.

Original | vectorized variants with varying levels of detail, color and despeckling

Converting a logo PNG to SVG

If you have a logo or graphic as a raster image such as a PNG that you need to scale up or deliver in a more compact form, you can use the vectorize effect to create an SVG version that matches the original as closely as possible.

The original racoon PNG below is 256 pixels wide and 28 KB.

Original racoon logo png

If you want to display this image at a larger size, it will become blurry and the file size will increase with the resolution, as you can see in the example below, which is twice the size of the original.

Upscaled PNG

To avoid the issues above, it's much better to deliver a vector image for this graphic using the vectorize effect. The example below delivers an SVG at the maximum detail (1.0) with 3 colors (like the original) and an intermediate value of 40 for the corners. This yields an extremely compact, 8 KB file that will provide pixel-perfect scaling to any size.
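
A sketch of that URL, using the racoon public ID from above (placeholder cloud name; the named options follow the e_vectorize syntax in the Transformation Reference):

https://res.cloudinary.com/demo/image/upload/e_vectorize:colors:3:detail:1.0:corners:40/racoon.svg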

Deliver PNG as vectorized SVG

Creating a low quality image placeholder SVG

When delivering high quality photos, it's good web design practice to first deliver Low Quality Image Placeholders (LQIPs) that are very compact in size, and load extremely quickly. Cloudinary supports a large variety of compressions that can potentially be used for generating placeholders. You can read some more about those here.

Using SVGs is a nice way to display a placeholder. As an example, the lion JPEG image below, even with Cloudinary's optimizations applied, still gets delivered at 397 KB.

full resolution lion

Instead, an SVG LQIP can be used while lazy loading the full-sized image.

The placeholder should still represent the subject matter of the original but also be very compact. Confining the SVG to 2 colors and a detail level of 5% produces an easily identifiable image with a file size of just 6 KB.
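
A sketch of such an LQIP URL (placeholder cloud name and public ID):

https://res.cloudinary.com/demo/image/upload/e_vectorize:colors:2:detail:0.05/lion.svg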

LQIP lion

Vectorizing as an artistic effect

Vectorizing is a great way to capture the main shapes and objects composing a photo or drawing and also produces a nice effect. When using the vectorize effect for an artistic transformation, you can deliver the vectorized images in any format, simply by specifying the relevant extension.

For example, the image of a fruit stand below has been vectorized to create a nice artistic effect and subsequently delivered as an optimized jpg file.

fruit stand vectorized

Zoompan

Use the zoompan effect to apply zooming and/or panning to an image, resulting in an animated image.

Note
You need to transform the original image to an animated image type by either changing the extension or using the format parameter.

For example, you could take this image of a hotel and pool:

Hotel and swimming pool

...and create an animated version of it that starts zoomed into the right-hand side, and slowly pans out to the left while zooming out:

Hotel and swimming pool with zoompan effect


Or, you can specify custom co-ordinates for the start and end positions, for example start from a position in the northwest of the USA map (x=300, y=100 pixels), and zoom into North Carolina at (x=950, y=400 pixels).

Map of the USA with zoompan effect


If you want to automate the zoompan effect for any image, you can use automatic gravity (g_auto in URLs) to zoom into or out of the area of the image which Cloudinary determines to be most interesting. In the following example, the man's face is determined to be the most interesting area of the image, so the zoom starts from there when specifying from_(g_auto;zoom_3.4):
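
A sketch of that URL (placeholder cloud name and public ID; the .gif extension converts the result to an animated format, per the note above):

https://res.cloudinary.com/demo/image/upload/e_zoompan:from_(g_auto;zoom_3.4)/sample.gif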

Man playing guitar with zoompan effect


There are many different ways to apply zooming and panning to your images. You can apply different levels of zoom, duration and frame rate and you can even choose objects to pan between.

See full syntax: e_zoompan in the Transformation Reference.
