color_tools.image
Image processing module for color_tools.
This module provides image color analysis and manipulation tools:
- Format Conversion: Convert between PNG, JPEG, WebP, HEIC, AVIF, etc. (conversion.py)
- Watermarking: Add text, image, or SVG watermarks (watermark.py)
- Color Analysis: Extract dominant colors with k-means clustering (analysis.py)
- HueForge 3D printing: Luminance redistribution for multi-color printing (analysis.py)
- CVD Operations: Simulate/correct color vision deficiencies (basic.py)
- Palette Quantization: Convert to retro palettes with dithering (basic.py)
- General Analysis: Count colors, brightness, contrast, noise (basic.py)
Requires Pillow: pip install color-match-tools[image]
Example:
>>> from color_tools.image import (
... convert_image, add_watermark,
... count_unique_colors, analyze_brightness,
... simulate_cvd_image, quantize_image_to_palette
... )
>>>
>>> # Convert image formats (auto-generates output filename)
>>> convert_image("photo.webp", output_format="png") # Creates photo.png
>>> convert_image("photo.jpg", output_format="webp", lossless=True)
PosixPath('photo.webp')
>>>
>>> # Add watermark
>>> add_text_watermark(
... "photo.jpg",
... text="© 2025 MyBrand",
... position="bottom-right",
... output_path="watermarked.jpg"
... )
PosixPath('watermarked.jpg')
>>>
>>> # Count colors in an image
>>> total = count_unique_colors("photo.jpg")
>>> print(f"Found {total} unique colors")
Found 42387 unique colors
>>>
>>> # Analyze image quality
>>> brightness = analyze_brightness("photo.jpg")
>>> print(f"Brightness: {brightness['mean_brightness']:.1f} ({brightness['assessment']})")
Brightness: 127.3 (normal)
>>>
>>> # Test accessibility with CVD simulation
>>> sim_image = simulate_cvd_image("chart.png", "deuteranopia")
>>> sim_image.save("chart_colorblind_view.png")
>>>
>>> # Create retro-style artwork
>>> retro = quantize_image_to_palette("photo.jpg", "cga4", dither=True)
>>> retro.save("retro_cga.png")
>>>
>>> # Extract dominant colors for Hueforge
>>> from color_tools.image import extract_color_clusters, redistribute_luminance
>>>
>>> # Extract 10 dominant colors from image
>>> clusters = extract_color_clusters("photo.jpg", n_colors=10)
>>>
>>> # Redistribute luminance for Hueforge
>>> colors = [c.centroid_rgb for c in clusters]
>>> changes = redistribute_luminance(colors)
>>>
>>> # Show layer assignments
>>> for change in changes:
... print(f"Layer {change.hueforge_layer}: RGB{change.new_rgb}")
- class color_tools.image.ColorCluster(centroid_rgb, centroid_lab, pixel_indices, pixel_count)[source]
Bases: object
A cluster of similar colors from k-means clustering in LAB color space.
Represents a group of perceptually similar pixels extracted from an image. The centroid is the representative color for the cluster, and pixel assignments enable remapping the original image to use only the dominant colors.
- Variables:
centroid_rgb – Representative RGB color for this cluster (0-255 each)
centroid_lab – Representative color in CIE LAB space (L: 0-100, a/b: ~-128 to +127)
pixel_indices – List of pixel indices (flat array positions) belonging to this cluster
pixel_count – Number of pixels in this cluster (dominance weight)
Example
>>> from color_tools.image import extract_color_clusters
>>> clusters = extract_color_clusters("photo.jpg", n_colors=5)
>>> for i, cluster in enumerate(clusters, 1):
...     print(f"Color {i}: RGB{cluster.centroid_rgb} ({cluster.pixel_count} pixels)")
Color 1: RGB(45, 52, 71) (15234 pixels)
Color 2: RGB(189, 147, 128) (8921 pixels)
- class color_tools.image.ColorChange(original_rgb, original_lch, new_rgb, new_lch, delta_e, hueforge_layer)[source]
Bases: object
Represents a color transformation from luminance redistribution for HueForge optimization.
Tracks the before/after state when redistributing luminance values evenly across a set of colors. This is used for HueForge 3D printing to spread colors across the 27 available layers and prevent multiple colors from bunching on the same layer.
- Variables:
original_rgb – Original RGB color before redistribution (0-255 each)
original_lch – Original LCH color (L: 0-100, C: 0-100+, H: 0-360°)
new_rgb – New RGB color after luminance redistribution (0-255 each)
new_lch – New LCH color with redistributed L value (L: 0-100, C: 0-100+, H: 0-360°)
delta_e – Perceptual color difference (Delta E 2000) between original and new
hueforge_layer – Target HueForge layer number (1-27) based on new L value
Example
>>> from color_tools.image import extract_color_clusters, redistribute_luminance
>>> clusters = extract_color_clusters("image.jpg", n_colors=10)
>>> colors = [c.centroid_rgb for c in clusters]
>>> changes = redistribute_luminance(colors)
>>> for change in changes:
...     print(f"Layer {change.hueforge_layer}: RGB{change.new_rgb} (ΔE: {change.delta_e:.1f})")
Layer 3: RGB(45, 52, 71) (ΔE: 12.3)
Layer 7: RGB(89, 95, 102) (ΔE: 18.7)
- color_tools.image.extract_unique_colors(image_path, n_colors=10)[source]
Extract unique colors from an image using k-means clustering.
This is a simplified wrapper around extract_color_clusters that just returns the centroid RGB values for backward compatibility.
- Parameters:
- Return type:
- Returns:
List of RGB tuples (0-255 for each component)
- Raises:
ImportError – If Pillow is not installed
FileNotFoundError – If image file doesn’t exist
Example
>>> colors = extract_unique_colors("photo.jpg", n_colors=8)
>>> print(colors)
[(255, 0, 0), (0, 128, 255), ...]
- color_tools.image.extract_color_clusters(image_path, n_colors=10, use_lab_distance=True, *, distance_metric='lab', l_weight=1.0, use_l_median=False, n_iter=10)[source]
Extract color clusters from an image using k-means clustering.
This uses k-means in LAB color space for perceptually uniform clustering. Returns full cluster data including pixel assignments for later remapping.
- Parameters:
image_path (str) – Path to the image file.
n_colors (int) – Number of clusters to extract (default: 10).
use_lab_distance (bool) – Deprecated — use distance_metric instead. When True (default) and distance_metric is not set, uses LAB Euclidean distance. When False, uses raw RGB distance.
distance_metric (str) – Distance metric for cluster assignment:
  - "lab" — squared Euclidean in CIE LAB space (default)
  - "rgb" — squared Euclidean in sRGB space
  - "hyab" — HyAB (hybrid L + chromatic) in LAB space; recommended with l_weight=2.0 for image quantization
l_weight (float) – Lightness weight for HyAB distance (default: 1.0). A value of 2.0 (the quantize_image_hyab default) emphasises lightness differences, yielding better separation of shades. Ignored unless distance_metric="hyab".
use_l_median (bool) – When True, use the median of the L channel (and mean of a/b) when updating centroids. This makes dark and light perceptual groups more stable. Ignored when distance_metric="rgb".
n_iter (int) – Number of k-means iterations (default: 10).
- Return type:
- Returns:
List of ColorCluster objects with centroids and pixel assignments, sorted by pixel_count descending.
- Raises:
ImportError – If Pillow is not installed.
FileNotFoundError – If the image file doesn’t exist.
ValueError – If distance_metric is not one of "lab", "rgb", or "hyab".
Example:
>>> clusters = extract_color_clusters("photo.jpg", n_colors=8)
>>> for cluster in clusters:
...     print(f"Color: {cluster.centroid_rgb}, Pixels: {cluster.pixel_count}")
Color: (255, 0, 0), Pixels: 1523
Color: (0, 128, 255), Pixels: 892
HyAB k-means example:
>>> clusters = extract_color_clusters(
...     "photo.jpg",
...     n_colors=16,
...     distance_metric="hyab",
...     l_weight=2.0,
...     use_l_median=True,
... )
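The HyAB metric used above combines a weighted lightness difference with a Euclidean chromatic distance in the a/b plane. A minimal sketch of the distance itself (illustrative only; `hyab_distance` is a hypothetical helper, and the library's internals may use squared or vectorised variants):

```python
import math

def hyab_distance(lab1, lab2, l_weight=1.0):
    # HyAB (Abasi et al., 2020): weighted |ΔL| plus Euclidean
    # distance in the a/b (chromatic) plane.
    dl = abs(lab1[0] - lab2[0])
    dab = math.hypot(lab1[1] - lab2[1], lab1[2] - lab2[2])
    return l_weight * dl + dab

# Two greys differing only in lightness: distance is l_weight * |ΔL|
print(hyab_distance((30.0, 0.0, 0.0), (50.0, 0.0, 0.0), l_weight=2.0))  # 40.0
```

Raising l_weight makes shades of the same hue look farther apart to the clusterer, which is why l_weight=2.0 separates light and dark tones better.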
- color_tools.image.quantize_image_hyab(image_path, n_colors=16, *, n_iter=10, l_weight=2.0, use_l_median=True)[source]
Quantize an image to n_colors using HyAB k-means clustering.
HyAB uses hybrid L + chromatic distance in CIE LAB space, which often produces better separation of light and dark tones than pure Euclidean LAB distance. The default l_weight=2.0 is the value recommended by Abasi et al. (2020) for image quantization tasks.
Steps:
1. Run k-means with HyAB distance to find cluster centroids.
2. Map every pixel to its nearest centroid colour.
3. Return the quantized image as a PIL.Image.Image.
- Parameters:
image_path (str) – Path to the input image file.
n_colors (int) – Palette size — number of distinct colours in the output (default: 16).
n_iter (int) – Number of k-means iterations (default: 10).
l_weight (float) – Lightness weight for HyAB distance (default: 2.0). Higher values emphasise lightness differences.
use_l_median (bool) – Use the median (not mean) of the L channel when updating centroids (default: True). Improves stability of dark/light clusters.
- Return type:
- Returns:
Quantized PIL.Image.Image in RGB mode.
- Raises:
ImportError – If Pillow is not installed.
FileNotFoundError – If the image file doesn’t exist.
Example:
>>> img = quantize_image_hyab("photo.jpg", n_colors=8)
>>> img.save("quantized.png")
See also
extract_color_clusters() — lower-level function that returns cluster data instead of a rendered image.
- color_tools.image.redistribute_luminance(colors)[source]
Redistribute LCH lightness values evenly across a list of colors.
This function:
1. Converts colors to LCH space
2. Sorts by LCH L (lightness) value
3. Redistributes L values evenly between 0 and 100
4. Converts back to RGB
5. Calculates Delta E for each change
- Parameters:
colors (List[Tuple[int, int, int]]) – List of RGB tuples to redistribute
- Return type:
- Returns:
List of ColorChange objects showing before/after for each color
Example
>>> colors = [(100, 50, 30), (200, 180, 160), (50, 50, 50)]
>>> changes = redistribute_luminance(colors)
>>> for change in changes:
...     print(f"L: {change.original_lch[0]:.1f} -> {change.new_lch[0]:.1f}, ΔE={change.delta_e:.2f}")
L: 24.3 -> 0.0, ΔE=12.45
L: 53.2 -> 50.0, ΔE=3.21
L: 76.8 -> 100.0, ΔE=23.14
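The even-redistribution step (step 3 of the algorithm) amounts to ranking the colors by lightness and spacing their L values uniformly across 0–100. A sketch on bare L values (`redistribute_l_values` is a hypothetical helper, not the library's implementation):

```python
def redistribute_l_values(l_values):
    # Rank each value by lightness, then assign L evenly across 0..100
    # by rank; ties and a single-element list are handled trivially.
    n = len(l_values)
    order = sorted(range(n), key=lambda i: l_values[i])
    new_l = [0.0] * n
    for rank, i in enumerate(order):
        new_l[i] = 100.0 * rank / (n - 1) if n > 1 else 50.0
    return new_l

print(redistribute_l_values([24.3, 76.8, 53.2]))  # [0.0, 100.0, 50.0]
```

This matches the doctest above: the darkest color lands on L=0, the lightest on L=100, and the middle one on L=50.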
- color_tools.image.format_color_change_report(changes)[source]
Format a human-readable report of color changes.
- Parameters:
changes (List[ColorChange]) – List of ColorChange objects
- Return type:
- Returns:
Formatted string showing before/after for each color
Example
>>> changes = redistribute_luminance([(100, 50, 30), (200, 180, 160)])
>>> print(format_color_change_report(changes))
Color Luminance Redistribution Report
=====================================
RGB(100, 50, 30) → RGB(98, 48, 28)
  L: 24.3 → 33.3 | C: 28.5 → 28.5 | H: 31.2 → 31.2
  ΔE (CIEDE2000): 9.12
…
- color_tools.image.l_value_to_hueforge_layer(l_value, total_layers=27)[source]
Convert an LCH L value (0-100) to a Hueforge layer number.
- Parameters:
- Return type:
- Returns:
Layer number (1-based, from 1 to total_layers)
Example
>>> l_value_to_hueforge_layer(0.0)  # Darkest
1
>>> l_value_to_hueforge_layer(33.3)  # 1/3 up
10
>>> l_value_to_hueforge_layer(100.0)  # Brightest
27
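The mapping is a linear rescale of L in [0, 100] onto the 1-based layer range. A sketch of one plausible implementation consistent with the doctest above (`l_to_layer` is hypothetical; the library's rounding may differ):

```python
def l_to_layer(l_value, total_layers=27):
    # Scale L onto 0..(total_layers - 1), round to the nearest
    # layer index, then shift to the 1-based layer number.
    return round(l_value / 100.0 * (total_layers - 1)) + 1

print(l_to_layer(0.0), l_to_layer(33.3), l_to_layer(100.0))  # 1 10 27
```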
- color_tools.image.count_unique_colors(image_path)[source]
Count the total number of unique RGB colors in an image.
Uses numpy for efficient counting of unique color combinations. The image is converted to RGB mode before counting (alpha channel ignored).
- Parameters:
- Return type:
- Returns:
Number of unique RGB colors (integer)
- Raises:
ImportError – If Pillow or numpy is not installed
FileNotFoundError – If image file doesn’t exist
IOError – If image file cannot be opened
Example
>>> count_unique_colors("photo.jpg")
42387
>>> # Indexed images (GIF, PNG with palette): count often equals the palette size
>>> count_unique_colors("icon.gif")
256
>>> # Solid color image
>>> count_unique_colors("red_square.png")
1
Note
For indexed color images (mode ‘P’), this counts unique colors in the converted RGB image, not the palette size. Use is_indexed_mode() to check if an image uses a palette.
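The numpy counting technique the docstring describes amounts to treating each pixel as a row and counting unique rows. A sketch on a hand-made pixel array rather than a real image (illustrative only):

```python
import numpy as np

def count_unique_rgb(pixels):
    # Flatten to an (N, 3) array of RGB rows and count unique rows.
    return len(np.unique(pixels.reshape(-1, 3), axis=0))

pixels = np.array([[255, 0, 0], [255, 0, 0], [0, 128, 255]], dtype=np.uint8)
print(count_unique_rgb(pixels))  # 2
```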
- color_tools.image.get_color_histogram(image_path)[source]
Get histogram mapping RGB colors to their pixel counts.
Returns a dictionary where keys are RGB tuples and values are the number of pixels with that color. Uses numpy for efficient histogram calculation.
- Parameters:
- Return type:
- Returns:
Dictionary mapping (R, G, B) tuples to pixel counts
- Raises:
ImportError – If Pillow or numpy is not installed
FileNotFoundError – If image file doesn’t exist
IOError – If image file cannot be opened
Example
>>> histogram = get_color_histogram("photo.jpg")
>>> histogram[(255, 0, 0)]  # Count of pure red pixels
1523
>>> # Find most common color
>>> most_common = max(histogram.items(), key=lambda x: x[1])
>>> print(f"Color: {most_common[0]}, Count: {most_common[1]}")
Color: (240, 235, 230), Count: 15042
>>> # Get all colors sorted by frequency
>>> sorted_colors = sorted(histogram.items(), key=lambda x: x[1], reverse=True)
>>> for color, count in sorted_colors[:5]:
...     print(f"RGB{color}: {count} pixels")
RGB(240, 235, 230): 15042 pixels
RGB(235, 230, 225): 12834 pixels
...
Note
For images with many colors, the histogram can be large. Consider using count_unique_colors() if you only need the count.
- color_tools.image.get_dominant_color(image_path)[source]
Get the most common (dominant) color in an image.
Returns the single RGB color that appears most frequently in the image. This is equivalent to finding the mode of the color distribution.
- Parameters:
- Return type:
- Returns:
RGB tuple (R, G, B) of the most common color
- Raises:
ImportError – If Pillow or numpy is not installed
FileNotFoundError – If image file doesn’t exist
IOError – If image file cannot be opened
Example
>>> dominant = get_dominant_color("photo.jpg")
>>> print(f"Dominant color: RGB{dominant}")
Dominant color: RGB(240, 235, 230)
>>> # Use with nearest color matching
>>> from color_tools import Palette
>>> palette = Palette.load_default()
>>> color_record, distance = palette.nearest_color(dominant)
>>> print(f"Closest CSS color: {color_record.name}")
Closest CSS color: seashell
Note
For images with many unique colors, this uses the histogram approach which may be memory-intensive. For very large images, consider downsampling first using Pillow’s thumbnail() method.
- color_tools.image.is_indexed_mode(image_path)[source]
Check if an image uses indexed color mode (palette-based).
Indexed color images (mode ‘P’) store pixel values as indices into a color palette, rather than direct RGB values. This is common for:
- GIF images (max 256 colors)
- PNG images with palettes
- Some BMP images
- Parameters:
- Return type:
- Returns:
True if image is in indexed mode (‘P’), False otherwise
- Raises:
ImportError – If Pillow is not installed
FileNotFoundError – If image file doesn’t exist
IOError – If image file cannot be opened
Example
>>> is_indexed_mode("photo.jpg")
False
>>> is_indexed_mode("icon.gif")
True
>>> is_indexed_mode("logo.png")  # Depends on PNG type
True
Note
PIL/Pillow mode codes:
- ‘P’: Palette-based (indexed color)
- ‘RGB’: Direct RGB color
- ‘RGBA’: RGB with alpha channel
- ‘L’: Grayscale
- ‘1’: Binary (black and white)
- color_tools.image.analyze_brightness(image_path)[source]
Analyze image brightness characteristics.
Calculates the mean brightness of the image in grayscale and provides an assessment based on standard thresholds.
- Parameters:
- Returns:
Dictionary with:
‘mean_brightness’: Mean brightness value (0-255 scale)
‘assessment’: Human-readable assessment (‘dark’ | ‘normal’ | ‘bright’)
- Return type:
- Raises:
ImportError – If Pillow or numpy is not installed
FileNotFoundError – If image file doesn’t exist
IOError – If image file cannot be opened
Example
>>> result = analyze_brightness("photo.jpg")
>>> print(f"Brightness: {result['mean_brightness']:.1f} ({result['assessment']})")
Brightness: 127.3 (normal)
>>> # Dark image
>>> result = analyze_brightness("dark_photo.jpg")
>>> print(result)
{'mean_brightness': 45.2, 'assessment': 'dark'}
Note
Brightness thresholds:
- Dark: mean < THRESHOLD_DARK_IMAGE (60)
- Bright: mean > THRESHOLD_BRIGHT_IMAGE (195)
- Normal: THRESHOLD_DARK_IMAGE ≤ mean ≤ THRESHOLD_BRIGHT_IMAGE
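The thresholds above translate directly into a three-way classification. A sketch, with the constant values taken from the note rather than imported from the library:

```python
THRESHOLD_DARK_IMAGE = 60     # values quoted in the note above
THRESHOLD_BRIGHT_IMAGE = 195

def assess_brightness(mean_brightness):
    # Classify mean grayscale brightness against the documented cutoffs.
    if mean_brightness < THRESHOLD_DARK_IMAGE:
        return "dark"
    if mean_brightness > THRESHOLD_BRIGHT_IMAGE:
        return "bright"
    return "normal"

print(assess_brightness(45.2), assess_brightness(127.3), assess_brightness(210.0))
# dark normal bright
```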
- color_tools.image.analyze_contrast(image_path)[source]
Analyze image contrast using standard deviation of pixel values.
Higher standard deviation indicates more contrast (wider range of brightness values). Lower standard deviation indicates less contrast (more uniform brightness).
- Parameters:
- Returns:
Dictionary with:
‘contrast_std’: Standard deviation of brightness values
‘assessment’: Human-readable assessment (‘low’ | ‘normal’)
- Return type:
- Raises:
ImportError – If Pillow or numpy is not installed
FileNotFoundError – If image file doesn’t exist
IOError – If image file cannot be opened
Example
>>> result = analyze_contrast("photo.jpg")
>>> print(f"Contrast: {result['contrast_std']:.1f} ({result['assessment']})")
Contrast: 62.4 (normal)
>>> # Low contrast image
>>> result = analyze_contrast("flat_image.jpg")
>>> print(result)
{'contrast_std': 25.3, 'assessment': 'low'}
Note
Contrast threshold:
- Low contrast: std < THRESHOLD_LOW_CONTRAST (40)
- Normal contrast: std ≥ THRESHOLD_LOW_CONTRAST
- color_tools.image.analyze_noise_level(image_path, crop_size=512, noise_threshold=2.0)[source]
Estimate noise level using scikit-image restoration.estimate_sigma().
Analyzes a center crop of the image to estimate noise sigma. This method is effective for detecting sensor noise, compression artifacts, and other forms of image degradation.
- Parameters:
- Returns:
Dictionary with:
‘noise_sigma’: Estimated noise standard deviation
‘assessment’: Human-readable assessment (‘clean’ | ‘noisy’)
- Return type:
- Raises:
ImportError – If Pillow, numpy, or scikit-image is not installed
FileNotFoundError – If image file doesn’t exist
IOError – If image file cannot be opened
Example
>>> result = analyze_noise_level("photo.jpg")
>>> print(f"Noise: {result['noise_sigma']:.2f} ({result['assessment']})")
Noise: 1.23 (clean)
>>> # Noisy image
>>> result = analyze_noise_level("noisy_photo.jpg")
>>> print(result)
{'noise_sigma': 3.45, 'assessment': 'noisy'}
Note
- Uses center crop to avoid edge effects
- Estimates noise in RGB channels and averages
- Noise threshold: sigma > THRESHOLD_NOISE_SIGMA (2.0) = noisy
- Fallback: Returns 0.0 if estimation fails
- color_tools.image.analyze_dynamic_range(image_path)[source]
Analyze dynamic range and tonal distribution of an image.
Examines the full range of brightness values used and provides suggestions for gamma correction based on the tonal distribution.
- Parameters:
- Returns:
Dictionary with:
‘min_value’: Minimum brightness value (0-255)
‘max_value’: Maximum brightness value (0-255)
‘range’: Dynamic range (max - min)
‘mean_brightness’: Mean brightness for gamma assessment
‘range_assessment’: Assessment of dynamic range usage (‘full’ | ‘limited’)
‘gamma_suggestion’: Suggested gamma adjustment for tonal balance
- Return type:
- Raises:
ImportError – If Pillow or numpy is not installed
FileNotFoundError – If image file doesn’t exist
IOError – If image file cannot be opened
Example
>>> result = analyze_dynamic_range("photo.jpg")
>>> print(f"Range: {result['range']} ({result['range_assessment']})")
Range: 248 (full)
>>> print(f"Gamma suggestion: {result['gamma_suggestion']}")
Gamma suggestion: Normal (mean balanced)
>>> # Limited range image
>>> result = analyze_dynamic_range("flat_image.jpg")
>>> print(result)
{'min_value': 45, 'max_value': 198, 'range': 153, 'mean_brightness': 89.2, 'range_assessment': 'limited', 'gamma_suggestion': 'Decrease (<1.0) to boost midtones'}
Note
- Full range threshold: range ≥ THRESHOLD_FULL_DYNAMIC_RANGE (216, 85% of the 0-255 spectrum)
- Gamma suggestions based on mean brightness:
  - Mean < GAMMA_DARK_THRESHOLD (100): Decrease gamma to boost midtones
  - Mean > GAMMA_BRIGHT_THRESHOLD (200): Increase gamma to suppress midtones
  - GAMMA_DARK_THRESHOLD ≤ mean ≤ GAMMA_BRIGHT_THRESHOLD: Normal/balanced
- color_tools.image.transform_image(image_path, transform_func, preserve_alpha=True, output_path=None)[source]
Apply a color transformation function to every pixel of an image.
This is the core function that handles image loading, pixel iteration, transformation application, and optional saving. It’s used by both CVD simulation and palette quantization functions.
- Parameters:
- Return type:
- Returns:
PIL Image with transformed colors
- Raises:
ImportError – If Pillow is not installed
FileNotFoundError – If input image doesn’t exist
ValueError – If image format is unsupported
Example
>>> # Define a transformation (invert colors)
>>> def invert_rgb(rgb):
...     r, g, b = rgb
...     return (255 - r, 255 - g, 255 - b)
>>>
>>> # Apply to image
>>> transformed = transform_image("photo.jpg", invert_rgb)
>>> transformed.save("inverted.jpg")
- color_tools.image.simulate_cvd_image(image_path, deficiency_type, output_path=None)[source]
Simulate color vision deficiency for an entire image.
This shows how an image would appear to someone with a specific type of color blindness. Useful for testing image accessibility.
- Parameters:
- Return type:
- Returns:
PIL Image showing CVD simulation
Example
>>> # See how image appears to someone with deuteranopia
>>> sim_image = simulate_cvd_image("colorful.jpg", "deuteranopia")
>>> sim_image.save("deuteranopia_sim.jpg")
>>>
>>> # Test accessibility of an infographic
>>> simulate_cvd_image("chart.png", "protanopia", "chart_protan.png")
- color_tools.image.correct_cvd_image(image_path, deficiency_type, output_path=None)[source]
Apply color vision deficiency correction to an entire image.
This shifts colors to improve discriminability for individuals with color blindness. The corrected image should be viewed by people with the specified deficiency type.
- Parameters:
- Return type:
- Returns:
PIL Image with CVD correction applied
Example
>>> # Enhance image for deuteranopia viewers
>>> corrected = correct_cvd_image("chart.jpg", "deuteranopia")
>>> corrected.save("chart_deutan_enhanced.jpg")
- color_tools.image.quantize_image_to_palette(image_path, palette_name, metric='de2000', dither=False, output_path=None)[source]
Convert an image to use only colors from a specified palette.
This maps each pixel to the nearest color in the target palette using perceptually-accurate color distance metrics. Perfect for creating retro-style graphics or testing designs with limited color sets.
- Parameters:
palette_name (str) – Name of palette to use:
  - Built-in palettes: ‘cga4’, ‘ega16’, ‘ega64’, ‘vga’, ‘web’, ‘gameboy’
  - User palettes: ‘user-mycustom’ (files in data/user/palettes/ must have the ‘user-’ prefix)
  - User palettes do not override built-in palettes (separate namespaces)
metric (str) – Color distance metric for matching:
  - ‘de2000’: CIEDE2000 (most perceptually accurate)
  - ‘de94’: CIE94 (good balance)
  - ‘de76’: CIE76 (simple LAB distance)
  - ‘cmc’: CMC l:c (textile industry standard)
  - ‘euclidean’: Simple RGB distance (fastest)
  - ‘hsl_euclidean’: HSL distance with hue wraparound
dither (bool) – Apply Floyd-Steinberg dithering to reduce banding
output_path (Path | str | None) – Optional path to save quantized image
- Return type:
- Returns:
PIL Image using only palette colors
Example
>>> # Convert photo to CGA 4-color palette
>>> retro = quantize_image_to_palette("photo.jpg", "cga4")
>>> retro.save("retro_cga.png")
>>>
>>> # Create EGA-style artwork with dithering
>>> quantize_image_to_palette(
...     "artwork.png",
...     "ega16",
...     metric="de2000",
...     dither=True,
...     output_path="ega_dithered.png"
... )
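Floyd-Steinberg dithering (the dither=True option) diffuses each pixel's quantization error onto its not-yet-visited neighbours so that banding averages out. A simplified single-channel sketch against a two-level palette; the real implementation works per RGB channel against the full palette and chosen metric:

```python
def floyd_steinberg_gray(pixels, levels=(0, 255)):
    # Quantize a 2-D grid of grayscale floats to the nearest entry of
    # `levels`, pushing the error to right/below neighbours in the
    # classic 7/16, 3/16, 5/16, 1/16 pattern.
    h, w = len(pixels), len(pixels[0])
    out = [row[:] for row in pixels]
    for y in range(h):
        for x in range(w):
            old = out[y][x]
            new = min(levels, key=lambda v: abs(v - old))
            out[y][x] = new
            err = old - new
            if x + 1 < w:
                out[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1][x - 1] += err * 3 / 16
                out[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1][x + 1] += err * 1 / 16
    return out

# A flat mid-gray block becomes a checkered mix of the two levels
# whose average stays close to the original value.
dithered = floyd_steinberg_gray([[128.0] * 4 for _ in range(4)])
flat = [v for row in dithered for v in row]
print(sorted(set(flat)))  # [0, 255]
```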
- color_tools.image.add_text_watermark(image, text, position='bottom-right', font_name=None, font_file=None, font_size=24, color=(255, 255, 255), opacity=0.8, stroke_color=None, stroke_width=0, margin=10)[source]
Add a text watermark to an image.
- Parameters:
image (Image) – PIL Image to watermark
text (str) – Text to display
position (Union[Literal['top-left', 'top-center', 'top-right', 'center-left', 'center', 'center-right', 'bottom-left', 'bottom-center', 'bottom-right'], tuple[int, int]]) – Position preset or (x, y) coordinates
font_file (str | None) – Custom font file (path or filename in fonts/)
font_size (int) – Font size in points
opacity (float) – Opacity from 0.0 (transparent) to 1.0 (opaque)
stroke_color (tuple[int, int, int] | None) – Outline color as (R, G, B), or None for no stroke
stroke_width (int) – Outline width in pixels (0 for no stroke)
margin (int) – Margin from edges for preset positions
- Return type:
- Returns:
New image with watermark applied
Example
>>> img = Image.open("photo.jpg")
>>> result = add_text_watermark(
...     img,
...     text="© 2025",
...     position="bottom-right",
...     font_file="Roboto-Bold.ttf",
...     font_size=32,
...     color=(255, 255, 255),
...     stroke_color=(0, 0, 0),
...     stroke_width=2,
...     opacity=0.7
... )
- color_tools.image.add_image_watermark(image, watermark_path, position='bottom-right', scale=1.0, opacity=0.8, margin=10)[source]
Add an image watermark (e.g., logo PNG) to an image.
- Parameters:
image (Image) – PIL Image to watermark
watermark_path (str | Path) – Path to watermark image file (PNG recommended)
position (Union[Literal['top-left', 'top-center', 'top-right', 'center-left', 'center', 'center-right', 'bottom-left', 'bottom-center', 'bottom-right'], tuple[int, int]]) – Position preset or (x, y) coordinates
scale (float) – Scale factor for watermark (1.0 = original size)
opacity (float) – Opacity from 0.0 (transparent) to 1.0 (opaque)
margin (int) – Margin from edges for preset positions
- Return type:
- Returns:
New image with watermark applied
Example
>>> img = Image.open("photo.jpg")
>>> result = add_image_watermark(
...     img,
...     watermark_path="logo.png",
...     position="top-left",
...     scale=0.2,
...     opacity=0.7
... )
- color_tools.image.add_svg_watermark(image, svg_path, position='bottom-right', scale=1.0, opacity=0.8, margin=10, width=None, height=None)[source]
Add an SVG watermark (e.g., vector logo) to an image.
- Requires cairosvg to be installed:
pip install color-match-tools[image]
- Parameters:
image (Image) – PIL Image to watermark
position (Union[Literal['top-left', 'top-center', 'top-right', 'center-left', 'center', 'center-right', 'bottom-left', 'bottom-center', 'bottom-right'], tuple[int, int]]) – Position preset or (x, y) coordinates
scale (float) – Scale factor for watermark (1.0 = original size)
opacity (float) – Opacity from 0.0 (transparent) to 1.0 (opaque)
margin (int) – Margin from edges for preset positions
width (int | None) – Explicit width in pixels (overrides scale)
height (int | None) – Explicit height in pixels (overrides scale)
- Return type:
- Returns:
New image with watermark applied
- Raises:
ImportError – If cairosvg is not installed
Example
>>> img = Image.open("photo.jpg")
>>> result = add_svg_watermark(
...     img,
...     svg_path="logo.svg",
...     position="top-right",
...     width=200,
...     opacity=0.6
... )
- color_tools.image.convert_image(input_path, output_path=None, output_format=None, quality=None, lossless=None)[source]
Convert an image from one format to another with sensible quality defaults.
- Parameters:
output_path (str | Path | None) – Path for output file. If None, auto-generates from input_path using the output_format extension
output_format (str | None) – Output format (png, jpg, webp, etc.). Case-insensitive. If None, defaults to PNG. If output_path is provided with an extension, infers from that.
quality (int | None) – JPEG/WebP quality (1-100). If None, uses format-specific defaults:
  - JPEG: 67 (Photoshop quality 8/12 equivalent)
  - WebP: lossless by default (no quality needed)
  - AVIF: 80 for lossy compression
lossless (bool | None) – Force lossless compression for formats that support it (WebP, AVIF). If None, WebP uses lossless by default.
- Return type:
- Returns:
Path object pointing to the created output file
- Raises:
FileNotFoundError – If input file doesn’t exist
ValueError – If output format is not supported
ImportError – If pillow-heif not installed for HEIC files
Examples
>>> # WebP to PNG (lossless)
>>> convert_image("photo.webp")  # Creates photo.png
>>> # JPEG to WebP (lossless)
>>> convert_image("photo.jpg", output_format="webp")  # Creates photo.webp
>>> # Custom output path
>>> convert_image("input.webp", "output.png")
>>> # JPEG with custom quality
>>> convert_image("photo.png", output_format="jpg", quality=85)
>>> # WebP with lossy compression
>>> convert_image("photo.png", output_format="webp", lossless=False, quality=80)
- color_tools.image.get_supported_formats()[source]
Get lists of supported input and output formats.
- Return type:
- Returns:
Dictionary with ‘input’ and ‘output’ keys containing lists of format strings
Examples
>>> formats = get_supported_formats()
>>> print("Input formats:", formats['input'])
>>> print("Output formats:", formats['output'])
- color_tools.image.blend_images(base_path, blend_path, mode='normal', opacity=1.0, output_path=None)[source]
Blend two images using a Photoshop-compatible blend mode.
Both images are converted to RGBA before blending. The blend mode is applied only to the RGB channels; alpha is composited separately using standard src-over with opacity. The blend layer is resized to match the base image if their sizes differ.
- Parameters:
base_path (str | Path) – Path to the base (background) image.
blend_path (str | Path) – Path to the blend (top) layer image.
mode (str) – Blend mode name. See BLEND_MODES for all options.
opacity (float) – Blend layer opacity in [0.0, 1.0]. Default 1.0.
output_path (str | Path | None) – If provided, the result is saved to this path.
- Return type:
- Returns:
Composited PIL Image in RGBA mode.
- Raises:
ValueError – If mode is not in BLEND_MODES or opacity is out of range.
ImportError – If Pillow or numpy are not installed.
Example
>>> result = blend_images("base.png", "layer.png", mode="multiply", opacity=0.8)
>>> result.save("output.png")
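The per-pixel math behind one of the modes, "multiply", combined with the opacity mix described above, can be sketched on float RGB arrays in [0, 1] (the real function works on whole RGBA uint8 images and composites alpha separately):

```python
import numpy as np

def multiply_blend(base, blend, opacity=1.0):
    # Photoshop-style multiply on normalized RGB, then mix the blended
    # result back into the base by the layer opacity.
    blended = base * blend
    return base + opacity * (blended - base)

base = np.array([[0.5, 0.5, 0.5]])
layer = np.array([[0.8, 0.8, 0.8]])
result = multiply_blend(base, layer, opacity=0.8)
print(result)
```

At opacity 1.0 this reduces to plain multiplication; at 0.0 the base is returned unchanged.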