7 colour electronic paper

Sunday 5 March 2023


For Christmas I received a 5.65" seven-colour e-paper display, which is awesome. The catch, as with everything Raspberry Pi or Arduino, is that beyond the gloss of the advert lies something far from a flexible plug-and-play system. I enjoyed my voyage, but it was rather odd, even if typical of a Raspberry Pi project.

I was given a 5.65" Waveshare e-ink/e-paper display. It's a nice size, although I am not sure why it's point 65: weird things in freedom units, like UNC screws, come in ratios like 2/3 (which is .67), yet no ratio seems to match, and converted into metric it is 143.51 mm. But that is the least of the issues. The problems come from the data-processing side.
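(A quick check of the arithmetic, not that it reveals a neat ratio:)

print(5.65 * 25.4)  # 143.51 -- no tidy metric figure either
print(2 / 3)        # 0.666... -- the nearest 'nice' imperial fraction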

Module

Waveshare provides a Python repo with demo scripts, waveshare/e-Paper, as does Pimoroni (inky). I have the former, as after all it is cheaper, being from China and not Sheffield unfortunately, so I cannot comment much on the latter, which is written far better but seems to have the same issues. Neither module is pip-installable; given how CircuitPython modules are moved to the lib folder on a Pi Nano, one would expect that is what is going on, but no.

Plan A: Pillow vs. Pi Nano

I put this in a picture frame with a Pi Nano on its back: I remove it from the wall, connect the Pi Nano to my laptop via a microUSB cable and change the image.
The Python script from Waveshare requires Pillow. The Pillow requirement runs deep throughout the code and is not a mockable, modular affair. This module is a no-go when it comes to the Pi Nano. For that matter, numpy on a Pi Nano is also a red flag; therefore, the code has to be run on a system that can run Pillow. Pillow works on a Pi Zero, so hello plan B.
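For the curious, the usual trick with an uninstallable dependency is to stub it out before import, roughly as sketched below; it gets one past the import line but no further, since the driver manipulates real Image objects:

import sys
from unittest import mock

# stub Pillow before the driver is imported (a sketch of the idea, not a fix)
sys.modules['PIL'] = mock.MagicMock()
sys.modules['PIL.Image'] = mock.MagicMock()
# any subsequent call expecting a real Image will of course misbehave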

Plan B: Floyd-Steinberg dithering algorithm vs. Pi Zero

I put this in a picture frame with a Pi Zero W on its back: I remove it from the wall and connect it to a power source, then go to my laptop and upload the image I want for full processing.
Pillow and Numpy can be installed on a Pi Zero. There is an annoyance in that one of the modules is released as a universal Arm-architecture wheel compiled for ARMv7, yet the Pi Zero requires ARMv6-compiled code, resulting in a segmentation-fault-like error on import. ARCHFLAGS='-arch arm6' python3 -m pip install pillow avoids this.
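(To check which architecture one is dealing with, a quick one-liner; 'armv6l' is what a Pi Zero W reports:)

import platform
print(platform.machine())  # 'armv6l' on a Pi Zero W, 'armv7l' and up on newer Pis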
The image needs to be made into a 7-colour image. The 7 colours are on or off with no gradient, so the images need to be in a style like a Roy Lichtenstein artwork with nice Ben-Day dots. The scant documentation describes reducing the image to 7 colours via the Floyd-Steinberg dithering algorithm and suggests using their Windows exe to do so. As a Linux user, I ignored this and sought a Python implementation. A simple example is found in a SciPy blog post, which goes through the basics and has a commented-out alternative function that addresses the case when one has a palette (uninformatively called p in that post). There is also a post on GitHub Gist wherein numba.jit is used to speed up the computation to milliseconds. However, whereas Numba is awesome, its antipathy towards functions and other simple things makes it painful to implement, plus it does not work on a Pi Zero in my hands anyway, so I will give it a miss.
The Waveshare-provided module uses Image.quantize() upon an Image.putpalette(...) palette, which actually does exactly this, so my coding adventure was utterly unnecessary. One step that is necessary is scaling the image:

from PIL import Image
from warnings import warn

def scale(image: Image.Image, target_width=600, target_height=448) -> Image.Image:
    """
    Given a Pillow image and the two dimensions, scale it,
    cropping centrally if required.
    """
    width, height = image.size
    if height/width < target_height/target_width:
        warn('too wide: cropping')
        new_height = target_height
        new_width = int(width * new_height / height)
    else:
        warn('too tall: cropping')
        new_width = target_width
        new_height = int(height * new_width / width)
    # Image.ANTIALIAS is deprecated --> Image.Resampling.LANCZOS
    # but a fresh install of Pillow via ``ARCHFLAGS='-arch arm6' python3 -m pip install pillow``
    # yielded 8.1.2 as of 26/02/23
    ANTIALIAS = Image.Resampling.LANCZOS if hasattr(Image, 'Resampling') else Image.ANTIALIAS
    img = image.resize((new_width, new_height), ANTIALIAS)
    # crop box is (left, top, right, bottom)
    half_width_delta = (new_width - target_width) // 2
    half_height_delta = (new_height - target_height) // 2
    img = img.crop((half_width_delta, half_height_delta,
                    half_width_delta + target_width, half_height_delta + target_height
                   ))
    return img

Image.crop accepts negative numbers too, in which case the image gets padded, making padding a viable alternative to cropping.
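For example, a minimal letterbox variant of the above (a hypothetical scale_pad, not in the original and untested on the panel) that shrinks to fit and lets the negative crop do the padding:

from PIL import Image

def scale_pad(image: Image.Image, target_width=600, target_height=448) -> Image.Image:
    """Shrink to fit, then pad via a negative crop (hypothetical variant)."""
    width, height = image.size
    ratio = min(target_width / width, target_height / height)
    new_width, new_height = int(width * ratio), int(height * ratio)
    img = image.resize((new_width, new_height))
    half_width_delta = (new_width - target_width) // 2    # zero or negative
    half_height_delta = (new_height - target_height) // 2  # zero or negative
    # negative offsets pad the border (black, for an RGB image) rather than crop
    return img.crop((half_width_delta, half_height_delta,
                     half_width_delta + target_width,
                     half_height_delta + target_height))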

A classic consolation statement is "at least it was didactic"... In my case I am not sure I learnt anything in modifying the code for the Floyd-Steinberg dithering algorithm, as iterating pixel by pixel felt very un-numpyish. Here is my code, which is mostly the same bar the type hints.

import numpy as np
import numpy.typing as npt
from typing import Iterator
import operator
from PIL import Image
# from numba import jit  # Numba proved unusable on the Pi Zero (see above)
from warnings import warn

colors = [
            {"name": "black", "hex": "#000000", "rgb": (0, 0, 0)},
            {"name": "white", "hex": "#FFFFFF", "rgb": (255, 255, 255)},
            {"name": "green", "hex": "#008000", "rgb": (0, 255, 0)},
            {"name": "blue", "hex": "#0000FF", "rgb": (0, 0, 255)},
            {"name": "red", "hex": "#FF0000", "rgb": (255, 0, 0)},
            {"name": "yellow", "hex": "#FFFF00", "rgb": (255, 255, 0)},
            {"name": "orange", "hex": "#FFA500", "rgb": (255, 165, 0)}
        ]
_c: Iterator = map(operator.itemgetter('rgb'), colors)
color_palette: npt.NDArray[np.float64] = np.array(list(map(list, _c))) / 255
RGBTriplet = npt.NDArray[np.float64]  # These will have 3 elements

def get_new_val(old_val: RGBTriplet) -> RGBTriplet:
    # return the closest palette colour by squared Euclidean distance
    idx: int = np.argmin(np.sum((old_val[None, :] - color_palette)**2, axis=1))
    return color_palette[idx]

# @jit(nopython=True)  # disabled: see the note on Numba above
def _fs_inner(pixels: npt.NDArray[np.float64]) -> None:
    """
    ``pixels`` gets modified in place.
    """
    height, width, _ = pixels.shape
    for ir in range(height):
        for ic in range(width):
            old_val: RGBTriplet = pixels[ir, ic].copy()
            new_val: RGBTriplet = get_new_val(old_val)
            pixels[ir, ic] = new_val
            err: RGBTriplet = old_val - new_val
            if ic < width - 1:
                pixels[ir, ic+1] += err * 7/16
            if ir < height - 1:
                if ic > 0:
                    pixels[ir+1, ic-1] += err * 3/16
                pixels[ir+1, ic] += err * 5/16
                if ic < width - 1:
                    pixels[ir+1, ic+1] += err / 16
            
def fs_dither(img: Image.Image) -> Image.Image:
    """
    Floyd-Steinberg dither the image ``img`` into a palette with colours specified
    in the global variable ``color_palette``.

    .. code-block:: python

        dithered_image: Image.Image = fs_dither(scaled_image)
    """
    pixels = np.array(img, dtype=float) / 255
    _fs_inner(pixels)
    corr_pixels = np.array(pixels/np.max(pixels, axis=(0,1)) * 255, dtype=np.uint8)
    return Image.fromarray(corr_pixels)

Traditionally in image processing a lady with a hat called Lenna (cropped from a nude Playboy image) was used; now everyone has their own standard. Here I will use the image I want to display: Atlas in the Yorkshire Dales.

This is the image after my code:

Whereas using the Pillow quantize method I get the following:

pal_image = Image.new("P", (1,1))
pal_image.putpalette( (0,0,0,  255,255,255,  0,255,0,   0,0,255,  255,0,0,  255,255,0, 255,128,0) + (0,0,0)*249)
quanti = scaled_image.convert("RGB").quantize(palette=pal_image)

The Pillow method actually is better: my code is very slow and takes 10 seconds, whereas the Pillow method takes under a second. So I had best use that!
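(The comparison was along these lines; exact figures will of course vary with the machine:)

import time

tick = time.perf_counter()
dithered_image = fs_dither(scaled_image)
print(f'hand-rolled Floyd-Steinberg: {time.perf_counter() - tick:.1f} s')  # ~10 s

tick = time.perf_counter()
quanti = scaled_image.convert("RGB").quantize(palette=pal_image)
print(f'Pillow quantize: {time.perf_counter() - tick:.2f} s')  # well under a second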

Parenthesis on colours

In the above I copied the palette RGB triplets from the Waveshare module, where orange is full red channel (255) and half green channel (128), whereas in the enum-like value epd.ORANGE the green channel is meant to be 168. I have not tweaked the code to test whether the colours are better on the display with the latter scheme. The colour scheme is curious anyway, as it has black, white, the three additive primary colours (red, green, blue) plus yellow and orange. Yellow, cyan and magenta are the secondary colours, but only the first is present, and orange is a tertiary colour. It is more akin to the historical RYB colour model (red, yellow, blue => orange, green and purple), but without purple.

colors = [epd.BLACK, epd.ORANGE, epd.GREEN, epd.BLUE, epd.RED, epd.YELLOW]
print(', '.join(map('{0:0>6x}'.format, colors)))
This gives '000000, 0080ff, 00ff00, ff0000, 0000ff, 00ffff': rather oddly, this is Blue-Green-Red, not RGB, and it is not an endianness issue.
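To illustrate, a hypothetical helper to unpack such a 0xBBGGRR value into a familiar RGB triplet:

def epd_to_rgb(value: int) -> tuple:
    """Unpack a module colour constant packed as 0xBBGGRR (hypothetical helper)."""
    blue, green, red = (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF
    return (red, green, blue)

print(epd_to_rgb(0x0080ff))  # (255, 128, 0) -> orange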
from IPython.display import display, HTML

colors = [
            {"name": "black", "hex": "#000000", "rgb": (0, 0, 0)},
            {"name": "white", "hex": "#FFFFFF", "rgb": (255, 255, 255)},
            {"name": "green", "hex": "#008000", "rgb": (0, 255, 0)},
            {"name": "blue", "hex": "#0000FF", "rgb": (0, 0, 255)},
            {"name": "red", "hex": "#FF0000", "rgb": (255, 0, 0)},
            {"name": "yellow", "hex": "#FFFF00", "rgb": (255, 255, 0)},
            {"name": "orange", "hex": "#FFA500", "rgb": (255, 165, 0)},
            {"name": "mid-orange", "hex": "#FF8000", "rgb": (255, 165, 0)}
        ]
        
to_color_span = lambda color: f'<span style="color:{color["hex"]}">{color["name"]} ▀ </span>'
display(HTML('\n'.join(map(to_color_span, colors))))
black ▀ white ▀ green ▀ blue ▀ red ▀ yellow ▀ orange ▀ mid-orange ▀

Plan C: Pi Zero, full Pillow

One thing that stands out is that the image is rather whitewashed after the dithering. Therefore, boosting the saturation helps:

from PIL import ImageEnhance
enhanced_image: Image = ImageEnhance.Color(scaled_image).enhance(2)
endithered: Image = enhanced_image.convert("RGB").quantize(palette=pal_image)

Setup

As mentioned previously, I set up my Raspberry Pis to serve Jupyter notebooks (instructions). Then:

python -m pip install -q pillow waveshare-epaper
raspi-config  # enable SPI

A kind user uploaded Waveshare's module to PyPI, which is rather telling...

import logging
import time
from PIL import Image, ImageDraw, ImageFont, ImageEnhance
from warnings import warn
import traceback

# ------------

def scale(image: Image.Image, target_width=600, target_height=448) -> Image.Image:
    ...  # see above

original_image = Image.open('DSC_0168.JPG')
scaled_image = scale(original_image)
scaled_image

# ------------

enhanced_image: Image = ImageEnhance.Color(scaled_image).enhance(3)
pal_image = Image.new("P", (1,1))
pal_image.putpalette( (0,0,0,  255,255,255,  0,255,0,   0,0,255,  255,0,0,  255,255,0, 255,128,0) + (0,0,0)*249)
endithered: Image = enhanced_image.convert("RGB").quantize(palette=pal_image)
endithered

# ------------

import epaper
epd = epaper.epaper('epd5in65f').EPD()
epd.init()
epd.Clear()
epd.Clear()  # double tap.

# ------------

epd.display(epd.getbuffer(endithered))
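One step worth copying from the Waveshare demo scripts, if I recall them correctly, is to put the panel into deep sleep once done:

epd.sleep()  # deep sleep between refreshes, per the demo scripts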

Plan D: Pi Zero with L-shaped headers

Well, I have a deep picture frame from Ikea (Ribba) and a sheet of plasticard to hold the display; however, with straight header pins on the Pi Zero it does not work at all, as it is thicker than the picture frame... This is not my best-thought-out project.
