I work for a company where we use Zig in production.
In particular, we built a virtual makeup try-on that runs on WebAssembly for the Korean makeup manufacturer Ameli. You can try it for yourself here. Don’t worry, we don’t store any images: all the processing is done locally on the end device.
My employer, B factory, has agreed to let me release the foundation library I developed for the virtual makeup. The library features:
I still haven’t uploaded all the features, but I couldn’t wait to share it with you.
Notably, the drawing and blurring parts are missing, since their current implementations are not generic enough to be in the library.
Be sure to check the examples directory, where you can try the face alignment demo for yourself.
A little bit about me: I am a contributor to the dlib C++ library: http://dlib.net/
Some of the functionality in Zignal has been ported from dlib, which was also a great source of inspiration.
Other features were developed for Zignal first, and then ported back to dlib:
Finally, I am thrilled to share this and get some feedback: I am sure there are things that can be done better; do not hesitate to give me some advice! It’s still a work in progress.
Small update: I am porting functionality back from our internal code, so you can already blur images with Zignal. The demo has been updated accordingly.
It supports box blur, which is really fast because it is computed from an integral image. It works on grayscale images with SIMD, and on color images with any kind of pixel type.
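To give an idea of why the integral image makes box blur fast, here is a minimal Python sketch of the technique (illustrative only, not Zignal’s actual implementation): once the summed-area table is built, the sum over any window is four lookups, so the blur costs O(1) per pixel regardless of the radius.

```python
# Box blur via a summed-area (integral) image: O(1) work per pixel
# for any blur radius once the table is built.
def integral_image(img):
    """ii[y][x] holds the sum of img over the rectangle [0, y) x [0, x)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def box_blur(img, radius):
    """Average over a (2*radius+1)^2 window, clamped at the borders."""
    h, w = len(img), len(img[0])
    ii = integral_image(img)
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        for x in range(w):
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            # Window sum from four corner lookups in the integral image.
            total = ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]
            out[y][x] = total / ((y1 - y0) * (x1 - x0))
    return out

print(box_blur([[1, 2], [3, 4]], radius=1))  # every window covers the whole image
```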
Following up on my New image processing library: Zignal post from a few months ago, I’m excited to announce that Zignal has reached its first tagged release!
Here’s the TL;DR of this release:
12 color spaces with seamless conversions (RGB, HSV, HSL, Lab, Oklab, XYZ, etc.)
Full linear algebra suite with SVD and PCA implementations
2D drawing API with antialiased primitives and Bézier curves
Geometric transforms and convex hull algorithms
WASM-first design with interactive browser examples
Complete image I/O with a native PNG codec and JPEG decoder (I had to read zigimg at some point when I got stuck: parts of my understanding of JPEG were wrong)
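For a feel of what the color space conversions do, here is the round trip between RGB and HSV using Python’s stdlib colorsys (shown only for comparison; this is not Zignal’s API):

```python
import colorsys

# Pure red in RGB, with channels in [0, 1].
r, g, b = 1.0, 0.0, 0.0

# RGB -> HSV: hue 0.0 (red), full saturation, full value.
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h, s, v)  # 0.0 1.0 1.0

# The conversion round-trips back to the same RGB triple.
print(colorsys.hsv_to_rgb(h, s, v))  # (1.0, 0.0, 0.0)
```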
What pushed me to tag a release, though, was that I wanted to provide Python packages.
I’ve spent the last week or so getting it to work with Python via native bindings using the Python C API.
I know Ziggy Pydust exists, but I wanted to learn how to do it myself: write the native Python bindings, see which patterns repeat and can be automated with Zig’s comptime reflection, and slowly build my best-effort Python utils.
Before, I always compiled a generic shared library and bound it to Python with ctypes.
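For reference, the ctypes approach looks roughly like this (using the C math library as a stand-in for the real shared library): every exported function needs its signature declared by hand, which is exactly the per-function boilerplate that native bindings avoid.

```python
import ctypes
import ctypes.util

# Resolve and load a shared library by short name; "m" (libm) stands in
# here for whatever library you actually compiled.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# ctypes knows nothing about the C signatures, so argument and return
# types must be spelled out for each function you call.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
```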
Right now, I am building the wheels myself via CI, because I am using Zig 0.15.0-dev, and as far as I know, it’s not available on PyPI. At some point I’d like to add the ziglang dependency, so users can build the library themselves and get full native optimizations. It’s also nicer on PyPI, since we won’t be uploading any binaries.
Sorry for spamming here, but I just tagged 0.2.0 with a pretty nice feature: terminal image display.
I am an image processing guy, but I love working in the terminal (foot + kakoune + tmux).
I like having my program running in a split pane with zig build run-whatever --watch, so whenever I make changes I can see whether the program still runs. However, working with images meant I had to switch to an image viewer. Well, no more!
That’s foot displaying the image with sixel. By default, the .format method on Image will progressively degrade, depending on what your terminal supports:
.kitty: it works on Ghostty
.sixel: it works on Foot (the default settings use an adaptive palette of 256 colors with dithering)
.ansi_blocks: it works on GNOME Terminal (▀)
It also supports .ansi_basic, which uses spaces with a background color (the image is stretched, but no Unicode characters are required), and .braille for monochrome graphics.
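The half-block trick behind .ansi_blocks is simple enough to sketch in a few lines of Python (this shows the general technique, not Zignal’s implementation): each ▀ cell displays two vertically stacked pixels, with the top pixel as the 24-bit foreground color and the bottom pixel as the background color.

```python
# Render an RGB image (list of rows of (r, g, b) tuples) with the "▀"
# half-block trick: one character cell per two vertically stacked pixels.
def ansi_blocks(pixels):
    lines = []
    # Consume rows two at a time: top pixel -> foreground, bottom -> background.
    for y in range(0, len(pixels), 2):
        top = pixels[y]
        bottom = pixels[y + 1] if y + 1 < len(pixels) else top
        cells = []
        for (tr, tg, tb), (br, bg, bb) in zip(top, bottom):
            # 38;2 sets a 24-bit foreground, 48;2 a 24-bit background.
            cells.append(f"\x1b[38;2;{tr};{tg};{tb}m\x1b[48;2;{br};{bg};{bb}m\u2580")
        lines.append("".join(cells) + "\x1b[0m")  # reset attributes per line
    return "\n".join(lines)

# A 2x2 image rendered as a single character row: red over blue, green over white.
img = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
print(ansi_blocks(img))
```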