openai-proxz: An OpenAI API client library for Zig!

I wanted a simple interface for interacting with OpenAI and OpenAI-compatible APIs and couldn't find one with the features I needed, so I built one!

Coming from Python, I loved how simple the openai-python package was, so I modeled this library after that interface.

This was a great learning experience for me, and it's my first library, so I'd love feedback!

📙 ProxZ Docs: https://proxz.mle.academy

(See the docs or the GitHub repo for installation guides; a rough sketch of the usual flow is below.)
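To give you an idea before you head there: installing a Zig package generally means a zig fetch plus a couple of build.zig lines. The sketch below uses a placeholder URL and assumes the dependency and module are both named "proxz", so double-check against the actual installation guide:

// build.zig (a sketch; the fetch URL and "proxz" names are assumptions):
//
//   zig fetch --save git+https://github.com/<owner>/proxz
//
const std = @import("std");

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    const exe = b.addExecutable(.{
        .name = "my-app",
        .root_source_file = b.path("src/main.zig"),
        .target = target,
        .optimize = optimize,
    });

    // wire the dependency in so @import("proxz") resolves
    const proxz_dep = b.dependency("proxz", .{
        .target = target,
        .optimize = optimize,
    });
    exe.root_module.addImport("proxz", proxz_dep.module("proxz"));

    b.installArtifact(exe);
}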

Usage

Client Configuration

const std = @import("std");
const proxz = @import("proxz");
const OpenAI = proxz.OpenAI;

// `allocator` is any std.mem.Allocator, e.g. from a GeneralPurposeAllocator.
// Make sure you have an OPENAI_API_KEY environment variable set,
// or pass an .api_key field to set the key explicitly.
var openai = try OpenAI.init(allocator, .{});
defer openai.deinit();

Since OpenAI was one of the first large LLM providers, many others modeled their APIs on its contract. That means you can use other providers by setting the OPENAI_BASE_URL environment variable or by adjusting the config:

var openai = try OpenAI.init(allocator, .{
    .api_key = "my-groq-api-key",
    .base_url = "https://api.groq.com/openai/v1",
    .max_retries = 5,
});
defer openai.deinit();

Chat Completions

const ChatMessage = proxz.ChatMessage;

var response = try openai.chat.completions.create(.{
    .model = "gpt-4o",
    .messages = &[_]ChatMessage{
        .{
            .role = "user",
            .content = "Hello, world!",
        },
    },
});
// This will free all the memory allocated for the response
defer response.deinit();
const completions = response.data;
std.log.debug("{s}", .{completions.choices[0].message.content});

Embeddings

const inputs = [_][]const u8{ "Hello", "Foo", "Bar" };
const embeddings_response = try openai.embeddings.create(.{
    .model = "text-embedding-3-small",
    .input = &inputs,
});
// Don't forget to free resources!
defer embeddings_response.deinit();
const embeddings = embeddings_response.data;
std.log.debug("Model: {s}\nNumber of Embeddings: {d}\nDimensions of Embeddings: {d}", .{
    embeddings.model,
    embeddings.data.len,
    embeddings.data[0].embedding.len,
});
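To actually do something with those vectors, a plain cosine-similarity helper is all you need. This one is hypothetical and not part of proxz, and it assumes the embedding field is a slice of f32 (swap the type if the library uses f64):

fn cosineSimilarity(a: []const f32, b: []const f32) f32 {
    var dot: f32 = 0;
    var norm_a: f32 = 0;
    var norm_b: f32 = 0;
    for (a, b) |x, y| {
        dot += x * y;
        norm_a += x * x;
        norm_b += y * y;
    }
    return dot / (@sqrt(norm_a) * @sqrt(norm_b));
}

// e.g. compare "Hello" and "Foo" from the response above
const sim = cosineSimilarity(embeddings.data[0].embedding, embeddings.data[1].embedding);
std.log.debug("cosine similarity: {d}", .{sim});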

Feedback

I'd love some! I'm sure there are plenty of things I could improve here.

Update: a few changes since the initial post:
  • Streaming support, via a new createStream function on chat completions
  • Updated response contract to reduce bloat
  • Logging customizations (see the note after the Streaming section below)

New Contract

const ChatMessage = proxz.ChatMessage;

var response = try openai.chat.completions.create(.{
    .model = "gpt-4o",
    .messages = &[_]ChatMessage{
        .{
            .role = "user",
            .content = "Hello, world!",
        },
    },
});
// This will free all the memory allocated for the response
defer response.deinit();
std.log.debug("{s}", .{response.choices[0].message.content});
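Multi-turn chats work the same way under the new contract: just pass more messages. The role strings here follow the usual OpenAI convention:

var reply = try openai.chat.completions.create(.{
    .model = "gpt-4o",
    .messages = &[_]ChatMessage{
        .{ .role = "system", .content = "You are a terse assistant." },
        .{ .role = "user", .content = "Summarize Zig in one sentence." },
    },
});
defer reply.deinit();
std.log.debug("{s}", .{reply.choices[0].message.content});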

Streaming

var stream = try openai.chat.completions.createStream(.{
    .model = "gpt-4o-mini",
    .messages = &[_]ChatMessage{
        .{
            .role = "user",
            .content = "Write me a poem about lizards. Make it a paragraph or two.",
        },
    },
});
defer stream.deinit();

std.debug.print("\n", .{});
// print each streamed chunk as it arrives
while (try stream.next()) |val| {
    std.debug.print("{s}", .{val.choices[0].delta.content});
}
std.debug.print("\n", .{});
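One thing not shown above is the logging customization. I won't guess at proxz's own options here (check the docs for those), but independent of the library you can tune log output the standard Zig way from your root source file. A sketch, where the .proxz scope name is an assumption:

const std = @import("std");

pub const std_options: std.Options = .{
    // default log level for everything
    .log_level = .info,
    // assumption: if proxz logs under a "proxz" scope, adjust it separately here
    .log_scope_levels = &[_]std.log.ScopeLevel{
        .{ .scope = .proxz, .level = .warn },
    },
};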