Is it easy to learn Zig for a Go programmer?

I am a Go programmer.
I want to learn Zig.
Is it easy to learn, or are there difficulties?
What are the toughest parts…

1 Like

The hardest parts of learning Zig are probably the same for most programmers regardless of background. I say this because I’ve analyzed our help topics in the past and we get similar questions across the spectrum.

Basic things (making variables, using loops, defining functions) are all very easy. Anyone with basic programming knowledge can pick up these parts quite easily. However, in C++ and Rust, meta-programming is seen as advanced/difficult - in Zig, it’s really quite easy.
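To show what "easy meta-programming" means in practice, here's a minimal sketch of a generic function (the `max` name and values are just for illustration). In Zig, a type is an ordinary `comptime` parameter, so there is no separate template or macro language to learn:

```zig
const std = @import("std");

// A generic max function: `T` is a regular comptime parameter.
// No template syntax, trait bounds, or macros required.
fn max(comptime T: type, a: T, b: T) T {
    return if (a > b) a else b;
}

pub fn main() void {
    std.debug.print("{d}\n", .{max(i32, 3, 7)}); // works for integers
    std.debug.print("{d}\n", .{max(f64, 1.5, 0.5)}); // and for floats
}
```

The same mechanism (functions that take and return types at compile time) is how generic containers like `std.ArrayList(T)` are built.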

Runtime polymorphism in Zig is harder at first. In C++, people use virtual inheritance (inheritance using virtual tables, essentially). It’s good to learn some patterns in Zig for polymorphism (using @fieldParentPtr or anyopaque with closures to recast to parent classes… etc).

There’s often a lack of familiarity with using and picking allocators. This isn’t just a Zig problem though - people don’t commonly think about allocation at a granular level these days.
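As a rough sketch of what "picking an allocator" looks like (API names here match recent std versions, but exact names have shifted between Zig releases):

```zig
const std = @import("std");

pub fn main() !void {
    // Option 1: a general-purpose heap allocator
    // (detects leaks in debug builds via deinit).
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // Option 2: a fixed buffer on the stack - no heap use at all.
    // var buf: [1024]u8 = undefined;
    // var fba = std.heap.FixedBufferAllocator.init(&buf);
    // const allocator = fba.allocator();

    const items = try allocator.alloc(u32, 10);
    defer allocator.free(items);
}
```

Both produce the same `std.mem.Allocator` interface, so the code that uses them doesn't change - the choice is about lifetime and performance, which is exactly the granular thinking that takes getting used to.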

We also get a lot of questions surrounding casting and byte alignment. This, however, is tricky regardless of the language one is working in, because it requires a more fundamental understanding of how memory is laid out.
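A small sketch of the kind of casting question that comes up (the byte values here are just for illustration): reinterpreting a byte buffer as a slice of wider integers requires the buffer to actually be aligned for the target type:

```zig
const std = @import("std");

pub fn main() void {
    // Force the byte array to be aligned for u32 access.
    var bytes align(@alignOf(u32)) = [_]u8{ 1, 0, 0, 0, 2, 0, 0, 0 };

    // Reinterpret the bytes as u32 values; this is only legal
    // because the alignment requirement above is satisfied.
    const words = std.mem.bytesAsSlice(u32, bytes[0..]);

    // On a little-endian machine this prints 1 and 2.
    std.debug.print("{d} {d}\n", .{ words[0], words[1] });
}
```

Without the `align(...)` annotation, the compiler (or an `@alignCast` at runtime) would stop you - which is the part that surprises people coming from garbage-collected languages.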

4 Likes

This is the part of Zig I’m finally approaching in my journey… (yes, I’m moving slowly, but that’s because life gets in the way). TBH, this is where I kinda started missing C++, and by that I mean “grandpa’s C++”, as I lost contact with the language in the mid-2000s.
Could you give any links to helpful articles / posts / videos on the topic?

Hey @Durobot, I’ve got a resource here you might particularly like. And yes, it took me a while to get around to polymorphism as well.

I think a great place to start is right in the deep end here: zig/lib/std/Thread/Pool.zig at master · ziglang/zig · GitHub

Basically, the thread pool can spawn a bunch of job-related threads and then run them with “Runnable” objects. Quick caveat: for the sake of brevity I’m going to skip some details in my explanation, but you can read the full implementation through that link 🙂

So let’s look at the spawn function a chunk at a time and see what’s up.

pub fn spawn(pool: *Pool, comptime func: anytype, args: anytype) !void {
    if (builtin.single_threaded) {
        @call(.auto, func, args);
        return;
    }

So first, we can see that it takes in a comptime function and some arguments. This is a fairly opaque API as @matklad was pointing out in the other thread (I’m tempted to link back to my post on generic concepts, but I’ll spare everyone the trip).

We can see that in the second line, they check to see if the app is single threaded - if it is, they go ahead and just run the function with the args and return. So we can see that the high-level intention is to do just that… run a function in some manner. In the case where it’s not single threaded, we have this interesting piece of code…

    const Args = @TypeOf(args);
    const Closure = struct {
        arguments: Args,
        pool: *Pool,
        run_node: RunQueue.Node = .{ .data = .{ .runFn = runFn } },

        fn runFn(runnable: *Runnable) void {
            const run_node = @fieldParentPtr(RunQueue.Node, "data", runnable);
            const closure = @fieldParentPtr(@This(), "run_node", run_node);
            @call(.auto, func, closure.arguments);

            // The thread pool's allocator is protected by the mutex.
            const mutex = &closure.pool.mutex;
            mutex.lock();
            defer mutex.unlock();

            closure.pool.allocator.destroy(closure);
        }
    };

Here’s where the generic behavior comes into play. Let’s just focus on the runFn portion.

The point of this function is to recover context. At some point we’re going to lose our surrounding context. This is very similar to using lambdas in C++ to convert to function pointers. So here’s how they handle that in this case.

Start by noticing that the closure has run_node - that’s our intrusive hook to identify the current closure. We’ll use that later to get back to this same object. You can also see that the run_node has a reference to the run function.

The next call they make is to get the run_node back from the runnable object - we’re beginning the recovery process. Since the run_node is a member of the closure, they do this again to get the closure itself. At this point, at some later moment in time, we have recovered the very object that was created in spawn.

In other words, @fieldParentPtr:

    Parent Struct
          Child Field  <--- use this to get back to the Parent Struct
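Stripped of the thread-pool machinery, here's a minimal standalone sketch of that recovery step (the `Parent`/`Node` names are made up for illustration; the `@fieldParentPtr` signature below matches the one used in the quoted Pool.zig - newer Zig versions drop the explicit parent type and infer it from the result):

```zig
const std = @import("std");

const Node = struct { data: i32 };

const Parent = struct {
    name: []const u8,
    node: Node,
};

pub fn main() void {
    var p = Parent{ .name = "task", .node = .{ .data = 42 } };

    // Somewhere else, all we have is a pointer to the child field...
    const node_ptr: *Node = &p.node;

    // ...and @fieldParentPtr recovers the enclosing struct from it.
    const parent: *Parent = @fieldParentPtr(Parent, "node", node_ptr);

    std.debug.print("{s}\n", .{parent.name}); // prints "task"
}
```

It's pure pointer arithmetic: the compiler knows the byte offset of `node` inside `Parent`, so it subtracts that offset from the field pointer.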

Then, they call the func on the closure.arguments that were recovered.

The most confusing part is when they then call:

closure.pool.allocator.destroy(closure);

They destroy the closure after it has been run, which completes the cleanup after the task. So this is the mechanism in this case to recover the context, run the function, and then clear the resource. Now we move on to the part where things actually get run.

    {
        pool.mutex.lock();
        defer pool.mutex.unlock();

        const closure = try pool.allocator.create(Closure);
        closure.* = .{
            .arguments = args,
            .pool = pool,
        };

        pool.run_queue.prepend(&closure.run_node);
    }

    // Notify waiting threads outside the lock to try and keep the critical section small.
    pool.cond.signal();

They allocate a closure object using the pool allocator - this could be a stack-based allocator, so we may not actually be making a syscall here.

Then, they create one of those same closure objects we saw above, load up the arguments, and give it the current pointer to the pool object we’re using. Once that is said and done, we prepend the run_node to the run_queue and signal a thread that work is ready to be run (signal means we’re going to wake up a single thread).

So the exact same run_node that we just put into the queue is the one the closure uses with @fieldParentPtr to get back to itself - thus completing the cycle.

Again, there’s a little bit more context surrounding all of this, but you can see the goals here:

  • Create a function that has context
  • Capture the context in a closure object that has a handle
  • Use the handle to recover the object and context at some point
  • Run the function using the recovered context
4 Likes

The only thing to note here is that using anyopaque is becoming the standard way to do polymorphic behavior. There are some maneuvers in the above code to make two or more generic APIs work with each other (the thread interface and the closure interface).

To see a more basic form of polymorphism, look into how the vtables work for creating the allocator interface. It’s a classic vtable - just load up some pointers and create a pointer to the object itself that you give to the allocator interface. Those functions know how to cast the opaque pointer back to the object that it was previously.
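As a rough sketch of that vtable pattern (the `Writer`/`Counter` names are invented for illustration, and the `@ptrCast(@alignCast(...))` spelling is the newer single-argument style - older Zig versions wrote these casts with an explicit destination type):

```zig
// A minimal hand-rolled interface in the spirit of std.mem.Allocator:
// an opaque pointer to the implementation plus a function pointer.
const Writer = struct {
    ptr: *anyopaque,
    writeFn: *const fn (ptr: *anyopaque, bytes: []const u8) void,

    fn write(self: Writer, bytes: []const u8) void {
        self.writeFn(self.ptr, bytes);
    }
};

// One concrete implementation: counts bytes written.
const Counter = struct {
    count: usize = 0,

    fn writeImpl(ptr: *anyopaque, bytes: []const u8) void {
        // Cast the opaque pointer back to the concrete type.
        const self: *Counter = @ptrCast(@alignCast(ptr));
        self.count += bytes.len;
    }

    fn writer(self: *Counter) Writer {
        return .{ .ptr = self, .writeFn = Counter.writeImpl };
    }
};
```

Code that takes a `Writer` never sees `Counter`; only the function stored in the table knows how to recover the concrete type from the opaque pointer - the same recovery idea as `@fieldParentPtr`, just via a different mechanism.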

@AndrewCodeDev , thank you!
Hopefully I’ll have time to read it this weekend.

Admittedly, Zig is easier to learn if you are familiar with C.

For Gophers, one important thing: you should never expose local variables (which are allocated on the stack) outside of a function.
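A minimal sketch of the trap (the `bad`/`good` names are made up): in Go, escape analysis would silently move the value to the heap; in Zig, nothing saves you and the pointer dangles:

```zig
const std = @import("std");

// BUG: returns a pointer to a stack local. The storage for `x`
// is reclaimed the moment bad() returns, so the pointer dangles.
fn bad() *i32 {
    var x: i32 = 42;
    return &x;
}

// Fix: allocate explicitly (or have the caller supply the storage).
fn good(allocator: std.mem.Allocator) !*i32 {
    const x = try allocator.create(i32);
    x.* = 42;
    return x;
}
```

The caller of `good` now owns the value and must eventually call `allocator.destroy` on it - the explicit ownership that Go's GC handles for you.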

I had a link titled

The Zig and Go Programming Showdown! | by Erik Engheim | Oct, 2022 | ITNEXT

but unfortunately it’s broken for me
(some problems with HTTPS certificates, I think).
Maybe this can be somewhat useful.

I’m a Python programmer.

Would you recommend learning C before learning Zig, or can I just start learning Zig right away?

@FullyRemote You don’t need to learn C first. Zig will teach you the low level things you need to get familiar with, just as well as C would. Unless you really want to learn C too, which is fine as well.

2 Likes

I can’t make the decision for you. It depends on many factors.
C is stable and there are many C books out there.
And C is a skill you will never regret mastering.

C has some drawbacks (which are among the reasons Zig was invented),
such as inconsistent dev environment setups on different OSes, too much unspecified behavior,
function pointer types that can be hard to read, a standard library that is not rich, etc.

Zig fixed most of these drawbacks, but it hasn’t reached the 1.0 stable state yet.
You can develop apps with Zig (there are already some great apps developed in Zig),
but you must put up with some minor backward-compatibility breaks from version to version
before Zig reaches 1.0.
And there are still compiler bugs in a lot of edge cases.
There are some free online books for Zig, but I haven’t found one that is good enough
for a programmer who only has experience in garbage-collected languages.
And I found that many of the books contain code which no longer works with the latest Zig compiler.

My suggestion is to try learning Zig directly. Just remember not to expose a value declared in an inner scope to an outer scope (by passing the address of the value to the outer scope), including exposing a local variable outside of its containing function.

4 Likes

I agree with the sentiment that @zigo has presented here.

C is an important part of software development history. Love it or hate it, it has a role to play in almost any field you can imagine. I work in C/C++ databases and could go on a long rant here about both languages. There are things I think are absolutely brilliant (and horrible) about the C family of languages.

I want to propose a slightly different take here, though. Learning C is helpful for Zig at this moment in time. There are many C libraries that you can wrap with Zig code after some familiarity. You’re not necessarily neglecting Zig by learning C. On the contrary - it would make sense to learn C after some time in Zig because it will open up a vast arsenal of code that can enrich your Zig projects.

4 Likes