Ziggit and Large Language Models

The moderators have been actively discussing the LLM question. One point of consensus was to open this thread.

Ziggit has always been easy to moderate. It’s a small community, enthusiastic about its topic, and generally well-behaved. It’s like this for one reason more than any other: we have a well-defined, narrow topic, and no place for off-topic discussion. If it’s not about Zig, it doesn’t belong here.

We’ve only made one major addition to the moderation policy: the AI policy. It’s time to revisit it.

Up front, we condense the policy to a slogan:

Ziggit is by and for humans. We don’t want AI participation.

We stand by this. As the rest of the policy (hopefully) makes clear, “participation” means things like letting a chatbot write posts, or answering usage questions with LLM output: none of which we want.

We tried to write the policy on AI-generated code with some breadth of discretion: at the time, it was already clear that LLM assistance was widespread, and, also at that time, the quality of purely ‘vibecoded’ generative-AI code was inferior, particularly where Zig is concerned.

The policy dates from the very infancy of “agents”, and it has stopped reflecting how we moderate in practice.

We know that we have users who are staunchly opposed to the mere existence of LLMs, let alone the use of them. We want those users to be comfortable here. There are reasons to feel that way, some on-topic for the forum, many not: in this thread we consider more of that on topic than usual.

Yet we have consensus amongst ourselves that Ziggit will not have a zero-tolerance policy around LLMs, agents, and so on. We will also insist on our bedrock policy, in place since the beginning: users will treat each other well. We’ve informally been more tolerant of mean-spirited or insulting comments in this area than we feel we should be, and that won’t continue going forward.

So we want to be clear: there’s a place for threads about agentic coding (of Zig), for Showcases which feature agent-directed code, and things of that nature on Ziggit. We don’t want this to overwhelm the topics we already have, and we don’t want to drive off users who would prefer none of this is happening. “Ziggit is by and for humans” remains primary.

In the interest of personal disclosure: while I’ve been using chatbots as a sort of all-purpose question machine for about a year, about six weeks ago I started using Codex to do full-on agentic coding. I don’t intend to stop. It’s a remarkable experience, and well worth discussing.

As another indicator of this moment in history, mitchellh is using agents to find and fix hard bugs in Ghostty. This is something which is here to stay, and that’s a mixed bag, but a policy which would preclude Showcasing Ghostty is not what we intend to offer.

One possibility we’re considering is making agents a topic of their own. That doesn’t taboo the mention and discussion of them anywhere which is topical, but it would be the place for discussions where agents (and Zig!) are the main event, so to speak, and somewhere we can move digressions in an agent-related direction.

One of the mods suggested a disclosure policy for Showcases, and another thinks it’s a good idea. Two more were lightly inclined against it, one of whom has moved toward ‘unsure’, and one abstained from comment. That leaves me: I don’t think this is a good policy, for three reasons.

First, it’s not clear what “used an LLM” is supposed to mean. Asked a chatbot one question? Generated a few tests, fixed a bug or two? Is ‘used an agent’ the threshold, or is the label to be reserved for pure vibecoding? Second, users being coy or dishonest about this question is not a problem we’ve actually seen. Third, we’ve seen a growing trend of Showcase posters proactively describing the degree of LLM involvement in their code, so making that into policy may actually be counterproductive.

We decided to solicit community feedback on that as well. We’re not putting it to a vote; we want to know what you think.

Now we come to the deciding reason this thread exists: slop.

Slop is Real

One thing LLMs have made possible, which basically wasn’t before, is the creation of code which just kind of exists. The human responsible doesn’t really know what’s in there, often the README promises a bunch of stuff which doesn’t exist at all, and in general it gives off an unpleasant aroma which I can only describe as: slop.

This is stressing the moderators out. Mostly because it exists, and we don’t want to look at it, but also because it universally gets flagged, and has served as an invitation for users to express themselves in ways we don’t want happening here.

Calling the work of other users ‘slop’ is itself an example of something we can’t permit; there’s no “it’s true” exemption for that. There are ways to address this which don’t amount to name-calling: we want you to be more specific, creative, and kinder about it.

Here’s a real example:

Seems to segfault on wayland. The heavy LLM looking codebase does not really boost my confidence much either.

Here’s a made up / paraphrased one:

This README seems to be AI-generated, which isn’t a good sign. How much of what it claims does your program actually do? We’ve had bad experiences with that in the past.

Specific, critical, actionable, and not insulting.

But ‘slop’ isn’t an obscenity either, and we’re not going to make up a euphemism which means the same thing.

Slop does share something with obscenity: it’s hard to define, but you know it when you see it. Recent slop-threads have turned out to be teachable moments, which is not all bad.

Be that as it may: our policy against chatbot spam, along with a certain amount of cultural maturity, has noticeably reduced the amount of it we have to see. We’re considering adding a “no slop” policy to the house rules, but this will call for responsibility on everyone’s part. We don’t want slop-suspected Showcases getting flagged; we can’t have you accusing others of slop-purveying; and the moderators are the final arbiters of how to apply the policy, on a case-by-case basis.

I want people to stand by their code. In using agents, I’ve found them extremely capable of implementation: but getting a final result which I am willing to stand by takes most of the effort, and is truly stretching my abilities to the limit. We don’t want to see agent-slop which the poster can’t explain, justify, and stand by.

If a beginner posts Zig code, it’s not always good. No one minds that! We’re all happy to give feedback on how to make it better. But it’s not the same with a pile of incomprehensible LLM output, and Ziggit isn’t going to be home base for learning how to use agents: the problem isn’t Zig related and we don’t want to keep fixing it.

So just having an explicit pre-filter, warning posters to run a vibe check on what they’ve done and not post it until they feel we won’t judge it to be slop: this could make a real difference, both for us as moderators and for y’all as well.

Loris Cro’s motto is applicable here: “write software you can love”. That’s compatible with agents (have you fallen out of love with Ghostty? not me), but it’s not compatible with slop.

So this is the thread where we talk it all out. Things we’re considering: an Agent/LLM topic, some policy around disclosure (tagging?), and “no slop”.

6 Likes

I’m personally fine with LLM-assisted projects as long as it is properly disclosed. If there is no proper disclosure, it feels like interacting with a snake oil salesman.

I at least expect the project owner to know what was generated and where. They should be honest about what they know and what they don’t. I also expect the project owner to understand the code that was generated.

As for Ziggit: I don’t ever want to read some copy-paste from an LLM, unless there is a good reason like a language barrier. Otherwise it’s just rude.

Personally, I find LLMs very interesting for “smarter fuzzing”. As long as you have a ground truth, you can leave the LLM to generate outputs until one fails the ground truth. There are valid uses for LLMs, just like for any tool. It is up to us humans to decide how we use them.
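The ground-truth idea above can be sketched without any LLM in the loop. In the sketch below, a random generator stands in for the LLM proposing test inputs, a trusted reference implementation serves as the ground truth, and the loop stops as soon as the candidate disagrees with it. All names here are hypothetical, and the “candidate” has a deliberately planted bug for demonstration.

```python
import random

def reference_sort(xs):
    """Ground truth: Python's built-in sort."""
    return sorted(xs)

def candidate_sort(xs):
    # Stand-in for an LLM-generated implementation under test;
    # it has a planted bug: duplicates are silently dropped.
    return sorted(set(xs))

def fuzz(candidate, reference, gen_input, trials=1000):
    """Generate inputs until the candidate disagrees with the ground truth."""
    for _ in range(trials):
        xs = gen_input()
        if candidate(xs) != reference(xs):
            return xs  # a failing input was found
    return None  # no disagreement observed within the budget

random.seed(0)
gen = lambda: [random.randint(0, 5) for _ in range(random.randint(0, 8))]
failing = fuzz(candidate_sort, reference_sort, gen)
```

The point is that the oracle (here, `reference_sort`) does the real verification; the generator, whether random or an LLM, only has to be creative, not correct.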

(Though there is an ethical/moral side to how LLMs are produced and who can actually produce them, as well as the potential for power over what goes into the generated output, so it is also up to each individual what they think about the human/moral side of the technology.)

6 Likes

I think disclosure is a complicated topic. I’m an experienced programmer, and yet I’m using coding agents, not to delegate my thinking to them, but to help me cope with my fragmented time available for programming. I stand by every line of the code; even though many of those lines are written by the agent, it’s all under my full supervision. Having it in the same category as vibe coding is degrading, in my opinion, but I can live with it.

4 Likes

In the interest of clarity, this is already against policy, and we don’t intend to change that at all.

4 Likes

This would be fine. What about a couple of tags, like “AI-used” (as opposed to “AI”, which might just mean that the topic is AI-centric) and “vibe-coded” (which probably isn’t something we want to promote, but at least it would help differentiate AI use from vibe coding, which @lalinsky mentioned)?

This would allow the extra-sensitive to filter out “all that stuff” if they want. It would require good citizenship, though a mod could add the tag, if they perceived the need.

Anyway… 2 cents.

3 Likes

LLMs have helped me a lot. They have helped me understand many areas I had never understood before and have provided me with a lot of references.
Because of this, I understand how difficult it is to provide helpful content based on an LLM’s responses. Directly copying and pasting an LLM’s replies is not helpful; it requires a lot of thinking and repeatedly questioning every detail to reach conclusions that are not misleading.

4 Likes

I should probably add, to anticipate the question “how do I know if I should use such an ‘AI-used’ tag?” (as mentioned above, “asked a chatbot one question”…?): don’t overthink it. You’re smart. If you think it would help to have a little disclosure in the form of setting that tag, then do it. If you just asked a chatbot a question or two, but hand-wrote all the real code, then probably not. Only if you think others would raise an eyebrow and you don’t want to have to ’fess up 10 posts later.

1 Like

This is an example of me using an LLM to help others on the site:

In fact, I shouldn’t have any ability to help at all, because I don’t even have the Pico 2 device. I tried to help simply because I hoped the other person could get some assistance on the platform, and that perhaps something could be achieved with the help of an LLM.

When the LLM first received the question, it was very confident in providing its own answer and fix for the code, but I did not believe it at all, and I could not understand its answer. It turned out that my suspicion was correct; the answers initially given by the LLM were almost never right. Starting from Microzig’s sample code, I asked the LLM about the key differences between the Pico 1 and Pico 2, and eventually obtained information from the key RP2350 document.

The LLM had long indicated to me that the answer was in section 5.9.5.1, “Minimum Arm IMAGE_DEF”, but I didn’t pay much attention to it at first. Instead, with the help of the LLM, I started reading the documentation from the beginning to confirm why the boot failed and triggered USB Boot mode. Only at the end did I independently confirm, based on the documentation, that this section is very helpful and should be the content the OP needs. Although the LLM also tried to provide its code fix, I did not adopt it because I had no way to test it.

This experience made me believe that LLMs can indeed help me accomplish things that were originally impossible. At the same time, getting some truly helpful results using LLMs is much more difficult than I imagined and still requires a lot of effort.

1 Like

My preference is mandatory, binary disclosure.

If someone used genai at all, I don’t want to follow the link. If they abstained, then I’m willing to invest part of my precious few remaining hours on this Earth looking at it.

I think looking at LLM-generated output rots one’s brain, and I want to avoid my brain rotting. If it’s hard to avoid brain rot on Ziggit, then I will avoid Ziggit altogether.

1 Like

I am on the fence about agentic coding. In some instances I can see how it can provide a boost to productivity. As you mentioned, Ghostty sees real success with an agentic workflow. However, behind the LLM sits an incredibly talented, proven individual with a history of successful projects. It’s one thing to use an LLM to flesh out an idea or help with boilerplate, but the result needs to be understood. The LLM isn’t going to write code the way you or I would write code[1]. The output almost always needs to be cleaned up and “unsloppified.” I have no problem reviewing LLM-generated code that’s well-understood and specifically curated. What I have a problem with is the result of prompts like “Rewrite X, but in Zig[2].”

I also acknowledge Zig’s international presence and some users rely on LLMs to translate their own words into another language. I have no problem with that either. But anything obviously written by an LLM that is intended for humans to read (project documentation[3], READMEs, blog posts, etc) I find incredibly distasteful.

With all that said, it would be remiss to alienate everything even tangentially related to LLMs as potentially great projects would fall under this category (re: Ghostty). However, I feel that this represents a thin majority of LLM-assisted projects. Due to this, I feel that disclosure is essential, and maybe even required to be posted in an LLM-specific category that I can ignore completely.


  1. I believe this is true today, but the future is uncertain. ↩︎

  2. I’ve done this, and the result is absolutely atrocious. ↩︎

  3. I really enjoyed reading release notes for 0.16.0. The effort put into handcrafting them does not go unnoticed! ↩︎

1 Like