The moderators have been actively discussing the LLM question. One point of consensus was to open this thread.
Ziggit has always been easy to moderate. It’s a small community, enthusiastic about its topic, and well-behaved in general. It’s like this for one reason more than any other: we have a well-defined, narrow topic, and no room for off-topic discussions. If it’s not about Zig, it doesn’t belong here.
We’ve only made one major addition to the moderation policy: the AI policy. It’s time to revisit it.
Up front, we condense the policy to a slogan:
Ziggit is by and for humans. We don’t want AI participation.
We stand by this. As the rest of the policy (hopefully) makes clear, “participation” means things like letting a chatbot write posts or answering usage questions with LLM output, none of which we want.
We tried to write the policy on AI-generated code with some breadth of discretion: at the time, it was already clear that LLM assistance was widespread, and, also at that time, the quality of purely ‘vibecoded’ generative-AI code was inferior, particularly where Zig is concerned.
The policy dates from the very infancy of “agents”, and it has stopped reflecting how we moderate in practice.
We know that we have users who are staunchly opposed to the mere existence of LLMs, let alone the use of them. We want those users to be comfortable here. There are reasons to feel that way, some on-topic for the forum, many not; in this thread we consider more of that on-topic than usual.
Yet we have consensus amongst ourselves that Ziggit will not have a zero-tolerance policy around LLMs, agents, and so on. We will also insist on our bedrock policy, in place since the beginning: that users treat each other well. We’ve informally been more tolerant of mean-spirited or insulting comments in this area than we feel we should be, and that won’t continue going forward.
So we want to be clear: there’s a place for threads about agentic coding (of Zig), for Showcases which feature agent-directed code, and things of that nature on Ziggit. We don’t want this to overwhelm the topics we already have, and we don’t want to drive off users who would prefer none of this is happening. “Ziggit is by and for humans” remains primary.
In the interest of personal disclosure: while I’ve been using chatbots as a sort of all-purpose question machine for about a year, about six weeks ago I started using Codex to do full-on agentic coding. I don’t intend to stop. It’s a remarkable experience, and well worth discussing.
As another indicator of this moment in history, mitchellh is using agents to find and fix hard bugs in Ghostty. This is something which is here to stay, and that’s a mixed bag, but a policy which would preclude Showcasing Ghostty is not what we intend to offer.
One possibility we’re considering is making agents a topic of their own. That doesn’t taboo the mention and discussion of them anywhere which is topical, but it would be the place for discussions where agents (and Zig!) are the main event, so to speak, and somewhere we can move digressions in an agent-related direction.
One of the mods suggested a disclosure policy for Showcases, and another thinks it’s a good idea. Two more were lightly inclined against it, one of whom has moved toward ‘unsure’, and one abstained from any comment. That leaves me: I don’t think this is a good policy, for three reasons.
First, it’s not clear what “used an LLM” is supposed to mean. Asked a chatbot one question? Generated a few tests, fixed a bug or two? Is ‘used an agent’ the threshold, or is the label to be reserved for pure vibecoding? Second, users being coy or dishonest about this question is not a problem we’ve actually seen, and third, we’ve seen a growing trend of Showcase posters proactively describing the degree of involvement of LLM tech in their code, so making that into policy may actually be counterproductive.
We decided to solicit community feedback on that as well. We’re not putting it to a vote; we want to know what you think.
Now we come to the deciding reason this thread exists: slop.
Slop is Real
One thing LLMs have made possible, which basically wasn’t possible before, is the creation of code which just kind of exists. The human responsible doesn’t really know what’s in there, the README often promises a bunch of stuff which doesn’t exist at all, and in general it gives off an unpleasant aroma which I can only describe as: slop.
This is stressing the moderators out. Mostly because it exists, and we don’t want to look at it, but also because it universally gets flagged, and has served as an invitation for users to express themselves in ways we don’t want happening here.
Calling the work of other users ‘slop’ is itself an example of something we can’t permit; there’s no “it’s true” exemption for that. There are ways to address this which don’t amount to name-calling: we want you to be more specific, creative, and kinder about it.
Here’s a real example:
Seems to segfault on wayland. The heavy LLM looking codebase does not really boost my confidence much either.
Here’s a made up / paraphrased one:
This README seems to be AI-generated, which isn’t a good sign. How much of what it claims does your program actually do? We’ve had bad experiences with that in the past.
Specific, critical, actionable, and not insulting.
But ‘slop’ isn’t an obscenity either, and we’re not going to make up a euphemism which means the same thing.
Slop does share something with obscenity: it’s hard to define, but you know it when you see it. Recent slop-threads have turned out to be teachable moments, which is not all bad.
Be that as it may: our policy against chatbot spam, along with a certain amount of cultural maturity, has noticeably reduced the amount of it we have to see. We’re considering adding a “no slop” policy to the house rules, but this will call for responsibility on everyone’s part. We don’t want slop-suspected Showcases getting flagged, we can’t have you accusing others of slop-purveying, and the moderators are the final arbiters of how to apply the policy, on a case-by-case basis.
I want people to stand by their code. In using agents, I’ve found them extremely capable of implementation: but getting a final result which I am willing to stand by takes most of the effort, and is truly stretching my abilities to the limit. We don’t want to see agent-slop which the poster can’t explain, justify, and stand by.
If a beginner posts Zig code, it’s not always good. No one minds that! We’re all happy to give feedback on how to make it better. But it’s not the same with a pile of incomprehensible LLM output, and Ziggit isn’t going to be home base for learning how to use agents: the problem isn’t Zig related and we don’t want to keep fixing it.
So an explicit pre-filter could help: warning posters to run a vibe check on what they’ve done, and not post it until they’re confident we won’t judge it to be slop. This could make a real difference for us as moderators, and for y’all as well.
Loris Cro’s motto is applicable here: “write software you can love”. That’s compatible with agents (have you fallen out of love with Ghostty? not me), but it’s not compatible with slop.
So this is the thread where we talk it all out. Things we’re considering: an Agent / LLM topic, some policy around disclosure (tagging?), and a “no slop” rule.