Ziggit and Large Language Models

The moderators have been actively discussing the LLM question. One point of consensus was to open this thread.

Ziggit has always been easy to moderate. It’s a small community, enthusiastic about its topic, and well-behaved in general. It’s like this for one reason more than any other: we have a well-defined and narrow topic, with no place for off-topic discussion. If it’s not about Zig, it doesn’t belong here.

We’ve only made one major addition to the moderation policy: the AI policy. It’s time to revisit it.

Up front, we condense the policy to a slogan:

Ziggit is by and for humans. We don’t want AI participation.

We stand by this. As the rest of the policy (hopefully) makes clear, “participation” means things like letting a chatbot write posts or answering usage questions with LLM output, none of which we want.

We tried to write the policy on AI generated code with some breadth of discretion: at the time, it was already clear that using LLM assistance was widespread, and, also at that time, the quality of purely ‘vibecoded’ generative AI code was inferior, particularly where Zig is concerned.

The policy comes from the very infancy of “agents”, and it’s stopped reflecting how we moderate in practice.

We know that we have users who are staunchly opposed to the mere existence of LLMs, let alone the use of them. We want those users to be comfortable here. There are reasons to feel that way, some on-topic for the forum, many not: in this thread we consider more of that on topic than usual.

Yet we have consensus amongst ourselves that Ziggit will not have a zero tolerance policy around LLMs, agents, and so on. We will also insist on our bedrock policy, in place since the beginning: that users treat each other well. We’ve informally been more tolerant of mean-spirited or insulting comments in this area than we feel we should be, and that won’t continue going forward.

So we want to be clear: there’s a place for threads about agentic coding (of Zig), for Showcases which feature agent-directed code, and things of that nature on Ziggit. We don’t want this to overwhelm the topics we already have, and we don’t want to drive off users who would prefer none of this is happening. “Ziggit is by and for humans” remains primary.

In the interest of personal disclosure: while I’ve been using chatbots as a sort of all-purpose question machine for about a year, about six weeks ago I started using Codex to do full-on agentic coding. I don’t intend to stop. It’s a remarkable experience, and well worth discussing.

As another indicator of this moment in history, mitchellh is using agents to find and fix hard bugs in Ghostty. This is something which is here to stay, and that’s a mixed bag, but a policy which would preclude Showcasing Ghostty is not what we intend to offer.

One possibility we’re considering is making agents a topic of their own. That doesn’t taboo the mention and discussion of them anywhere which is topical, but it would be the place for discussions where agents (and Zig!) are the main event, so to speak, and somewhere we can move digressions in an agent-related direction.

One of the mods suggested a disclosure policy for Showcases, another thinks it’s a good idea. Two more were lightly inclined against it, one of whom has moved toward ‘unsure’, and one abstained from any comment. That leaves me: I don’t think this is a good policy, for three reasons.

First, it’s not clear what “used an LLM” is supposed to mean. Asked a chatbot one question? Generated a few tests, fixed a bug or two? Is ‘used an agent’ the threshold, or is the label to be reserved for pure vibecoding? Second, users being coy or dishonest about this question is not a problem we’ve actually seen, and third, we’ve seen a growing trend of Showcase posters proactively describing the degree of involvement of LLM tech in their code, so making that into policy may actually be counterproductive.

We decided to solicit community feedback on that as well. We’re not putting it to a vote; we want to know what you think.

Now we come to the deciding reason this thread exists: slop.

Slop is Real

One thing LLMs have made possible, which basically wasn’t possible before, is the creation of code which just kind of exists. The human responsible doesn’t really know what’s in there, often the README promises a bunch of stuff which doesn’t exist at all, and in general it gives off an unpleasant aroma which I can only describe as: slop.

This is stressing the moderators out. Mostly because it exists, and we don’t want to look at it, but also because it universally gets flagged, and has served as an invitation for users to express themselves in ways we don’t want happening here.

Calling the work of other users ‘slop’ is itself an example of something we can’t permit; there’s no “it’s true” exemption for that. There are ways to address this which don’t amount to name-calling: we want you to be more specific, creative, and kinder about it.

Here’s a real example:

Seems to segfault on wayland. The heavy LLM looking codebase does not really boost my confidence much either.

Here’s a made up / paraphrased one:

This README seems to be AI-generated, which isn’t a good sign. How much of what it claims does your program actually do? We’ve had bad experiences with that in the past.

Specific, critical, actionable, and not insulting.

But ‘slop’ isn’t an obscenity either, and we’re not going to make up a euphemism which means the same thing.

Slop does share something with obscenity: it’s hard to define, but you know it when you see it. Recent slop-threads have turned out to be teachable moments, which is not all bad.

Be that as it may: our policy against chatbot spam has, along with a certain amount of cultural maturity, noticeably reduced the amount of it which we have to see. We’re considering adding a “no slop” policy to the house rules, but this will call for responsibility on everyone’s part. We don’t want slop-suspected Showcases getting flagged, we can’t have you accusing others of slop-purveying, and the moderators are the final arbiters of how to apply the policy, on a case-by-case basis.

I want people to stand by their code. In using agents, I’ve found them extremely capable of implementation: but getting a final result which I am willing to stand by takes most of the effort, and is truly stretching my abilities to the limit. We don’t want to see agent-slop which the poster can’t explain, justify, and stand by.

If a beginner posts Zig code, it’s not always good. No one minds that! We’re all happy to give feedback on how to make it better. But it’s not the same with a pile of incomprehensible LLM output, and Ziggit isn’t going to be home base for learning how to use agents: the problem isn’t Zig related and we don’t want to keep fixing it.

So just having an explicit pre-filter, warning posters to run a vibe check on what they’ve done, and not post it until they feel we won’t judge it to be slop: this could make a real difference for us as moderators, and y’all as well.

Loris Cro’s motto is applicable here: “write software you can love”. That’s compatible with agents (have you fallen out of love with Ghostty? not me), but it’s not compatible with slop.

So this is the thread where we talk it all out. Things we’re considering: an Agent/LLM topic, some policy around disclosure (tagging?), and “no slop”.

31 Likes

I’m personally fine with LLM-assisted projects as long as it is properly disclosed. If there is no proper disclosure, it feels like interacting with a snake oil salesman.

I at least expect the project owner to know what was generated and where. They should be honest about what they know and what they don’t. I also expect the project owner to understand the code that was generated.

As for Ziggit: I don’t ever want to read some copy-paste from an LLM, unless there is a good reason, like a language barrier. Otherwise it’s just rude.

Personally, I find LLMs very interesting for “smarter fuzzing”. As long as you have ground truth, you can leave the LLM to generate outputs until one fails the ground truth. There are valid uses for LLMs, just like for any tool. It is up to us humans to decide how we use them.
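The “smarter fuzzing” loop described here can be sketched in a few lines. This is a minimal illustrative sketch, not code from the thread: `llm_generate_candidate` is a hypothetical stand-in for a real model call (here it just uses a PRNG), and the implementation and oracle are toy functions.

```python
import random


def llm_generate_candidate(seed: int) -> list[int]:
    # HYPOTHETICAL stand-in for an LLM call. In practice you would
    # prompt a model for unusual-but-valid inputs rather than use a PRNG.
    rng = random.Random(seed)
    return [rng.randint(-100, 100) for _ in range(rng.randint(0, 10))]


def implementation_under_test(xs: list[int]) -> list[int]:
    # The code whose behavior you don't fully trust.
    return sorted(xs)


def ground_truth(xs: list[int]) -> list[int]:
    # A trusted oracle: a reference implementation, a spec checker, etc.
    return sorted(xs)


def fuzz(rounds: int = 200) -> list[list[int]]:
    # Collect every generated case where the implementation disagrees
    # with the oracle; an empty list means no mismatch was found.
    return [
        case
        for case in (llm_generate_candidate(seed) for seed in range(rounds))
        if implementation_under_test(case) != ground_truth(case)
    ]
```

The point of the design is that the oracle only has to *recognize* correct output; the generator (the LLM, in this idea) is free to be creative and even wrong, since every candidate is checked against ground truth before it counts as a finding.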

(Though there are ethical/moral sides to how LLMs are produced and who can actually produce them, as well as the potential for power over what goes into the generated output, so it is also up to each individual what they think about the human/moral side of the technology.)

19 Likes

I think disclosure is a complicated topic. I’m an experienced programmer, and yet I’m using coding agents, not to delegate my thinking to them, but to help me cope with my fragmented time available for programming. I stand by every line of the code; even though many of them are written by the agent, it’s still under my full supervision. Having it in the same category as vibe coding is degrading, in my opinion, but I can live with it.

15 Likes

In the interest of clarity, this is already against policy, and we don’t intend to change that at all.

13 Likes

This would be fine. What about a couple of tags, like “AI-used” (as opposed to “AI”, which might just mean that the topic is AI-centric) and “vibe-coded” (which we probably don’t want to promote, but… at least it would help differentiate AI use from vibe coding, which @lalinsky mentioned).

This would allow the extra-sensitive to filter out “all that stuff” if they want. It would require good citizenship, though a mod could add the tag, if they perceived the need.

Anyway… 2 cents.

5 Likes

LLMs have helped me a lot. They have helped me understand many areas that I had never understood before and have provided me with a lot of references.
Because of this, I understand how difficult it is to provide helpful content based on an LLM’s responses. Directly copying and pasting an LLM’s replies is not helpful; it requires a lot of thinking and repeatedly questioning every detail to reach conclusions that are not misleading.

7 Likes

I should probably add, to anticipate the question, “how do I know if I should use such an ‘AI-used’ tag?” - as mentioned above, “asked a chatbot one question”…? Well, I’d say: don’t overthink it. You’re smart. If you think it would help to have a little disclosure in the form of setting that tag, then do it. If you just asked a chatbot a question or two, but hand-wrote all the real code, then probably not. Only if you think others would raise an eyebrow and you don’t want to have to ‘fess up 10 posts later.

3 Likes

This is an example of me using an LLM to help others within the site:

In fact, I shouldn’t have any ability to help him at all, because I don’t even have the Pico 2 device. I try to help simply because I hope the other person can get some assistance on the platform, and perhaps something can be achieved with the help of an LLM.

When the LLM initially received the question, it was very confident in providing its own answer and fix for the code, but I did not believe it at all, and I could not understand its answer. It turned out that my suspicion was correct; the answers initially given by the LLM were almost never right. Starting from Microzig’s sample code, I asked the LLM about the key differences between Pico 1 and Pico 2, and eventually obtained information from the key RP2350 document.

The LLM had long indicated to me that the answer was in “5.9.5.1. Minimum Arm IMAGE_DEF”, but I didn’t pay much attention to it at first. Instead, with the help of the LLM, I started reading the documentation from the beginning to confirm why the boot failed and triggered USB Boot mode. Only at the end did I independently confirm, based on the documentation, that this chapter is very helpful and should be the content the OP needs. Although the LLM also tried to provide its code fix, I did not adopt it because I didn’t have the conditions to test it.

This experience made me believe that LLMs can indeed help me accomplish things that were originally impossible. At the same time, getting some truly helpful results using LLMs is much more difficult than I imagined and still requires a lot of effort.

5 Likes

My preference is mandatory, binary disclosure.

If someone used genai at all, I don’t want to follow the link. If they abstained, then I’m willing to invest part of my precious few remaining hours on this Earth looking at it.

I think looking at LLM generated output rots one’s brains and I want to avoid my brain rotting. If it’s hard to avoid brain rot on Ziggit then I will avoid Ziggit altogether.

54 Likes

I am on the fence about agentic coding. In some instances I can see how it can provide a boost to productivity. As you mentioned, Ghostty sees real success with an agentic workflow. However, behind the LLM sits an incredibly talented, proven individual with a history of successful projects. It’s one thing to use an LLM to flesh out an idea or help with boilerplate, but the result needs to be understood. The LLM isn’t going to write code the way you or I would write code[1]. The output almost always needs to be cleaned up and “unsloppified.” I have no problem reviewing LLM-generated code that’s well-understood and specifically curated. What I have a problem with is the result of prompts like “Rewrite X, but in Zig[2].”

I also acknowledge Zig’s international presence and some users rely on LLMs to translate their own words into another language. I have no problem with that either. But anything obviously written by an LLM that is intended for humans to read (project documentation[3], READMEs, blog posts, etc) I find incredibly distasteful.

With all that said, it would be remiss to alienate everything even tangentially related to LLMs, as potentially great projects would fall under this category (re: Ghostty). However, I feel that these represent only a thin minority of LLM-assisted projects. Due to this, I feel that disclosure is essential, and maybe even required to be posted in an LLM-specific category that I can ignore completely.


  1. I believe this is true today, but the future is uncertain. ↩︎

  2. I’ve done this, and the result is absolutely atrocious. ↩︎

  3. I really enjoyed reading release notes for 0.16.0. The effort put into handcrafting them does not go unnoticed! ↩︎

3 Likes

I don’t think policing words like “slop” is an effective moderation strategy (but I’m not a moderator, never have been, hopefully never will be). I agree the definition of “slop” is a grey area, but I think we all can immediately notice slop when we see it.

If one can’t put in the effort required to think for oneself by writing the code, why should I mince my words? Hell, the LLM wrote it - why would someone be defensive or offended if it’s called slop if it’s not even their own intellectual creation?

4 Likes

First off, thank you to @mnemnion and the rest of the moderation team. This is one of my favorite forums to visit, primarily because it has been so immune to the AI slop that has invaded its competitors.

Secondly, I appreciate the open-mindedness to take a hard stance on a subject while being humble enough to reevaluate later on. I’ve been on a similar journey in my software development career. I was resistant at first, but as I heard anecdotes from Mitchell Hashimoto, I began to give it a shot. Using a similar setup to his, I had an agent reimplement one of my Zig projects and, while it did need a lot of help cleaning up its slop, it also had new approaches to solving problems (I’ll link an example when it’s ready for prime time). I was reluctant to share this fact because of the anti-LLM sentiment, but now that you have made this post, I feel like it’s less taboo to admit to using agents to produce Zig code.

I still appreciate the accountability for slop. I don’t want to be spreading heaps of poor-quality code for the community to sift through.

5 Likes

There’s a use-mention distinction here. As I said, we have no plans to taboo the word, that would be silly: moderating you because you just talked about AI slop would be insane.

But we will continue to moderate feedback like “your program is shit”, and use of slop which pattern-matches that will also be moderated.

For reasons the FAQ addresses in several places. Those policies are as old as Ziggit and aren’t negotiable.

3 Likes

Sorry, I should have worded this differently. It came off as saying “you shouldn’t put a blanket ban on the word ‘slop’”, but the intended tone is something along the lines of “not allowing the criticism of low-effort LLM output (a.k.a. ‘slop’), in my opinion, isn’t effective”.

I have no issue with the instated policies, and y’all do a good job upholding them.

2 Likes

Since LLMs are an unavoidable topic these days, it’s a good and important thing to discuss this stuff in a thread.

LLMs are a very ambiguous thing. Yes, they’re here and won’t go anywhere soon. And especially in the context of company work it might be necessary to use them in some cases. I also understand why projects like Ghostty make use of them. In general, if experienced programmers take advantage of some of those features, it can surely produce quality code (though that’s not a given).

On the other hand, forums/websites/repos are spammed with bullshit apps/libs nobody will ever maintain in the long run. Plus there’s the often negative impact they have on our political and social life. Let alone all the waste of resources and energy, and the harm to sustainability.

While this is all very general (sorry, had to write this down :smiley: ), the thing for Ziggit is the following (totally personal):

I understand Ziggit, as already mentioned, as a place “for humans, by humans”. Thus, for me it should be a place where users/Ziggies discuss topics/problems in their codebase, ask for help to enhance code, etc., as I understand writing code as a kind of craft where the creator puts in some energy. But none of that applies to LLM-generated code in my eyes, even if it’s in a personal project. Ziggit seems like a place for people who enjoy writing code in Zig, and that’s not the case if it’s done by an LLM. Thus, I don’t want to spend my time reading about such code, even if it’s well maintained. Here I fully agree with Andrew:

My opinion might be unpopular and it’s not meant to be insulting to anyone :slight_smile: . If there is a need to discuss LLM stuff, I would prefer a separate thread which I can avoid without much effort. However, so far one of my most loved things about Ziggit is that LLMs are such a minor topic here…

13 Likes

This:

Is very much intended to thread that needle. Showcases don’t need to be a hugbox, they do need to be civil.

I don’t think the parameters here are new. Say you see some code which sucks. Is it ok to say “yo your code sucks”? You know the answer: no, it isn’t.

Is it ok to describe how it sucks, in various ways, and leave no doubt you’re not impressed? Yeah but, probably don’t need “sucks” in there, do you. Point gets made.

Literally just saying “this looks like low-effort LLM output to me” strikes me as a bit low-effort in its own right, but I see no cause for moderation in that statement. I think it’s better to complete the paragraph, starting with “comma, because”.

A noun which serves only to disparage is an insult. Slop serves that purpose, and insulting people, or their work, is not ok here.

5 Likes

I don’t want to look at / review code written by an LLM. For me code is a distillation of human experience, knowledge and creativity. I just don’t care to review code synthesized / stolen by a machine from other humans.

I would rather not read text written by an LLM, but I can see how an LLM might help somebody whose native tongue is not English. Disclosure would be a matter of courtesy for me.

Cheers!

10 Likes

I’m with Linus Torvalds on this:

Thinking LLMs are ‘just another tool’ is to say effectively that the kernel is immune from this.
Which seems to me a silly position.

No. Your position is the silly one.

There is zero point in talking about AI slop. That’s just plain stupid.

Why? Because the AI slop people aren’t going to document their patches as such. That’s such an obvious truism that I don’t understand why anybody even brings up AI slop.

So stop this idiocy.

The documentation is for good actors, and pretending anything else is pointless posturing.

As I said in private elsewhere, I do not want any kernel development documentation to be some AI statement. We have enough people on both sides of the “sky is falling” and “it’s going to revolutionize software engineering”, I don’t want some kernel development docs to take either stance.

It’s why I strongly want this to be that “just a tool” statement.

And the AI slop issue is NOT going to be solved with documentation, and anybody who thinks it is either just naive, or wants to “make a statement”.

Neither of which is a good reason for documentation.

Nobody’s going to advertise their project is AI slop. Even if the person is trying to be honest, it’s hard to qualify, like @mnemnion mentioned here:

And, mind you, we have always had human slop, so you can’t be sure a person didn’t reveal their use of AI; it could be they’re just bad programmers.
A rule about AI disclaimers would be both non-enforceable and poorly defined.
I don’t think it’s a good filter to decide what’s worth looking into, either. We are starting to see good AI generated code, and we still have bad human-made code.

7 Likes

Speaking of… a certain dude was looking for a site generator that wasn’t vibe coded, and just found one

1 Like

I just wrote this post in the other thread, but reading through this topic I am not sure it is within what is accepted here. In that case, I apologize and will delete it.

I agree with Andrew in that AI poses a huge risk of tanking the average critical thinking/problem-solving skill of humankind as a whole. So I use it very rarely. But I also see that it can be used as a helpful tool.

I vote for a disclosure policy, where no comment regarding AI means “no AI”. For everything else I would want the programmer to briefly describe (maybe just a few key words are enough), how they used AI.

While I don’t draw the line as early as Andrew at the moment, I think it is only fair for someone who used AI to save time to spend a fraction of that saved time giving every reader a chance to make an informed decision about how much of their time they want to spend on the issue/topic.

At work I have had a few occasions where colleagues asked me to debug “their” code, and only after they could not answer some questions did they tell me they had used some AI tool to generate it.
This was extremely insulting to me.

6 Likes