On this forum, if I can tell it was written by AI, it’s unwelcome. Period.
If I wanted an AI response, I would have asked the AI myself. I asked on the forum precisely because I wanted a human from the community to respond irrespective of the level of the question. AI posts break that community and should be verboten.
As for a project, that’s a very different ball game. I have absolutely used AI as an “uber-macro” that autocompletes a ton of boilerplate functions, for example. Programming has lots of boilerplate-like things (see: Vulkan) that AI seems to be pretty good at filling out. Projects should not be verboten simply because they used AI.
However, the point of showcasing a project is the project and not how you used AI. I personally don’t care how you used AI to do the project just like I’m not interested in whether you used Windows/macOS/Linux. That’s a different discussion and there are PLENTY of forums where that is welcome to be discussed. Put links to your AI forum discussion about your “Uber-133t AI Skilz” in your README.md or on your blog and people can talk about it somewhere else if they truly want to.
In my opinion, this is the line we should draw about projects like Ghostty. Featuring it. Fine. Talking about the Zig and tech in it. Fine. Discussion about AI used to make it? Different forum please. We’re not Facebook/TikTok/etc.; we don’t need 150% engagement and should have no fear about links to outside resources.
(To be fair: I am interested in the tools that people like Mitchell Hashimoto use. However, I can find his discussions and opinions about AI tools everywhere. We don’t need to pollute this forum with that discussion. Once the AI hype dies out, perhaps this could be revisited.)
Many members here indeed proclaim a noble stance against LLM use, but I can’t help but question whether as many as half of them could differentiate an artisanal, hand-crafted, beautiful codebase like ghostty from a terminally brain-damaged, AI-slopped hellscape of a codebase like ghostty.
We’ve already reached the point where many people can’t tell right away (bar exceptions) whether a piece of software was AI-Enhanced. This is not only because the degree of AI-Enhancement is a blurry line (write `pub fn serialize` on a struct and hit tab) but also because the reader’s own bias will most likely affect the judgement as well.
Bad code → likely LLM slop
User I don’t trust → likely LLM slop
I don’t understand this code → likely LLM slop
I don’t recognize that person → likely LLM slop
This internal bias will be very hard to overcome for every person involved.
Earlier in this thread the creator of the zig language stated that he will try his best not to ever read a single line of AI-generated code again, for the sake of his mental health. The two biggest zig projects I know of, bun and ghostty, are relying heavily on AI, while tigerbeetle is neither publicly abstaining nor proclaiming usage.
While typing this out I realize that I don’t actually have anything meaningful to say.
Let me just close with a statement that I think many can agree on: I dislike the current situation in the programming world in its entirety.
Looks like a solid public abstain: “In fact, everything at TigerBeetle is handcrafted.” Hm, maybe “handcrafted” is a better category/tag name than “handmade”, since the latter is also the name of an in-shambles community.
Thanks for the link. A very encouraging sign that LLMs are not as unavoidable as often claimed, and that you can still produce widely-used software the way it has been done for the last 40 years.
My two cents regarding tagging AI/non-AI and people generally respecting them: it is easier to lie passively (by omission) than it is to lie actively.
In that sense, it would be psychologically harder to put a “handmade” tag on an AI-assisted project (actively lying) than to forget to put an “AI-assisted” tag on an AI-assisted project (passively lying).
I think zig is known for its deliberate use of friction to steer programmers’ behaviour towards intended use … this could be such a case.
This makes more sense to me after having read this link. I didn’t have the context to fully understand what you meant before I read it. I might be naive, but I think there are ways to create and use language models that are non-fascist. It does seem like most of the ways that they are being created and used currently are not good, though.
It’s absolutely relevant, if only to illustrate that many of us see AI as very political and explain why “keep your politics out of our AI discussion” is not always helpful or clear.
While people’s politics certainly can inform and shape how they see AI and its usage, this is a discussion of “how do we create an open community, for humans, by humans, that allows those humans to focus on excellence in their work, while accepting that many people use AI as a tool effectively”. The mods have been pretty lenient in this topic because we understand the strong feelings about AI. However, we also have to be careful not to spark off-topic debates that stunt the actual benefit of the open discussion.
I think the experience for new community members is important, and it saddens me to see new members hit with a wall of hate when they may not be aware of the norms. By definition, the “debut post” is their first one, and their first introduction to the norms.
Showcases are a time of vulnerability: the user has put some work into something and they are seeking validation. Being hit with “go away” is not a nice experience.
I think a “debut” post could come earlier in a project and be lower stakes (the help category), and being corrected on norms during a “help” post (perhaps on the first day of a project) is less damaging. Restricting showcase to higher trust levels would directly move us toward lower-stakes corrections for new users.
Ziggit is one of the few places that has been a refuge from the onslaught of AI generated content. If people want a place to talk about AI there are a zillion different places to do that.
I am here to assist people and learn from people, not machines. If Ziggit becomes a place where I cannot easily avoid LLM-generated content, then sadly Ziggit will not be a place for me.
This is not a case of Luddism, as some seem to be pushing in this thread. Even ignoring all of the very valid copyright, licensing, and ethical concerns, it is simply a choice: I want to spend my time learning and advancing the craft of coding in Zig. Wasting my time on LLM-generated code does not meet that goal.
I do not understand how the proposed “handcrafted” tag helps in the showcases. I want to easily opt out of all AI/LLM-generated content, not have to opt in to “authentic” content.
I prefer writing code by hand and reading human-generated content on ziggit. At the same time, I am trying to get ready for a world in which transpiling big projects to zig becomes possible. Say at some point in the next 3 years someone translates chromium, godot, llvm, the linux kernel, and all other codebases to readable zig; then I am genuinely curious to see what our reaction will be. (I am not trolling, by the way.) I don’t know if it will happen. I don’t know how I will react if it does. One normal response to what I said is “this will never happen, and if it does I don’t want to look at or use this code”. I might choose this answer, or not; I am not sure, but I think about it a lot. It might become weird.
Unfortunately Pandora’s box has been opened, and I’m unsure how successful we, as humanity, will be at mitigating the consequences via word of mouth and education (note that this is not a critique of Ziggit, or any forum for that matter). I think it’s unlikely we return to the normalcy levels of the pre-LLM boom, as abnormal as they were. If there is a way of getting as close as possible to them, my guess is the best bet would be legislation, but I think it’s impossible to fully eradicate it: far too many people in many different industries have become unbelievably dependent on it.
Sorry if that was off-topic. To give my admittedly already-expressed two cents on the more on-topic question: if I notice someone’s post is visibly AI-written (with the exception of those who use it for translation, but you can usually tell those apart), I automatically assume they have negative respect for the readers of their content, and I, in turn, lack respect for them. As a side note, I find it funny how closely the term “AI slop” as an insult matches what parts of the reverse engineering community call “pasters”; it’s practically the same thing.
(edit: this was meant to be a reply to someone’s post but Discourse messed something up. Point still stands independently though)
Exactly this. Introducing a new tag for what used to be the ground reality is a signal of a massive paradigm shift, and in my mind a very strong indicator of direction.
I love the community here. People are kind, respectful, deeply knowledgeable, helpful, and their posts are unusually high-quality. I’ve learned a lot, I enjoy my time here. I see this as a sign of good stewardship. Whatever you (the moderators) think we need to do to keep it that way has my full support.
I like the idea of a “handcrafted” tag to mean “a human wrote every character of this” and an “AI Free Design” tag to mean “no LLM was consulted for information; any mistakes or slop are my own”. You can use one or the other, or both if you abstain entirely. I think those two are orthogonal issues.
As a software development tool (and not a societal issue) I don’t have a problem with LLM-powered tab-completions or agents writing boilerplate, but I have huge problems with people outsourcing their own brains. I do share a lot of the negative sentiments around AI in general that have been expressed in the thread, though, and would be in favor of a ban if I thought it was enforceable.
I think that’s too much. It’s really hard to draw the line, and I don’t think using AI automatically entails bad software or slop. For example, someone using chatgpt to clarify some nuance about an api is not the same as someone telling claude "Your task is to implement X, don't make mistakes".
In the same vein, I’ve found a lot of success at work using opencode with gpt-oss-120b, asking it to review my code and be a second pair of eyes on every commit. Does this remove my ability to say that I handmade the code? If someone is using AI to enhance test coverage, is it also not handmade anymore? Does tab completion of a simple for loop matter? Those are real questions, and honestly I don’t think we will find an answer that satisfies everyone.
I think if we go with the AI tag, it should be about disclosure of any AI usage beyond using AI as a better man-page. Basically, when agentic workflows were involved, the AI tag should be used.
I think each and every one of us has at least used some form of local or cloud LLM as basically a better google/stackoverflow, especially since google has become unusable.