There’s a lot here, but I want to push back on this assumption. I have never sought out an LLM for anything. Documentation, idea generation, code writing, prose writing; anything. I don’t think I’m the only one.
“Each and everyone of us” is absolutely wrong. Percentage matters but it’s not 0% and it’s not 100%. And if you assume that absolutely zero of us actually mean what we say and will leave before compromising it then you are mistaken.
I think some of you are making some gigantic assumptions here, and I ask you to reconsider and please represent your side without trying and failing to represent other sides of the argument.
(I mean, you’re right that Google has become unusable, but there are other ways to work with that. For my part, I give Kagi a few bucks a month, less than I give to Zig.)
Genuine question: let’s say I use AI heavily for:
- Investigating large codebases
- Getting feedback on architecture decisions
- Reviewing code I wrote
But let’s say I type out 100% of the code by hand. Would people consider this “handcrafted” or would people think this should be labelled with AI?
He who has never used Google search or Brave search or whatever search in the last few months shall throw the first stone.
What I mean is: it is practically unavoidable to use AI for searching little details, even if you want to avoid it.
Feel free to declare it “unavoidable” if you wish to completely alienate those of us who have chosen to avoid it.
If I tried really hard, I could perhaps avoid it at home (where I’m developing in Zig), and in fact I’m moving in that direction.
But that’s impossible at work, and I can’t avoid using the knowledge I collected at work at home as well.
I’d have to be an eremite.
I’m not calling you an eremite, nor am I going to search for that on my non-Google search engine that has its AI features turned off, I’m just asking you not to tell me what’s unavoidable for me or anybody other than yourself.
I wouldn’t consider that AI. But of course it depends on how much you let the LLM influence your decisions. In the end it’s pretty obvious, just reading through a codebase, whether it is LLM slop or not. For me a disclaimer makes it so that I don’t immediately write the project off if I see LLM involvement, and it establishes some initial trust between you, me, and the project.
Even if a project was LLM slop, if there was a proper disclaimer I would just shrug and move on. I’m personally only annoyed when there is no disclaimer and I feel like I’m being tricked into reading bot-generated content.
I respect your position, and I tend in the same anti-AI direction, but not so extreme. If “never use AI, never!” were a precondition for Ziggit members, then this would soon be a lonely place, with the occasional tumbleweed rolling through the scene.
Again you have made a statement that implies that everybody else here is just like you, in a conversation where we are trying to all communicate where we are each coming from.
This is an interesting idea, but I do think it’s hard to keep a clean-room separation. Consulting anything leads to it influencing your design. That’s the whole point of consulting it.
Also a project could be handcrafted with no design influence from generative models while still having dependencies that do use generative models. Does using such a dependency remove the project’s handcraftedness? Should reading its documentation or source code count as consulting AI? At the end of the day you are reading generative model output. Take LLVM for example.
https://llvm.org/docs/AIToolPolicy.html
I’m going off topic and starting more debates again, but I really think this is important to think about.
For this forum, I think that the handcrafted tag and possibly the “AI free design” tag, or the opposite “AI was consulted” and “AI output was used” tags, are the best solution.
So if I tell it to hide the “Showcase” category and to show the “AI-free-design” tag specifically, I get to see all the really completely handmade things? That sounds nice to me.
I think that the rules are there’s zero AI output on Ziggit itself, but you can link to a project using AI in topics in the Showcase category. Those topics would then have the tags indicating their level of AI usage. There was also the idea of splitting the Showcase category into AI and no AI showcases.
Yeah, I understand that, I was just thinking through how I would use the tags you were discussing and wanted to support that.
I think there is a lot of ambiguity around “AI” though, so if it didn’t include “AI-free-design” then I’d just have to filter out Showcase entirely, I think. That’s all I want: where AI is allowed, I want to be able to opt out 100%.
I understand everybody else has different needs, I just hope that this perspective is supported here.
What about a simple suggestion for people to add extra info in showcase posts, like their background, why they started the project, and what their development process is like? All things that are interesting to humans for human interaction. And yet it serves the same purpose.
For example, I don’t really care if someone used AI or not; I care if the project is high quality and they know what they are doing, or whether it’s someone’s first project. In the former case, I’d be interested in the code to maybe learn something. In the latter case, I’d maybe be interested in the code to teach them something. And whether they used AI or wrote the code themselves matters very little; they are still the human owning the code.
That would be nice in a world where everybody is absolutely honest. But if you look at other communities, e.g. Reddit, almost everybody tries to sell their vibe-coded, fully prompt-engineered BS as a beloved project they put all their blood, sweat, and tears into.
While I think the number of such persons is much lower on Ziggit, it’s still a very (much too?) positive suggestion, at least IMHO. But I hope I’m wrong…
I am here to learn about Zig. I agree the end product can be more important than the method used to create it, for some sorts of software. If my software needs to be secure and customers depend on it, then I think getting AI to review my code for security bugs so I can fix them by hand is a good thing. I’m a financial modeller, not a security guru, and it makes it safer for customers.
For software built for fun or learning or personal challenges, fully handcrafted can make a lot of sense for that individual. And I respect that, for I started as a hobby programmer and only became a professional coder doing it for work after developing skills for fun. This was well before AI or even Stack Overflow. There was hardly any internet when I first wrote C++! When I’m learning Zig or C++, AI has been useful to me, but it needs careful use. I’ve vibe-coded React with Claude and created some slop — but it works OK as a prototype, and I will pass it all to a colleague who will bin it and do it properly.
People are here in this community for different reasons. Our passion should be Zig: the quality of code, and advancing the community, the language, the tools, and their use. But we need to be realistic and accept that many lines of Zig will be created by AI in the future. Somehow we need to accommodate everyone in one way or another if we are to advance and get Zig to reach its potential.
This topic has got me thinking about how I learn and what I look at. Tags and notes will help but ultimately it is the quality of the code that will matter most to me: I’d sooner learn or be inspired by quality code that is AI assisted in some way than look at poor human code. But that is just me.
OK, my bad, sorry for the broad statement. I hope you can forgive me if my statement didn’t include or represent your opinion correctly; that wasn’t my intent.
What I was trying to say is that, looking at Google search, almost 75% of content is created by AI nowadays; even if you refrain from using an LLM directly, you are still influenced by it indirectly.
As such the lines become blurry, because at some point most of the available resources will be from LLMs or influenced by them, in some shape or form.
Therefore I think the practical solution is not to reserve the “handcrafted” tag for projects that use zero AI at all, because that’s not practical.
And this is by no means an attempt to say your opinion and taste don’t matter; they do, and I’m truly trying to hear everyone’s opinion.
At the same time, there’s clearly a spectrum of AI usage. All of us are OK with strictly rejecting vibe coding and keeping it from polluting this forum.
All of us agree that craftsmanship and ownership — standing by your code — is fundamental for good human interaction.
Where we differ in our view is where we draw the line. If I understand correctly, some really don’t want to interact with code that was even minimally written or influenced by an LLM.
That’s fine; I think it’s a reasonable request. Some of us think using an LLM for documentation, as a second pair of eyes, and for small, mechanical, boring tasks is fine.
And maybe some think that this doesn’t matter as long as the project is truthful and understood.
So to me the simplest solution seems to be to push for the AI tag; this way you can easily filter out any post with any AI usage.
Sorry for the broad statement; I wasn’t trying to be dismissive or to silence your opinion. I was trying to say that all of us are bound to be influenced by AI even without using it, because we don’t live in a closed system.
It’s the same argument as when I complain that C++ has too many garbage features, and people answer, “Yeah, but you don’t have to use them; you can use only the magic subset of good features.” Which theoretically is true.
But it’s practically impossible, because if you work on a legacy project you are forced to know all of the features, even the ones you don’t like. The same goes if you want to use the ecosystem; it’s bound to happen that you will have to know and understand each feature if you want to use the language correctly.
So again, I’m not trying to be dismissive, and if you have concerns or feel that I’m being unfair, please correct me.
But in my opinion it’s unavoidable to use LLMs, intentionally or not, and therefore to be influenced by them whether you like it or not, because you can’t control what others do.
So to me the craftsmanship tag would be more confusing than a simple, easy-to-filter-out AI tag.
That’s true. I don’t mean to be insulting, but usually people who vibe-code something and have the guts to try to showcase it and lie about it are not the sharpest tools in the shed. It’s pretty easy to spot, and over time someone like that is bound to give up if no one is engaging, especially if we pay attention and the mods are constantly removing their posts.