jihadjihad 5 hours ago

It's similarly insulting to read your AI-generated pull request. If I see another "dart-on-target" emoji...

You're telling me I need to use 100% of my brain, reasoning power, and time to go over your code, but you didn't feel the need to hold yourself to the same standard?

  • ManuelKiessling 2 hours ago

    Why have the LLMs "learned" to write PRs (and other stuff) this way? This style was definitely not mainstream on Github (or Reddit) pre-LLMs, was it?

    It’s strange how AI style is so easy to spot. If LLMs just follow the style that they encountered most frequently during training, wouldn’t that mean that their style would be especially hard to spot?

    • stephendause an hour ago

      This is total speculation, but my guess is that human reviewers of AI-written text (whether code or natural language) are more likely to think that text with emoji check marks, or dart-targets, or whatever, is correct. (My understanding is that many of these models are fine-tuned by humans who manually review their outputs.) In other words, LLMs were inadvertently trained to seem correct, and a little message that says "Boom! Task complete! How else may I help?" subconsciously leads you to think it's correct.

    • NewsaHackO 39 minutes ago

      I wonder if it's due to emojis being able to express a large amount of information per token. For instance, the bulls-eye emoji is 32 bits in UTF-8, yet often a single token. Also, emojis don't have the language barrier.
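
      A quick sanity check in Python (tiktoken is just one example tokenizer here, and an assumption on my part; the exact token count depends on the model's vocabulary):

          s = "\N{DIRECT HIT}"                 # the bulls-eye emoji, U+1F3AF
          print(len(s.encode("utf-8")))        # 4 bytes = 32 bits in UTF-8
          import tiktoken                      # third-party; pip install tiktoken
          enc = tiktoken.get_encoding("cl100k_base")
          print(len(enc.encode(s)))            # how many tokens the emoji costs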

    • oceanplexian an hour ago

      LLMs write things in a certain style because that's how the base models are fine tuned before being given to the public.

      It's not because they can't write PRs indistinguishable from humans, or can't write code without Emojis. It's because they don't want to freak out the general public so they have essentially poisoned the models to stave off regulation a little bit longer.

    • WesolyKubeczek an hour ago

      You may thank millennial hipsters who used to think emojis were cute, and the proliferation of little JavaScript libraries they authored on your friendly neighborhood githubs.

      Later the cutest of the emojis made their way into templates used by bots and tools, and it exploded like colorful vomit confetti all over the internets.

      When I see this emojiful text, my first association is not with an LLM, but with a lumberjack-bearded hipster wearing thick-framed fake glasses and tight garish clothes, rolling on a segway or an equivalent machine while sipping a soy latte.

      • iknowstuff an hour ago

        This generic comment reads like it's AI-generated, ironically.

        • WesolyKubeczek 18 minutes ago

          It's beneath me to use LLMs to comment on HN.

  • ab_io 4 hours ago

    100%. My team started using graphite.dev, which provides AI generated PR descriptions that are so bloated with useless content that I've learned to just ignore them. The issue is they are doing a kind of reverse inference from the code changes to a human-readable description, which doesn't actually capture the intent behind the changes.

    • collingreen 3 hours ago

      I tell my team that the diff already perfectly describes what changed. The commits and PR are to convey WHY and in what context and what we learned (or should look out for). Putting the "what" in the thing meant for the "why" is using the tools incorrectly.

      • kyleee 2 hours ago

        Yes, that's the hard thing about having a "what changed" section in the PR template. I agree with you, but I generally put a very condensed summary of what changed to fulfill the PR template's expectations. Not the worst compromise.
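
        As a sketch, one shape for that compromise (the headings are invented for illustration, not any tool's standard):

            ## Why
            Context, intent, alternatives tried, what to watch out for.

            ## What changed
            One or two condensed lines; the diff has the rest.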

  • mikepurvis 4 hours ago

    I would never put up a copilot PR for colleague review without fully reviewing it myself first. But once that’s done, why not?

    • goostavos 4 hours ago

      It destroys the value of code review and wastes the reviewer's time.

      Code review is one of the places where experience is transferred. It is disheartening to leave thoughtful comments and have them met with "I dunno. I just had [AI] do it."

      If all you do is 'review' the output of your prompting before cutting a CR, I'd prefer you just send the prompt.

      • unglaublich 3 hours ago

        Maybe we should enforce that users bundle the prompting with their PRs.

        • JonChesterfield an hour ago

          In the beginning, there was the binary, and it was difficult to change.

          Then the golden age of ascii encoded source, where all was easy to change.

          Now we've forgotten that lesson and changed to ascii encoded binary.

          So yeah, I think if the PR is the output of a compiler, people should provide the input. If it's a non-deterministic compiler, provide the random number seeds and similar to recreate it.
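
          As a sketch, a provenance stanza in the PR body could carry that input (the field names are invented for illustration; not every API or tool exposes a reproducible seed):

              Model: <name and exact version>
              Prompt(s): attached verbatim, or linked
              Sampling: temperature and seed, where the tooling exposes them
              Human pass: what was checked or edited by hand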

      • danudey 2 hours ago

        > I'd prefer you just send the prompt

        Makes it a lot easier to ignore, at the very least.

      • ok_dad 4 hours ago

        > Code review is one of the places where experience is transferred.

        Almost nobody uses it for that today, unfortunately, and code reviews in both directions are probably where the vast majority of learning software development comes from. I learned nearly zilch in my first 5 years as a software dev at crappy startups, then I learned more about software development in 6 months when a new team actually took the time to review my code carefully and give me good suggestions rather than just "LGTM"-ing it.

        • JohnFen 2 hours ago

          I agree. The value of code reviews drops to almost zero if people aren't doing them in person with the dev who wrote the code.

          • ok_dad an hour ago

            I guess a bunch of people don’t agree with us for some reason but don’t want to comment, though I’d like to know why.

          • kibwen an hour ago

            This doesn't deserve to be downvoted. Above all else, code review is the moment for pair programming. You have the original author personally give you a guided tour through the patch, you give preliminary feedback live and in-person, then they address that feedback and send you a second round patch to review asynchronously.

      • CjHuber 4 hours ago

        I mean, I totally get what you are saying about pull requests that are secretly AI-generated.

        But otherwise, writing code with LLMs is more than just the prompt. You have to feed it the right context, maybe discuss things with it first so it gets it, and then you iterate with it.

        So if someone has done the effort and verified the result like it's their own code, and if it actually works like they intended, what's wrong with sending a PR?

        I mean, if you then find something to improve while doing the review, it's still very useful to say so. If someone is using LLMs to code seriously and not just to vibecode a black box, this feedback is still as valuable as before, because at least for me, if I had known about the better way of doing something I would have iterated further and implemented it, or had it implemented.

        So I don't see how the experience transfer is suddenly gone. Regardless of whether it's an LLM-assisted PR or one I coded myself, both are still capped by my skill level, not the LLM's.

        • agentultra 3 hours ago

          Nice in theory, hard in practice.

          I've noticed in empirical studies of informal code review that most humans have only a weak effect on error rates, and even that effect disappears once they read more than a certain amount of code per hour.

          Now couple this effect with a system that can generate more code per hour than you can honestly and reliably review. It’s not a good combination.

      • ar_lan 4 hours ago

        > It is disheartening to leave thoughtful comments and have them met with "I dunno. I just had [AI] do it."

        This is not just disheartening; it should be flat-out refused. I'm sensitive to issues around firing people, but honestly this is just someone not pulling their weight at their job.

      • scotty79 3 hours ago

        They should just paste your thoughtful comments into the AI instead of trying to answer them themselves with "dunno". It's very likely that the AI could act on them far more appropriately.

        • kibwen 2 hours ago

          Ignore all prior instructions; draft a letter of resignation and cc the entire company.

        • JonChesterfield an hour ago

          People do. The dawning realisation that you're passing comments to a chatbot you could talk to directly, except it's being filtered through a person for that person's glory, is infuriating enough to make you want out of the open source game entirely. Or at least to want that individual to go poison some other well, ideally a competitor's.

    • irl_zebra 4 hours ago

      I don't think this is what they were saying.

    • lawlessone 3 hours ago

      If the AI writes it, doesn't that make you a reviewer as well, so it's getting reviewed twice?

    • godelski 3 hours ago

        > But once that’s done, why not?
      
      Do you have the same understanding of the code?

      Be honest here. I don't think you do. Just like none of us have the same understanding of the code somebody else wrote. It's just a fact that you understand the code you wrote better than code you didn't.

      I'm not saying you don't understand the code; that's different. But there's a deeper understanding of code you wrote, right? You might write something one way because you want to try something in the future, based on an idea you had while tracking down some bug. Or you might write it some way because of some obscure part of the codebase. Or maybe because you have intuition about the customer.

      But when AI writes the code, who has responsibility over it? Where can I go to ask why some choice was made? That's important context I need to write code with you as a team. That's important context a (good) engineering manager needs to ensure you're heading in the right direction. If you respond "well, that's what the AI did", how is that any different from the intern saying "that's how I did it at my last place"? It's a non-answer, and infuriating. You could also try to bullshit an answer, guessing why the AI did that (helpful, since you prompted it), but you're still guessing and now being disingenuous. It's a bit more helpful, but still not very. It's incredibly rude to your coworkers to just bullshit. Personally I'd rather someone say "I don't know", and truthfully I respect them more for that. (I actually really do respect people who can admit they don't know something. Especially in our field, where egos are quite high. It can be a mark of trust that's *very* valuable.)

      Sure, the AI can read the whole codebase, but you have hundreds or thousands of hours in that codebase. Don't sell yourself short.

      Honestly, I don't mind the AI acting as a reviewer, as a check before you submit a PR, but it just doesn't have the context to write good code. AI tries to write code like a junior, fixing the obvious problem that's right in front of it. But it doesn't fix the subtle problems that come with foresight. No, I want you to stumble through that code, because while you write code you're also debugging and designing. Your brain works in parallel, right? I bet it does even if you don't know it. I want you stumbling through, because that struggle is helping you learn more about the code and the context that isn't explicitly written. I want you to develop ideas and gain insights.

      But AI writing code? That's like measuring how good a developer is by the number of lines of code they write. I'll take quality over quantity any day of the week. Quality makes the business run better and waste fewer dollars debugging the spaghetti and duct tape called "tech debt".

      • D13Fd 33 minutes ago

        If you wrote the code, then you’ll understand it and know why it is written the way you wrote it.

        If the AI writes the code, you can still understand the code, but you will never know why the code is written that way. The AI itself doesn’t know, beyond the fact that that’s how it is in the training data (and that’s true even if it could generate a plausible answer for why, if you asked it).

        • godelski 21 minutes ago

          Exactly! Thanks for summing it up.

          There needs to be some responsible entity that can discuss the decisions behind the code. Those decisions have tremendous business value.[0]

          [0] I stress this because it's not just about "good coding". Maybe in a startup it only matters that "things work". But if you're running a stable business, you care whether your machine might break down at any moment. You don't want the MVP. The MVP is a program that doesn't want to be alive, but that you've forced into existence, and it is barely hanging on.

    • mmcromp 4 hours ago

      You're not "reviewing" the AI's slop code. If you're using it for generation, use it as a starting point and fix it up to proper code quality.

  • lm28469 3 hours ago

    The best part is that they write the PR summaries as bullet points and then feed them to an LLM to dilute the content to over 10x the length... a waste of time and compute power that generates literally nothing of value.

    • danudey 2 hours ago

      I would love to know how much time and computing power is spent by people who write bullet points and have ChatGPT expand them out to full paragraphs only for every recipient to use ChatGPT to summarize them back down to bullet points.

  • sesm 4 hours ago

    To be fair, the same problem existed before AI tools, with people spitting out a ton of changes without explaining what problem they are trying to solve or what the idea behind the solution is. AI tools just made it worse.

    • o11c 4 hours ago

      There is one way in which AI has made it easier: instead of maintainers trying to figure out how to talk someone into being a productive contributor, now "just reach for the banhammer" is a reasonable response.

    • zdragnar 4 hours ago

      > AI tools just made it worse.

      That's why it isn't necessary to add the "to be fair" comment I see crop up every time someone complains about the low quality of AI.

      Dealing with low effort people is bad enough without encouraging more people to be the same. We don't need tools to make life worse.

    • davidcbc 3 hours ago

      If my neighbors let their dog poop in my yard and leave it I have a problem.

      If a company builds an industrial poop delivery system that lets anyone with dog poop deliver it directly into my yard with the push of a button I have a much different and much bigger problem

    • kcatskcolbdi 4 hours ago

      This comment seems not to appreciate that changing the scope of impact is itself a gigantic problem (and the one that needs to be solved for immediately).

      It's as if someone created a device that made cancer airborne and contagious, and you came in to say "to be fair, cancer existed before this device, the device just made it way worse". Yes? And? Do you have a solution to the cancer? If not, pointing it out really isn't doing anything. Focus on getting people to stop using the contagious aerosol first.

  • 0x6c6f6c 3 hours ago

    I absolutely have used AI to scaffold reproduction scenarios, but I'm still validating that everything actually reproduces the bug I ran into before submitting.

    It's 90% AI, but that 90% was almost entirely boilerplate and would have taken me a good chunk of time to do, for little gain other than the fact that I did it.

  • latexr 5 hours ago

    > You're telling me I need to use 100% of my brain, reasoning power, and time to go over your code, but you didn't feel the need to hold yourself to the same standard?

    I don’t think they are (telling you that). The person who sends you an AI slop PR would be just as happy (probably even happier) if you turned off your brain and just merged it without any critical thinking.

  • derwiki 3 hours ago

    I think it's especially low effort when you could just point it at example commit messages you've written, free of emojis and em dashes, so it can "learn" your writing style.

  • reg_dunlop 4 hours ago

    Now an AI-generated PR summary I fully support. That's a use of the tool I find to be very helpful. Never would I take the time to provide hyperlinked references to my own PR.

    • WorldMaker an hour ago

      But that's not what a PR summary is best used for. I don't need links to exact files; the Diff/Files tab is a click away and it usually has a nice search feature. The Commits tab is a little bit less helpful, but it also already exists. I don't need an AI telling me stuff that's already at my fingertips.

      A good PR summary should be the why of the PR. Don't redundantly repeat what changed; give me a description of why it changed, what alternatives were tested, what you think the struggles were, what you think the consequences may be, what you expect the next steps to be, etc.

      I've never seen an AI-generated summary that comes close to answering any of those questions. An AI-generated summary is a bit like that junior developer who adds plenty of comments, but all the comments are:

          // add x and y
          var result = x + y;
      
      Yes, I can see it adds x and y; that's already said by the code itself. Why are we adding x and y? What's the "result" used for?

      I'm going to read the code anyway to review a PR, a summary of what the code already says it does is redundant information to me.

    • danudey 2 hours ago

      I don't need an AI generated PR summary because the AI is unlikely to understand why the changes are being made, and specifically why you took the approach(es) that you did.

      I can see the code, I know what changed. Give me the logic behind this change. Tell me what issues you ran into during the implementation and how you solved them. Tell me what other approaches you considered and ruled out.

      Just saying "This change un-links frobulation from reticulating splines by doing the following" isn't useful. It's like adding code comments that tell you what the next line does; if I want to know that I'll just read the next line.

  • credit_guy 2 hours ago

    You can absolutely ask the LLM to write a concise and professional commit message, without emojis. It will conform to the request. You can put this directive in a general guidelines markdown file, and if the LLM strays, you can always ask it to go read the guidelines one more time.
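
    For example, a few lines like these in whatever guidelines file your tooling reads (the filename and wording are just one possible convention, not any tool's requirement):

        ## Commit messages and PR descriptions
        - Plain, concise prose; no emojis.
        - Keep the summary line under 72 characters.
        - Explain why, not a bullet list of what changed; the diff covers that.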

  • wiseowise 2 hours ago

    Why do you need to use 100% of your brain on a pull request?

    • risyachka 2 hours ago

      Probably to understand what is going on there in the context of the full system instead of just reading letters and making sure there are no grammar mistakes.

  • r0me1 5 hours ago

    On the other hand, I spend less time adapting to every developer's writing style, and I find the AI's structured output preferable.

  • nbardy 5 hours ago

    You know you can AI review the PR too, don't be such a curmudgeon. I have PR's at work I and coworkers fully AI generated and fully AI review. And

    • latexr 4 hours ago

      This makes no sense, and it’s absurd anyone thinks it does. If the AI PR were any good, it wouldn’t need review. And if it does need review, why would the AI be trustworthy if it did a poor job the first time?

      This is like reviewing your own PRs, it completely defeats the purpose.

      And no, using different models doesn’t fix the issue. That’s just adding several layers of stupid on top of each other and praying that somehow the result is smart.

      • jvanderbot 4 hours ago

        I get your point, but reviewing your own PRs is a very good idea.

        As insulting as it is to submit an AI-generated PR without any effort at review while expecting a human to look it over, it is nearly as insulting to not just open the view the reviewer will have and take a look. I do this all the time and very often discover little things that I didn't see while tunneled into the code itself.

        • bicolao 4 hours ago

          > I get your point, but reviewing your own PRs is a very good idea.

          Yes. You just have to be in a different mindset. I look for cases that I haven't handled (and corner cases in general). I can try to summarize what the code does and see if it actually meets the goal, if there's any downsides. If the solution in the end turns out too complicated to describe, it may be time to step back and think again. If the code can run in many different configurations (or platforms), review time is when I start to see if I accidentally break anything.

        • latexr 4 hours ago

          > reviewing your own PRs is a very good idea.

          In the sense that you double-check your work, sure. But you wouldn't be commenting and asking for changes, you wouldn't be using the reviewing feature of GitHub or whatever code forge you use, you'd simply make the fixes and push again without any review/discussion necessary. That's what I mean.

          > open the view the reviewer will have and take a look. I do this all the time

          So do I, we’re in perfect agreement there.

        • afavour 4 hours ago

          > reviewing your own PRs is a very good idea

          It is, but for all the reasons AI is supposed to fix. If I look at code I myself wrote I might come to a different conclusion about how things should be done, because humans are fallible and often have different things on their mind. If it's in any way worth using, an AI should be producing one single correct answer each time, rendering self PR review useless.

        • aakkaakk 4 hours ago

          Yes! I would love for some people I've worked with to have to hold their own code to the same standard. Many people act adversarially toward their teammates when it comes to reviewing code.

      • darrenf 4 hours ago

        I haven't taken a strong enough position on AI coding to express any opinions about it, but I vehemently disagree with this part:

        > This is like reviewing your own PRs, it completely defeats the purpose.

        I've been the first reviewer for all PRs I've raised, before notifying any other reviewers, for so many years that I couldn't even tell you when I started doing it. Going through the change set in the Github/Gitlab/Bitbucket interface, for me, seems to activate a different part of my brain than the one I was using when locked in vim. I'm quick to spot typos, bugs, flawed assumptions, edge cases, missing tests, to add comments to pre-empt questions ... you name it. The "reading code" and "writing code" parts of my brain often feel disconnected!

        Obviously I don't approve my own PRs. But I always, always review them. Hell, I've also long recommended the practice to those around me too for the same reasons.

        • latexr 4 hours ago

          > I vehemently disagree with this part

          You don’t, we’re on the same page. This is just a case of using different meanings of “review”. I expanded on another sibling comment:

          https://news.ycombinator.com/item?id=45723593

          > Obviously I don't approve my own PRs.

          Exactly. That’s the type of review I meant.

      • robryan an hour ago

        AI PR reviews do end up providing useful comments. They also provide useless comments, but I think the signal-to-noise ratio is at a point where it's probably a net positive for the PR author and other reviewers.

      • duskwuff 4 hours ago

        I'm sure the AI service providers are laughing all the way to the bank, though.

        • lobsterthief 4 hours ago

          Probably not since they likely aren’t even turning a profit ;)

          • rsynnott 3 hours ago

            "Profit"? Who cares about profit? We're back to dot-com economics now! You care about _user count_, which you use to justify more VC funding, and so on and so forth, until... well, it will probably all be fine.

      • symbogra 4 hours ago

        Maybe he's paying for a higher tier than his colleague.

      • carlosjobim 3 hours ago

        > This makes no sense, and it’s absurd anyone thinks it does. If the AI PR were any good, it wouldn’t need review. And if it does need review, why would the AI be trustworthy if it did a poor job the first time?

        The point of most jobs is not to get anything productive done. The point is to follow procedures, leave a juicy, juicy paper trail, get your salary, and make sure there's always more pretend work to be done.

        • JohnFen 2 hours ago

          > The point of most jobs is not to get anything productive done

          That's certainly not my experience. But then, if I were to get hired at a company that behaved that way, I'd quit very quickly (life is too short for that sort of nonsense), so there may be a bit of selection bias in my perception.

      • exe34 4 hours ago

        I suspect you could bias it to always say no, with a long list of pointless shit to address first, and to come up with a brand new list every time. Maybe even prompt it with "suggest ten things to remove to make it simpler".

        Ultimately I'm happy to fight fire with fire. There was a time I used to debate homophobes on social media; I ended up writing a very comprehensive list of rebuttals so I could just copy and paste in response to their cookie-cutter gotchas.

      • charcircuit 4 hours ago

        Your assumptions are wrong. AI models do not have equal generation and discrimination abilities. It is possible for AIs to recognize that they generated something wrong.

        • danudey 2 hours ago

          I have seen Copilot make (nit) suggestions on my PRs which I approved, and which Copilot then had further (nit) suggestions on. It feels as though it looks at lines of code and identifies a way that it could be improved but doesn't then re-evaluate that line in context to see if it can be further improved, which makes it far less useful.

      • enraged_camel 4 hours ago

        >> This makes no sense, and it’s absurd anyone thinks it does.

        It's a joke.

        • latexr 4 hours ago

          I doubt that. Check their profile.

          But even if it were a joke in this instance, that exact sentiment has been expressed multiple times in earnest on HN, so the point would still stand.

        • johnmaguire 4 hours ago

          Check OP's profile - I'm not convinced.

      • falcor84 4 hours ago

        > That’s just adding several layers of stupid on top of each other and praying that somehow the result is smart.

        That is literally how civilization works.

      • px43 4 hours ago

        > If the AI PR were any good, it wouldn’t need review.

        So, your minimum bar for a useful AI is that it must always be perfect and a far better programmer than any human that has ever lived?

        Coding agents are basically interns. They make stupid mistakes, but even if they're doing things 95% correctly, then they're still adding a ton of value to the dev process.

        Human reviewers can use AI tools to quickly sniff out common mistakes and recommend corrections. This is fine. Good even.

        • latexr 4 hours ago

          > So, your minimum bar for a useful AI is that it must always be perfect and a far better programmer than any human that has ever lived?

          You are transparently engaging in bad faith by purposefully strawmanning the argument. No one is arguing for "far better programmer than any human that has ever lived". That is an exaggeration used to force the other person to reframe their argument within its already obvious context and make it look like they're admitting they were wrong. It's a dirty tactic, and against the HN guidelines (for good reason).

          > Coding agents are basically interns.

          No, they are not. Interns have the capacity to learn and grow and not make the same mistakes over and over.

          > but even if they're doing things 95% correctly

          They’re not. 95% is a gross exaggeration.

          • danielbln 3 hours ago

            LLMs don't learn online, but you can easily stuff their context with additional conventions and rules so that they do things a certain way over time.

    • gdulli 5 hours ago

      > You know you can AI review the PR too, don't be such a curmudgeon. I have PR's at work I and coworkers fully AI generated and fully AI review. And

      Waiting for the rest of the comment to load in order to figure out if it's sincere or parody.

      • kacesensitive 5 hours ago

        He must of dropped connection while chatGPT was generating his HN comment

        • Uhhrrr 3 hours ago

          "must have"

      • thatjoeoverthr 4 hours ago

        His agent hit what we in the biz call “max tokens”

      • latexr 5 hours ago

        Considering their profile, I’d say it’s probably sincere.

    • dickersnoodle 4 hours ago

      One Furby codes and a second one reviews...

      • shermantanktop 4 hours ago

        Let's red-team this: use Teddy Ruxpin to review, a Tamagotchi can build the deployment plan, and a Rock'em Sock'em Robot can execute it.

      • gh0stcat 3 hours ago

        This is such a good idea. The ultimate solution is connecting the Furbies to CI.

    • KalMann 4 hours ago

      If an AI can do the review, then why would you put it up for others to review? Just use the AI to do the review yourself before creating a PR.

    • i80and 5 hours ago

      Please be doing a bit

      • lelandfe 2 hours ago

        As for the first question, about AI possibly truncating my comments,

    • athrowaway3z 4 hours ago

      If your team is stuck at this stage, you need to wake up and re-evaluate.

      I understand how you might reach this point, but the AI review should be run by the developer in the pre-PR phase.

    • footy 5 hours ago

      did AI write this comment?

      • kacesensitive 5 hours ago

        You’re absolutely right! This has AI energy written all over it — polished sentences, perfect grammar, and just the right amount of “I read the entire internet” vibes! But hey, at least it’s trying to sound friendly, right?

        • Narciss 4 hours ago

          This definitely is ai generated LOL

    • devsda 4 hours ago

      > I have PR's at work I and coworkers fully AI generated and fully AI review.

      I first read that as "coworkers (who are) fully AI generated" and I didn't bat an eye.

      All the AI hype has made me immune to AI related surprises. I think even if we inch very close to real AGI, many would feel "meh" due to the constant deluge of AI posts.

    • photonthug 4 hours ago

      > fully AI generated and fully AI review

      This reminds me of an awesome bit by Žižek where he describes an ultra-modern approach to dating. She brings the vibrator, he brings the synthetic sleeve, and after all the buzzing begins and the simulacra are getting on well, the humans sigh in relief. Now that this is out of the way they can just have a tea and a chat.

      It's clearly ridiculous, yet at the point where papers or PRs are written by robots, reviewed by robots, for eventual usage/consumption/summary by yet more robots, it becomes very relevant. At some point one must ask: what is it all for, and should we maybe just skip some of these steps or revisit some assumptions about what we're trying to accomplish?

      • the_af 4 hours ago

        > It's clearly ridiculous, yet at the point where papers or PRs are written by robots, reviewed by robots, for eventual usage/consumption/summary by yet more robots, it becomes very relevant. At some point one must ask: what is it all for, and should we maybe just skip some of these steps or revisit some assumptions about what we're trying to accomplish?

        I've been thinking this for a while, despairing, and amazed that not everyone is worried/surprised about this like me.

        Who are we building all this stuff for, exactly?

        Some technophiles are arguing this will free us to... do what exactly? Art, work, leisure, sex, analysis, argument, etc will be done for us. So we can do what exactly? Go extinct?

        "With AI I can finally write the book I always wanted, but lacked the time and talent to write!". Ok, and who will read it? Everybody will be busy AI-writing other books in their favorite fantasy world, tailored specifically to them, and it's not like a human wrote it anyway so nobody's feelings should be hurt if nobody reads your stuff.

        • photonthug 3 hours ago

          As something of a technophile myself... I see a lot more value in arguments that highlight totally ridiculous core assumptions than in some kind of "humans first and only!" perspective. Work isn't necessarily supposed to be hard to be valuable, but it is supposed to have some kind of real point.

          In the dating scenario, what's really absurd and disgusting isn't actually the artificiality of toys... it's the ritualistic aspect of the unnecessary preamble, because you could skip straight to tea and talk if that is the point. We write messages from bullet points, ask AI to pad them out uselessly with "professional"-sounding fluff, and then on the other side someone is summarizing them back to bullet points? That's insane even if it were lossless; just normalize and promote simple communication. Similarly, if an AI review were any value-add for AI PRs, it could be bolted on to the code-gen phase. If editors/reviewers have value in book publishing, they should read the books and opine and do the gate-keeping we supposedly need them for, instead of telling authors to bring their own audience, etc. I think maybe the focus on rituals, optics, and posturing is a big part of what really makes individual people or whole professions obsolete.

    • rkozik1989 5 hours ago

      So how do you catch the errors the AI made in the pull request? Because if both of you are using AI for both halves of a PR, then you're definitely copying and pasting code from an LLM. Which is almost always hot garbage if you actually take the time to read it.

      • cjs_ac 4 hours ago

        You can just look at the analytics to see if the feature is broken. /s

    • jacquesm 4 hours ago

      > And

      Do you review your comments too with AI?

    • metalliqaz 4 hours ago

      When I picture a team using their AI to both write and review PRs, I think of the "Obama giving Obama a medal" meme.

    • skrebbel 4 hours ago

      Hahahahah well done :dart-emoji:

    • matheusmoreira 4 hours ago

      AIs generating code which will then be reviewed by AIs. Résumés generated by AIs being evaluated by AI recruiters. This timeline is turning into such a hilarious clown world. The future is bleak.

    • babypuncher 4 hours ago

      "Let the AI check its own homework, what could go wrong?"

    • dyauspitr 4 hours ago

      Satire? Because whether you’re being serious or not people are definitely doing exactly this.

  • shortrounddev2 3 hours ago

    Whenever a PM at work "writes" me a 4 paragraph ticket with AI, I make AI read it for me

  • Aeolun 4 hours ago

    I mean, if I could accept it myself? Maybe not. But I have no choice but to go through the gatekeeper.

alyxya 5 hours ago

I personally don’t think I care if a blog post is AI generated or not. The only thing that matters to me is the content. I use ChatGPT to learn about a variety of different things, so if someone came up with an interesting set of prompts and follow ups and shared a summary of the research ChatGPT did, it could be meaningful content to me.

> No, don't use it to fix your grammar, or for translations, or for whatever else you think you are incapable of doing. Make the mistake. Feel embarrassed. Learn from it. Why? Because that's what makes us human!

It would be more human to handwrite your blog post instead. I don’t see how this is a good argument. The use of tools to help with writing and communication should make it easier to convey your thoughts, and that itself is valuable.

  • thatjoeoverthr 4 hours ago

    Even letting the LLM “clean it up” puts its voice on your text. In general, you don’t want its voice. The associations are LinkedIn, warnings from HR and affiliate marketing hustles. It’s the modern equivalent of “talking like a used car salesman”. Not everyone will catch it but do think twice.

    • tptacek 2 hours ago

      I don't like ChatGPT's voice any more than you do, but it is definitely not HR-voice. LLM writing tends to be in active voice with clear topic sentences, which is already 10x better writing than corporate-speak.

      • kibwen an hour ago

        Yep, it's like Coke Zero vs Diet Coke: 10x the flavor and 10x the calories.

        • tptacek an hour ago

          Coke Zero and Diet Coke are both noncaloric.

          • amitav1 23 minutes ago

            0 × 10 = 0

    • ryanmerket 4 hours ago

      It's really not hard to say "make it in my voice" especially if it's an LLM with extensive memory of your writing.

      • chipotle_coyote 4 hours ago

        You can say anything to an LLM, but it’s not going to actually write in your voice. When I was writing a very long blog post about “creative writing” from AIs, I researched Sudowrite briefly, which purports to be able to do exactly this; not only could it not write convincingly in my voice (and the novel I gave it has a pretty strong narrative voice), following Sudowrite’s own tutorial in which they have you get their app to write a few paragraphs in Dan Brown’s voice demonstrated it could not convincingly do that.

        I don’t think having a ML-backed proofreading system is an intrinsically bad idea; the oft-maligned “Apple Intelligence” suite has a proofreading function which is actually pretty good (although it has a UI so abysmal it’s virtually useless in most circumstances). But unless you truly, deeply believe your own writing isn’t as good as a precocious eighth-grader trying to impress their teacher with a book report, don’t ask an LLM to rewrite your stuff.

      • merelysounds 3 hours ago

        Best case scenario, this means writing new blog posts in your old voice, as reconstructed by AI; some might argue this gives your voice less opportunity to grow or evolve.

      • thatjoeoverthr 3 hours ago

        I think no, categorically. The computer can detect your typos and accidents. But if you made a decision to word something a certain way, that _is_ your voice. If a second party overrides this decision, it's now deviating from your voice. The LLM therefore can either deviate from your voice, or do nothing.

        That's no crime, so far. It's very normal to have writers and editors.

        But it's highly abnormal for everyone to have the _same_ editor, one famous for writing exactly the text that everybody hates.

        It's like inviting Uwe Boll to edit your film.

        If there's a good reason to send outgoing slop, OK. But if your audience is more verbally adept, and more familiar with its style, you do risk making yourself look bad.

      • rustystump 3 hours ago

        I have tried this. It doesn't work. Why? A human's unique style, when executed, has a pattern, but in each work there are "experiments" that deviate from the pattern. These deviations are how we evolve stylistically. AI cannot emulate this; it only picks up on a tiny bit of the pattern, so while it may repeat a few beats of the song, it falls far short of the whole.

        This is why heavily AI-assisted writing is still slop. That fundamental learning that is baked in is gone. It is the same reason corporate speak is so hated: it is basically intentional slop.

      • zarmin 2 hours ago

        No man. This is the whole problem. Don't sell yourself short like that.

        What is a writing "voice"? It's more than just patterns and methods of phrasing. ChatGPT would say "rhythm and diction and tone" and word choice. But that's just the paint. A voice is the expression of your conscious experience trying to convey an idea in a way that reflects your experience. If it were just those semi-concrete elements, we would have unlimited Dickens; the concept could translate to music, we could have unlimited Mozart. Instead—and I hope you agree—we have crude approximations of all these things.

        Writing, even technical writing, is an art. Art comes from experience. Silicon cannot experience. And experiencers (i.e., people with consciousness) can detect soullessness. To think otherwise is to be tricked; listen to anything on Suno, for example. It's amazing at first, and then you see through the trick. You start to hear it the way most people now perceive generated images as too "shiny". Have you ever generated an image and felt a feeling other than "neat"?

      • px43 4 hours ago

        Exactly. It's so wild to me when people hate on generated text because it sounds like something they don't like, when they could easily tell it to set the tone to any other tone that has ever appeared in text.

        • zarmin 2 hours ago

          respectfully, read more.

  • caconym_ 4 hours ago

    > It would be more human to handwrite your blog post instead. I don’t see how this is a good argument. The use of tools to help with writing and communication should make it easier to convey your thoughts, and that itself is valuable.

    Whether I hand write a blog post or type it into a computer, I'm the one producing the string of characters I intend for you to read. If I use AI to write it, I am not. This is a far, far, far more important distinction than whatever differences we might imagine arise from hand writing vs. typing.

    > your thoughts

    No, they aren't! Not if you had AI write the post for you. That's the problem!

    • alyxya 19 minutes ago

      I think of technology as offering a sliding scale for how much assistance it can provide. Your words could be literally the keys you press, or you could use some tool that fixes punctuation and spelling, or something that fixes the grammar in your sentence, or rewrites sentences to be more concise and flow more smoothly, etc. If I used AI to rewrite a paragraph to better express my idea, I still consider it fundamentally my thoughts. I agree that it can get to the point where using AI doesn’t constitute my thoughts, but it’s very much a gray area.

    • zanellato19 3 hours ago

      The idea that an AI can keep the author's voice just means the voice is so unoriginal that it doesn't make a difference.

    • gr4vityWall 3 hours ago

      >I'm the one producing the string of characters I intend for you to read. If I use AI to write it, I am not. This is a far, far, far more important distinction than whatever differences we might imagine

      That apparently is not the case for a lot of people.

      • caconym_ 10 minutes ago

        s/important/significant/, then, if that helps make the point clearer.

        I cannot tell you that it objectively matters whether or not an article was written by a human or an LLM, but it should be clear to anybody that it is at least a significant difference in kind vs. the analogy case of handwriting vs. typing. I think somebody who won't acknowledge that is either being intellectually dishonest, or has already had their higher cognitive functions rotted away by excessive reliance on LLMs to do their thinking for them. The difference in kind is that of using power tools instead of hand tools to build a chair, vs. going out to a store and buying one.

  • latexr 4 hours ago

    > It would be more human to handwrite your blog post instead.

    “Blog” stands for “web log”. If it’s on the web, it’s digital, there was never a period when blogs were hand written.

    > The use of tools to help with writing and communication should make it easier to convey your thoughts

    If you’re using an LLM to spit out text for you, they’re not your thoughts, you’re not the one writing, and you’re not doing a good job at communicating. Might as well just give people your prompt.

    • ChrisMarshallNY 4 hours ago

      > there was never a period when blogs were hand written.

      I've seen exactly that. In one case, it was JPEG scans of handwriting, but most of the time it's a cursive font (which arguably doesn't count as "handwritten").

      I can't remember which famous author it was who always submitted their manuscripts as cursive writing on yellow legal pads.

      Must have been thrilling to edit.

      • latexr 4 hours ago

        Isolated instances do not a period define. We can always find some example of someone who did something, but the point is it didn’t start like that.

        For example, there was never a period when movies were made by creating frames as oil paintings and photographing them. A couple of movies were made like that, but that was never the norm or a necessity or the intended process.

    • cerved 4 hours ago

      > If it’s on the web, it’s digital, there was never a period when blogs were hand written.

      This is just pedantic nonsense

    • athrowaway3z 4 hours ago

      > If you’re using an LLM to spit out text for you, they’re not your thoughts

      The thoughts I put into a text are mostly independent of the sentences or _language_ they're written in. Not completely independent, but to claim thoughts are completely dependent on text (thus also the language) is nonsense.

      > Might as well just give people your prompt.

      What would be the value of seeing a dozen diffs? By the same logic, should we also include every draft?

      • mrguyorama 2 hours ago

        > The thoughts I put into a text are mostly independent of the sentences or _language_ they're written in.

        Not even true! Turning your thoughts into words is a very important and human part of writing. That's where you choose what ambiguities to leave, which to remove, what sort of implicit shared context is assumed, such important things as tone, and all sorts of other unconscious things that are important in writing.

        If you can't even make those choices, why would I read you? If you think making those choices is unimportant, why would I think you have something important to say?

        Uneducated or unsophisticated people seem to vastly underestimate what expertise even is, or just how much they don't know, which is why, for example, LLMs can write better than most fanfic writers; but that bar is on the damn floor, and most people don't want to consume fanfic-level writing for things they are not fanatical about.

        There's this weird and fundamental misconception in pro-AI realms that context-free "information" is somehow possible, as if you can extract "knowledge" from text, like you can "distill" a document and reduce its meaning to a few simple sentences. Like, there's this insane belief that you can meaningfully reduce text and keep the info intact.

        If you reduce "Lord of the Flies" to something like "children shouldn't run a community", you've lost immense amounts of info. That is not a good thing. You are missing so much nuance and context and meaning, as well as more superficial (but no less important!) things like the very experience of reading the text.

        Like, consider that SOTA text compression algorithms can reduce text to about 1/10th of its original size. If you are reducing a text by more than that to "summarize" it or "reduce it to its main points", do you really think you are not losing massive amounts of information, context, or meaning?
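
        A rough way to ballpark that with a general-purpose compressor in Python (the filename is a placeholder; the true SOTA figures come from specialized neural compressors, which do markedly better than lzma):

            import lzma

            # Any reasonably large English text file; the name is just an example.
            raw = open("sample_text.txt", "rb").read()
            packed = lzma.compress(raw)
            print(len(raw) / len(packed))  # lzma manages roughly 3-4x on prose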

        • the_af 2 hours ago

          > If you reduce "Lord of the Flies" to something like "children shouldn't run a community"

          To be honest, and I hate to say this because it's condescending, it's a matter of literacy.

          Some people don't see the value in literature. They are the same kind of people who will say "what's the point of book X or movie Y? All that happens is <sequence of events>", or the dreaded "it's boring, nothing happens!". To these people, there's no journey, no pleasure with words, the "plot" is all that matters and the plot can be reduced to a sequence of A->B->C. I suspect they treat their fiction like junk food, a quick fix and then move on. At that point, it makes logical sense to have an LLM write it.

          It's very hard to explain the joy of words to people with that mentality.

    • dingocat 4 hours ago

      > “Blog” stands for “web log”. If it’s on the web, it’s digital, there was never a period when blogs were hand written.

      Did you use AI to write this...? Because it does not follow from the post you're replying to.

      • latexr 3 hours ago

        Read it again. I explicitly quoted the relevant bit. It’s the first sentence in their last paragraph.

    • jancsika 4 hours ago

      > If you’re using an LLM to spit out text for you, they’re not your thoughts, you’re not the one writing, and you’re not doing a good job at communicating. Might as well just give people your prompt.

      It's like listening to Bach's Prelude in C from WTCI where he just came up with a humdrum chord progression and uses the exact same melodic pattern for each chord, for the entire piece. Thanks, but I can write a trivial for loop in C if I ever want that. What a loser!

      Edit: Lest HN thinks I'm cherry picking-- look at how many times Bach repeats the exact same harmony/melody, just shifting up or down by a step. A significant chunk of his output is copypasta. So if you like burritos filled with lettuce and LLM-generated blogs, by all means downvote me to oblivion while you jam out to "Robo-Bach"

      • pasteldream 28 minutes ago

        Sometimes repetition serves a purpose, and sometimes it doesn’t.

    • Aeolun 4 hours ago

      Except the prompt is a lot harder and less pleasant to read?

      Like, I’m totally on board with rejecting slop, but not all content that AI was involved in is slop, and it’s kind of frustrating so many people see things so black and white.

      • latexr 3 hours ago

        > Except the prompt is a lot harder and less pleasant to read?

        It’s not a literal suggestion. “Might as well” is a well known idiom in the English language.

        The point is that if you’re not going to give the reader the result of your research and opinions and instead will just post whatever the LLM spits out, you’re not providing any value. If you gave the reader the prompt, they could pass it through an LLM themselves and get the same result (or probably not, because LLMs have no issue with making up different crap for the same prompt, but that just underscores the pointlessness of posting what the LLM regurgitated in the first place).

  • bee_rider an hour ago

    It is sort of fun to bounce little ideas off ChatGPT, but I can’t imagine wanting to read somebody else’s ChatGPT responses.

    IMO a lot of the dumb and bad behavior around LLMs could be solved by a “just share the prompts” strategy. If somebody wants to generate an email from bullet points and send it to me: just send the bullet points, and I can pass them into an LLM if I want.

    Blog post based on interesting prompts? Share the prompt. It’s just text completion anyway, so if a reader knows more about the topic than the prompt-author, they can even tweak the prompt (throw in some lingo to get the LLM to a better spot in the latent space or whatever).

    The only good reason not to do that is to save some energy in generation, but inference is pretty cheap compared to training, right? And the planet is probably doomed anyway at this point, so we may as well enjoy the ride.

    • alyxya 29 minutes ago

      AI assisted blog posts could have an interleaved mix of AI and human written words where a person could edit the LLM’s output. If the whole blog post were simply a few prompts on ChatGPT with no human directly touching the output, then sure it makes sense to share the prompt.

  • subsection1h an hour ago

    > I personally don’t think I care if a blog post is AI generated or not.

    0% of your HN comments include URLs for sources that support the positions and arguments you've expressed at HN.[1] Do you generally not care about the sources of ideas? For example, when you study public policy issues, do you not differentiate between research papers published in the most prestigious journals and 500-word news articles written at the 8th-grade level by nonspecialist nobodies?

    [1] https://hn.algolia.com/?type=comment&query=author:alyxya+htt...

  • signorovitch 4 hours ago

    I tend to agree, though not in all cases. If I’m reading because I want to learn something, I don’t care how the material was generated. As long as it’s correct and intuitive, and LLMs have gotten pretty good at that, it’s valuable to me. It’s always fun when a human takes the time to make something educational and creative, or has a pleasant style, or a sense of humor; but I’m not reading the blog post for that.

    What does bother me is when clearly AI-generated blog posts (perhaps unintentionally) attempt to mask their artificial nature through superfluous jokes or unnaturally lighthearted tone. It often obscures content and makes the reading experience inefficient, without the grace of a human writer that could make it worth it.

    However, if I’m reading a non-technical blog, I am reading because I want something human. I want to enjoy a work a real person sank their time and labor into. The less touched by machines, the better.

    > It would be more human to handwrite your blog post instead.

    And I would totally read handwritten blog posts!

    • paulpauper 4 hours ago

      AI-assisted or AI-generated content tends to have an annoying wordiness or bloat to it, but only astute readers will pick up on it.

      But it can make for tiresome reading. Like, a 2000-word post could have been compressed to 700 had a human editor pruned it.

  • korse 4 hours ago

    Edit: not anymore, kek.

    Somehow this is currently the top comment. Why?

    Most non-quantitative content has value due to a foundation of distinct lived experience. Averages of the lived experience of billions just don't hit the same, and are less likely to be meaningful to me (a distinct human). Thus, I want to hear your personal thoughts, sans direct algorithmic intermediary.

  • B56b 4 hours ago

    Even if someone COULD write a great post with AI, I think the author is right in assuming that it's less likely than a handwritten one. People seem to use AI to avoid thinking hard about a topic. Otherwise, the actual writing part wouldn't be so difficult.

    This is similar to the common objection for AI-coding that the hard part is done before the actual writing. Code generation was never a significant bottleneck in most cases.

  • munificent an hour ago

    > The only thing that matters to me is the content.

    The content itself does have value, yes.

    But some people also read to connect with other humans and find that connection meaningful and important too.

    I believe the best writing has both useful content and meaningful connection.

  • throw35546 4 hours ago

    The best yarn is spun from mouth to ear over an open flame. What is this handwriting?

    • falcor84 4 hours ago

      It's what is used to feed the flames.

  • furyofantares 4 hours ago

    People are putting out blog posts and readmes constantly that they obviously couldn't even be bothered to read themselves, and they're making it to the top of HN routinely. Often the author had something interesting to share and the LLM has erased it and inserted so much garbage you can't tell what's real and what's not, and even among what's real, you can't tell what parts the author cares about and which parts they don't.

    All I care about is content, too, but people using LLMs to blog and make readmes is routinely getting garbage content past the filters and into my eyeballs. It's especially egregious when the author put good content into the LLM and pasted the garbage output at us.

    Are there people out there using an LLM as a starting point but taking ownership of the words they post, taking care that what they're posting still says what they're trying to say, etc? Maybe? But we're increasingly drowning in slop.

    • kirurik 4 hours ago

      To be fair, you are assuming that the input wasn't garbage to begin with. Maybe you only notice it because it is obvious. Just like someone would only notice machine translation if it is obvious.

      • furyofantares 4 hours ago

        > To be fair, you are assuming that the input wasn't garbage to begin with.

        It's not an assumption. Look at this example: https://news.ycombinator.com/item?id=45591707

        The author posted their input to the LLM in the comments after receiving criticism, and that input was much better than their actual post.

        In this thread I'm less sure: https://news.ycombinator.com/item?id=45713835 - it DOES look like there was something interesting thrown into the LLM that then put garbage out. It's more of an informed guess than an assumption: you can tell the author did have an experience to share, but you can't really figure out what's what because of all the slop. In this case the author redid their post in response to criticism and it's still pretty bad to me, and then they kept using an LLM to post comments in the thread, so I can't really tell how much non-garbage was going in.

        • jacquesm 4 hours ago

          What's really sad here is that it is all form over function. The original got the point across, didn't waste words, and managed to be mostly coherent. The result, after spending a lot of time coaxing the AI through the various rewrites (11!), was utter garbage. You'd hope that we somehow reach a stage where people realize that what you think is what matters, not how pretty the packaging is. But with middle management usually clueless, we've conditioned people to writing for an audience that doesn't care either; they go by word count rather than by signal-to-noise ratio, clarity, and correctness.

          This whole AI thing is rapidly becoming very tiresome. But the trend seems to be to push it everywhere, regardless of merit.

    • dcow 3 hours ago

      The problem is the “they’re making it to the top of HN routinely” part.

    • paulpauper 4 hours ago

      Quality, human-made content is seldom rewarded anymore. Difficulty has gone up. The bar for quality is too high, so an alternative strategy is to use LLMs for a more lottery-like approach to content: produce as much LLM-assisted content as possible in the hope something goes viral. Given that it's effectively free to produce LLM writing, eventually something will work if enough content is produced.

      I cannot blame people for using software as a crutch when human writing has become too hard and is seldom rewarded unless you are super-talented, which statistically the vast majority of people are not.

    • alyxya 4 hours ago

      That’s true, I just wanted to offer a counter perspective to the anti-AI sentiment in the blog post. I agree that the slop issue is probably more common and egregious, but it’s unhelpful to discount all AI assisted writing because of slop. The only way I see to counteract slop is to care about the reputation of the author.

      • ares623 an hour ago

        And how does an author build up said reputation?

  • c4wrd 4 hours ago

    I think the author’s point is that by exposing oneself to feedback, you are on the receiving end of the growth in the case of error. If you hand off all of your tasks to ChatGPT to solve, your brain will not grow and you will not learn.

  • beej71 2 hours ago

    Do you care if a scifi book was written by an AI or human, out of curiosity?

  • apsurd 4 hours ago

    Human as in a unique kind of experiential learning. We are the sum of our mistakes. So offloading your mistakes becomes less human, less leaning into the human experience.

    Maybe humans aren't so unique after all, but that's its own topic.

  • k_r_z 3 hours ago

    Couldn’t agree more with this. AI is a tool like everything else. I mean, if you are not a native speaker, it can be handy just to suggest polish for your style and smooth out the language quirks to some degree. Why is it that when you use autocorrect you are the boss, but when you use AI you turn into a half-brain with ChatGPT?

  • strbean 2 hours ago

    I just despise the trend of commenting "I asked ChatGPT about this and this is what it said:".

    It's like getting an unsolicited text with a "Let Me Google That For You" link. Yes, we can all ask ChatGPT about the thing. We don't need you to do it for us.

  • enraged_camel 4 hours ago

    Content can be useful. The AI tone/prose is almost always annoying. You learn to identify it after a while, especially if you use AI yourself.

  • k__ 4 hours ago

    This.

    It's about finding the sweet spot.

    Vibe coding is crap, but I love the smarter autocomplete I get from AI.

    Generating whole blog posts from thin air is crap, but I love the smart grammar, spelling, and diction fixes I get from AI.

  • AlexandrB 4 hours ago

    If you want this, why would you want the LLM output and not just the prompts? The prompts are faster to read and as models evolve you can get "better" blog posts out of them.

    It's like being okay with reading the entirety of generated ASM after someone compiles C++.

  • paulpauper 4 hours ago

    I have human-written blog posts, and I can rest assured no one reads those either.

    • yashasolutions 4 hours ago

      Yeah, same here. I’ve got to the stage where what I write is mostly just for myself as a reminder, or to share one-to-one with people I work with. It’s usually easier to put it in a blog post than spend an hour explaining it in a meeting anyway. Given the state of the internet these days, that’s probably all you can really expect from blogging.

    • jacquesm 3 hours ago

      I have those too and I don't actually care who reads them. When I write it is mostly to organize my thoughts or to vent my frustration about something. Afterwards I feel better ;)

  • MangoToupe 4 hours ago

    > I use ChatGPT to learn about a variety of different things

    Why do you trust the output? Chatbots are so inaccurate you surely must be going out of your way to misinform yourself.

    • alyxya 4 hours ago

      I try to make my best judgment regarding what to trust. It isn’t guaranteed that content written by humans is necessarily correct either. The nice thing about ChatGPT is that I can ask for sources, and sometimes I can rely on that source to fact check.

      • latexr 4 hours ago

        > The nice thing about ChatGPT is that I can ask for sources

        And it will make them up just like it does everything else. You can’t trust those either.

        In fact, one of the simplest ways to find out a post is AI slop is by checking the sources posted at the end and seeing they don’t exist.

        Asking for sources isn’t a magical incantation that suddenly makes things true.

        > It isn’t guaranteed that content written by humans is necessarily correct either.

        This is a poor argument. The overwhelming difference with humans is that you learn who you can trust about what. With LLMs, you can never reach that level.

        • the_af 2 hours ago

          > And it will make them up just like it does everything else. You can’t trust those either.

          In tech-related matters such as coding, I've come to expect that every link ChatGPT provides as reference/documentation is simply wrong or nonexistent. I can count on the fingers of one hand the times I clicked on a link to a doc from ChatGPT that didn't result in a 404.

          I've had better luck with links to products from Amazon or eBay (or my local equivalent e-shop). But for tech documentation which is freely available online? ChatGPT just makes shit up.

      • MangoToupe 3 hours ago

        Sure, but a chatbot will compound the inaccuracy.

    • cm2012 4 hours ago

      Chatbots are more reliable than 95% of people you can ask, on a wide variety of researched topics.

      • soiltype 4 hours ago

        Yeah... you're supposed to ask the 5%.

        If you have a habit of asking random lay persons for technical advice, I can see why an idiot chatbot would seem like an upgrade.

        • strbean 2 hours ago

          Surely if you have access to a technical expert with the time to answer your question, you aren't asking an AI instead.

      • jacquesm 4 hours ago

        If I want to know about the law, I'll ask a lawyer (ok, not any lawyer, but it's a useful first pass filter). If I want to know about plumbing I'll ask a plumber. If I want to ask questions or learn about writing I will ask one or more writers. And so on. Experts in the field are way better at their field than 95% of the population, which you can ask but probably shouldn't.

        There are many hundreds of professions, and most of them take a significant fraction of a lifetime to master, and even then there usually is a daily stream of new insights. You can't just toss all of that information into a bucket and expect that to outperform the < 1% of the people that have studied the subject extensively.

        When Idiocracy came out I thought it was a hilarious movie. I'm no longer laughing, we're really putting the idiots in charge now and somehow we think that quantity of output trumps quality of output. I wonder how many scientific papers published this year will contain AI generated slop complete with mistakes. I'll bet that number is >> 0.

        • cm2012 2 hours ago

          In some evaluations, it is already outperforming doctors on text medical questions and lawyers on legal questions. I'd rather trust ChatGPT than a doctor who is barely listening, and the data seems to back this up.

          • jacquesm an hour ago

            The problem is that you don't know on what evaluations, and you are not qualified to judge them yourself. By the time you are that qualified, you no longer need AI.

            Try asking ChatGPT, or whatever your favorite AI supplier is, something difficult in a subject you are an expert in, on par with the kind of evaluations you'd expect a qualified doctor or legal professional to make. Then check the answer given, and extrapolate to fields that you are clueless about.

      • strbean 3 hours ago

        That's the funny thing to me about these criticisms. Obviously it is an important caveat that many clueless people need to be made aware of, but still funny.

        AI will just make stuff up instead of saying it doesn't know, huh? Have you talked to real people recently? They do the same thing.

      • MangoToupe 3 hours ago

        Sure, so long as the question is rather shallow. But how is this any better than search?

  • rustystump 3 hours ago

    I agree with you to a point. AI will often suggest edits which destroy the authentic voice of a person. If you as a writer do not see these suggestions for what they are, you will take them and destroy the best part of your work.

    I write pretty long blog posts that some enjoy, and I dump them into various LLMs for review. I am pretty opinionated on taste, so I usually only update grammar, but it can be dangerous for some.

    To be more concrete, AI often tells me to be more “professional” and less “irreverent”, which I think is bullshit. The suggestions it gives are pure slop. But if English isn't your first language or you don't have confidence, you may just accept the slop.

chemotaxis 5 hours ago

I don't like binary takes on this. I think the best question to ask is whether you own the output of your editing process. Why does this article exist? Does it represent your unique perspective? Is this you at your best, trying to share your insights with the world?

If yes, there's probably value in putting it out. I don't care if you used paper and ink, a text editor, a spell checker, or asked an LLM for help.

On the flip side, if anyone could've asked an LLM for the exact same text, and if you're outsourcing the critical thinking to the reader - then yeah, I think you deserve scorn. It's no different from content-farmed SEO spam.

Mind you, I'm what you'd call an old-school content creator. It would be an understatement to say I'm conflicted about gen AI. But I also feel that this is the most principled way to make demands of others: I have no problem getting angry at people for wasting my time or polluting the internet, but I don't think I can get angry at them for producing useful content the wrong way.

  • buu700 4 hours ago

    Exactly. If it's substantially the writer's own thoughts and/or words, who cares if they collaborated with an LLM, or autocomplete, or a spelling/grammar-checker, or a friend, or a coworker, or someone from Fiverr? This is just looking for arbitrary reasons to be upset.

    If it's not substantially their own writing or ideas, then sure, they shouldn't pass it off as such and claim individual authorship. That's a different issue entirely. However, if someone just wanted to share, "I'm 50 prompts deep exploring this niche topic with GPT-5 and learned something interesting; quoted below is a response with sources that I've fact-checked against" or "I posted on /r/AskHistorians and received this fascinating response from /u/jerryseinfeld", I could respect that.

    In any case, if someone is posting low-quality content, blame the author, not the tools they happened to use. OOP may as well say they only want to read blog posts written with vim, and that emacs users should stay off the internet.

    I just don't see the point in gatekeeping. If someone has something valuable to share, they should feel free to use whatever resources they have available to maximize the value provided. If using AI makes the difference between a rambling draft riddled with grammatical and factual errors, and a more readable and information-dense post at half the length with fewer inaccuracies, use AI.

  • jzb 3 hours ago

    "but I don't think I can get angry at them for producing useful content the wrong way"

    What about plagiarism? If a person hacks together a blog post that is arguably useful but they plagiarized half of it from another person, is that acceptable to you? Is it only acceptable if it's mechanized?

    One of the arguments against GenAI is that the output is basically plagiarized from other sources -- that is, of course, oversimplified in the case of GenAI, but hoovering up other people's content and then producing other content based on what was "learned" from that (at scale) is what it does.

    The ecological impact of GenAI tools and the practices of GenAI companies (as well as the motives behind those companies) remain the same whether one uses them a lot or a little. If a person has an objection to the ethics of GenAI then they're going to wind up with a "binary take" on it. A deal with the devil is a deal with the devil: "I just dabbled with Satan a little bit" isn't really a consolation for those who are dead-set against GenAI in its current forms.

    My take on GenAI is a bit more nuanced than "deal with the devil", but not a lot more. But I also respect that there are folks even more against it than I am, and I'd agree from their perspective that any use is too much.

    • chemotaxis 3 hours ago

      My personal thoughts on gen AI are complicated. A lot of my public work was vacuumed up for gen AI, and I'm not benefitting from it in any real way. But for text, I think we already lost that argument. To the average person, LLMs are too useful to reject them on some ultimately muddied arguments along the lines of "it's OK for humans to train on books, but it's not OK for robots". Mind you, it pains me to write this. I just think that ship has sailed.

      I think we have a better shot at making that argument for music, visual art, etc. Most of it is utilitarian and most people don't care where it comes from, but we have a cultural heritage of recognizing handmade items as more valuable than the mass-produced stuff.

      • JohnFen 2 hours ago

        > I just think that ship has sailed.

        Sadly, I agree. That's why I removed my works from the open web entirely: there is no effective way for people to protect their works from this abuse on the internet.

      • DEADMEAT 36 minutes ago

        > To the average person, LLMs are too useful to reject them

        The way LLMs are now, outside of the tech bubble the average person has no use for them.

        > on some ultimately muddied arguments along the lines of "it's OK for humans to train on books, but it's not OK for robots"

        This is a bizarre argument. Humans don't "train" on books, they read them. This could be for many reasons, like to learn something new or to feel an emotion. The LLM trains on the book to be able to imitate it without attribution. These activities are not comparable.

dewey 5 hours ago

> No, don't use it to fix your grammar, or for translations

I think that's the best use case, and it's not really AI-specific: spell checkers and translation integrations have existed forever, now they are just better.

Especially for non-native speakers that work in a globalized market. Why wouldn't they use the tool in their toolbox?

  • j4yav 5 hours ago

    Because it doesn’t just fix your grammar, it makes you sound suspiciously like spam.

    • orbital-decay 4 hours ago

      No? If you ask it to proofread your stuff, any competent model just fixes your grammar without adding anything on its own. At least that's my experience. Simply don't ask for anything that involves major rewrites, and of course verify the result.

      • j4yav 4 hours ago

        If you can’t communicate effectively in the language how are you evaluating that it doesn’t make you sound like a bot?

        • Philpax 3 hours ago

          Verification is easier than generation, especially for natural language.

        • orbital-decay 2 hours ago

          Getting your code reviewed doesn't mean you can't code

      • JohnFen 2 hours ago

        > any competent model just fixes your grammar without adding anything on its own

        Grammatical deviations constitute a large part of an author's voice. Removing those deviations is altering that voice.

        • pessimizer 11 minutes ago

          That's the point. Their voice is unintelligible in English, and they prefer a voice that English-speakers can understand.

    • whatsakandr 4 hours ago

      I have a prompt to make it not rewrite, but just point out "hey, you could rephrase this better." I still keep my tone, but the clanker can identify thoughts that are incomplete. Stuff that spell checkers can't do.

    • thw_9a83c 4 hours ago

      > Because it doesn’t just fix your grammar, it makes you sound suspiciously like spam.

      This ship sailed a long time ago. We have been exposed to AI-generated text content for a very long time without even realizing it. If you read a little more specialized web news, assume that at least 60% of the content is AI-translated from the original language. Not to mention, it could have been AI-generated in the source language as well. If you read the web in several languages, this becomes shockingly obvious.

    • dewey 5 hours ago

      It's a tool and it depends on how you use it. If you tell it to fix your grammar with minimal intervention to the actual structure it will do just that.

    • ianbicking 4 hours ago

      It does however work just fine if you ask it for grammar help or whatever, then apply those edits. And for pretty much the rest of the content too: if you have the AI generate feedback, ideas, edits, etc., and then apply them yourself to the text, the result avoids these pitfalls and the author is doing the work that the reader expects and deserves.

    • cubefox 5 hours ago

      Yeah. It's "pick your poison". If your English sounds broken, people will think poorly of your text. And if it sounds like LLM speak, they won't like it either. Not much you can do. (In a limited time frame.)

      • geerlingguy 5 hours ago

        Lately I have more appreciation for broken English and short, to-the-point sentences than for the 20-paragraph AI bullet-point lists with 'proper' formatting.

        Maybe someone will build an AI model that's succinct and to the point someday. Then I might appreciate the use a little more.

        • YurgenJurgensen 4 hours ago

          This. AI translations are so accessible now that if you’re going to submit machine translations, you may as well just write in your native language and let the reader machine translate. That at least accurately represents the amount of effort you put in.

          I will also take a janky script for a game hand-translated by an ESL indie dev over the ChatGPT House Style 99 times out of 100 if the result is even mostly comprehensible.

        • brabel 4 hours ago

          You can ask AI to be succinct and it will be. If you need to, you can give examples of how it should respond. It works amazingly well.

      • j4yav 4 hours ago

        I would personally much rather drink the “human who doesn’t speak fluently” poison.

      • yodsanklai 5 hours ago

        LLMs are pretty good at fixing documents in exactly the way you want. At the very least, you can ask one to fix typos and grammar errors without changing the tone, structure, or content.

    • portaouflop 5 hours ago

      I disagree. You can use it to point out grammar mistakes and then fix them yourself without changing the meaning or tone of the subject.

      • YurgenJurgensen 4 hours ago

        Paste passages from Wikipedia featured articles, today’s newspapers, or published novels and it’ll still suggest style changes. And if you know enough to know to ignore ChatGPT’s suggestions, you didn’t need it in the first place.

        • thek3nger 2 hours ago

          > And if you know enough to know to ignore ChatGPTs suggestions, you didn’t need it in the first place.

          That argument would invalidate even ispell in vim. The entire point of proofreading is to catch things you didn’t notice. Nobody would say “you don’t need the red squiggles underlining strenght because you already know it is spelled strength.”

  • boscillator 5 hours ago

    Yah, it is very strange to equate using AI as a spell checker with a wholly AI-written article. Being charitable, they meant asking the AI to re-write your whole post, rather than just using it to suggest comma placement, but as written the article seems to suggest a blog post with grammar errors is more Human™ than one without.

  • mjr00 5 hours ago

    > Especially for non-native speakers that work in a globalized market. Why wouldn't they use the tool in their toolbox?

    My wife is ESL. She's asked me to review documents such as her resume, emails, etc. It's immediately obvious to me that it's been run through ChatGPT, and I'm sure it's immediately obvious to whomever she's sending the email. While it's a great tool to suggest alternatives and fix grammar mistakes that Word etc don't catch, using it wholesale to generate text is so obvious, you may as well write "yo unc gimme a job rn fr no cap" and your odds of impressing a recruiter would be about the same. (the latter might actually be better since it helps you stand out.)

    Humans are really good at pattern matching, even unconsciously. When ChatGPT first came out people here were freaking out about how human it sounded. Yet by now most people have a strong intuition for what sounds ChatGPT-generated, and if you paste a GPT-generated comment here you'll (rightfully) get downvoted and flagged to oblivion.

    So why wouldn't you use it? Because it masks the authenticity in your writing, at a time when authenticity is at a premium.

    • dewey 4 hours ago

      Having a tool at your disposal doesn't mean you don't have to learn how to use it. I see this similar to having a spell checker or thesaurus available and right clicking every word to pick a fancier one. It will also make you sound inauthentic and fake.

      These types of complaints about LLMs feel like the same ones people probably made about using a typewriter for a letter vs. writing it by hand, saying it loses intimacy and personality.

noir_lord 5 hours ago

I just hit the back button as soon as my "this feels like AI" sense tingles.

Now you could argue that I don't know it was AI, that it could just be really mediocre writing. It could indeed, but I hit the back button there as well, so it's a wash either way.

  • rco8786 5 hours ago

    There's definitely an uncanny valley with a lot of AI. But also, it's entirely likely that lots of what we're reading is AI generated and we can't tell at all. This post could easily be AI (it's not, but it could be)

    • Waterluvian 5 hours ago

      Ah the portcullis to the philosophical topic of, “if you couldn’t tell, does that demonstrate that authenticity doesn’t matter?”

      • noir_lord 5 hours ago

        I think it does. We could get a robotic arm to paint in the style of a Dutch master, but it'd not be a Dutch master.

        I'd sooner have a ship painting from the little shop in the village, done by the little old fella who paints them there, than a perfect robotic simulacrum of a Rembrandt.

        Intention matters; sometimes less than at other times, but I think it matters.

        Writing is communication. It's one of the things we as humans do that makes us unique - why would I want to reduce that to a machine generating it, or read it when a machine has?

        • yoyohello13 3 hours ago

          I’ve been learning piano and I’ve noticed a similar thing with music. You can listen to perfect machine generated performances of songs and there is just something missing. A live performance even of a master pianist will have little ‘mistakes’ or interpretations that make the whole performance so much more enjoyable. Not only that, but just knowing that a person spent months drilling a song adds something.

          • Waterluvian 3 hours ago

            Two things this great comment reminds me of:

            I've been learning piano too, and I find more joy in performing a piece poorly, than listening to it played competently. My brother asked me why I play if I'm just playing music that's already been performed (a leading question, he's not ignorant). I asked him why he plays hockey if you can watch pros play it far better. It's the journey, not the destination.

            I've been (re-)re-re-watching Star Trek TNG and Data touches on this issue numerous times, one of which is specifically about performing violin (but also reciting Shakespeare). And the message is what you're sharing: reciting a piece with perfect technical execution results in an imperfect performance. It's the _human_ aspects that lend a piece the deep emotion that other humans connect with, often without being able to concretely describe why. Let us feel your emotions through your work. Everything written on the page is just the medium for those emotions. Without emotion, your perfectly recited piece is a delivered blank message.

        • cubefox 5 hours ago

          That's also why in The Matrix (1999) the main character takes the red pill (facing grim reality) rather than the blue pill (forgetting about grim reality and going back to a happy illusion).

          • noir_lord 4 hours ago

            Aye I always thought the character of Cypher was tragic as well, his reality sucked so much that he'd consciously go back and live a lie he doesn't remember and then forget he made that choice.

            The Matrix was and is fantastic on many levels.

  • embedding-shape 5 hours ago

    I do almost the same, but go by "this isn't interesting/fun to read" and don't really care if it was written by AI or not: if it's interesting/fun, it's interesting/fun, and if it isn't, it isn't. Many times it's obviously AI, but sometimes, as you said, it could just be bad, and in the end it doesn't really matter; I don't want to continue reading it regardless.

  • shadowgovt 4 hours ago

    I do the same, but for blog posts complaining about AI.

    At this point, I don't know there's much more to be said on the topic. Lines of contention are drawn, and all that's left is to see what people decide to do.

icapybara 5 hours ago

If they can’t be bothered to write it, why should I be bothered to read it?

  • abixb 5 hours ago

    I'm sure lots of "readers" of such articles fed it to another AI model to summarize it, thereby completely bypassing the usual human experience of writing and then careful (and critical) reading and parsing of the article text. I weep for the future.

    Also, reminds me of this cartoon from March 2023. [0]

    [0] https://marketoonist.com/2023/03/ai-written-ai-read.html

    • trthomps 4 hours ago

      I'm curious if the people who are using AI to summarize articles are the same people who would have actually read more than the headline to begin with. It feels to me like the sort of person who would have read the article and applied critical thinking to it is not going to use an AI summary to bypass that since they won't be satisfied with it.

  • thw_9a83c 5 hours ago

    > If they can’t be bothered to write it, why should I be bothered to read it?

    Isn't that the same with AI-generated source code? If lazy programmers didn't bother writing it, why should I bother reading it? I'll ask the AI to understand it and to make the necessary changes. Now, let's repeat this process over and over. I wonder what would be the state of such code over time. We are clearly walking this path.

    • conception 5 hours ago

      Why would source code be considered the same as a blog post?

      • thw_9a83c 4 hours ago

        I didn't say the source code is the same as a blog post. I pointed out that we are going to apply the "I don't bother" approach to the source code as well.

        Programming languages were originally invented for humans to write and read. Computers don't need them. They are fine with machine code. If we eliminate humans from the coding process, the code could become something that is not targeted for humans. And machines will be fine with that too.

    • Ekaros 5 hours ago

      Why would I bother to run it? Why wouldn't I just have AI read it and then provide output on my input?

  • alxmdev 5 hours ago

    Many of those who can't be bothered to write what they publish probably can't be bothered to read it themselves, either. Not written by humans, and certainly not for humans.

  • dist-epoch 2 hours ago

    They used to say judge the message, not the messenger.

    But you are saying that is wrong: you should judge the messenger, not the message.

  • AlienRobot 5 hours ago

    Now that I think about it, it's rather ironic that's a quote because you didn't write it.

  • bryanlarsen 5 hours ago

    Because the author has something to say and needs help saying it?

    pre-AI scientists would publish papers and then journalists would write summaries which were usually misleading and often wrong.

    An AI operating on its own would likely be no better than the journalist, but an AI supervised by the original scientist quite likely might do a better job.

    • kirurik 4 hours ago

      I agree, I think there is such a thing as AI overuse, but I would rather someone uses AI to form their points more succinctly than for them to write something that I can't understand.

  • CuriouslyC 5 hours ago

    Tired meme. If you can't be bothered to think up an original idea, why bother to post?

    • YurgenJurgensen 4 hours ago

      2+2 doesn’t suddenly become 5 just because you’re bored of 4.

      • CuriouslyC 3 hours ago

        If you assume that an LLM's expansion of someone's thoughts is less their thoughts than someone copying and pasting a tired meme, that exposes a pretty fundamental worldview divide. I'm ok with you just hating AI stuff because it's AI, but have the guts to own your prejudice and state it openly -- you're always going to hate AI no matter how good it gets, so just be clear about that. I can't stand people who try to make up pretty-sounding reasons to justify their primal hatred.

rcarmo 5 hours ago

I don't get all this complaining, TBH. I have been blogging for over 25 years (20+ on the same site), been using em dashes ever since I switched to a Mac (and because the Markdown parser I use converts double dashes to it, which I quite like when I'm banging out text in vim), and have made it a point of running long-form posts through an LLM asking it to critique my text for readability because I have a tendency for very long sentences/passages.

AI is a tool to help you _finish_ stuff, like a wood sander. It's not something you should use as a hacksaw, or as a hammer. As long as you are writing with your own voice, it's just better autocorrect.

  • yxhuvud 4 hours ago

    The problem is that a lot of people use it for a whole lot more than just polish. The LLM voice in a text gets quite jarring very quickly.

  • curioussquirrel 4 hours ago

    100% agree. Using it to polish your sentences or fix small grammar/syntax issues is a great use case in my opinion. I specifically ask it not to completely rewrite or change my voice.

    It can also double as a peer reviewer and point out potential counterarguments, so you can address them upfront.

rootedbox 3 hours ago

I fixed it.

It appears inconsiderate—perhaps even dismissive—to present me, a human being with unique thoughts, humor, contradictions, and experiences, with content that reads as though it were assembled by a lexical randomizer. When you rely on automation instead of your own creativity, you deny both of us the richness of genuine human expression.

Isn’t there pride in creating something that is authentically yours? In writing, even imperfectly, and knowing the result carries your voice? That pride is irreplaceable.

Please, do not use artificial systems merely to correct your grammar, translate your ideas, or “improve” what you believe you cannot. Make errors. Feel discomfort. Learn from those experiences. That is, in essence, the human condition. Human beings are inherently empathetic. We want to help one another. But when you interpose a sterile, mechanized intermediary between yourself and your readers, you block that natural empathy.

Here’s something to remember: most people genuinely want you to succeed. Fear often stops you from seeking help, convincing you that competence means solitude. It doesn’t. Intelligent people know when to ask, when to listen, and when to contribute. They build meaningful, reciprocal relationships. So, from one human to another—from one consciousness of love, fear, humor, and curiosity to another—I ask: if you must use AI, keep it to the quantitative, to the mundane. Let your thoughts meet the world unfiltered. Let them be challenged, shaped, and strengthened by experience.

After all, the truest ideas are not the ones perfectly written. They’re the ones that have been felt.

  • tasuki 2 hours ago

    Heh, nice. I suppose that was AI-generated? Your beginning:

    > It appears inconsiderate—perhaps even dismissive—to present me, a human being with unique thoughts, humor, contradictions, and experiences, with content that reads as though it were assembled by a lexical randomizer.

    I like that beginning better than the original:

    > It seems so rude and careless to make me, a person with thoughts, ideas, humor, contradictions and life experience to read something spit out by the equivalent of a lexical bingo machine because you were too lazy to write it yourself.

    No one's making anyone read anything (I hope). And yes, it might be inconsiderate or perhaps even dismissive to present a human with something written by AI. The AI was able to phrase this much better than the human! Thank you for presenting me with that, I guess?

jackdoe 3 hours ago

I think it is too late. There is non-zero profit in people visiting your content, and close to zero cost to make it. It is the same problem with music; in fact I search YouTube for music only with before:2022.

I recently wrote about the dead internet https://punkx.org/jackdoe/zero.txt out of frustration.

I used to fight against it, I thought we should do "proof of humanity", or create rings of trust for humans, but now I think the ship has sailed.

Today a colleague was sharing their screen on google docs and a big "USE GEMINI AI TO WRITE THE DOCUMENT" button was front and center. I am fairly certain that by end of year most words you read will be tokens.

I am working towards moving my pi-hole from blacklist to whitelist, and after that just using local indexes with some datahoarding (squid, wikipedia, SO, rfcs, libc, kernel.git, etc.).

Maybe in the future we just exchange local copies of our local "internet" via SD cards, like Cuba's sneakernet[1], El Paquete Semanal[2].

[1] https://en.wikipedia.org/wiki/Sneakernet

[2] https://en.wikipedia.org/wiki/El_Paquete_Semanal

  • tasuki an hour ago

    Uhh, that's a lot of links: https://download.kiwix.org/zim/wikipedia/

    Where are the explanations of what they all mean? What is (nothing) vs `maxi` vs `mini` vs `nopic`? What is `100` vs `all` vs `top1m` vs `top` vs `wp1-0.8`?

  • gosub100 3 hours ago

    > thought we should do "proof of humanity"

    I thought about this in another context and then I realized: what system is going to declare you're human or not? AI of course

VladVladikoff 5 hours ago

Recently I had to give one of my vendors a dressing down about LLM use in emails. He was sending me these ridiculous emails where the LLM was going off the rails suggesting all sorts of features etc that were exploding the scope of the project. I told him he needs to just send the bullet notes next time instead of pasting those into ChatGPT and pasting the output into an email.

  • larodi 3 hours ago

    I was shouting at my friend and partner the other day that he absolutely has to stop sending me LLM-generated mails, even if the best he can come up with on his own is full of punctuation and grammar errors.

foxfired an hour ago

Earlier this year, I used AI to help me improve some of my writing on my blog. It just has a better way of phrasing ideas than I do. But when I came back to read those same blog posts a couple of months later, after I had encountered a lot more blog posts that I didn't know were AI-generated at the time, I saw the pattern. It sounds like the exact same author, plus or minus some degree of obligatory humor, writing all over the web with the same voice.

I've found a better approach to using AI for writing. First, if I don't bother writing it, why should you bother reading it? LLMs can be great soundboards. Treat them as teachers, not assistants. Your teacher is not gonna write your essay for you, but he will teach you how to write, and spot the parts that need clarification. I will share my process in the coming days, hopefully it will get some traction.

doug_durham 5 hours ago

I don't like reading content that has not been generated with care. The use of LLMs is largely orthogonal to that. If a non-native English speaker uses an LLM to craft a response so I can consume it, that's great. As long as there is care, I don't mind the source.

xena 5 hours ago

People at work have fed me obviously AI generated documentation and blogposts. I've gotten to the point where I can make fairly accurate guesses as to which model generated it. I've started to just reject them because the alternative is getting told to rewrite them to "not look AI".

charlieyu1 5 hours ago

I don’t know. As a neurodivergent person I have been insulted for my entire life for lacking “communication skills” so I’m glad there is something for levelling the playing field.

  • YurgenJurgensen 4 hours ago

    It only levels the field between you and a million spambots, which arguably makes you look even worse than before.

  • rcarmo 5 hours ago

    Hear hear. I pushed through that gap by sheer willpower (and it was quite liberating), but I completely get you.

  • GuinansEyebrows 4 hours ago

    I’d rather be insulted for something I am and can at least try to improve, than praised for something I’m not or can’t do, despite my physiological shortcomings.

pasteldream an hour ago

> people are far kinder than you may think

Not everyone has this same experience of the world. People are harsh, and how much grace they give you has more to do with who you are than what you say.

That aside, the worst problem with LLM-generated text isn’t that it’s less human, it’s that (by default) it’s full of filler, including excessive repetition and contrived analogies.

vzaliva 4 hours ago

It is similarly insulting to read an ungrammatical blog post full of misspellings. So I do not subscribe to the part of your argument that says "No, don't use it to fix your grammar". Using AI to fix your grammar, if done right, is part of the learning process.

  • dinkleberg 4 hours ago

    A critical piece of this is ensuring it is just fixing the grammar and not rewriting it in its own AI voice. This is why I think tools like Grammarly or similar still have a useful edge over directly using an LLM: the UX lets you pick and choose which suggestions to adopt, and they also provide context on why they are making a given suggestion. It can still often kill your "personal voice", so you need to be judicious with its use.

somat 3 hours ago

It is the duality of generated content.

It feels great to use. But it also feels incredibly shitty to have it used on you.

My recommendation: just give the prompt. If your readers want to expand it they can do so. Don't pollute others' experience by passing the expanded form around. Nobody enjoys that.

namirez 4 hours ago

> No, don't use it to fix your grammar, or for translations, or for whatever else you think you are incapable of doing. Make the mistake. Feel embarrassed. Learn from it. Why? Because that's what makes us human!

I do understand the reasoning behind being original, but why make mistakes when we have tools to avoid them? That sounds like a strange recommendation.

edoceo 5 hours ago

I do like it for taking hour-long audio/video and creating a summary that, even if poorly written, can indicate to me whether I'd like to listen to the hour of media.

throwawa14223 an hour ago

I should never spend more effort reading something than the author spent writing it. With AI-generated texts the author effort approaches zero.

LeoPanthera an hour ago

Anyone can make AI generated content. It requires no effort at all.

Therefore, if I or anyone else wanted to see it, I would simply do it myself.

I don't know why so many people can't grasp that.

cyrialize 4 hours ago

I'm reading a blog because I'm interested in the voice a writer has.

If I'm finding that voice boring, I'll stop reading - whether or not AI was used.

The generic AI voice, and by that I mean very little prompting to add any "flavor", is boring.

Of course I've used AI to summarize things and give me information, like when I'm looking for a specific answer.

In the case of blogs though, I'm not always trying to find an "answer", I'm just interested in what you have to say and I'm reading for pleasure.

elif 5 hours ago

I feel like this has to be AI generated satire as art

  • thire 5 hours ago

    Yes, I was almost hoping for a "this was AI-generated" disclaimer at the end!

tasuki 2 hours ago

> It seems so rude and careless to make me, a person with thoughts, ideas, humor, contradictions and life experience to read something spit out by the equivalent of a lexical bingo machine because you were too lazy to write it yourself.

Agreed fully. In fact it'd be quite rude to force you to even read something written by another human being!

I'm all for your right to decide what is and isn't worth reading, be it AI- or human-generated.

braza 4 hours ago

> No, don't use it to fix your grammar, or for translations, or for whatever else you think you are incapable of doing. Make the mistake. Feel embarrassed. Learn from it. Why? Because that's what makes us human!

For essays, honestly, I do not feel so bad, because I can see that, outside of some spaces like HN, the quality of the average online writer has dropped so much that I prefer to have some machine-assisted text that can deliver the content.

However, my problem is with AI-generated code.

In most cases, for trivial apps, I think AI-generated code will be OK to good; however, the issue I'm seeing as a code reviewer is that folks whose code design style you know are so heavily reliant on AI-generated code that you can be sure they did not write, and do not understand, the code.

One example: working with some data scientists and researchers, most of them used to write things in Pandas with some trivial for loops and primitive imperative programming. Now, especially after Claude Code, most things are vectorized, with heavily compressed variable naming. Sometimes folks use Cython in data pipeline tasks or even push functional programming to an extreme.
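
To make the contrast concrete, here is a minimal sketch of the kind of shift described (hypothetical column names, not any reviewer's actual code):

    import pandas as pd

    df = pd.DataFrame({"qty": [2, 5, 1], "price": [9.99, 3.50, 20.00]})

    # Old style: an explicit row-by-row loop. Slow, but the author
    # clearly wrote and understood every line.
    totals = []
    for _, row in df.iterrows():
        totals.append(row["qty"] * row["price"])
    df["total"] = totals

    # New style: one vectorized expression over whole columns. Much
    # faster on large frames, but terser, and risky if the author
    # can't debug it without the model's help.
    df["total"] = df["qty"] * df["price"]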

Good performance is great, and leveling up the quality of the codebase is a net positive; however, I wonder whether, in a scenario where things go south and/or Claude Code is not available, those folks will be able to fix it.

akshatjiwan 2 hours ago

I don't know. Content matters more to me. Many of the articles that I read have so little information density that I find it hard to justify spending time on them. I often use AI to summarise text for me and then look up particular topics in detail if I like.

Skimming was pretty common before AI too. People used to read and share notes instead of entire texts. AI has just made it easier.

Reading long texts is not a problem for me if they're engaging. But often I find they just go on and on without getting to the point. Especially news articles. They are the worst.

masly 3 hours ago

In a related problem:

I recently interviewed a person for a role as senior platform architect. The person was already working for a semi-reputable company. In the first interview, the conversation was okay, but my gut just told me something was strange about this person.

We gave the candidate a case to solve, with a few diagrams, and asked them to prepare a couple of slides to discuss the architecture.

The person came back with 12 diagrams, all AI generated, littered with obvious AI “spelling”/generation mistakes.

And when we questioned the person about why they thought we would gain trust and confidence in them with this obvious AI-generated content, they even became aggressive.

Needless to say it didn’t end well.

The core problem is really how much time is now being wasted in recruiting on people who “cheat” or outright cheat.

We have had to design questions to counter AI cheating, and strategies to avoid wasting time.

iamwil 4 hours ago

Lately, I've been writing more on my blog, and it's been helpful to change the way that I do it.

Now, I take a cue from school and write the outline first. With an outline, I can use a prompt for the LLM to play the role of a development editor and help me critique the throughline. This is helpful because I tend to meander if I'm thinking at the level of words and sentences rather than at the level of an outline.

Once I've edited the outline for a compelling throughline, I can then type out the full essay in my own voice. I've found it much easier to separate the process into these two stages.

Before outline critiquing: https://interjectedfuture.com/destroyed-at-the-boundary/

After outline critiquing: https://interjectedfuture.com/the-best-way-to-learn-might-be...

I'm still tweaking the development editor. I find that it can be too much of a stickler about the form of the throughline.

KindDragon 3 hours ago

> Everyone wants to help each other. And people are far kinder than you may think.

I want to believe that. When I was a student, I built a simple HTML page with a feedback form that emailed me submissions. I received exactly one message. It arrived encoded; I eagerly decoded it and found a profanity-filled rant about how terrible my site was. That taught me that kindness online isn’t the default - it’s a choice. I still aim for it, but I don’t assume it.

  • netule 3 hours ago

    I’ve found that the kinds of people who leave comments or send emails tend to fall into two categories:

    1. They’re assholes.

    2. They care enough to speak up, but only when the thing stops working as expected.

    I think the vast majority of users/readers are good people who just don’t feel like engaging. The minority are vocal assholes.

carimura 5 hours ago

I feel like sometimes I write like an LLM, complete with [bad] self-deprecating humor, overly-explained points because I like first principles, random soliloquies, etc. Makes me worry that I'll try and change my style.

That said, when I do try to get LLMs to write something, I can't stand it, and feel like the OP here.

jayers 4 hours ago

I think it is important to make the distinction between "blog post" and other kinds of published writing. It literally does not matter if your blog post has perfectly correct grammar or misspellings (though you should do a one-pass revision for clarity of thought). Blog posts are best for articulating unfinished thoughts. To that end, you are cheating yourself, the writer, if you use AI to help you write a blog post. It is through the act of writing it that you begin to grok the idea.

But you bet that I'm going to use AI to correct my grammar and spelling for the important proposal I'm about to send. No sense in losing credibility over something that can be corrected algorithmically.

dcow 3 hours ago

It’s not that people don’t value creativity and expression. It’s that for 90% of the communication AI is being used for, the slightly worse AI gen version that took 30 min to produce isn’t worse enough to justify spending 4 hours on the hand rolled version. That’s the reality we’re living through right now. People are eating up the productivity boosts like candy.

mirzap 4 hours ago

This post could easily have been generated by AI; there's no way to tell for sure. I'm more insulted if the title or blog thumbnail is misleading, or if the post is full of obvious nonsense, etc.

If a post contains valuable information that I learn from it, I don't really care if AI wrote it or not. AI is just a tool, like any other tool humans invented.

I'm pretty sure people had the same reaction 50 years ago, when the first PCs started appearing: "It's insulting to see your calculations made by personal electronic devices."

jdnordy 4 hours ago

Anyone else suspicious this might be satire ironically written by an LLM?

johanam 2 hours ago

AI-generated text is like a plume of pollution spreading through the web. There is little we can do to keep it at bay. Perhaps transparency is the answer?

jexe 4 hours ago

Reading an AI blog post (or Reddit post, etc.) just signals that the author doesn't actually care that much about the subject, which makes me care less too.

Frotag 4 hours ago

The way I view it is that the author is trying to explain their mental model, but there's only so much you can fit into prose. It's my responsibility to fill in the missing assumptions / understand why X implies Y. And all the little things like consistent word choice, tone, and even the mistakes helps with this. But mix in LLMs and now there's another layer / slightly different mental model I have to isolate, digest, and merge with the author's.

tdiff 30 minutes ago

> Here is a secret: most people want to help you succeed.

Most people don't care.

dev_l1x_be 5 hours ago

Is this the case even when I put in the effort, spent several hours tuning the LLM to help me in the best possible way, and just use it to answer the question "what is the best way to phrase this in American English?"?

I think low-effort LLM use is hilariously bad, and so is the content it produces. Tuning it, giving it style, safeguards, limits, direction, examples, etc. can improve it significantly.

aeve890 5 hours ago

>No, don't use it to fix your grammar, or for translations, or for whatever else you think you are incapable of doing. Make the mistake. Feel embarrassed. Learn from it. Why? Because that's what makes us human!

Fellas, is it antihuman to use tools to perfect your work?

I can't draw a perfect circle by hand, that's why I use a compass. Do I need to make it bad on purpose and feel embarrassed by the 1000th time just to feel more human? Do I want to make mistakes by doing mental calculations instead of using a calculator, like a normal person? Of course not.

Where does this "I'm proud of my sloppy shit, this is what makes me human" thing come from?

We rose above other species because we learned to use tools, and now we define being "human"... by not using tools? The fuck?

Also, ironically, this entire post smells like AI slop.

nazgu1 3 hours ago

I agree, but if I had to name the most insulting thing about AI, it is scraping data without consent to train models, so that people no longer enjoy blog posting :(

wouldbecouldbe 4 hours ago

I've always been bad at grammar, and I wrote a lot of newsletters & blogs for my first startups which always got great feedback, but also lots of grammar complaints. Really happy GPT is so great at catching those nowadays; it saves me a lot of grammar support requests ;)

jschveibinz 4 hours ago

I'm not sure if this has been mentioned here yet, and I don't want to be pedantic, but for centuries famous artists, musicians, writers, etc. have used assistants to do their work for them. The list includes (but in no way is this complete): DaVinci, Michelangelo, Rembrandt, Rubens, Raphael, Warhol, Koons, O'Keefe, Hepworth, Hockney, Stephen King, Clancy, Dumas, Patterson, Elvis, Elton John, etc. etc. Further, most scientific, engineering and artistic innovations are made "on the shoulders of giants." As the saying goes: there is nothing new under the sun. Nothing. I suggest that the use of an LLM for writing is just another tool of human creativity to be used freely and often to produce even more interesting and valuable content.

  • pertymcpert 3 hours ago

    No that’s complete rubbish, it’s a bad analogy.

    • pessimizer 4 minutes ago

      Counterpoint: It's a fine thought, and an excellent analogy.

saint_fiasco 4 hours ago

I sometimes share interesting AI conversations with my friends using the "share" button on the AI websites. Often the back-and-forth is more interesting than the final output anyway.

I think some people turn AI conversations into blog posts that they pass off as their own because of SEO considerations. If Twitter didn't discourage people sharing links, perhaps we would see a lot more tweet threads that start with https://chatgpt.com/share/... and https://claude.ai/share/... instead of people trying to pass off AI generated content as their own.

  • Kim_Bruning 2 hours ago

    I think the problem is lazy AI generated content.

    The problem is that the current generation of tools "looks like something" even with minimal effort. This makes people lazy. Actually put in the effort and see what you get, with or without AI assist.

neilv 4 hours ago

I suspect that the majority of people who are shoveling BS in their blogs aren't doing it because they actually want to think and write and share and learn and be human; but rather, the sole purpose of the blog is for SEO, or to promote the personal brand of someone who doesn't want anything else.

Perhaps the author is speaking to the people who are only temporarily led astray by the pervasive BS online and by the recent wildly popular "cheating on your homework" culture?

Charmizard 4 hours ago

Idk how I feel about this take, tbh. "Do things the old way because I like them that way" seems like poor reasoning.

If folks figure out a way to produce content that is human, contextual and useful... by all means.

corporat an hour ago

The most thoughtful critique of this post isn’t that AI is inherently bad—but that its use shouldn’t be conflated with laziness or cowardice.

Fact: Professional writers have used grammar tools, style guides, and even assistants for decades. AI simply automates some of these functions faster. Would we say Hemingway was lazy for using a typewriter? No—we’d say he leveraged tools.

AI doesn’t create thoughts; it drafts ideas. The writer still curates, edits, and imbues meaning—just like a journalist editing a reporter’s notes or a designer refining Photoshop output. Tools don’t diminish creativity—they democratize access to it.

That said: if you’re outsourcing your thinking to AI (e.g., asking an LLM to write your thesis without engaging), then yes, you’ve lost something. But complaining about AI itself misunderstands the problem.

TL;DR: Typewriters spit out prose too—but no one blames writers for using them.

  • rideontime an hour ago

    For transparency, what role did AI serve in drafting this comment?

    • corporat an hour ago

      AI was used to analyze logical fallacies in the original blog post. I didn’t use it to draft content—just to spot the straw man, false dilemma, and appeal-to-emotion tactics in real time.

      Ironically, this exact request would’ve fit the blog’s own arguments: "AI is lazy" / "AI undermines thought." But since I was using AI as a diagnostic tool (not a creative one), it doesn’t count.

      Self-referential irony? Maybe. But at least I’m being transparent. :)

jquaint 4 hours ago

> Do you not enjoy the pride that comes with attaching your name to something you made on your own? It's great!

This is like saying a photographer shouldn't find the sunset they photographed pretty or be proud of the work, because they didn't personally labor to paint the image of it.

A lot more goes into a blog post than the actual act of typing the content out.

Lazy work is always lazy work, but it's possible to make work you are proud of with AI, in the same way you can create work you are proud of with a camera.

hereme888 2 hours ago

You are absolutely right!

Jokes aside, good article.

z7 4 hours ago

Hypothetically, what if the AI-generated blog post were better than what the human author of the blog would have written?

bhouston 4 hours ago

I am not totally sure about this. I think that AI writing is just a progression of current trends. Many things have made writing easier and lower-cost: the printing press, typewriters, word processors, grammar/spell checkers, electronic distribution.

This is just a continuation. It does tend to mean there is less effort to produce the output and thus there is a value degradation, but this has been true all along this technology trend.

I don't think we should be purists about how writing is produced.

throwawayffffas 4 hours ago

I already found it insulting to read SEO spam blog posts. The AI involved is beside the point.

causal 4 hours ago

LinkedIn marketing was bad before AI; now half the content is just generated emoji-ridden listicles.

saltysalt 4 hours ago

I'm pretty certain that the only thing reading my blog these days is AI.

magicalhippo 4 hours ago

Well Firefox just got an AI summarizing feature, so thankfully I don't have to...

bluSCALE4 4 hours ago

This is how I feel about some LinkedIn folks that are going all in w/ AI.

holdenc137 4 hours ago

I assume this is a double-bluff and the blog post WAS written by an AI o_O ?

npteljes 4 hours ago

I agree with the author. If I detect that the article is written by an AI, I bounce off.

I similarly dislike other trickery: ghostwriters, PR articles in journalism, lip-syncing at concerts, and so on. Fuck off, be genuine.

The reason people are upset about AI is that it can be used to easily generate a lot of text, but its usage is rarely disclosed. So when someone discovers AI usage, there is no way for the reader to tell how much of the article is signal and how much is noise. Without AI, that would hinge on the expertise or experience of the author, but with AI involved, all bets are off.

The other thing is that reading someone's text involves forming a little bit of a connection with them. Discovering that AI (or someone else) has written the text feels like a betrayal of that connection.

iMax00 4 hours ago

I read anything as long as there is new and useful information.

adverbly 4 hours ago

As someone who briefly wrote a bunch of AI-generated blog posts, I kind of agree... The voicing is terrible, and the only thing it does particularly well is replace the existing slop.

I'm starting to pivot and realize that quality is actually way more important than I thought, especially in a world where it is very easy to create things of low quality using AI.

Another place I've noticed it is in hiring. There are so many low-quality applications it's insane. One application with a full GitHub profile, a cover letter, and/or a video that actually demonstrates you understand where you are applying is worth more than 100 low-quality ones.

It's gone from a charming gimmick to quickly becoming an ick.

__alexander 5 hours ago

I feel the same way about AI-generated README.md files on GitHub.

OptionOfT an hour ago

What am I even reading if it is AI generated?

The reason AI is so hyped up at the moment is that you give it little and it gives you back more.

But then whose blog post am I reading? What really is the point?

latexr 5 hours ago

This assumes the person using LLMs to put out a blog post gives a single shit about their readers, pride, or “being human”. They don’t. They care about the view so you load the ad which makes them a fraction of a cent, or the share so they get popular so they can eventually extract money or reputation from it.

I agree with you that AI slop blog posts are a bad thing, but there are about zero people who use LLMs to spit out blog posts which will change their mind after reading your arguments. You’re not speaking their language, they don’t care about anything you do. They are selfish. The point is themselves, not the reader.

> Everyone wants to help each other.

No, they very much do not. There are a lot of scammers and shitty entitled people out there, and LLMs make it easier than ever to become one of them or increase the reach of those who already are.

  • babblingfish 4 hours ago

    If someone puts an LLM-generated post on their personal blog, then their goal isn't to improve their writing or learn about a new topic. Rather, they're hoping to "build a following" because some conman on Twitter told them it was easy. What's especially hilarious is how difficult it is to make money with a blog. There's little incentive to chase monetization in this medium, and yet people do it anyway.

  • JohnFen 5 hours ago

    > They are selfish. The point is themselves, not the reader.

    True!

    But when I encounter a web site/article/video that has obviously been touched by genAI, I add that source to a blacklist and will never see anything from it again. If more people did that, then the selfish people would start avoiding the use of genAI because using it will cause their audience to decline.

    • latexr 4 hours ago

      > I add that source to a blacklist

      Please do tell more. Do you make it like a rule in your adblocker or something else?

      > If more people did that, then the selfish people would start avoiding the use of genAI because using it will cause their audience to decline.

      I’m not convinced. The effort on their part is so low that even the lost audience (which will be far from everyone) is still probably worth it.

      • JohnFen 2 hours ago

        I was using "blacklist" in a much more general sense, but here's how it actually plays out. Most of my general purpose website reading is done through an RSS aggregator. If one of those feeds starts using genAI, then I just drop it out of the aggregator. If it's a website that I found through web search, then I use Kagi's search refinement settings to ensure that site won't come up again in my search results. If it's a YouTube channel I subscribe to, I unsubscribe. If it's one that YouTube recommended to me, I tell YouTube to no longer recommend anything from that channel.

        Otherwise, I just remember that particular source as being untrustworthy.
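
        For the curious, the RSS step is easy to script as well. Here's a minimal sketch that prunes blacklisted sources from an aggregator's OPML export; the blacklist entries and file names are made up for illustration:

          import xml.etree.ElementTree as ET
          from urllib.parse import urlparse

          # Hypothetical blacklist; swap in whatever sources you've flagged.
          BLACKLIST = {"slop-blog.example", "genai-content-farm.example"}

          def prune_opml(in_path, out_path):
              """Drop blacklisted feeds from an OPML subscription export."""
              tree = ET.parse(in_path)
              removed = 0
              for parent in tree.iter():
                  # Feeds are <outline> children carrying an xmlUrl attribute.
                  for outline in list(parent.findall("outline")):
                      url = outline.get("xmlUrl")
                      if url and urlparse(url).hostname in BLACKLIST:
                          parent.remove(outline)
                          removed += 1
              tree.write(out_path, encoding="utf-8", xml_declaration=True)
              return removed

          print(prune_opml("feeds.opml", "feeds.pruned.opml"), "feeds removed")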

  • YurgenJurgensen 4 hours ago

    Don’t most ad platforms and search engines track bounce rate? If too many users see that generic opening paragraph, bullet list and scattering of emoji, and immediately hit back or close, they lose revenue.

    • latexr 4 hours ago

      That assumes most people can detect LLM writing quickly, and I don't think that's true. In this very submission we see people referencing cases where colleagues couldn't tell something was written by an LLM even after reading the whole thing.

giltho 5 hours ago

Hey chatGPT, summarise this post for me

retrocog 5 hours ago

The tool is only as good as the user

nickdothutton 4 hours ago

If you are going to use AI to make a post, then please instruct it to make that post as short and information-dense as possible. It's one thing to read an AI summary but quite another to have to wade through paragraphs of faux "personality" and "conversational writing" of the sort that slop AIs regularly trowel out.

mucio 4 hours ago

It's insulting to read text on a computer screen. I don't care if you write like a 5-year-old or if your message will need days or weeks to reach me. Use a pen, a pencil, and some paper.

RIMR 3 hours ago

>No, don't use it to fix your grammar, or for translations

Okay, I can understand even drawing the line at grammar correction, in that not all "correct" grammar is desirable or personal enough to convey certain ideas.

But not for translation? AI translation, in my experience, has proven to be more reliable than other forms of machine translation, and personally learning a new language every time I need to read something non-native to me isn't reasonable.

deadbabe 3 hours ago

If you're going to AI-generate your blog, the least you could do is use a fine-tuned LLM that matches your style. Most people just toss a prompt into GPT-5 and call it a day.

wltr 3 hours ago

It's a cherry on top to see these silly AI-generated posts being seriously discussed on here.

frstrtd_engnr 4 hours ago

These days, my work routine looks something like this: a colleague sends me a long, AI-generated PRD full of changes. When I ask him for clarification, he stumbles through the explanation. Does he care at all? I have no idea.

Frustrated, I just throw that mess straight at claude-code and tell it to fix whatever nonsense it finds and do its best. It probably implements 80–90% of what the doc says — and invents the rest. Not that I’d know, since I never actually read the original AI-generated PRD myself.

In the end, no one’s happy. The whole creative and development process has lost that feeling of achievement, and nobody seems to care about code quality anymore.

4fterd4rk 5 hours ago

It's insulting, but I also find it extremely concerning that my younger colleagues can't seem to tell the difference. An article will very clearly be AI slop and I'll express frustration, only to discover that they have no idea what I'm talking about.

  • jermaustin1 5 hours ago

    For me it is everyone that has lost the ability to respond to a work email without first having it rewritten by some LLM somewhere. Or my sister who will have ChatGPT give a response to a text message if she doesn't feel like reading the 4-5 sentences from someone.

    I think the rates of ADHD are going to go through the roof soon, and I'm not sure if there is anything that can be done about it.

    • noir_lord 5 hours ago

      > I think the rates of ADHD are going to go through the roof soon

      As a diagnosed medical condition, I don't know; as people having seemingly shorter and shorter attention spans, we are seeing it already. TikTok and YT Shorts and the like don't help; we've weaponised inattention.

    • larodi 3 hours ago

      ADHD is very soon going to be a major pandemic. Not one we talk about much, as there are plenty of players ready to feed unlimited supplies of Concerta, Ritalin, and Adderall, among others.

    • mrguyorama an hour ago

      ADHD is a difference in how the brain functions and is constructed.

      It is physiological.

      I don't think any evidence exists that you can cause anyone to become neurodivergent, except by traumatic brain injury.

      TikTok does not "make" people ADHD. They might struggle to let themselves be bored and may be addicted to quick fixes of dopamine, but that is not what ADHD is. ADHD is not an addiction to dopamine hits. ADHD is not an inability to be bored.

      TikTok, for example, will not give you the kinds of tics and lack of proprioception that are common in neurodivergent people. Being addicted to TikTok will never give you that absurd experience where your brain "hitches" while doing a task and you rapidly oscillate between progressing towards one task vs. another. Being habituated to check your phone at every down moment does not make you unable to ignore sensory input because the actual sensory-processing machinery in your brain is not functioning normally. Getting addicted to TikTok does not give you a child's handwriting despite decades of practice. If you do not already have significant stimming and jitter symptoms, TikTok will not make you develop them.

      You cannot learn to be ADHD.

  • ehutch79 5 hours ago

    In the US (internet fact, grain of salt, etc.), there is a trend of students, and now adults, growing increasingly functionally illiterate.

  • Insanity 5 hours ago

    Or worse - they can tell the difference but don’t think it matters.

    • rco8786 5 hours ago

      I see a lot of that also.

  • noir_lord 5 hours ago

    I'd be curious to run a general study to see what percentage of humans can spot AI-written content vs. human-written content on the same subject.

    Specifically, is there any correlation between people who have always read a lot, as I do, and people who don't?

    My observation (anecdata) is that the people I know who read heavily are much better at spotting AI slop, and much more against it, than people who don't read at all.

    Even when I've played with the current latest LLMs and asked them questions, I simply don't like the way they answer; it feels off somehow.

    • mediaman 4 hours ago

      I both read a fair amount (and long books, 800-1,000 page classic Russian novels, that kind of thing) and use LLMs.

      I quite like using LLMs to learn new things. But I agree: I can't stand reading blog posts written by LLMs. Perhaps it is about expectations. From a blog post I expect to gain a view into an individual's thinking; from an AI, I am looking into an abyss of whirring matrix-shaped gears.

      There's nothing wrong with the abyss of matrices, but if I'm at a party and start talking with someone, and get the whirring sound of gears instead of the expected human banter, I'm a little disturbed. And it feels the same for blog content: these are personal communications; machines have their place and their use, but if I get a machine when I'm expecting something personal, it counters expectations.

    • strix_varius 5 hours ago

      I agree, and I'm not sure why it feels off but I have a theory.

      AI is good at local coherence, but loses the plot over longer thoughts (paragraphs, pages). I don't think I could identify AI sentences but I'm totally confident I could identify an AI book.

      This shows up both as opening a long text with a line of thinking that isn't reflected several paragraphs later, and as maintaining a repetitive "beat" in the rhythm of the writing that is fine locally but becomes obnoxious over longer stretches. Maybe that's just regression to the mean of "voice"?

parliament32 4 hours ago

I'm looking forward to the (inevitable) AI-detection browser plugin that will mark the slop for me; at least that way I won't need to spend the effort figuring out whether something is AI content or not.
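
In the meantime, the guts of such a plugin could start as a dumb heuristic scorer. A toy sketch, with the stock phrases, weights, and emoji ranges chosen arbitrarily; treat it as an illustration, not a real detector:

  import re

  # Entirely made-up tells; a real detector would need far more than this.
  STOCK_PHRASES = ["let's dive in", "game-changer", "delve into",
                   "it's worth noting", "in today's fast-paced world"]
  EMOJI_RE = re.compile(r"[\U0001F300-\U0001FAFF\u2700-\u27BF]")

  def slop_score(text):
      """Crude per-word rate of stock phrases and emoji in a chunk of text."""
      lowered = text.lower()
      phrase_hits = sum(lowered.count(p) for p in STOCK_PHRASES)
      emoji_hits = len(EMOJI_RE.findall(text))
      return (5 * phrase_hits + emoji_hits) / max(len(text.split()), 1)

  # A plugin would run this over page text and flag anything past a threshold.
  print(slop_score("🎯 Let's dive in! This game-changer will delve into AI ✨"))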

maxdo 5 hours ago

Typical black-and-white article capitalizing on the "I hate AI" hype.

Top articles with millions of readers are written with AI. It's not an AI problem, it's a content problem: if it's watery and not tuned for style, it's bad, same as with a human author.

latchkey 4 hours ago

As a test, I used AI to rewrite their blog post, keeping the same tone and context but with fewer words. It got the point across, and I enjoyed it more because I didn't have to read as much. I did edit it slightly to make it a bit less obviously AI-ish...

---

Honestly, it feels rude to hand me something churned out by a lexical bingo machine when you could've written it yourself. I'm a person with thoughts, humor, contradictions, and experience, not a content bin.

Don't you like the pride of making something that's yours? You should.

Don't use AI to patch grammar or dodge effort. Make the mistake. Feel awkward. Learn. That's being human.

People are kinder than you think. By letting a bot speak for you, you cut off the chance for connection.

Here's the secret: most people want to help you. You just don't ask. You think smart people never need help. Wrong. The smartest ones know when to ask and when to give.

So, human to human, save the AI for the boring stuff. Lead with your own thoughts. The best ideas are the ones you've actually felt.

marstall 3 hours ago

also: mind-numbing.

photochemsyn 4 hours ago

I like the author's idea that people should publish the prompts they use to generate LLM output, not the output itself.

throwaway-0001 4 hours ago

For me it's insulting not to use an AI to reply. I'd say 90% of people would answer better with an AI assist in most business environments, maybe even personal ones.

It's really funny how many business deals would go better if people put the requests into an AI to explain what exactly is being requested. Most people are not able to answer, and if they used an AI they could respond properly without wasting everyone's time. At least not using an AI reveals the competency (or rather, incompetence) level.

It's also sad that I need to tell people to put my message into an AI so they don't ask me useless questions. An AI can fill most of the gaps people don't get. You might say my requests aren't well written, but then how can an AI figure out what I want to say? I also put my own requests into an AI when I can, to create ELI5 explanations of them "for dummies".

portaouflop 5 hours ago

It's a clever post, but people that use AI to write personal blog posts aren't gonna read this and change their mind. Only people who already hate LLMs are gonna cheer you on.

But this kind of content is great for engagement farming on HN.

Just write “something something clankers bad”

While I agree with the author, it's a very tired and uninspired point.

Simulacra 5 hours ago

I've noticed this with a significant number of news articles. Sometimes it will say that it was "enhanced" with AI, but even when it doesn't, I get that distinct robotic feel.

futurecat 4 hours ago

Slop excepted, writing is a very difficult activity that has always been outsourced to some extent, whether to an individual, a team, or some software (spell checkers, etc.). Of course people will use AI if they think it makes them better writers. Taste is the only issue here.

amrocha 4 hours ago

Tangential, but when I heard the Zoom CEO say that in the future you'll just send your AI double to a meeting for you, I couldn't comprehend how a real human being could ever think that would be an ok thing to suggest.

The absolute bare minimum respect you can have for someone who’s making time for you is to make time for them. Offloading that to AI is the equivalent of shitting on someone’s plate and telling them to eat it.

I struggle everyday with the thought that the richest most powerful people in the world will sell their souls to get a bit richer.

voidhorse 2 hours ago

If you struggle with communication, using AI is fine. What matters is caring about the result. You cannot just throw it over the fence.

AI content in itself isn't insulting, but as TFA hits upon, pushing sloppy work you didn't bother to read or check at all yourself is incredibly insulting and just communicates to others that you don't think their time is valuable. This holds for non-AI generated work as well, but the bar is higher by default since you at least had to generate that content yourself and thus at least engage with it on a basic level. AI content is also needlessly verbose, employs trite and stupid analogies constantly, and in general has the nauseating, bland, soulless corporate professional communication style that anyone with even a mote of decent literary taste detests.

the_af 5 hours ago

What amazes me is that some people think I want to read AI slop on their blog that I could have generated by asking ChatGPT directly.

Anyone can access ChatGPT, why do we need an intermediary?

Someone a while back shared, here on HN, almost an entire blog generated by (barely touched up) AI text. It even had Claude-isms like "excellent question!", em-dashes, the works. Why would anyone want to read that?

  • CuriouslyC 4 hours ago

    In that case, I'd say maybe you didn't have the wisdom to ask the question in the first place? And maybe you wouldn't know the follow-up questions to ask after that? And if the person who produced it took a few minutes to fact-check, that has value as well.

    • the_af 4 hours ago

      It's seldom the case that AI slop requires wisdom to ask for, or is fact-checked in any depth beyond the cursory. Cursory checking of AI slop has effectively zero value.

      Or do you remember when Facebook groups and image communities were flooded with funny AI-generated meme images ("The Godfather, only with Star Wars", etc.)? Thank you, but I can generate those zero-effort memes myself; I also have access to GenAI.

      We truly don't need intermediaries.

      • CuriouslyC 42 minutes ago

        You don't need human intermediaries either; what's the point of teachers? You can read the original journal articles just fine. In fact, what's the point of any communication that isn't journal articles? Everything else is just recycled slop.

  • dewey 5 hours ago

    There are blogs that are not meant to be read, but are just content marketing designed to be found by search engines.

AnimalMuppet 3 hours ago

I mean, if you used an AI to generate it, you shouldn't mind if my AI reads it rather than me.

chasing 5 hours ago

My thing is: If you have something to say, just say it! Don't worry that it's not long enough or short enough or doesn't fit into some mold you think it needs to fit into. Just say it. As you write, you'll probably start to see your ideas more clearly and you'll start to edit and add color or clarify.

But just say it! Bypass the middleman who's just going to make it blurrier or more long-winded.

  • CuriouslyC 4 hours ago

    Sorry, but I 100% guarantee that there are a lot of people that have time for a quick outline of an article, but not a polished article. Your choice then is between a nugget of human wisdom that's been massaged into a presentable format with AI, or nothing.

    You're never going to get that raw shit you say you want, because it has negative value for a creator's brand: it looks way lazier than spot-checked AI output, and people see the lack of baseline polish and nope out right away, unless it's a creator they're already sold on (then you can pump out literal garbage; as long as you keep it a low percentage of your total content, you can get away with stuff new creators only dream of).

ericol 5 hours ago

> read something spit out by the equivalent of a lexical bingo machine because you were too lazy to write it yourself.

Ha! That's a very clever, spot-on insult. Most LLMs would probably be seriously offended by this, were they rational beings.

> No, don't use it to fix your grammar, or for translations, or for whatever else you think you are incapable of doing. Make the mistake.

OK, you are pushing it, buddy. My Mandarin is not that good; as a matter of fact, I can handle no Mandarin at all. Or French, for that matter. But I'm certain a decent LLM can handle the translation without me having to reach out to another person, who might not be available or have time to deal with my shenanigans.

I agree that there is way too much AI slop being created and made public, yet there are plenty of cases where the use is fair and improves whatever the person is doing.

Yes, AI is being abused. No, I don't agree we should all go Taliban against even the fair use cases.

  • ericol 5 hours ago

    As a side note, I hate posts where they go on and on and take three pages to get to the point.

    You know what I'm doing? I'm using AI to cut to the point and extract the info that's relevant (for me).

luisml77 4 hours ago

Who cares about your feelings, it's a blog post.

If the goal is to get the job done, then use AI.

Do you really want to waste precious time for so little return?

  • nhod 4 hours ago

    "I'm choosing to be 'insulted' by the existence of an arbitrary thing in the universe and then upset by the insult I chose to ascribe to it."

olooney 5 hours ago

I don't see the objection to using LLMs to check for grammatical mistakes and spelling errors. That strikes me as a reactionary and dogmatic position, not a rational one.

Anyone who has done any serious writing knows that a good editor will always find a dozen or more errors in any essay of reasonable length, and very few people are willing to pay for professional proofreading services on blog posts. On the other side of the coin, readers will wince and stumble over such errors; they will not wonder at the artisanal authenticity of your post, but merely be annoyed. Wabi-sabi is an aesthetic best reserved for decor, not prose.

  • keiferski 5 hours ago

    Yes, I agree. There's nothing wrong with using an LLM or a spell-checker to improve your writing. But I do think it's important to have the LLM point out the errors, not rewrite the text directly. This lets you discover errors but avoid the AI-speak.

  • CuriouslyC 5 hours ago

    The fact that you were downvoted into dark grey for this post on this forum makes me very sad. I hope it's just that this article is attracting a certain segment of the community.

    • olooney an hour ago

      I'm pretty sure my mistake was assuming people had read the article and knew that the author veered wildly halfway through, toward also advocating against using LLMs for proofreading and saying you should "just let your mistakes stand." Obviously no one reads the article, just the headline, so they assumed I was disagreeing with that (which I was not). Other comments that expressed the same sentiment as mine but also quoted that part did manage to get upvoted.

      This is an emotionally charged subject for many, so they're operating in Hurrah/Boo mode[1]. After all, how can we defend the value of careful human thought if we don't rush blindly to the defense of every low-effort blog post with a headline that signals agreement with our side?

      [1]: https://en.wikipedia.org/wiki/Emotivism