Hacker News | vultour's comments

My colleague had a problem with commit messages, so now they're all written by AI. I don't know what depth of hell he managed to get the prompt from, but they're all now in the format "Updated /path/to/file: fixed issue in thingamabob", which means they're all at least 200 characters long and half of each one is the file path, an absolutely pointless thing to put in a commit message. The best part is that in GitLab or GitHub, instead of seeing the commit message next to the file you just see the file name again, and then the message is cut off.

Out of curiosity, with it taking up that much of the message, is it the local file path on their computer?

There is simply no chance that LLMs are saving you 30 hours of work a week, especially if they're doing something where you'd have to do the research yourself. Either you're just simply wrong, or you went from understanding the code you were writing to skimming whatever the magic box spits out and either merging it outright or pawning off the effort of review on someone else.

That's why I gave a range. I didn't say it's saving me 30 hours every week; I said 10 to 30 hours a week. So 30 is the top of the range, and the distribution is skewed heavily toward the low end. It really depends on what I'm doing, but I do think there are weeks where it has saved me 75% of the time I would otherwise have spent. I think there are two kinds of weeks where this is the case:

1. A week where I would otherwise have spent the majority of my time writing and refactoring a lot of implementation code. This is very rare for me, but it does happen. I can remember how it could take me a whole week just to "code up" meaningfully sized prototypes or greenfield implementations of some unambiguous thing. Truly, for that kind of work, Claude Code can save me full days of mechanical labor.

2. A week where there is something very subtle going on that I have to figure out, probably having to do with some component or system I'm not very familiar with yet. Having an AI tool as a rubber ducky, or like a supercharged stackoverflow, can save me days of reading, debugging, working on minimal repros, etc.

Again, I'm not saying this is the common case at all. And estimating this kind of thing is always wildly inaccurate, so sure, take it with a grain of salt. But I know that a few times now, doing estimates based on my past experience, I've said "that will take me a week" (in case #1) or "gosh, I dunno, that's a tricky one, that might take me a week to figure out" (in case #2), and instead it only took me a day.

But honestly I think people focus too much on the high end of this range. The more valuable thing to me is the large number of weeks where it saves me that 10 to 15 hours, where I can then use that time to research new things, try more ideas, say "yes" to more things, or just not spend that time working.


Are you ashamed of other people finding out you used Claude? I think the co-authored-by bit should not be a setting at all, AI-generated code should be clearly identified.

I use Claude at work. I've never instructed it to make a commit, and it's never attempted to make one. It would fail anyway because my commits are signed by Yubikey and it requires presence detection, so I have to tap it.
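A minimal sketch of that kind of setup: with commit signing enforced and the key held on a touch-required hardware token, unattended commits simply fail. The key ID below is a placeholder, and this uses per-repo config rather than anyone's actual global settings:

```shell
set -eu
repo=$(mktemp -d); cd "$repo"; git init -q
# Placeholder key ID; a real setup points at a key stored on the YubiKey.
git config user.signingkey ABCDEF1234567890
# Require a signature on every commit in this repo.
git config commit.gpgsign true
# With touch policy enabled on the token, each signature blocks
# until a physical tap, so an agent can't commit on its own.
git config commit.gpgsign   # prints: true
```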

But I don't want it to make commits, and I don't want to review its code in the Claude Code TUI, either. I want to read its changes in my text editor, decide what to drop or revise or revert, and then stage individual hunks or regions into logical commits.
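The workflow described above (review the whole diff first, then stage only what belongs in each logical commit) can be sketched with plain git; the file names, contents, and messages here are made up. Interactively you'd use `git add -p` to pick individual hunks; the sketch stages per-file so it runs unattended:

```shell
set -eu
repo=$(mktemp -d); cd "$repo"; git init -q
git config user.email dev@example.com
git config user.name Dev
printf 'fn a\n' > core.c
printf 'notes\n' > notes.txt
git add -A && git commit -qm 'initial'

# Agent edits land in the working tree, unstaged:
printf 'fn a\nfn b\n' > core.c
printf 'notes\nscratch\n' > notes.txt

# Review everything first...
git diff --stat

# ...then stage only what belongs in this logical commit
# (interactively: `git add -p core.c` to pick hunks).
git add core.c
git commit -qm 'core: add b'

# notes.txt stays out of the commit, still dirty in the tree.
git status --short
```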

If anyone asks I'll tell them I used an LLM, idc. I often mention it in commit messages or PRs. But I don't want LLM agents to write commits at all.


Basically what you’re saying is that if AI does anything on your computer, you should lose control over everything it impacts. If the AI touched it at all, big or small, you now lose ownership of the actions your computer takes (in open source tools, I might add).

In case you need reminding of common sense, I’m supposed to be allowed to decide what my commit messages are because it’s my fucking computer.

I prefer that my software is not a morality police.


It's mind-boggling that people are trying to hide this; it tells you all you need to know about our “profession.” The presence of that hook, or anything like it, in a place of business should be a fireable offense.

> AI-generated code should be clearly identified.

Let AI autonomously produce code of a quality that I care about and I might consider giving it credit. I don't know how other people write code but I come up with an idea and use a multitude of LLMs to brainstorm a reasonably comprehensive spec that any reasonably competent person can read and produce a working program from, including a locally working Q2 quant of Qwen 3.6. Even Kimi is as good as Claude at most coding tasks, and I don't see why any single agent deserves any credit for my design.

Let artists and filmmakers start watermarking their output with the tools they use and I might reconsider my decision.


> Let artists and filmmakers start watermarking their output with the tools they use and I might reconsider my decision.

They do, though, in the form of metadata.


Do Adobe or Arri or Red get authorship credit for the work their hardware and software do on projects? After all, artists would not be able to produce a single pixel without them. In a similar vein, you could make the argument that modern farming is sitting on your ass in your modern tractor while software handles most of the work. Does John Deere get rights over a quarter/half your harvest?

I am stuck between the luddites and the "artisanal" coders on this one. LLMs are neither as smart/useful nor as dumb/useless as people think. Unless your job involves producing useless garbage every single day, good software requires a lot of thought before the first line of code is even written. For those with serious domain knowledge, that thinking time can be compressed into minutes or hours rather than the days or weeks it might otherwise take.

LLMs are a tool. You either pay for them or you run the freely available ones on your own hardware. As long as it is directed by my thinking, the output belongs to me. If it were up to me, I would abolish IPR (and even permanent ownership of land) as a category altogether, but that is a different discussion.


I think the Linux kernel's standard of disclosure via the "Assisted-By" trailer is the right move.

Makes it clear you used a bullshit machine, without implying it's an author.

...assuming you think using them at all is a good move - I won't deny they have some utility (though I'd argue much lower than many seem to think), but I do presently believe they're a disaster for humanity.

The ruination of the Internet with slop, the massive propagation of propaganda, and the insanely easy-to-wield tools for abuse are in no way worth the ability to accrue tech debt at 10x velocity (though to be clear, accruing tech debt can absolutely be a useful strategy, if one I personally dislike).


I'm sorry but none of this sounds in any way exciting or like a breakthrough. There are ASML machines that hit microscopic tin particles with a laser 50,000 times per second, but it's somehow an achievement we've managed to create a ping pong paddle that's fast enough to hit a ball? Precision robotics have been used in manufacturing for decades.


Is this a joke? You proclaim your support for a party that proudly posts AI-generated pictures of Obama as a monkey, shits out vitriol-filled messages on literally every holiday, and sends the gestapo to execute American citizens in the streets, and then you demand civil discourse? I'm sorry but that ship has sailed, there is no reason why someone should maintain a civil discussion with you.


Because if it’s not an LLM it’s not good for the current hype cycle. Calling everything AI makes the line go up.


LLMs also make the cynicism go up among the HN crowd.


Hm. Is HN starting to become more skeptical of LLMs? For the past couple of years, HN has seemed worryingly enthusiastic about LLMs.


How so? Half the people here exhibit LLM delusion in every thread; more than half of what reaches the frontpage is AI. Just look at the hours when Americans are awake.


Fucking Americans. Only 4% of the world population, yet they somehow manage to disproportionately dominate the global news headlines that make their way here.

It’s impressive, honestly.


These have been popping up on all the TeamPCP compromises lately


A bunch of AquaSec stuff has been getting compromised since the initial incident at the end of February. Apparently in the latest attack they managed to compromise their internal organisation: https://opensourcemalware.com/blog/teampcp-aquasec-com-githu...


I spend probably thousands of hours in Firefox every year and I don't think I've ever had it crash.


Same. I don't think I've had a crash in 10+ years.


Same for me; it simply never crashes in my day-to-day use. That doesn't mean there aren't idiosyncratic cases out there, but anecdata can easily paint any number of pictures.


The comment perfectly exemplifies the kind of person who would work at OpenAI. Government AI drones could be executing citizens in the streets and they’d still find some sort of cope for why it’s not a problem. They’ll keep moving the goalposts as long as the money keeps coming.

