Hacker News | godelski's comments

I'm not convinced. To me it looks like the bulls are throwing money and priding themselves on the bits that land back on it (or come from the next table over). It looks very chaotic and wasteful to me

The problem is that from the outside it seems like Microsoft no longer cares about the product. So much so that "the product" has become "shareholders"[0].

We've just been moving into a world where metric hacking is the desired outcome, not an outcome to try to avoid. These companies are only surviving because of their monopoly statuses. Because of momentum. It's a powerful force. It's the reason Twitter still is around. The reason Facebook is still around. But them being around doesn't mean they're good. It doesn't mean they're useful. It doesn't mean it is a good product. It doesn't mean the users like it. It just means people are used to the way things are and they aren't angry enough to leave for something else. But these companies are actively creating friction for users, daring them to leave, gouging them for everything they can. FFS Microsoft is the largest contributor (even more than Valve) to creating "the year of linux". Sure, it'll never have M$FT's market share, but it sure is eating into their revenue.

We've all lost sight of what made software so powerful in the first place. Why it became so successful and changed the world. We used to ship good products that help people, make their lives better, and make lots of money in the process. Now, I think all that anyone cares about is the last part. Now we're actively being hostile to those that make the systems better. And that system is fucked up and will destroy itself. That's not a good thing, because it does a lot of damage along the way. It is a system of extreme myopia.

In the last 5 years I'd argue that most software has made my life harder and more complex, not easier. There are definitely exceptions to this (ghostty being a great example), but there is a strong trend. I know I'm not alone in this feeling and I think we're getting to a point where a lot of people are no longer willing to dismiss their own gripes. This is not a good sign...

I'm glad you're optimistic. I do hope things can change. And my frustration is not directed at you. I really do want you to be right and I really do want to see change come from the inside. But I do not think those leading the companies now have any foresight. To be honest, I'm not even sure there's anyone at the wheel. It feels like we've just let the market forces steer the ship. If the currents steer the ship, then there's no captain, regardless of who claims the title. Frankly, I don't want to be on a ship without a captain, but here we are.

[0] https://www.youtube.com/watch?v=YZFTaEenaHM



  > It's messy, subject to commercial pressures, to a hierarchy of values that doesn't place "refactoring" at the top of the list. And why would it?
Probably because it's a good way to be more profitable.

Code that's easier to understand is easier to maintain, easier to add new features to, easier to fix bugs in, and easier to onboard new engineers onto.

Code that's well written executes faster (saving compute costs), scales better, is more robust (higher uptime), uses less bandwidth, and so on.

The thing is the business people will never understand this. Why would they? They're not programmers. They're not in the weeds. But that's what your job is as an engineer. To find all these invisible costs.

I'm pretty confident the industry is unnecessarily spending billions. Hell, I'm sure Google alone is wasting over $100m/yr due to this.

Don't be penny wise and pound foolish. You're smarter than that. I know everyone here is smarter than that. So don't fall for the trap


Save us the patronizing tone.

I am well aware of stupidity in industry. However, I am also wise enough to recognize the opposite error. (I myself have academic tendencies and a background aligned with that. I have chosen jobs that paid less, because the subject matter was more interesting to me. I'm not some vulgar, money-chasing techbro here.) The via media demands that we recognize the distinction between general truths and practical realities. As I wrote elsewhere in this thread, yes, properly refactored code is easier to maintain, easier to read, easier to change, and theoretically, commercially preferable. It also makes programming more satisfying, helping retention. But that describes a feature of such code. It doesn't tell us what the right course of action is in a particular situation. The notion that refactoring is unconditionally the right course of action when code is not in some ideal state is simply wrong. It really does depend on the situation. Sometimes, refactoring is the wrong thing to do.

I'm not making some outrageous claim here. This follows from basic truths about the nature of what it means to be practical, and if industry is anything, it is practical.


The professor is obviously not advising naive absolutism. He’s saying care deeply about your craft, and good judgement will follow from that.

Actually caring is what gives someone the itch to go back and improve things, versus happily calling it a day once minimum acceptable value has been delivered. The rampant enshittification of basically everything should make it clear which disposition is in short supply.

> Have the courage to go slowly, especially when everyone else is telling you that you need to go fast and cut corners.

The advice is aimed at students who haven’t yet decided which type they want to be. In fact it’s directly telling them to think for themselves and not blindly listen to you or anyone else here making the same case.


  > Save us the patronizing tone.
If you come out swinging you can't get mad when others swing back. You're not a victim, you're an instigator. You called danny_codes flippant for suggesting there are different biases. You called it absurd. You escalated it. And then you escalated it again.

  > It doesn't tell us what the right course of action is in a particular situation.
That's because there is never an objectively correct course of action. There is no optimal solution. In fact, there can't be when the situation evolves. The objective isn't even defined, let alone well defined. I don't understand your point because no one was suggesting it was always the right answer. Don't strawman here. Of course it depends on the situation, that's true about almost everything. It doesn't need to be said explicitly because it's so well understood. Don't inject absolute qualifiers into statements that don't have them.

  > I'm not making some outrageous claim here.
Your current claim? No. To be frank, you didn't claim much. But your prior claim? Yes. Yes you were. You were creating a strawman then, just as you are now.

  >> Unlike algorithms and principles and even techniques, software is not eternal. 
Not even algorithms are eternal. But I'm going to assume you mean the types of algorithms you see in textbooks, because interpreting "algorithms" by its actual definition makes your comment weird, since all programs are algorithms.

  >> [Software] is ephemeral. Its shelf-life is bounded.
And this is going to be something nearly everyone is already going to assume. It doesn't need to be stated. It doesn't need to be differentiated because it is already the working assumption.

  >> You're not refining some theory or some grasp of a Platonic ideal
And this is the real strawman. You've made a wild assumption about what others are claiming. There is such a wide range of viewpoints between "the way things are done now" and "chasing perfection." Anyone who thinks perfection exists in code is incredibly naive. You and I both know this, and so does anyone working in industry or academia (save maybe some juniors). There's a huge difference between saying "this isn't good enough" and "it's not good until it's perfect." If someone talks about climbing a mountain you can't respond by saying it is impossible to climb to the moon.

  >> Whether you should refactor something, when you should refactor something, is a matter of prudential judgement, which is to say, of practical reason.
Whether you should do anything is a matter of prudential judgement. It's wild to say this while accusing people of chasing perfection. You think people are just yoloing their way to perfection?! Seriously? The article and thread context is literally asking that people use more prudential judgement. To not be myopic. And you have the audacity to say "think about it". What do you think we're doing here?

  > Who is going to read your carefully crafted documentation lol?
Everyone that uses or works in your codebase.

Look at how people use LLMs these days. People frequently use them on new codebases to get up to speed on the code. Frankly, because it's a lot faster than grepping, profiling, and all the digging we'd normally do (though those still have benefits and you're still going to do them. Hell, the LLMs even do them). But how much of that could have been avoided had people just taken a few seconds to document their code? No one is saying to sit down and document the whole thing, just "add a few comments when you add new functions" or "update comments in places you touch". If it costs you more than a minute of your time you're probably doing it wrong.

I'm tired of these arguments. People are turning molehills into mountains. It's so incredibly myopic. We waste so much fucking time on things because we're trying to move fast. But no one seems to understand the difference between speed and velocity. It never mattered how fast you go, it has always been about velocity. Going fast in the wrong direction is harming you, not helping. If you don't have the time to know if you're headed in the right direction or not then you're probably not.

  > Outside of the bit on avoiding cutting corners
But your gripe is with cutting corners. Not documenting? That's cutting corners. Not refactoring? That's cutting corners. Not spending time understanding the code at multiple scopes? That's cutting corners.

Those are all corners cut that end up wasting tons of man hours. Sure, they save you a few precious seconds or minutes now, but at the cost of hours or days in the future.

Here's the thing: if you don't take those shortcuts, then none of those tasks are hard. Even refactoring. But as soon as you start taking those shortcuts they start compounding. Then a year down the line your company is writing a blog post about how your code is 500x faster now that it's written in Rust (or whatever the cool kids use). If it's 500x faster, that's not because of a language change, it's because of tech debt. And like all debt it accumulates little by little, and it's the compounding interest that really kills you.
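To put a toy number on that compounding (the 2% per-change drag here is made up, purely illustrative, not a measurement of any real codebase):

```python
# Toy model: every shortcut adds a small drag (a hypothetical 2%) to the
# cost of all future changes, so change n costs DRAG**(n-1) units.
DRAG = 1.02
costs = [DRAG ** (n - 1) for n in range(1, 101)]
print(f"change #100 costs {costs[-1]:.1f}x change #1")         # ~7.1x
print(f"total effort vs. debt-free: {sum(costs) / 100:.1f}x")  # ~3.1x
```

The exact rate doesn't matter; the point is that a per-change overhead multiplies, it doesn't add.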

Sorry, I'm tired of cleaning up everybody's messes. Go ahead, move fast and break things. It's a great way to learn (I do it too!), but don't make others clean up your mess.

Stop buying into this bullshit of needing to move so fast. It's the same anti-pattern scammers use to get you to make poor decisions. Stop scamming yourselves


this resonated for me, quite hard actually. there's the famous quote which has always stuck with me on this stuff: "slow is smooth, smooth is fast."

thinking about it a little more, i would personally prefer to use the term momentum rather than velocity or just plain speed -- we accrue more mass by adding code, features, etc. and shifting direction/increasing speed are both harder with greater mass.


I think mass and momentum are appropriate. I use them when talking about this too.

Given your username and the topic, I think you'll enjoy this read: https://www.cs.utexas.edu/~EWD/transcriptions/EWD02xx/EWD288...


I'm really unfamiliar with this playbook and how America has used it. Do you have any examples? I can't seem to find any

I think the occasional joke is fine but when you have too many then the comments get diluted. It's exactly that kind of thing that makes me hate Reddit and so many other places: spam.

  > Claude Code used bash to make edits anyway.
If you had the former rule why would you ever whitelist bash commands? That's full access to everything you can do.

Same goes for `find`, `xargs`, `awk`, `sed`, `tar`, `rsync`, `git`, `vim` (and all text editors), `less` (any pager), `man`, `env`, `timeout`, `watch`, and so many more commands. If you whitelist things in the settings you should be much more specific about arguments to those commands.
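To make this concrete, here's a small demonstration (Python used only as a driver; the specific command lines are illustrative, not taken from anyone's actual settings) that several of these innocuous-looking tools will happily execute an arbitrary program:

```python
import subprocess

# Each of these commonly-whitelisted tools can launch an arbitrary program.
# The payload here is just `echo`, but it could be anything.
escapes = [
    ["find", "/tmp", "-maxdepth", "0", "-exec", "echo", "escaped via find", ";"],
    ["awk", 'BEGIN { system("echo escaped via awk") }'],
    ["env", "echo", "escaped via env"],
]
for cmd in escapes:
    out = subprocess.run(cmd, capture_output=True, text=True).stdout.strip()
    print(out)
```

`xargs`, `git` (via hooks and aliases), pagers (via `!` commands), and editors all offer similar escape hatches, which is why argument-level restrictions matter.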

People really need to learn bash


At some point you need to get things done.

There's no point in getting things done if there's nothing that ends up being done.

You can still get shit done without risking losing it all. Don't outsource your thinking to the machine. You can't even evaluate if what it is doing is "good enough" work or not if you don't know how to do the work. If you don't know what goes into it you just end up eating a lot of sausages.


  > FaceID only unlocks if you actually look at the screen.
You need "Require Attention for Face ID" turned on for this

  > Overall it seems pretty unscientific.
I'd agree with all your points and add some things to help people better "sniff-test" these kinds of papers.

  1) The paper is suggesting aliens... your suspicion hat should always go on
    - Carl Sagan said: "Extraordinary claims require extraordinary evidence". Is the evidence extra-ordinary?
  2) The authors aren't experts
    - Stephen Bruehl: A doctor of Anesthesiology
    - Brian Doherty: "Independent Researcher"[0]
    - Alina Streblyanska: Actually maybe an astrophysics researcher?[1]
    - Beatriz Villarroel: The top Google hit for her is a UFO wiki[2]
  3) Authors don't share affiliations
    - Corresponding author has no domain expertise and no clear affiliation to the others.
  4) Authors have hints of metric hacking
    - Villarroel has 8 self-citations in a paper with only 18 references[3]
  5) The GitHub repo is dead: https://github.com/dca-doherty/VASCO-ML
None of these things are enough to conclude that the paper is wrong, but they are red flags and don't require actually understanding any of the details of the paper.

If you do understand statistics there are clearly more red flags. The +/- windowing is a pretty big one, since there are much better tools for this (errors don't need to be symmetric! Nor do they need to be uniform!). There's also a pretty big assumption that cshimmin didn't mention: the paper assumes all nuclear tests are in the public record. But I also assume that if you have a strong statistics background there's a high probability you didn't upvote the post.

[0] The man has effectively no online presence. Google searching his email yields effectively nothing except people posting about this paper in UFO groups (https://www.google.com/search?q=%22briandohertyresearch%40gm...). His linked GitHub also makes him anonymous (https://github.com/dca-doherty/) and his website linked is just about finding day care in Texas. He has one more paper on ArXiv, but it is from a few weeks prior

[1] Found their Linkedin (https://www.linkedin.com/in/alina-streblyanska-95b2375b/). Their most recent paper is also on UAPs, along with Villarroel. But also, they work for "Society of UAP Studies", which should be a big red flag. Also, they were working as a Post-doc for 12 years, which is a bit insane

[2] https://www.wikidisc.org/wiki/Beatriz_Villarroel and here's her Google Scholar https://scholar.google.com/citations?user=_Jc8gm0AAAAJ

[3] I looked at some other papers of hers and they show a similar pattern. This explains her citation count (which is rather low) and h-index (it's better to just click on the references and you'll see it's predominantly her referencing herself):

  - 2602.15171: 9 citations total, 8 are hers
  - "A cost-effective search for extraterrestrial probes in the Solar system" has many more, but still 6 to herself (and 3 to Loeb)
  - Transients in the Palomar Observatory Sky Survey (Yes, this is in "Nature"): 20 citations, 5 hers
  - Aligned, Multiple-transient Events in the First Palomar Sky Survey: 11/36
  - On the Image Profiles of Transients in the Palomar Sky Survey: 5/5
  - A Civilian Astronomer's Guide to UAP Research: 7/98 (actually not a red flag, but the title sure is...)
  - and so on
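As a quick sanity check on those ratios (computed from the counts quoted above; titles abbreviated, counts not independently verified):

```python
# (self_citations, total_references) as quoted in the comment above
papers = {
    "2602.15171": (8, 9),
    "Transients in the Palomar Observatory Sky Survey": (5, 20),
    "Aligned, Multiple-transient Events": (11, 36),
    "On the Image Profiles of Transients": (5, 5),
    "A Civilian Astronomer's Guide to UAP Research": (7, 98),
}
for title, (self_cites, total) in papers.items():
    print(f"{self_cites / total:4.0%}  {title}")
```

Three of the five are above 30% self-citation, which is the pattern being flagged.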

Not gonna lie, the first thing I noticed was that the first author was in an anesthesiology department. Your guidelines for sniff-testing are not unreasonable, and can definitely be helpful to people who are unfamiliar with the research area. But I quite intentionally did not appeal to any of those. As a (somewhat) subject matter expert, it's important to _ignore_ things like ad hominem judgement, and instead address the paper on its self-contained merits. And more importantly, to share my assessment of those with the lay public.

I'm glad you did it that way. I hope my comment works well as an addendum to your type of comment. I don't think it would have worked well on its own, nor prior to yours. Especially since nothing I said is an absolute rule that allows one to reject a work. But this paper sure does smell suspicious. I think it's good to have the stronger reasons to be suspicious, and then some softer flags to navigate unfamiliar territory.
