It's clearly not the MO that capable engineers want, but it's the MO that is getting funded right now.
Reading code carefully is harder than writing code unless the code is written consistently and clearly in a way that is idiomatic to the reader. And there's way more code to review now, but companies aren't scaling up the number of skilled engineers on staff. So in practice, never reading all of the diffs is the MO that will be built into code we depend on.
> It's clearly not the MO that capable engineers want, but it's the MO that is getting funded right now.
Quite a few capable engineers really are that short-sighted!
The bigger question for the AI techbro asking "If AI writes your code, why use Python?" is "If AI writes your code, what use do we have for you?"
After all, there are dozens of people in the same business who have better domain knowledge but are unable to program - as a programmer, the only value you added over random analysts and clerks was that you could automate shit.
Now you can't, so good luck competing with people who were already making half your salary when your largest value-prop is now gone.
> Enjoy this while it lasts, because very shortly LLMs are going to be fundamentally better at being the "idea guy" as well.
Maybe for the ideas that are so far from novel that there's a corpus of training data that could train an LLM to reproduce it. But for actually novel ideas, LLMs won't ever be much use. They can't interact with the world directly, they can only interact with text (or I guess bits).
And I guess that's not much of a distinction, as truly novel good ideas are very rare and you can go very far applying a good idea into a new domain. But at the edges where true novelty is required, LLMs will either hallucinate or guide you away from the edge.
Offloading cognition is what one does when they use abstractions that other people made through intense cognition. And it's fine to do that; people can build great things with great abstractions. A woodworker doesn't have to design and construct a tool to make great things with it.
But developing the people [who can build great new abstractions or the people who can build those abstractions into ergonomic tooling] involves a lot of cognitive struggle through which these people learn how to push knowledge forward.
Forming the mental models for how things work takes struggle. Debugging errors in your code forces you to figure out the disconnect between your mental model and reality.
Claude can figure out most errors I show it much faster than I can, but when we're building something I could build from scratch, I regularly find that even Opus 4.7 provides vastly overcomplicated and inferior solutions, and I have to redirect it. I assume this is also the case when we're building stuff that's new to me and I just can't recognize all of the overcomplicated, suboptimal solutions until I get to testing the behaviors I need to be correct. If I had gotten a tool like this at the start of my career or education, I just don't know how I wouldn't have ended up completely stunted.
Laws against impersonating law enforcement exist so that law enforcement officers can get compliance from people that they wouldn't be obligated to provide to regular civilians.
You can't impersonate something to a text editor as there's no special compliance you could get; WYSIWYG. But to a chatbot, you could get special compliance based on your identity.
So, what will AGI be able to do that will make that bet pay off? Human-like intelligence is already very common. Vastly better than human intelligence seems like it would be worth the expense, but I don't know where we'd get suitable training data.
The bet is that they perfect a new kind of neural network that is roughly as efficient at learning as the human mind, in terms of amount learned (or experience gained) per bit of information input.
Current LLMs are absolutely stupidly inefficient on this front, requiring virtually all human knowledge to train on as a prerequisite to early-college-level understanding of any one subject (granted, almost all subjects at that point, but what it has in breadth it lacks in depth).
That way, instead of running millions of TPUs over petabytes of data just to get a model that maintains an encyclopedia of knowledge with a twelve-year-old's capacity for judgment, that same training set and compute could (they hope) far exceed the depth of judgment, planning, and vision of any human who has ever lived (ideally while keeping the same breadth, speed of inference, etc.).
It's one of those situations where we have reason to believe that "exactly matching" human intelligence is basically impossible: the space of possible capability is exponentially large, and the human-level band is a tiny target within it. You either fall short (and it's honestly odd that LLMs were able to exceed animal intelligence/judgment while still falling short of average adult humans... even that should have been too small a target) or you blow past it completely into something that neither individual humans nor teams of humans could ever compete directly against.
Chess and Go are fine examples here: algorithms spent only very short periods at a level where they could compete reasonably well against human grandmasters. It was decades of falling short, followed quite suddenly by leaving humans completely in the dust, with no delusions of ever catching up.
That is what the large players hope to get with AGI as well (and/or failing that, using AI as a smoke screen to bilk investors and the public, cover up their misdeeds, play cup and ball games with accountability, etc)
I can't imagine the bots could ever cost McDonald's less than people cost.
15 years ago I worked at McDonald's for a few months after graduating into the Great Recession. I worked from 5am to 1pm-ish, 5 days a week. They paid workers weekly, and I remember getting those checks for ~$235 each week (for 38 to 39.5 hours a week; they were vigilant about never letting anyone get overtime). About $47 per day.
The federal minimum wage has not risen since then, remaining at $7.25/hr. Inflation adjusted, $7.25 today would have been just under $5 then, so I guess I had it good.
Anyway, I would be shocked if bots could cost less than labor in min wage jobs.
He had a very close, decades-long friendship with the most notorious sex-trafficker-of-children-to-rich-creeps in modern history. And when imprisoned, that infamous pedophile died in a federal prison under Trump's control, with a strange gap in the CCTV footage. And Trump's handling of the entire Epstein Files saga makes it clear that Trump is described extensively in those files and desperately wants to conceal it. What could be in there that he would use the entire Justice Department to try to redact? Trump is shameless about things that are legal even if they're salacious (like sleeping with porn star Stormy Daniels), so you have to wonder: what could Jeffrey Epstein's good friend be trying to conceal?
Also, he owned the Miss Universe org (including Miss USA and Miss Teen USA) for decades, and he was known to walk into the dressing rooms of teen contestants as young as 15 while they were undressed. [0]
Also, he bragged about molesting women, and a court of law found that he sexually assaulted E. Jean Carroll.
I haven't proven the case that Trump had sex with a minor, but there's way more than enough probable cause to believe it's more likely than not.
Imagine there's a camera continuously recording a cookie jar. A child eats all of the cookies and then deletes the footage from the time they ate the cookies. A parent returns to find their child covered in crumbs, loudly proclaiming they haven't eaten a cookie in years, actively interfering with the parent's investigation, and trying to distract from it by throwing a brick through the window of an Iranian family down the street.
Are any of the facts in this hypothetical "evidence"? With the knowledge of the truth (that the kid ate the cookies), it's clear these are all relevant pieces of evidence. If we take knowledge of the truth out of the equation, would these facts still be evidence? Unambiguously they would.
Definitionally both circumstantial and direct evidence are forms of evidence. No modifier is necessary.
And incidentally you can be convicted in a court of law purely on circumstantial evidence, and that's the place in society where we have the highest standard of proof. The evidence all being circumstantial is not a gotcha.
This reminds me of a talk I attended many years ago by the director of UChicago's writing program (I later found a recording of it [0]). His thesis was that writing IS the process of thinking. That talk changed the way I write and made writing a primary tool I reach for when I want to learn something new.
Words / language are the great technology we've made for representing ideas, and representing those ideas in the written word enables us to evaluate, edit, and compose those smaller ideas into bigger ideas. Kind of like how teachers would ask for an explanation in my own words, writing down my understanding of something I'd heard or read forced me to really evaluate the idea, focus on the parts I cared about, and record that understanding. Without the writing step, ideas would easily just float through my mind like a phantasm, shapeless and out of focus and useless when I had a tangible need for the idea.
I am glad I learned to write (both code and text) long before Claude came online. It would have been very hard to struggle through translating ideas from my head into words and words (back) into ideas in my head if I knew there was an "Easy button" I could hit to get something cogent-sounding. I hope a large enough proportion of kids today will still put in the work and won't just end up with a stunted ability to write/think.
I'm not sure - being able to take something like a casual response to a post, and then changing it to iambic pentameter with the easy button could be a great way of learning how to do that off the cuff.
Though I’m unsure, this notion comes to mind:
to take a casual reply to a post
and turn it, with an easy button’s press,
to flawless iambic pentameter
might be the finest way to learn the art
of speaking thus extempore, off the cuff.
It's not perfect, but I envy the wealth of tools this generation has. They'll find uses for AI that leave us in awe.
You can do that interactively with LLMs. Instead of aiming for a finished product you ask a question, then refine your understanding with more questions.
"So if that is true then this next statement is also true..." and the LLM will either agree or disagree.
There are lines between writing as a persuasive medium, writing as a didactic medium for teaching, writing as a creative/poetic medium, writing as the process of creation of marketable products, writing as a shared summary of specialist niche knowledge, and writing as an aid to personal comprehension.
Those are fundamentally different activities. They happen to use the same medium, and there are some overlapping areas. But they're essentially different activities with different requirements and different processes.
There's also the point that LLMs can give you explicit control over features like reading age, social register, metaphor frames/themes/imagery, sentence structure, grammatical uniqueness, rhythmic variation, and other linguistic markers.
The generic templated slop styles - rule of three, it's not this it's that, bullet points, "that's rare", strained weird or cringey similes, and the other tics - that appear all over social media are the low-skill default for AI writing. It doesn't have to be that crude or obvious, and learning how to push it beyond that is a skill in itself.
As is creating knowledge engineering systems that use agents to manage knowledge in useful ways, with writing as one possible output.
> There's also the point that LLMs can give you explicit control over features like reading age, social register, metaphor frames/ themes/imagery, sentence structure, grammatical uniqueness, rhythmic variation, and other linguistic markers.
You already have this. Control over your writing is the default position.
> You can do that interactively with LLMs. Instead of aiming for a finished product you ask a question, then refine your understanding with more questions.
Yeah, I regularly spend a lot of time with Claude fleshing out ideas and scoping out features. I'm behind the times and just use the chat interface rather than Claude Code, so perhaps there are controls I'm not aware of, but there can't be any that make it correctly understand an under-specified idea, or even correctly understand an adequately specified idea.
For example, I've been playing around building a side project that involves building out a safety-weighted graph to support generating safer bike routes. I was recently working on integrating traffic control devices (represented as OpenStreetMap nodes) into the model where I calculate weights for my graph (I essentially join the penalty for the traffic control device onto the destination end of an edge). Claude kept wanting to average that penalty over the length of the edge, which makes sense for some other factors in my model (like crashes, surface material, max speed, etc.) but doesn't make sense for traffic control signals at intersections, as the length of an edge shouldn't change the risk a cyclist experiences going through an intersection. If I didn't have a well-developed ability to closely parse words into ideas, I could have very easily just taken the working model Claude generated and built more on top of it, setting up a dangerous situation where the routing algo would promote routes running a user through more intersections (which are the most dangerous places for cyclists).
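To make the distinction concrete, here's a minimal sketch of the two weighting approaches (all names are illustrative, not from the actual project): per-length risk factors scale with edge length, while a traffic-signal penalty is a fixed cost joined once onto the destination end of the edge.

```python
def edge_weight(length_m, per_meter_risk, signal_penalty=0.0):
    """Weight for one directed edge in a safety-weighted routing graph.

    length_m       -- edge length in meters
    per_meter_risk -- combined per-meter risk factor (surface, max speed,
                      crash history, etc.), which rightly scales with length
    signal_penalty -- fixed penalty when the destination node is a traffic
                      control device; applied once, NOT averaged over edge
                      length, because crossing an intersection carries the
                      same risk whether the approach edge is 10m or 500m
    """
    return length_m * per_meter_risk + signal_penalty


# A 10m approach and a 500m approach to the same signal both pay the
# full signal penalty on top of their length-scaled risk.
short = edge_weight(10, 0.02, signal_penalty=5.0)   # 0.2 + 5.0
long = edge_weight(500, 0.02, signal_penalty=5.0)   # 10.0 + 5.0
```

If the penalty were instead divided by edge length, long approach edges would dilute it toward zero, and a shortest-weight search would happily route riders through more signalized intersections.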
I hope a comparable proportion of kids coming up today will spend the time and energy to understand the ideas behind the text and the code, but I really doubt 18-year-old me would have had that wisdom. I would have been underspecifying what I wanted out of a lack of prerequisite knowledge, receiving slop, and either promptly getting lost in debugging hell or, more likely, the worse case of erroneously believing the slop satisfied the brief.
> There are lines between writing as a persuasive medium, writing as a didactic medium for teaching, writing as a creative/poetic medium, writing as the process of creation of marketable products, writing as a shared summary of specialist niche knowledge, and writing as an aid to personal comprehension.
> Those are fundamentally different activities, They happen to use the same medium and there are some overlapping areas. But they're essentially different activities with different requirements and different processes.
In all of those areas, if you take away the human who can develop value-creating ideas into an accurate and high-fidelity written representation, you will just get slop. Developing ideas and representing them in words is the skill. There is no substitute.