Hacker News — globnomulous's comments

> It's one thing to memorize arguments in favour of a position. It's another to actively defend your positions against those aggressively invested in proving you wrong. John Stuart Mill argued that only the latter activity produces the real understanding that allows an argument, or a tradition, to be renewed and kept alive across generations against constant attempts at refutation. If you are regurgitating a stance instead of actively fighting to defend one, do you really believe in what you are saying?

A person generally cannot effectively, fluently, convincingly regurgitate an argument without understanding it, and the act of memorizing a variety of different positions primes the brain to handle all of them with greater depth and adroitness. Mill greatly underestimates the power and benefits of memorization.

I think most people would agree that memorization and a standardized 'one-size-fits-all' approach are inferior to teaching methods that are (ostensibly) creative, 'active,' and individualized.

I couldn't disagree more strongly. It's a false dichotomy. All learning -- all -- starts from and depends upon memorization. Is that the only goal? Obviously not, but memorization gets a bad rap because it's viewed, incorrectly, as contrary to or in competition with more active, creative intellectual enterprises.


I once heard a lecture by a (famous) college professor who talked about the large numbers of students who failed (college) Algebra 1.

His argument was: you cannot memorize algebra, you have to understand. Students who are failing in college do so because they do not understand the fundamentals, and try to memorize enough to succeed - not realizing that the effort needs to go somewhere else.

Rule 1 of memorization is "do not [memorize] if you do not understand". [1] (Note: that source uses the word "learn" instead of "memorize", but to me the word learn means come to understand.)

There is a role for memorization and rote repetition, but it is not the foundation of understanding.

[1]: https://super-memory.com/articles/20rules.htm


As a teacher, I feel this is wrong. A lot of students fail by trying too hard to understand.

They listen in class, then read the text and notes you posted, then watch a Youtube explanation, then ask Chat, then ask you questions ... anything to avoid trying to do a few practice questions where they might make a mistake.

It's like watching people try to learn to play basketball when they are afraid of shooting hoops in case they miss a shot. So they watch videos or read books to really understand how to shoot hoops. And then fail miserably when they are tested.

OK, you could argue that exercises build a type of understanding, and listening to explanations builds a different type of understanding, and the former is more useful, but people don't understand that.


I can't argue with this professor's argument because I don't know it, but I can only say that, intuitively, this sounds like an example of the false dichotomy I described in my previous comment.

I've never met a math student who tried to pass algebra by memorizing anything. I'm not even sure what a student would memorize in an effort to pass the class.


Yeah, memorization is very underrated.

Memorization increases the size of the building blocks you can use.

Mathematics is where I see this most clearly. Why memorize hundreds of theorems? Because then you can just cite them on the fly when doing real mathematics. If you had to re-derive everything, you'd be stuck doing undergrad level math forever.


I never memorized the trig identities, but used them so much I knew them anyway.

so you memorized them, you just didn't explicitly do it.

Memorizing is an active process. Simply remembering something you use frequently is not the same.

Chess Grand Masters have large repertoires of memorised openings. They do not play rote games with no understanding.

They run variations, twists, and traps on recalled openings, dueling and fooling opponents by creating and breaking expectations.

As in a number of other activities, rote core skills and reflexes are foundational but not everything: they're essential to practice and to dealing with situations where they don't quite fit but can be bent to purpose.


> Chess Grand Masters have large repertoires of memorised openings. They do not play rote games with no understanding.

This is a good example because a test of one's understanding is "do you know how to make the opponent pay for varying from the standard opening?"

For a beginner, the answer is no.


Chess960 was invented to shatter this disgusting debasement of chess. That's not just my opinion, that's Bobby Fischer's! Opening books, endgame tablebases, piece-square tables, and other heuristic hacks that both grandmasters and chess engines use are evidence that chess needs to be replaced with variants that are resistant to letting "strong memorizers" beat actually good tacticians and strategists.

This is why Stratego, or various grand/large chess variants, or Chess960 needs to have replaced standard chess yesterday.


He may have little name recognition, but he's considered, at a minimum, one of the most important, influential, masterful nonfiction writers of his generation.

Yes, definitely. There was tourism in Greece in the Classical Period, too. Epidaurus is a good example: a major religious sanctuary, side by side with a theatre and athletics venues, all part of the thriving local economy propped up entirely by tourism. Fun fact: history's first recorded hypochondriac was a frequent patient/visitor at the temple of Asclepius in Epidaurus.

Modern Greek has so much continuity with ancient that, if you know ancient, you can read plenty of signs and pick your way through pamphlets at museums and archaeological sites. Every Greek I have known has valued that continuity and history. The proposed change -- rendering any text printed prior to the revision obsolete and eventually unreadable -- amounts to cultural suicide.

Maybe you can recognize the continuity as a native Greek speaker. As someone who has learnt Ancient Greek, that didn't help at all when I visited Greece. Even some of the letters are now used differently.

I know just Ancient Greek. When I was in the country, I was consistently blown away by how much I could understand, even without knowing Modern Greek.

A billboard for cigarettes contained their equivalent of the surgeon general's warning, using the word "βλάπτεται." βλάπτειν in Ancient Greek means "to injure." βλάπτεται is best read as a middle-voice form: "causes harm."

In a doctor's office in Athens I'd been waiting for a while, so I approached the nurse's station, and a visibly impatient nurse said to me before I could open my mouth "περιμένετε, παρακάλω."

"μένειν" in Ancient Greek means "wait." "μένετε" is a polite imperative, in the present tense ("do or keep doing something"). "περί" as a verbal prefix often means something similar to its meaning as a preposition: "around." "παρακάλω" means in this context exactly what it did 2,500 years ago: "I ask," i.e. "please." I could understand exactly the intended sense: "please continue to wait."

My jaw dropped. I was too dumbfounded to do anything but stare at her. She sighed, rolled her eyes, and switched to English: "Please wait!"

On my first morning in the country I bought a drink called μύθος, not realizing I'd purchased cheap beer (with a delicious tiropita). Same word, same sense: "mythos."

There are limits, definitely. The vocab, grammar, and syntax are different, often very different. The pronunciation, letter by letter, has been broadly the same for millennia, though, since the changes that turned Ancient Greek into Koine, hasn't it?

Still, the similarities and, to some extent, mutual intelligibility of Ancient and Modern Greek are mind-blowing, particularly for someone who grew up speaking English, which didn't even exist two thousand years ago, except, maybe, as some subtle quirk of Proto-Germanic on a weird little island off the coast of Europe proper.

The continuity is to some extent artificial, as there was a re-Hellenization effort in Greece after the expulsion of the Ottoman Empire. Even where the Ottoman Empire's cultural and linguistic influence were somewhat escapable, though, there's a shocking degree of linguistic continuity. Mani, an isolated, culturally distinct region of the deep Peloponnese, retains features of the Doric dialect that its residents spoke in Archaic Greece around the time the texts of Homer's Iliad and Odyssey were taking shape.


I've been reading Sacred Scripture in Greek these days, thanks to a free Bible study app with the approved translations. And it's extremely eye-opening to see the level of wordplay in the Gospels, that's sometimes hinted by modern homilists, but it goes to a depth you just wouldn't believe.

I thought I could "get by" in Greek just from my knowledge through medical and scientific terms, but there's a lot more to it!

One of the other exciting experiences was to attend a Greek Orthodox liturgy that was sung/chanted in Greek, too. I don't know exactly what variety of liturgical Greek is used, but speaking as someone who knows English and Spanish and can recognize many other languages, hearing the Greek chanted and pronounced so eloquently like that was transcendent, and sometimes surprisingly "foreign".

Whenever I see a film or TV of modern Greek signs, I try to sound out the words and decipher as much as possible. I feel like there is some "signal loss" since ancient times, with the musical tone, the rough breathing, etc. But it's definitely exciting to experience some comprehension across several millennia!


This is so nice to hear! I regret not spending more time with the New Testament when I had the opportunity (my time for Greek is essentially nil at this point). It's such a treat to run across people who savor the language the way you do, and it really is dumbfounding to hear those echoes across the millennia, isn't it?

The ways scripture intersects with more ancient literature is its own fascinating area of study, too. Take John 1:1: Ἐν ἀρχῇ ἦν ὁ λόγος, καὶ ὁ λόγος ἦν πρὸς τὸν θεόν, καὶ θεὸς ἦν ὁ λόγος. If you've read Hesiod's Theogony, the contrast couldn't be starker: ἦ τοι μὲν πρώτιστα Χάος γένετ᾽, αὐτὰρ ἔπειτα Γαῖ᾽ εὐρύστερνος. In Hesiod, the birth of the cosmos and the gods is physical. The first thing that comes into being, Χάος, is just the absence of anything else. In the New Testament, by contrast, the beginning is not physical, nor is there any absence. It's something like an idea or something like rational intelligibility (ὁ λόγος). This is so wildly different from so-called Pagan religion: the universe not only makes sense; it makes sense in ways that human beings can access. Divinity isn't power and violence. It's intelligence. This reminds me, now that I mention it, of Augustine's view that evil is absence -- specifically the absence of good.

There's so, so much of this. I didn't know about the wordplay though! Wordplay has such a wonderful history in ancient literature. I distinctly remember a lightbulb moment, when I was reading Plato's Republic, where the god Wealth is described as a "blind (tuphlos) leader." In Plato's time, that "ph" is pronounced "p-h", not "f." And the word wealth, of course, is Plutos. So tuphlos is a phonetic anagram of Plutos.


Thanks for the substantial comment. So I guess your Ancient Greek is much better than mine. On the other hand, when I learned the language I was surprised how many "foreign words" (in German) now started to make a lot more sense because I was able to decipher their constituents. Things like "telescope" or combinations with "peri" (as you said), "meta" and so on. Going to recap some things now :)

I'm quite surprised by the negativity of the comments in this thread, especially contrasted with the positivity and enthusiasm I see in other threads. I'm an AI pessimist. I don't like it. I have resisted it. You'll find plenty of Rage against the Machine comments in my account history on Hacker News. The AI optimists drive me up the wall.

And I can tell all of the nay-sayers in this thread, from first-hand experience, that the AI tools can be useful. When you use them well, they can save time. If you're writing just a dinky webapp for your "radio on the internet" startup, it can do a lot of grunt work. It's better auto completion, at a minimum.

Last week I was struggling with an annoying, interlocking race-condition/stale-state bug. Fixing one issue kept reintroducing others that I'd just fixed. Skill issue, right? Right. And Claude 4.6 Opus diagnosed the problem and fixed it with just a little bit of coaxing.

Then I asked it to fix another issue and it wound up chasing its tail, as it tried to apply the same principle to unrelated code with unrelated problems.

Call these tools stochastic parrots. Call them autocorrect on steroids. Call them whatever you want. If you think they're worthless or have no use, you're living either in a fantasy land or in 2022 just after OpenAI released its first, hilariously stupid chatbot.


I think the unstated assumption here is that the criticism comes from different places:

1. what you have identified here, thinking they're useless

2. wanting them to be useless because they like the process of writing code itself, and AI makes that less important, so it's a form of wishful thinking.

3. having ethical concerns about AI, so they want it to fail. And part of that is dismissing their usefulness (after all, it's easier to get rid of something which isn't that useful...)

I personally find the third one quite fascinating -- like the cognitive dissonance about how the whole free software movement started out as a way to subvert copyright and nowadays they're almost the biggest defenders of it... but I do understand the reasoning here.


Microsoft fired all its QA people ten or fifteen years ago. I'd imagine it's a similar story: boxed software needed much higher guarantees of correctness. Digital delivery leaves much more room for error, because it leaves room for easier, cheaper fixes.


> Unfortunately this doesn't fix a bigger problem... I just don't enjoy vibe coding as a craft. There's something special about sitting down in the morning with your coffee and taking on a difficult programming problem. You start writing some code, the solutions start to formalize in your mind, there's a strong back-and-forth effect where as you code, the concepts crystalize further... small wins fuel a wonderful dopamine hit experience... intellisense completions, compilation completions, page refreshes, etc. are now all replaced with dull moments often waiting for the agent to return its response, which you now read.

Agreed. For me, LLMs don't just reduce the kind of active learning and problem solving that make my job enjoyable; they change and replace it with a "skill" that barely merits the name. "Learning how to use AI" means learning how to use a product. That's worthless. It teaches nothing of durable value.

I'm also not in any way interested in using these tools to learn anything else. They can print out as much information as you want about this or that topic, and even if it's correct 100% of the time, you remain a passive consumer of information that the tool is chewing, digesting, regurgitating, and spitting into your mouth.

On the other hand, I'm also profoundly technically and intellectually bored. I can solve all of the problems I encounter in the codebase I work on. I can diagnose issues, refine builds, shorten test pipelines, and mentor junior developers -- and I cannot imagine doing this for the rest of my working life. My brain will liquify and dribble out my ears long before I reach retirement age.

If what I were doing were more interesting and technically challenging, maybe I'd feel differently, but if LLMs kill off the kind of programming I do, I'm not sure anybody should grieve, particularly if its death sends reasonably smart, curious people to fields where their efforts produce something of greater, or actual, value -- and doesn't wind up lining the pockets of the next generation of insufferable, bs-shoveling "thought leaders."

> I have been saying this to everyone -- what's your exit strategy?

Personally? I'm already preparing to sell our house. I'll keep my current job for as long as I can or for as long as makes sense, and then I'll go back to school for nursing, and become a psychiatric nurse practitioner.


I should add: I'm not saying LLMs can do my job for me. I still find them tedious and clumsy. I do better work when I write my own code.


I appreciate that your message is a good-natured, friendly tip. I don't mean for the following to crap on that. I just need to shout into the void:

If I have some time, the last thing I want to do with it is sharpen prompting skills. I can't imagine a worse or more boring use of my time on a computer or a skill I want less.

Every time I visit Hacker News I become more certain that I want nothing to do with either the future the enthusiasts think awaits us or the present that they think is building towards it.


While I somewhat understand the impact on the craft, the agents have allowed me to work on the projects that I would never have had enough time to work on otherwise.


I fear this will be horribly self-indulgent, but I'll share it anyway:

I'd always been a computer person, but it wasn't until I'd reached my thirties that I realized I could make a career out of that interest. The joy of programming still gets me out of bed in the morning and sends me skipping happily to my desk in my home office. What I do wouldn't impress anybody at a technical level. I'm not an innovator. The world of software and tech would not suffer if I had never existed. But I like the guy I work for. I like the people I work with. I write stuff that lots of people use. I do it well enough that I can feel decently good about it.

And I'm watching all of what I enjoy in software as a career and craft gradually disappear. Upper management are now all True Believer AI zealots who know, just know, that AI is the future and therefore ensure that it is also the present. They've caused nothing but organizational chaos, shoving out knowledgeable people in some misguided effort to remake the company in their image and replacing them with what are, to me, obvious bullshit artists.

Engineering time and effort that might a few years ago have produced value and good experiences for users now produce mediocre "MCPs," used only internally, that turn out even more mediocre code and tests that don't test anything.

I don't have nearly the chops or talent you and your peers have. I never could have run with you guys or made the mark on the world that you did. What I do, and the processes I follow, are probably the exact stuff that drove you to retirement. Still, I enjoy what I do and hate that it's being taken from me and replaced with something I hate, overseen, in my company's case, by vapor merchants pretending to be visionaries/cutting-edge 'thought leaders.'

I'm glad some of us got to build things when the inmates ran the asylum, and I regret the money and 'progress' that strangled the life and joy out of it for you.

Just an aside: I've really enjoyed everything you've posted on HN and look forward to your comments. Thanks, and cheers.


I have to call you out a bit on the: "I don't have nearly the chops or talent you and your peers have".

Trust me, when I started at Apple in 1995 I was way in over my head. Or so I thought.

After a couple months on the job I asked a coworker down the hall (who seemed particularly chill—Hi, Brian!), "How long until I feel like I know what I'm doing?"

"6 months."

I liked the unambiguity of his answer even if it seemed kind of off the cuff.

He was more or less right. It was at somewhere around 6 months that I more or less knew what I was heading in each morning to accomplish. And I felt like I, with a little help perhaps, could even contribute in a small way.

Still, I was always surrounded by some of the most amazing programmers I had ever met. One guy (hi, Cam!) could walk through a "backtrace" in machine code, look at the registers, addresses and data on the stack, and then declare, "You're accessing memory after you've already released it. Do you know what could be 24 bytes in size?"

And who was I? Some kid from Kansas with no degree in software engineering.

It may have in fact taken closer to two decades before I was able to shake off the imposter syndrome. At some point I had to admit that I wasn't so dense to have not learned anything in my 20+ years of coding. I was still not on Cameron's level, never will be, but I might have made up for that shortcoming by leaning into being prolific, coding two or three prototypes quickly in order to finally determine The Best Path.

Just from your comment I would be willing to bet your enthusiasm alone would make you a valuable asset.

That is kind of how it worked: there were some people who could hold multiple threads in their head and rattle off a semaphore strategy that was performant and skirted deadlock.

There was the "math guy". We all knew who they were and would cycle by their office when we were wrestling with matrix inversions and the order of transforms.

And there were people that you could rely upon to take perhaps the most dreaded task of a project and work diligently at it. Trust me, no one split hairs over whether that individual could disassemble PPC code just by looking at it. The team appreciated the "tanks" that could do some of the drudge-work. (I was from time to time that person.)

I don't need to belabor a point, you get it, it took all types. It took me some time to see that though, and longer still to see where I fit in as well.


Thanks, I needed this.

There doesn't seem to be a place for me in the future of software/tech: I like sitting quietly, alone, solving problems, writing code, and reading it. I like in code much of what I like in art: the fruits of human labor and the results of human ingenuity. Being excited about AI/LLMs makes no sense to people like me. If you're excited because LLMs let you make something, great, good for you. Have fun.

If the tools become a mandatory part of the job, I'll change careers. Spending my days talking to chipper robots and describing what I want rather than making it myself sounds unbearable.


I debated heavily whether I'd stay in tech or change my career almost a decade ago. I concluded that the only other profession that I considered rewarding (at that time) would be to become a professor of history. Making history interesting to even one student per semester would be a win.

In the end, I remembered how much I hated schooling. This is despite being a huge fan of education. It wasn't realistic to think that I'd complete the work needed for accreditation.

Regardless, I'm happy today having selected for the thing that I already knew. I hope you also find yourself satisfied. It's lonely feeling lost when evaluating a thing you'd known through a new paradigm.

