cjfd's comments | Hacker News

"assuming you have a real engineering job" does a lot of work there. You could also do a lot of work the other way by stating "assuming you are getting a real education". I studied physics when I was young and that field is a lot deeper than my current work in programming. Computer science can also be quite deep if one considers things like the halting problem, type theory and proof assistants.

On the other hand, if you store a small integer in a float, comparing it for exact equality is generally reliable. E.g., setting a float to zero and later checking whether it is still exactly zero.
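A minimal sketch of the point (in Python, purely illustrative, not from the comment): small integers have exact IEEE-754 representations, so equality against them behaves as expected, while fractions like 0.1 do not.

  # Small integers are exactly representable in IEEE-754 doubles
  # (up to 2**53), so comparing against them for equality is reliable.
  x = 0.0
  assert x == 0.0            # exact: zero has an exact representation

  counter = 0.0
  for _ in range(10):
      counter += 1.0         # each step lands on an exact small integer
  assert counter == 10.0     # holds

  # By contrast, 0.1 has no exact binary representation:
  assert 0.1 + 0.2 != 0.3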

This all sounds true to me, but I think there is more. It is not just decisions by management, it is also the wider economic context. Low interest rates and, for the US, having the world reserve currency as your own currency both seem to make many of these changes attractive or even inevitable. Low interest rates lead to 'innovation', which I put in scare quotes because, besides real innovation, it can also mean something that passes as innovation but in the end just turns out to be a bubble of stuff that was not valuable enough. The 'innovation' then crowds out investment in more boring sectors like manufacturing. This is also not good for the population in general, because fewer jobs are left for people who are not suited to working in highly 'innovative' sectors.


From the quotes in the article it sounds pretty simple. These are words that everyone can understand. Of course, the fascist right is attempting to 'inform' the public at a level where even two-syllable words are a bit too complicated. But maybe the general public should also attempt to be at a level a bit higher than cattle.


All kinds of worries are possible. (1) It turns out that all this AI-generated stuff is full of bugs and we go back to traditional software development, creating a giant disinvestment and economic downturn. (2) Software quality goes way down and we cannot produce reliable programs anymore. (3) Massive energy use makes it impossible to use sustainable energy sources and we wreck the environment even more than we are currently doing. (4) AIs are in the hands of a few big companies that abuse their power. (5) AI becomes smarter than humans and decides that humans are outdated and kills all of us.

It obviously depends on how powerful AI is going to become. These scenarios are mutually exclusive because some assume that AI is actually not very powerful and some assume that it is very powerful. I think one of these things happening is not at all unlikely.


1 and 2 are really only an issue if you vibe code. There's no reason to expect properly reviewed AI-assisted code to be any worse than human-written code. In fact, in my experience, using LLMs to do a code review is a great asset - if used in addition to human review.


When I was a teenager, I read a book about assembly language for the Commodore and implemented the Game of Life in a really simple way. I just used the text screen. To switch on a cell, I would put an asterisk ('*') in it. Then I could run my machine code program and it would evolve according to the rules of the Game of Life.
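The same idea is easy to sketch in a modern language. Below is a minimal version in Python rather than 6502 assembly (the grid size and glider pattern are my own choices, not from the original program): the "screen" is a 40x25 character grid, '*' marks a live cell, and each step applies the Life rules.

  # A minimal Game of Life on a text "screen": '*' marks a live cell,
  # ' ' a dead one. 40x25 matches a Commodore-style text screen.
  WIDTH, HEIGHT = 40, 25

  def step(screen):
      new = [[' '] * WIDTH for _ in range(HEIGHT)]
      for y in range(HEIGHT):
          for x in range(WIDTH):
              neighbours = sum(
                  screen[(y + dy) % HEIGHT][(x + dx) % WIDTH] == '*'
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                  if (dx, dy) != (0, 0)
              )
              alive = screen[y][x] == '*'
              new[y][x] = '*' if neighbours == 3 or (alive and neighbours == 2) else ' '
      return new

  # Draw a glider straight onto the screen with asterisks, then evolve it.
  screen = [[' '] * WIDTH for _ in range(HEIGHT)]
  for x, y in [(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)]:
      screen[y][x] = '*'

  for _ in range(4):
      print('\n'.join(''.join(row) for row in screen), end='\n\n')
      screen = step(screen)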


And who didn't do that! :)

You could also 4x the resolution by using half- and quarter-block characters from the top half of the ASCII table (or the PETSCII one, in the C64's case).
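For illustration, here is roughly how that 2x2-cells-per-character trick works, using Unicode quadrant/half/full block characters as a stand-in for the PETSCII graphics characters (a Python sketch; the lookup table and helper names are mine, not anything from the C64 original).

  # Pack each 2x2 block of cells into one character cell, using Unicode
  # quadrant/half/full block characters (a modern stand-in for PETSCII
  # quarter-block graphics; assumes a terminal that can display them).
  # Index bits: upper-left<<3 | upper-right<<2 | lower-left<<1 | lower-right.
  QUADRANTS = ' ▗▖▄▝▐▞▟▘▚▌▙▀▜▛█'

  def render(cells):
      """cells: 2D list of 0/1; returns text at a quarter of the character count."""
      rows, cols = len(cells), len(cells[0])
      out = []
      for y in range(0, rows, 2):
          line = []
          for x in range(0, cols, 2):
              def bit(dy, dx):
                  yy, xx = y + dy, x + dx
                  return cells[yy][xx] if yy < rows and xx < cols else 0
              idx = bit(0, 0) << 3 | bit(0, 1) << 2 | bit(1, 0) << 1 | bit(1, 1)
              line.append(QUADRANTS[idx])
          out.append(''.join(line))
      return '\n'.join(out)

  print(render([[1, 0, 1, 1],
                [0, 1, 1, 1]]))   # prints two character cells: '▚' and '█'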


> And who didn't do that! :)

Exactly. It's even how I taught myself extremely basic Pascal -- getting my BASIC Life program running in Pascal. With asterisks.

I taught a friend at uni, who was a much better programmer than me, how the algorithm worked. He did a pixel-by-pixel version in machine code, but it was a bit slow on a ZX Spectrum.

So he did exactly the quarter-character-cell version you describe. I wrote the editor in BASIC, and he wrote a machine-code routine that kicked in when told and ran the generations. For extra fun he emitted some of the intermediate state to the border, which flashed stripes of colour as it calculated, so you could see it "thinking". Handy for static patterns -- you could see it hadn't crashed.

I've been considering doing a quarter-cell Mandelbrot for about 30 years now. Never got round to it yet.


The answer to a lot of "wow, how did the 8-bit machine pull that off? It seems like that would eat a lot of RAM" is that the framebuffer is the data storage. You were literally looking at the primary data store itself, because when a full-resolution framebuffer was a quarter of your addressable RAM (and slightly more than that of your actual RAM, since you couldn't ever quite use all 64KB no matter how you mapped it), you needed to get as much bang for the buck out of that RAM as you could.


Ha, I remember doing this with my Apple //. I forget what I was doing, but I realized that if I could set a pixel and later get what color was drawn at that location, I could use it as a big array. Didn't know about peek/poke yet. One of those core "computers are magic" memories.


When I got into retrocomputing a few years ago, I also did this. Works great with TRS-80 semigraphic characters. First, I wrote it in C with a Z80 C compiler. Then I wrote it again in assembly and it was much faster! Amazing!


Well, if you do not need to care about performance, everything can be extremely simple indeed. Let me show you a data structure in Coq/Rocq with notations switched off, displaying the low-level content.

Require Import String.

Definition hello: string := "Hello world!".

Print hello.

hello = String (Ascii.Ascii false false false true false false true false) (String (Ascii.Ascii true false true false false true true false) (String (Ascii.Ascii false false true true false true true false) (String (Ascii.Ascii false false true true false true true false) (String (Ascii.Ascii true true true true false true true false) (String (Ascii.Ascii false false false false false true false false) (String (Ascii.Ascii true true true false true true true false) (String (Ascii.Ascii true true true true false true true false) (String (Ascii.Ascii false true false false true true true false) (String (Ascii.Ascii false false true true false true true false) (String (Ascii.Ascii false false true false false true true false) (String (Ascii.Ascii true false false false false true false false) EmptyString))))))))))) : string


You know, you could just define the verified specs in Lean and, if performance is a problem, use the Lean spec to extract an interface and tests for a more performant language like Rust. You could, at least in theory, use Lean as an orchestrator of verified interfaces.


In Lean, strings are packed arrays of bytes, encoded as UTF-8. Lean is very careful about performance; after all, a self-hosted system that can't generate fast code would not scale.


There are also some quite funny pieces on this site.


The short summary of it: these people are beyond terrible at giving names to things.


Programmers and engineers should never be allowed to name things.

I say that as a programmer and engineer.


"We suck at naming things" -- Bjarne Stroustrup, in a talk about SFINAE


On one hand I agree. On the other hand, I look at how marketing people name things and I think we're still better off.

Imagine if the next edition of GCC, released in 2026, was named GCC 2027. Then it was GCC One. Then GCC 720. Then GCC XE. Then just plain GCC. Then GCC Teams.


And then finally…GNU 720 AssistantDriver.

(Tip of the hat to Microsoft’s marketing teams.)


The Python community has a habit of giving things short names.


Well, I think there is something to it. Computers were at some point newly invented, so research in algorithms suddenly became much more applicable. This opened up a gold mine of research opportunities. But like real-life mines, at some point they get depleted, and then the research becomes much less interesting unless you happen to be interested in niche topics. But, of course, the paper mill needs to keep running, and so does the production of PhDs.


