Hacker News | dnautics's comments

It's not, but I'm pretty sure it could be. You could probably even take this (WIP) idea and bolt on a formal verifier pretty easily.

https://github.com/ityonemo/clr


It'd take more than that to match rust's borrow checker. Rust's borrow checker tracks lifetimes, and sometimes needs annotations in code to help it understand what you're actually trying to do. I suppose you could work around that by adding lifetime annotations in zig comments. Then you'd have a language that's a lot like rust, but without an ecosystem of borrowck-safe libraries. And with worse ergonomics (rust knows when it can Drop). And rust can put noalias everywhere in emitted code. And you'd probably have worse error messages than the rust compiler emits.

It's an interesting idea. But if you want static memory safety in a low-level systems language, it's probably much easier to just use rust.


> I suppose you could work around that by adding lifetime annotations in zig comments.

you can make a no-op function that gets compiled out but survives AIR

> rust knows when it can Drop.

and it's possible to cause problems if you aren't aware of where rust picks to drop.

> And rust can put noalias everywhere in emitted code.

zig has noalias and it should be possible to do alias tracking as a refinement.

> But if you want static memory safety in a low-level systems language, it's probably much easier to just use rust.

don't use that attitude to suck oxygen out of the air. rust comes with its own baggage, so "just using rust because it's the only choice" keeps you in a local minimum.


> and it's possible to cause problems if you aren't aware of where rust picks to drop.

Can you give some examples? I've never run into problems due to this.

> don't use that attitude to suck oxygen out of the air. rust comes with its own baggage

Yeah, that's a totally fair argument. One nice aspect of the approach you're proposing is it'd give you the opportunity to explore more of the borrow checker design space. I'm convinced there's a giant forest of different ways we could do compile time memory safety. Rust has gone down one particular road in that forest. But there's probably loads of other options that nobody has tried yet. Some of them will probably be better than rust - but nobody has thought them through yet.

I wish you luck in your project! If you land somewhere interesting, I hope you write it up.


> Can you give some examples? I've never run into problems due to this.

If it's doing a drop in a hot loop, that may be an unexpected performance regression that could be carefully lifted out.

Thank you. Unfortunately, in the last few weeks I've been too busy with my startup to put as much work into it. We'll see =D


> If it's doing a drop in a hot loop, that may be an unexpected performance regression that could be carefully lifted out.

Yeah, I've heard of people making massive collections of Box'ed entries, then getting surprised that it takes a long time to Drop the whole thing. But this would be the same in C or Zig too. Malloc and free are really complex functions. Reducing heap allocations is an essential tool for optimisation.

The solution to this "unexpected performance regression" in rust is the same as it is in C, C++ and Zig: Stop heap allocating so much. Use primitive types, SSO types (SmartString and friends in rust) or memory arenas. Drop isn't the problem.


In zig the solution is to use an arena allocator. That’s about as easy as it gets. Maybe Rust also allows doing that, I don’t know.

You can use arenas in Rust, it's just not as trivial to swap allocators generally. But there are plenty of crates for it.

rust does not promise leak safety.

True. But rust does make it a lot harder to leak memory by accident. Rust variables are automatically freed when they go out of scope. Ownership semantics mean the compiler knows when to free almost everything.

> But rust does make it a lot harder to leak memory by accident. Rust variables are automatically freed when they go out of scope.

RAII has entered the chat.


> Wouldn't that mean that it isn't even feasible to make high-quality software with such a tool?

plenty of other companies/entities are making high-quality software in zig: tigerbeetle, and zig itself, for example.

Bun's entire history has been a kind of haphazard, move-as-fast-as-you-can story, so...


> "Make text look good on page" leaves lots of details unspecified.

Even just a sane layout renderer is incredibly hard. A decade ago I wrote a bespoke DNA sequence typesetter (in svg). I had claude build an extension; for whatever reason it chose to build it from scratch instead of using the components I had built, and it did everything wrong.


> Slash commands, for instance, are a misfeature. I should never have to wait for the chatbot to finish a turn so that I can check on the status of my context window or how much money I've spent this session. Control should be orthogonal to the chat loop.

I get what you're trying to say but in practice architecting what you propose is considerably more difficult. Why not build it and try to get hired by one of the bigcos?


I don't think the basic architecture principles are novel. The big AI labs and other large tech companies already have engineers who can see this, without a doubt. But the AI labs clearly don't care if their LLM agents are just big balls of mud, and the big tech companies' priorities mostly lie elsewhere, too.

They just want features. They don't really care about duplicated work, so half of them reinvent the TUI rendering wheel. Pluggability is something that might be actually hostile to their interests in lock-in. And the AI labs probably think "after a couple more scaling cycles, our models will be so good that our agents can just rewrite themselves from scratch"; until they hit a compute or power wall, it always looks rational to them to defer rearchitecting.

Another real possibility is that if you work on an agent with a really clean architecture and publish it in hopes of getting hired by some AI company, all of them think "that looks great, but we don't want to rearchitect right now". Your code winds up in the training set, and a year and a half from now, existing agents can "one-shot" rewrites along the lines of your design because they're "smarter".

As for me, I'm not that interested, personally. There are other things I want to build and I'm working on those.


In what way would it be more complicated? This is pretty basic concurrent programming; we routinely have much, much more complex concurrent designs.

Hell, a telegram bot can handle that just fine.


Yeah, basic concurrent programming is not more complicated than basic linear programming

Yes. Humans are also unreliable and nondeterministic (though certainly more reliable). Accordingly we have built software dev practices around this. I imagine it would be super useful, for example, to have a "TDD enforcer":

Phase 1: only test files may be altered, exactly one new test failure must appear.

Phase 2: only code files may be altered. The phase is cleared when the test now succeeds and no other tests fail.

If you get stuck, bail and ask for guidance


I've been busy building and dogfooding open-artisan for my own development purposes. I've diverged quite a bit from main and am hoping to merge some of those changes back soon. It's basically an OpenCode plugin that forces OpenCode into a token-hungry state machine that tries to map the engineering process I follow, exposing only valid tools and states at every step of development. If you're interested in following along or trying it out, it's available here:

https://github.com/yehudacohen/open-artisan/

Hopefully, I'll merge in my large structural changes in the next couple of weeks. These structural changes will enhance the state machine meaningfully, as well as add support for the hermes agent.


there's a reason why emerald cloud lab pivoted from dna-based computing to cloud lab services.

claude does not struggle with zig? not in my hands anyways.

I mean, sure, nature has no obligation to not have an unfalsifiable particle, but you wind up in weird places, like: there exists a distribution of dark matter that explains the poltergeist that knocked over your coffee cup last week.

If there were a distribution of dark matter that explained the poltergeist, we could measure that distribution of dark matter.

We can measure the mass distribution on astronomical scales. We "see" the dark matter. Just not with light.


We don't measure dark matter, we measure some anomaly and then we say "it must've been dark matter".

It's not crazy different from saying the same about that poltergeist.


We measure the matter distribution by its effect on light (strong/weak lensing). We also measure the matter distribution by the amount of light coming from it. The results are not the same. The simplest explanation is that there is matter which does not produce or reflect light via e/m, i.e. it is dark. Dark Matter.

We know of particles which behave the same way. Neutrinos for example.


You're saying the same thing: we see some anomaly in measured light and we say "the simplest explanation is dark matter".

We are not measuring dark matter; we are measuring something that is not what we expect, and we decided it's dark matter.


I mean, I don't believe a particle that only interacts via gravity is unfalsifiable; it's that probing it through gravity demands unimaginable energies that we haven't achieved at this time.

You act like we've managed to probe the depths of physics with certainty when in reality you find any means to reject that which offends your sensibilities.


> You act like we've managed to probe the depths of physics with certainty when in reality you find any means to reject that which offends your sensibilities.

At the root of science is "sensibilities", like Occam's razor, even "what counts as experimental reproduction", etc.


I think you mean LCDM only does well on galactic rotation curves because it has free parameters per galaxy. MOND only has one free parameter, maybe two if you use the MOND+Relativity model that doesn't work.

I don’t think these are free parameters in the same sense.

Like, if one theory says that a hunk of metal actually is made of many microscopic grains of various sizes and orientations, where the sizes and orientations of these grains have an effect on the behavior of the metal, you don’t count “the sizes and orientations of these grains” as free parameters, do you?


You would if you didn't have any ability to observe those sizes and orientations.
