Hacker News | germandiago's comments

Some people ask me why I do not use Rust instead of C++, given that it is already safer and more modern.

But I see the forums (and I have also tried some toy stuff at times) plagued with rigidity problems that have obvious solutions in C++.

For example, I am not going to fight a borrow checker all the way up the stack to get a 0.0005% perf improvement, if any, when I can use smart pointers.

I am not going to use Result everywhere when I can throw an exception and be done with it, instead of refactoring the whole call stack for the intermediate return types (though I do use expected and optional and like them; it is a choice depending on what I am doing).

I am not going to elaborate safe interfaces for the arrays of data I need to send to a GPU: there is no value in it, and I can get it wrong anyway; it is ceremony. I assume this kind of code is unsafe by nature.

I find C++ just more flexible. Yes, it has warts, but I treat all warnings as errors, use clang-tidy, and have a lot of flexibility. I use values to avoid any trace of dangling, and when that is going to get bad, I can, most of the time, switch to smart pointers.

I really do not get why someone would use Rust except for very niche cases where absolutely no memory unsafety is acceptable (and even that is not free, as some reports show: if your domain is unsafe by nature or uses bindings, you need to be really careful about reviewing unsafe code to keep Rust's invariants; or you write only safe code, in which case, if memory safety is critical, it does give you something).

But I do not see Rust good for writing general application code. At least not compared to well-written C++ nowadays.


“I’m not going to use Rust because I don’t like it” seems like what you’re saying, which is totally fine. Plenty of people, myself included, manage to write and enjoy writing general application code in Rust. You’re allowed to not get it, just like I’m allowed to dislike writing C++.

No. That is not what I am saying. I am saying there are contexts where you do not get value out of it and you can potentially decrease your productivity because it is more rigid. You have examples above if you want to read through.

In no way am I saying it is useless. I just see niche uses for it compared to the alternatives.


I read most of your comment as phrasing the things that make rust unique as being additional burdens relative to what you would prefer, which is fair, but often they are what I appreciate about the language. Explicit result types are a great example.

Rigidity is a trade off: it can make initial development slower but refactors significantly easier, just as an example.

I don’t think any of your examples show it to be niche. It operates well in most of the space where C++ is a good option, and a bit beyond that (embedded, firmware, but also higher level things where you want performance but don’t want to worry about memory safety).


> but also higher level things where you want performance but don’t want to worry about memory safety).

Well, at the cost of wearing a straitjacket. Result without the option of exception handling is an example. If you notice while refactoring that you suddenly need a Result, because a new error appears that could not happen before, you have to refactor all the way up; or you preemptively spam Result everywhere from the start. You need to handle those all the way up the stack. The borrow checker is also rigid. I do know why it exists. I understand its value. I am just talking about the toll it imposes while coding, and wondering whether it is a good default (I think most of the time it is not; when you need it, it is invaluable, but those cases are a minority).

Another insight is that when you really go low-level, most of the time you are probably working with unsafe interfaces. At that point you are using unsafe, and now you have to satisfy the borrow checker's invariants yourself. How? By hand. So you lose part of the value proposition.

Can you recover it? Yes. How? By reviewing that code. But if I have to review that code, what do I gain over choosing a language (in this situation, I mean; there are situations where Rust is the better choice) where I can understand the invariants in unsafe code better, and where I anyway have linters and a lot of established guidelines that are not difficult to follow? And by not difficult to follow I mean they are embedded in tooling like clang-tidy, not that I can follow them because I know a lot.

So for me it is not so obvious at all, especially in the presence of quite a few unsafe blocks. If you want it safe, at that point you are starting to compete with other unsafe languages: you need human review anyway... if there is tooling in Rust for unsafe blocks (I can imagine there could be something), that improves things competitively for Rust there. But if you need careful review, you are stuck again in the non-magical real world: things are safe only if you checked absolutely everything.

> Rigidity is a trade off: it can make initial development slower but refactors significantly easier, just as an example.

That is certainly true. It is also true that in areas where you want to iterate and refactor quickly, this extra effort makes things more difficult.

Refactoring, if you mix it with unsafe, needs a much stricter review than just pretending things are safe because you refactored, put them behind an unsafe interface, and presented it as safe.

I am not convinced at all this is what you need in most scenarios. The productivity impact is relatively high IMHO.

OTOH, if I really want correctness (real correctness!) but not absolute full speed, I think I can reach for OCaml (very practical) or Haskell (this one is actually also a bit too rigid sometimes).

So I am left in a situation where Rust just seems appealing for places where the most absolute memory safety is needed. But memory safety is still a composite characteristic of a running program: you have to take into account unsafe interfaces, bindings, etc.

So the only way to get real safety is to review everything anyway (if that is what you really want to deliver), probably proving your code, which in any case requires human intervention. Did we ever (even if less often) see crashes from invariant violations in code advertised as safe in Rust? Certainly yes. I acknowledge it is usually an improvement, but still not a guarantee.

So if it is not a guarantee, and I can reach for other tools where the guarantee is there anyway (through GC or other mechanisms), and where it is not I am on equal footing with Rust, then why bother?

Probably the only place where I see Rust appealing is where you need both maximum performance and absolute memory safety (but you will still need the kind of reviews I mentioned if you spam unsafe and interact with bindings). Those are niche cases, not the norm.

I see it as a suboptimal choice to write much of one's application code in Rust, even when you need speed, compared to C++. C++ has very good tools for compile-time programming, expression templates, good warnings and linters, a big ecosystem, and it is far more flexible (exceptions and results can both be used, and invariants in unsafe code are easier to follow since a borrow checker does not need to be satisfied "by hand").

So I am not at all sure Rust is the answer for a more or less mainstream general-purpose application language.

I think Rust is valuable in things like hardened OS interfaces, etc. But even there CVEs were found! Right? https://news.ycombinator.com/item?id=46302621

There is no magic bullet here, but I do know that when coding in Rust, the productivity toll I am paying is not negligible, and I can reach for tools and techniques that bring me very close or equal to that productivity elsewhere.


No, there are contexts where you do not get value out of it. Don’t speak for others.

I think you can also go a long way with C++ and templates to represent any kind of restricted type in the type system. Variants are somewhat clumsy without pattern matching, but most of the tools you can make use of are already there, I would say.

In my backend system I represent users with different variant states to make a lot of invalid states unrepresentable.

As for underutilization, I think only functional languages, Rust, and C++ support variants, and that might be one reason: people just make blobs of state and choose which fields to use instead of encoding states and making some combinations unrepresentable. JavaScript, Java, C#, and Python do not have variant types to the best of my knowledge. In OCaml and Haskell, with pattern matching, they are very natural. In Rust with enums, same. In C++ they are so-so, but still usable compared to the languages that do not have them.

In my load tests, since I launch thousands of clients, I even went with Boost.MSM to drive the test behavior. One state machine per user.


I am currently reading Real World OCaml and I am really learning more about functional programming, though I was already familiar with a few things.

Looks to me like you can build amazingly robust pieces of software with functional programming.

However, I am divided.

I have a backend that runs on NiceGUI for a product. It does the job. The code is reasonable and MVVM. The most important task it does is connecting to a websocket per customer and consuming data to present some analytics.

I will not have a great deal of customers, maybe in the tens or maximum hundreds visiting the website.

I also want a REPL and/or hot reload, and I am aware that as I grow features (user admin panels, more analytics, etc.), functional programming might do a good job transforming data pipelines.

But Haskell and OCaml are static. I guess if I later want something that grows and scales and is still dynamic, Clojure or Elixir should be a good choice. But at the same time I am afraid that if at some point I need to refactor, things will go wrong.

Currently I use Python with Mypy. All is written in the backend: the frontend is generated by NiceGUI from the backend.


Not sure about Ocaml but with Haskell you can use ghci/`cabal repl` and get blazing fast reload of a web app as you develop. Tbh a lot of haskellers don't take advantage of this IMO.

OCaml seems to have a REPL as well; I am not sure how it works outside of Emacs (in Emacs with utop, what I am trying looks good).

Haskell is so correct that it tends to get in the way a bit, and you tend to encode everything in the type system. This is a blessing for correctness and a curse for other stuff (tracing, debugging, adding side effects).

This is the reason why I am looking at OCaml instead of Haskell: not so pure, more pragmatic, and it supports imperative programming well.

As I said, it is double-edged.


I have designed a backend with exactly the same underlying philosophy you ended up with: a load balancer? Oh, a problem. So better to do client-side hashing and get rid of the discovery service via a couple of DNS tricks already handled robustly elsewhere.

I took it to its maximum: every service is a piece that can break -> fewer pieces, fewer potential breakages.

When I can (which is 95% of the time), I put certain other services inside the processes themselves, inside the server executables, and make them activatable at startup (though I want all my infra not to drift, so I use the same set of subservices in each).

But the idea is: the fewer services, the fewer problems. Even with the trade-offs, I just think it is operationally much more manageable and robust in the end.


Let me see if I understand it. The TL;DR is that instead of asking for VMs and fitting things into them, you reserve the CPU and RAM and do with that whatever you want? Number of mVMs, etc.?


I am Spanish, and I presume I know the details better than you.

Guernica was in many ways something both the British and the Republicans made stand out for propaganda.

There were far worse things in the Spanish Civil War.

For example, Cabra was bombed on a market day with the intention of killing, without being any kind of strategic objective, much farther from other objectives than Guernica, and with more dead civilians, actually.

It is just less well known because of who did it.


> with the intention of killing and without being any kind of strategic objective

Wikipedia says "The airstrike was carried out in the mistaken belief that Italian mechanized troops were stationed in the village. Once over the target, the pilots mistook the market's awnings for military tents." (Carlos Saiz Cidoncha, 2006)

https://en.wikipedia.org/wiki/Bombing_of_Cabra


Thanks, I did not know that piece of information. So maybe what I had was incomplete.


A bombing that was no worse than Cabra's bombing or the Paracuellos killings, but for some reason it stayed at the top like a myth.


Maybe we do not know what Claude has been doing and he keeps it secret...? :D


Indeed this is a nice discovery and I think it is useful in its own right.


> I don't think the fact that the bug being in the language runtime is going to be much consolation. Especially if the software you were running was advertised as formally verified as free of bugs.

Reminds me of what some people in the Rust community do: they argue over how safe this or that is. I always point out that the code is composed of layers, of which unsafe is going to be one. So yes, you are right to say that. Unsafe means unsafe, safe means safe, and we should respect the meaning of those words instead of twisting them for marketing (though I must say I heard this from people in the community, never from the authors themselves).

