Aren't these 3 different implementations with totally different use cases? KaTeX is a LaTeX-like implementation for the web. Ratex is really 'rewrite KaTeX in Rust'. I don't understand what is getting "bolted on" to what here.
The rule should be whatever the people running the project think it should be. If you've got your own project, do implement the anti-fully-autonomous-PRs rule for your project. But the creators of Zig do not owe you or me the rule we like.
> With each trip generating multiple ledger entries, and Uber as a whole processing 15 million trips per day, it didn’t matter that DynamoDB was great because of high throughput at global scale. The proverbial bean counter should’ve stopped this madness from happening.
> At Uber’s scale, DynamoDB became expensive. Hence, we started keeping only 12 weeks of data (i.e., hot data) in DynamoDB and started using Uber’s blobstore, TerraBlob, for older data (i.e., cold data). TerraBlob is similar to AWS S3. For a long-term solution, we wanted to use LSG.
Honest question. Why do people go for this kind of complicated solution? Wouldn't Postgres work? Let's say each trip creates 10 ledger entries. Let's say those are 10 transactions. So 150 million transactions in a day. That's like 2000 TPS. Postgres can handle that, can't it?
If regional replication or global availability is the problem, I have to ask: why does it matter? For something as critical as a ledger, does it hurt to make the user wait a few hundred milliseconds if that means you can have a simple and robust ledger service?
I honestly want to know what others think about this.
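For reference, a quick back-of-envelope sketch of that throughput claim (just a sketch: the 15 million trips/day is from the article, the 10 ledger entries per trip and the even spread over the day are my assumptions):

    # Back-of-envelope ledger write rate at the article's stated scale.
    # 15M trips/day is from the article; 10 entries/trip and the even
    # spread over 24 hours are assumptions.
    trips_per_day = 15_000_000
    entries_per_trip = 10
    seconds_per_day = 24 * 60 * 60

    avg_tps = trips_per_day * entries_per_trip / seconds_per_day
    print(f"~{avg_tps:,.0f} writes/s on average")  # ~1,736 writes/s

Even a generous 3-5x multiplier for peak hours keeps this in the low thousands of writes per second.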
It’s usually because executive management bakes hyper-growth into the assumptions because they really want the biz to grow; then it becomes marching orders down the chain, getting misinterpreted in a game of corporate telephone.
“We need to design this for 1b DAUs”
Then 1) that growth never happens and 2) you end up with a super complicated solution
Instead, someone needs to say, “Hey [boss], are you sure we need to build for 1b DAUs? Why don’t we build for 50m first, then make sure it’s extensible enough to keep improving with growth”
SRE here. Most of the time we see choices like this because teams are under pressure to deliver, and the scale would likely exceed what a database will easily handle with out-of-the-box settings. So tweaking is required, and that takes time/knowledge the dev team doesn't have. AI helps a bit here, but it didn't exist when the DynamoDB solution was chosen. However, some Terraform, and boom, scalable database created; the only downside is the cost, which is the next Product Manager's problem.
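(To illustrate the "some Terraform, and boom" point, here is the rough equivalent in Python with boto3 — table and attribute names are invented for illustration. On-demand billing is what makes it "scale by default" while quietly turning the scaling problem into a cost problem:)

    import boto3

    # Hypothetical ledger table; names are made up for illustration.
    # PAY_PER_REQUEST means no capacity tuning up front -- the scaling
    # problem becomes a billing problem instead.
    dynamodb = boto3.client("dynamodb", region_name="us-east-1")
    dynamodb.create_table(
        TableName="ledger-entries",
        AttributeDefinitions=[
            {"AttributeName": "trip_id", "AttributeType": "S"},
            {"AttributeName": "entry_id", "AttributeType": "S"},
        ],
        KeySchema=[
            {"AttributeName": "trip_id", "KeyType": "HASH"},
            {"AttributeName": "entry_id", "KeyType": "RANGE"},
        ],
        BillingMode="PAY_PER_REQUEST",
    )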
> Author here. I did not use AI to write this essay.
Maybe you did. Maybe you didn't. It's your word vs. theirs.
But one thing that is undeniable is that your article reads very much like AI-generated text. While reading it, I couldn't help thinking how ironic it is to write about the virtues of simpler devices in what is obviously an AI-generated article.
Yeah, this one demonstrates a particularly pernicious view of software development. One where growth, no matter how artificial, is the only sign of success.
If you work with service-oriented software, the projects that are "dying" may very well be the most successful ones if they're key components. Even from a business perspective, having to write less code can also be a sign of success.
I don't know why this was overlooked when the churn metric is right there.
Whenever we initiated a new (internal) SW project, it had to go through an audit. One of the items in the checklist for any dependency was "Must have releases in the last 2 years"
I think the rationale was the risk of security vulnerabilities not being addressed, but still ...
That was my question too. I have plenty of projects I've worked on where they rarely get touched anymore. They don't need new features and nothing is broken.
Sometimes you need to bump a dependency version, adjust the code to a changed API endpoint, or update a schema. Even if the core features stay the same, there's some expected maintenance. I'd still call that being worked on, in a sense that someone has to do it.
Technically you're correct that change frequency doesn't necessarily mean dead, but the number of projects that are receiving very few updates because they're 'done' is a fraction of a fraction of a percent compared to the number that are just plain dead. I'm certain you can use change frequency as a proxy and never be wrong.
That sort of project exists in an ocean of abandoned and dead projects though. For every app that's finished and getting one update every few years, there are thousands of projects that are utterly broken and undeployable, or abandoned on Github in an unfinished state, or sitting on someone's HDD never to be touched again. Assuming that a low change frequency means 'dead' is almost always correct, which is what makes it a reasonable proxy.
I know people win the lottery every week, but I also believe that buying a lottery ticket is essentially the same as losing. It's the same principle.
With respect, this is a myopic view. Not all software is an "app" or a monolith. If you use a terminal, you are directly using many utilities that by this metric are considered dying or dead.
> it doesn't have to be files. it could be in memory on the browser.
How'd that work? If it's in memory, the extensions would vanish every time I shut down Chrome? I'd have to reinstall all my extensions every time I restart Chrome?
Have you seen any browser that keeps extensions in memory? Where they ask the user to reinstall their extensions every time they start the browser?
> but the language of "your computer" implies files on your computer, as it would be what people commonly call it. Merely just the extension is not enough.
But the language of "your computer" also implies software on your computer including but not limited to Chrome extensions.
It implies more than just the browser, which is likely why it was used for the post title. If it is exclusively limited to the browser, then "scans your browser" is more correct, and doesn't mislead the reader into thinking something is happening which isn't commonplace on the internet.
> An encouragement to be mindful of language, and therefore discuss what shared context we're trying to build, shouldn't be so controversial in a self-professed 'thoughtful' [0] forum.
I don't understand how HN's news guidelines apply to a blogger writing an article on their own blog. The controversial language was found in the article. It wasn't found in the thread you're replying to.
> Am I reading this right that people can (and do??) use images as a complete replacement for source code files?
Images are not replacements for source code files. Images are used in addition to source code files. Source code is checked in. Images are created and shipped. The image lets you debug things live if you've got to. You can introspect, live debug, live patch and do all the shenanigans. But if you're making fixes, you'd make the changes in source code, check them in, build a new image and ship that.
in smalltalk you make the changes in the image while it is running. the modern process is that you then export the changes into a version control system. originally you only had the image itself. apparently squeak has objects inside that go back to 1977:
https://lists.squeakfoundation.org/archives/list/squeak-dev@...
by originally i meant before the use of version control systems became common and expected. i don't know the actual history here, but i just found this thread that looks promising to contain some interesting details: https://news.ycombinator.com/item?id=15206339 (it is also discussing lisp, which brings this subthread back in line with the original topic :-)
that's very interesting, thank you, i should have realized that even early on there had to be a way to share code between images. (and i don't know why i missed that comment before responding myself)
but, doesn't building a new system image involve taking an old/existing image, adding/merging all the changes, and then releasing a new image and sources file from that?
in other words, the image is not recreated from scratch every time and it is more than just a cache.
what is described there is the process of source management in the absence of a proper revision control system. obviously when multiple people work on the same project, somewhere the changes need to be tracked and merged.
but that doesn't change the fact that the changes first happen in an image, and that you could save that image and write out a new sources file.
> image is not recreated from scratch every time and it is more than just a cache
Yes, some vm & image & sources & changes can be taken as the base implementation for development purposes -- a persistent cache.
The state of whatever IDE tools were in use will be saved -- is that what makes you say "more than just a cache"? If I sleep a windows desktop is that more than just a cache?
> changes first happen in an image
What if I write a plain-text source code file using Notepad, and use Smalltalk file handling and byte code compilation and command-line argument handling (packaged in the image) to write the result of a computation to stdout (and quit the image without saving)?
> If I sleep a windows desktop is that more than just a cache?
yes, so basically what i meant here is that a cache just stores data, but it doesn't store the whole application.
this is significant in that i can shut down an application (say my web browser), then i can upgrade it to a new version, restart, and the application will reinitialize itself and load data from the cache, but now i have a new version of the application.
whereas if i put my laptop to sleep, or better yet, hibernate, then the whole state of the laptop is frozen in place, and i can't do anything to it until i run it again. same is true for smalltalk images.
> What if I write a plain-text source code file using Notepad, and use Smalltalk file handling and byte code compilation and command-line argument handling (packaged in the image) to write the result of a computation to stdout (and quit the image without saving)?
you could be doing that, but then you would be using the image as your IDE and runtime environment, not building the actual application in your image. so you wouldn't be using what i have been taught is the traditional way of doing smalltalk development.
i am not trying to be pedantic here. it does not matter either way. i just find the smalltalk image approach interesting because it forces you to think about software development in a different way.
this matters to me because i am working with a web development platform (written in pike) that uses a similar approach, albeit more by accident than by intention. the developers of the platform added support for programmable objects that are stored in the platform's database. these objects can change the behavior of the platform itself, like plugins, but because they are stored in the database they can be changed at runtime, like a smalltalk image. and all the same implications for doing that apply here too. the database becomes more than a cache. and in theory the whole platform could be rewritten such that almost all of its code is stored in the database and only a small bootstrapping system needs to remain outside. this is simply made possible because pike can load and update code at runtime and code changes can be applied without restarting, just like smalltalk.
the downside of the image approach is that it makes upgrading the base image harder, because there is no clear distinction between the base image and any user added changes. i kind of have to take extra steps to pick out my changes and apply them to a new image.
it would be interesting if that process could be improved. it probably would require some compartmentalization just like an OS where i have the base OS, my home directory and the system configuration. i can take a disk image, upgrade the OS and the rest still works. it would be nice if upgrading pharo for example would work the same way.
btw: thanks for the email. i have to ask, how did you manage to reply to a comment more than a month old? normally the reply function is disabled on comments that are 14 days old.
Is that in conflict with a reproducible build process, or can we have both?
in part the question is what makes a build reproducible. what actually needs to be reproduced? the point of a reproducible build is that a version of source code always produces the same binary.
how do you make reproducible builds in smalltalk? reproducible builds depend on what goes into the build process. so they depend on the compiler and build tools. in smalltalk those are all in the image, and the question is then what happens when i load code into an image. am i getting that right? i am not so familiar with the details here, but i would guess that it depends on how smalltalk compiles the code and how the import process deals with timestamps and source paths, etc.
however if i work on my source code within the image and i share the code by making a copy of the image then the image is the source and the binary and there is nothing to reproduce. your copy and my copy of the image are going to be identical until one of us makes changes to the image.
i'd be curious to learn more here. outside the smalltalk world my editing tools do not affect the reproducibility of the builds of my code. in smalltalk, just getting a new version of the code browser would change the build, wouldn't it? how do you track that or keep that separate?
Port "user added changes" from the source code archive to each vendor release
right, but that's the "wrong" way around from the perspective of a desktop. i don't need to port my code to new versions of VS Code or vim or whichever tools i use to develop. only smalltalk forces me to do that. so i don't mean distinction in the file structure but distinction in the code dependency.
(i would never have considered to ask for being allowed to make a late reply, especially in this case. it is unlikely that anyone else is going to see our conversation. we could have just continued over email. but hopefully we can dig out some worthwhile details that not only me but anyone searching can learn from)
it sounds like you are asking a question, but there is no "?" at the end, so i am slightly confused. if it is a question, my answer is no, because the tools i use to read or edit the code are completely distinct from the build tools.
what affects the reproducibility is the compiler, and that's the issue with smalltalk. you can't upgrade the IDE without upgrading the compiler. say if i use pharo, and switch to a new version of pharo then i get a new version of smalltalk, and i can't compile my own code in the image with the old version any more.
based on that i don't understand how you even enable or test reproducibility in smalltalk. i'd like to learn more about that.
(re: vanity. for a while i have wondered how i would go about creating the longest running discussion thread on hackernews, and how long i would be able to keep it going. i think we are off to a good start here. just remember to peek in here once in a while to see if there is a response. and if there isn't, feel free to poke me by email. (unless we decide that there is nothing to add))
> … can't upgrade the IDE without upgrading the compiler.
When packaged together IDE & compiler can't be upgraded separately (without doing the work to in-effect make separate packages).
> … can't compile my own code in the image with the old version
So we could try to compile own-code' in image' with compiler', and we could try to compile own-code' in image" with compiler", but we want to try to compile own-code' in image" with compiler' ?
Not without doing the work to in-effect make separate packages. (Say we could port compiler' to image" but would that mean trying to compile compiler' with compiler".)
And now we're back to what does "traditional way of doing smalltalk development" mean because supposedly "Team/V could forward and backwards migrate versions of Smalltalk “modules” within a running virtual image."
> the point of a reproducible build is that a version of source code always produces the same binary.
Scope: does "a version of source code" just mean own-code or does it mean sources+changes.
i have just been reading https://news.ycombinator.com/item?id=48081245 about reproducible builds in debian, and that reminded me of this discussion. truth is i still struggle to understand how reproducible builds with smalltalk images even work.
in debian, a build is reproducible if the checksum of the resulting package of my build matches that of your build. how do you do that in a smalltalk image? you take the checksum of what? which objects or rather entities do you compare, and how do you compare them, to verify that a build is reproducible? it can't be the whole image, because that is guaranteed to be different somewhere, and thus the image checksums won't match. so how does that really work?
there is another thought that just occurred to me. what are reproducible builds with runtime-compiled languages like python, ruby, perl, even javascript? the point of a reproducible build is to verify that the binary i use, or the package i use, is based on the same source that i have. python et al don't have binaries, so my reproducible build check is a diff of the source tree. if in smalltalk the image is my source, then i care about knowing that the image has not been modified. that's impossible because the image changes just by using it, unless i never save the changes. what's left is exporting the source and running a diff on that, maybe.
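to make the 'diff of the source tree' idea concrete, here is roughly what i have in mind: hash every exported file in a stable order and compare digests (just a sketch, not tied to any particular smalltalk export format; timestamps or export metadata embedded in the files would break it, which is exactly the part i don't know about):

    import hashlib
    from pathlib import Path

    def tree_digest(root: str) -> str:
        """hash all files under root in sorted order, file names included."""
        h = hashlib.sha256()
        for path in sorted(Path(root).rglob("*")):
            if path.is_file():
                h.update(path.relative_to(root).as_posix().encode())
                h.update(path.read_bytes())
        return h.hexdigest()

    # two exports of "the same" code should yield the same digest
    print(tree_digest("my-export") == tree_digest("your-export"))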
> … the point of a reproducible build is to verify that the binary i use, or that the package i use is based on the same source that i have.
I have been talking about how to reproduce a particular build, not how to verify.
From the same initial state, perform the same sequence of actions, implies arrive at the same final state. Reproducible.
"Retaining your old change logs gives you a record of all the changes you have made to the system. This will prove invaluable when you receive a new release of Smalltalk/V.
… In short, back up the image and change log together, and you shouldn't have any problems."
page 285 Smalltalk/V 286 Tutorial and Programming Handbook 1988
i didn't see that comment, sorry. my primary purpose here is to learn more about smalltalk and the image based development model. you brought up a few interesting things that i have questions about. reproducible builds is one of them. and by question i don't mean to verify the validity of claims but to ask about them to learn more.