I'm lucky enough that I live in a city that has a newbie-friendly group that climbs every week and goes for dinner and board games afterwards.
I consider myself an introvert, but after going for a while, I came to recognise who the regulars are, and they recognise me as a new regular too, at which point they're more open to socialising, even outside the weekly meetups.
Even when I'm bouldering alone, I've had random people cheer for me when I'm about to send, or show me the beta for a route I'm struggling with, or ask for help with a problem. It just provides a very natural conversation starter, at which point you can pivot to other topics, provided they seem open to talking more.
It's obvious in hindsight, but to me it's really interesting that you can collect data points on the community just by chatting with them. Maybe you could guess, by appearance or behaviour or something, whether most people at the gym are university students, or gym bros, or something else.
But by chatting with them, the world seems a bit bigger. And even if you don't see them again often, or don't chat again, it's just nice that you have some level of familiarity and learn things you wouldn't know otherwise. And although sometimes you have that awkward, uncomfortable short conversation, every once in a while you make a new friend. That is life, I suppose.
Most of the people in my gym would not thank me for talking to them. They rarely talk to each other unless they know each other outside. It may be a cultural thing.
> ...stored in the global StorageDatabaseNameHashtable.
> This mapping:
> - Is keyed only by the database name string
> ...
> - Is shared across all origins
Why is this global keyed only by the database name string in the first place?
The post mentions a generated UUID; why not use that instead, and have a per-origin mapping of database names to UUIDs somewhere? Or even just have separate hash tables for each origin? Seems like a cleaner fix to me compared to sorting (imo, though admittedly a more complex fix, with architectural changes).
Seems to me that having a global hashtable that shares information from all origins is asking for trouble, though I'm sure there is a good explanation for this (performance, historical reasons, some benefits of this architecture I'm not aware of, etc.).
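To make the suggestion concrete, here is a minimal sketch of what a per-origin registry could look like. The class and method names are hypothetical, purely illustrative, and say nothing about how the browser actually structures this internally; the point is just that keying by (origin, name) instead of name alone means identically named databases from different origins can never collide.

```python
import uuid

class DatabaseRegistry:
    """Hypothetical sketch: map (origin, database name) -> stable UUID."""

    def __init__(self):
        self._ids = {}  # (origin, name) -> UUID string

    def get_or_create(self, origin: str, name: str) -> str:
        # The compound key isolates origins from each other.
        key = (origin, name)
        if key not in self._ids:
            self._ids[key] = str(uuid.uuid4())
        return self._ids[key]

registry = DatabaseRegistry()
a = registry.get_or_create("https://a.example", "mydb")
b = registry.get_or_create("https://b.example", "mydb")
# Same database name, different origins: distinct backing identifiers.
```

Repeated lookups for the same (origin, name) pair return the same UUID, so the name string itself never has to be globally unique.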
Someone from my high school added me on LinkedIn and works at Palantir.
What I find interesting is that a few months after joining, he scrubbed all posts, descriptions, and mentions of the word "Palantir" from his profile, and replaced them by saying he works at an unnamed company as "a Forward Deployed Engineer". Judging by his activity reacting to other posts, it seems his coworkers also use the same term and removed mentions of "Palantir".
I suppose it was to avoid backlash from others, or perhaps other companies would be hesitant to hire someone from Palantir (?). Or perhaps it's just a company policy to keep scammers from finding employees.
But in any case, the hiding of the word is something I find interesting.
Back when I was in university, one of the units touching on assembly[0] required students to use subtraction to zero out a register instead of using the move instruction (which also worked), as subtraction used fewer cycles.
I looked it up afterwards, and xor was also a valid instruction in that architecture to zero out a register, and used even fewer cycles than the subtraction method; but it was not listed in the subset of the assembly language instructions we were allowed to use for that unit. I suspect it was deemed a bit off-topic, since you would need to explain what the mathematical XOR operation is (if you hadn't already learned about it in other units) when the unit was about something else entirely, whereas everyone knows what subtraction is, and that subtracting a number from itself gives zero.
[0] Not x86, I do not recall the exact architecture.
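The identities behind both idioms are easy to check: subtracting any value from itself gives zero, and XORing any value with itself also gives zero, since every bit is paired with an identical bit and b XOR b = 0. A quick sanity check in Python:

```python
# Both zeroing idioms rest on simple identities:
#   x - x == 0 for any x, and
#   x ^ x == 0 for any x (each bit pair (b, b) XORs to 0).
for x in (0, 1, 0xFF, 0xDEADBEEF):
    assert x - x == 0
    assert x ^ x == 0
print("zeroing identities hold")
```

(On x86 specifically, `xor eax, eax` became the canonical zeroing idiom, helped by its short encoding; the commenter's architecture was something else, so the exact cycle counts there may have differed.)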
It increases the attack surface of the browser. Even if you do need to "accept" a connection for a device, this isn't foolproof. I imagine adding WebUSB is a non-trivial amount of code; who's to say there isn't a bug or exploit introduced there somewhere, or a bypass for accepting device connections?
This would still be better than downloading random native programs since it's under the browser's sandbox, but not everyone would _ever_ need to do something that requires WebUSB/USB, so this is just adding attack surface area for a feature only a small percentage of people would ever use.
The solution is to use a smaller separate _trusted_ native program instead of bloating the web with everything just for convenience. But I understand that most are proprietary.
I say all this, but a part of me does think it's pretty cool I can distribute a web-app to people and communicate via WebUSB without having the user go through the process of downloading a native app. I felt the same way when I made a page on my website using WebBluetooth to connect to my fitness watch and make a graph of my heart rate solely with HTML and Javascript (and no Electron).
I'm just not too happy about the implications. Or maybe I'm just a cynic, and this is all fine.
I do not understand the appeal of the workflow of working on separate things in parallel, then splitting the work off into branches/commits. Imo, isn't it better to fully focus on one thing at a time, even if it is "simple"?
I imagine if I follow this workflow, I might accidentally split it off in a way that branch A is dependent on some code changes in branch B, and/or vice versa. Or I might accidentally split it off in a way that makes it uncompilable (or introduce a subtle bug) in one commit/branch because I accidentally forgot there was a dependency on some code that was split off somewhere else. Of course, the CI/CD pipeline/reviewers/self-testing can catch this, but this all seems to introduce a lot of extra work when I could have just been working on things one at a time.
I'm open to changing my mind, I'm sure there are lots of benefits to this approach, since it is popular. What am I missing here?
From practical experience using jj daily and having (disposable) mega merges:
When I have discrete, separate units of work, but some may not merge soon (or ever), being able to use mega merges is so amazing.
For example, I have some branch that has an experimental mock-data-pipeline thingy. I have yet to devote the time to convince my colleagues to merge it. But I use it.
Meanwhile, I could be working on two distinct things that can merge separately, but I would like to use Thing A while also testing Thing B, but ALSO have my experimental things merged in.
Simply run `jj new A B C`. Now I have it all.
Because jj's conflict resolution is fundamentally better, and rebases are painless, this workflow is natural and simple to use.
> Because jj's conflict resolution is fundamentally better
I don't know jj well, so its merge algorithm may well be better in some respects, but it currently can't merge changes to a file in one branch with that file being renamed in another branch. Git can do that.
I don't think I really understand the way jujutsu is doing this, but if it's what I think, one example would be that you realize while working that some changeset is getting too big and makes sense to split it. So B would depend on A and be on top eventually, but you don't know the final form of B until you've finished with both. I've always just done this with rebasing and fixups, but I could see it being easier if you could skip that intermediate step.
Sometimes you want to work on something that needs X as a prerequisite. Then you realise that once X is in place, you can actually build a number of useful things against X. And so forth. There's no good way to merge sequentially, other than a multi-merge.
I’ve found megamerge really helpful in cases where I’m working on a change that touches multiple subsystems. As an example, imagine a feature where a backend change supports a web change and a mobile change. I want all three changes in place locally for testing and development, but if I put them in the same PR, it becomes too hard to review—maybe mobile people don’t want to vouch for the web changes.
You’re right that I have to make sure that the backend changes don’t depend on the mobile changes, but I might have to be mindful of this anyway if the backend needs to stay compatible with old mobile app versions. Megamerge doesn’t seem to make it any harder.
>I do not understand the appeal of the workflow of working on separate things in parallel, then splitting it off into branches/commits. imo, isn't it better to fully focus on one thing at a time, even if it is "simple"?
because agents are slow.
I use a SOTA model (latest Opus/ChatGPT) to first flesh out all the work. Since a lot of agent harnesses use some black magic, I use this workflow:
1. Collect all issues
2. Make a folder
3. Write each issue as a file with complete implementation plan to rectify the issue
After this, I switch from the SOTA model to a mini model, and loop through each issue (or run agents in parallel) to implement one issue at a time.
I usually need about 3 iteration runs to implement full functionality.
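The planning steps above (collect issues, make a folder, one plan file per issue) could be sketched roughly like this. The file layout and names are hypothetical placeholders; in practice, the plan text in each file would come from the larger model.

```python
from pathlib import Path

def write_plans(issues: list[str], out_dir: str = "plans") -> list[Path]:
    """Steps 1-3 of the workflow: one plan file per collected issue."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    paths = []
    for i, issue in enumerate(issues, start=1):
        path = out / f"issue_{i:02d}.md"
        # Placeholder body; the SOTA model would write the real plan here.
        path.write_text(f"# {issue}\n\n(implementation plan goes here)\n")
        paths.append(path)
    return paths

plans = write_plans(["fix login timeout", "add retry logic"])
for plan in plans:
    # A mini model (or parallel agents) then works one file at a time.
    print(plan.name)
```

Keeping one self-contained plan per file is what lets the cheaper model (or several agents at once) pick up issues independently.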
A real case from my work. I had to work on an old Python project that used Poetry and some other stuff that just wasn't working correctly on my computer. I did not want to touch the CI/CD pipeline by switching fully to uv.
But I created a special uv branch that moved my local setup to uv. Then I went back up the tree to main and created a feature branch from there, merged the two together, and worked from that merged state, moving all the real changes to the feature branch.
Now whenever I enter that project I have this uv branch that I can merge in with all the feature branches to work on them.
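A throwaway repo reproducing that workflow might look like the following. The branch and file names are illustrative, not the commenter's actual ones:

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com" && git config user.name "Dev"
git commit -q --allow-empty -m "initial"
base=$(git symbolic-ref --short HEAD)  # main or master, depending on config

git switch -qc uv                      # local-only branch holding the uv setup
echo "managed-by-uv" > uv.lock
git add uv.lock && git commit -qm "local uv setup"

git switch -q "$base"
git switch -qc feature-x               # real work branches from the base
git merge -q --no-edit uv              # fold the uv setup into the work branch
test -f uv.lock && echo "uv setup available on feature-x"
```

The uv branch never gets pushed; only feature-x (minus the setup commit, if you rebase it out before opening a PR) goes up for review.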
I had a big feature I was working on. jj made it easy to split it into 21 small commits so I could give reviewers smaller things to review while I continued to work. It wasn't perfect, and maybe git could do it all by itself, but that hasn't been my experience.
In other words, I effectively was working on one thing, but at a quicker easier pace.
But in general you are one of the few who do. Many devs do find git rebases scary. Having done it in both, I can say it is much, much simpler with jj (especially with jjui).
It does seem to introduce a lot of complexity for its own sake. This kind of workflow only survives on the absorb command and like you said it doesn't really cover all the interplay of changes when separated. It's a more independent version of stacked diffs, with worse conceptual complexity.
As a jujutsu user, I don't disagree. I can see the appeal of doing a megamerge, but routinely working on the megamerged version and then moving the commits to the appropriate branch would be the exception, not the norm.
I gather one scenario is: You do a megamerge and run all your tests to make sure new stuff in one branch isn't breaking new stuff in another branch. If it does fail, you do your debug and make your fix and then squash the fix to the appropriate branch.
I wouldn't do it this exact way either but the benefit is "having any local throwaway integration branch" vs. having none at all. You don't need to do it this exact way to have one.
I wonder if someone more creative than me would be able to push this to do things it was not designed to do. I recently found a video where someone exploited some properties of certain transcript file formats to make a primitive drawing app out of YouTube's video player's closed captions.[0]
Since a brush's code can see the state of the canvas and draw on it, perhaps there can be a brush that does the opposite here, and instead renders a simple "video" when you hold down the mouse? Or even a simple game, like Tic-Tac-Toe.
I understand that obviously isn't the purpose of the brush programs, but I think it is an interesting challenge, just for fun.
[0] The video I am thinking of is by a channel named Firama, but they did not explain how they accomplished it. Another channel, SWEet, made their own attempt, which wasn't as full-featured as the original, but they did document how they did it.
> Never follow a shortened link without expanding it using a utility like Link Unshortener from the App Store,
I am unfamiliar with the Apple ecosystem, but is there anything special about this specific app that makes it trustworthy (e.g. reputable dev, made by Apple, etc.)? Looking it up, it seems to be an $8 link-unshortener app.
In any case, there have been malicious sites that return different results based on request headers (e.g. the User-Agent: if the request comes from a web browser, return a benign script; if it comes from curl, return the malicious one). But I suppose this wouldn't be a problem if you directly inspect and use the unshortened link.
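This kind of server-side cloaking is simple to implement, which is part of why it's common. A minimal sketch of the idea (the function and return strings are purely illustrative, not taken from any real attack):

```python
def choose_payload(user_agent: str) -> str:
    """Illustrative header-based cloaking: command-line fetchers get the
    real payload, browsers get a harmless page, so casual inspection in
    a browser makes the URL look clean."""
    ua = user_agent.lower()
    if "curl" in ua or "wget" in ua:
        return "malicious install script"
    return "benign landing page"

print(choose_payload("curl/8.5.0"))
print(choose_payload("Mozilla/5.0 (X11; Linux x86_64)"))
```

Real attacks can key on more than the User-Agent (IP ranges, Accept headers, request timing), so even fetching the link yourself with a different tool doesn't guarantee you see what the victim would see.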
> Terminal isn’t intended to be a place for the innocent to paste obfuscated commands
Tale as old as time. Wasn't there an attack that started getting popular last year on Windows, where a fake "captcha" asks you to hit Win + R and paste a command to "verify" the captcha? But I suppose this type of attack has been going on for a long, long time. I remember Facebook and some other websites used to have a big warning in the developer console asking users not to paste scripts they found online there, as they are likely scams and will not do what they claim to do.
---
Side note: is the layout of the website confusing for anyone else? Without borders on the images (and the images being the same width as the paragraph text), they seemed like part of the page, and I found myself trying to select text in a screenshot, briefly wondering why I could not. Turning on my Dark Reader extension helped a little, since the screenshots are on a white background, but it still felt a bit jarring.
Agreed, the lack of borders or indentation on the screenshots is very confusing. It's hard to understand what text comes from the malicious website and what is from the author.