
I’d be more convinced if the project explicitly scoped itself as “best possible frontend + governance model” first, and treated a custom index as an aspirational, separate phase.


The real bottleneck isn’t human review per se; it’s unstructured review. Parallel agents only make sense if each worktree has a tight contract: a scoped task, invariant tests, and a diff small enough to audit quickly. Without that, you’re just converting “typing time” into “reading time,” which is usually worse. Tools like this shine when paired with discipline: one hypothesis per agent, automated checks gating merges, and humans arbitrating intent rather than correctness.
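
For concreteness, a minimal sketch of what "automated checks gate merges" could look like per worktree (the path, the 400-line cap, and pytest are placeholder choices, not anything these tools prescribe):

    WORKTREE=../agent-task                            # one hypothesis per agent, one worktree
    LINES=$(git -C "$WORKTREE" diff main...HEAD | wc -l)
    if [ "$LINES" -gt 400 ]; then                     # diff must stay small enough to audit quickly
      echo "diff too large to audit ($LINES lines)"; exit 1
    fi
    (cd "$WORKTREE" && pytest -q) || exit 1           # invariant tests must pass before a human looks
    echo "checks passed; human reviews for intent, not correctness"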


Agreed. I generally see much better results for smaller, well-scoped tasks. Since there's very little friction to spinning up a worktree (~2s), I open one for any small task, something I couldn't do while working on a single branch.
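
(For anyone who hasn't tried it, the low-friction part really is just a couple of git commands; branch and path names below are just examples:

    git worktree add ../quick-fix -b quick-fix    # takes a couple of seconds, shares the same repo
    # ...let the agent loose in ../quick-fix, review the diff, merge...
    git worktree remove ../quick-fix              # clean up once the task lands
)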


I currently prefer Cursor to CC. Does Superset play well with Cursor too? Is this a replacement for their worktree feature?

I haven’t set up worktrees yet, so if I have a quick task while working in main, I currently just spin up another agent in plan mode and then execute the plans serially. Running them in parallel would be really nice, though. I often have 5-10 agents with completed plans, and I’m just slogging through executing them one at a time.


The underrated trick here is separating “signal” from “status game.” Even hostile reviews often contain one actionable invariant (“this workflow is brittle”, “pricing feels dishonest”), and the rest is just the reviewer performing for an audience. If you respond only to the invariant (and maybe ask one concrete follow-up), you de-escalate without rewarding the theatrics — and you also create a public artifact future users can trust.


Yeah. Even with good faith feedback, separating the signal from... whatever else is going on in the feedback-giver's mind can be a bit emotionally fraught. But you've gotta do it.


There’s a third axis here besides “process vs result”: feedback loop latency. Hand-coding keeps the loop tight (think → type → run → learn), which is where a lot of the craft/joy lives. LLMs can either compress that loop (generate boilerplate/tests, unblock yak-shaves) or stretch it into “read 200 LOC of plausible code, then debug the one wrong assumption,” which feels like doing code review for an intern who doesn’t learn. The sweet spot for me has been using them to increase iteration speed while keeping myself on the hook for invariants (tests, types, small diffs); otherwise you’re just trading typing for auditing.


Well explained, good read.

