kenforthewin's comments | Hacker News

Tauri is great! This is my first Rust-based project and it took some real getting used to. But I love that Tauri doesn't ship with an entire Chromium browser, and I love its focus on security.

nice username :)

Fair point, the app makes requests to load fonts. We'll fix that in the next release.


This is honestly a great synopsis of Atomic and its design tradeoffs. Thanks! Giving commonplace a look.

Thanks!

The reviews are done automatically - here are the instructions: https://github.com/zby/commonplace/blob/main/kb/agent-memory...

I am open to changing these instructions - it can't just be about making your system look better - but I'll try to incorporate genuine ideas on how to improve these reviews.


Thanks for the feedback. Yeah, I admit copywriting is not my forte. I'm a solo dev, so I'm focusing most of my time and energy on the product itself. There are always 100 things I could be polishing for Atomic - social media presence, website, docs, etc. Even with AI there just aren't enough hours in the day - you have to triage somehow.

Atomic supports any generic OpenAI-compatible LLM provider, including Ollama, LM Studio, etc.

But local-first !== defaults to local inference, right?

I'm not sure I understand the question. Regardless of what provider you choose - be it cloud-based or local - you have to provide setup information such as host, authentication, etc. So it "defaults" to nothing; you have to select something.

Maybe this will be a clearer question: What does "local-first" mean in the title that you typed in for this HN submission?

Local-first means running Atomic with local models is not an afterthought. It's a first-class citizen that works just as seamlessly as running with a cloud provider - assuming you've done the work to provision the local models and their connections yourself.

Atomic supports any generic OpenAI-compatible LLM provider, including Ollama, LM Studio, etc.

I'm not sure what the dunk is supposed to be here... Atomic supports the exact same feature set with local models as it does with OpenRouter. Is your gripe just that OpenRouter is the first option in the dropdown?

Yes. Why even call it local-first when local isn't first? Not to mention, for some reason they decided to only support Ollama instead of giving you the option to connect to any OpenAI-compatible server, which would make this work with any other inference server such as llama.cpp and vLLM as well as Ollama. (and also most SaaS inference providers, including OpenRouter, so the custom integration would not be necessary either, https://schizo.cooking/schizo-takes/9.html)

Did you think local-first meant how a dropdown is sorted?

OpenAI-compatible is indeed one of the provider options for Atomic. Ollama and OpenRouter are separate options to allow for easier selection of models from these specific providers.


The online documentation does not suggest that using a generic OpenAI-compatible server is an option, and it once again lists the non-local option first.

https://atomicapp.ai/getting-started/ai-providers/

> OpenAI-compatible is indeed one of the provider options for Atomic. Ollama and OpenRouter are separate options to allow for easier selection of models from these specific providers.

Why is this necessary over just presenting the result of `/v1/models`?
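
Any OpenAI-compatible server answers that same listing call, so the model picker could be populated generically. A rough Python sketch (the base URL and key here are placeholders, not anything from Atomic's code):

    import requests

    BASE_URL = "http://localhost:11434/v1"  # e.g. Ollama; llama.cpp, vLLM, OpenRouter expose the same /v1
    API_KEY = "placeholder"                 # many local servers accept any token

    resp = requests.get(f"{BASE_URL}/models",
                        headers={"Authorization": f"Bearer {API_KEY}"},
                        timeout=10)
    resp.raise_for_status()

    # Response shape: {"object": "list", "data": [{"id": "<model name>", ...}, ...]}
    for model in resp.json()["data"]:
        print(model["id"])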

You can say it's just the ordering of a dropdown, but to me it seems pretty clear that this thing is developed with the idea that you'll most likely use a SaaS provider.


It has supported local LLMs from the beginning; it was not something that was just tacked on. I don't know what else to tell you. Your assumptions are just wrong.


The biggest difference is that Atomic leverages an LLM to auto-tag and a text embedding pipeline to drive semantic search - so the knowledge base is self-organizing. The bet here is that having an agent grep the filesystem is fine for a carefully curated, relatively small set of markdown files, but it starts to degrade once you treat your knowledge base as a place to put everything: personal notes, articles you find interesting, entire textbooks if you want to. Past a certain scale a vector database is pretty much required; a filesystem-based approach is just an incredibly inefficient way to do retrieval at that point, and your agent is bound to miss important data points.
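
For intuition, the retrieval side boils down to something like this (an illustrative sketch only, not Atomic's actual code; embed() here is a toy stand-in for a real embedding model):

    import numpy as np

    def embed(text: str, dim: int = 64) -> np.ndarray:
        """Toy stand-in for a real embedding model (hashed bag-of-words);
        a real pipeline would call an actual text-embedding model."""
        v = np.zeros(dim)
        for token in text.lower().split():
            v[hash(token) % dim] += 1.0
        norm = np.linalg.norm(v)
        return v / norm if norm else v

    # Index once: every note, article, or chapter becomes one or more vectors.
    chunks = [
        "Meeting notes: decided to migrate the API gateway to Rust.",
        "Article: an introduction to vector databases and semantic search.",
        "Textbook chapter: classical information retrieval with tf-idf.",
    ]
    index = np.stack([embed(c) for c in chunks])

    def search(query: str, k: int = 2) -> list[str]:
        scores = index @ embed(query)   # cosine similarity, since vectors are unit-norm
        return [chunks[i] for i in np.argsort(-scores)[:k]]

    print(search("how does a vector database retrieve things?"))

Grep, by contrast, only surfaces chunks that share literal strings with the query, which is exactly what breaks down once the corpus stops being hand-curated.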

Do the LLM auto-tagging and embedding pipeline run on-device, or are they remote calls?

So an Obsidian plugin? Got it.

One can imagine an Obsidian plugin of arbitrary complexity, given that plugins are written in a Turing-complete language.

Nice work, a couple bits of feedback:

* The editor doesn't seem to support code fence literals (as in I can't type ``` to get a code block)

* At very large markdown file sizes the performance is not great.

I'm building an Obsidian-style markdown editor (for my own AI knowledge base product!) over at https://github.com/kenforthewin/atomic-editor


Atomic looks quite interesting, and the "wiki synthesis" feature particularly so.

I've been working on a suite of skills and a tiny MCP (also SQLite + SQLite-vec based) where the focus is on making it easy to produce "atoms" from quick brain dumps.

The chunking problem is "bypassed" by declaring each section a chunk and having the LLMs rewrite drafts into sections that chunk well. That means lots of redundancy, and no "As explained above".
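
In practice the splitter can be trivially simple - roughly this (an illustrative sketch, not the actual skill/MCP code):

    import re

    def section_chunks(markdown: str) -> list[str]:
        """Each heading starts a new chunk; no overlap logic is needed because
        drafts get rewritten so every section stands on its own."""
        parts = re.split(r"(?m)^(?=#{1,6} )", markdown)
        return [p.strip() for p in parts if p.strip()]

    doc = "\n".join([
        "# Atom: cache invalidation",
        "Caches must be invalidated whenever the source of truth changes.",
        "",
        "# Atom: cache stampede",
        "Many clients recomputing an expired entry at once can overload the backend.",
    ])
    print(len(section_chunks(doc)))  # 2 chunks, one per section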

The intended reader isn't a human, but rather agents that generate human-friendlier prose, for different target audiences. By assuming the reader is an "expert", the idea is that it's much cheaper to mass-produce reviewed "atoms".

Itching to try that workflow with Atomic or Tolaria.

