Hacker News | agent_anuj's comments

I built this over 6 months. The system connects to raw data, infers the schema and business relationships autonomously, then investigates using 100+ rounds of reasoning; each round covers what a human analyst would spend hours on. Demo of the system: https://youtu.be/qqiQvZKoi-Q. The article walks through the architecture (MCP, Trino, agents-as-prompts). Happy to answer questions.


Built this over 6 months — an agentic data platform where AI agents handle infrastructure, query execution, and investigation autonomously.

Here's a demo of the system analyzing the same retail dataset: https://youtu.be/qqiQvZKoi-Q — the planogram result in the article came from a different investigation. Happy to answer questions about the architecture or the MCP tooling.


This resonates. Been building with agents for about six months — not just code gen, the whole chain. Infra, testing, deployment, everything.

Spot on about the conversation being the commit. But the thing that caught me off guard is how fast everything moves. Like nothing stays current. You write docs, they're stale by tomorrow. Architecture diagrams? Useless within a week. The code itself is in constant motion.

What actually works is just asking the agent to look at the codebase and tell you what's going on right now. That's become my default. I don't read my own code anymore; I interrogate it. The source of truth isn't any artifact, it's the ability to ask.


I'll give you my personal experience. I use it for everything: design, coding, testing, deploying to a Kubernetes cluster, fixing issues on the cluster. I use it not only for dev env issues but for production issues. Confidently. Have things gone wrong? Sure. But mistakes have been rare (and catastrophic, non-recoverable mistakes even rarer).

Every time a mistake has happened, on digging in I could always trace it back to something I did wrong: either being careless in reading what it told me, or careless in telling it what I wanted. I have had git corruption issues (it overwrote uncommitted working code with non-working code), but it was my mistake not to tell it to commit the code before making changes. It deleted a QA cluster database, but only because I told it to, thinking it was my dev setup's db. Net net: its mistakes are more a reflection of me as its supervisor than of anything else.
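The "commit before letting it touch the tree" lesson can be automated. A minimal sketch of a pre-flight guard (the helper name and commit message are my own, not part of any tool):

```python
# Hypothetical guard: checkpoint the working tree before handing it to an agent,
# so uncommitted work can't be silently overwritten.
import subprocess

def checkpoint(msg: str = "checkpoint before agent edits") -> None:
    # Stage everything, including untracked files.
    subprocess.run(["git", "add", "-A"], check=True)
    # The commit may legitimately fail when there is nothing to commit,
    # so don't raise on a non-zero exit here.
    subprocess.run(["git", "commit", "-m", msg], check=False)
```

Running this at the top of every agent session means the worst case is a `git checkout` away from recovery.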


This is coming straight from my experience last week. I actually tried to test this: I took 30 days of my Claude Code sessions, about 32k conversation turns across 21 sessions and 10 projects, classified every user message (corrections, feedback, decisions, reframes), and extracted about 3,200 high-signal training pairs. I put a lot of emphasis on my explicit corrections, where I told the AI it was wrong, what the right answer was, and WHY. I fine-tuned Qwen 4B on it with QLoRA. The model learned my voice perfectly; during training it would say things like 'no. fix the query. you're doing 3 joins when you only need user_id', which is exactly how I talk. But that's the problem: it learned to parrot my phrasing without understanding why I made those corrections. It memorized the what, the artifact, but completely missed the how, the reasoning process that led to the correction. Title is exactly right.
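The correction-mining step could look something like this. A minimal sketch only: the marker list, field names, and pairing logic are my assumptions, not the actual pipeline:

```python
# Hypothetical sketch: mine explicit corrections from a session log and pair
# each one with the assistant output it was correcting, so the context (not
# just the phrasing) ends up in the training pair.
def extract_pairs(turns):
    """turns: list of {"role": ..., "text": ...} dicts in conversation order."""
    markers = ("no.", "wrong", "that's not", "fix the")
    pairs = []
    for i, turn in enumerate(turns):
        if turn["role"] != "user" or i == 0:
            continue
        text = turn["text"].lower()
        if any(m in text for m in markers):
            pairs.append({
                "prompt": turns[i - 1]["text"],  # the output being corrected
                "completion": turn["text"],      # the correction itself
            })
    return pairs
```

Even with the preceding turn attached, the WHY usually lives several turns back (or in my head), which is exactly the gap the parent comment describes.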


I've been in enterprise tech for 20 years, and what I see is what I'd call unbundling. It's more like seniors are now doing the work of 5 juniors because AI does the grunt work, so companies just don't hire juniors anymore. But the senior doesn't get paid 5x, maybe 1.2x if lucky, while being expected to output 5x.

I myself have gone back to hands-on coding in the last 6 months along with managing the team, so now I am doing both the developer and manager roles. The company loves it, obviously, but I am not getting paid more to do both, and the juniors on my team are under severe pressure to show their worth. That's not unbundling, that's just squeezing people. And companies will continue to do it.


It's cute you think seniors are getting paid more.


I use Claude Code for pretty much 100% of my work, even personal projects. And I tell you, the lengths it will go to to sing to my tune are never-ending. Not only in personal work; anything I say, even about system design, it will mostly affirm. One trick I find helpful in these situations is to ask Claude (or any AI tool) to be honest and give a confidence score for its response. When forced to assign a confidence score, AI surprisingly does well and tells you clearly when it is not confident and mostly guessing.
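The trick is just a prompt suffix, so it works with any tool. A tiny sketch; the exact wording is my own, not a documented technique:

```python
# Hypothetical helper: force the model to self-report confidence.
# The specific phrasing is an assumption; the point is the explicit ask.
def with_confidence(prompt: str) -> str:
    return (
        prompt
        + "\n\nBe honest: end your answer with a confidence score (0-100) "
        + "and say explicitly if you are mostly guessing."
    )

print(with_confidence("Will this sharding scheme survive a region failover?"))
```

Anecdotally, the low scores are the valuable part: they flag the answers worth double-checking before anything ships.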


It is not just embarrassing; it can kill your demo, project, or even product, because users look at the data first and the tech behind it second. If the data is wrong, they conclude the tech doesn't work. I never took data seriously in my demos during the first 10 years of my career, and no wonder the audience rejected most of my work, even though it was backed by solid platforms.

