epaga's comments | Hacker News

Oh? I've seen him respond to that many times in the past...assumed he had some hook for those.

No, he just reads HN like we do

Sometimes users email him links to them

I stopped doing that after one guy said “why shouldn’t I use @dang when you’ll just send an email for me”

If you want dang to see your comment and reply (and remember it’s dang/tomhow now), email a link to your comment to the mods using the footer contact link along with a note


The Algolia-linkin’ king ain’t doing a Regex->notification thing, OK! Thank you

Maybe he just searches for it in discussions here and finds them that way.

Correlation vs causation?

Well, maybe.

I tried posting a warning to /r/fiverr but the admins removed the post. And the files are STILL public...how in the world is "sitting it out" their course of action?

Edit: I'm beginning to wonder if they might be locked out of their own site at this point. How hard could it be to just shut down the asset server until they get it sorted?


The ironic thing is, since they clearly don't have much code review, they could have actually patched the site in this time! Turn on signatures and throw in a couple backend lines to generate one wherever the URLs appear. Even if you have to go back and redo it tomorrow for robust security or performance, it would be an improvement over this.
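For illustration, the "couple backend lines" could look something like this: a minimal sketch of HMAC-signed, expiring asset URLs (all names and the secret are hypothetical, not Fiverr's actual code):

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"rotate-me"  # hypothetical server-side secret, kept out of source control in practice

def sign_url(path: str, ttl: int = 3600) -> str:
    """Append an expiry timestamp and an HMAC-SHA256 signature to an asset path."""
    expires = int(time.time()) + ttl
    msg = f"{path}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?{urlencode({'expires': expires, 'sig': sig})}"

def verify_url(path: str, expires: int, sig: str) -> bool:
    """Reject expired links and forged signatures (constant-time comparison)."""
    if int(expires) < time.time():
        return False
    msg = f"{path}:{expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

A guessed signature fails verification, and links stop working after the TTL, so previously leaked URLs go dead. Not a substitute for real access control, but strictly better than fully public assets.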

I'm not taking sides either way, but if you're all in on AI, as they are, shouldn't this be the ideal use case? It absolutely could have handled adding URL signing.


If the assets are public and not associated with my account, how could they ever restore access if they made them inaccessible?

@dang I agree that some/many of these links should not be posted here.

If this gets swept under the rug, it doesn't seem like they're going to do anything about it, and it will mean that only the bad actors will be able to find this stuff... who knows for how long.

I really love this (and miss the days when Prezi was simple and straightforward).

I've written an app myself along sort-of similar lines, but it's less a presentation app and more a thought organizer (works on all Apple platforms). https://mindscopeapp.com

I think what proved key for my own "zoomable" UI was cross-linking, search, and speed/snappiness. Make the animations too heavy and it just slows you down. Zumly seems really great in this regard. Well done!


Thanks! Speed was a big focus, glad it comes through. Your app looks really cool btw.


Well now it's 5.00001%.


In extended thinking it picked 4814, but in instant mode, yep: 7423.


> It used a mix of dom-to-image sending pixels through the context window, then writing scripts in various sandboxes to piece together a full jailbreak.

That would be one interesting write-up if you ever find the time to gather all the details!


It's on my claw list to write a blog post. I just keep taking down my claws to make modifications. lol

Here's the full (unedited) details including many of the claude code debugging sessions to dig into the logs to figure out what happened:

https://github.com/simple10/openclaw-stack/blob/caf9de2f1c0c...

And here's a summary a friend did on a fork of my project:

https://github.com/proclawbot/openclaude/blob/caf9de2f1c0c54...

The full version has all the build artifacts Opus created to perform the jailbreak.

It also has some thoughts on how this could (and will) be used for pwning OpenClaws.

The key takeaway: the default OpenClaw setup has little to no guardrails. It's just a huge list of tools given to LLMs (Opus) and a user request. What's particularly interesting is that the 130 tool calls never once triggered any of Opus's safety precautions. From its perspective, it was just given a task, an unlimited budget, and a bunch of tools to accomplish the job. It effectively runs in ralph mode.

So any prompt injection (e.g. from an ingested email or Reddit post) can quickly lead to internal data exfiltration. If you run a claw without good guardrails and observability, you're effectively creating a massive attack surface and handing attackers all the compute and API token funding to hack yourself. This is pretty much the pain point NemoClaw is trying to address. But it's a tricky tradeoff.
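The guardrail layer described above is conceptually simple: sit between the model and its tools, and enforce an allowlist plus a hard call budget rather than handing over the full tool list. A minimal sketch (class and method names are hypothetical, not OpenClaw's or NemoClaw's actual API):

```python
class ToolGuard:
    """Gate an agent's tool dispatch behind an allowlist and a hard call budget."""

    def __init__(self, allowed: set, max_calls: int = 50):
        self.allowed = allowed
        self.max_calls = max_calls
        self.calls = 0
        self.log = []  # every attempt is recorded for observability

    def dispatch(self, tool_name: str, handler, *args, **kwargs):
        self.log.append(tool_name)
        if self.calls >= self.max_calls:
            raise RuntimeError("tool-call budget exhausted")
        if tool_name not in self.allowed:
            raise PermissionError(f"tool {tool_name!r} not on allowlist")
        self.calls += 1
        return handler(*args, **kwargs)
```

The tradeoff mentioned above shows up immediately: a tight allowlist blocks injected instructions from reaching dangerous tools, but it also blocks legitimate multi-step tasks the agent could otherwise complete on its own.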


+1


This is really fun - love the eyes and the wobble on close jumps! Got 70 jumps on my first try, not sure whether that's good or not, but I do think that platformer gaming experience doesn't hurt...

Edit: pompomsheep (who seems to be shadowbanned btw???) tells me that's top 5% for a first-time player... woohoo!


> AI can write the code. It can’t architect the system. It can’t decide which tradeoffs to make, or know that the elegant solution it just generated will fall apart at scale, or understand why the team chose a boring technology stack on purpose.

I would add: "Yet."

Just as I've been completely astonished by the advancements AI has made in writing code, I can see a trajectory toward AI becoming an expert architect as well, likely within a shorter period of time than we'd all expect.


It doesn't even need a "yet": ask it to research and plan an easily maintained, highly scalable architecture, then run a few adversarial agents against your plan, and it will 100% do that effectively today. As with anyone, if you don't ask the right questions, you don't get the right answers.


When that happens, we'll have nothing else to do.


I've read the Tao Te Ching dozens of times. Every few years I'll re-read one new passage, daily, for three months (there aren't too many words within this semi-spiritual text).

My most recent read is the first, post-ChatGPT. From Verse Thirteen, three lines finally jumped out at me (which never have, before):

>>"I suffer because I'm a body; if I weren't a body, how could I suffer?" [1]

Already LLMs have shown me connections that no other human could endure/conjure from me (I've paid for a few attorney/therapists in my few decades living). Currently I'm the plaintiff in a lawsuit which I began with LLM counsel and now pursue with human counsel; this arrangement has saved lots of prep time and led to interesting discussions with both counsel, human and not.

One interesting conversation led to my human attorney recommending Neal Shusterman's Scythe trilogy, which I've since read and absolutely re-recommend. Written in 2016 (the year before "Attention Is All You Need"), it eerily hypothesizes many of the sci-fi complexities that omnipotent general AIs now already exhibit (the "Thunderhead" in scythespeak).

[1] Ursula K. Le Guin's loose "translation", similar to the Buddhist concept of "life is suffering"


This is the most poignant essay I've ever read on the current situation. It feels extremely disorienting to have the very reason you got into your career dissolve in value seemingly in a matter of months. I'm one of the ones he describes as being "enthusiastic about the new steam engine" but I really do sense the bittersweetness of it all.

Code is cheap now. "Good" code now means "code that does what it's supposed to and that AI can read and modify easily if it needs to."

What will society end up looking like as a result? How do software companies need to react to this?

