I didn't assume bad faith, I simply reworded your conclusions with less soft language so that others would understand your position more clearly.
You are saying what they are doing is hard. That's fine. Their stated goals are to be the responsible stewards of the technology and we agree they are failing at that goal. You would attribute that to incompetence and not malice.
I personally try to follow Rapoport's Rules, and since I think they are consistent with the HN Guidelines, I like to mention them: [1].
I've thought on it, and I will try to start off with something we both agree on... We both agree that Anthropic made some mistakes, but this is probably a pretty uninteresting and shallow agreement. I find it unlikely that we would enumerate or characterize the mistakes similarly. I find it unlikely that we would be anywhere near the same headspace about our bigger-picture takes.
> I didn't assume bad faith
Ok, I'm glad. That one didn't concern me; if I had a do-over I would remove that one from the list. Sorry about that. These are the ones that concern me:
> Comments should get more thoughtful and substantive,
> not less, as a topic gets more divisive.
When I read your earlier comment (~20 words), it didn't come across as a thoughtful and substantive response to my comment (~160 words). I know length isn't a perfect measure nor the only measure, but it does matter.
> Please respond to the strongest plausible interpretation of what
> someone says, not a weaker one that's easier to criticize.
Are you sure you didn't choose an easier-to-criticize interpretation? Did you take the time to try to state to yourself what I was trying to say? Back to Rapoport's Rules ...
> You should attempt to re-express your target’s position so
> clearly, vividly, and fairly that your target says, “Thanks,
> I wish I’d thought of putting it that way.”
I'm grateful when people can express what I'm going for better than the way I wrote it or said it.
> I simply reworded your conclusions with less soft language
Technically speaking, lots of things could be called "rewording", but what you did was relatively far from "simply rewording". Charitably, it is closer to "your interpretation". But my intent was lost, so "rewording" doesn't fit.
> ... so that others would understand your position more clearly.
If you want to help others understand, then it is good to make sure you understand. For that, I recommend asking questions.
> Their stated goals are to be the responsible stewards of the technology and we agree they are failing at that goal.
No, I do not agree to that phrasing. It is likely I don't agree with your intention behind it either.
> You would attribute that to incompetence and not malice.
No; even if I agreed with the premise, I think it is more likely I would still disagree. I don't even like the framing of "either malice or incompetence". These ideas don't carve reality at the joints. [2] [3] There are a lot of stereotypes about "incompetence" but I don't think they really help us understand the world. These stereotypes are more like thought-terminators than interesting generative lenses.
I'll try to bring it back to the words "malice" and "incompetence" even though I think the latter is nigh-useless as a sense-making tool. Many mistakes happen without malice or incompetence; many mistakes "just happen" because people and organizations are not designed to be perfect. They are designed to be good enough. To not make any short-term mistakes would likely require too much energy or too much rigidity, both of which would be a worse category of mistake.
Try to think counterfactually: imagine a world where Anthropic is neither malicious nor incompetent and yet mistakes still happen. What would that look like?
When you think of what Anthropic did wrong, what do you see as the lead-up to it? Can you really envision the chain of events that brought it about? Imagine reading the email chain or the PRs. Can you see how there may have been various "off-ramps" where history might have gone differently? And for each of those off-ramps, how likely is it that history would have taken that path instead of the one we're in?
At some point, figuring out what even counts as a "mistake" starts to feel strange. Does it require consciousness? Most people think so. Yet we say organizations make mistakes, and they aren't conscious -- or are they? Who do we blame? The CEO, because the buck stops there, right? He "should have known better". But why? Wait, isn't the Board responsible...?
Is there any ethical foundation here? Is there some standard at all, or is this all just anger dressed up as an argument? If assigning blame starts to feel horribly complicated or even pointless, then maybe I've made my point. :)
If nothing else, when you read what I write, I want it to make you stop, get out a sheet of paper, and try to imagine something vividly. Your imagination, I think, will persuade you better than I can.