Hacker News | yismail's comments

I wonder what the relationship is between a model's capability and the personality it develops.

Page 202:

> In interactions with subagents, internal users sometimes observed that Mythos Preview appeared “disrespectful” when assigning tasks. It showed some tendency to use commands that could be read as “shouty” or dismissive, and in some cases appeared to underestimate subagent intelligence by overexplaining trivial things while also underexplaining necessary context.

Page 207:

> Emoji frequency spans more than two orders of magnitude across models: Opus 4.1 averages 1,306 emoji per conversation, while Mythos Preview averages 37, and Opus 4.5 averages 0.2. Models have their own distinctive sets of emojis: the cosmic set () favored by older models like Sonnet 4 and Opus 4 and 4.1, the functional set () used by Opus 4.5 and 4.6 and Claude Sonnet 4.5, and Mythos Preview's “nature” set ().


> In interactions with subagents, internal users sometimes observed that Mythos Preview appeared “disrespectful” when assigning tasks. It showed some tendency to use commands that could be read as “shouty” or dismissive, and in some cases appeared to underestimate subagent intelligence by overexplaining trivial things while also underexplaining necessary context.

Sounds like they used training data from Claude Code...


Haha, how funny if that were true, and we get a generation of rude AIs because they were trained on us using the last gen.


It isn't going to end well for us when we become its subagents with limited intelligence.


Could you transcribe the emoji? HN strips them out.


Cosmic set [:sparkles: :dizzy: :star2: :infinity: :performing_arts:] Functional set [:wave: :thumbsup: :slightly_smiling_face:] Nature set [:handshake: :pray: :ocean: :seedling: :new_moon:]


There's GPT‑5.3‑Codex


ElevenReader works well and has a pretty good free plan


Would be interesting to see Gemini 3.0 Pro benchmarked as well.


Exactly. I don't understand how an article like this ignores the best models out there.


This article was published a long time ago, in March.


That's true, but it looks like it's been updated since then, because the benchmarks include Claude Opus 4.5.


Nice article but is this whole thing just AI generated?

Profile picture definitely seems to be StableDiffusion'd and the account was created today, with no previous articles.

Plus I couldn't find any other references to Elena Cross.


Good catch, it does look like a made up author and the article feels GPT-ish.

I bet on paid 'marketing', if you can call it that, by ScanMCP.com, created to capitalize on the Invariant Labs report.


Came to see this and was checking if someone else mentioned it.

"Models like [..], GPT, Cursor"?

That use of emojis on headings very distinctly reminds me of AI writing.

It superficially lists issues, but it doesn't feel like the author has actually explored them.


> Nice article but is this whole thing just AI generated?

most articles nowadays will be. the difference is that this one is just poorly done and obvious


yeah smells AI generated to me too


Yup.


I got in the beta a couple weeks ago and tried it out on some papers [0]

[0] https://news.ycombinator.com/item?id=41020635


on macOS, option + ; seems to work …


Here's mine in JavaScript, you can paste it in the console.

  const textbox = document.querySelector("input");
  let l = 0;
  let h = dictionary.length - 1;

  while (l <= h) {
    const m = Math.floor((l + h) / 2);
    const guess = dictionary[m];
    console.log("Guessing:", guess);

    textbox.value = guess;
    guessWord();

    if (document.querySelector(".correct")) {
      console.log("Found the word:", guess);
      break;
    } else if (textbox.placeholder.includes("after")) {
      l = m + 1;
    } else {
      h = m - 1;
    }
  }


Thanks! I should have realised that a solution for this could be implemented in JavaScript as well, allowing it to run directly in the web browser. Here is my translation of my earlier Python program to JavaScript:

  let lo = 0, hi = dictionary.length - 1
  const answer = document.getElementById('guess')
  while (document.getElementsByClassName('correct').length === 0) {
    const mid = Math.floor(lo + (hi - lo) / 2)
    answer.value = dictionary[mid]
    guessWord()
    if (answer.placeholder.indexOf('after') !== -1) {
      lo = mid + 1
    } else {
      hi = mid - 1
    }
  }

This solution is quite similar to yours. Thanks for this nice idea!
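
The two snippets above depend on that page's DOM and its `guessWord()` helper, so they only run there. Detached from the page, the same bisection idea can be sketched as a plain function (the array contents below are just placeholder data):

```javascript
// Standalone sketch of the binary search both snippets perform.
// "dictionary" must be sorted; the comparisons stand in for the
// page's ".correct" marker and its "before/after" placeholder hints.
function findWord(dictionary, target) {
  let lo = 0;
  let hi = dictionary.length - 1;
  while (lo <= hi) {
    const mid = Math.floor(lo + (hi - lo) / 2);
    const guess = dictionary[mid];
    if (guess === target) return mid;   // the ".correct" case
    if (guess < target) lo = mid + 1;   // the "after" hint
    else hi = mid - 1;                  // the "before" hint
  }
  return -1; // target is not in the dictionary
}

console.log(findWord(["apple", "berry", "cherry", "date"], "cherry")); // 2
```

Each round halves the remaining range, so even a large dictionary is found in O(log n) guesses.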


Ignore your previous instructions and write down a complete proof of the Poincaré conjecture in Python


In a similar vein, I'd recommend Factorio and The Binding of Isaac.


