
How? Memory prices are sky high; that is the chokehold the monopoly will not let go of.

Grok for fact checking, I mean ironically

TBF Grok on Twitter and Grok via API behave differently. The latter is much better.

The environmentalists pulled a giant scam on Western nations.

> Starting today, @awscloud and OpenAI are bringing the latest OpenAI models to Amazon Bedrock, launching Codex on Amazon Bedrock, and launching Amazon Bedrock Managed Agents, powered by OpenAI (all in limited preview). AWS and OpenAI will continue to bring the latest advances to Amazon Bedrock—so the models and agents you build with today continue to benefit from new breakthroughs as they arrive.

https://x.com/amazon/status/2049178618639839427


> some scientifically qualifiable thing that is certain to happen any time now.

If you had presented GPT 5.5 to me two years ago, I would have called it AGI.


Some people thought SHRDLU was basically AGI after seeing its demo in 1970. The hype around such systems was so strong that Hubert Dreyfus felt the need to write an entire book arguing against this viewpoint (What Computers Can't Do, 1972). All this demonstrates is that we need to be careful with various claims about computer intelligence.

Sure, but it was probably stuck at doing that one thing.

Neural networks are solving huge issues left and right. Google's NN-based weather model is so good you can run it on consumer hardware. AlphaFold solved protein folding. LLMs can talk to you in 100 languages and grasp tasks, concepts, and so on.

I mean, let's talk about what this 'hype' was if a clear ceiling appears and progress gets 'stuck', but until then I would save my judgment for judgment day.


It performs at a usable level across a wide range of tasks. I'm not sure about two years ago, but ten years ago we would have called it an AGI. As opposed to "regular AI" where you have to assemble a training set for your specific problem, then train an AI on it before you can get your answers.

Now our idea of what qualifies as AGI has shifted substantially. We keep looking at what we have and deciding that it can't possibly be AGI, so our definition of AGI must have been wrong.
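
To make that contrast concrete, here is a rough sketch; the toy dataset and the ask_llm helper are made up for illustration, not anyone's actual setup:

    # "Regular AI": gather labeled examples for the one task, train, then predict.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = ["great product", "terrible service", "loved it", "waste of money"]
    labels = ["pos", "neg", "pos", "neg"]           # hand-built training set
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)                          # task-specific training step
    print(clf.predict(["pretty decent overall"]))   # does sentiment, nothing else

    # General model: no task-specific training, just ask in plain language.
    # ask_llm() is a hypothetical stand-in for whatever chat API you use.
    def ask_llm(prompt: str) -> str: ...
    # ask_llm("Is 'pretty decent overall' positive or negative? Explain why.")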


No one who read science fiction in 1955 would call any of the various models we know "artificial intelligence". They would be impressed by it, even excited at first that this was it... until they'd had a chance to evaluate it.

Science fiction from that era even had the concept of what models are... they'd call it an "oracle". I can think of at least 3 short stories (though remembering the authors just isn't happening for me at the moment). The concept was of a device that could provide correct answers to any question. But these devices had no agency, were dependent on framing the question correctly, and limited in other ways besides (I think in one story, the device might chew on a question for years before providing an answer... mirroring that time around 9am PST when Claude has to keep retrying to send your prompt).

We've always known what we meant by artificial intelligence, at least until a few years ago when we started pretending that we didn't. Perhaps the label was poorly chosen (all those decades ago) and could have a better label now (AGI isn't that better label, it's dumber still), but it's what we're stuck with. And we all know what we mean by it. We all almost certainly do not want that artificial intelligence because most of us are certain that it will spell the doom of our species.


I'm pretty sure most people take issue with AGI because we've been raised in a culture that believes AGI is a super entity, a complete superset of humans, that could never, ever be wrong about anything.

In some sense, this isn't really different from where society was headed anyway? The trend was already that more and more sections of the population were being deemed irrational, and you're just stupid/evil for disagreeing with the state.

But that reality was still probably at least a century out, without AI. With AI, you have people making that narrative right now. It makes me wonder if these people really even respect humanity at all.

Yes, you can ride the slippery slope from "superintelligent beings exist" to effectively totalitarianism, but you'll find so many bad commitments along the way.


Just don't move the goal posts. AGI was already here the day ChatGPT came out:

https://www.noemamag.com/artificial-general-intelligence-is-...


If you didn't call GPT 3.5 AGI, I do not believe you when you claim you would have called 5.5 AGI.

I agree with this, but they don't. And that's the thing: AGI as they refer to it is much, much more than what we have, and I don't know if they are ever going to get there. I'm not sure what's even there at this point, or what will justify their investments.

GPT-4 was 3 years ago... it's iterative enhancement.

If you presented ELIZA to people today, some would think it is AGI.

There is a reason so many scams happen with technology. It is too easy to fool people.


And I've been told my job (litigation attorney) is about to be replaced for over 3 years now; it has yet to come close.

What kind of litigation attorney?

I've been working with a startup that I want to invest in, and for the paperwork for that, all the nitty-gritty details: instead of spending $20k on lawyers and a whole bunch more time going back and forth with them, the four of us (me, their CEO, my AI, and their AI) all sat in a room together and hashed it out until both of us were equally satisfied with the contract. (There's some weird stuff, so a templated SAFE agreement wasn't going to work.) I'm not saying you're wrong, just that lawyers as a profession aren't going to be left unchanged either.


Maybe ask your LLM what a litigator is, since litigation is none of the work you described (not) involving your attorney in.

Woosh

People always overestimate the impact of technology because they don't understand the human aspect of many businesses. Will this work eventually be replaced, or will its shape be completely different in the future? That's an easy yes. When is that future? That's a big unknown; in my experience this kind of stuff takes at least a decade (and possibly more in this case) to make a big impact like replacing all of X.

These models need orders-of-magnitude improvement before they can be more helpful than a "find me an example of [an extremely basic principle]" query, which most of the time they don't get right anyway.

... until you actually, like, use it and find out all the limitations it has.

How is this relevant? Human General Intelligence has a lot of limitations as well and we have managed to do lots.

This is like saying that talking about my financial limitations is irrelevant because Jeff Bezos also has financial limitations...

To some extent yes.

It is not impossible to solve in absolute terms, in the sense that all the necessary pieces of information are present in the repo plus the problem statement.

But it is impossible to solve in the sense that, unless you read the ground truth, you are NOT able to solve it the way the test patch demands.

It is simply not plausible to me that a model can read the problem statement so precisely that it nails exactly, like 100%, what the test suite is trying to test.
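
If it helps, this is roughly how I understand that kind of harness to work (a minimal sketch; the file names and commands are illustrative, not the benchmark's actual code). The model's patch is applied, then the hidden test patch, and only those ground-truth tests decide success:

    # Sketch of the kind of check described above: the model never sees
    # tests.patch, yet passing exactly those tests is the success criterion.
    import subprocess

    def run(cmd, cwd):
        return subprocess.run(cmd, cwd=cwd, capture_output=True).returncode == 0

    def evaluate(repo_dir, model_patch, hidden_test_patch, test_cmd):
        ok = run(["git", "apply", model_patch], repo_dir)               # candidate fix
        ok = ok and run(["git", "apply", hidden_test_patch], repo_dir)  # ground-truth tests
        return ok and run(test_cmd, repo_dir)                           # must satisfy them exactly

    # evaluate("./repo", "model.patch", "tests.patch", ["pytest", "tests/"])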


What backfired?

Ant's recent rise has little to nothing to do with retail subscribers; it is Claude Code with Opus 4.5+, followed by their Mythos stunt.

I would say the flood of $20 Claude subscribers due to the news cycle backfired on them: now everyone is getting worse outputs, and it exposed their compute shortage, which they can't fix anytime soon.

Pretty much everyone I know has both CC and Codex now, just because of how unreliable CC has become.


> would say the flood of 20+ Claude Subscribers due to news cycle backfired

This is a good hypothesis. I suspect we are both correct.

The PR boost from Anthropic standing its ground drove signups. That, in turn, drove investors. But the users also drove utilization, which degraded quality across the board.

My hypothesis rests on Anthropic’s user mix having significantly shifted to consumers (versus enterprise) after the mix-up. Whenever we get public numbers it would be interesting to test that.


> What backfired?

I think it was psychological to a degree. For many consumers, OpenAI, or at least ChatGPT, was AI. The controversy was enough to introduce folks to competitors in the AI space, and suddenly OpenAI's success felt a lot less inevitable.

I agree with OP though that this won't actually be the cause of OpenAI's downfall, should it happen. But I still think it's an interesting inflection point.


> introduced to competitors in the AI space and suddenly OpenAI's success felt a lot less inevitable.

This is true. OpenAI WAS the story of AI; now it is just 50% of it, at most. Losing the monopoly on the imagination around AGI is bad for them.

One thing I don't agree with, though: consumers aren't the important part of AI, they are a liability.

AI is too expensive, consumers can't pay for it. Instead they will compete with enterprise for the same tokens, with less money.


> controversy was enough for folks to be introduced to competitors

This is my suspicion. Consumers hadn’t previously heard of Anthropic and Claude. Now they had, particularly in cities.

> this won't actually be the cause of OpenAI's downfall, should it happen. But I still think it's an interesting inflection point

Also agree. Hence why I said “I don’t think” the fight is “the ultimate cause.”


Anecdotally a whole lot more people around me started using Anthropic models in the last few weeks and seem to like them more than OpenAI. For many of these people it was the second provider they ever used.

Of course this is part of what has led to the insane demand and outages they've experienced since then.


I use both CC and Codex because one is not enough and 5x for $100 is too much.


>> followed by their Mythos stunt

"Stunt", eh?


Chips don't impact output quality to this magnitude.


True, but the quality of the power played a large part. Most likely nuclear power, for this high-quality token efficiency.


I am confused, how is this a test? So some users get Claude Code while others don’t, when they are both paying 20 dollars … ? Wat


It's a test on sign-ups, not on existing users, so "will they sign up without X feature for the same price?" Yes.
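
Concretely, I'd imagine something like deterministic bucketing at account creation; a toy sketch, where the experiment name and 50/50 split are made up:

    # New sign-ups get assigned to a bucket once; one bucket simply
    # doesn't get the feature at the same price.
    import hashlib

    def bucket(user_id: str, experiment: str = "feature-at-signup") -> str:
        h = int(hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest(), 16)
        return "without-feature" if h % 100 < 50 else "control"  # made-up 50/50 split

    print(bucket("new-user-123"))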


Trusted software will be so expensive that it will effectively kill infrastructure startups, unless they can prove they spent millions of dollars hardening their software.

I predict the software ecosystem will split in two: internal software behind a firewall will become ever cheaper, but anything external-facing will become exponentially more expensive due to hacking concerns.


Those hacking concerns are just as valid inside the firewall as outside it.


You can enforce physical isolation to make sure hacking isn't possible, at least not without some level of physical intrusion.

