> VR will be huge some day. Maybe not as huge as the Metaverse hype, but huge nonetheless.
I really doubt this. There are too many people who suffer from motion sickness for this to pay off. 33% of the population suffers from motion sickness to varying degrees, and the current mitigations, which include blowing a fan at suffering users, are an unrealistic barrier to casual usage.
I love the Quest and was just using it about an hour ago.
Even beyond motion sickness, it is not the same experience as it was when I first got the Quest.
There is a habituation that happens; the entire experience starts to feel far less immersive. I have used the Quest so much I don't really feel the immersion anymore at all.
I had just found YouTube 360 videos of the Sphinx and Great Pyramid last night. I wish I had watched them a year ago, as they would have been so mind-blowing. It is still fun, but it is nothing like what it used to be. I don't feel like I "go" to the places anymore.
It reminds me quite a bit of the way marijuana was such a different experience the first few times vs the 500th time.
So even if you don't get sick, the magic wears off in about a month and people stop bothering. This pattern is remarkably consistent: people get bored after a month. I can say from experience that this has nothing to do with a lack of content and everything to do with the way the brain adjusts.
I think the key is that about half of that 33% can tolerate certain elements of it (stationary experiences, etc.), another slice suffers in a way that will be resolved, or at least somewhat mitigated, by technology improvements, and yet another slice will accommodate to it if exposed early enough.
Put it all together and you are probably talking more like a residual 10% of people. That is still a lot, but I think it's small enough not to be a death blow to mainstream use.
I have the Valve Index and had to buy prescription lenses to put inside to allow me to play without my glasses.
The first company to offer lenses that auto-adjust to my eyesight will get my money. When I can use it with my current eyesight, without having to buy accessories, I'll root for VR.
I think given the current mood of things, it would be prudent to not make such strong assertions on anything. Trust is in increasingly short supply these days.
"I didn’t expect it to work this quickly and I also didn’t expect the performance to be as competitive."
These are two assertions. There could have been a prior secret rewrite that took much longer than six days and this is a marketing stunt for Anthropic. In case people still don't get it, Jarred works for Anthropic and Bun belongs to Anthropic.
Those are not assertions of anything meaningful. We have no idea what his expectations were. Maybe he expected it to be absolute crap, and it was only kind of crap. None of it means that it's actually viable. My fat uncle trying to beat Bolt's time could exceed my expectations by improving from 30s to 20s, doesn't mean it's ever going to be a reality.
> In case people still don't get it, Jarred works for Anthropic and Bun belongs to Anthropic.
In case people still don't get it, Jarred works for Anthropic and Bun belongs to Anthropic. This means that people who have an axe to grind against Anthropic (admittedly a reasonable position) will take the most antagonistic position they possibly can because of personal bias.
I disagree. This is the same sort of marketing strategy as Mythos: "Wow, it outperformed so much that we'll have to tell you about it in the future." If he weren't financially aligned with the outcome I'd agree, but he is.
So do you picture them locking up the Rust port behind closed doors as well, or what's the game gonna be? Cause it reads like it's kinda all public already.
Absolutely not; I think they prioritize it because it's internal. I do expect to see a stronger marketing push on its ability to do language translations, because there is honestly value in that. The question is when they'll have the compute, but it's less crisis-driven marketing than their security stuff, so I'd see it at a lower priority. I just don't think it's as honest as the parent post posits.
The Mythos-truther community is absolutely batshit, sorry. You wrote fanfic and now you're writing more fanfic. The company is faking for marketing so therefore they're faking for marketing. The only things in common between the two situations are you and the word Anthropic, the rest of us are just confused and worried. I'm worried, that's why I'm speaking to you plainly.
Doesn’t this sum up most of the AI “innovations” we’ve seen shoveled in this bubble?
We constantly see AI thought leaders backpedaling on promises and just spouting general nonsense. Altman originally talked effusively about an era of "abundance". An abundance of what? It's a word salad of feel-good vibes without any substance.
Sam Altman has gone from claiming AI might cure cancer to shoveling ads, and the scope of AI seems to have been reduced to mostly flawed, imperfect, but mildly useful coding/automation agents that are likely subsidized beyond economic viability. But you can't point that out, because it's the future!
I think the vagueness of statements like this is why a lot of people (myself included) are just so very skeptical. Surely some company wants to brag about their use. I don’t doubt it’s found its way into certain spaces, but by and large a lot of the “big” claims have been demonstrated to be borderline fraudulent. That Brad Pitt/Tom Cruise AI fight is fake. It is misleading. Taking existing green screen choreography and using AI to impose Brad Pitt and Tom Cruise’s faces is not what it is being sold as. Darren Aronofsky’s AI works are not good either. They can’t seem to hold a shot for more than a few seconds, why is that?
If the argument is that AI is being used in the background or for some VFX, sure, I’ll buy that. It’s just another tool, then. If it is being used to generate entire scenes, there’s no evidence of this, unless something like that atrocious holiday Coca-Cola commercial is a herald of our future.
As written, your claim is just handwavy. I get that you might not be able to cite anything concrete due to NDAs or whatnot, but you also have to understand why a lot of people find this kinda unpersuasive.
I can respond directly to this, I’m a former VFX industry person and still fairly well connected.
The former you suggested: background plates and the like. The lack of actual creative-direction tools, the trite visual style, the lack of consistency/repeatability, and the complete inability to be edited or adjusted easily make it a non-starter for most tasks. Compositors are fast; LLMs are slow at that scale. There are tools like ComfyUI that sit in the "we're running experiments / useful sometimes" category.
Loads of ML tools are in use and incredibly handy, but they fit into that tool category; actual wholesale video/image generation is not that prevalent, no.
It worked for bread. Why wouldn't it work for AI? I've been baking allen wrenches and screws into my sliced bread for ages, and no single living person has complained about it.
> There's simply no way to price video any other way than by usage. I suspect the same will come for everything.
I don't think there's any way for all of the current AI models to work except as a usage model. The question is whether or not people are willing to pay for it that way in the long-term.
It sounds like it is producing positive ROI for your side, but I’m curious what the bean counters at the studios think of the bill when the budgets tighten.
I think we should at least ask the latter: if it turned out it cost $100,000 to generate this solution, I would question the value of it. Erdős problems are usually pure-math curiosities, AFAIK; they often have no meaningful practical applications.
Also, it's one thing if the AI age means we all have to adapt to using AI as a tool, another thing entirely if it means the only people who can do useful research are the ones with huge budgets.
Getting off-topic, but as a successful high-school dropout I am compelled to remind anyone reading this that [the American] college [system] is a scam.
That's not to say that there aren't benefits to tertiary education, for many people in different contexts. It's just not the golden path that it's made out to be.
Many people currently in college are just wasting their money and should enroll in trades programs instead.
Meanwhile, nothing about being in or out of school is mutually exclusive to using LLMs as a force multiplier for learning - or solving math problems, apparently.
These are absolutely worth studying, but being what they are, nobody should be dumping massive amounts of money on them. I would not find it persuasive if researchers used LLMs to solve the Collatz conjecture or finally decode Etruscan. Those results would be extremely valuable, but it is unlikely to be worth having an LLM just grind tokens like crazy to get them.
If solving even the biggest problems in pure maths is not worth it to you, then I guess we should stop all pure maths research. Researchers are paid much more than the potential token spend, frequently for decades, and they frequently work on much less important and easier problems.
Maybe... but I would love it if 1% of the investment in AI were redirected to the mathematics education and professional research that would allow progress on any of these problems...
No meaningful, practical applications? You realize that sounds incredibly naive in the history of mathematics, right? People thought this way about number theory in general, and many other things that turned out to have quite important practical applications. Your statement is also a bit odd in that researchers are already paid throughout their whole careers to solve such problems. I don't know.
> You realize that sounds incredibly naive in the history of mathematics, right?
This is after-the-fact justification. You are arguing that because a thing (number theory) turned out to have practical applications, we should have dumped a lot more effort into it. There is no basis for this argument whatsoever; it also seems to require inventing a time machine. Number theory had no practical applications until the development of public-key cryptography, and you cannot make funding decisions based on the future, since it's unknowable.
Once we get something working, sure, you can justify more aggressive investment. This is not to say that we should not invest in pie-in-the-sky ideas. We absolutely should and need to. Moonshot research or even somewhat esoteric research is vital, but the current investment in AI is so far out of the ballpark of rational. There’s an energy of a fait accompli here, except it’s still very plausible this is all unsustainable and the market implodes instead.
> Number theory had no practical applications until the development of public-key cryptography, but you cannot make funding decisions based on the future since it’s unknowable.
You are completely missing the point. The point is that we should invest in pure maths because it has always been an investment with very good ROI. The funding should be focused on what experts believe will advance pure maths more (not whether we believe that in 100 years this specific area will find some application) and that's pretty much what we are doing right now. I think it's just your anti-AI sentiment that's clouding your judgement and since AI succeeded in proving pure maths results, you are inclined to downplay it by saying that well, pure maths is worthless anyway.
>> Number theory had no practical applications until the development of public-key cryptography
This is so wrong I don't even know where to begin. Modular arithmetic, numerical integration, pseudorandom number generation, error-correcting codes, predicting planetary orbits (!), etc.
At the risk of stating the obvious, the claim was a bit facetious: number theory had the reputation of having very few practical applications, and I don't mean that silly quote by G. H. Hardy.
A lot of applications just required a lot more computing power to be practical. This all starts to happen around the same time (unsurprisingly), and if you're going to make hay of the fact that Reed-Solomon coding was invented in 1960, I think it's worth pointing out that its first big use was on Voyager, because computing power was finally able to make these schemes work. It's not like people hadn't started to notice some of this decades earlier.
Humans, and very often the machines we create, solve problems additively, meaning we build on top of existing foundations. We can get stuck in a way of thinking as a result, because people are loath to reinvent the wheel. So I don't think it's surprising that a naïve LLM, because of the way it's trained, came up with something that many experts in the field didn't try.
I think LLMs can help in limited cases like this by just coming up with a different way of approaching a problem. It doesn’t have to be right, it just needs to give someone an alternative and maybe that will shake things up to get a solution.
That said, I have no idea what the practical value of this Erdős problem is. If you asked me whether this demonstrates that LLMs are not junk, my general impression is that it's like asking me in 1928 whether we should spend millions of dollars of research money on number theory. The answer is no, and get out of my office.
Probably worth noting that the new-ish Mozilla CEO, Anthony Enzor-DeMeo, is clearly an AI booster having talked about wanting to make Firefox into a “modern AI browser”. So, I don’t doubt that Anthropic and Mozilla saw an opportunity to make a good bit of copy.
I think this has been pushed too hard, and along with general exhaustion at people insisting that AI is eating everything and the moon, these claims are getting kind of farcical.
Are LLMs useful for finding bugs? Maybe. Reading the system card, I guess if you run the source code through the model 10,000 times, some useful stuff falls out. Is this worth it? I have no idea anymore.
Hacker News has also been completely co-opted by boosters.
So much so that I don't really visit anymore, after 15 years of use.
It's a bizarre situation with billions in marketing and PR, astroturfing and torrents of fake news with streams of comments beneath them with zero skepticism and an almost horrifying worship of these billion dollar companies.
Something completely flipped here at some point. I don't know if it's because YC is also heavily pro these companies and embedded with them, requiring YC applicants to slop-code their way in, then cheering about it.
Either way, it's incredibly sad and reminds me of the worst of the casino economy: NFTs, crypto, Web3. There's actually an interesting core here, a regex on steroids with planning aspects, but it's constantly oversold.
I say that as a daily user of Claude Max for over a year.
I haven't been able to find any communities with as high a signal-to-noise ratio and breadth of experiences as HN, especially not public ones that you can stumble your way into without knowing a guy or joining a clique.
> communities with as high of a signal-to-noise ratio and breadth of experiences as HN, especially not public ones that one can stumble their way into without knowing a guy / joining a clique
If this is such a low bar, then how come there's only HN? Can you name another? 10? 100? Because I can't.
I think the fact that AI is finally at a point where it seems to be more useful than annoying makes it easy to be overly optimistic. I've only been using Claude for a few months (I did try 20x, but fell back to 5x), and it's genuinely been a productivity multiplier. That said, the way I work with it is very different from coding on my own... I spend way more time planning, a lot more documentation and testing is part of the output, and even then I still find a lot of issues.
I'm also mournful for those just starting out, who may lean so much on these tools that they never develop the true proficiency to spot issues with fitness and quality. I see people running half a dozen or more agents and know there is no way they're doing any kind of meaningful QA/QC on that output.
I've noticed a lot of astroturfing lately. It really bothers me, because it was kind of my last bastion of sanity for online tech discourse. Every forum I've used is now full of marketing and dishonesty by bots, paid shills, and bad actors.