What is the business model of open weight AI? I don't think there is any. At best it can serve as an advertisement for the more advanced models you sell.
The huge difference to open source is that you can't just train an LLM with free time and motivation. You need lots of data and a lot of compute.
I sure want to be wrong on that, I definitely like the open-weight version of the future more
Meta released Llama just when OpenAI was red hot and its valuation was going through the roof. Speculating, but Meta probably judged the model not competitive enough to keep as a secret weapon, yet good enough to commercially damage OpenAI, who had suddenly become a competitor for the most-valued-company crown?
In the same way you can imagine the Chinese government pushing the release of deepseek etc to make sure no one thinks the US has “won” and to keep everyone aware that a foreign model might leapfrog in the short term future etc.
At some point, though, if OpenAI/Anthropic/Google plateau or go bust, then the open-source sponsorship becomes less likely, as making it open source was a weapon, not a principle.
I disagree. I think deepseek, qwen, and kimi earn a lot of trust open sourcing their models. While still profiting.
Effectively they are saying: "Yeah, don't crowd our data centers with small queries; go ahead and send your frontier questions to our frontier models. Oh, and those US models? You can run something about as good for free from us if you want, hah." It's a power and marketing move. It's also insanely smart to keep it up in order to remain sustainable as a brand, especially given how small their investments into this are.
Look at Anthropic's growing pains. DeepSeek has other hosts spreading their brand for free while they grow. Brilliant, honestly. In my opinion it makes Anthropic and OpenAI look clueless on a lot of levels.
China is playing a different game here. To them this is commoditizing their complement and building goodwill. The Chinese economy doesn't teeter on the brink of collapse to deliver frontier-grade LLMs. Nope, Alibaba just made Qwen because it needs it. It needs efficient models. Similarly, China manufactures and automates so much more than the US ever could. LLMs to them are a topping, not the whole meal like they are in the US.
The Chinese labs don't have to make money or be profitable. They are funded by the state to achieve the state's goals, and the global praise of their open models just serves as Chinese soft power.
They're state companies, not some kind of ethical VC charity fund project.
If the US’s fascist experiment continues past the current president, we’ll absolutely be nationalizing frontier companies or exerting equivalent control.
Sigh. Obama and Biden were every bit as "fascist" as Trump.
I'm glad I get reminded that TDS is real, but everyone forgets that Bush, Obama, and Biden all did things with executive power that Congress ignored or provided little real oversight of. And Congress has proven over the last several decades that its oversight serves special interests rather than the goals of American voters.
But "it's all Trump's fault" is much more convenient.
Certainly Biden and Obama check off a few of the 14 points of fascism, but are we really being serious here? "TDS" is just a thought-terminating cliché.
Interesting article, but Qwen does seem to be closing off. They don't release big variants anymore, and I'm not sure that the fact the local-LLM community keeps praising it actually increases the number of people using their API.
It did work for Deepseek for sure and it seems to move the needle for Xiaomi's MiMo; but will it be enough for Qwen and Gemma? Those are the models you can actually run without going all-in on AI (but only with gaming GPUs and such).
The compute required to run these models is still far out of reach for the average consumer, and even for most enthusiasts, so they can still sell inference, whilst also earning consumer goodwill for providing open weights.
That's because the USA really has nothing big to export. Yay, designs.
China? I'm getting ready to watch the URKL (Universal Robot Knockout League) go on. The USA is dicking around with failed robot dogs.
The USA has been a failed country, coasting on massive inertia. The tech breakdown from an article I can't find showed the USA excelling in 8 of 64 areas; China was excelling in 56 of 64.
China is an advanced 2nd world country with pockets of first world.
Smart people in China design fast manufacturing lines for $25k/yr.
Smart people in the US design bond hedging strategies or ad-pixel trackers for $250k/yr.
China is in the stage the US was in 60 years ago, and eventually those high paying, high impact jobs will suck the intelligence out of all the "blue collar" work. Just like it did in the US.
I believe it. The US intentionally lacks accountability in order to prop up the already wealthy in almost all of its ventures, which socializes losses and privatizes gains. It's an economic model that guarantees deterioration and stagnation.
Politics aside, the power structures in US industry need serious revamping.
Open sourcing models is a marketing strategy. Chinese labs and small international labs have no awareness or distribution, so unless they become a hot topic for a while, nobody is going to bother trying out their models. Open source gets them that, and is essentially a tax on newcomers. When you start out you simply have no other option but to open source your models.
So, the business model of open models is the same as closed models: Sell inference. Open source is marketing for that inference.
China’s long term goal might just be to own the chip layer alongside everything else, and outproduce the US in data centers.
Frontier US labs could still have an advantage for a long time, but many use cases would start gravitating towards Chinese models if they 10x the data centers and provide similar quality inference for a third of the cost.
Wikipedia is cheap compared to creating and training models.
I don’t think donations will suffice at all.
As an example, we had millions of web developers download and install Firebug before browsers shipped their own dev tools. Donations over the course of multiple years would have paid my salary for a month if I were not a volunteer.
But from the “it’s fine” point of view, models will be baked into your OS.
Then later, models will be embedded into hardware. Likely only the OS makers' models.
> Wikipedia is cheap compared to creating and training models.
DeepSeek said it spent $5.6M [1] on training V3, which doesn't sound too much for a near-SOTA model.
An open source entity can come up with a hybrid business model, such as requiring a small fee from those who want to host the model as a business for the first n months following the release of a new model, but making it fully free for individuals.
Ultimately, information is a public good: it is non-excludable (you can’t stop people from using it) and non-rival (we can all use it at the same time). Public goods are often very useful, and because they are non-excludable and non-rival, they ultimately can’t have a market-based business model. I would class open-weights AI models as public goods, and would support government expenditure to produce them.
Training AI models is capital intensive, though. Unless there's some sort of mega-crowdfunding effort for open weight model training there needs to be a way to recoup that money on the other end. Either that or state sponsorship I guess
Perhaps you can create a compelling UX around it and sell it as a subscription. "Normies" will not be able or willing to build it themselves. You can then patch the model and ship new features around it as it evolves. For example, I have built an ambient todo list / health data extractor using Gemma 4 2EB and Whisper. Nothing to brag about, but it does a fairly decent job even in foreign languages.
This is what I do not understand as well, and advertising the knowledge and the more advanced models is also the only thing that comes to my mind.
For the past month I have been successfully using Gemma 4 locally on an MBP M2 for many search queries (Wikipedia-style questions), and it is really good, fast enough (30-40 t/s), and it feels nice that it keeps these queries private. But I don't understand why Google does this, and so I think "we" need to find a better solution where the entire pipeline is open and the compute is somehow crowdfunded. Because there will come a time when these local models get more closed, like Android is closing down. One restriction they might enforce in the future is crippling the models for "sensitive" topics like cybersecurity or health. Or the government could even feel the need to force them to do so.
Why would you want to support every user's simple queries in your AI data center if they could run them on their own computer?
It builds goodwill, too. It also shows research prowess.
For China it's different. They need to show Americans, who don't trust them at all because of propaganda, that they have no tricks up their sleeve. It also doesn't hurt when Chinese companies drop free models people can run at home that are about as good as Sonnet. Serious mic drop.
Very good point on using local AI to avoid data center costs.
Running AI models on local hardware was exploratory at first, and if it's so easy today, it's thanks to open source. It's a little bit coincidental that we have this today, and that mainstream hardware has this capability. The fact that a phone can run very small models is exploratory, or some kind of marketing opportunity at best.
Why would hardware companies ship cards with more AI capabilities (like more VRAM) in the foreseeable future? On what grounds will the marketing for on-device AI keep generating interest? For something this important, it's very uncertain. But above all, it should not depend on these brittle justifications.
Showing goodwill in distribution and research prowess is positive communication today, but it can be exactly the opposite if/when an attack using those small models reaches a high-value target.
For China the cultural difference is so huge, it's difficult to say. I would think they first and foremost need to show everyone inside and outside of China that they match American models. Second, I would say that where Americans prefer a few very powerful companies from the get-go, because those can rapidly leverage a lot of capital to industrialize, China will prefer leveraging a lot of smaller companies exploring a lot of things simultaneously (so doing a lot of research), THEN creating legislation to let only the best (or a few) survive. In the end it's the same result (monopoly or oligopoly), but China may have a stronger core (research) and America may have stronger productive capital, which may prove obsolete... In the long run, for either side it's a gamble, again.
They have already shown that their models match or exceed American ones in various cases. For cheaper, too.
I disagree on the second point. I think most Americans don't prefer less competition; that's a bit antithetical to the free market.
I doubt the Chinese government cares as much about controlling a few companies as you think they do.
China has a few things going for it beyond research. They are mission driven, they actually have needs for this technology, their needs will forward their entire economy as they are the world's largest manufacturers. They are also huge exporters and have buckets of customer support for various languages.
China also has considerably stronger infrastructure for electricity, etc. Even with an Nvidia embargo, they are doing more than just showing up.
I don't think it's a matter of who "wins". There is no winning. I think China stands to gain far more from LLMs than the US does, and they have proven they don't need the US to do it, even with the US trying to sabotage their every move into the space. The game is already more or less over in my mind.
If anything I see LLMs as having a huge market in China, and now the US can't even sell it to them.
All I care about is: if I have to use this technology, let me run it locally to avoid the surveillance-capitalism aspect. That seems to be the real reason the US has propped up its economy in anticipation of this technology. Yet long term it benefits neither the US nor me.
Time will tell. Depends on small model architecture trends and hardware availability. I wouldn't be surprised if something came slightly out of left field. Considering Taiwan is trapped into producing the same chips for the next 2 years, I wouldn't be surprised if a new player emerged.
That, and it's lucrative for Android/Chrome to have a text summarizer model embedded on your phone, probably for government contracts and data exfil, but we won't go there.
> What is the business model of open weight AI? I don't think there is any. At best it can serve as an advertisement for the more advanced models you sell.
I don't think local will necessarily be open-weight. And then it's not that different from personal computing: you're giving up the big lucrative corporate mainframe, thin-client model for "sell copies to a ton of individuals."
So it'd be someone else (an Apple, or the next-year equivalent of 1976 Apple) who'd start eating into that. There are a few on-device things today, but not for much heavy lifting. At first it's a toy, could maybe become more realized in a still-toy-like basis like a fully-local Alexa; in the future it grows until it eats 80-90% of the OpenAI/Anthropic use cases.
Incumbents would always rather you pay a subscription or per-use forever, but if the market looks big enough, someone will try to disrupt it.
Compute has gone back and forth from mainframe/thin client to fat client a few times already. LLMs will probably follow at some point but I think it's going to take a long time.
The cost to transmit text is basically zero, and it's near-instantaneous. The rent (i.e. a GPU in a data center) vs. buy calculus is going to favor rent until buy is a trivial expense, like the $50-100 range.
Even then, an LLM that just works is easier than dealing with your own setup.
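A quick back-of-the-envelope on that rent-vs-buy trade-off. All the numbers here are made-up placeholders, not real prices; the point is just how long the payback period is at subscription-like monthly costs:

```python
# Back-of-the-envelope rent-vs-buy for local LLM hardware.
# All figures are hypothetical round numbers for illustration.
subscription_per_month = 20.0   # hosted LLM plan
local_box_cost = 1000.0         # one-time hardware spend
electricity_per_month = 15.0    # rough running cost of a GPU box

# Months until the local box pays for itself versus the subscription:
breakeven = local_box_cost / (subscription_per_month - electricity_per_month)
print(round(breakeven, 1))  # 200.0 months, i.e. renting wins for years
```

With numbers like these the payback horizon is well past the useful life of the hardware, which is the comment's point: rent keeps winning until the up-front cost becomes trivial.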
Storage has moved back and forth, but I don't think compute has ever really gone back to thin client. Even Gmail, Google Docs, etc. are running a buttload of JavaScript on the user's device. Various attempts at avoiding that (remote .NET or JVM stuff on early "smart-ish" phones) crashed and burned.
Video game streaming is the closest thing, and it's never really taken off. (And this, IMO, is a good comparison because it's a pretty similar magnitude up-front-cost, $500-$4000.)
Once local AI is good enough (Sonnet level for a lot of basic tasks, say) for a $1k up-front investment, the appeal of having something that can chew on various tasks 24/7 without rate limits, API token budget concerns, etc., is going to unlock a lot of new approaches to problems. Essentially more fully-baked line-of-business OpenClaw-type things. Or the smart home automation bot of Siri's dreams. You can more easily make that all private and secure when all the compute is local: don't give it any outside network access. Push data into the sandbox periodically via boring old scripts-on-cronjobs, instead of giving any sort of "agentic" harness external access. Have extremely limited data structures for getting output/instructions back out. I'd never want to pass info about my personal finances into a third-party remote model; but I'd let a local one crunch numbers on it.
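The cron-fed sandbox pattern above can be sketched in a few lines. This is a toy stand-in, not a real harness: all paths and filenames are hypothetical, and the "model" step is simulated by a plain function. The shape is what matters: one inbox directory as the only way data enters, one structured report file as the only way results leave, no network anywhere:

```python
# Toy sketch of "cron pushes data in, one report file comes out".
import json
from pathlib import Path

EXPORTS = Path("exports")        # where other tools drop data (bank CSVs, etc.)
INBOX = Path("sandbox/inbox")    # the only way data enters the sandbox
OUTBOX = Path("sandbox/outbox")  # the only way results leave it
for d in (EXPORTS, INBOX, OUTBOX):
    d.mkdir(parents=True, exist_ok=True)

# Simulate an export landing, as a cron job would find it:
(EXPORTS / "transactions.csv").write_text("2024-01-01,groceries,-42.00\n")

def cron_push() -> None:
    """The cron side: copy new files into the sandbox, nothing more."""
    for f in EXPORTS.glob("*.csv"):
        (INBOX / f.name).write_text(f.read_text())

def sandboxed_summarize() -> None:
    """Stand-in for the local model harness: reads only the inbox and
    writes one structured result file; no network involved."""
    total = sum(
        float(line.rsplit(",", 1)[1])
        for f in INBOX.glob("*.csv")
        for line in f.read_text().splitlines() if line
    )
    (OUTBOX / "report.json").write_text(json.dumps({"total_spend": total}))

cron_push()
sandboxed_summarize()
print((OUTBOX / "report.json").read_text())  # {"total_spend": -42.0}
```

In a real setup `cron_push` would be a crontab entry and `sandboxed_summarize` would invoke the local model, but the containment property is the same: the harness only ever touches the inbox and the report file.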
Even if you need Opus/Mythos/whatever level for certain tasks, if 95% of everything else you'd pay Anthropic or OpenAI for can now be done on things you own w/o third party risk... what does that do to the investment appeal of building better AI appliances to sell end users vs building better centralized models?
I think "what if today's LLM performance, but running entirely under your control and your own hardware" opens up a LOT of interesting functionality. Crowdsource the whole world's creativity to figure out what to do with it, vs waiting for product managers and engineers at 3 individual companies to release features.
There was a time where people ran software on their computer with limited connectivity. Late 90s/early 2000s most of what you did was running locally on your machine. Your emails would be downloaded and there'd be a shared drive but otherwise all local.
Anyways, who's spending $1k on an LLM machine when they can spend $20 (or $0) on a subscription? And who's having an LLM crunch away 24/7 anyway? Anyone who is going to do something like that probably wants a cutting-edge model.
It'll (probably) get to a point where the hardware is cheap enough and advancement levels off. But we're a ways from that, and even then, when a data center is 20 ms away, why not offload heavy compute that's mostly text in, text out?
Except that buy is already a trivial expense, because the hardware has been bought. You've got a whole lot of iGPU and dGPU silicon currently sitting idle in consumer devices that could be working on local AI inference under the end user's control.
A training run costs somewhere in the neighborhood of a billion dollars. That’s a thousand millions.
How many crowdfunded projects do you know that have raised even one percent of that? Who’s going to be in charge of collecting that scale of money? Perhaps some sort of company formed for the benefit of humanity, which will promise to be a non-profit? Some sort of “Open” AI?
It’s well within the capabilities of governments in developed countries. If Mistral did not already exist, I would definitely expect the French government to invest in a national LLM, if only because of how defensive they are of the French language.
I can't say that you are lying, and you are not exactly exaggerating either. It is true that a new SOTA model, from literal scratch, would be expensive.
But, and it is not a small but: is the starting point really zero?
If a local model hits critical mass the business model is to use it to shape opinions in a way that is advantageous for the company/owners.
Much like the current Twitter model, being able to put your thumb on the scale of "truth". Bake a stronger bias towards their preferred narrative directly into the model. Could be as "benign" as training it to prefer Azure over AWS. Could be much worse.
"Government funding" these days would mean that Trump pays Elon Musk (or more likely vice versa) to make Grok 4.20 the only legal LLM for use by Americans.
Not every country is in a crypto-libertarian race to hoard power and wealth.
Meanwhile, in the EU, the model would be collectively financed, trained by a competent, neutral agency... and then completely lobotomized in the name of "the children," "safety," "IP rights," "correct speech," dozens of individual countries' legal and regulatory requirements, and any number of additional vocal, noncontributing NGOs.
So no one would get rich off of the public model, but no one would get much of anything else out of it, either.
As another reply suggests, there's a reason why things happen in the USA first. Even when they don't, the prime movers move here as soon as they can. Or at least they used to.
The business model is escaping the total lack of attention Qwen and Kimi would suffer if their models weren't downloadable. Before releasing the weights, basically zero attention was paid to them in the western hemisphere, for whatever reason. By releasing the weights, they became relevant in the western world. The business model is to get people in the West, who otherwise would never have heard of them, to pay to use their platform hosting their AI. As you said, advertising/marketing, essentially.
Baidu have a lot of services I've never heard of, that are highly successful in China. The lack of interest in expanding into Western audiences doesn't seem to matter there - what's different about inference?
Yeah, that's my issue with LLM code. If we imagine a future without human programmers, sure, go ahead; we are not there yet, but maybe it's possible.
But if you want it to coexist with humans, then it doesn't seem to work well. It gets in the way of human learning and human communication, making professionals and teams weaker, essentially.
But what is the purpose? When you rewrite a project in another language, it's so that engineers can maintain and further develop the project better on some metrics, thanks to the advantages of that language. That doesn't hold when an LLM does the rewrite, since there is no one who understands the code afterwards.
It's a good demonstration of capabilities, sure, but the result itself makes no sense. We'll have to figure out where these capabilities can bring real advantage
> When you rewrite a project in another language, it's for engineers to be able to maintain and further develop the project better
I don't think that is the case here. Bun is pretty much using AI to write all of its code, with a human reviewing it. Zig exists as a language to provide a nice DX over C and Rust, not to be memory safe. If you are using an LLM to generate code, the DX benefits are removed, so then why would you ever choose Zig over Rust?
I agree that the comment is insightful, but I don’t think AI companies are particularly promoting rewrites, other than that it’s a task LLMs are good at as “the code is the spec”.
The industry as a whole is still realizing that any LLM usage that writes all the code for you causes cognitive debt, and that we're even slowly losing our skills in the art.
I’m trying my best to navigate this myself, but no matter what we do, using LLMs is both a blessing and a curse.
Because no one has written it. You can't ask the guy who wrote it, not because he has left, but because he does not exist. Also, it often reads weirdly.
These dimwits (and I don't just mean those in the EU) seriously want to stop adolescents from watching porn, and are ready to mess with internet infrastructure for that. That's a depressing manifestation of an aging society.
> seriously want to stop adolescents from watching porn
No, they want to pretend this is the issue, so that pervasive monitoring, permission gating, and/or deanonymization are normalized. It serves the state apparatus rather than providing any actual protection.
If it is possible to "pretend that they want something reasonable", it means that there is something reasonable somewhere.
Maybe some want more control, but most certainly not everybody.
> so that pervasive monitoring
If you haven't gotten the memo, pervasive monitoring already exists. To sell ads.
> or permission and/or deanonymization is normalized
For age verification, it's possible to do it in a privacy-preserving manner. Right now people spend their time complaining about the idea and claiming that all who disagree are extremists, which doesn't help. We could instead push for privacy-preserving age verification.
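A toy sketch of the attribute-only idea in Python. This is NOT a real protocol: it uses a shared HMAC key, so the verifier could forge tokens, whereas real deployments (Privacy Pass, zero-knowledge credentials) use blinded or ZK constructions so neither the issuer nor the site can link or forge. All names are made up; it only illustrates that a site can learn "over 18" without ever seeing an identity:

```python
# Toy attribute-only age attestation. The issuer checks a user's ID once,
# then hands back a token carrying only the attribute: no name, no
# birthdate, no user ID. Simplified: issuer and verifier share one key.
import hmac, hashlib, json, secrets

ISSUER_KEY = secrets.token_bytes(32)  # held by the (hypothetical) ID issuer

def issue_token(user_is_over_18: bool) -> dict:
    """Issuer side: emit a token attesting only the 'over18' attribute."""
    assert user_is_over_18
    claim = {"attr": "over18", "nonce": secrets.token_hex(16)}
    msg = json.dumps(claim, sort_keys=True).encode()
    claim["mac"] = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
    return claim

def site_verifies(token: dict) -> bool:
    """Site side: learns exactly one bit, that the issuer vouched 'over18'."""
    mac = token.pop("mac")
    msg = json.dumps(token, sort_keys=True).encode()
    return hmac.compare_digest(
        mac, hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest())

token = issue_token(True)
print(site_verifies(token))  # True, and the site never saw an identity
```

The shape to notice: the token contains an attribute and a random nonce, nothing identifying, and tampering with the attribute breaks the MAC. The hard part real schemes solve on top of this is unlinkability from the issuer, which this symmetric toy does not provide.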
Are you calling me a corporate apologist? For one, corporations want less regulation.
"Being a hacker" does not mean "being stuck in the 80s", IMO. If TooBigTech cryptographically controls everything, it becomes harder to hack. Are you aware that the biggest restriction against jailbreaking stuff is that it was made super illegal... because it helps corporations?
You open with "corporations want less regulation", then give an example of corporations using the law to protect their interests around jailbreaking? Just like they did with copyright/IP rules and the million rules around cars.
You should ask the DIY diabetes community what they think of FDA regulations preventing modifications of medical devices.
Being reductive about this stuff is not a helpful framing.
Is that how it went? My feeling is that more and more people are realising that social media is terrible for children (well, for everybody, but it's children we want to protect). Therefore more governments are looking into preventing children from accessing social media. Therefore social media companies are lobbying for whatever is better for them.
For Meta, it's better to have age verification than to downright ban social media.
In my experience they DO want more regulation: regulation that helps them. The automobile industry lobbied for years to make all kinds of things mandatory. They invented lots of standards and pseudo-standards. In the end they shot themselves in the foot.
What I personally don't like about privacy-preserving age verification is that a single subsequent law change could criminalize individuals for "improperly" doing age verification.
It'd be so easy to do, and it would immediately render obsolete any measures taken digitally to preserve privacy.
Believe me, in some EU countries (like my country, Poland) people are very sensitive to this kind of bullshit.
The last two times they tried to push other censorship/tracking laws (claiming, as always, "we have to, the EU is making us"), there were mass protests in every city and town.
In my own town of 5k people there were several hundred protesters (500 at least, probably more). And the previous government backed down.
This topic seems to come back every time certain countries (Denmark, etc.) hold the rotating EU presidency. Our current PM is certainly in the same EU clique that wants to push this, but it's an extremely unpopular position and he is already leading a weak minority coalition government. It wouldn't take much to topple him, so he will not do anything like that (unless he is convinced people are distracted by some crisis; but that is where normal people come in, to keep watching what is being smuggled in).
I wonder why voters in the countries that propose these laws tend to allow this to happen again and again.
It's because it's not about the opinion of voters, but about existing political powers that want to retain their power.
No matter what you (the population) say, it will get implemented. If you resist, they will put sanctions on Poland, withdraw financial partnerships, etc. Like with migrants: they are going to be sent there even if Polish people vote against it.
No matter if you are in favor or against, raising the topic will just make you socially be isolated or even legally punished in some places.
Sad for democracy and free speech.
EDIT: clarified about migrant policy and the decision of countries to choose or not
I think you are confusing democracy with anarchy. There is tons of free speech in Europe, and it is the most democratic of the available alternatives. The UK is still a very democratic country. The generation of politicians from the zero-rate era got carried away on certain topics (defence, immigration) and now we have to deal with that. They also undermined public trust by acting the way they did during COVID, to the extent that any new pandemic is likely to be far worse simply because nobody will trust the government again for a long time.
But there is a problem with kids today having pretty easy access to all kinds of nasty shit on the internet, and it isn't like it was in the '90s and '00s. 10-year-olds on social media is fucked. I don't really care for blocking the internet on anything, yet it does appear that simple age verification is necessary for access to certain services.
Most people are just super mad that they won't be able to watch porn or their favourite show on Netflix or whatever. But that's not what the article was about. Read a little bit. Comparing the EU to fucking Russia is insane here.
My personal opinion is that it's all or nothing for immigration: you either reduce it entirely, for all groups, or it's racism and unfair treatment. Everyone deserves an equal chance.
There's something in-between that could work: allow everyone, kick criminals permanently, and give a path to permanent residence to others, or something like that.
>you either reduce entirely for all groups or it’s racism and unfair treatment.
Racism would be restricting immigration based on skin color. What people want, and what a lot of countries have, is immigration based on a merit system: education, skill, income, criminal record. There's nothing racist about that.
I just made it softer (though I never mentioned race, only economic migration). Still, it can be too sensitive, so let it be; just my opinion. (I am in Eastern Europe, and there the views are rather harsh compared to Germany, France, or Sweden.)
You haven't said anything wrong about immigration. The people who misunderstood you are racist themselves, as they have dismissed the cultural issues of your country and tried to impose their own views. So don't listen to them; they are just ignorant at best.
Poland has a fresh memory of communism and censorship. Sadly, the alternative to your current PM is PiS, who banned abortion: basically Christian conservatives with some fascist ideas, not very different from the ones currently running the US.
To the EU regulators: we don't need another Stasi, we already have Google and Meta to worry about, thank you.
Also, to regulate in my native language is just a nice way to say the f word, if this conversation is about porn.
It's not really about kids looking at porn, it's about tracking everyone else and making it easier for state surveillance and corporations to identify people.
Kids don't have money and hardly ever manage to commit crimes without getting caught, so they're profoundly uninteresting to surveil in this way. But adults are interesting, and here the interests of the state and corporations converge, so they'll make a push for tyranny.
But how do you make people accept it? Accuse them of wanting to expose kids to gruesome tentacle porn unless they support this. Few adults are willing to admit they even look at porn, let alone argue that it is an important activity that needs to be protected, which it is.
If you think that there is a need for new technology to identify people, I suggest you wake up and start getting informed about surveillance capitalism.
There is absolutely no need for new technology to track people, it's there already.
Also I feel like a big reason for age verification is social media. Many countries are trying to prevent kids from accessing social media (because we know it's bad for them), and age verification is the way to do that.
Badly implemented, age verification is bad. But there are ways to implement it in a privacy-preserving manner, which wouldn't make the current state of surveillance capitalism worse.
People who are actually interesting are often aware of that fact and already avoid surveillance. You can use Tor/I2P, proper VPN setups, VMs, alternative mobile ROMs, and other tech, and cut most of the fingerprints, trackers, and identification. Pretty sure the trash from state agencies doesn't like that.
But the current push from all sides to provide ID for everything, plus remote attestation through Google and Apple, will make the alternatives very hard to use, as it basically cuts such people off from the economy altogether.
Need is a very strong word; I'd call it a desire. Currently you can often identify people, sure, but there's hassle involved. What they want is to plug in a private corporation, separate from whatever service, that is likely to be more loyal to the state apparatus than the service itself, or else it can easily be switched out for another.
And corporations are having issues discerning bots from people without making access to their services a fuss, or dependent on possibly idealistic and troublesome open-source projects like Anubis.
It's truly, absolutely not about "age verification". If it were about protecting kids from harm, they'd take money post factum from the corporations that are offending. Instead they're preparing to spend a lot of money. You could also look at who is heavily lobbying for this: you'll find it is fascist tech oligarchs from the US. They couldn't care less about kids, except for obscene or profitable purposes. It would be weird for them to be cosy with Epsteinian networks of power and at the same time be mindful of the wellbeing of children.
> Currently you can often identify people, sure, but there's hassle involved
You vastly underestimate the current state of surveillance capitalism.
> You could also look at who is heavily lobbying for this, you'll find it is fascist tech oligarchs from the US.
Go out in the street and ask a bunch of random people: "If there was a way to prevent kids from accessing stuff that is bad for them, and it had no downsides, would you want it?" I'm absolutely certain that not only fascists would say yes.
No one is doing it that way though. Also to be truly privacy-preserving you cannot rely on anything that requires any specific OS (especially Android or iOS) as every single OS requires some compromises to privacy.
The only privacy-preserving (effective) age verification is asking users if they are over 18 and requiring that they answer truthfully under penalty of perjury. Then prosecute the kids who claim they are over 18. For one reason or another, no one seems to be pushing for that option.
Well it exists in Privacy Pass, which is deployed in production. And there are countries that are currently actively looking into privacy-preserving age verification. I don't think that "I keep saying that age verification fundamentally leaks your ID, which is wrong, but it's still valid because nobody will notice" is a good argument.
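To make that claim concrete: the core trick behind Privacy Pass-style tokens is that a credential can be issued *blind*, so the issuer cannot link issuance to redemption. Privacy Pass itself is built on VOPRFs; the classic RSA blind-signature construction below is only a toy sketch of the same unlinkability property, with tiny illustrative key sizes and made-up parameters, not anything resembling a real deployment:

```python
import hashlib
import secrets
from math import gcd

# --- Issuer: tiny RSA keypair (illustrative sizes; real keys are 2048+ bits) ---
p, q = 1_000_003, 1_000_033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))      # private signing exponent

def to_group(msg: bytes) -> int:
    """Hash a token preimage into Z_n."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# --- Client: pick a random token and blind it, so the issuer never sees it ---
token = secrets.token_bytes(16)
m = to_group(token)
r = secrets.randbelow(n - 2) + 2
while gcd(r, n) != 1:                   # blinding factor must be invertible mod n
    r = secrets.randbelow(n - 2) + 2
blinded = (m * pow(r, e, n)) % n

# --- Issuer: signs the blinded value (after checking your age once),
#     learning nothing about the token it is authorizing ---
blind_sig = pow(blinded, d, n)          # (m * r^e)^d = m^d * r  (mod n)

# --- Client: unblinds; (token, sig) is now an unlinkable credential ---
sig = (blind_sig * pow(r, -1, n)) % n   # m^d  (mod n)

# --- Any verifier: checks the signature offline, without the issuer ---
assert pow(sig, e, n) == to_group(token)
print("token verifies; issuer cannot link it to the issuance session")
```

In an age-verification setting the issuer would check your ID once at issuance time; the site you later visit only ever sees the (token, signature) pair, which the issuer cannot tie back to the issuance session because it only ever saw the blinded value.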
> The only privacy-preserving (effective) age verification is asking user if they are over 18 and requiring that they answer truthfully under penalty of perjury.
I disagree, I think that there could be a sane debate around ZK age verification, if we could elevate it to that.
There are a lot of things that most people in the street want that aren't even on the road map to happening, so you have to ask yourself why this thing (which is hardly anyone's top motivating issue) is gaining traction.
> this thing (which is hardly anyone's top motivating issue)
Do you have kids growing up with social media?
My experience is that parents with kids growing up with social media generally care about whether or not social media are bad for their kids. And generally, parents try to give kids a smartphone and access to social media as late as possible, generally when "everybody else has it" and it feels like it becomes counter-productive to make an outlier out of your kid.
I wouldn't say nobody cares. If anything, I think most parents would care a lot more about limiting access to social media than about privacy. It's pretty obvious that nobody gives a shit about privacy.
My argument is: it is possible to do privacy-preserving age verification, and that technology is already deployed (look at Privacy Pass). We should acknowledge that and stop claiming that the age verification issue is the same as the E2EE one, because it is NOT.
And then we could maybe have a constructive debate about whether or not we as a society want that technology. That would be more interesting than "if I keep yelling that it's fundamentally stupid, maybe people on the other side will start believing it".
You're still just stating your opinions. What do you mean by "it is possible"? Are you sure there are states that are privacy respecting enough to actually be able to, or have all the relevant states broken down walls between government agencies that would shield citizens from secret surveillance and registers?
The rise of authoritarianism? Inequality? The revival of geopolitical "realism"? Declining empathy and holistic thinking? The increasing willingness of the general population to engage in political adventurism? Accelerating resource consumption (and dwindling resource stocks)?
And if you consider none of those "real" problems, I know some people seem to have forgotten about it, but what about climate change? Given the atmospheric lifetimes of CO2 and methane, that's a problem as "real" as they get.
There are real problems and they are huge but the solutions to them are very unpopular. So that's why political parties resort to this kind of distraction politics. Blaming immigrants, LGBT people etc. Or simply causing other problems by bombing random countries for no reason.
Because nobody wants to limit the big corporations polluting the world and exploiting the population, or to tackle climate change with more than a few hollow measures. People will be annoyed when it affects them, and that means political suicide.
So they manufacture other problems, ones they can control and point the blame to groups that have little voting power.
Adolescents, or kids? Would you say it's completely stupid to want to stop kids from watching porn, or accessing social media?
Did you grow up with free streaming platforms? Pretty sure many adolescents were accessing porn before those, though it was slightly less accessible.
I personally don't have a definitive opinion about porn (I feel like young kids obviously shouldn't have access to it, but it shouldn't be illegal to adults, but I don't know where the limit should be), but I feel like making it harder for kids to access social media makes sense.
I dunno, you have experience being a kid, right? Young kids are just not interested enough to look for porn, let alone to figure out how to use a VPN to access it. Lax restrictions like we have today are enough to stop porn from being forced on children who are not interested in it.
> "There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies." — Tony Hoare
It went down on a poor earnings call. The layoffs were probably an attempt to soften the blow. Hard to tell what the effect was, because the two happened simultaneously.
This is the simplest and almost certainly correct answer.
I’ve seen this at a number of public companies, and it's a reason I hate working for them. These decisions are always unbelievably short-sighted and ruin companies in the long term.
Depends what it's used for. Generally I've seen that, due to the paucity of C or Rust etc. training data vs JavaScript and TypeScript, LLMs aren't as good at the former as the latter.