cmiles8 6 hours ago [-]
Companies are getting desperate to show AI adoption, as right now the numbers just don't add up.
Not surprisingly, companies are willing to get into bed with more and more questionable use cases if it helps show some desperately needed AI adoption revenue.
aurareturn 6 hours ago [-]
> Companies are getting desperate to show AI adoption, as right now the numbers just don't add up.
All compute companies say they don't have enough compute to meet demands. Why do you think there isn't enough AI adoption to justify the investment?
cmiles8 6 hours ago [-]
“Demand” is mostly their training of models, which they’ve yet to demonstrate is a profitable business.
Just because you're struggling to get raw materials for your business doesn't make it a good business. Without strong enterprise adoption ASAP (which is what's seriously suffering), things are going to hit the fan real quick.
couchdb_ouchdb 2 hours ago [-]
With respect, I don't think you've used the latest models or seen Anthropic's hockey-stick-like enterprise revenue numbers. They are so busy outfitting the Fortune 500 that you can't even get someone in sales to respond to emails. I've been waiting for months, and so have others.
lancebeet 6 hours ago [-]
This will sound snarky, so forgive me, but I honestly don't know the answer. Is this actually true? Is there a reliable source containing statistics on LLM compute usage that includes training vs inference for the whole market?
seanmcdirmid 2 hours ago [-]
I don’t understand why people don’t just use Gemini or some other AI web search to get an answer to these kinds of questions quickly (I excluded the sources, you can get them if you ask the same question).
> While AI training is often the most intense and expensive process for a single model, the majority of total AI compute usage (approximately 90%) is used for inference.
> Here is the breakdown of why this is the case:
> Inference as High-Volume Activity: Inference occurs every time a user interacts with an AI model (e.g., asking ChatGPT a question, using image recognition, or generating code). While a model is trained once (or updated infrequently), it runs millions or billions of inferences continuously.
> Cost Scaling: Training is a massive, one-time upfront cost, while inference is an ongoing, daily operational cost. As the number of AI users grows, the demand for inference compute scales faster than the need for training new, large models.
> The Shift to Efficiency: While early AI hype focused on the immense compute needed for training, the industry has shifted toward making inference cheaper and faster through specialized hardware and techniques like optimization, quantization, and small language models (SLMs).
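For intuition, here's a toy back-of-envelope sketch in Python. Every number below is an illustrative assumption of mine, not a figure for any real model, but it shows how a one-time training run gets dwarfed by ongoing inference:

    # Toy back-of-envelope: when does cumulative inference compute
    # overtake a one-time training run? All inputs are assumptions.
    TRAIN_FLOPS = 1e25        # assumed one-time training cost (FLOPs)
    FLOPS_PER_QUERY = 1e14    # assumed compute per request (~hundreds of tokens on a large model)
    QUERIES_PER_DAY = 1e9     # assumed daily request volume at consumer scale

    daily_inference_flops = FLOPS_PER_QUERY * QUERIES_PER_DAY
    days_to_match = TRAIN_FLOPS / daily_inference_flops
    print(f"Inference matches the training run after ~{days_to_match:.0f} days")

With those assumed figures, cumulative inference passes the training run in about 100 days, and everything after that only widens the gap, which is at least directionally consistent with the ~90% figure above.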
concats 6 hours ago [-]
The revenue numbers are public for the major AI companies. That's probably the best estimate for "inference for the whole market" we have, since most of that inference is billed in either API usage or subscriptions, and it won't include any in-house usage such as training.
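As a rough illustration of that method, here's a toy revenue-to-tokens conversion. Both inputs are placeholder assumptions, not published figures for any company:

    # Hypothetical conversion: public API revenue -> implied inference volume.
    # Placeholder inputs only; substitute real published numbers as available.
    annual_api_revenue_usd = 4e9     # assumed annual API revenue
    blended_price_per_mtok = 10.0    # assumed blended price, $ per 1M tokens

    tokens_per_year = annual_api_revenue_usd / blended_price_per_mtok * 1e6
    print(f"Implied inference volume: {tokens_per_year:.1e} tokens/year")

It's crude (subscriptions don't map cleanly to tokens), but it puts a floor under market-wide inference in a way that training spend never shows up in.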
seanmcdirmid 2 hours ago [-]
Most of the compute is actually used for inference (90% if Gemini is to be trusted).
aurareturn 5 hours ago [-]
Do you have a source?
duskdozer 5 hours ago [-]
"enough compute" will be when there is no more hardware for use outside of their walled garden, at which point they can control what they want
JKCalhoun 5 hours ago [-]
"Not surprisingly companies are willing to get into bed with more and more questionable use cases…"
But not all companies, as we have seen over the last week or so.
Regardless, every company doing so will have to balance the ethics of its choices against the public's perception of it, since all of us are free to make choices that align with our own personal ethics.
(In short, they don't get to hide behind "everyone else is doing it".)
Tklaaaalo 5 hours ago [-]
Google has enough money, still has positive revenue, and still invests in AI + DeepMind.
Google doesn't need to do anything to make any other numbers work.
Gemini 3.1 Pro is really good; Meta just signed a deal with Google for their TPUs.
Nano Banana 2 Pro is also very good.
OpenAI's numbers might not add up, Anthropic might burn through cash, but not Google.
And it doesn't matter anyway, because as long as Google can afford it, Microsoft HAS TO do this too, and Microsoft can also afford it. The same goes for Amazon.
Microsoft invests in OpenAI and Amazon invests in Anthropic.
cmiles8 5 hours ago [-]
Worth remembering that Amazon is now taking out loans to help pay for it all. That says a lot.
Tklaaaalo 3 hours ago [-]
That honestly says nothing.
Even a company like Amazon doesn't just have billions sitting in a bank account.
They make enough profit to easily afford this.
It's easier to get a loan than to raise a lump sum through other means.
cermicelli 5 hours ago [-]
Amazon now likely has as much invested in OpenAI as Microsoft does.
Given Anthropic is also funded by them, either they are desperate not to lose or they really don't think Anthropic has a moat.
jasonfrost 3 hours ago [-]
Questionable use cases like hyperscalers housing confidential data of military operations? The use case is the same as it has been for ages: private companies supporting military operations.
nxobject 6 hours ago [-]
And, in a post-ZIRP era, guess where all of the easy money for growth is coming from? Yup, deficit-funded defense spending.
dotancohen 6 hours ago [-]
The pentagon is a questionable use case?
pjc50 5 hours ago [-]
The most questionable of all! You just know it's going to be used for increasingly inappropriate "generate me a list of targets in Iran" stuff.
dotancohen 44 minutes ago [-]
I don't "just know that". However you think you "just know that", I think you should verify your sources before spreading that around.
cmiles8 5 hours ago [-]
I'm OK with it, but the fact that this is news highlights that many others don't like it.
dotancohen 42 minutes ago [-]
The fact that this is news highlights that there is an effort to discredit US institutions. We are meant to believe that others don't like it.
UncleMeat 40 minutes ago [-]
US institutions are discrediting themselves.
SecureVillage27 7 hours ago [-]
Sounds sketchy as hell, but the article suggests it's for unclassified work, like "drafting meeting notes, creating action items, and breaking large projects into step-by-step plans".
I think I'd be more annoyed if my government weren't using tools to make BS work more efficient.
duskdozer 5 hours ago [-]
It does those things poorly.
free652 6 hours ago [-]
>The DOD’s workforce of more than 3 million people will now be able to use a no-code or low-code tool called Agent Designer to create their own digital assistants for repetitive administrative tasks.
coffeefirst 6 hours ago [-]
Oh this is dumb.
So the problem is that filling out forms is too onerous, but rather than fix the process, we create a device that fills the form with slop and then another device that approves or rejects the slop form.
I could have sworn I signed up for the other future, the one without quite this much stupid.
JKCalhoun 4 hours ago [-]
Had the film "Brazil" been written today, AI would no doubt be a significant plot element.
_DeadFred_ 2 hours ago [-]
As someone who moved from software companies to IT management, and who is watching this move to fully embrace 'everything in Excel', i.e. basically undefined business use cases/processes shoved into software ad hoc and without validation, it's going to be interesting to see how this plays out. Especially for companies that have outsourced IT and expect software to be defined, tested business processes in supported systems.
In-house IT is going to be huge in a couple of years, sorting out this mess. I would never have guessed the future would be all custom Excel spreadsheets, except instead of Excel it's random code in random languages with random data stores.
simianwords 3 hours ago [-]
Everyone's scared that it would be used for war, but how would they break the alignment of LLM models? They don't even allow me to generate images of black people with AI. How the hell will it work for war-related tasks? Or would there be a separate model, fine-tuned for government, that's allowed to be used to kill people?
max_ 6 hours ago [-]
Hey ChatGPT, could you bomb all enemies of the USA?
This should surprise no one. A CIA-backed VC was one of the first investors of Google. Big tech will always serve the powers that be. Employees that think their letters of appeal will do anything live in a fantasy land. That’s not how the real world works.
dotancohen 5 hours ago [-]
What is wrong with a company serving the country in which it operates?
Tistron 5 hours ago [-]
Surely that depends heavily on the country.
CrzyLngPwd 6 hours ago [-]
War is a racket. It always has been. It is possibly the oldest, easily the most profitable, surely the most vicious. It is the only one international in scope. It is the only one in which the profits are reckoned in dollars and the losses in lives. A racket is best described, I believe, as something that is not what it seems to the majority of the people. Only a small "inside" group knows what it is about. It is conducted for the benefit of the very few, at the expense of the very many. Out of war a few people make huge fortunes - Smedley D. Butler
...is as true now as ever.
mattmaroon 6 hours ago [-]
Health care didn’t exist in his day. War’s the second most profitable now.
josefritzishere 1 hours ago [-]
Theory: selling half-baked AI options to the government is plan B. It's an alternative to bailing out these financially failing AI companies. This is a delay tactic to prevent a collapse scenario.
haritha-j 6 hours ago [-]
Hegseth: "Hang on, that last bomb was dropped on a girl's school, not a missile launch site!".
Gemini: "You're absolutely right! That's my bad. Here's the actual missile launch target."
_ink_ 5 hours ago [-]
Gemini: "You're absolutely right! That's my bad. Do you want me to create a press statement deflecting blame to other nations?"
1vuio0pswjnm7 6 hours ago [-]
"“We’re starting with unclassified because that’s where most of the users are, and then we’ll get to classified and top secret,” Michael said in an interview, adding that talks with Google over using the agents on the classified cloud are underway."
PetriCasserole 5 hours ago [-]
War Games II anyone?
glimshe 6 hours ago [-]
Silicon Valley started with the military... And the military won't ever go away.
spwa4 5 hours ago [-]
Can you name even a single large company that wasn't created by the state? And yes, maybe "created" means "picked up a tiny company and made it big"; I'm treating that as the same (e.g., Amazon).
Also, the whole internet started as a military project. The big reason, especially when it comes to Silicon Valley's tech, is that people just don't want it until they can see what it does.
glimshe 4 hours ago [-]
Well... We're kind of saying the same thing, I just said it from another perspective. I meant to say that the military created it, so the military will stay around to reap the dividends.
brettkromkamp 7 hours ago [-]
So it begins.
Noaidi 6 hours ago [-]
If (IF!) the U.S. government is a corrupt authoritarian regime, does it matter what services Google was providing?
At what point do we see that boycotting these companies, which are helping kill, let's say, 100 little girls with Tomahawk missiles, is the very least we can do?
reedf1 7 hours ago [-]
Pete Hegseth: Hey Google, what are the best bits of Iran to bomb to maximize civilian damage?
blitzar 6 hours ago [-]
You are absolutely right. Here is a list of schools.
aurareturn 6 hours ago [-]
Wasn't Claude already used with Palantir to choose Iran bombing targets?
blitzar 6 hours ago [-]
I don't know exactly how I would feel if the software I created selected a school to bomb and then suggested bombing the rescue parties trying to find / save any unexploded children 40 minutes later (double tap strategy to kill rescue parties and/or medics).
It wouldn't be good though.
kace91 6 hours ago [-]
That 'let Claude wing it, then send for review' approach that your lazy coworker uses is now how the largest military in the world operates. No big drama.
duskdozer 5 hours ago [-]
Fortunately for the government, there's no lack of "I'm just here for the tech, keep politics out of this" developers
elil17 7 hours ago [-]
"Don't be evil"
dotancohen 5 hours ago [-]
What is evil about working with the government of the country in which they were founded and operate?
Tklaaaalo 5 hours ago [-]
Just because a company was founded in a country doesn't mean it supports that country's wars.
Do you support the current Iran war and the way it's handled?
Opposition and criticism (normally provided by the independent press and the party not in power) are there to keep power in check. With Trump you have 'deals' of rich people doing other rich people favours. They do not care about human lives.
dotancohen 40 minutes ago [-]
Of course I support the war against the regime that chants "Death to America" and is building a nuclear bomb. What kind of question is that?
sbarre 7 hours ago [-]
Sorry that's on page 5 of the search results, so it doesn't exist.
richsouth 6 hours ago [-]
I think that went into the Google Graveyard years ago
conartist6 8 hours ago [-]
Google to help staff the Pentagon with sycophantic incompetent sociopaths! Hooray!
Will I be the only one concerned that amplifying bullshit might run contrary to the mission of national defense?
rvz 7 hours ago [-]
At this point, the employees at Google who signed that open letter might as well call it quits and leave. Google has had military contracts with the Pentagon previously, so this is not surprising at all.
To them, this is just another Tuesday.
postsantum 6 hours ago [-]
> sycophantic incompetent sociopaths
It's possible to use just one word for it but I don't want to get banned
cermicelli 5 hours ago [-]
Let's all boycott Google, folks. I want all of HN to band together and in solidarity just not use Google for anything...
Let's see if anyone here has the guts to even switch away from GCP. Scratch that: can folks even move away from Apple (Apple pays for Gemini too) and Android?
I do think OpenAI deserves the boycott, but people talking about Anthropic as if it were taking some kind of ethical stand, when it was just ego-tripping for everyone involved, is insane.