~$1000 for the Pro B70, if Microcenter is to be believed:
https://www.microcenter.com/product/709007/intel-arc-pro-b70...
https://www.microcenter.com/product/708790/asrock-intel-arc-...
https://www.bhphotovideo.com/c/product/1959142-REG/intel_33p...
When 32GB NVIDIA cards seem to start at around $4000, that's a big enough gap to be motivating for a bunch of applications.
robotnikman 13 hours ago [-]
I'm probably going to snag one of the Intel cards just for the SR-IOV, for use with VMs.
scrubs 10 hours ago [-]
I tried to use SR-IOV to virtualize Mellanox NICs with VLANs on Red Hat Linux. Long story short, it did not work. Per Nvidia, the OS also has to run Open vSwitch. This work was on an already complex setup in finance, so adding Open vSwitch was considered too much additional complexity. This requirement is not something I ran across in the docs.
Anybody know better?
hedgehog 10 hours ago [-]
Depending what you're doing AMD's support for VirtIO Native Context might be a useful alternative (I think it gives less isolation which could be good or bad depending on use).
qingcharles 18 hours ago [-]
I think the B65 is priced at $650. Both supported by llama.cpp, I believe. With that power draw you could run two of them.
838 seems to be the real INT8 TOPS number for the 5090; going from 838 to ~3400 takes a 2x speedup for sparsity (i.e., skipping ops) and another 2x speedup for FP4 over INT8.
So it's closer to half the speed than a tenth. Intel also seems to be positioning this card against the RTX PRO 4000 Blackwell, not the 5090, and that one gets more like 300 INT8 TOPS. It also has less memory but at a slightly higher bandwidth. The 5090 is much faster and IIRC priced similarly to the PRO 4000, but is also decidedly a consumer product which, especially for Nvidia, comes with limitations (e.g. no server-friendly form factor cards available, and there are or used to be driver license restrictions that prevented using a consumer card in a data center setup).
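A quick sketch of that arithmetic. The 2x multipliers and the TOPS baselines are the figures quoted in this thread, not official spec-sheet numbers:

```python
# How headline TOPS figures are derived from a dense INT8 baseline:
# one 2x for structured sparsity, another 2x for FP4 vs INT8.

def effective_tops(dense_int8, sparsity_mult=2, fp4_mult=2):
    """Peak 'headline' TOPS derived from a dense INT8 baseline."""
    return dense_int8 * sparsity_mult * fp4_mult

# RTX 5090: 838 dense INT8 TOPS (per the thread) -> ~3400 headline figure
print(effective_tops(838))          # 3352

# Dense-to-dense comparison against the RTX PRO 4000's ~300 INT8 TOPS
print(round(838 / 300, 1))          # 2.8
```

Comparing dense-to-dense (or headline-to-headline) is what closes the apparent 10x gap down to the ~2-3x discussed above.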
jauntywundrkind 13 hours ago [-]
Thank you for the correction. That seemed way too lopsided to be believed. This assessment balances the memory-to-TOPS ratio much more evenly, which is to be expected! I was low-key hoping someone would help me make sense of the wildly disparate figures I was seeing.
To throw one more card into the mix: the AMD R9700 is 378/766 TOPS INT8 dense/sparse, with 644 GB/s across 32GB of memory, at ~$1400. Intel is undercutting that nicely here.
You're right that for companies, the pro grade matters. For us mere mortals, much less so. Features like SR-IOV, however, are just fantastic to see! Good job, Intel. AMD has been trickling out such capabilities for a decade (cards fused for "MxGPU" capability), and it makes it a much easier buy to just offer it straight up across the models.
adgjlsfhk1 14 hours ago [-]
Especially for exploratory work, 1/10th the perf is fine. Intel isn't able to compete head to head with Nvidia (yet), but VRAM is capability while speed is capacity. There will be plenty of use cases where the value prop here makes sense.
wmf 15 hours ago [-]
It's more like a 70 class card with extra VRAM.
giancarlostoro 18 hours ago [-]
Intel GPU prices have stayed fine, but I do wonder whether, if they prove viable for inference, they will wind up like Nvidia GPUs: severely overpriced.
cmovq 16 hours ago [-]
I mean it kind of is considering that's comparable to a 5070 which has 672 GB/s? Benefit of NVIDIA being the only one using GDDR7 for now I guess.
daemonologist 15 hours ago [-]
7800 XT has 624 GB/s as well, and can be found for $400 used. 16 GB of course.
varispeed 14 hours ago [-]
The product would have been excellent in 2024, but now it's landfill filler. You can run some small models at pedestrian speed; the novelty wears off and that's it.
Intel is not looking to the future. If they released an Arc Pro B70 with 512GB of base RAM, now that could be interesting.
32GB? Meh.
throwaway85825 6 hours ago [-]
It's true that it's severely late and missed its market window, but 512GB just isn't possible.
kadoban 13 hours ago [-]
The last go around they looked good on paper and then Intel just didn't make any of them to sell.
Announce all you want, if you don't ever ship anything I could buy, who gives a shit.
cmxch 13 hours ago [-]
The B60 (and the dual edition) were an entire exercise in how NOT to launch a product.
They let people have the B50 but only released the B60 late in the cycle.
kadoban 5 hours ago [-]
I wasn't even aware they ever _really_ released the B60. When I got bored of paying attention it was ~months after "release" and they just didn't exist to buy. I do technically see them on ebay, so yeah apparently they're out there.
thefounder 7 hours ago [-]
Why don’t they make a GPU optimised for inference/batch jobs with 1 TB of RAM? Everyone wants to run the biggest models locally.
Take a look at the die shot of a 5090:
http://dieshot.com/wp-content/uploads/2025/03/Dieshot-GB202-...
It has 32GB of RAM, and the memory controllers are about 10% of the total area. What would you have to do for 1024GB of RAM?
Not to mention the price would be astronomical.
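The scaling argument above can be made concrete. The ~10% area share and the 32x capacity jump are the numbers from the comment; the naive assumption (same capacity per controller) is the whole point:

```python
# Naive scaling of the comment's numbers: if the memory interface for
# 32 GB occupies ~10% of the die, reaching 1024 GB at the same
# capacity-per-controller means 32x the controller area.

current_gb, target_gb = 32, 1024
controller_frac = 0.10              # estimated share of die area (from above)

scale = target_gb // current_gb     # 32x more interface
area_multiple = controller_frac * scale
print(scale, area_multiple)         # 32 3.2 -> over three whole dies of PHYs alone
```

Denser DRAM per channel softens this in practice, but the bus-width problem is why nobody ships a 1 TB GPU at consumer prices.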
tbyehl 17 hours ago [-]
Where's the A310 / A40 successor? Gimme some SR-IOV in a slot-powered, single-width, low-profile card.
jmward01 16 hours ago [-]
I think this shows a shift in model architecture. MoE and similar architectures need more memory relative to the available compute than one big dense model with many layers and weights. I think this trend is likely to accelerate: building that trade-off into the hardware encourages even more experts, which deepens the trade-off, which encourages more experts still...
zozbot234 15 hours ago [-]
Most people doing local inference run the MoE layers on CPU anyway, because decode is not compute constrained and wasting the high-bandwidth VRAM on unused weights is silly. It's better to use it for longer context. Recent architectures even offload the MoE experts to fast (PCIe x4 5.0 or similar performance) NVMe: it's slow but it opens up running even SOTA local MoE models on ordinary hardware.
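The placement logic described above boils down to a VRAM budget: shared (always-hot) weights and KV cache go in VRAM first, and expert weights spill to CPU RAM or NVMe. A minimal sketch, with all sizes being hypothetical illustration values rather than any real model's:

```python
# Budgeting sketch for MoE expert offload: fill VRAM with shared layers
# and KV cache, then place as many expert weights as still fit; the rest
# are served from slower tiers (system RAM or NVMe).

def plan_offload(vram_gb, shared_gb, experts_gb, kv_gb):
    """Return how many GB of expert weights fit in VRAM after the
    shared layers and KV cache are placed there."""
    leftover = vram_gb - shared_gb - kv_gb
    in_vram = max(0.0, min(experts_gb, leftover))
    return {"experts_in_vram_gb": in_vram,
            "experts_offloaded_gb": experts_gb - in_vram}

# e.g. a 32 GB card, 8 GB shared weights, 40 GB of experts, 12 GB KV cache:
print(plan_offload(32, 8, 40, 12))
# {'experts_in_vram_gb': 12.0, 'experts_offloaded_gb': 28.0}
```

Inference runtimes expose this kind of per-tensor placement through their offload options; the trade-off works because only a few experts are touched per token.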
jmward01 14 hours ago [-]
I think you are making my point. Having slightly slower, but much more, memory on the card would speed this use case up a lot and remove the need to go to system memory, or make room for very rarely used experts, allowing even larger MoE models to run with good performance.
zozbot234 13 hours ago [-]
I think speeding up long context and opening up the use of models with larger shared layers is ultimately more relevant than hosting unused MoE layers. Of course you could do that as a last resort, i.e. when running with a smaller context that leaves some VRAM free to use.
jmward01 11 hours ago [-]
Long context will be solved and capped and turned into a Θ(1) operation or, at worst, Θ(log n). People don't have infinite perfect recall, so agents don't need it. Also, there are really good solutions to it that just aren't explored enough right now, since transformer architectures are where everyone is dumping money and time. I suspect very soon someone will have a much better system that just takes over, and then the idea of context limits will be a thing of the past. I've actually built something myself that allows infinite context/perfect recall in Θ(1) (minor asterisk here, as there has to be, but meh). I know others have solutions too.
zozbot234 6 hours ago [-]
There are already models with capped long context, but if you make that the whole model, needle-in-a-haystack search becomes impossible, and that's actually a very common operation. Which is why Qwen 3.5 only caps a portion of it, and AIUI the new Nemotron models are broadly similar.
They observe significant gains in factual knowledge retrieval capabilities, but reasoning barely moves the needle.
pjmlp 17 hours ago [-]
New cards in 2026, and targeting Vulkan 1.3?!
SkyeCA 16 hours ago [-]
32GB of vram for a decent price? I wonder if these will work well for VR, because vram is my current main issue.
aruametello 15 hours ago [-]
(VR enthusiast here, mostly under windows)
Intel support has been mild to nonexistent in the VR space, unfortunately. Given the very finicky latency + engine support, I wouldn't bet on a great experience, but I hope for the best for more competition in this market. (Even AMD has a lot of caveats compared to Nvidia.)
Footnotes:
* Critical "as low as it can be" latency support on Intel Xe is still not as mature as Nvidia's; AMD was lagging behind until recently.
* Not sure about "multiprojection" rendering support on Intel; lack of support can kill VR performance or make a title incompatible. (Optimized VR games often rely on it.)
HerbManic 14 hours ago [-]
It looked like when Intel jumped into this space, they tried to do everything at once. It didn't work well; they were playing catch-up against some very mature ecosystems. They are now being much more selective and restrained. The downside is that things like VR support are put on the back burner for years.
Good for most people, but if you need that functionality and they don't have it, go somewhere else.
whalesalad 18 hours ago [-]
Anyone running an Arc card for desktop Linux who can comment on the experience? I've had smooth sailing with AMD GPUs but have never tried Intel.
oakpond 18 hours ago [-]
Running dual Pro B60 on Debian stable mostly for AI coding.
I was initially confused about which packages were needed (backports kernel + the Ubuntu kobuk-team PPA works for me). After getting that right I'm now running vllm mostly without issues (though I don't run it 24/7).
At first had major issues with model quality but the vllm xpu guys fixed it fast.
Software capability is not as good as Nvidia's yet (i.e. no fp8 KV cache support last I checked), but with this price difference I don't care. I can basically run a small fp8 local model with almost 100k token context, and that's what I wanted.
lostmsu 12 hours ago [-]
> small fp8 local model with almost 100k token context
It wouldn't fit Qwen3.5 27B, would it? That's the SOTA.
oakpond 2 hours ago [-]
This is a fp16 model. That's 54G in weights. I can load it only with fp8 quantization enabled (>= 128k context). I run into this error during generation though: https://github.com/vllm-project/vllm/issues/36350. Looks like an issue with the flash attention backend. But yeah, if you are OK with fp8 quantization on this model, it fits. I expect with 64G VRAM it will fit without quantization
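The 54G figure above is just parameter count times bytes per weight. A quick sketch (the 2x24 GB dual-B60 VRAM figure is my assumption about this setup, and real loads need extra headroom for KV cache and activations):

```python
# Weight-memory sizing: a 27B-parameter model at fp16 (2 bytes/weight)
# vs fp8 (1 byte/weight), ignoring small overheads.

def weight_gb(params_billion, bytes_per_weight):
    return params_billion * bytes_per_weight

print(weight_gb(27, 2))   # 54 GB at fp16 -- doesn't fit in ~48 GB of VRAM
print(weight_gb(27, 1))   # 27 GB at fp8  -- fits, leaving room for KV cache
```

The same arithmetic explains the "64G VRAM would fit it unquantized" expectation: 54 GB of fp16 weights plus context cache lands under 64 GB.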
robertVance 17 hours ago [-]
I've run Arc on Fedora for years and for general desktop use it's been perfect. For LLMs/coding it's getting better, but it's rough around the edges. Had a bug where trying to get VRAM usage through PyTorch would crash the system, etc.
Levitating 16 hours ago [-]
Afaik driver support is very complete on Linux. You often see Arc GPUs used in media transcoding workloads for that reason.
HerbManic 14 hours ago [-]
We can all agree that Intel absolutely nailed it with the media encoding on these things. A nice to have for many, vital for others.
whalesalad 14 hours ago [-]
Quicksync has been around for ages; it's surprising to me that other platforms have not adopted this. No reason a modern CPU can't transcode video.
vel0city 13 hours ago [-]
Quicksync doesn't do its work on the CPU, it does the work on the integrated GPU. Their processors that did not have on-board graphics did not have Quicksync support. See their P series and many of their Xeon parts which do not carry Quicksync support, while the versions with integrated graphics do have it.
AMD chips that have integrated GPUs (their APU series of chips) often do have support for hardware video encoders. Because, once again, it's a function of the GPU and not the CPU.
himata4113 15 hours ago [-]
Linus Torvalds runs ARC :) [1]
[1] https://www.youtube.com/watch?v=mfv0V1SxbNA
Just to add a little bit: Linus requested the card be Intel as well.
wyre 18 hours ago [-]
There was the video a little while back where LTT built a computer for Linus Torvalds and they put an Intel Arc card inside, so I'd imagine Linux support is at the very least, acceptable.
My B580 works fine on Linux. Graphics perf is a bit worse than under Windows, but supposedly compute is pretty much the same.
unethical_ban 16 hours ago [-]
I'm running A-series Arc for media transcoding and it works just fine.
SmellTheGlove 14 hours ago [-]
Any idea if it'll be possible to mix these with nvidia cards? Adding 32GB to a single 3090 setup would be pretty nice.
nickthegreek 18 hours ago [-]
Both have 32gb vram. Could be a pretty compelling choice.
cptskippy 18 hours ago [-]
They certainly look viable as replacements for my Tesla P40 for virtual workloads.
mikelitoris 15 hours ago [-]
Too little too late, classic Intel
cmxch 13 hours ago [-]
Good to see that Intel learned to release product to more than just resellers.
Now can we have a 64GB B70 that's available worldwide and not marketed to unicorns like the Maxsun B60 Dual model has been?
lostmsu 12 hours ago [-]
Nothing like Crossfire/SLI? Not possible to efficiently connect multiple cards for one large model?
DiabloD3 17 hours ago [-]
Since they fired the entire Arc team, and a lot of the senior engineers have already updated their LinkedIns to reflect their new positions at AMD, Nvidia, and others, as well as laying off most of their Linux driver team (GPU and non-GPU), uh...
WTF?
staticman2 17 hours ago [-]
You are exaggerating, right? They didn't really fire the entire Arc team did they? I couldn't find a source saying that.
DiabloD3 17 hours ago [-]
Nope, no exaggeration.
The news that Celestial is basically canceled already hit the HN front page, as has the news that Druid was canceled before tapeout.
Celestial will only be issued in the variant that comes in budget/industrial embedded Intel platforms that have a combined IO+GPU tile, but the performance big boy desktop/laptop parts that have a dedicated graphics tile will ship an Nvidia-produced tile.
There will be no Celestial dGPU variant, nor a dedicated-tile variant. Drivers will cease support for dGPUs of all flavors, and no new bug fixes will happen for B-series GPUs (as there are no B-series iGPUs; A-series iGPUs will remain unaffected).
They signed the deal like 2-3 months ago to cancel GPUs in favor of Nvidia. The other end of this deal is the Nvidia SBCs in the future will be shipping as big-boy variants with Xeon CPUs, Rubin (replacing Blackwell) for the GPU, Vera (replacing Grace) for the on-SBC GPU babysitter, and newest gen Xeons to do the non-inference tasks that Grace can't handle.
There is also talk that this deal may lead to Nvidia moving to Intel Foundry, away from TSMC. There is also talk that Nvidia may just buy Intel entirely.
For further information, see Moore's Law Is Dead's coverage off and on over the past year.
chao- 15 hours ago [-]
You may be a bit too credulous. There has been a "leak" or "rumor" that Intel's GPU initiatives are canceled about once every three months, for over two years. Yet Intel continues to release new SKUs and make new product announcements. Just last month they announced a new data center GPU product (an inference-focused variant of Jaguar Shores).
I can't see the future, but I can see patterns: the media that reports straight from the industry rumor mill LOVES this "Intel has cancelled its GPUs" story, for whatever reason. I have no particular love for Intel (out of my six current systems, my only Intel box is a cheap NUC from 2018), but at this point, these rumors echo the old joke about economists who "accurately predicted the last nine out of two recessions".
gk-- 16 hours ago [-]
ah, so this is MLID. yeah i'll wait for the announcement.
mtlmtlmtlmtl 12 hours ago [-]
MLID has been saying Arc was cancelled since before the first Alchemist cards were released.
PowerElectronix 16 hours ago [-]
MLID is a terrible information source.
thesmart 14 hours ago [-]
The idea that Intel's foundry could replace TSMC is hilarious. No. Maybe a gamer-focused mid-market card based on 30-series.
throwaway85825 6 hours ago [-]
Pat spent a lot of money on foundry to catch up.
wtallis 17 hours ago [-]
This is a chip they've had lying around for a while. It's the same architecture as used in the Arc B580 that launched at the end of 2024; this is just a slightly larger sibling. Intel clearly knew that their larger part wouldn't make for a competitive gaming GPU (hence the lack of a consumer counterpart to these cards), but must have decided that a relatively cheap workstation card with 32GB might be able to make some money.
throwaway85825 6 hours ago [-]
Now if they had launched the 32GB workstation card in 2024, with cheap RAM, it would have been a success.
DiabloD3 17 hours ago [-]
Still seems crooked to sell a GPU that has already lost its driver team and will get no meaningful updates.
wtallis 17 hours ago [-]
Does it need a huge driver team pushing out big updates in order to be suitable for the kind of Pro use cases it's targeted at? They're explicitly not going after the gaming market so they don't need to be on the treadmill of constant driver updates delivering workarounds and optimizations for the latest game releases.
They're still going to be employing some developers for driver maintenance for the sake of their iGPUs, and that might be enough for these cards.
unethical_ban 16 hours ago [-]
I didn't know this. Have they officially given up on building discrete GPUs? Is this a last gasp of Arc to offload decent remaining architectures at a lower price than nvidia?
It is crazy to me that, in a world newly craving GPU architecture for AI and with gamers largely neglected, Intel would abandon an established product line.
StilesCrisis 15 hours ago [-]
It does sound like a very Intel choice though.
mschuster91 15 hours ago [-]
> It is crazy to me that, in a world newly craving GPU architecture for AI and with gamers largely neglected, Intel would abandon an established product line.
You still need to fab it somewhere. Intel's fabs have been plagued with issues for years, the AI grifters have bought up a lot of TSMC's allotments and what remains got bought up by Apple for their iOS and macOS lineups, and Samsung's fabs are busy doing Samsung SoCs.
And that unfortunately may explain why Intel yanked everything. What use is a product line that can't be sold because you can't get it produced?
Yet another item on my long list of "why I want to see the AI grift industry burn and the major participants rotting in a prison cell".
WarmWash 19 hours ago [-]
Wake me when they wake up and release a middling card with 128GB memory.
Probably 160 GB for $4,000.
I'm not sure what you're asking. The link I posted is for a PCIe card.
lostmsu 8 hours ago [-]
The page doesn't mention what interface the 160GiB card uses. Quick Googling doesn't either.
zozbot234 15 hours ago [-]
Buy Strix Halo or Apple Silicon platforms and you get essentially that.
Weryj 18 hours ago [-]
Buy 4?
electronsoup 18 hours ago [-]
Which mainboards are cheap and have four PCIe x16 (electrical) slots that don't need weird risers to fit 4 GPUs?
SmellTheGlove 14 hours ago [-]
Consumer CPUs don't have enough PCIe lanes to do that. Even if they had physical x16 slots, at most two of them would actually run at x16.
What's cheap to you? You can find Epyc 7002/7003 boards on eBay in the $400 range and those will do it. That's probably the best deal for 4x PCIe 4.0 x16 and DDR4. Probably the $500 range with a CPU. That's in the ballpark of a mid-to-high-end consumer setup these days.
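The lane math behind this is simple. The lane counts below are typical figures (roughly 24 usable lanes on a mainstream desktop CPU, 128 on EPYC 7002/7003), not exact for every SKU:

```python
# How many GPUs can run at full x16 given a CPU's usable PCIe lane count.

def full_x16_slots(cpu_lanes, lanes_per_slot=16):
    return cpu_lanes // lanes_per_slot

print(full_x16_slots(24))    # 1 -- a consumer CPU can't feed 4 cards at x16
print(full_x16_slots(128))   # 8 -- an EPYC platform does it with lanes to spare
```

This is why multi-GPU builds end up on used server or HEDT platforms rather than consumer boards.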
irishcoffee 17 hours ago [-]
If your actual gripe is risers, sounds like a "you" problem, not a technical problem.
MrDrMcCoy 12 hours ago [-]
Even if you're fine with risers, that might not be enough. If the bridge lanes are PCIe Gen 3, as many consumer boards have, your Gen 5 card might not init. I extensively tested several motherboards to try and get my AM5 CPU talking to a triple Radeon AI Pro 9700 XT setup, and they absolutely refuse to come up on PCIe3. I was using dummy EDID plugs for them, so they think they have a display, ruling out that issue.
What I eventually had to do was buy a used Threadripper box to run those cards, because PCIe Gen 4 definitely works.
WarmWash 16 hours ago [-]
Because I don't want to spend $4k.
I want to spend $1500 for a card that can run a proper large model, even if it only does 25 tok/s.
Intel is squandering a golden opportunity to kneecap AMD and Nvidia, under the totally delusional pretense that Intel enterprise cards still have a fighting chance.
ericd 16 hours ago [-]
I saw a good quote recently, "you're not going to get 128 gigs of vram loose in a plastic bag for that much".
vessenes 18 hours ago [-]
Not sure why you'd want this over an apple setup. M4 max is 545GB/s of memory bandwidth - $2k for an entire Mac Studio with 48GB of RAM vs 32 for the B70.
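For single-stream generation, memory bandwidth sets a hard ceiling, which is why the M4 Max's 545 GB/s matters. A rough sketch (the 27 GB model size is an illustrative assumption; real throughput lands well below the ceiling once KV cache reads and overheads are counted):

```python
# Decode is roughly bandwidth-bound: each generated token reads all
# active weights once, so tok/s <= bandwidth / weight bytes.

def decode_ceiling_tps(bandwidth_gb_s, weights_gb):
    return bandwidth_gb_s / weights_gb

# M4 Max at 545 GB/s (figure from the comment) with a 27 GB fp8 model:
print(round(decode_ceiling_tps(545, 27)))   # ~20 tok/s upper bound
```

The same formula is why a card with less memory but more bandwidth can still win on speed for models that fit.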
hedgehog 18 hours ago [-]
Being able to keep infrastructure on Linux is a big advantage.
RestartKernel 18 hours ago [-]
How many compatibility issues is macOS realistically expected to cause? The Windows dev experience felt unusable to me without a Linux VM (and later WSL), but on macOS most tooling just kinda seems to work the same.
einr 17 hours ago [-]
It’s not the tooling for me, macOS is just bad as a server OS for many reasons. Weird collisions with desktop security features, aggressive power saving that you have to fight against, root not being allowed to do root stuff, no sane package management, no OOB management, ultra slow OS updates, and generally but most importantly: the UNIX underbelly of macOS has clearly not been a priority for a long time and is rotting with weird inconsistent and undocumented behaviour all over the place.
wolfhumble 16 hours ago [-]
> Weird collisions with desktop security features
Linux is not immune to BIOS/UEFI firmware attacks either. Secure Boot, TPM, and LUKS can work well together, but you still depend on proprietary firmware that you do not fully control. LogoFAIL is a good example of that risk, especially in an evil maid scenario involving temporary physical access. I think Apple has tighter control over this layer.
MrDrMcCoy 12 hours ago [-]
You completely misunderstood the quoted remark you responded to. The desktop security features in macOS that interfere with loading unblessed binaries and libraries are a huge pain in the ass, especially for headless server use.
functional_dev 13 hours ago [-]
Yeah... attacks like LogoFAIL hit during the DXE and BDS phases when the firmware is acting as its own 'mini OS' before the handoff
Provisioning, remote management, containers, virtualization, networking, graphics (and compute), storage, all very different on Mac. The real question is what you would expect to be the same.
bigyabai 17 hours ago [-]
For server usage? macOS is the least-supported OS in terms of filesystems, hardware and software. It uses multiple gigabytes of memory to load unnecessary user runtime dependencies, wastes hard drive space on statically-linked binaries, and regularly breaks package management on system upgrades.
At a certain point, even WSL becomes a more viable deployment platform.
protimewaster 17 hours ago [-]
My thinking is that I'd pick this, because I can't just plug a Mac into a slot in my server and have it easily integrate with all my other hardware across an ultra fast bus.
If they made an M4 on a card that supported all the same standards and was price competitive, though, that might be a good option.
fvv 18 hours ago [-]
With those $2k you can have 2x B70, with 1.2 TB/s and 64GB of VRAM, on Linux (and you can scale further, while Mac price increases are not linear).
Reubend 17 hours ago [-]
You're absolutely right. And these Intel GPUs will also be much faster in terms of actual math than the M series GPUs that the Apple setup would have.
pjmlp 2 hours ago [-]
Some folks care about the workstation market, and the flexibility it offers in choice.
thesmart 13 hours ago [-]
Because the B70 cards can pipeline 500 tok/s on concurrent workloads. Apple Silicon and Nvidia consumer cards only work well w/ serial workloads.
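The reason concurrent workloads pipeline well: one sweep over the weights per decode step serves every request in the batch, so aggregate tok/s scales with batch size until compute, not bandwidth, becomes the bottleneck. A toy model of that effect, with all numbers being illustrative assumptions rather than B70 specs:

```python
# Batched decode throughput under a bandwidth-bound model with a
# compute cap. Bandwidth/weight/cap values are illustrative only.

def batched_tps(bandwidth_gb_s, weights_gb, batch_size, compute_cap_tps):
    steps_per_sec = bandwidth_gb_s / weights_gb   # weight sweeps per second
    return min(steps_per_sec * batch_size, compute_cap_tps)

for b in (1, 8, 32):
    print(b, round(batched_tps(650, 27, b, 500)))
# single-stream sits near the ~24 tok/s bandwidth ceiling;
# a batch of 32 saturates the assumed 500 tok/s compute cap
```

This is also why serial-workload comparisons understate what a card can do for multi-user serving.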
cptskippy 18 hours ago [-]
Support for Single Root IO Virtualization (SR-IOV) to enable compute and Graphics workloads in virtualized environments.
2OEH8eoCRo0 18 hours ago [-]
Funny, I'm not sure why anyone would use Apple over Linux.
pjmlp 2 hours ago [-]
Good support on laptops that I can buy at Media Markt, FNAC, Coolblue, ...
Although personally I am more of the Windows/Linux VM workstation laptop kind.
wyre 18 hours ago [-]
One can upgrade and swap parts in a computer running an Intel GPU. Linux is very well supported compared to Mac hardware.