Thought I was clicking on a deez nuts joke, but I must say I was pleasantly surprised. Nice work!
melfie@lemy.lol to Technology@lemmy.world • Windows 10 support has ended, but here’s how to get an extra year for free • English · 5 · 4 days ago

I’ve distro hopped back to Mint and have stayed on it for over a year now. I think Mint being beginner-friendly kind of makes it a victim of its own success, because even as someone who has been using Linux for several years, Ubuntu without Snaps and with a highly polished UX is pretty ideal. PopOS has the same value proposition, but I like Cinnamon way better than Cosmic, or even KDE.
melfie@lemy.lol to Hardware@lemmy.world • With considerably less fanfare, Apple releases a second-generation Vision Pro • English · 5 · 4 days ago

> Reports have circulated that Apple has deprioritized Vision Pro development internally and that the company is trying to shift to something more along the lines of Meta’s less-obtrusive augmented reality glasses
Smaller and cheaper is definitely the way to go if they can pull it off while still retaining the existing functionality.
Yeah, looking forward to the day when SoCs fully replace discrete GPUs for all the reasons you stated, and also when there are better options than Apple devices in that space. Pretty sure there have never been many render farms built from Apple hardware, though, and Mac Pros have never been the most cost-effective option for applications requiring a lot of compute. MacBooks and phones, on the other hand, are more of a sweet spot, and the M chips have done wonders there, to your point.
It’s a SoC and is certainly more power efficient, can fit into smaller form factors, etc. It’s definitely progress in the right direction, but it’s still too expensive to be a practical alternative to higher-end GPUs. What am I missing?
According to this Blender benchmark, an M3 Ultra with 80 cores is similar to a 4070 Ti. Too bad a machine with an 80-core M3 Ultra will cost several grand while a 4070 Ti can be had for a grand. I appreciate that a SoC can use RAM instead of the scam that is VRAM, but Apple needs to do something about that price; otherwise, might as well get a 5090.
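To put that price gap in rough numbers, here’s a back-of-the-envelope sketch; the dollar figures are assumptions standing in for “several grand” and “a grand”, not actual quotes:

```python
# Back-of-the-envelope perf-per-dollar comparison.
# Prices are assumptions standing in for "several grand" and "a grand".
m3_ultra_system_price = 4000.0   # assumed price of an 80-core M3 Ultra machine, USD
rtx_4070_ti_price = 1000.0       # assumed price of a 4070 Ti card, USD

benchmark_score = 1.0            # treat the Blender results as roughly equal

m3_perf_per_dollar = benchmark_score / m3_ultra_system_price
gpu_perf_per_dollar = benchmark_score / rtx_4070_ti_price

print(f"M3 Ultra system: {m3_perf_per_dollar:.6f} score per dollar")
print(f"4070 Ti card:    {gpu_perf_per_dollar:.6f} score per dollar")
print(f"4070 Ti is roughly {gpu_perf_per_dollar / m3_perf_per_dollar:.1f}x better per dollar")
```

Of course this compares a whole system against a bare card, so the real-world gap shrinks once you price out the rest of the PC, but the ratio is still lopsided.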
melfie@lemy.lol to Technology@lemmy.world • Is the AI Conveyor Belt of Capital About to Stop? • English · 36 · 6 days ago

> Nvidia announced that it would invest $100 billion into OpenAI, OpenAI announced that it would pay $300 billion to Oracle for computing power, and Oracle announced it would buy $40 billion worth of chips from Nvidia.
It’s like the old joke about the two economists who each pay the other $100 to eat a pile of shit: “I can’t help but feel like we both just ate shit for nothing.” “That’s not true,” responded the second economist. “We increased the GDP by $200!”
Except the way it actually works is Larry, Jensen, and Sam keep the money while the rest of us eat shit.
Setting up full-disk encryption on a Steam Deck with an on-screen keyboard should definitely be an option during SteamOS installation, but it’s a pain as it stands. It’s my only Linux device not using LUKS.
Seems a lot of distros put it under an advanced section in the installer, but I think the “advanced” option should be not enabling full-disk encryption, i.e., you only opt out if you know what you’re doing and have assessed the risk.
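For a sense of why doing it by hand is a pain, here’s a minimal sketch of the manual LUKS steps an installer would otherwise automate; the device path, mapper name, and filesystem are hypothetical, for illustration only:

```python
# Minimal sketch of the manual LUKS setup an installer would otherwise handle.
# The device path, mapper name, and filesystem are illustrative assumptions;
# do not point this at a real disk without adapting it.
import subprocess

DEVICE = "/dev/nvme0n1p3"   # hypothetical root partition
MAPPER = "cryptroot"        # unlocked device appears as /dev/mapper/cryptroot

def run(*cmd: str) -> None:
    """Echo a command and run it, stopping on the first failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Turn the partition into a LUKS2 container (prompts for a passphrase).
run("cryptsetup", "luksFormat", "--type", "luks2", DEVICE)

# 2. Unlock the container so a filesystem can be created on the mapping.
run("cryptsetup", "open", DEVICE, MAPPER)

# 3. Put the root filesystem on the unlocked mapping.
run("mkfs.ext4", f"/dev/mapper/{MAPPER}")
```

Both cryptsetup calls are interactive (a confirmation plus a passphrase), which is exactly the part that’s awkward with only an on-screen keyboard, and you’d still have to wire up crypttab, fstab, and the bootloader afterward.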
melfie@lemy.lol to Linux@lemmy.ml • For Linux gaming (including DX12), is there a strong reason to choose NVIDIA over AMD? • 1 · 9 days ago

NVIDIA definitely dominates for specialized workloads. Look at these Blender rendering benchmarks and notice AMD doesn’t appear until page 3. Wish there were an alternative to NVIDIA OptiX that was as fast for path tracing, but there unfortunately isn’t. Buy an AMD card if you’re just gaming, but you’re unfortunately stuck with NVIDIA if you want to do path-traced rendering cost-effectively:
Edit:
Here’s hoping AMD makes it to the first page with next generation hardware like Radiance Cores:
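For anyone curious what the OptiX-versus-AMD comparison means in practice, here’s a minimal sketch of how the Cycles GPU backend gets picked via Blender’s Python API; exact preference names can shift between Blender versions, so treat it as illustrative:

```python
# Select the Cycles GPU backend: "OPTIX" on NVIDIA cards, "HIP" on AMD cards.
# Run inside Blender's Python console or via `blender --background --python this_file.py`.
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPTIX"   # swap to "HIP" for an AMD card
prefs.get_devices()                   # refresh the detected device list

for device in prefs.devices:
    device.use = True                 # enable every detected GPU

bpy.context.scene.cycles.device = "GPU"
```

The benchmark pages linked above are essentially comparing that OPTIX path on NVIDIA cards against the HIP path on AMD cards.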
That would be wild if a SoC approached 5090 performance. This Blender benchmark shows an M3 Ultra with 80 cores being similar to a 5070 Ti, though you’re going to pay several times the price for the M3 machine. At this rate, it’s quite possible that SoCs will make discrete GPUs the less practical choice for most GPU-intensive workloads in the not-too-distant future, though the opposite is true today, even despite the silly power requirements of top-end NVIDIA GPUs. I think NVIDIA is especially digging themselves a hole with the VRAM nonsense, and we will all rejoice when we can run GPU workloads with 64 GiB of shared, cheap RAM. It would certainly be ideal if other competitors could develop equally powerful chips, though, since being stuck in Apple’s walled garden is a fairly undesirable tradeoff.
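As a rough illustration of why that shared memory matters, here’s a toy sketch of what fits in a fixed VRAM pool versus a 64 GiB unified budget; the workload sizes are made-up assumptions:

```python
# Toy check of which workloads fit in dedicated VRAM vs. unified memory.
# All sizes are made-up assumptions, not measured figures.
workloads_gib = {
    "mid-size scene or model": 12,
    "large scene or model": 40,
}

dedicated_vram_gib = 16   # assumed consumer GPU VRAM
unified_ram_gib = 64      # the shared-RAM scenario from the comment above

for name, size in workloads_gib.items():
    in_vram = "fits" if size <= dedicated_vram_gib else "spills"
    in_unified = "fits" if size <= unified_ram_gib else "spills"
    print(f"{name} ({size} GiB): VRAM {in_vram}, unified memory {in_unified}")
```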