
> So, NVIDIA, the choice is yours! Either: Officially make current and all future drivers for all cards open source, while keeping the Verilog and chipset trade secrets... well, secret OR Not make the drivers open source, making us release the entire silicon chip files so that everyone not only knows your driver's secrets, but also your most closely-guarded trade secrets for graphics and computer chipsets too!

Interesting. Someone got really fed up with them. I don't think their binary blob ever made them any friends.



No single piece of software has wasted more of my time than Nvidia's drivers, mostly (though not exclusively) on Linux. They've rendered my OS unbootable so many times over the years. So many times I've spent whole days of my life troubleshooting, upgrading, downgrading, configuring, rebooting. Then often reinstalling the OS after their installers mess stuff up in ways that are impossible to even know until they pop up and screw you later on.

I don't condone cybercrime but man would I just be ecstatic if Nvidia would finally follow AMD and Intel in developing an open source driver.


Linus Torvalds directly addressing NVIDIA (worth the click: it's only 17 seconds):

https://youtu.be/_36yNWw_07g


Is it the video that I think it is?

Yeah it is~ ^^


Nvidia license taints kernel.


Oh, tell me about it.

+ Installs Linux, this time determined to make it my daily driver. Why is this so slow though?

- You need to install Nvidia drivers

+ Oh, OK makes sense.

-- INSTALLS NVIDIA DRIVERS --

-- LINUX NO LONGER BOOTS, OBSCURE ERROR MESSAGE, FURIOUSLY GOOGLING ON A TINY PHONE SCREEN TRYING TO RESOLVE THE ISSUE --

Later I made myself a Hackintosh, and eventually bought an actual Mac. The Hackintosh stuff was much, much more stable than anything Nvidia ever released when I was trying to use Linux. Installing patched kext files and trying to make macOS run at full resolution with smooth animations on unsupported hardware was much less pain than dealing with official Nvidia drivers on Linux.

It's sad to hear that graphics drivers are still not a solved problem on Linux.


> It's sad to hear that graphics drivers are still not a solved problem on Linux.

They are; Intel and AMD have shown that it can be done, and done excellently. NVIDIA just decided they do not want to be part of the solution.


Absolutely. On my last NVIDIA device, I ended up using Nouveau (luckily enough my specific card did not have firmware signing - NV110). There were absolutely no crashes, but some games did become unplayable (incomplete features). https://nouveau.freedesktop.org/FeatureMatrix.html

Ever since, I have only bought CPU-integrated graphics (an Intel desktop, and an AMD laptop for my cousin). Proud to say I have never paid for a device with mandatory signed firmware.


My son wanted a "gaming PC" and I gave in with the only stipulation being that it had to be Linux. He plays Minecraft and Cities: Skylines, so it's all good. The Nouveau drivers suck for this. It's a ~3-year old i7 PC with a 1660 GPU. Maybe that's the difference? Switching from Nouveau to the Ubuntu-provided recommended Nvidia driver (510.XX?) gives you a 200-300% improvement!
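For anyone wanting to make the same switch, a minimal sketch using Ubuntu's own tooling (the 510 version number is just an example; install whatever `ubuntu-drivers` actually recommends on your machine):

```shell
# Sketch: prefer the distro-packaged Nvidia driver over Nvidia's .run installer.
if command -v ubuntu-drivers >/dev/null 2>&1; then
  ubuntu-drivers devices   # prints detected hardware and the recommended driver, e.g. "nvidia-driver-510"
  echo "then run: sudo apt install nvidia-driver-510   # or whichever driver was recommended above"
else
  echo "ubuntu-drivers not available; use your distro's packaged driver instead"
fi
```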


> Maybe that's the difference?

The difference is that the free drivers can't change the GPU clock frequency on newer cards. Newer means anything above NV110, that is, the 900 series.


Thanks for the info!


Certainly the Nouveau drivers are slower. I sacrificed both speed and capability for software freedom.

I don't know about Minecraft, but Minetest runs well for me on Nouveau, on a 7-year old i7 with a GTX 850M.

I tried RimWorld and Factorio, but they were unplayable, I believe due to missing OpenGL features.

I have not tried Cities: Skylines.


Pedant nit: Intel and AMD CPUs also both have mandatory signed firmware with higher privilege levels[0] than Linux. Blame enterprises that want owner-operated backdoors into their own system.

It's still not as egregious as Nvidia's though, which is specifically designed as a defeat device to frustrate the use of third-party drivers. You at least don't need ME/PSP access to boot into a Free OS.

[0] ME/PSP are technically separate processors, but they have full control over the x86 cores. Their firmware can be minimized but they are integral to the boot process and enforce code signing on the BIOS. Speaking of, the BIOS gets to load into SMM, which has been around since the 386 and runs above both ring 0 and hypervisors.


I’m trying to be part of the solution by buying an AMD graphics card for my Linux box.


Vote with your wallet, very nice!


The problem with non-nvidia gpus is not having access to CUDA. Lots of productivity software target Nvidia for gpgpu stuff as well. As much as I love linux, if I built a gaming pc I'd swallow my pride and install windows with WSL and call it a day. I long for the day when AMD really invests in an open source alternative to CUDA and doesn't just abandon it like they did with ROCm.


While I hate unreliable drivers as much as the next guy, this is not just an Nvidia problem. Several Linux kernel developers love to break the API just to mess with closed-source GPU drivers, e.g. replacing an API for no reason other than hiding a few indispensable functions behind a GPL-only macro. This causes nothing but avoidable grief for users and the driver maintainers.

It’s really interesting to see the difference between Linux and FreeBSD (which doesn’t break kernel APIs for shits and giggles). The damn Nvidia kernel module is still a bloated closed-source blob, but not once in >10 years did it break. The largest problem I had with this driver under FreeBSD came after the adoption of KMS, because Linux locked the new memory management API required for efficient, tear-free frame swapping behind GPL-only macros, resulting in massive tearing (there were workarounds, like a vsync-enabled compositor, but they added latency and wasted enough power to cost me 30-60 minutes of battery runtime).


> It's sad to hear that graphics drivers are still not a solved problem on Linux.

This is not a Linux (or BSD) problem but purely an Nvidia issue. After my last issue with an embedded Nvidia GPU on my motherboard (~10 years ago), I stay far away from Nvidia. I will never buy anything remotely associated with Nvidia until they open source their drivers. I knew someone who worked there as an engineer and told him that; he just smiled, both of us knowing it was really beyond his control.

This is 2022, we should be well beyond the point of worrying about Video Hardware in Linux/BSD.


Interesting. I’ve not used Linux as my main driver for years. I miss it and I want to come back, but I basically don’t have any decent PC anymore, so I have the freedom to get/build one.

When I used it, ironically, NVidia was the way to go on Linux and ATI/AMD was a shit show. Is AMD OK nowadays? My needs are mostly comfortable casual gaming where I don’t care about having more than 60fps, and I don’t play online (so the anti-cheat situation doesn’t really bother me beyond ideological issues).


I never had any problems with the NVidia driver either, and I can't see how it would prevent the system from booting. (I mean, it's not linked with the kernel and not set to load the module until X starts, so you can get to a single user mode console or SSH in if X breaks. Maybe people think X and booting are the same thing?)

The only complaint I had with NVidia was that a long time ago I had one of the first consumer-grade 4k displays, and it exposed itself as 2 1920x2160 displays. That all worked ... "great". X thought the monitor was two displays, and so anything that cares about the calculation of display boundaries (say, full screen video), didn't work correctly; it would just show on half the monitor. One day I found that a config parsing bug could cause the xrandr extension to be disabled while keeping everything else working, and then full screen worked. (The options to just lie to xrandr or disable it were of course broken, which is why bugs interacting with bugs was the fix. They never fixed the bugs or the bugged bugs, so shrug I guess.) It took me months of pulling out my hair to find that workaround, and having used X since before xrandr was invented it drove me crazy. In a past life, I also pulled out all my hair to get xrandr to work. It never fucking worked. And then the one time you don't want it to work, it works and can't be made not to work. Argh!!! Just send me back to 2000 please!

(Since then I've switched to Windows, and I still have that monitor. It works perfectly under Windows. You wouldn't even know that it uses the hacky Displayport MST stuff. Probably have a nice hack hard-coded right into the driver, and if it doesn't activate for some reason, you're completely screwed.)


You're right, I suspect, as I have had this problem myself; the system is bootable, but the X server breaks in such a way that I can't even switch to a text-only terminal to fix it. So I have to go into single-user mode, uninstall and reinstall the graphics driver, then reboot, and it magically works again. I think it is the installer breaking when an old copy is already installed, rather than a bug in the actual driver itself. This happens every 3-4 months on my Ubuntu gaming laptop.


Despite my complaints, for casual gaming on a desktop you shouldn't have too many issues with Nvidia's drivers these days as long as you can use the ones packaged by your distro instead of Nvidia's installer. My troubles have mostly been associated with laptops that have both Intel and Nvidia graphics, or desktops used for ML training where the distro-packaged drivers wouldn't work for one reason or another.

I've heard AMD's drivers are OK these days but haven't tried them myself because I care about ML.


>I've heard AMD's drivers are OK these days but haven't tried them myself because I care about ML.

They are OK, but with some caveats, like VP9 HW decode not being available in the Linux AMD iGPU drivers for reasons. That won't be a problem if you're running an AMD desktop tower PC, but if you're going to watch YouTube on your AMD laptop, your battery will take a hit, along with possible extra fan noise. That's unacceptable for a good media-consumption experience on Linux when even no-name Android phones and tablets have VP9 HW decode in YouTube out of the box. No bueno.

This is such a mess, as VP9 & AV1 HW video decode is not an issue under Linux for Intel iGPUs, but AMD seems not to give a shit about exposing VP9 decode on Linux for their iGPUs. Also, unlike with Intel iGPUs, under Linux radeontop does not show GPU decode usage for easy diagnostics. I assume it's because AMD bought the video codec engine IP from some third-party fabless IP vendor and doesn't have permission to open source the drivers for the video engine block under the licensing agreement, as they do for their in-house-developed GPU block. But still, regardless of the reason, it sucks for Linux end users.

And another important issue nobody seems to talk about, but one you should be aware of: AMD APUs (at least up to the 5000 series) don't have unified memory between CPU and iGPU like Intel and Apple do, so you have to configure in your laptop BIOS how much of your system RAM you want allocated to the iGPU (512MB/1GB/2GB/4GB), with the rest remaining for the CPU.

This sucks big time if you're switching between productivity, where you need more memory for applications, and video games, where you want as much memory as possible for the GPU: you have to keep rebooting into the BIOS to change the iGPU allocation, whereas on Intel chips the whole system memory is unified and shared dynamically between CPU and iGPU. It boils my blood, because the whole marketing gimmick of AMD's APUs was the seamless merger of CPU and GPU resources, while in practice their (5000-series) APUs are just a Ryzen CPU with a Vega GPU glued onto the same die without properly sharing any resources. Maybe the new 6000 series with RDNA2 changed that, but I couldn't find any detailed info.

Honestly, as a laptop owner with an AMD APU: if you're planning to go Linux, just get one with an Intel chip instead. Far fewer headaches, as Intel's Linux drivers and apps are second to none and use a unified memory model.

The performance and efficiency of AMD APUs is still great though, especially under Windows, but the overall implementation as a whole package seems poorly thought out and rushed out the door, especially under Linux compared to Intel.


Isn't this some distro-specific problem?

VP9 should be available since Mesa 18.1 (or so) for VCN-based GPUs (i.e. Raven Ridge and newer). VP9 is not supported in hardware on UVD-based GPUs (i.e. discrete Vega).

VCN 3.0-based GPUs (i.e. RDNA2, the 6000-series APUs) also support AV1 decoding.
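A quick way to check what the stack actually exposes on a given machine is to query the VA-API driver directly; a sketch, assuming `vainfo` from the libva-utils package is installed:

```shell
# List the decode profiles the VA-API driver advertises; VP9/AV1 profile lines
# mean hardware decode is at least exposed to userspace (browsers may still ignore it).
if command -v vainfo >/dev/null 2>&1; then
  vainfo 2>/dev/null | grep -iE 'vp9|av1' || echo "no VP9/AV1 profiles advertised"
else
  echo "vainfo not installed (package: libva-utils)"
fi
```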


>VP9 should be available since Mesa 18.1 (or so) for VCN-based GPUs (i.e. Raven Ridge and newer).

Yeah, in theory it's available, but good luck actually getting your browser to use it to HW decode YouTube videos under linux out of the box.

It usually works fine with video players like mplayer but not in browsers.

There are endless threads online about this not working. I for one, never managed to get it to work in any browser. What a mess.


> It usually works fine with video players like mplayer but not in browsers.

Chrome doesn't support VA-API, so do not bother trying.

Some distributions do have Chromium builds with VA-API support enabled, but YMMV. Chromium uses X11 only and libva requires dri2, so it doesn't work under XWayland, which supports only dri3. (=> Chromium with VA-API works only in native X11 session)

Firefox does support VA-API. There was a period of time, when it did only under Wayland, but nowadays both X11 and Wayland should work. When running under Wayland, use the native version and /not/ via XWayland, because the same thing as with Chromium applies.
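Concretely, the Firefox side usually comes down to a couple of about:config prefs plus launching the right session type. The pref names below are the commonly cited ones circa 2022; verify them locally, as they have changed between releases:

```
// about:config (or user.js) toggles commonly needed for VA-API in Firefox
media.ffmpeg.vaapi.enabled = true
// on X11, forcing EGL is typically also required for VA-API to engage:
gfx.x11-egl.force-enabled = true
// under Wayland, launch the native build, e.g.: MOZ_ENABLE_WAYLAND=1 firefox
```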


The above is a fair summation of Linux's failure as a desktop in a single post.


Maybe it is, but keep in mind that Chrome is a proprietary application - the Linux and desktop communities have exactly zero input into what will and will not be supported.

On the Firefox side, Red Hat and SUSE people did the work to support video acceleration.


Minor point: that BIOS setting does dedicate some amount of RAM to the GPU, so the rest of the system can't use it. The GPU can allocate more than that, though, if the program makes a slightly different allocation call to request it.

Seamlessly sharing memory between the two chips works fine with the compute toolchain (rocm), I don't know how transparent it is for games.


>The GPU can allocate more than that though if the program makes a slightly different allocate call to request it.

Maybe in ROCm, but not in the generic Windows and Linux apps and video games that I've tried. They all cap out at whatever you set in the BIOS for the iGPU VRAM reservation and don't go beyond that into system RAM. If ROCm has this ability, how come AMD's driver, at least in Windows, doesn't do this memory-overflow trick for VRAM-hungry apps like video games? That would certainly alleviate the issue.

IMHO, still a worse implementation than the unified model of Intel. Having a large chunk of memory constantly blocked for a HW component, regardless if it's currently needed or not is just horribly inefficient, especially when you're on a mobile device and have 16GB of RAM or less to play with.

Disappointing.


I have built my desktop with top-end components: ASRock X570M Pro4, ASUS ROG Radeon RX 6800, AMD Ryzen 7 5800X, WD Black NVMe, and I have no issues at all with drivers or anything (I also use Gentoo with a self-configured kernel, no genkernel). It's my daily driver for working and 4K gaming (with three 4K monitors).

Linux has no particular issue with drivers and/or video cards; people just need to pay some attention when buying components. Saying that Linux has issues with graphics because of Nvidia is a bit unfair: Nouveau is not able to use acceleration because the video card doesn't activate it unless it is loaded by the official Nvidia blob. If Linux is "supposed to have issues with graphics cards" while vendors actively create obstacles, we will never see the real issue and hold the right people accountable. It's Nvidia that still has issues with Linux.


That works if you don’t care about deep learning or use any apps that need CUDA


Yeah, but why do we blame Linux if it's the official Nvidia drivers that suck, and the video card doesn't work with acceleration unless you use that driver? What is Linux supposed to do?


I don’t care who’s to blame, but I’ll just say that the proprietary drivers work just fine on servers and I imagine they’d work fine on Desktop too if distros installed them automatically. Loads of people try Linux on a PC they already have instead of buying a new one that has upstream drivers for every component. This is yet another reason why Linux desktop will never have >2% market share.


I've had a few Nvidia cards (from 9xx to 20xx) and one (very modern) AMD.

The AMD drivers run fine. They're functionally inferior to the Nvidia ones (in several respects), but I've had no big problems. I don't game on Linux, though.

I think open source makes a significant difference - with Nvidia, issues often can't be solved by software devs (e.g. LibreOffice Calc updating the screen very slowly). I suppose that with open source drivers, devs can at least get an idea of what's going on.

However, there's one dealbreaker with Nvidia - their cards are such blobs that Linux, in a default configuration (meaning an ISO with Nouveau), can't run at all with many Nvidia card series. I couldn't even boot with a GT 1030 (among others).

I actually have no idea how people install Linux on Nvidia systems, since I had to create an ad hoc ISO with the binary drivers preinstalled.


> that Linux, on a default configuration (which means, ISO with Nouveau)

To this day I cannot think of a good reason to ship Nouveau outside of "let's make users with Nvidia cards suffer". I get the idea behind the Nouveau project, but pushing an experimental driver that cannot work with 99% of the hardware it latches onto, and forcing users to disable it if they want a basic working system, is actively malicious.


I don't know the last time you used nouveau drivers but this is just wrong.

Is it feature complete? No, but to go ahead and say it doesn't work for 99% is just lying. Here is the list: https://nouveau.freedesktop.org/FeatureMatrix.html

What you can say, though, is that it is a bad idea for GPUs that are relatively recent. It won't block you from booting or displaying to the screen, but it will probably be slow.

Also, this is distro-dependent. I remember being given the choice, with no default chosen automatically, when installing Manjaro (a derivative of Arch Linux).

I chose the proprietary drivers, since they are feature complete, but later switched to the Nouveau driver, since it isn't prone to breaking on every other update. That was 2 years ago; zero problems since.


For years I'd give Nouveau a try and see if it could run Wayland. For years I'd get a black screen and have to repair things in GRUB (or, thankfully, roll back in the boot screen on NixOS).

Since it doesn't do CUDA either, Nouveau is useless to me. For my next Linux build I'll use either Intel or AMD, but you still can't economically do ML on AMD.


No, it isn't, and the malice comes entirely from circumstances foisted on the Free/Libre Open Source community by Nvidia's inherent user hostility, at the behest of corporate America and the music/content delivery industry.

It isn't that hard to write a manual. It isn't that hard to respect users' freedom to use the device they purchased. It isn't that hard to just leave well enough alone.

But no.... Nvidia just couldn't.


>user hostility at the behest of corporate America/the music/content delivery industry.

Meanwhile, the company responsible for the biggest security issues of the last decade still provides its CPU microcode as a signed binary blob, and the FLOSS community is fine with that because someone at the FSF drew a random line to stand on. I will accept the FLOSS community's hatred of NVIDIA the moment it stops bending over backward for the company that brought us Meltdown, Spectre, RowHammer (who needs consumer-grade ECC anyway), etc.


You won't hear a peep out of me in that regard, because I am of the same line of thinking.


Yes, AMD is fine. Intel works well if you get a computer with an iGPU, but the last time I looked, AMD's iGPUs were a bit more powerful. You'll want a Nvidia card for ML, unfortunately.


The AMD drivers are pretty good, at least a year after the cards are released. AMD has only just started doing the Intel thing of pushing driver support for GPUs before they are released so you often need to run bleeding edge kernel/Mesa or wait a while to get good support.


AMD, after their Islands-era cards, just works.


> It's sad to hear that graphics drivers are still not a solved problem on Linux.

Indeed, and still nothing has changed. Last time I checked, installing an Nvidia driver crashed the whole desktop, and every time you boot your Linux distro it gives you a black screen with X11 or Wayland pop-ups.

A magnificent waste of time, all that fixing of things just to do basic work. I ended up using Windows with WSL2 and waiting for the new MacBook Air. I never saw the point of dealing with anything Nvidia on Linux these days.


That's weird. I've been using an Ubuntu desktop with a GTX 1080 daily for years now, across multiple major OS versions and driver versions, with zero GPU-related issues.


Same for me: I've been using a GTX 1060 with Ubuntu LTS for over 3 years now without any issues, ever. I mostly attribute this to using Ubuntu, and the LTS version of it - it is probably one of the few desktop distros Nvidia tests/optimizes for as part of their QA.

However, during my time at uni I maintained, over the course of a couple of months, a patchset against a kernel module for my research, and I remember what a mess it was. The slightest kernel updates broke it, and even supporting just the 1-2 distros we used in our lab was very time-consuming. Even after I left, I got a couple of emails from researchers asking whether I could assist in getting it to build on newer kernels. And even though I had a fraction of the functionality NVidia provides, I absolutely understand how difficult it is to maintain a non-upstream patchset over time - so I definitely believe all the bad things I hear about Nvidia on various distros/setups are 100% true. It is just NVidia's fault for not going the AMD route and at least trying to get as much as possible upstreamed and open source.

On a side note, the only reason I went with NVidia was that at the time, due to the crypto hype, a competitive AMD card was 50-70% more expensive (in retail, not the manufacturer-suggested price). I'd definitely go AMD next time.


Desktop is OK for the most part, hybrid graphics on Laptop can still be a living nightmare, though.

Every other driver update breaks the system in unexpected ways, Wayland wasn't supported for the longest time, switching GPUs had no support at all for quite a while and still is a bad joke compared to Windows (e.g. you have to log out and log in again to switch the graphics processor unless you use 3rd party tools).


> Every other driver update breaks the system in unexpected ways

That's almost surely your distro's fault. I ran Nvidia for well over a decade on Arch with zero issues, and it's far from the only distro that properly handles Nvidia drivers.

If your distro doesn't distribute an Nvidia driver on their repos and maintains it accordingly with the kernel it ships, it is not an option you can consider if you are using Nvidia.


I’m using Windows with WSL these days. Works great


is cuda usable under wsl now?


Idk I’ve never used cuda, but worst case you can still use it on windows


This was the single biggest "driver" of my move to Mac from Linux. I had a really nice system that I built and used for work. I was working on a project with a 24-hour deadline; it was only an hour of work. I needed to update one particular dependency, but decided to update everything, and lo and behold, my video output broke after the update. I made the deadline, but spent literally all day and night trying to get everything working and done. The next day, after a long nap, I shelved what should have been a perfectly good piece of hardware and bought a Mac, after using Linux for well over a decade.


This is similarly perhaps my single biggest reason for using the NixOS distro instead of Arch. If one big update breaks everything, rolling back is as simple as choosing the older state from the boot menu, and voila. It's helped me sleep so much better with updates, both on my personal computers and my server, to know I'm not going to bring a machine down for more than a minute or two even with a failed update.
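For the record, a sketch of what that rollback looks like on a standard NixOS install (the boot-menu route needs no commands at all; this is the running-system route):

```shell
# Revert to the previous NixOS system generation after a bad update.
if command -v nixos-rebuild >/dev/null 2>&1; then
  echo "run: sudo nixos-rebuild switch --rollback   # switches back to the previous generation"
else
  echo "not a NixOS system; on NixOS, old generations also appear in the boot menu"
fi
```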


This is basically what happened to me, though twenty years ago.

Since then linux has been great for me - on the servers.


> but decided to update everything

We each make that mistake once ;)


Do you now love or loathe your Mac and its cost?


I am not dan-0. I like my Macs; I used to love them, but software quality has gone down. Weird UI decisions, mostly around size, colours (all friggen grey!) and contrast, where the older I get, the harder the default setup is to use. Their cost is really a bit of a challenge now. A Mac Mini with 16 GB RAM and a 2 TB SSD for $1600 is a bit much. And the Pro is just eye-watering.

Windows is out of the question given how they treat their customers. Example: the dialog when rejecting the upgrade to Win10 - click the X button in the dialog's upper right corner and be greeted by an immediate reboot and upgrade.

Linux: life is too short for me to handle the hassle on the desktop. Server is a no-brainer.


> No single piece of software has wasted more of my time than Nvidia's drivers

I see you haven't used printers much :)


I had to install a new inkjet printer because the kids needed to print out their homework during COVID.

I swear to god, if I ever become wealthy, the printer industry is what I intend to completely destroy. Not in it for profit. No positive-sum startup thinking whatsoever. It will be Zero Sum.

Edit: Lasers are fine. That will be left alone.


> I swear to god if I ever become wealthy the printer industry is what I intend to completely destroy.

Not the hero we deserve, but the hero we need!


Damn it feels good to be a gangsta...



Is now a bad time to put down your pom poms and pitch it on kickstarter?


Ironically, printer drivers are pretty ok on Linux.

It's just everywhere else that they are a problem.


A printer has never made my entire system unable to boot. I've experienced issues where GNOME just grey-screens and it takes me hours to diagnose and fix.


That has more to do with a screen being pretty essential to running a system than driver quality.


Or tried to install tensorflow ayoo


Eh it’s not too bad on Linux as long as you use the official docker image that has cuda, cudnn, and tensorflow with matching versions. This also lets you use tf1, tf2, and PyTorch in separate containers without upgrading cudnn in lockstep. Larger projects that use several models inevitably require all three. If you install it yourself, you’re in for a world of hurt.
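The "matching versions" point is the whole game: each TensorFlow release is only tested against one specific CUDA/cuDNN pair, which is exactly what the official images bundle for you. A minimal sketch of that constraint (the two version pairs below are from TensorFlow's published tested-build-configuration table; double-check them for your release):

```python
# Each framework build is only tested against one CUDA/cuDNN pair; the official
# docker images ship the matching pair, which is why hand-installing hurts.
TESTED = {
    "tensorflow-2.4": {"cuda": "11.0", "cudnn": "8.0"},
    "tensorflow-1.15": {"cuda": "10.0", "cudnn": "7.4"},
}

def compatible(framework: str, cuda: str, cudnn: str) -> bool:
    """True if the (cuda, cudnn) pair matches the framework's tested configuration."""
    want = TESTED.get(framework)
    return want is not None and want["cuda"] == cuda and want["cudnn"] == cudnn

print(compatible("tensorflow-2.4", "11.0", "8.0"))   # True
print(compatible("tensorflow-1.15", "11.0", "8.0"))  # False: TF1 needs the older stack
```

Running TF1 and TF2 containers side by side works precisely because each container carries its own matched stack and only the driver is shared from the host.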


NixOS has this decently solved. “Some assembly required”, but at least you can switch between versions without messing up your system.

So long as the driver doesn’t change, you can even run multiple versions simultaneously and nothing will break.


Imagine having to install a different OS to run something, what year is this?


Nix, the package manager, can at least be used on other distros.


I don't understand how anyone can be okay with:

"Here, run this piece of code. No don't bother trying to read, build or understand it yourself, we know you won't be able to do it, so we put it in a little mystery box over here that you can run on top of your existing operating system. Now please don't ask any questions and go away."


Well, that's basically all we do, isn't it? Every exe you run on Windows, every apt package you get on Linux, every Docker container or node module, etc. Just layers and layers of black boxes that nobody but the original developer knows much about.

But even if you compile it yourself, what exactly are you gaining? Extra work for something you won't look into anyway? You'd think open source stuff would be more peer-reviewed, but as a maintainer of a few open company repos, it's surprising how little anybody actually cares; they'll just run whatever.

Most of the time it works fine because the average person isn't a malicious actor, but every so often something gets compromised and it's always discovered by pure chance.


Vouched, but you should know that literally all the comments on your account are [dead].

I can read and understand tf just fine and I’ve had to work around bugs (like the broken image resampling) by doing just that.

Treating tf as a black box has nothing to do with it. The problem is that none of the ABIs involved here (tf, cudnn, distro libc, nvidia driver, etc.) are stable. The best you can hope for is to use the same driver for several containers with matched (tf, libc, cudnn), and that's worked out well for me over the last couple of years.


Netcat to address?


A better solution would be for AMD to invest in bringing their ML stacks up to date to work with PyTorch and such.


That would be nice. It really boggles the mind how thoroughly AMD has missed the boat on ML. And it really seems like they aren't on any trajectory to catch up even today. I've given up on them. Instead I'm hoping that Intel's imminent entry into the discrete GPU market does better. Nvidia is in desperate need of some competent competition in ML.


You seem to overlook the fact that NVIDIA poured immense resources into their ML software stack.

This was back in AMD's Bulldozer days (2011), when the company struggled both financially and technologically.

Meanwhile NVIDIA sponsored universities with graphics cards and had already developed their CUDA ecosystem (in 2007) when AMD was still busy with the ATI acquisition. In 2011 NVIDIA had an annual net income of about $500 million, while AMD had a net loss of $600 million at the same time and kept struggling for the following 5 years.

In other words, NVIDIA already had an existing ecosystem of professional grade H/W accelerators and S/W infrastructure, when AMD was still a CPU manufacturer without a dedicated GPU division. When AMD acquired ATI, NVIDIA was already in the process of transitioning their GPGPU stack from data centre-only products to consumer hardware.

AMD has powerful ML hardware today, but that's data centre and supercomputer only. They didn't miss the boat on ML - they were busy not drowning while NVIDIA was handing out goodies to academia.


Yeah, though there was a lot of poor management from AMD that led to them being in that situation; let's not pretend it was just one day of "oops, we have no money".

(And nor was it "all Intel's fault" either, AMD fucked up a lot in this time period, both technically and in their business decisions)

At one point AMD was set to merge with NVIDIA but the board couldn't get over the sticking point of Jensen wanting to be CEO of the resulting company. Had the board swallowed their pride and let that happen, I doubt he would have led the company down the Bulldozer garden path.

Instead AMD said no, and then way overpaid for ATI, which depleted their cash reserves and forced them to sell their foundries (which was ultimately probably the correct long-term move, but that isn't the proximate reason why they sold them - they were just out of cash at the time). Then they had no money and had to underfund their architectures, and move to "cost reduction" mechanisms in the design and implementation, and had a series of implementation problems and poor designs.

Phenom was late (so much so that AMD had to put out a stopgap dual-socket "Quad FX" system just to try and compete against Core2 Quad) and Phenom had a major bug where part of the cache system had to be turned off, tanking performance by like >25%. And by the time Phenom II came out, AMD was putting non-SMT quadcores against Nehalem. By the time Bulldozer came out, the 8150 was going against Sandy Bridge, and by the time AMD fixed the worst of Bulldozer's performance problems, the 8350 was going against Ivy Bridge, and it still sucked anyway. AMD just executed extremely poorly in this period and a large part of that is the cash shortages resulting from buying ATI.

The next decade of AMD's financial woes largely spring from that moment when the AMD board said "no" to the merger, and the chain of decisions and failures that resulted. And while consoles might have saved the company - they likely would not have been in that position in the first place had they said yes to the merger instead of emptying the bank account to buy ATI, instead of doing a merger-of-equals with NVIDIA.

Also, AMD still isn't taking the steps that they can to increase their ML marketshare. They are actively reducing the support levels of their ROCm package - amateurs can easily get into the basics with a consumer NVIDIA gaming GPU while AMD forces you to buy a $5000 enterprise card.


It's been obvious for 10+ years now that AI was not only incredibly important but also a ginormous market opportunity up for grabs. To the extent that they were cash strapped they should have reprioritized. The effort would have paid for itself many times over. And that excuse no longer holds water now that AMD's market cap rivals Intel's and they're still not investing enough or making the right technical decisions. There must be deeper problems.


They actually have already - PyTorch works straight up with ROCm, and so does Tensorflow. There is a little faffing to do to get it to work but they've made great progress in the last six months.


Except consumer GPUs don't officially support ROCm, despite consistent pressure from users for years. And there's no indication of when that situation will change.

CUDA is successful because the same software works on low-powered laptop GPUs and expensive datacenter GPUs.


Yes, it would also be nice if they implemented something like CUDA (if that's not illegal) or provided alternatives, since CUDA is used by many ML programs and forces you to buy an Nvidia card.


They have ROCm and many popular frameworks support it.

The problem is that NVIDIA has 100% of the mindshare and owns 97% of the dGPU market, so it's a self-fulfilling prophecy that proprietary tech like CUDA will stay dominant for the foreseeable future.

Hopefully Intel re-entering the dGPU market will put a dent in that and move the needle a little.


Haven't had an issue with the nvidia proprietary drivers since the 495 series, they've gotten a lot better recently, and more frequently updated, though still less configurable than their windows counterparts.

Currently on driver 510.54 and playing Elden Ring on maximum settings with a 2080 super without issue.

Though you do have to add a kernel parameter to use nvidia's DRM mode for best performance which is non-obvious. And hybrid GPU laptops are a whole other thing, I guess.


Yeah. It directly led me to buying AMD hardware this time and I'm much happier for it. AMD aren't perfect but at least their drivers are in the kernel.


Isn't there a ton of IP and patents which can't be made open source?

There should not be much of an issue for Nvidia to open source it otherwise.

Open source doesn't mean free anyway


It's not about making it 'free' in the sense of money. The term free often implies money, but in this regard it's more about the philosophy of freedom itself, not the cost of the product.

We as consumers should not be held back from using hardware the way that we intend to use it. Nvidia does not have any right to hinder us from using what we purchased how we see fit. (Provided that we are not infringing upon copyrights and/or I.P.)


yes, that's always been the primary underlying reason AFAIK, there's too much licensed code involved in their driver stack, and they don't care to spend the time re-implementing those components.


Ironically, they wouldn't need to spend the time re-implementing those components and dealing with licenses if they had just made their code open source to begin with.

The community would likely be glad to help them with anything we can, and licenses would be less of an issue. It wouldn't be the first time the open source community built alternatives to closed source software specifically for licensing reasons. MP3, for instance... not a terribly hard thing to deal with anymore, but once upon a time...

Anyways. Nvidia's true reason for doing anything is ultimately in their namesake. They want people to be envious. Part of making people envious is to keep things they want away from them.

So, yeah... that's the real reason. If they actually gave a flying fuck about the rest of us, they wouldn't be that way.


I used to cringe whenever I see a prompt to upgrade driver. But it's getting better. Version 460.39 hasn't caused any pain so far.


Just look at the version number, 460.39.


Why don't you just not buy Nvidia hardware?


Because I care about ML which AMD has been utterly failing at for 10 years running.


Same here, I got fed up with it and went AMD (or Intel) only. It just works.


I recently tried wayland with fedora 35. The driver destroyed the system. I switched back from pop os lts to the latest pop os and the performance was even better (compared to lts).

GTX 1080


seriously, nothing makes me feel more inept than wrestling with a new GPU.


Printer drivers take the cake here, though those are not brand specific.


Same, glad to know I'm not the only one dealing with this.


I can suggest a good reason why the Nvidia GPU drivers are opaque binary blobs.

Generally speaking, you don't build a multi-billion transistor chip, and manage to ship the A01 spin without doing some magic.

The Nvidia GPU i/o interface is a remarkable mechanism where the resource manager (the code that sits at the very lowest level) can patch the hardware interface to get around hardware issues and give the appearance of a chip that is working perfectly.

I'm not claiming the blobs work well or anything. I'm just saying there's a reason Nvidia does blobs.

Typically, an A01 chip will ship with about 50 bugs that are worked around in the resource manager and driver. They really do not want to tell you about these bugs -- because they are WORKED AROUND. You don't need to see the sausage made, so to speak.


I do not understand your argument.

You suggest that there are work-arounds in the drivers. And then you imply that people do not want to see them.

First of all, I am unsure if people not willing to see the work-arounds is generally true to begin with.

But more importantly, if someone does not want to see something, they can simply not look. I do not understand how that is meant to justify these drivers being binary blobs.


The argument is "It's embarrassing"

Outsiders are irrelevant - if something makes you feel embarrassed, it's embarrassing.


So you are saying Nvidia hardware is crap, and therefore we don’t want to see their drivers?

Either that means that AMD hardware is far superior, or people do actually want to see the drivers (including workarounds).


Isn't that essentially the same as what everyone does with CPUs?


GPUs ship with far more workarounds in the drivers because the programmer only has access to the hardware via drivers that can cover up the bugs, and because they’re not constrained by binary compatibility.


Yes. I think for the CPU they mostly stick those work-arounds into microcode?


Honest question: if you are nvidia, why not publish the interface specification for your device? If there are silicon bugs, publish the errata and the workaround. I've seen SOC manufacturers do similar. What's different about GPUs?


Reminds me of a story where someone joined a company, then immediately fixed a bug in the code that had been bothering them for years, sighed, and put in their two weeks' notice.


I'm assuming you meant this?

https://news.ycombinator.com/item?id=26663798

Comment says it's a joke, however.


Personally, I just switched to AMD and never had to think about GPU drivers again.

Ok, I do have to add the non-free firmware when I install Debian, once every installation. But there are no random problems all the time, no "your card is too old for your computer to keep working", no "your card is too new for your computer", basically nothing more than adding a single package.

I imagine NVidia works really well on Windows, because people keep saying good things about them, and there is absolutely no chance somebody on Linux will ever say something good.


Nvidia drivers are decent on Linux - almost all ML people use it and they're incredibly dependent on CUDA. Nvidia has a vested interest in this because they pivoted hard towards deep learning, but they're also comfortable because there's no good competition from AMD or ARM yet. TPUs are excellent but still niche, despite Google's best efforts to push them. Driver installation on Linux is mostly good I've found, just using native package managers and Conda for dev environments. Gaming also works fine and of course so does mining...

But there are odd annoyances - you get less control over device parameters in Linux. Things like setting the memory clock or undervolting (not just power factor) only seem to work in Windows.

This leak is a pain. If Nvidia refuses and they just leak the code, we're still dependent on Nvidia to approve an open source driver. It's like when hackers try and take code to other companies - AMD wouldn't touch any of this with a barge pole for fear of infringement. You're not going to get magical driver support in Debian from leaked code. Best case someone releases an illegal driver and it doesn't get immediately taken down.


The non-free firmware is a shame. There are basically no modern GPUs that work with linux-libre because of it. This issue doesn't get much attention.


The binary driver made me a big NVidia fan FWIW. It works, consistently and easily, on both Linux and FreeBSD. The unified release makes both of those feel like a first-class citizen alongside Windows. It's always been more reliable than the ATI/AMD drivers, even when the latter are notionally open-source. (I will say that Intel video drivers have also always been good).

Sure, it'd be nice for it to be open-source too, but frankly if the software is good enough that I never want to fix bugs in it then it's a lot less important whether it's open-source.


So long as you don’t mind getting wayland 6 years late.


I use FreeBSD now so it was however many years late for me already. And even when I was using Linux, the benefits seemed extremely marginal - like, sure, in theory I can see some situations where using two screens with different DPI settings would be useful, but it's rarely something I've wanted or needed in practice - when I got a laptop with a 4k screen I did try using it with an old 1080p monitor for a little while but it looked so bad in comparison that I replaced that with a 4k monitor as soon as I could.


BSD makes you a minority of a minority, so that's probably to be expected. AFAIK it offers sandboxing and a better security model, reduced screen tearing, etc. Nvidia dragging their feet has surely impacted the early adoption of the project.

It's also created pain points. I'd be using Fedora, but there's no assurance on the content within the Fedora community repos. I don't really want to place trust there, so I stick with Ubuntu for better or worse, all because of Nvidia.


Which card do you use?


I've had several over the years; I'm currently on a GTX 1060 (this machine is getting old, but I don't game anywhere near as much as I used to, especially not AAA).


I definitely support making GPUs less attractive to miners. I'd buy a card that could be remotely disabled when usage patterns matched crypto mining if it meant I could spend MSRP on a modern card.


You would allow hardware DRM in your box just to allow Nvidia to "revoke" ownership of your card when you use it "incorrectly".

Yeah I'll pass.


The parent comment makes no mention of giving Nvidia control over the GPU. You know you can have remote access features that are solely under your control, right? That's how SSH works.

You should read comments before you reply to them.


This is so painfully pedantic and obviously not what the original commenter meant. What point would a remote killswitch have to deter crypto miners if the owner of the card (the crypto miner) fully controlled it as you imply. Use your head.


> I'd buy a card that could be remotely disabled when usage patterns matched crypto mining...

What do LHR and SSH have in common? How does third-party LHR make any sense?


You may feel you own Windows.


I can see why it's attractive when the alternative is no card at all.


Short-term thinking often appears attractive in the moment. It's the long-term implications that get you, and by then it's too late.


At what point does short term become long term? Crypto mining has been an issue for people looking to obtain GPUs going back to at least 2016, which is when I built my first PC, and its only gotten worse and worse as time has gone on.


When it spills over and becomes a common practice beyond the scope of the original problem?


Allowing/supporting crypto mining is short-term thinking.

It's a bunch of wealthy people burning absurd amounts of fossil fuels solely for the purpose of making themselves wealthier. There is no benefit to society from allowing this.


> There is no benefit to society from allowing this.

Sure there is. We get a financial system that's independent of banks and governments. I don't care how much energy it consumes, that's a worthy goal.

Also, it's not even 1% of global energy consumption and even that calculation is based on rather questionable assumptions like "energy usage is proportional to bitcoin price". If you want to stop the burning of fossil fuels, what you need to do is advocate against trade with not only China but also all western countries whose luxurious lifestyles are literally dependent on such wasteful use of resources. Gotta level heavy economic sanctions against all the highly polluting developed nations until they put a stop to it.


> We get a financial system that's independent of banks and governments. I don't care how much energy it consumes, that's a worthy goal.

I see it as exactly the opposite.

We live in a society, and as a society, we have agreed upon certain rules about how money is handled. If you want to opt out of those rules, then get out of our society.


> We live in a society, and as a society, we have agreed upon certain rules about how money is handled. If you want to opt out of those rules, then get out of our society.

Even the friends of mine who work in the banking industry will tell you that the current financial system is a rigged game that serves the elite and well-connected at the expense of ordinary people.

Reasonable people can disagree about the relative benefits and harms of cryptocurrency, but trying to make the case that the current financial system reflects some sort of happy societal consensus is roughly on par with being a lobbyist for big tobacco.


I'm not opting out of anything, nor am I leaving. Those "rules" form the backbone of the financial arm of total global surveillance. I oppose them on principle, as should everyone on this site. I want to see them either repealed or completely neutralized with subversive technology such as cryptocurrency. The only problem here is the fact that so much energy is being wasted on the failure that is bitcoin. I wouldn't mind even higher amounts being poured into Monero.

"Agreed upon" rules? This stuff is imposed on us.


Only in the same sense that laws are "imposed on" criminals. You may not have personally agreed to it, but it's part of the legal framework our society rests on.


Criminals like murderers? Yeah, everyone agrees with that, it's not controversial. Total surveillance? Not everyone accepts this fundamental injustice. Their actions violate the principles countries like the USA were literally founded upon. As far as we're concerned, they're the criminals here and this technology is just self-defense against their continued abuse.


They’re not merely making themselves wealthier - they’re providing a service. How is this different from an airline or any other user of fossil fuel?


Because the service as it stands is of dubious (or negative) worth. So far its main uses are enabling illegal activity and MLM-style get rich quick schemes. For all of the hubbub about the crypto-revolution, we are over a decade in and 99% of its use falls into those two categories.


I was sympathetic to that view right up until the Canadian government froze the bank accounts of the truckers. Now I'm openly supportive of the activities the regimes don't like in crypto.


> they’re providing a service

Doubling the price of GPUs is a pretty bad service.


The alternative is not "no card" it's "don't buy a top of the line card"

Every expensive product with potential commercial use has this situation. Anything you can easily make money with has a price that reflects it. People who want stuff for limited use cannot justify those kinds of prices so they have to buy older stuff, lesser stuff or stuff that needs effort/time dumped in in order to work.


The alternative _is_ no card. Even 5 year old cards are twice the MSRP if you can get them.


Is there some practical way to delineate the two product usages that isn't arbitrary I wonder?

Maybe make mining-only cards more performant and affordable than consumer graphics cards, so that consumer cards drop in price from reduced demand?

I find it hard to believe that a use-case-specific device wouldn't be able to dethrone graphics cards as the better choice for miners at the right price, and it should be cheaper, right? No need for all that extra graphics-related componentry like display outputs etc.

I know stuff like that is already on the market, but it's usually more expensive as they are from more niche manufacturing. NVidia has the scale to make a similar device more affordable I imagine.


You would have to saturate the market with dedicated mining devices. As with bitcoin ASIC miners, the supply of dedicated mining hardware has to grow to the point where using gaming graphics cards becomes unprofitable. Otherwise miners will buy both the dedicated mining devices and the gaming devices. The problem is that Ethereum was designed to resist ASIC approaches.

Instead of locking cards down, Nvidia could manage the supply. They could sell their cards in cooperation with Steam: only people with an active Steam account would be eligible for a new card. If their performance stats don't improve after the purchase, which means they have resold their hardware, they won't be allowed to buy another card.


> If their performance stats don't improve after the purchase, which means they have resold their hardware, they won't be allowed to buy another card.

And then you have people who lose their cards, or it gets damaged, or who don't want Steam, etc. It's a big can of worms.


Not wanting Steam is the biggest issue. They could include further shops and even individual games. There also needs to be an option for people who want to do scientific computing. Having vouchers from scientific institutions would exclude everybody who is independent. The approach has to be refined a bit, but the general principle is that you can link cards to individuals.

To balance the downsides, Nvidia doesn't have to go all in on 'Steam sales', like Sony, who sells some of their PS5 to registered gamers but not all of them.

On the other hand, I don't see the problem with damaged or lost cards. Damaged cards can be treated like warranty claims: send in your damaged card and buy a new one. Lost cards on their own are a problem, but how do you lose your card? You'd have to lose your entire gaming rig, which most likely means it was stolen. So send in a copy of your police report. That way, miners will be punished for filing false reports if they make a business out of 'losing cards'.


> They could sell their cards in cooperation with Steam: Only people with an active Steam account are eligible for a new card. If their performance stats don't improve after the purchase, which means they have resold their hardware, they won't be allowed to buy another card.

I have a baby. I do anything on Steam once a month at best. Crypto miners could pretty trivially make their use cases look like they spend more time gaming than a good 30% of Steam's active user base. This is a non-starter.


There are more mining rigs than people behind them, so limiting the number of cards to the number of people would be some improvement for sure.


> Is there some practical way to delineate the two product usages that isn't arbitrary I wonder?

Highly unlikely. There might be reasonably reliable heuristics to identify gaming operations, but any signal for mining is likely to trigger for other long-process number-crunching too.

The idea puts me in mind of the other side: the police once raided a home because helicopter surveillance identified, from the heat pattern, the property as likely to have a loft-based cannabis farm. It turned out that the loft was full of equipment running coin miners. Simple checks get significant false positives.


Oh I meant specifically avoiding those kinds of arbitrary locks and checks, by delineate I meant make two separate product lines to cater for the two different markets. My hope would be that miners would buy the one better for mining, taking some pressure off the graphics cards for graphics market. Imagine a "3080 Mining" card that has no monitor outputs, no huge memory reserves, no extra-curricular processing units. It should be much cheaper.

But perhaps miners would just buy both mining and non-mining cards, since it's not a zero sum purchase, every card added to the stack is worthwhile.


Mining does have strong indicators though: mining memory access is nothing like memory access for a game or application.

Normal games and applications won't just max the whole memory bus at 100% for hours and hours. Even in a high-load situation it will still bounce around 90-100% or whatever. Mining absolutely slams the memory bus 100%, because current implementations are Proof Of Bandwidth so that's the stat you need to maximize.

Normal games or applications will go out of their way to align threads in a warp to access contiguous memory blocks ("coalesced memory access"), because that can be efficiently handled as a single request and broadcast to every thread. Mining can't do that because every thread is working in a totally different place in memory determined by the DAG. So normal applications will have lots of coalescing happening, mining will have zero.

Normal games or applications will have cache hits, because they're re-using at least some data frequently. Mining will have an (effectively) 0% cache hit rate because the DAG is sending threads to random places in memory and memory is much larger than cache.

These are simple things that can be metered with O(1) performance counters on the hardware. If the card sees a load like that, run in cripple mode, nuke the memory bus to 10% performance. Then you can also add the capability for a whitelist - the driver can attest that a certain binary has been signed by windows from the appropriate vendor, and TPM can attest that the driver and hardware hasn't been tampered with (Remote Attestation). So no need to worry about legacy code or DIY cuda code unless it happens to look like a mining application, and then you can whitelist any edge-cases.

TPM is going to be mandatory for OEM Windows 11 systems - home builders technically don't need it, but then if they happen to run some code that performs 100% random unaligned memory access you'd have to get it signed. But most code doesn't behave anything like that, so that is a real edge-case unless you are trying to write a miner.

You may be morally opposed to the above, but it's not impossible at a technical level, mining code actually looks nothing like normal code at a perfcounter level.
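The three signals described above (bus pinned at 100%, essentially no coalescing, effectively 0% cache hits) can be sketched as a toy classifier. To be clear, everything here is illustrative: the field names, thresholds, and sample numbers are made up for the sketch, not real NVIDIA performance counters.

```python
from dataclasses import dataclass

@dataclass
class PerfCounters:
    """Hypothetical snapshot of three O(1) hardware counters."""
    mem_bus_utilization: float     # fraction of cycles the memory bus is busy
    coalesced_access_ratio: float  # fraction of warp accesses that coalesce
    cache_hit_rate: float          # cache hit fraction

def looks_like_mining(c: PerfCounters) -> bool:
    # Sustained, fully random, bandwidth-bound load: bus pinned at ~100%,
    # essentially zero coalescing, effectively zero cache reuse.
    # Games bounce below full utilization and show coalescing and reuse.
    return (c.mem_bus_utilization > 0.99
            and c.coalesced_access_ratio < 0.01
            and c.cache_hit_rate < 0.01)

# Made-up sample numbers: an Ethash-style workload vs. a typical game frame.
miner = PerfCounters(0.999, 0.0, 0.005)
game = PerfCounters(0.95, 0.7, 0.4)
```

With `looks_like_mining(miner)` true and `looks_like_mining(game)` false, the whitelist/attestation machinery described above would only need to handle the rare legitimate workload that trips all three conditions.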


Just wait. GPU-mined currency is pretty much exclusive to Ethereum, which has dedicated plans to migrate away from PoW. Monero is set on PoW but uses a pseudo-random mining challenge to thwart dedicated hardware and target general-purpose CPUs instead; that's still targeting, not sure if it's what you want.


Crypto mining is just one of many factors driving availability down and prices up. If it disappeared tomorrow, you still would not be able to buy a GPU at MSRP.


Eh, you can fix that problem much more simply: Nvidia can raise MSRP.


That or they want to remove software limits on performance to mine cryptocurrency faster.


In fairness, it's hardly an outrageous view to want to remove artificial performance bottlenecks on a piece of hardware you've spent a considerable amount of money on.


Except there is nothing "fair" about this. In all fairness these crypto-bros could buy the cards specifically made for mining, but they won't because they don't have any resale value.

Only reason they are buying GPUs is that they can run it until something better comes to market and still sell it to some gamer who is actually going to use the hardware for its intended purpose.

More power to Nvidia. If I could make the decision the hashrate of any cryptomining on gaming cards would be 0. Let the crypto-bros buy the crypto cards and leave the gaming cards for people who are actually going to enjoy them.


I don't care what the use case is. I don't support artificial performance limitation on expensive hardware. It's as simple as that.


I do care when this use case is harmful.

We shouldn't need to limit hardware to specific usages, but we already see that people won't stop themselves from wasting energy and accelerating climate change if it gives them more wealth and power.

I'd also be up for limiting all military weapons from being so destructive across the globe for example. It's mind blowing how we managed to create so many harmful tools in such a small time span of our history.

But I agree with you when the use case is actually useful for humanity.


Likening crypto mining to military weapons is ridiculous. Not to mention that you, like many who take your position on crypto, seem to have painfully failed Chesterton's Fence.


There's no likening, it was just one more example in the bag of hardware used for harmful activities, which is a very big scope.

I'd love to learn that I'm wrong on the crypto one, given how widespread it is, but I haven't seen any fully convincing argument.


Woah there. I dislike the crypto culture and mindset as much as the next guy, but there are plenty of other options for preventing the use case you presented. If systems aren't closed enough already…

And as far as gaming goes, I cannot see how playing games is a more noble usage of a GPU than crypto mining.


I've noticed a lot of directed outrage from people unable to find an MSRP graphics card, which I would almost find funny if it weren't so scary to watch internet culture change. I can't speak to the motivations of grandparent or any other individual, but in other communities I've noticed a lot of moral arguments being used very selectively. It reminds me of my sociology class on the origins of the drug war: moral outrage at drug use and the harms of drugs was pushed as the reason for banning them, yet that logic was never applied consistently based on drugs' actual harm to a community.


I think I get what you are saying, but I think it’s less of a social problem, and more of a resource scarcity problem that humanity isn’t used to solving.

As much as people are aware of resources being limited on our planet, hardly anybody actively thinks it's already becoming a reality, yet it seems like extreme addiction to technology has only just started.


"I'm just gonna artificially raise price of this limited resource even though there already exists hardware specifically made for me, but after the next generation comes out I can re-sell this one to some sucker"

If you don't see anything wrong with that then that's about it. I really hope the US or China or someone is going to regulate cryptocurrencies to shit so crypto-bros can stop destroying the planet and normal people can afford GPUs again.


I am probably not that informed on who does what artificially and intentionally, but I still don't see a point in justifying gaming usage over crypto usage. As far as the destroying-the-planet argument goes, gamers could equally be limited to using specifically designed devices for optimal gaming and entertainment purposes.

Apart from the ability to check GPU health, GPU providers could easily implement some hardware "calculation counter" or design a specific solution so resale value could be more easily evaluated.


> As far as the destroying-the-planet argument goes, gamers could equally be limited to using specifically designed devices for optimal gaming and entertainment purposes.

gamers frequently are, you just described a console.


Yes, I did. Intentionally. If you forbid consumer GPUs for both sides and limit or cripple gaming tech development so it offers a lower variety of products, you are going to see lower demand for gaming in general. Thus, "saving the planet".

It's not something I propose, just a hypothetical example of what would really be fair to both sides.


>gamers could equally be limited to using specifically designed devices for optimal gaming and entertainment purposes.

We are trying, but crypto-bros are buying our cards. C'mon pay a bit more attention.


If video games are that important to you, buy a video game console. Availability there is also bunk, and that has nothing to do with mining. Claiming something which uses energy is automatically 'bad' is silly and relies on common ignorance and frustration. Cars use energy, lights use energy, video games use energy; crypto is new, and you don't understand it past the fact that it uses energy, so you can't tolerate it. If you don't want to get rid of everything which uses considerable energy, then you need to determine the value of each thing individually, and if you say crypto is worthless, you should have a non-circular reason for that claim. Clearly the market disagrees with you.


>If video games are that important to you buy a video game console.

And play PC games how? For some reason I thought people around here were smart.


Trust me, I'd take your problem seriously if only those cards would speed up growing up for you, guys.


I hope you don't have hobbies, since apparently hobbies are now childish.

Maybe you should grow up a bit :)


So basically you are surprised that people want to buy a higher-value, better product instead of a lower-value one?

Also, for "normal people" there are plenty of 1 GB and 2 GB video cards at reasonable prices, unusable for mining.


Not surprised (why would that surprise me?), just saying that this throttling is justifiable, and I would go even further.


You aren't entitled to cheaper video cards just because you don't like what other people are doing with them


Conversely miners aren't entitled to cheaper video cards just because they built a business around consumer hardware.

Companies seeking to maximize their revenue is very much the nature of the free market. It's their business, you don't get to tell them how to run it. You're a customer in a market, if you don't find the product acceptable then you can go elsewhere. There are competitors offering products as well, and as a whole this determines a market rate.

If your business is no longer profitable (or profitable enough) paying the market rate, then you go out of business. That is also how the market works. Many, many businesses would be far more profitable if they could force their suppliers to cut their revenue streams.

Miners have responded to the shift in the power dynamic by throwing a tantrum and attacking and blackmailing their suppliers. Bioshock nailed it: laissez-faire is great when you're the one on top, but as soon as someone else out-competes you, or exerts their own market leverage, it's an unfair and ridiculous imposition on your own right to profit, and it's time to shout and flip the game board.

This is exactly what you see with the whole "gamers aren't entitled to cheap cards" thing you said, that was great when miners had more market power than gamers, but everyone leaves off the whole "and miners aren't entitled to cheap cards either", which is equally true. Suppliers are taking note of that market power and moving to take a cut of the revenue for themselves. Customers are free to re-shuffle to new suppliers if they no longer find the terms acceptable. And that's how the free market works.

As always - businesses that are not agile enough to adapt, will "exit the market", and create room for newer, healthier businesses.

And remember, this has been status quo for a long time. If your business depended on CAD, it probably sucked when ATI and NVIDIA started releasing workstation products and artificially limiting CAD performance on gaming cards. The world moved on though.


> Miners have responded to the shift in the power dynamic by throwing a tantrum and attacking and blackmailing their suppliers. Bioshock nailed it: laissez-faire is great when you're the one on top, but as soon as someone else out-competes you, or exerts their own market leverage, it's an unfair and ridiculous imposition on your own right to profit, and it's time to shout and flip the game board.

#notAllMiners

All you're saying is that businesses should sell to whoever makes them the most profit - so how does that explain Nvidia cutting value and lowering the price of their cards to sell to gamers rather than miners?


Seeing as Nvidia agrees with me, I think you are wrong. In any case, sales have started to drop, so if Nvidia doesn't do anything after the crypto fad ends, they'll be out of business, since people aren't buying their GPUs anymore.


So you believe that Nvidia needs crypto to survive but knowingly and intentionally hampers their cards' mining performance? You seem confused.


Not what I said at all. Re-read.


wrong comment


> And as far as the gaming goes, I cannot see how playing games is more noble usage of gpu than crypto mining.

As usual the gaming crowd is utterly lacking in self-awareness.

"Hey, those crypto guys need to stop doing dumb things with GPUs so me and my friends can use that fancy hardware to waste hours of our lives pretending to be soldiers and race-car drivers on our computers in the basement"

Say what you will about the crypto crowd, at least they know people think they're ridiculous.


That's how the market works though. Your AMD 290X had artificial performance bottlenecks created (gimped FP64, driver performance limiter for enterprise software, etc) so that AMD could sell more Radeon WX cards. Your AMD APU has ECC artificially disabled so that AMD can sell more Ryzen Pro APUs. Your Epyc has its overclocking controls artificially disabled. etc etc. Those are accepted and normal practices to determine "what you can do with the hardware you paid for" in the industry.

Miners are just mad that they got a free ride for a lot of years and are now being shifted to their own segment to try and control the infinite demand they tend to periodically create. But they are a money-making asset in a business, nobody cried a tear when AMD and NVIDIA forced Raytheon to pay a premium to buy Quadros to run their CAD software at the full performance levels the hardware is capable of.

Also, generally speaking this arrangement is beneficial for consumers: if you outlawed artificial segmentation tomorrow, companies aren't going to hugely lower their prices and give up all that revenue from the enterprise market, they are going to raise prices in the consumer market. The alternative to gimped Celeron chips isn't that you get Xeon capability for Celeron prices, it's that you pay much closer to Xeon prices for your Celeron. Which is like, several times as much.

That R&D has to be paid for somewhere, and margins are not all that huge considering the total lifecycle (the chips are cheap once you make them, but the R&D for the first chip costs billions). You can't just give up 80% of your enterprise revenue and make a go of it. If we really did have Xeon for Celeron prices, the alternative would be much longer product cycles and other belt-tightening in the R&D department. The beige box market has a huge business component too, and they won't have any qualms about an extra $200 on every CPU if that's what it costs. It's just gonna suck personally for you as a consumer.

Consumers are the “price sensitive” market that benefits from price discrimination, in this instance, and product segmentation is how you allow that. Take that away and those price-sensitive markets are the ones that will pay more, because business pricing is very inelastic and quantities are very large. They don’t care about you buying one celeron every 5 years as much as the business who buys 1000 office desktops every 3 years.

https://en.m.wikipedia.org/wiki/Price_discrimination
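The arithmetic behind this can be sketched with a toy example (all demand figures below are made up for illustration, not real market data). With two segments, a uniform price either prices out the price-sensitive consumers or forgoes most of the enterprise revenue, while segmentation captures both:

```python
# Toy model of price discrimination across two buyer segments.
# Numbers are hypothetical, chosen only to show the mechanism.

segments = {
    # segment: (units demanded, max price each buyer will pay)
    "consumers":  (1000, 300),   # price-sensitive gamers
    "businesses": (200, 2000),   # inelastic enterprise buyers
}

def revenue_uniform(price):
    """Revenue at a single price for everyone: a segment buys
    only if the price is at or below its willingness to pay."""
    return sum(units * price
               for units, max_price in segments.values()
               if price <= max_price)

def revenue_segmented():
    """Revenue if each segment is charged its own max price,
    which is what product segmentation approximates."""
    return sum(units * max_price
               for units, max_price in segments.values())

# The best single price is one of the segments' willingness-to-pay levels.
best_uniform = max(revenue_uniform(p) for _, p in segments.values())
print(best_uniform)         # 400000: charge $2000, consumers priced out
print(revenue_segmented())  # 700000: both segments served
```

Note that in this toy model the revenue-maximizing uniform price is the enterprise price, which excludes the consumer segment entirely; that is the sense in which price-sensitive buyers are the ones who benefit from segmentation.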


There should be no software limits in the first place.


Wouldn't that just end up increasing the difficulty of mining in the long term, since it would all be open source? I feel like any advantage they get by patching the software would be short-lived, with many people following suit, but I could 100% be wrong about that.


Bingo, that's what this is.


That cybercriminal’s name? Uhhh Tobias Lorvalds…



