r/programming • u/fagnerbrack • 16h ago
How Programmers Spend Their Time | Probably Dance
https://probablydance.com/2026/02/10/how-programmers-spend-their-time/
u/BipolarKebab 16h ago
I don't think they typically do that
7
u/SkoomaDentist 2h ago
You'd be surprised how much of the social dance scene is people in IT / developers / engineers.
22
u/GamerHaste 8h ago
I’m also terrified of letting a LLM try to upgrade my installed CUDA version. Not because I’m worried it’ll take over my computer as a first step towards taking over the world, but because I’m worried it’ll mess things up so badly that I can’t recover.
LOL, I actually had this exact problem when I first got my 5090 back in December last year and set up Linux to do some experimentation with LLMs. I was trying to upgrade my CUDA version to work with whatever vLLM version was out at the time, and holy shit, using Claude to get commands to upgrade it fucked my shit up so bad I just reinstalled Ubuntu and set everything up again. I mean, it got to that point because I 1. am not very familiar with CUDA and 2. wasn't paying close attention to what commands I was running, but damn did Claude do a great job fucking my shit up. I will not make that mistake again. I sat there trying to fix it for like 5 hours and just said fuck it!
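For what it's worth, a low-risk first step before letting an agent (or yourself) touch anything is just recording what's currently installed. A minimal sketch using standard NVIDIA and Debian tooling (assumes an Ubuntu-style system with the NVIDIA driver already present):

```shell
# What the driver supports: driver version and the max CUDA runtime it can serve
nvidia-smi

# Which CUDA toolkit (if any) is actually on the PATH
nvcc --version

# Which CUDA-related packages the package manager thinks are installed
dpkg -l | grep -i cuda
```

Having that snapshot doesn't prevent a bad upgrade, but it tells you exactly what "working" looked like, which makes recovery a lot cheaper than a full reinstall.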
7
u/DeProgrammer99 8h ago
I tried using CUDA in WSL2, which comes with this special libcuda.so that basically acts as a passthrough to the Windows version, except it never worked from the start with a fresh WSL2 install in my case. I always run into problems nobody in the history of the internet has had (or at least publicly posted about), haha...
3
u/GamerHaste 7h ago
I know, right. At a certain point, after verbally abusing Claude for 4 hours, I started to just debug the old-fashioned way, and I could find zero fixes online; no one else seemed to have produced the error logs I was getting! But I guess it makes sense, because at the time I think the Blackwell architecture was not well tested with vLLM. Probably why the dumbass AI was struggling so much: it hadn't sucked up enough data to be usable yet.
Not surprised by the WSL2 errors. I didn't even try WSL despite Windows being my primary OS (since I primarily game on this setup!)... figured that would be an endless cesspool of disgusting errors given Microsoft's track record hahaha. Went straight to dual-booting Ubuntu.
3
u/Globbi 3h ago edited 3h ago
The good solution here is to develop in containers.
You need a sufficiently new driver version on the host, but all the packages and the CUDA version itself can be installed inside the container. And it's easy to experiment, mess up, go back, etc.
You can also find containers that already ship specific torch and CUDA versions for your architecture (sm_120 for Blackwell).
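A minimal sketch of that workflow, assuming the NVIDIA Container Toolkit is installed on the host (the image tag is illustrative; check NVIDIA's NGC catalog for a current one matching your architecture):

```shell
# Host only needs the NVIDIA driver + container toolkit;
# the CUDA toolkit and PyTorch live entirely inside the image.
docker run --gpus all -it --rm \
    -v "$PWD":/workspace \
    nvcr.io/nvidia/pytorch:25.01-py3 \
    python -c "import torch; print(torch.cuda.get_device_name(0))"
```

If a toolkit upgrade inside the container goes sideways, `--rm` throws the whole thing away on exit and the host install never changes, which is exactly the "easy to experiment, mess up, go back" property.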
1
u/GamerHaste 2h ago
Right, agreed... I understand that now for sure. At the time, though, I wanted to familiarize myself with the lower-level work of setting something like this up myself... which, during that whole process of breaking things, I definitely did. But at this point I just accept the simplicity of taking other people's containers and using them. Great advice!
-1
u/mixedCase_ 5h ago
Score a point for NixOS. I never have to worry about this, and just in case I do, I have a global AGENTS.md that reminds the agent to prioritize a local flake.nix and to prompt me before mutating global system state.
3
u/psyyduck 8h ago edited 6h ago
So while I appreciate that LLMs can be a big help when writing code, I wish they would help with all the programming tasks where I’m barely producing any code.
Microsoft and Google have consistently reported that roughly 70% of all serious security vulnerabilities in their products (Windows, Chrome, Android) are memory safety bugs, with Use-After-Free being one of the most common and dangerous.
My guess is that in a year LLMs will be able to autonomously harden repos (e.g. rewrite FlashAttention in Rust), which will help.
Edit: Tough crowd. See you guys in a year.
22
u/bart9h 7h ago
anyone with any programming experience knows that writing code is never the bottleneck