Gaben, is that you? Where's Half-Life 3?
duh
Yes, self-host the most essential services: mail, messengers, web search, a Piped frontend, a VPN, plus things like Gitea/Forgejo and Jellyfin. Web 3.0 will be a federated network
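To give an idea of how low the barrier is, here's a sketch of spinning up two of those (Forgejo and Jellyfin) as containers. The image names are the official ones; the ports and volume paths are just example values, and it prints the commands instead of running them unless you set DRY_RUN=0:

```shell
# Dry-run by default so nothing is pulled until you opt in.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# Forgejo (the Gitea fork) on port 3000, data in a named volume
run docker run -d --name forgejo -p 3000:3000 -v forgejo-data:/data codeberg.org/forgejo/forgejo:9

# Jellyfin on port 8096, media mounted read-only (example path)
run docker run -d --name jellyfin -p 8096:8096 -v /srv/media:/media:ro jellyfin/jellyfin
```

Real setups usually move this into a compose file with a reverse proxy in front, but the moving parts are the same.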
Those Nvidia cards used in mining and AI need regular reballs. The PS4 south bridge often falls off the board too, Intel sockets sometimes need a reball as well, and you can even upgrade the RAM on your phone or Nintendo Switch
Reballing chips on the GPUs and motherboards of PCs and gaming consoles
I bought my soldering station, with a hot-air gun and an iron, for about $40 from AliExpress; the ones with an IR bottom heater cost around $90-100
Use adb. I've seen a Canadian turn off all the alarms using adb
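I can't vouch for the exact trick they used, but one plausible route is finding and disabling the vendor's clock app over adb. The `pm` commands are real Android package manager commands; the package name varies by vendor, so the grep and the commented-out disable line are guesses:

```shell
# Sketch: locate the clock/alarm app via adb (needs a connected device
# with USB debugging enabled). Guarded so it degrades gracefully.
if command -v adb >/dev/null 2>&1; then
    adb shell pm list packages | grep -i clock || echo "no device or no clock package found"
    # then disable it for the current user (package name is a guess;
    # undo with 'pm enable'):
    # adb shell pm disable-user --user 0 com.android.deskclock
else
    echo "adb not installed here"
fi
```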
Everyone thought a 1 TB+ microSD was impossible, but here we are
Steam deck modders rejoice
*1440p upscaled from 720p
Idk, I specifically plug it into the motherboard since I use cheap used GPUs that can break easily. An example is the Nvidia P106, which doesn't even have a video output, and it's easier to flip DRI_PRIME from 1 to 0 than to redo the cables
DRI_PRIME=1 goes brrrr
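For anyone who hasn't played with it: DRI_PRIME is Mesa's PRIME offload switch, and you can see which GPU each value picks with glxinfo (from mesa-utils). Just a sketch, guarded so it still prints something on machines without glxinfo or a second GPU:

```shell
# Print the OpenGL renderer string for a given DRI_PRIME value.
renderer() {
    DRI_PRIME="$1" glxinfo -B 2>/dev/null | grep "OpenGL renderer" \
        || echo "DRI_PRIME=$1: glxinfo not available"
}

renderer 0   # primary GPU
renderer 1   # offload GPU (the P106 in my case)
```

Same idea for launching things: `DRI_PRIME=1 yourgame` renders on the secondary card.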
It depends. If you want to run LLMs, data center GPUs are better; if you want general-purpose tasks, newer silicon is better. In my case I prefer a build that offloads tasks, since I'm daily driving Linux. My dream build: an AMD RX 7600 XT 16 GB as the main GPU, an Nvidia P40 for LLMs, and a Ryzen 8700G whose 780M iGPU handles transcoding and light tasks. That way you have your usual gaming home PC that also serves as a server in the background while being used
I looked it up and the RTX 3070 is listed with NVLink capabilities, though I wonder if all of them have it, so you can pair them if yours actually has the connector
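Easiest way to settle it is to ask the card itself; `nvidia-smi nvlink` is a real subcommand of the proprietary driver's tool, guarded here for machines without it:

```shell
# Report NVLink status for the installed Nvidia GPU(s), if any.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi nvlink --status || echo "driver present but no NVLink status reported"
else
    echo "nvidia-smi not available"
fi
```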
Ryzen 3700? Or rtx 3070? Please elaborate
Try the Ryzen 8700G's integrated GPU for transcoding, since it supports AV1, and those P-series GPUs for LLMs/Stable Diffusion; that would be a good mix, I think. Or if you don't have the budget for a new build, buy an Intel Arc A380 for transcoding. You can attach it like a mining GPU through a PCIe riser; Linus Tech Tips tested that GPU for transcoding, as I remember
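For reference, an AV1 hardware transcode through VAAPI looks roughly like this on both RDNA3 iGPUs (like the 780M) and Arc cards. The ffmpeg flags are real; the device path and filenames are placeholders, and it's guarded so it only runs when everything is present:

```shell
# Sketch: hardware-accelerated AV1 transcode via VAAPI.
IN=input.mkv OUT=output.mkv DEV=/dev/dri/renderD128
if command -v ffmpeg >/dev/null 2>&1 && [ -e "$DEV" ] && [ -f "$IN" ]; then
    ffmpeg -hwaccel vaapi -hwaccel_device "$DEV" -hwaccel_output_format vaapi \
        -i "$IN" -c:v av1_vaapi -c:a copy "$OUT"
else
    echo "ffmpeg, $DEV, or $IN not available, skipping"
fi
```

On an Arc card you could also go through Quick Sync (`av1_qsv`) instead of VAAPI.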
How about cyberpunk?
Search for the Nvidia P40 24 GB on eBay: about $200 each and surprisingly good for self-hosted LLMs. If you plan to build an array of GPUs, search for the P100 16 GB instead. Same price, but unlike the P40, the P100 supports NVLink, and its 16 GB is HBM2 on a 4096-bit bus, so it's still competitive in the LLM field. The P40's selling point is the amount of memory for the money, but its GDDR5 is rather slow compared to the P100, and it doesn't support NVLink
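To show what running on these cards actually looks like, here's a llama.cpp sketch. `-ngl` (GPU layers) and `--tensor-split` are real llama.cpp flags; the model path is a placeholder, and the whole thing is guarded so it skips cleanly when the binary or model isn't there:

```shell
# Sketch: llama.cpp inference on one or more GPUs.
MODEL=model.gguf
if command -v llama-cli >/dev/null 2>&1 && [ -f "$MODEL" ]; then
    # single P40: push all layers onto the GPU
    llama-cli -m "$MODEL" -ngl 99 -p "Hello"
    # two P100s: split the tensors evenly across both cards
    # llama-cli -m "$MODEL" -ngl 99 --tensor-split 1,1
else
    echo "llama-cli or $MODEL not available, skipping"
fi
```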
I think AMD is worth its money though. They scale their prices to match their Nvidia counterparts performance-wise; I mean, the 7900 XTX costs about as much as a 4080 and performs as such, but has more memory and is a better match for Linux gaming
Russia did it recently