

You can cycle the smaller drives to cold backup; that’s not a waste. You do have backups, right? RAID is not a backup.
Sure, it works fine for inference with tensor parallelism; USB4 / Thunderbolt 4/5 is a better bet than Ethernet (40 Gbit+ and already there; see distributed-llama). Trash for training / fine-tuning, though; that needs much higher inter-GPU bandwidth, or better, a single GPU with more VRAM.
Seems like data integrity is your highest priority, and you’re doing pretty well; the next step is keeping a copy offsite. It’s the 3-2-1 backup strategy: 3 copies, 2 media (which used to mean CDs etc., but now think offline drives), 1 offsite (in case of fire, meteor strike, etc.). So look to that, and stash a copy at a friend’s or something.
In your case I’d look at getting some online storage to fill the offsite role while you’re overseas (paid, probably, but a year of 1 or 2 TB is quite reasonable), leaving you with no pressure on the self-hosting side. Just tailscale in, muck around and have fun, and if something breaks, no harm done, data safe.
I’ve done it for what seems like forever and I’d still be worried about leaving a system out of physical control for any extended period of time. At the very least, having someone to reboot it if connectivity or power fails will be invaluable. But talking them through a broken update is another thing entirely, and you shouldn’t make that a critical necessity; too much stress.
I run a gluetun Docker container (actually two, one local and one through Singapore) client-side, which is generally regarded as pretty damn bulletproof kill-switch-wise. The arr stack etc. uses this network exclusively. This means I can use FoxyProxy to switch my browser up on the fly, bind things to tun0/tun1, etc., and still have direct connections as needed. It’s pretty slick.
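For anyone wanting to try the same pattern, here’s a minimal sketch. The provider, key, image names, and port are all examples, not my actual setup; gluetun supports many VPN providers, so adjust the environment variables for yours:

```shell
# Sketch only: provider/credentials/images below are placeholders.
# Start gluetun; it needs NET_ADMIN to manage the tunnel interface.
docker run -d --name gluetun --cap-add NET_ADMIN \
  -e VPN_SERVICE_PROVIDER=mullvad \
  -e VPN_TYPE=wireguard \
  -e WIREGUARD_PRIVATE_KEY=your_key_here \
  -p 8080:8080 \
  qmcgaw/gluetun

# Attach another container to gluetun's network namespace.
# Its traffic can ONLY leave via the tunnel: if the VPN drops, so does it.
docker run -d --name qbittorrent \
  --network container:gluetun \
  lscr.io/linuxserver/qbittorrent
```

In compose, the equivalent is `network_mode: "service:gluetun"` on the dependent service; note that any ports for the dependent apps get published on the gluetun container, since they share its network stack.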
Well, that’s all kinds of wrong.
The old adage is never to use v x.0 of anything, which I’d expect to go double for data integrity. Is there any particular reason ZFS gets a pass here (speaking as someone who really wants this feature)? TrueNAS isn’t merging it for a couple of months yet, I believe.
Yup (although minutes seems long; depending on usage, weekly might be fine). You can also combine it with updates, which require going down anyway.
Basically, you want to shut down the database before backing up. Otherwise, your backup might be mid-transaction, i.e. broken. If it’s Docker you can just docker-compose down it, back up, and then docker-compose up, or equivalent.
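Something like this, as a rough sketch; the paths and project name are made up, and it assumes the compose v2 CLI:

```shell
#!/bin/sh
# Backup sketch: quiesce the stack, archive the data, bring it back up.
set -eu
STACK_DIR=/opt/mystack          # assumed compose project directory
BACKUP_DIR=/mnt/backups        # assumed backup target

cd "$STACK_DIR"
docker compose down             # stop containers so no transaction is in flight
tar czf "$BACKUP_DIR/mystack-$(date +%F).tar.gz" ./data
docker compose up -d
```

Drop that in cron (or a systemd timer) for whatever cadence suits, and since the stack is down anyway it’s a natural window to pull image updates too.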
I could care less, but then I wouldn’t care at all…
Hanlon’s razor: Never attribute to malice that which is adequately explained by stupidity (or incompetence).
This works for individuals, but when it comes to corporations, you really have to ask, why not both?
Just use multiple database files (e.g. one for unimportant, one for important) and automate the syncing with Syncthing or something, so the laziness doesn’t matter…
Clear as mud. (I actually dimly get it, I’m a dev, but mere mortals will be clueless and move on). Farcaster is right, you need to define terms and give examples of actually getting this up and running, you’ve got way too much internal context that you’re not making explicit. Not an attack, trying to help, project sounds cool.
FreshRSS for those playing along at home…
I get it, but I contend my suggestion would allow exactly that, without relying on the opinion of some internet rando. YMMV.
Find a list of books you like, find an entry that interests you, go to anna’s archive. Why overcomplicate it?
Who knows? Which is the same as no.
Leveraging the tendency of everyone frustrated with shit search results due to SEO to slap site:reddit.com on the end of queries, and enshittifying it (because any AI they train on it will mangle the shit out of it; seriously, Reddit alone is not enough data). Way to dilute your brand for a headline. Morons.
Sure is, use a VPN obviously.
I just use the Save to Zotero extension in Firefox and back up the directory; works fine. Maybe you’re overthinking?
Perhaps not saved, but I’d venture the most significant nail in the coffin of the scientific publishing mafia so far, pursued with integrity and honor. The rise of open publishing that followed is very telling, and in my mind directly attributable to Alexandra’s work and its popularity; they know they need to adapt or (probably and) die.
Still need to work on the publish or perish mentality, getting negative results published, and getting corporate propaganda out of the mix, to name a few.