There’s a transaction fee; the higher you pay, the more priority your transaction gets (since miners take a cut).
It’s not complicated until your sender reputation drops, which can happen for a multitude of reasons, many of them not even directly your fault.
Bad-acting neighboring IPs, too many automated emails sent out while you were testing, a compromised account, or pretty much any number of other things means everyone on your domain is hosed. And email is critical.
Not in this one; iirc they actually reverse engineered Apple’s libraries and were working off of those directly, rather than going through proxies.
In which case the -a isn’t needed.
Better not have created any new files tho - git commit -a doesn’t catch untracked files without an add first.
As a Linux user (and ex-Arch user btw), I’m deeply offended.
It looks like Blender’s website lists 6 entities, and one of them does seem to be an individual fwiw. Here’s his website: https://aras-p.info/.
The rest all seem to be corporations though - Meta, AWS, some game company I’ve never heard of, AMD, and Epic.
I just checked their financial report for 2022, and it looks like 50% came from patron funding (which looks to be entirely companies like Google), 5% from Epic’s grant, and 10% from corporate membership. 20% came from individuals, and the rest from miscellaneous other things like the Blender Market. If you search for the Blender Foundation annual report 2022, the finance breakdown is near the end of the slides.
The Wikimedia Foundation is; none of the other things I listed are.
I think the key there is funding from big companies. There are tons of standards and the like in which big companies take part - both in terms of code and financial support. Big projects like the Rust compiler, the Linux kernel, Blender, etc. all seem to have a lot of code and money coming in from big companies. Sadly there’s only so much you can get from individuals - pretty much the only success story I know of is the Wikimedia Foundation.
The point is to minimize privilege to the least possible - not to make it impossible to create higher-privileged containers. If a container doesn’t need direct raw hardware access, to manage low ports on the host network, etc., then why should I give it root and let it be able to do those things? Mapping it to a user, controlling what resources it has access to, and restricting its capabilities means that in the event my container gets compromised, my entire host isn’t necessarily screwed.
We’re not saying “sudo shouldn’t be able to run as root”, but rather “by default, things shouldn’t be run with sudo - and you need a compelling reason when you do swap over”.
In this context it actually means that you can take the source code and get the exact same binary artifact as another build. That lets you verify (or have someone else verify) that a released binary was actually built from the source code it claims to be, by comparing their hashes. You can “reproduce” a bit-for-bit copy of the released binaries.
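As a rough sketch of what that verification looks like (the file names and artifacts here are made-up placeholders, not from any real project):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large binaries don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare your local rebuild against the published release artifact.
local = sha256_of("my-local-build.bin")         # built from source yourself
released = sha256_of("downloaded-release.bin")  # fetched from the project
print("bit-for-bit reproducible" if local == released else "binaries differ")
```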
Yeah. Their docs explain why they do it, but iirc the reasoning is that Kanidm is a security-critical resource, and it aims to not allow any kind of insecure configuration - even on the local network. All traffic to and from Kanidm should be encrypted with TLS. I think they let you use self-signed certs though?
Kanidm doesn’t require a CA; it just requires a cert for serving HTTPS (and it enforces HTTPS - it refuses to even serve over HTTP). I think that was just the OP not quite understanding the concepts at play.
Kanidm wants direct access to the Let’s Encrypt cert. It refuses to even serve over HTTP or put any traffic over it, since that could allow potentially bad configurations. It’s really stringent and opinionated about security.
The last bit isn’t strictly true - there are ways to trace such tasks by generating an ID per task / request / whatever and attaching it to each message, letting you group messages together even in a concurrent environment. You can’t just blindly print, but there are libraries and the like to help you do it.
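For example, here’s a minimal Python sketch of the idea using contextvars, where each concurrent task tags its own log lines with a generated ID (the names are illustrative, not from any particular library):

```python
import asyncio
import contextvars
import logging
import uuid

# Each asyncio task gets its own copy of the context, so setting this
# inside one task doesn't leak into the others.
request_id = contextvars.ContextVar("request_id", default="-")

logging.basicConfig(format="%(message)s", level=logging.INFO)

def log(msg: str) -> None:
    # Prefix every message with the current task's ID so interleaved
    # output can still be grouped per request afterwards.
    logging.info("[%s] %s", request_id.get(), msg)

async def handle(name: str) -> None:
    request_id.set(uuid.uuid4().hex[:8])  # generate an ID for this task
    log(f"start {name}")
    await asyncio.sleep(0.01)             # other tasks interleave here
    log(f"done {name}")

async def main() -> None:
    await asyncio.gather(*(handle(f"req{i}") for i in range(3)))

asyncio.run(main())
```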
Right, but when there are third parties involved whom you may not trust (which is almost always the case when talking to users not on your server), e2e’s benefit starts looking a lot more enticing. And while you have a point about out-of-band key sharing being annoying, it makes sense as a default - especially when content is going across servers. Content should be secure with an opt-out rather than insecure with an opt-in. The latter is just more error-prone.
Also: while it’s not friction-free, apps like Signal have shown that verified e2e can be usable for the general population.
I don’t think the SATA acronym is right…
The point of federation is that your content doesn’t only stay on your server. The person you’re talking to can be on a different one, and their admin can see your messages too. Also, I wouldn’t want to be able to access any user’s content - it’s a “no trust needed” thing.
For context for other readers: this is referring to NAT64. NAT64 maps the entire IPv4 address space into an IPv6 subnet (typically 64:ff9b::/96). The router (which has an IPv4 address) strips the IPv6 prefix and does a normal IPv4 NAT from there, then forwards the response back over v6.
This lets IPv6 hosts reach the IPv4 internet, and lets you run v6-only internally (unlike dual stack, which requires every host to have both v4 and v6).
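As a quick illustration of the address mapping itself (a sketch using Python’s ipaddress module and the 64:ff9b::/96 well-known prefix from RFC 6052; real NAT64 also does the stateful translation, which this skips):

```python
import ipaddress

# NAT64 well-known prefix (RFC 6052): the IPv4 address is embedded in
# the low 32 bits of 64:ff9b::/96.
PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def to_nat64(v4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address into the NAT64 prefix."""
    return ipaddress.IPv6Address(
        int(PREFIX.network_address) + int(ipaddress.IPv4Address(v4))
    )

def from_nat64(v6: str) -> ipaddress.IPv4Address:
    """Recover the embedded IPv4 address from the low 32 bits."""
    return ipaddress.IPv4Address(int(ipaddress.IPv6Address(v6)) & 0xFFFFFFFF)

print(to_nat64("192.0.2.1"))            # 64:ff9b::c000:201
print(from_nat64("64:ff9b::c000:201"))  # 192.0.2.1
```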