

Swede here.
Some American candy, mostly bad chocolate
Cryptography nerd
Fediverse accounts:
[email protected] (main)
[email protected]
[email protected]
Lemmy moderation account: @[email protected] - [email protected]
Bluesky: natanael.bsky.social
Same thing with early studies on prime numbers
The judge explicitly did not allow piracy here. Only legally acquired media can be used for training.
This case didn’t cover the copyright status of outputs. The ruling so far is just about the process of training itself.
IMHO the generative ML companies should be required to build a process for tracking the influence of distinct samples on the outputs, and to inform users of the potential licensing status of what they generate
Division of liability / licensing responsibility should depend on who contributes what to the prompt / generation. The less it takes for the user to trigger the model to generate an output clearly derived from a protected work, the more liability lies on the model operator. If the user couldn’t have known, they shouldn’t be liable. If the user deliberately used jailbreaks, etc, the user is clearly liable.
You get a weird edge case when users unknowingly copy prompts containing jailbreaks, though
The ruling explicitly does not allow pirating. It only lets you run ML training on legally acquired media.
They still haven’t ruled on copyright infringement from pirating the media used for training, and they haven’t ruled on the copyright status of outputs (what it takes to be considered transformative).
This is Judge Alsup, the same guy who ruled in Oracle v. Google
I run a cryptography forum
Encryption doesn’t hide data sizes unless you take extra steps
It’s called traffic analysis
Timing of messages. They can’t tell what you send, but can tell when
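To make the size leak concrete, here’s a quick Python sketch (assumes the third-party `cryptography` package is installed). AES-GCM ciphertext is exactly the plaintext length plus a fixed 16-byte tag, so an eavesdropper can’t read your messages but can still tell how long each one is:

```python
# Ciphertext length reveals plaintext length unless you pad.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

for msg in [b"yes", b"no", b"meet me at the usual place at noon"]:
    nonce = os.urandom(12)  # 96-bit nonce, unique per message
    ct = aesgcm.encrypt(nonce, msg, None)
    # AES-GCM adds only a fixed 16-byte authentication tag,
    # so len(ct) - 16 == len(msg): the size leaks straight through.
    print(len(msg), len(ct))
```

Hiding sizes takes extra steps like padding every message up to a fixed bucket size before encrypting.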
You could tie it to requiring access to a digital ID (with password / PIN protection, etc), but yes kids could still “borrow” it
What you want is cryptographic Zero-knowledge proofs, not regular encryption. See anonymous credentials protocols.
And it does require every verifying entity to trust the issuer (each user could collect attestations from multiple issuers, to prove different things to different verifiers)
Another issue is the risk of deanonymization by verifiers simply asking for more proof of many different properties, until you can be identified anyway
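If you want the flavor of how a zero-knowledge proof works, here’s a toy non-interactive Schnorr proof in Python. This is my own illustration with deliberately insecure parameters, and it’s not an anonymous credentials scheme (those build on primitives like this but are far more involved); it just shows a prover convincing a verifier that they know a secret exponent x without revealing it:

```python
# Toy Schnorr proof of knowledge, made non-interactive via Fiat-Shamir.
# Statement: "I know x such that y = g^x mod p". NOT secure parameters.
import hashlib
import secrets

p = 2**127 - 1  # toy Mersenne prime modulus (real systems use ~2048-bit groups or curves)
g = 3           # toy base element

def H(*parts) -> int:
    h = hashlib.sha256()
    for part in parts:
        h.update(str(part).encode())
    return int.from_bytes(h.digest(), "big") % p

def prove(x):
    y = pow(g, x, p)              # public statement
    r = secrets.randbelow(p - 1)  # fresh random nonce
    t = pow(g, r, p)              # commitment
    c = H(g, y, t)                # Fiat-Shamir challenge
    s = (r + c * x) % (p - 1)     # response; exponents live mod p-1
    return y, t, s

def verify(y, t, s):
    c = H(g, y, t)
    # g^s == t * y^c holds exactly when the prover knew x, and the
    # transcript can be simulated without x -- that's the zero-knowledge part.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = secrets.randbelow(p - 1)  # the secret
print(verify(*prove(x)))      # True
```

Anonymous credentials chain proofs like this together, so an issuer’s signature over your attributes can be shown to a verifier without linking your sessions to each other.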
Or they’re trying to figure out who’s trying to stay connected with who
Consider getting VoIP phone numbers from a jurisdiction that’s much less hostile, so you have another number available to use
Telegram also doesn’t have E2E encryption for groups
Do you think a device with regulation circuits is more likely to be overloaded and start fires…?
The infinitely easier solution is to let the car charger know how much power is available to draw.
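That’s essentially how EV charging already works: in SAE J1772 the charging station advertises its current limit as a PWM duty cycle on the pilot pin, and the car is required to stay under it. A simplified sketch of the published duty-cycle-to-amps mapping (edge cases like the 5% digital-communication mode and error states omitted):

```python
# Simplified J1772 pilot mapping: duty cycle -> max current the car may draw.
def j1772_available_amps(duty_cycle_percent: float) -> float:
    d = duty_cycle_percent
    if 10 <= d <= 85:
        return d * 0.6           # e.g. 50% duty -> 30 A
    if 85 < d <= 96:
        return (d - 64) * 2.5    # e.g. 90% duty -> 65 A
    raise ValueError("duty cycle outside the valid advertising range")

print(j1772_available_amps(50))  # 30.0
```

So “letting the car know” is a one-wire analog signal; no smart grid required.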
Besides the general security risk of them running trojaned clients, if they run it in the office they’re spending the company’s electricity
It’s called incident response
Get somebody else to take a truck to you?
I remember this lol
Tldr neural network models are incredibly weird. My best guess is that the combination of common recurring structure with variations based on common rules (joke threads and all) helps the model derive some intuition about how to handle variations of things.
Also reminds me of an even earlier neural network which got better at playing specific games after being trained on large amounts of text completely unrelated to the game, like encyclopedias or whatever.
The Pixel line is comparable to the Samsung S line; you had a budget phone before