Ah, you’re suggesting using RFC 3514. Good thinking.
what about edited?
I was just looking at https://haveibeenpwned.com/ and it listed Appen as a site that breached my details. I had no idea who they were or why they had my details. I guess this is related?
Appen: In June 2020, the AI training data company Appen suffered a data breach exposing the details of almost 5.9 million users which were subsequently sold online. Included in the breach were names, email addresses and passwords stored as bcrypt hashes. Some records also contained phone numbers, employers and IP addresses. The data was provided to HIBP by dehashed.com.
They have released it on GitHub. The code is only about 500 lines. But releasing the model is arguably more important, because that sort of compute is not affordable to mere mortals.
Using copyrighted material is not the same thing as copyright infringement. You need to (re)publish it for it to become an infringement, and OpenAI is not publishing the material made with their tool; the users of it are. There may be some grey areas for the law to clarify, but as yet, they have not clearly infringed anything, any more than a human reading copyrighted material and making a derivative work.
Yeah, the ingestion part is still to be determined legally, but I think OpenAI will be ok. NYT produces content to be read, and copyright only protects them from people republishing their content. People also ingest their content and can make derivative works without problem. OpenAI are just doing the same, but at a level of ability that could be disruptive to some companies. This isn’t really even harmful to the NYT, since the historical material used doesn’t conflict with their primary purpose of producing new news. It’ll be interesting to see how it plays out though.
Only publishing it is a copyright issue. You can also obtain copyrighted material with a web browser. The onus is on the person who publishes any material they put together, regardless of source. OpenAI is not responsible for publishing just because their tool was used to obtain the material.
Your friend was right.
They didn’t use this system. There are other needleless systems, primarily jet systems that use high pressure.
An AI can potentially build a fund through investments given some seed money, then it can hire human contractors to build parts of whatever nefarious thing it wants. No human need know what the project is as they only work on single jobs. Yeah, it’s a wee way away before they can do it, but they can potentially affect the real world.
The seed money could come in all sorts of forms. Acting as an AI girlfriend seems pretty lucrative, but it could be as simple as taking surveys for a few cents each time.
Once we get robots with embodied AIs, they can directly affect the world, and that’s probably less than 5 years away, around the time AI might be capable of such things too.
AI girlfriends are pretty lucrative. That sort of thing is an option too.
Yeah, I’m surprised Google or another big player hasn’t released something yet, or that bodies like the IETF haven’t published any RFCs or produced any practical standards. Now’s the time to get market dominance. Perhaps nobody will react until the shit hits the fan.
I mean, PGP is great, but in this day and age we need a simple standard people can use to sign media without a hassle, and we may also need chain of custody in light of social media (edits and whatnot). Developers will likely need or want to build it into their software, so we need a standard. I don’t think the PGP approach really worked for most people.
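The chain-of-custody part could, at its simplest, be a hash chain: each edit record commits to the hash of the previous record, so any tampering with history breaks verification. A minimal sketch in Python (stdlib only; real-world schemes like C2PA add public-key signatures on top, which are omitted here, and all the field names are made up for illustration):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first record

def append_record(chain, content, author):
    """Append an edit record that commits to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    record = {"content": content, "author": author, "prev": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return chain

def verify(chain):
    """Recompute every hash and check each link; True only if intact."""
    prev_hash = GENESIS
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

chain = []
append_record(chain, "original photo, sha256 of pixels", "alice")
append_record(chain, "cropped to 16:9", "bob")
print(verify(chain))        # True: chain is intact
chain[0]["author"] = "mallory"  # tampering with history...
print(verify(chain))        # ...breaks verification: False
```

A signing standard would then only need to sign the head of the chain, which is why the hard part is agreeing on the record format rather than the crypto.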