The linked post goes into detail about why the author views Kagi as not privacy-oriented, and why, in the author’s opinion, Kagi is overly focused on AI (and was originally started as an AI company).
You’re right, cameras can be tricked. As Descartes pointed out, there’s very little we can truly be sure of, besides that we ourselves exist. And I think deepfakes are going to be a pretty challenging development for being confident about lots of things.
I could imagine something like photographers with a news agency using cameras that generate cryptographically signed photos, to ward off claims that newsworthy events are fake. It would place a higher burden on naysayers, and it would also become a story in itself if it could be shown that a signed photo had been faked. It would become a cause for further investigation, and it would threaten a news agency’s reputation.
Going further I think one way we might trust people we aren’t personally standing in front of would be a cryptographic circle of trust. I “sign” that I know and trust my close circle of friends and they all do the same. When someone posts something online, I could see “oh, this person is a second degree connection, that seems fairly likely to be true” vs “this is a really crazy story if true, but I have no second or third or fourth degree connections with them, needs further investigation.”
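To make that concrete, here’s a rough Python sketch of the idea (the names and the graph itself are made up for illustration): treat each “I know and trust this person” signature as an edge in a graph, and count how many hops away a poster is from you.

```python
# Hypothetical sketch of the "circle of trust" idea: each person signs the
# people they personally trust, and we walk that graph to see how far away
# a poster is from us. All names and data here are invented.
from collections import deque

# person -> set of people they have signed as "I know and trust them"
trust_graph = {
    "me":       {"alice", "bob"},
    "alice":    {"carol"},
    "bob":      {"dave"},
    "carol":    {"poster_x"},
    "dave":     set(),
    "poster_x": set(),
}

def degree_of_trust(graph, start, target):
    """Return how many hops of signed trust separate start from target,
    or None if there is no path at all."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        person, depth = queue.popleft()
        if person == target:
            return depth
        for friend in graph.get(person, ()):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, depth + 1))
    return None

print(degree_of_trust(trust_graph, "me", "poster_x"))  # 3 -> "third-degree connection"
print(degree_of_trust(trust_graph, "me", "stranger"))  # None -> no connection, investigate further
```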
I’m not saying any of this will happen, just it’s potentially a way to deal with uncertainty from AI content.
Well, as I said, I think there’s a collection of things we already use for judging what’s true; this would just be one more tool.
A cryptographic signature (in the original sense, not just the Bitcoin sense) means that only someone who possesses a certain digital key is able to sign something. In the case of a digitally signed photo, it verifies “hey, I, the key holder, am signing this file”. And if the file is edited, the signature won’t match the tampered version.
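As a rough illustration of what that looks like in code, here’s a minimal Python sketch using the `cryptography` package (assuming it’s installed); a real camera would keep the private key in tamper-resistant hardware, not in a script like this.

```python
# Minimal sketch of signing a file and detecting tampering.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # stays with the key holder / camera
public_key = private_key.public_key()        # published so anyone can verify

photo_bytes = b"...raw image data..."        # placeholder for the actual file
signature = private_key.sign(photo_bytes)    # "I, the key holder, am signing this file"

# Anyone with the public key can check the file is the one that was signed.
public_key.verify(signature, photo_bytes)    # passes silently when untouched

tampered = photo_bytes + b" edited"
try:
    public_key.verify(signature, tampered)   # raises: signature no longer matches
except InvalidSignature:
    print("File was modified after it was signed")
```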
Is it possible someone could hack and steal such a key? Yes. We see this with certificates for websites, where some bad actor is able to impersonate a trusted website. (And of course when NFT holders get their apes stolen)
But if something like that happened it’s a cause for investigation, and it leaves a trail which authorities could look into. Not perfect, but right now there’s not even a starting point for “did this image come from somewhere real?”
In this case, digitally signing an image verifies that the image was generated by a specific camera (not just any camera of that brand) and that the image that camera produced looks such and such a way. If anyone later edits the image, its hash won’t match the one in the signature, so it will be apparent it was tampered with.
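Just to show the hash part by itself, here’s a tiny standard-library sketch (the byte strings are obviously placeholders): the signed value is a digest of the exact bytes the camera produced, so any later edit changes the digest and the check fails.

```python
# The signed value is a digest of the exact bytes the camera produced.
import hashlib

original = b"bytes straight off the camera sensor"
edited   = b"bytes straight off the camera sensor, retouched"

signed_digest = hashlib.sha256(original).hexdigest()  # what the camera signs

print(hashlib.sha256(original).hexdigest() == signed_digest)  # True  -> untouched
print(hashlib.sha256(edited).hexdigest() == signed_digest)    # False -> tampering is apparent
```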
What it can’t do is tell you whether someone pasted a printout of some false image over the lens, or in some other sophisticated way presented a doctored scene to the camera. But nothing is preventing anyone from doing that today, either.
The question was about deepfakes, right? So this is one tool to address that, but certainly not the only one the legal system would want to use.
My thought was that the video loading probably isn’t going to be nearly as fast as TikTok’s, because of the money TikTok has put behind its servers and optimization.
Leica has one camera that does this, and other manufacturers are working on their own. Just posted this link in another comment.
I think other answers here are more essential - chain of custody, corroborating evidence, etc.
That said, Leica has released a camera that digitally signs its images, and other manufacturers are working on similar things. That will allow people to verify whether an image is original or has been edited. From what I understand, Leica has some scheme where you can sign images when you update them too, so there’s a whole chain of documentation. Here’s a brief article.
I’m very pleased. I have a 2023 Bolt.
For us there was no way we’d get one without a home charger. It’s great because every day you wake up and it’s like a full tank of gas.
My wife still has a gas car and we bought the electric planning that we’d still use the gas one for road trips. The Bolt in particular doesn’t have super fast charging (probably like 45 minutes to get to 80% using a fast charger) so if we didn’t have the second car that might be my one concern.
My wife wasn’t sold when we got it, but the electric was for me so we went ahead. Now she likes it. I’m banking on better EV options being available when we get our next car but I think it will be electric too.
Lol I can’t wait to hear everyone recite their respective pledges at the same time
Yes and
Yeah, that’s pretty dystopian. Something worse hasn’t been done with it probably just because many bad actors haven’t been aware it’s an option.
Very satisfying
It’s a Jesus thing you wouldn’t understand.
I used to have a variety of Christian shirts with “cool” phrases on them
I remember seeing some of this stuff when it came out and thinking “why are they doing this?” A bunch of it I’d never heard of, and a handful I wish had seen success (Firefox OS). Not sure how this counts as a hit piece; it didn’t seem mean-spirited and definitely didn’t seem to be misrepresenting anything.
But I don’t want my devices to be bombs
I assume you’re getting downvoted because of AI use, but I don’t mind it in this case because I think it’s a useful starting point for “how many big holidays are we talking about”
I am completely unfamiliar with that person
I didn’t take the image to be showing a MacBook; it could just as easily be my computer or probably many others.
You’re right, that doesn’t sound great. In the example they shared, it sounds like the issue wasn’t that the car couldn’t drive around the fire truck, but that it couldn’t break a programming rule about crossing into a lane that would normally be opposing traffic. Once given the “ok” to follow such a route, the car handled it on its own; the human doesn’t actually drive it.
I could imagine a scenario where you need one human operator for every two vehicles. That’s still reducing labor by 50%.
Obviously they want it to be better than that, they want it to be one operator per ten vehicles or no operator at all.
And the fundamental problem with these systems is they will be owned by big corporations, and any gained efficiency will be consumed by the corporation, not enjoyed by the worker or passed on to the customer.
But I think there’s true value to be found there. Imagine a transportation cooperative - we’re a thousand households, we don’t all need our own car, but we need a car sometimes. We pool our resources and have a small fleet that minimizes our cost and environmental impact, and potentially drives more safely than human drivers.
I think it would have helped for the person who posted that to include context, but I would guess they were linking it because it also talks about how Kagi isn’t privacy-focused.