Owner and writer of CovertWiki.org. It’s basically a wannabe spy handbook in wiki format. Feel free to leave a bookmark until more content is released, or message me on Discord under the same username to become a contributor.
The nearest bus stop is an hour away, and it’s for interstate transit. 🤷
The place where I’m planning to buy a home is so remote that I’m considering a backup car.
I learned how to repair my own vehicles after I was quoted $2,600 to install a $40 part. I could’ve also had an entire rebuilt engine shipped and swapped it in myself for about half that, but I ultimately went with the $40 part plus basic tools.
I could sure use some of that money to buy the next iPhone. Just imagine what my friends would think if I didn’t.
I didn’t read very far up into the thread. Sorry.
Automated filters will just drive determined botters to game the system and refine their craft until they can no longer be automatically identified, in my opinion. My stance is more that accounts should be reviewed manually, so the leap to a convincing bot account has to be much more dramatic, and therefore more difficult. If it’s done the hard way from the start, with staff who know how to identify these accounts, it may keep the problem from growing into an issue in the first place.
Any threshold for being automatically flagged for review should be relatively low, but the review process should also be quick and efficient. Adding more metrics to the flagging process only gives botters a narrower set of signals to evade. Once they start crunching the numbers and streamline their mimicry of real user accounts, it’s game over.
Signup safeguards will never be enough because the people who create these accounts have demonstrated that they are more than willing to do that dirty work themselves.
Let’s look at the anatomy of the average Reddit bot account:
Rapid points acquisition. These are usually new accounts, but they don’t have to be. The posts and comments are often made manually by the seller if the account is being sold at a significant premium.
A sudden shift in contribution style, usually preceded by a gap in activity. The account has now been fully matured to the desired number of points and is pending sale, or set aside to be “aged”. If the seller hasn’t loaded it with points, the account is much cheaper, but the activity gap still exists.
My solution? Implement a weighted visual timeline of a user’s points and posts to make it easier for admins to single out accounts that are already behaving suspiciously. There are other troublesome kinds of malicious accounts, such as self-run engagement farms that churn out consistent front-page contributions pushing their own political (or whatever) lean, but the type described above is a major player in Reddit’s current shitshow and is much easier to identify.
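To make the pattern concrete, here’s a rough flagging heuristic in Python for the two signals above. It’s only a sketch: the Post record, the thresholds, and the assumption that an admin tool can pull a user’s full contribution history are all mine, not anything Reddit actually exposes.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical post record; in practice this would come from whatever
# moderation/admin tooling exposes a user's contribution history.
@dataclass
class Post:
    created: datetime
    karma: int

def flag_for_review(posts: list[Post],
                    burst_days: int = 14,     # made-up threshold
                    burst_karma: int = 2000,  # made-up threshold
                    gap_days: int = 60) -> bool:
    """Return True if the history shows the pattern described above:
    rapid points acquisition followed by a long gap in activity."""
    if len(posts) < 2:
        return False
    posts = sorted(posts, key=lambda p: p.created)

    # 1. Rapid points acquisition: a lot of karma inside a short window.
    window = timedelta(days=burst_days)
    for i, start in enumerate(posts):
        karma_in_window = sum(p.karma for p in posts[i:]
                              if p.created - start.created <= window)
        if karma_in_window >= burst_karma:
            break
    else:
        return False

    # 2. Activity gap afterwards: the account sits idle while it's
    #    resold or "aged".
    gaps = (b.created - a.created for a, b in zip(posts, posts[1:]))
    return any(gap >= timedelta(days=gap_days) for gap in gaps)
```

The output wouldn’t ban anyone on its own; it would just feed the weighted timeline so a human reviewer makes the final call.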
Most important is moderator and admin willingness to act. Many subreddit moderators already know their subreddit has a bot problem but choose to do nothing because it drives traffic. Others are just burnt out and rarely lift a finger to answer modmail, doing the bare minimum to keep their subreddit from being banned.
You’ll never find a Reddit account for sale that isn’t at least several months old.
Bots don’t upvote. There’s so much voting activity here relative to actual contributions that my first impression was that the votes might be faked.
It’s a multi-edged sword. It also means someone could be forced to testify against a friend or loved one. In a slightly removed example, my beliefs also apply to laws that allow individuals to be imprisoned for failing to provide the password to locked electronics, regardless of whether they actually remember it.
Maybe a good middle ground would be to instead expand the privilege that lets spouses avoid testifying against one another to include friends and family. The same reasoning applies; the only difference is that the state believes it can determine the strength and meaning of a relationship by its title and type alone.
I think you’re confused. The court already has the ability to force testimony, and witnesses can already be thrown in jail for refusing to testify.
I updated the title to make it clear that I’m referring to penalties that already exist, rather than suggesting that new penalties should be created.
Yes, exactly like that.
Of course, whether they can be punished depends on whether the court can prove what they actually recall, but the bottom line is that it’s still illegal, and the court remains legally entitled to forcefully procure truthful thoughts and memories from a person.
I don’t support any suggestion that updating the law doesn’t matter because it is sometimes difficult to enforce, if that was your intention.
A witness can still be punished if the court can prove that claims of poor recollection are being abused.
Cobwebs/Penlink seemed much more tailored to that, but these companies also have an incentive to exaggerate their products’ capabilities as much as they can get away with.
The government doesn’t need a warrant to browse data that it’s already in possession of. Food for thought.
I’ll always be distrustful of Google hardware, but yes. I’ve been considering a Pixel as my next phone.
In my experience several years ago, Facebook was actually very fast to take down bad groups. I must’ve been reporting so many, and with such reliability, that they started coming down almost instantly after I reported them.
“…to maintain the safety and security of the building and everyone in it.” - An actual FAQ
Way to make home feel like a prison.
It’s complicated, but no, I don’t.