pootriarch@poptalk.scrubbles.tech to Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ@lemmy.dbzer0.com • The New York Times tried to block the Internet Archive: another reason to value the latter
4 · 1 year ago

> It exists, it’s called a robots.txt file that the developers can put into place, and then bots like the webarchive crawler will ignore the content.
the internet archive doesn’t respect robots.txt:
> Over time we have observed that the robots.txt files that are geared toward search engine crawlers do not necessarily serve our archival purposes.
the only way to stay out of the internet archive is to follow the process they created and hope they agree to remove you. or firewall them.
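for reference, the exclusion the quoted comment describes is just a couple of lines at the site root, historically addressed to ia_archiver (the user agent the wayback machine used to honor; example.com is a placeholder):

```
# https://example.com/robots.txt
User-agent: ia_archiver
Disallow: /
```

per the archive’s own statement above, though, their crawler may now walk right past this.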
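and a sketch of the firewall route at the web-server level, assuming their crawler still self-identifies as archive.org_bot or ia_archiver (nginx shown here; user agents are self-reported, so this only deters a polite bot, and a network-level block on their IP ranges is the stricter option):

```
server {
    listen 443 ssl;
    server_name example.com;

    # 403 anything that self-identifies as an internet archive crawler
    if ($http_user_agent ~* "archive\.org_bot|ia_archiver") {
        return 403;
    }
}
```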
so per wikipedia, and confirmed at MDN, firefox is the only major browser line that doesn’t consider certificate transparency at all. and yet it’s the only one that has given me occasional maddening SSL errors that have blocked site access (not always little sites, it’s happened with amazon).

i don’t understand how firefox can be simultaneously the least picky about certificates and the most likely to spuriously decide they’re invalid. (presumably the two checks are unrelated: firefox skips CT but validates the chain against mozilla’s own root store rather than the operating system’s, so its failures don’t line up with chrome’s or safari’s.)
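the SCTs that CT-enforcing browsers check (and that firefox, per the above, skips) are embedded in the certificate itself, so you can inspect them yourself with openssl; amazon.com below is just a sample host:

```
openssl s_client -connect amazon.com:443 -servername amazon.com </dev/null 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -A 6 'CT Precertificate SCTs'
```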