• 4 Posts
  • 82 Comments
Joined 1 year ago
Cake day: June 26th, 2023


  • The latter, but I also don’t really mind paywalls in the form of “get early access” like SMBC comics or “get exclusive special content” like a lot of bands do.

    You can just straight paywall with those too, but you don’t have to. A band I like crowdfunded a music video, and you can watch it for free on YouTube, but if you didn’t crowdfund it you missed out on perks that went all the way up to being in the music video.

  • There are a number of major flaws with it:

    1. Assume the paper is completely correct. It has only proved something about the problem’s algorithmic complexity, so what? What if the general case is NP-hard, but not the cases we actually care about? That’s been true for other problems; why not this one?
    2. It proves something in a model. So what? Prove that the result applies to the real world.
    3. Replace “human-like” with something trivial like “tree-like”. The same proof would then show that we’ll never achieve tree-like intelligence?

    IMO there are also flaws in the argument itself, but the ones above are more relevant.
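    The “NP-hard in general, easy in the cases we care about” point can be made concrete with a classic example (my own illustration, not from the paper): graph coloring is NP-hard in general, but the special case of 2-coloring, i.e. checking bipartiteness, is solvable in linear time. A minimal sketch:

```python
from collections import deque

def two_color(adj):
    """Try to 2-color an undirected graph given as {node: [neighbors]}.

    Returns a {node: 0 or 1} coloring, or None if the graph is not
    bipartite. General k-coloring is NP-hard, but this special case
    runs in O(V + E) via a simple BFS.
    """
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]  # alternate colors across edges
                    queue.append(v)
                elif color[v] == color[u]:
                    return None  # odd cycle found: not 2-colorable

    return color

# A square (even cycle) is 2-colorable; a triangle (odd cycle) is not.
square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(two_color(square))    # a valid 0/1 coloring
print(two_color(triangle))  # None
```

    An intractability result for the fully general problem says nothing about whether the restricted instances that actually occur are also hard.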



  • This is a silly argument:

    […] But even if we give the AGI-engineer every advantage, every benefit of the doubt, there is no conceivable method of achieving what big tech companies promise.’

    That’s because cognition, or the ability to observe, learn and gain new insight, is incredibly hard to replicate through AI on the scale that it occurs in the human brain. ‘If you have a conversation with someone, you might recall something you said fifteen minutes before. Or a year before. Or that someone else explained to you half your life ago. Any such knowledge might be crucial to advancing the conversation you’re having. People do that seamlessly’, explains van Rooij.

    ‘There will never be enough computing power to create AGI using machine learning that can do the same, because we’d run out of natural resources long before we’d even get close,’ Olivia Guest adds.

    That’s as shortsighted as the “I think there is a world market for maybe five computers” quote, or the pre-automobile worry that NYC would be buried under mountains of horse poop. Maybe transformers aren’t the path to AGI, but there’s no reason to think we can’t achieve it in general unless you’re religious.

    EDIT: From the paper:

    The remainder of this paper will be an argument in ‘two acts’. In ACT 1: Releasing the Grip, we present a formalisation of the currently dominant approach to AI-as-engineering that claims that AGI is both inevitable and around the corner. We do this by introducing a thought experiment in which a fictive AI engineer, Dr. Ingenia, tries to construct an AGI under ideal conditions. For instance, Dr. Ingenia has perfect data, sampled from the true distribution, and they also have access to any conceivable ML method—including presently popular ‘deep learning’ based on artificial neural networks (ANNs) and any possible future methods—to train an algorithm (“an AI”). We then present a formal proof that the problem that Dr. Ingenia sets out to solve is intractable (formally, NP-hard; i.e. possible in principle but provably infeasible; see Section “Ingenia Theorem”). We also unpack how and why our proof is reconcilable with the apparent success of AI-as-engineering and show that the approach is a theoretical dead-end for cognitive science. In “ACT 2: Reclaiming the AI Vertex”, we explain how the original enthusiasm for using computers to understand the mind reflected many genuine benefits of AI for cognitive science, but also a fatal mistake. We conclude with ways in which ‘AI’ can be reclaimed for theory-building in cognitive science without falling into historical and present-day traps.

    That’s a silly argument: it sets up a strawman and knocks it down. Just because you build a model and prove something within it doesn’t mean the result has any relationship to the real world.



  • On a related note, I think libraries do need a bit of a facelift, and not to just be “the place where books live”. It’s important to keep that function, but also to expand to “a place where learning happens”. I know lots of libraries are doing this sort of thing, but your average person is probably still stuck in the “place where books live” mindset, as you allude to. I’m talking about things like 3D printers, makerspaces, DIYbio, classes on detecting internet bullshit, etc.


  • Threads like this, with highly upvoted comments like

    americans are more propagandized than they think citizens of the DPRK are

    They also use sarcasm to try to push the narrative that North Korea is actually just fine:

    Guys you don’t understand; the West has spoken; we MUST hate North Korea, our governments have already decreed it so.

    Many of them are also seemingly physically incapable of communicating without hexbear’s custom reaction images, which is a weird behavior common to many cults. Makes it harder to communicate with the outgroup.

    I think LW is defederated from them (or vice versa) so you can’t post over there, but for further examples, try making an account there and saying that maybe, just maybe, Putin did a bad thing by invading Ukraine and that they’re defending an imperialist.



  • From here:

    On occasion, a writer will coin a fine neologism that spreads quickly but then changes meaning. “Factoid” was a term created by Norman Mailer in 1973 for a piece of information that becomes accepted as a fact even though it’s not actually true, or an invented fact believed to be true because it appears in print. Mailer wrote in Marilyn, “Factoids…that is, facts which have no existence before appearing in a magazine or newspaper, creations which are not so much lies as a product to manipulate emotion in the Silent Majority.” Of late, factoid has come to mean a small or trivial fact, which makes it a contronym (also called a Janus word) in that it means both one thing and its opposite, such as “cleave” (to cling or to split), “sanction” (to permit or to punish) or “citation” (commendation or a summons to appear in court). So factoid has become a victim of novelist C.S. Lewis’s term “verbicide,” the willful distortion or deprecation of a word’s original meaning.



  • Use Tor for everything. Search for “disposable email”, find a service that you can use in Tor. Sign up through Tor using that disposable email address for any service that you want to post to. Be aware that some services try to deny access to Tor and/or disposable email addresses. Try a different service or a different disposable email provider if you encounter that.

    You should define your threat model. Longer essays can probably be deanonymized with stylometry. The above will probably work fine up to maybe the NSA taking an interest in the origins of the essay. You can probably post something to the Fediverse and reputation-wash it to a larger audience by saying “look at this link that i have no affiliation with”, but it’s more likely that someone would figure out that it’s you. You can use the Tor method to post on Reddit, but many subreddits will have automods that delete posts from new/low karma users.
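    To illustrate why stylometry is a real risk for longer essays: even crude features like function-word frequencies can fingerprint an author. A toy sketch (the texts here are made up for the example; real attribution systems use far richer features and proper statistics):

```python
import math
from collections import Counter

# Function words are hard to consciously suppress, which is why they
# work as a crude style fingerprint.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "but"]

def profile(text):
    """Relative frequencies of common function words: a crude style vector."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(a, b):
    """Cosine similarity between two style vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy corpora: two "known" authors and one anonymous sample.
author_a = "the cat sat on the mat and the dog ran to the door in a hurry"
author_b = "results indicate that performance is optimal but only in that regime"
anon     = "the bird flew over the fence and the cat watched it in a tree"

sim_a = cosine(profile(anon), profile(author_a))
sim_b = cosine(profile(anon), profile(author_b))
print("closer to author A" if sim_a > sim_b else "closer to author B")
```

    The practical takeaway: if your threat model includes a motivated adversary with samples of your known writing, Tor plus a disposable email hides the network origin but not the writing style itself.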