Cross-posted to [email protected], which is probably the closest active community we’ve got
Ha, that reminds me of Donald Knuth offering 0x$1.00 to anyone who finds a mistake in TAOCP, like this guy:
Definitely not. There’s a whole genre of music that’s created for riding the coattails of popular songs. They wait for a song title by artists like Taylor Swift to be announced and then release their own songs with the same title. Sometimes they’re actually good, like this dude:
The latter, but I also don’t really mind paywalls in the form of “get early access” like SMBC comics or “get exclusive special content” like a lot of bands do.
You can just straight paywall with those too, but you don’t have to. A band I like crowdfunded a music video and you can watch it free on YouTube, but if you didn’t crowdfund it you missed out on perks that go all the way up to being in the music video.
The trilogy would’ve been much better if either director had done all 3. Either J.J. Abrams with a fun nostalgic return to form, or Rian Johnson with a fresh new take. The whiplash from them fighting with each other over the direction of the plot just ended up being a huge mess. I’m pretty surprised they weren’t just told what the plot was going to be, kind of seems like a screwup by whoever handled that.
False dichotomy, I’d rather see other funding models like Patreon/Kickstarter. Paying gets you early access/bonus stuff/whatever, and you don’t need intrusive technologies like ads/paywalls.
How are you defining “far extreme liberal”?
Not sure how Ollama integration works in general, but these are two good libraries for RAG:
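To sketch what the RAG part looks like regardless of which library you pick (this is my own minimal illustration, not taken from either library — real pipelines use proper embedding models rather than word counts): retrieve the documents most similar to the query, then stuff them into the prompt you send to your local model.

```python
# Minimal RAG-style retrieval sketch (illustrative only, not tied to any
# specific library). Scores documents against a query with bag-of-words
# cosine similarity, then builds a context-stuffed prompt.
import math
from collections import Counter


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]


def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the retrieved context and the question into one prompt,
    ready to send to a local model (e.g. one served by Ollama)."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"
```

A real setup would swap `cosine` over `Counter`s for an embedding model and a vector store; the retrieve-then-prompt shape stays the same.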
That’s a great line of thought. Take an algorithm of “simulate a human brain”. Obviously that would break the paper’s argument, so you’d have to find why it doesn’t apply here to take the paper’s claims at face value.
There are a number of major flaws with it:
IMO there are also flaws in the argument itself, but those are more relevant
Meshuggah:
https://www.youtube.com/watch?v=m9LpMZuBEMk
Listened to them before I got into metal, came back to them later and now love them. That’s probably from one of their more accessible records; they also have more experimental stuff like this:
This is a silly argument:
[…] But even if we give the AGI-engineer every advantage, every benefit of the doubt, there is no conceivable method of achieving what big tech companies promise.’
That’s because cognition, or the ability to observe, learn and gain new insight, is incredibly hard to replicate through AI on the scale that it occurs in the human brain. ‘If you have a conversation with someone, you might recall something you said fifteen minutes before. Or a year before. Or that someone else explained to you half your life ago. Any such knowledge might be crucial to advancing the conversation you’re having. People do that seamlessly’, explains van Rooij.
‘There will never be enough computing power to create AGI using machine learning that can do the same, because we’d run out of natural resources long before we’d even get close,’ Olivia Guest adds.
That’s as shortsighted as the “I think there is a world market for maybe five computers” quote, or the worry that NYC would be buried under mountains of horse poop before cars were invented. Maybe transformers aren’t the path to AGI, but there’s no reason to think we can’t achieve it in general unless you’re religious.
EDIT: From the paper:
The remainder of this paper will be an argument in ‘two acts’. In ACT 1: Releasing the Grip, we present a formalisation of the currently dominant approach to AI-as-engineering that claims that AGI is both inevitable and around the corner. We do this by introducing a thought experiment in which a fictive AI engineer, Dr. Ingenia, tries to construct an AGI under ideal conditions. For instance, Dr. Ingenia has perfect data, sampled from the true distribution, and they also have access to any conceivable ML method—including presently popular ‘deep learning’ based on artificial neural networks (ANNs) and any possible future methods—to train an algorithm (“an AI”). We then present a formal proof that the problem that Dr. Ingenia sets out to solve is intractable (formally, NP-hard; i.e. possible in principle but provably infeasible; see Section “Ingenia Theorem”). We also unpack how and why our proof is reconcilable with the apparent success of AI-as-engineering and show that the approach is a theoretical dead-end for cognitive science. In “ACT 2: Reclaiming the AI Vertex”, we explain how the original enthusiasm for using computers to understand the mind reflected many genuine benefits of AI for cognitive science, but also a fatal mistake. We conclude with ways in which ‘AI’ can be reclaimed for theory-building in cognitive science without falling into historical and present-day traps.
That’s a silly argument. It sets up a strawman and knocks it down. Just because you create a formal model and prove something in it doesn’t mean the result says anything about the real world.
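A concrete way to see the gap (my own toy example, not from the paper or the comment): NP-hardness is a worst-case, asymptotic claim, and NP-hard problems are solved routinely in practice. Subset sum is NP-hard in general, yet a textbook dynamic-programming solver handles realistic instances instantly.

```python
# Subset sum: NP-hard in the general case, yet trivially solvable for
# practical instance sizes. Worst-case intractability of a formal model
# does not by itself show anything is infeasible in the real world.
def subset_sum(nums: list[int], target: int) -> bool:
    """Return True if some subset of nums sums to target.
    Pseudo-polynomial DP: O(len(nums) * target) time and space."""
    reachable = {0}  # sums reachable using the items seen so far
    for n in nums:
        reachable |= {r + n for r in reachable if r + n <= target}
    return target in reachable
```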
On a related note, I think libraries do need a bit of a facelift, and not just be “the place where books live”. It’s important to keep that function, but also expand to “a place where learning happens”. I know lots of libraries are doing this sort of thing, but your average person is probably still stuck in the “place where books live” mindset, as you allude. I’m talking stuff like 3D printers, makerspaces, diybio, classes about detecting internet bullshit, etc.
Threads like this, with highly upvoted comments like
americans are more propagandized than they think citizens of the DPRK are
They also use sarcasm to try to push the narrative that North Korea is actually just fine, OK?
Guys you don’t understand; the West has spoken; we MUST hate North Korea, our governments have already decreed it so.
Many of them are also seemingly physically incapable of communicating without hexbear’s custom reaction images, which is a weird behavior common to many cults. Makes it harder to communicate with the outgroup.
I think LW is defederated from them (or vice versa) so you can’t post over there, but for further examples, try making an account over there and saying that maybe, just maybe, Putin did a bad thing by invading Ukraine, and watch them defend an imperialist.
From here:
On occasion, a writer will coin a fine neologism that spreads quickly but then changes meaning. “Factoid” was a term created by Norman Mailer in 1973 for a piece of information that becomes accepted as a fact even though it’s not actually true, or an invented fact believed to be true because it appears in print. Mailer wrote in Marilyn, “Factoids…that is, facts which have no existence before appearing in a magazine or newspaper, creations which are not so much lies as a product to manipulate emotion in the Silent Majority.” Of late, factoid has come to mean a small or trivial fact, which makes it a contronym (also called a Janus word) in that it means both one thing and its opposite, such as “cleave” (to cling or to split), “sanction” (to permit or to punish) or “citation” (commendation or a summons to appear in court). So factoid has become a victim of novelist C.S. Lewis’s term “verbicide,” the willful distortion or deprecation of a word’s original meaning.
I think [email protected] would be a good place for it. The community sidebar says your own stories are welcome. You might want to add that you’re specifically looking for feedback
Use Tor for everything. Search for “disposable email”, find a service that you can use in Tor. Sign up through Tor using that disposable email address for any service that you want to post to. Be aware that some services try to deny access to Tor and/or disposable email addresses. Try a different service or a different disposable email provider if you encounter that.
You should define your threat model. Longer essays can probably be deanonymized with stylometry. The above will probably work fine up to maybe the NSA taking an interest in the origins of the essay. You can probably post something to the Fediverse and reputation-wash it to a larger audience by saying “look at this link that i have no affiliation with”, but it’s more likely that someone would figure out that it’s you. You can use the Tor method to post on Reddit, but many subreddits will have automods that delete posts from new/low karma users.
I like Bluey and metal, so this shirt is perfect for me
Not sure if this is what you’re referencing, but there’s a famous quantum computing researcher named Scott Aaronson who has this at the top of his blog:
His blog is good; he talks about a lot of quantum computing stuff at an accessible level.