There’s an extraordinary amount of hype around “AI” right now, perhaps even greater than in past cycles; we’ve seen an AI bubble roughly once per decade. This time the focus is on generative systems, particularly LLMs and other tools built to produce plausible output: responses that either feel correct to the reader, or that are good enough in domains where correctness doesn’t matter.

But we can tell the traditional tech industry (the handful of giant tech companies, along with startups backed by the handful of most powerful venture capital firms) is in the midst of building another “Web3”-style froth bubble, because it has again abandoned one of the core values of actual technology-based advancement: reason.

  • auth@lemmy.ml · 8 months ago

    Which one is your favorite? I might buy some hardware soon so I can run them (I only have a laptop right now, and it’s not the greatest, but I’m willing to upgrade).

    • voracitude@lemmy.world · 8 months ago

      You might not even need to upgrade. I personally use GPT4All and like it for its simplicity. What are your laptop’s specs? There are models that can run on a Raspberry Pi (slowly, of course 😅), so you should be able to find something that’ll work with what you’ve got.
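
      To give you an idea of how simple the GPT4All route is, here’s a minimal Python sketch using the official gpt4all bindings (pip install gpt4all). The model filename is just an example; pick whichever small model fits your RAM:

      ```python
      from gpt4all import GPT4All

      # Example model name; GPT4All downloads it on first use if it's missing.
      # Small quantized models like this run on modest CPUs, no GPU needed.
      model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

      with model.chat_session():
          reply = model.generate("Explain what a local LLM is in one sentence.",
                                 max_tokens=100)
          print(reply)
      ```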

      I hate to link the orange site, but this tutorial is comprehensive and educational: https://www.reddit.com/r/LocalLLaMA/comments/16y95hk/a_starter_guide_for_playing_with_your_own_local_ai/

      The author recommends KoboldCPP for older machines: https://github.com/LostRuins/koboldcpp/wiki#quick-start
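
      KoboldCPP runs as a local server with a small HTTP API, so once you’ve started it per the quick-start above, you can script against it. A rough sketch, assuming the default port 5001 and the standard KoboldAI generate endpoint:

      ```python
      import requests

      # Assumes a KoboldCPP server running locally with default settings.
      payload = {"prompt": "Once upon a time", "max_length": 80}
      resp = requests.post("http://localhost:5001/api/v1/generate",
                           json=payload, timeout=120)
      print(resp.json()["results"][0]["text"])
      ```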

      I haven’t used that myself because I can run OpenOrca and Mistral 7B models pretty comfortably on my GPU, but it seems like a fine place to start! Nothing stopping you from downloading other models as well, to compare performance. TheBloke on Huggingface is a great resource for finding new models. The Reddit guide will help you figure out which models are most likely to work on your hardware, but if you’re not sure of something just ask 😊 Can’t guarantee a quick response though, took me five years to respond to a YouTube comment once…
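
      If you do have a usable GPU, one common way to run the quantized GGUF files from TheBloke’s Hugging Face page is the llama-cpp-python package. A sketch, assuming you’ve already downloaded a Mistral 7B GGUF (the path below is hypothetical):

      ```python
      from llama_cpp import Llama  # pip install llama-cpp-python

      # Hypothetical local path to a quantized GGUF from TheBloke's HF page.
      llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf",
                  n_ctx=2048,       # context window size
                  n_gpu_layers=-1)  # offload all layers to the GPU if available

      out = llm("Q: Why quantize a model?\nA:", max_tokens=64, stop=["Q:"])
      print(out["choices"][0]["text"])
      ```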

      • auth@lemmy.ml · 8 months ago

        Thanks a lot, man. I’ll look into it, but I only have an on-board GPU… not a big deal if I need to upgrade (I spend more on hookers and blow weekly)

        • voracitude@lemmy.world · 8 months ago

          It’s OK if you don’t have a discrete GPU; as long as you have at least 4 GB of RAM, you should be able to run some models.
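
          To see why 4 GB is a workable floor, here’s a back-of-envelope estimate: a quantized model needs roughly parameters × bits-per-weight ÷ 8 bytes for its weights, plus some runtime overhead (the overhead figure below is a loose assumption):

          ```python
          def approx_ram_gb(params_billion: float, bits_per_weight: int,
                            overhead_gb: float = 0.5) -> float:
              """Rough RAM needed to hold a quantized model's weights."""
              weights_gb = params_billion * 1e9 * bits_per_weight / 8 / 2**30
              return weights_gb + overhead_gb  # overhead is a rough guess

          # A 3B-parameter model at 4-bit quantization:
          print(f"{approx_ram_gb(3, 4):.1f} GB")  # ~1.9 GB, fits in 4 GB of RAM
          ```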

          I can’t comment on your other activities, but I guess you could maybe find some efficiencies if you buy the blow in bulk to get wholesale discounts and then pay the hookers in blow. Let’s spreadsheet that later.