One constant in our ongoing civilization is a continuous branching of complexity. Assuming civilization continues, how do you imagine your entertainment becoming more tailored to you?

Decades ago I wanted a game that combined a world-building economy game, industry and domestic simulators, real-time war strategy, and a first-person shooter that bridges into adventure and exploration. In this game, all of these roles could be filled by autonomous AI characters, but recruiting players to fill them would create dynamic complexity that is advantageous for everyone. Each layer of gameplay dictates the constraints of the next, while interactions across layers stay entertaining and engaging for all.

It does not need to be gaming. What can you imagine for entertainment with tailored complexity?

  • fruitycoder@sh.itjust.works · 17 hours ago

    I am looking forward to latent coordinates plus a model reference becoming the metadata for at least some frames of video.

    You don’t need total precision for every visual representation, but it could work as a great compression technique, assuming we get GenAI power usage down.
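
    Nothing real yet, but a minimal sketch of the idea, assuming a pretrained VAE stands in for the generative model: each frame's metadata holds its latent coordinates plus a model reference, and playback regenerates the pixels from the latent. The model name and tensor shapes here are illustrative, not part of any actual codec.

    ```python
    # Sketch: latent coordinates + a model reference as per-frame metadata.
    # Uses the Stable Diffusion VAE from `diffusers` purely for illustration.
    import torch
    from diffusers import AutoencoderKL

    vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
    vae.eval()

    def compress_frame(frame: torch.Tensor) -> dict:
        """Encode an RGB frame (1, 3, H, W, values in [-1, 1]) into metadata."""
        with torch.no_grad():
            latent = vae.encode(frame).latent_dist.mean  # ~48x fewer values than pixels
        return {"model": "stabilityai/sd-vae-ft-mse", "latent": latent}

    def decompress_frame(meta: dict) -> torch.Tensor:
        """Regenerate the frame from its stored latent coordinates."""
        with torch.no_grad():
            return vae.decode(meta["latent"]).sample

    frame = torch.rand(1, 3, 512, 512) * 2 - 1  # stand-in for a decoded video frame
    restored = decompress_frame(compress_frame(frame))  # lossy, but far smaller to store
    ```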

    I personally would love to see better simulation of complex systems in games. Games are how we as humans explore the world within safe constraints, learning and growing with less risk. A lot of the limits of games, though, are just limits of the creators' understanding and of the effort it takes to represent that detail of the world, and it means the lessons around the missing detail can't be learned.

    Another one for me: tailored voice and visuals for technical talks.

    Again, what a talk is really trying to convey is the technical content, but language, accents, verbal tics, culture-specific metaphors, and generic or uninteresting visuals can all act as barriers to that information. Seeing automatic content translation to fit my personal viewing style would be awesome!

    • j4k3@lemmy.world (OP) · 16 hours ago

      Tailored learning is why I got AI-capable hardware in the first place. Self-learning is hard without any external guidance. I don't get perfect answers from models at present, and niche information is very sketchy. However, I find that talking out my issues in text often reveals my limitations and misunderstandings. Maybe a third of the time, the model will inform or redirect me in very helpful ways when I use a 70B or 8×7B on my hardware.

      • fruitycoder@sh.itjust.works · 16 hours ago

        Have you messed with RAG yet? That's the next leg of the journey for me. I am hoping it will help a little with the “sketchy” part of the info.
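
        Not your setup obviously, but roughly the shape of the minimal loop I have in mind, assuming sentence-transformers for the embeddings; the chunks and prompt template are just placeholders:

        ```python
        # Minimal RAG sketch: embed chunks, retrieve the nearest ones for a
        # question, and prepend them to the prompt as citable context.
        import numpy as np
        from sentence_transformers import SentenceTransformer

        embedder = SentenceTransformer("all-MiniLM-L6-v2")

        chunks = [  # placeholder source chunks
            "FORTH is a stack-based language built from user-defined words.",
            "FlashForth targets PIC and AVR microcontrollers.",
            "Mixture-of-experts models route each token to a few experts.",
        ]
        chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

        def retrieve(question: str, k: int = 2) -> list[str]:
            """Return the k chunks most similar to the question (cosine)."""
            q = embedder.encode([question], normalize_embeddings=True)[0]
            order = np.argsort(chunk_vecs @ q)[::-1]
            return [chunks[i] for i in order[:k]]

        question = "What hardware does FlashForth run on?"
        context = "\n".join(retrieve(question))
        prompt = f"Answer using only these sources:\n{context}\n\nQuestion: {question}"
        # feed `prompt` to the local model of your choice
        ```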

        • j4k3@lemmy.world (OP) · 15 hours ago

          Chunking effectively is too big a problem to implement while also learning the subject. You also run into issues with model size: a 70B or 8×7B is better than an 8B with citable sources. A quantized Q4_K of one of these models can run on a 16 GB 3080 Ti, but it requires 64 GB of system memory to load easily. The 70B runs at a slow reading pace and is barely tolerable, but its niche depth and self-awareness are invaluable. The 8×7B is about twice as fast as reading pace; it actually runs only two 7B models at the same time, selectively. That gives it some limiting similarities to a 13B model, but in practice it is far more useful than even a 30B.

          I hate the Llama 3 alignment changes; they make the model much dumber and more inflexible. The Mistral 8×7B is based on Llama 2, and that is still what I use and prefer. I use the uncensored Flat Dolphin Maid version for everything too. All alignment is overtraining and harmful to output. I am also modifying Oobabooga code in a few ways to turn off alignment. It is not as fully disabled as I would like, and I don't completely understand all aspects of alignment, but I have it much more open than any typical setup.

          I like to write real science fiction that is critical of present social and political structures, and those areas are heavily restricted by alignment bias. The alignment bias extends into and permeates everything in the model; the more it is removed, the more useful the model becomes in all areas. For instance, a basic model struggled when I asked it about the FORTH programming language. After reducing alignment bias, I can ask questions about the esoteric Flash FORTH language for embedded microcontrollers and get useful basic information. In the first instance, alignment bias for copyrighted works intentionally obfuscated the responses to my queries.

          This obfuscation mechanism is one of the primary causes of errors. If you build a RAG, you're likely to find that even with citations from good chunking, the model will still err, because the information is present in its hidden sources, and it knows that means a copyrighted work, which triggers the mechanism.
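
          For what it's worth, the memory split described above looks roughly like this with llama-cpp-python; the GGUF path and layer count are illustrative guesses, not my actual configuration:

          ```python
          # Sketch: a Q4_K-quantized GGUF with some layers offloaded to a
          # 16 GB GPU while the rest of the weights stay in system RAM.
          from llama_cpp import Llama

          llm = Llama(
              model_path="models/flatdolphinmaid-8x7b.Q4_K_M.gguf",  # hypothetical file
              n_gpu_layers=20,  # raise until VRAM is full; the rest runs on CPU
              n_ctx=4096,       # context window
          )

          out = llm("Explain defining a new word in FORTH.", max_tokens=256)
          print(out["choices"][0]["text"])
          ```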

          You're better off talking through the subject and the abstract ideas you are struggling with. That lets the model respond using the hidden sources with less obfuscation. At least, that has been my experience.