• Sparking@lemm.ee · 3 days ago

    What are you guys working on where ChatGPT can figure it out? Honestly, I haven’t been able to get a scrap of working code beyond a trivial example out of that thing or any other LLM.

    • 0x0@programming.dev · 3 days ago

      I’m forced to use Copilot at work, and as far as code completion goes, it gets it right 10-15% of the time… the rest of the time it just suggests random, credible-looking noise or hallucinates variables and shit.

        • 0x0@programming.dev · 2 days ago

          > I would quit, immediately.

          Pay my bills. Thanks.
          I’ve been dusting off the CV, though, for multiple other reasons.

          • 9bananas@lemmy.world · 2 days ago

            how surprising! /s

            but seriously, it’s almost never one (1) thing that goes wrong when some idiotic mandate gets handed down from management.

            a manager that mandates use of Copilot (or any tool unfit for the job at hand) is a manager that’s going to mandate a bunch of other nonsensical shit that gets in the way of work. every time.

            • 0x0@programming.dev · 2 days ago

              It’s a company that operates at scale, and the orders came from way above. As did RTO after two years of working fully from home, etc, etc.

    • Thorry84@feddit.nl · 3 days ago

      Agreed. Yesterday I wanted to test a new config on my router, which is configured using scripts. So I figured it would be a good idea to have ChatGPT work it out for me instead of spending three hours reading documentation and trying tutorials. It was a test scenario, so I thought it might do well.

      It did not do well at all. The scripts were mostly correct, but often in the wrong order (referencing a thing before actually defining it). Sometimes the syntax was totally wrong, and it kept mixing version 6 syntax with version 7 syntax (I’m on 7). It also makes mistakes, and when I point one out it says, “Oh, you are totally right, I made a mistake,” then explains the mistake and outputs new code. However, more often than not the new code contains the exact same mistake. This is probably because of a lack of training data, where it is referencing only one example and that example just had a mistake in it.

      In the end I gave up on ChatGPT, searched for my test scenario, and it turned out a friendly dude on a forum had put together a tutorial. So I followed that, and it almost worked right away. A couple of minutes of tweaking and testing and I had it working.

      I’m afraid of a future where forums and such don’t exist and sources like Reddit get fucked and nuked. In an AI-driven world the incentive for creating new original content is way lower, so when the AI doesn’t know the answer you’re just hooped and have to reinvent the wheel yourself. In the long run this will destroy productivity and not deliver the gains people are hoping for at the moment.

      • Hoimo@ani.social · 1 day ago

        > This is probably because of a lack of training data, where it is referencing only one example and that example just had a mistake in it.

        The one example could be flawless, but the output of an LLM is influenced by all of its input. 99.999% of that input is irrelevant to your situation, so of course it’s going to degrade the output.

        What you (and everyone else) need is a good search engine to find the needle in the haystack of human knowledge. You don’t need that haystack ground down to dust to give you a needle-shaped piece of crap with slightly more iron than average.

      • baltakatei@sopuli.xyz · 3 days ago

        It’s like useful information grows as fruit from trees in a digital forest we call the Internet. However, the fruit spoils over time (becomes less relevant) and requires fertile soil (educated people being online) that can be eroded away (not investing in education or infrastructure) or paved over (intellectual property law).

        LLMs are like processed food created in factories that lack key characteristics of the more nutritious fresh ingredients you can find at a farmer’s market. Sure, you can feed more people (provide faster answers to questions) by growing a monocrop (training your LLM on a handful of generous people who publish under Creative Commons licenses like CC BY-SA on Stack Overflow), but you also risk a plague destroying your industry, like how the Panama disease fungus destroyed nearly all Gros Michel banana farming (companies firing those generous software developers who “waste time” volunteering in communities like Stack Overflow and replacing them with LLMs).

        There’s some solarpunk ethical fusion of LLMs and sustainable cultivation of high-quality information, but we’re definitely not there yet.

        • Jayjader@jlai.lu · 2 days ago

          To extend your metaphor: be the squirrel in the digital forest. Compulsively bury acorns for others to find in times of need. Forget about most of the burial locations so that new trees are always sprouting and spreading. Do not get attached to a single trunk; you are made to dance across the canopy.

    • Terrasque@infosec.pub · edited · 3 days ago

      When I had to get up to speed on a new language, it was very helpful. It’s also great for writing low-to-medium-complexity scripts in Python, PowerShell, and Bash, and for putting together Ansible tasks. That said, I’ve been programming for ~30 years and could have done those things myself if I’d needed to, but it would have taken some time (a lot of it spent looking up documentation and writing boilerplate code).

      It’s also nice for writing C# unit tests.

      However, the times I’ve been stuck on my main languages, it’s been utterly useless.

      • MagicShel@lemmy.zip · 3 days ago

        ChatGPT is extremely useful if you already know what you’re doing. It’s garbage if you’re relying on it to write code for you. There are nearly always bugs and edge cases and hallucinations and version mismatches.

        It’s also probably useful for looking like you kinda know what you’re doing as a junior in a new project. I’ve seen some shit in code reviews that was clearly AI slop. Usually from exactly the developers you expect.

      • prettybunnys@sh.itjust.works · 3 days ago

        I love asking AI to generate a framework/structure for a project that I then barely use, only to realize I shoulda just done it myself.

    • CeeBee_Eh@lemmy.world · 3 days ago

      I’ve been using (mostly) Claude to help me write an application in a language I’m not experienced with (Rust), mainly to help me see what I did wrong with the syntax or with the borrow checker. Coming from Java, Python, and C/C++, it’s very easy to manage memory in exactly the ways Rust won’t let you.
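
      For example (a made-up snippet, not from my actual project), this is the kind of thing I’d write without thinking in Java or Python, and the sort of fix I end up with after the borrow checker complains:

      ```rust
      fn main() {
          let mut scores = vec![10, 20, 30];

          // Coming from Java/Python this feels natural, but it doesn't compile:
          // the shared borrow `first` is still alive when we try to mutate `scores`.
          //
          //     let first = &scores[0];
          //     scores.push(40); // error[E0502]: cannot borrow `scores` as mutable
          //     println!("{first}");

          // Fix: copy the value out (i32 is Copy), so no reference is held
          // across the mutation.
          let first = scores[0];
          scores.push(40);
          println!("first = {first}, scores = {:?}", scores);
      }
      ```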

      That being said, any new code it generates for me I end up having to fix 9 times out of 10. So in a weird way I’ve been learning more about Rust by having to correct code that’s been generated by an LLM.

      I still think that for the next while LLMs will mostly be useful as a hyper-spell-checker for code, not for generating new code. I often find that I would have saved time if I had just tackled the problem myself instead of trying to rely on an LLM. Although sometimes an LLM can give me an idea of how to solve a problem.

    • sugar_in_your_tea@sh.itjust.works · edited · 3 days ago

      Same. It can generate credible-looking code, but I don’t find it very useful. Here’s what I’ve tried:

      • describe a function - it takes longer to read the explanation than to grok the code
      • generate tests - it hallucinates arguments, doesn’t do proper boundary checks, etc.
      • look up docs - mostly useful for finding search terms for the real docs

      The second was kind of useful since it provided the structure, but I still replaced 90% of it; the sketch below shows the kind of thing I end up writing myself.
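
      A rough illustration, with a made-up clamp_percent function purely for the sake of example: the generated tests tend to cover the happy path, while the boundary cases are the ones I add by hand.

      ```rust
      /// Made-up example function, just to show the kind of tests I mean.
      fn clamp_percent(value: i32) -> i32 {
          value.max(0).min(100)
      }

      #[cfg(test)]
      mod tests {
          use super::*;

          // Generated tests usually stop at something like this (happy path only).
          #[test]
          fn clamps_ordinary_values() {
              assert_eq!(clamp_percent(42), 42);
          }

          // The boundary checks are what I end up adding myself.
          #[test]
          fn clamps_at_the_edges() {
              assert_eq!(clamp_percent(-1), 0);
              assert_eq!(clamp_percent(0), 0);
              assert_eq!(clamp_percent(100), 100);
              assert_eq!(clamp_percent(101), 100);
          }
      }
      ```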

      I’m still messing with it, but beyond solving “blank page syndrome,” it’s not that great. And for that, I mostly just copy something from elsewhere in the project anyway, which is often faster than going to the LLM.

      I’m really bad at explaining what I want, because by the time I can do that, it’s faster to just build it. That said, I’m a senior dev, so I’ve been around the block a bit.

    • Kng@feddit.rocks · 2 days ago

      Lately I have been using it for React code, and it seems to be fairly decent at that. As a consequence, when it does not work I get completely lost, but despite this I think I have learned more with it than I would have without.

    • daniskarma@lemmy.dbzer0.com · edited · 3 days ago

      I used it a few days ago to translate a math formula into code.

      Here is the formula: https://wikimedia.org/api/rest_v1/media/math/render/svg/126b6117904ad47459ad0caa791f296e69621782

      It’s not the most complicated thing; I could have done it myself, but it would have taken me some time. I just input the formula directly along with the desired language, and the result was well done and worked flawlessly.

      It saved me some time typing things out and searching online for a few things.
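
      I won’t reproduce the linked formula here, but as a rough sketch of what I mean by translating a formula into code (using a stand-in formula, the root mean square error, not the one behind the link):

      ```rust
      /// Stand-in example: RMSE = sqrt((1/n) * sum((y_i - y_hat_i)^2)).
      /// This is not the formula from the link above, just an illustration.
      fn rmse(observed: &[f64], predicted: &[f64]) -> f64 {
          assert_eq!(observed.len(), predicted.len());
          let n = observed.len() as f64;
          let sum_sq: f64 = observed
              .iter()
              .zip(predicted)
              .map(|(y, y_hat)| (y - y_hat).powi(2))
              .sum();
          (sum_sq / n).sqrt()
      }

      fn main() {
          let y = [1.0, 2.0, 3.0];
          let y_hat = [1.1, 1.9, 3.2];
          println!("RMSE = {:.4}", rmse(&y, &y_hat));
      }
      ```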