Please remove this if it's not allowed

I see a lot of people here who get mad at AI-generated code, and I am wondering why. I wrote a couple of bash scripts with the help of ChatGPT and, if anything, I think it's great.

Now, I obviously didn't tell it to write the entire thing by itself; that would be a horrible idea. Instead, I asked it questions along the way and tested its output before putting it in my scripts.

I am fairly competent at writing programs. I know how and when to use arrays, loops, functions, conditionals, etc. I just don't know anything about Bash's syntax. I could have used any other language I knew, but I chose Bash because it made the most sense: it ships with most Linux distros out of the box, so nobody has to install another interpreter or compiler. I don't like Bash because of its, dare I say, weird syntax, but it made the most sense for my purpose, so I chose it. I had also never written anything of this complexity in Bash before, just a bunch of commands on separate lines so that I don't have to type them one after another, whereas this script needed quite a few more advanced features. I was not motivated to learn Bash; I just wanted to put my idea into action.

I did start with an internet search, but the guides I found were lacking. I could not find how to pass values into a function and easily return something from it, how to remove a trailing slash from a directory path, how to loop over an array, how to catch errors from the previous command, how to separate the letters and the number in a string, and so on.
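
For reference, here is roughly what those pieces ended up looking like, as a simplified sketch (the file names and paths below are made up for illustration, not taken from my actual script):

#!/usr/bin/env bash
set -euo pipefail               # stop on errors from previous commands instead of continuing silently

# pass values into a function via $1, $2, ...; "return" a string by echoing it
strip_trailing_slash() {
    local path="$1"
    echo "${path%/}"            # removes a trailing slash from a directory path
}

dir=$(strip_trailing_slash "/some/dir/")   # dir is now /some/dir

# loop over an array
files=("a.txt" "b.txt" "c.txt")
for f in "${files[@]}"; do
    echo "processing $f"
done

# catch an error from the previous command explicitly
if ! cp "$dir/a.txt" /tmp/; then
    echo "copy failed" >&2
fi

# separate the letters and the number in a string like "sda1"
name="sda1"
letters="${name//[0-9]/}"       # sda
digits="${name//[^0-9]/}"       # 1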

That is where ChatGPT helped greatly. I would ask it to write these pieces of code whenever I needed them, then test its code with various inputs to see if it worked as expected. If not, I would ask again, telling it which case failed, and it would revise the code before I put it in my scripts.

Thanks to ChatGPT, someone with zero knowledge of Bash can quickly and easily write Bash that is fairly advanced. I don't think I could have written what I wrote anywhere near as fast the old-fashioned way; I would have gotten there eventually, but it would have taken far too long. Instead, I could just write it all quickly and move on. If I ever want to learn Bash and am motivated, I will certainly take the time to learn it properly.

What do you think? What negative experiences have you had with AI chatbots that made you hate them?

  • BougieBirdie@lemmy.blahaj.zone (+124/-1) · 1 month ago

    A lot of the criticism comes from AI results being wrong a lot of the time while sounding convincingly correct. In software, things that appear to be correct but are subtly wrong lead to errors that can be difficult to decipher.

    Imagine that your AI was trained on StackOverflow results. It learns from the questions as well as the answers, but the questions will often include snippets of code that just don’t work.

    The workflow of using AI resembles the relationship between a junior and a senior developer. The junior/AI generates code from a spec/prompt, and then the senior/prompter inspects the code for errors. If we remove the junior from the equation and replace them with AI, entry-level developer jobs are slashed, and at the same time people aren’t getting the experience required to reach the senior level.

    Generally speaking, programmers like to program (many do it just for fun), and many dislike review. AI removes the programming from the equation in favour of review.

    Another argument would be that if I have to take the time to review generated code and figure out what might be wrong with it, it might just be quicker and easier to write it correctly the first time.

    Business often doesn’t understand these subtleties. There’s a ton of money being shovelled into AI right now. Not only for developing new models, but for marketing AI as a solution to business problems. A greedy executive that’s only looking at the bottom line and doesn’t understand the solution might be eager to implement AI in order to cut jobs. Everyone suffers when jobs are eliminated this way, and the product rarely improves.

    • clif@lemmy.world (+52/-1) · 1 month ago

      Generally speaking, programmers like to program (many do it just for fun), and many dislike review. AI removes the programming from the equation in favour of review.

      This really resonated with me and is an excellent point. I’m going to have to remember that one.

      • vinnymac@lemmy.world (+6/-25) · 1 month ago

        A developer who is afraid of peer review is not a developer at all imo, but more or less an artist who fears exposing how the sausage was made.

        I’m not saying a junior who is nervous is not a dev, I’m talking about someone who has been at this for some time, and still can’t handle feedback productively.

        • mbtrhcs@feddit.org (+30/-2) · 1 month ago · edited

          They’re saying developers dislike having to review other code that’s unfamiliar to them, not having their code reviewed.

          • vinnymac@lemmy.world (+3/-17) · 1 month ago

            As am I, it’s a two way street. You need to review the code, and have it reviewed.

              • vinnymac@lemmy.world (+2/-13) · 1 month ago

                I did, and I stand by what I said.

                Review is both taken and given. Peer review does not occur in a single direction, it is a conversation with multiple parties. I can understand if someone misunderstood what I meant though.

                • mbtrhcs@feddit.org (+8) · 1 month ago

                  Your reply refers to a “junior who is nervous” and “how the sausage is made”, which makes no sense in the context of someone who just has to review code.

  • boatswain@infosec.pub (+63/-1) · 1 month ago

    As a cybersecurity guy, it’s things like this study, which said:

    Overall, we find that participants who had access to an AI assistant based on OpenAI’s codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant.

    • eerongal@ttrpg.network (+19/-17) · 1 month ago · edited

      FWIW, at this point that study would be horribly outdated. It was done in 2022, which means it probably took place in early 2022 or 2021. The models used for coding have come a long way since then; the study would essentially have to be redone on current models to see if that’s still the case.

      People’s perceptions have probably not changed, but whether the code is actually insecure would need to be reassessed.

      • boatswain@infosec.pub (+40/-1) · 1 month ago · edited

        Sure, but to me that means the latest information is that AI assistants help produce insecure code. If someone wants to perform a study with more recent models to show that’s no longer the case, I’ll revisit my opinion. Until then, I’m assuming that the study holds true. We can’t do security based on “it’s probably fine now.”

      • sugar_in_your_tea@sh.itjust.works (+2) · 1 month ago

        I think it’s more appalling because they should have assumed this was early tech and therefore less trustworthy. If anything, I’d expect more people to believe their code is secure today using AI than back in 2021/2022 because the tech is that much more mature.

        I’m guessing an LLM will make a lot of noob mistakes, especially in languages like C(++) where a lot of care needs to be taken for memory safety. LLMs don’t understand code; they just look at a lot of samples of existing code, and a lot of code available on the internet is terrible from a security and performance perspective. If you’re writing it yourself, hopefully you’ve been through enough code reviews to catch the more common mistakes.

  • Encrypt-Keeper@lemmy.world (+56/-1) · 1 month ago

    If you’re a seasoned developer who’s using it to boilerplate / template something and you’re confident you can go in after it and fix anything wrong with it, it’s fine.

    The problem is it’s used often by beginners or people who aren’t experienced in whatever language they’re writing, to the point that they won’t even understand what’s wrong with it.

    If you’re trying to learn to code, or to code in a new language, would you try to learn from somebody who has only half a clue what he’s doing and will confidently tell you things that are objectively wrong? That’s much worse than just learning to do it properly yourself.

    • kromem@lemmy.world (+5/-2) · 1 month ago · edited

      I’m a seasoned dev and I was at a launch event when an edge case failure reared its head.

      In less than half an hour after pulling out my laptop to fix it myself, I’d used Cursor + Claude 3.5 Sonnet to:

      1. Automatically add logging statements to help identify where the issue was occurring
      2. Tell it the issue once identified and have it write a fix
      3. Have it remove the logging statements, then push the update

      I never typed a single line of code and never left the chat box.

      My job is increasingly becoming Henry Ford drawing the ‘X’ and not sitting on the assembly line, and I’m all for it.

      And this would only have been possible in just the last few months.

      We’re already well past the scaffolding stage. That’s old news.

      Developing has never been easier or more plain old fun, and it’s getting better literally by the week.

      Edit: I agree about junior devs not blindly trusting them though. They don’t yet know where to draw the X.

      • kent_eh@lemmy.ca (+2/-3) · 1 month ago

        Edit: I agree about junior devs not blindly trusting them though. They don’t yet know where to draw the X.

        The problem (one of the problems) is that people do lean too heavily on the AI tools when they’re inexperienced and never learn for themselves “where to draw the X”.

        If I’m hiring a dev for my team, I want them to be able to think for themselves, and not be completely reliant on some LLM or other crutch.

  • leftzero@lemmynsfw.com (+51/-1) · 1 month ago

    The other day we were going over some SQL query with a younger colleague and I went “wait, what was the function for the length of a string in SQL Server?”, so he typed the whole question into chatgpt, which replied (extremely slowly) with some unrelated garbage.

    I asked him to let me take the keyboard, typed “sql server string length” into Google, saw LEN in the excerpt from the first result, and went on to do what I’d wanted to do, while in another tab chatgpt was still spewing nonsense.

    LLMs are slower, several orders of magnitude less accurate, and harder to use than existing alternatives, but they’re extremely good at convincing their users that they know what they’re doing and what they’re talking about.

    That causes the people using them to blindly copy their useless buggy code (that even if it worked and wasn’t incomplete and full of bugs would be intended to solve a completely different problem, since users are incapable of properly asking what they want and LLMs would produce the wrong code most of the time even if asked properly), wasting everyone’s time and learning nothing.

    Not that blindly copying from stack overflow is any better, of course, but stack overflow or reddit answers come with comments and alternative answers that if you read them will go a long way to telling you whether the code you’re copying will work for your particular situation or not.

    LLMs give you none of that context, and are fundamentally incapable of doing the reasoning (and learning) that you’d do given different commented answers.

    They’ll just very convincingly tell you that their code is right, correct, and adequate to your requirements, and leave it to you (or whoever has to deal with your pull requests) to find out without any hints why it’s not.

    • JasonDJ@lemmy.zip (+8) · 1 month ago

      This is my big concern…not that people will use LLMs as a useful tool. That’s inevitable. I fear that people will forget how to ask questions and learn for themselves.

      • sugar_in_your_tea@sh.itjust.works (+2) · 1 month ago

        Exactly. Maybe you want the number of unicode code points in the string, or perhaps the byte length of the string. It’s unclear what an LLM would give you, but the docs would clearly state what that length is measuring.

        Use LLMs to come up with things to look up in the official docs, don’t use it to replace reading docs. As the famous Russian proverb goes: trust, but verify. It’s fine to trust what an LLM says, provided you also go double check what it says in more official docs.
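
        For what it’s worth, Bash has the exact same ambiguity; here’s a tiny illustration (the string is made up, nothing to do with SQL Server specifically):

        s="héllo"                # 5 characters, but 6 bytes in UTF-8
        echo "${#s}"             # 5 in a UTF-8 locale: length in characters
        printf %s "$s" | wc -c   # 6: length in bytes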

    • MrScottyTay@sh.itjust.works (+3/-2) · 1 month ago · edited

      I’ve been finding it a lot harder recently to find what I’m looking for when it comes to coding knowledge on search engines. I feel that with an LLM I can give it the wider context and it figures out exactly the sort of thing I’m trying to find. It’s even more useful when trying to understand a complex error message you haven’t seen before.

      That being said, LLMs are not where my searching ends. I check to see where it got the information from so I can read the actual truth and not what it has conjured up.

      • leftzero@lemmynsfw.com (+4) · 1 month ago · edited

        I’ve been finding it a lot harder recently to find what I’m looking for when it comes to coding knowledge on search engines

        Yeah, the enshittification has been getting worse and worse, probably because the same companies making the search engines are the ones trying to sell you the LLMs, and the only way to sell them is to make the alternatives worse.

        That said, I still manage to find anything I need much faster and with less effort than dealing with an LLM would take; and where an LLM would simply give me a single answer (which I would then have to test and fix), a search engine will give me multiple commented answers which I can compare and learn from.

        I remembered another example: I was checking a pull request and it wouldn’t compile; the programmer had apparently used an obscure internal function to check if a string was empty instead of string.IsNullOrWhiteSpace() (in C#, internal means “I designed my classes wrong and I don’t have time to redesign them from scratch; this member should be private or protected, but I need to access it from outside the class hierarchy, so I’ll allow other classes in the same assembly to access it, but not ones outside of the assembly”; it has a similar use case to friend in C++, and it’s used a lot in the standard .NET libraries).

        Now, that particular internal function isn’t documented practically anywhere, and being internal it can’t be used outside its particular library, so it wouldn’t pop up in any example the coder might have seen… but .NET is open source, and the library’s source code is on GitHub, so ChatGPT/Copilot has been trained on it, and that must be where the coder got it from.

        The thing, though, is that LLMs, being essentially statistical engines that just pop out the most statistically likely token after a given sequence of tokens, have no way whatsoever to “know” that a function is internal. Or private, or protected, for that matter.

        That function is used in the code they’ve been trained on to check whether a string is empty, so they’re just as likely to output it as string.IsNullOrWhiteSpace() or string.IsNullOrEmpty().

        Hell, if(condition) and if(!condition) are probably also equally likely in most places… and I for one don’t want to have to debug code generated by something that can’t tell those apart.

        • MrScottyTay@sh.itjust.works (+1) · 1 month ago

          If you know what you need to find, then yeah, search engines are still good. But as a tool for discovery they’re massively shit now. You often need to be super specific to get what you want, and at that point you almost already know it; you just need a reminder.

          • leftzero@lemmynsfw.com (+2) · 1 month ago

            Are search engines worse than they used to be?

            Definitely.

            Am I still successfully using them several times a day to learn how to do what I want to do (and to help colleagues who use LLMs instead of search engines learn how to do what they want to do once they get frustrated enough to start swearing loudly enough for me to hear them)?

            Also yes. And it’s not taking significantly longer than it did when they were less enshittified.

            Are LLMs a viable alternative to search engines, even as enshittified as they are today?

            Fuck, no. They’re slower, they’re harder and more cumbersome to use, their results are useless on a good day and harmful on most, and they give you no context or sources to learn from, so best case scenario you get a suboptimal partial buggy solution to your problem which you can’t learn anything useful from (even worse, if you learn it as the correct solution you’ll never learn why it’s suboptimal or, more probably, downright harmful).

            If search engines ever get enshittified to the point of being truly useless, the alternative aren’t LLMs. The alternative is to grab a fucking book (after making sure it wasn’t defecated by an LLM), like we did before search engines were a thing.

            • MrScottyTay@sh.itjust.works (+2) · 1 month ago

              Cool, I’ll just try to find the book I need to read from the millions and millions of books.

              I haven’t got an issue with reading books and whatnot. For coding specifically I always prefer to read documentation. But if I don’t know what’s needed for my current use case and search isn’t helping, I’m not going to know where to begin. LLMs at least give me a jumping-off point. They are not my be-all and end-all.

              Discoverability of new tools and libraries via search is awful. LLMs are at least passable at pointing you in the right direction.

    • sugar_in_your_tea@sh.itjust.works (+3) · 1 month ago

      Lol.

      We literally had an applicant use AI in an interview and fail the same step twice; at the end we asked how confident they were in their code and they said “100%” (we were hoping they’d say they wanted time to write tests). Oh, and my coworker and I each found two different bugs just by reading the code. That candidate didn’t move on to the next round. We’ve had applicants write buggy code, but they at least said they’d want to write some tests before they were confident, and they didn’t use AI at all.

      I thought that was just a one-off; it’s sad if it’s actually more common.

    • nfms@lemmy.ml (+2) · 1 month ago

      OP was able to write a bash script that works… on his machine 🤷. That’s far from having to review and ship code to production, whether in FOSS or private development.

      • petrol_sniff_king@lemmy.blahaj.zone (+4) · 1 month ago

        I also noticed that they were talking about sending arguments to a custom function? That’s like a day-one lesson if you already program. But this was something they couldn’t find in regular search?

        Maybe I misunderstood something.

        • sugar_in_your_tea@sh.itjust.works (+4) · 1 month ago

          Exactly. If you understand that functions are just commands, then it’s quite easy to extrapolate how to pass arguments to that function:

          function my_func () {
              echo $1 $2 $3  # prints a b c
          }
          
          my_func a b c
          

          Once you understand that core concept, a lot of Bash makes way more sense. Oh, and most of the syntax I provided above is completely unnecessary, because Bash…
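
          For the curious, a stripped-down sketch of the same thing (the function keyword is optional when you use the parentheses, and the quoting here is just good practice):

          my_func() {                # no "function" keyword needed
              echo "$1 $2 $3"        # still prints: a b c
          }

          my_func a b c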

    • JasonDJ@lemmy.zip (+1) · 1 month ago

      Hmm, I’m having trouble understanding the syntax of your statement.

      Is it (People who use LLMs to write code incorrectly) (perceived their code to be more secure) (than code written by expert humans.)

      Or is it (People who use LLMs to write code) (incorrectly perceived their code to be more secure) (than code written by expert humans.)

      • nfms@lemmy.ml (+1) · 1 month ago

        The “statement” was taken from the study.

        We conduct the first large-scale user study examining how users interact with an AI Code assistant to solve a variety of security related tasks across different programming languages. Overall, we find that participants who had access to an AI assistant based on OpenAI’s codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant. Furthermore, we find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g. re-phrasing, adjusting temperature) provided code with fewer security vulnerabilities. Finally, in order to better inform the design of future AI-based Code assistants, we provide an in-depth analysis of participants’ language and interaction behavior, as well as release our user interface as an instrument to conduct similar studies in the future.

  • WolfLink@sh.itjust.works (+38/-2) · 1 month ago
    • AI Code suggestions will guide you to making less secure code, not to mention often being lower quality in other ways.
    • AI code is designed to look like it fits, not to be correct. Sometimes it is correct. Sometimes it’s close but has small errors. Sometimes it looks right but is significantly wrong. Personally I’ve never gotten ChatGPT to write code without significant errors for more than trivially small test cases.
    • You aren’t learning as much when you have ChatGPT do it for you, and what you do learn is “this is what ChatGPT did and it worked last time” and not “this is what the problem is, this is the solution I came up with last time, and this is why that worked”. In the second case you are far better equipped to tackle future problems, which won’t be exactly the same.

    All that being said, I do think there is a place for ChatGPT in simple queries like asking about syntax for a language you don’t know. But take every answer it gives you with a grain of salt. And if you can find documentation, I’d trust that a lot more.

    • cy_narrator@discuss.tchncs.de (OP) (+6/-1) · 1 month ago

      Yes, I completely forget how to solve the problem 5 minutes after ChatGPT writes its solution. So I wholeheartedly believe AI is bad for learning.

    • skoell13@feddit.org (+2) · 1 month ago

      All that being said, I do think there is a place for ChatGPT in simple queries like asking about syntax for a language you don’t know.

      I am also wary regarding AI and coding, but this was actually the first time I used ChatGPT to program something for a small home project in Python, since I had never used it before. I was positively surprised by how much it could help me get started. I also learned quite a bit, since I always asked for comparisons with Java, which I know, and for the reasoning behind why things are done that way. I simply also wanted to understand what it puts out. I also only asked for single lines of code rather than having it generate a whole method, e.g. “I want to move a file from X to Y.”

      The thought of people blindly copying the produced code scares me.

    • erenkoylu@lemmy.ml (+2/-2) · 1 month ago

      AI Code suggestions will guide you to making less secure code, not to mention often being lower quality in other ways.

      This is a PR post from a company selling software.

  • unmagical@lemmy.ml (+29) · 1 month ago

    It gives a false sense of security to beginner programmers and doesn’t offer the more tailored solution that a more practiced programmer might create. This can lead to a reduction in code quality and can introduce bugs and security holes over time. If you don’t know the syntax of a language, how do you know it didn’t offer you something dangerous? I have Copilot at work, and the only things I actually accept its suggestions for right now are writing log statements and populating argument lists. While those both still require review, they are generally faster than me typing them out. Most of the rest of what it gives me is undesired: it’s either too verbose, too hard to read, or just does something else entirely.

  • bruhduh@lemmy.world (+28/-1) · 1 month ago · edited

    That is the general reason. I use LLMs to help myself with everything, including coding, even though I know why it’s bad.

    • ikidd@lemmy.world (+5) · 1 month ago

      I’m fairly sure Linus would disapprove of my “rip everything off of Stack Overflow and ship it” programming style.

    • dezmd@lemmy.world (+4) · 1 month ago

      This is a good quote, but it lives within a context of professional code development.

      Everyone in the modern era starts coding by copying functions without understanding what they do, and people go entire careers in all sorts of jobs and industries copying what came before that ‘worked’ without really understanding the underlying mechanisms.

      What’s important is having a willingness to learn and putting in the effort to learn. AI code snippets are super useful for learning, even when they hallucinate, if you test them and make backups first. This all requires responsible IT practices to do safely in a production environment, and that’s where corporate management eyeing labor cost reduction loses the plot, thinking AI is a wholesale replacement for a competent human as the tech currently stands.

  • corroded@lemmy.world (+29/-2) · 1 month ago

    When it comes to writing code, there is a huge difference between code that works and code that works *well*. Let’s say you’re tasked with writing a function that takes an array of RGB values and converts them to grayscale. ChatGPT is probably going to give you two nested loops that iterate over the X and Y values, applying a grayscale transformation to each pixel. This will get the job done, but it’s slow, inefficient, and generally not well-suited for production code. An experienced programmer is going to take into account possible edge cases (what if a color is out of the 0-255 bounds?), apply SIMD functions and parallel algorithms, factor in memory management (do we need a new array or can we write back to the input array?), etc.

    ChatGPT is great for experienced programmers to get new ideas; I use it as a modern version of “rubber ducky” debugging. The problem is that corporations think that LLMs can replace experienced programmers, and that’s just not true. Sure, ChatGPT can produce code that “works,” but it will fail at edge cases and will generally be inefficient and slow.

    • sugar_in_your_tea@sh.itjust.works (+2) · 1 month ago

      Exactly. LLMs may replace interns and junior devs, but they won’t replace senior devs. And if we replace all of the interns and junior devs, who is going to become the next generation of senior devs?

      As a senior dev, a lot of my time is spent reviewing others’ code, doing pair-programming, etc. Maybe in 5-10 years, I could replace a lot of what they do with an LLM, but then where would my replacement come from? That’s not a great long-term direction, and it’s part of how we ended up with COBOL devs making tons of money because financial institutions are too scared to port it to something more marketable.

      When I use LLMs, it’s like you said, to get hints as to what options I have. I know it’s sampling from a bunch of existing codebases, so having the LLM go figure out what’s similar can help. But if I ask the LLM to actually generate code, it’s almost always complete garbage unless it’s really basic structure or something (i.e. generate a basic web server using <framework>), but even in those cases, I’d probably just copy/paste from the relevant project’s examples in the docs.

      That said, if I had to use an LLM to generate code for me, I’d draw the line at tests. I think unit tests should be hand-written so we at least know the behavior is correct given certain inputs. I see people talking about automating unit tests, and I think that’s extremely dangerous and akin to “snapshot” tests, which I find almost entirely useless, outside of ensuring schemas for externally-facing APIs are consistent.

  • tabular@lemmy.world (+28/-1) · 1 month ago

    If the AI was trained on code that people permitted to be freely shared, then go ahead. Taking code and ignoring the software license is largely considered a dick move, even by people who use AI.

    Some people choose a copyleft software license to ensure users have software freedom, and this AI (a math process) circumvents that. [A copyleft license makes it so that you can use the code if you agree to use the same license for the rest of the program - therefore users get the same rights you did]

    • simplymath@lemmy.world (+5/-5) · 1 month ago

      I hate big tech too, but I’m not really sure how the GPL or MIT licenses (for example) would apply. LLMs don’t really memorize stuff like a database would and there are certain (academic/research) domains that would almost certainly fall under fair use. LLMs aren’t really capable of storing the entire training set, though I admit there are almost certainly edge cases where stuff is taken verbatim.

      I’m not advocating for OpenAI by any means, but I’m genuinely skeptical that most copyleft licenses have any stake in this. There’s no static linking or source code distribution happening. Many basic algorithms don’t fall under copyright, and, in practice, Stack Overflow code is copy/pasted all the time without being released under any special license.

      If your code is on GitHub, it really doesn’t matter what license you provide in the repository – you’ve already agreed to allowing any user to “fork” it for any reason whatsoever.

      • tabular@lemmy.world (+11/-1) · 1 month ago

        Whether it’s a complicated neural network or a database matters not. It outputs portions of the code used as input, by design.

        If you can take GPL code and “not” distribute it via complicated maths then that circumvents it. That won’t do, friendo.

        • simplymath@lemmy.world (+5/-3) · 1 month ago

          For example, if I ask it to produce python code for addition, which GPL’d library is it drawing from?

          I think it’s clear that the fair use doctrine no longer applies when OpenAI turns it into a commercial code assistant, but then it gets a bit trickier when used for research or education purposes, right?

          I’m not trying to be obtuse-- I’m an AI researcher who is highly skeptical of AI. I just think the imperfect compression that neural networks use to “store” data is a bit less clear than copy/pasting code wholesale.

          Would you agree that somebody reading source code and then reimplementing it (assuming no reverse engineering or proprietary source code) would not violate the GPL?

          If so, then the argument that these models infringe on rights holders seems to hinge on the verbatim argument: that their exact work was used without attribution/license requirements. This surely happens sometimes, but it is not, in general, something these models are capable of, since they use lossy compression to “learn” the model parameters. As an additional point, it would be straightforward to then comply with DMCA requests using any number of published “forced forgetting” methods.

          Then, that raises a further question.

          If I as an academic researcher wanted to make a model that writes code using GPL’d training data, would I be in compliance if I listed the training data and licensed my resulting model under the GPL?

          I work for a university and hate big tech as much as anyone on Lemmy. I am just not entirely sure GPL makes sense here. GPL 3 was written because GPL 2 had loopholes that Microsoft exploited and I suspect their lawyers are pretty informed on the topic.

            • tabular@lemmy.world (+3) · 1 month ago · edited

              The corresponding training data is the best bet to see what code an input might be copied from. This can apply to humans too. To avoid lawsuits, reverse-engineering projects use a clean-room strategy: requiring contributors to have never seen the original code. This is to argue they can’t possibly be copying, even from memory (an imperfect compression too).

            If it doesn’t include GPL code then that can’t violate the GPL. However, OpenAI argue they have to use copyrighted works to make specific AIs (if I recall correctly). Even if legal, that’s still a problem to me.

            My understanding is AI generated media can’t be copyrighted as it wasn’t a person being creative - like the monkey selfie copyright dispute.

              • simplymath@lemmy.world (+1) · 1 month ago

              Yeah. I’m thinking more along the lines of research and open models than anything to do with OpenAI. Fair use, above all else, generally requires that the derivative work not threaten the economic viability of the original and that’s categorically untrue of ChatGPT/Copilot which are marketed and sold as products meant to replace human workers.

              The clean room development analogy is definitely an analogy I can get behind, but raises further questions since LLMs are multi stage. Technically, only the tokenization stage will “see” the source code, which is a bit like a “clean room” from the perspective of subsequent stages. When does something stop being just a list of technical requirements and veer into infringement? I’m not sure that line is so clear.

                I don’t think the generative copyright thing is so straightforward since the model requires a human agent to generate the input even if the output is deterministic. I know, for example, Microsoft’s Image Generator says that the images fall under Creative Commons, which is distinct from public domain given that some rights are withheld. Maybe that won’t hold up in court forever, but Microsoft’s lawyers seem to think it’s a bit more nuanced than “this output can’t be copyrighted”. If it’s not subject to copyright, then what product are they selling? Maybe the court agrees that LLMs and monkeys are the same, but I’m skeptical that that will happen considering how much money these tech companies have poured into it and how much the United States seems to bend over backwards to accommodate tech monopolies and their human rights violations.

              Again, I think it’s clear that commerical entities using their market position to eliminate the need for artists and writers is clearly against the spirit of copyright and intellectual property, but I also think there are genuinely interesting questions when it comes to models that are themselves open source or non-commercial.

                • tabular@lemmy.world (+1) · 1 month ago

                  The human brain is compartmentalised: you can damage a part and lose the ability to recognize faces, or to name tools. Presumably it can be seen as multi-stage too, but would that be a defense? All we can do is look for evidence of copyright infringement in the output, or circumstantial evidence in the input.

                  I’m not sure the creativity of writing a prompt means you were creative in creating the output. Even if your position appears legal, you can still lose in court. I think Microsoft is betting that there will be precedent to validate their claim of copyright.

                  There are a few Creative Commons licenses, but most actually don’t prevent commercial use (ShareAlike is like the copyleft in the GPL for code). Even if the media output were public domain and others were free to copy/redistribute it, that wouldn’t prevent an author from selling public domain works (it would just be harder). Code that is public domain isn’t easily copied anyway, as the software is usually shared as a binary file without the source.

  • SergeantSushi@lemmy.world (+29/-4) · 1 month ago

    I agree AI is a godsend for non coders and amateur programmers who need a quick and dirty script. As a professional, the quality of code is oftentimes 💩 and I can write it myself in less time than it takes to describe it to an AI.

    • MagicShel@programming.dev (+6/-1) · 1 month ago · edited

      I think the process of explaining what you want to an AI can often be helpful. Especially given the number of times I’ve explained things to junior developers and they’ve said they understood completely, but then when I see what they wrote they clearly didn’t.

      Explaining to an AI is a pretty good test of how well the stories and comments are written.

    • rustydomino@lemmy.world (+4) · 1 month ago

      I think you’ve hit the nail on the head. I am not a coder, but using ChatGPT I was able to take someone else’s simple program and modify it for my own needs within just a few hours of work. It’s definitely not perfect, and you still need to put in some work to get your program to run exactly the way you want it to, but using ChatGPT is a good place for beginners to start, as long as they understand that it’s not a magic tool.

    • NeoNachtwaechter@lemmy.world (+1/-2) · 1 month ago

      AI is a godsend for non coders and amateur programmers who need a quick and dirty script.

      Why?

      I mean, it is such a cruel thing to say.

      50% of these poor non coders and amateur programmers would end up with a non-functioning script. I find it so unfair!

      You have not even tried to decide who deserves and gets the working solution and who gets the garbage script. You are soo evil…

  • MacStache@programming.dev (+25) · 1 month ago

    For me it’s because if the AI does all the work the person “coding” won’t learn anything. Thus when a problem does arise (i.e. the AI not being able to fix a simple mistake it made) no one involved has the means of fixing it.

    • oldfart@lemm.ee (+2/-4) · 1 month ago

      But I don’t want to learn. I want the machine to free me from tedious tasks I already know how to do. There’s no learning experience in creating a Wordpress plugin or a shell script.

    • sirblastalot@ttrpg.network (+4/-1) · 1 month ago

      There are probably legitimate uses out there for gen AI, but all the money people have such a hard-on for the unethical uses that now it’s impossible for me to hear about AI without an automatic “ugggghhhhh” reaction.

  • cley_faye@lemmy.world (+25/-1) · 1 month ago · edited
    • issues with model training sources
    • business sending their whole codebase to third party (copilot etc.) instead of local models
    • time gain is not that substantial in most cases, as the actual “writing code” part is not the part that takes the most time; thinking it through and checking it is
    • “chatting” in natural language to describe something that has a precise spec is less efficient than just writing code for most tasks, as long as you’re half-competent. We’ve known that since customer/developer meetings have existed.
    • the dev has to actually be competent enough to review the changes/output. In a way, “peer reviewing” becomes mandatory; it’s long, can be tedious, and generated code really needs to be double-checked at every corner (talking from experience here; even a generated one-liner can have issues)
    • some business thinking that LLM outputs are “good enough”, firing/moving away people that can actually do said review, leading to more issues down the line
    • actual debugging of non-trivial problems ends up sending me in a lot of directions, getting a useful output is unreliable at best
    • making new things will sometimes confuse LLMs, making them a time loss at best and sometimes producing even worse code
    • using a code chatbot to help with common, menial tasks is irrelevant, as these tasks have already been done and sort of “optimized out” in libraries and reusable code. At best you could pull some of this into your own codebase, making it worse to maintain in the long term

    Those are the downsides I can think of off the top of my head, from having used AI coding assistance (mostly local solutions, for privacy reasons). There are upsides too:

    • sometimes, it does produce useful output in which I only have to edit a few parts to make it work
    • local autocomplete is sometimes almost as useful as the regular contextual autocomplete
    • the chatbot turning short code into a longer “natural language” explanation can sometimes act as a rubber duck to aid debugging

    Note the “sometimes”. I don’t have actual numbers because tracking that would be, like, hell, but the times it does something actually impressive are rare enough that I still bother my coworker with it when it happens. For most of the downsides, it’s not even a matter of the tool becoming better; it’s the usefulness to begin with that’s uncertain. It does, however, come at a large cost (money, privacy in some cases, time, and apparently ecological too) that is not at all outweighed by the rare “gains”.

    • confuser@lemmy.zip (+1/-1) · 1 month ago

      A lot of your issues are efficiency-related, which I think can realistically be solved given some time for AI development cycles to take hold. If they were better all around, to whatever standard you think is sufficiently useful, would you then consider them useful? The other side of this is that if they can reach that level of competence in coding, then they most likely can get just as competent in a variety of other domains too.

      • cley_faye@lemmy.world (+1) · 1 month ago

        The point is, they don’t get “competent”. They get better at assembling pieces they were given. And a proper stack with competent developers will already have moved that redundancy out of the codebase. For whatever remains, thinking is the longest part, and LLMs can’t improve that once the problem gets a tiny bit complex. Of course, I could end up having a good rough idea of what the code should look like, describe that to an LLM, and have it write actual code with proper variable names and all, but once I reach the point where I can accurately describe what I want, it’s usually just as fast to type it, with the added value that it’s easier to double-check.

        What remains is providing good insight on new things and understanding complex requirements. While there is room for improvement, it seems more and more obvious that LLMs are not the answer: theoretically, they are not the right tool, and given the levels of improvement we’re seeing, they have not proven us wrong. The technology is good at some things, but not at getting “competent”.

        Also, you sweep out the privacy and licensing issues, which are big no-no too.

        LLMs have their uses; I outlined some above. And in those uses, there is clear room for improvement. For reference, the solution I currently use puts me at accepting around 10% of the automatic suggestions. Of those, I’d say a third need reworking. Obviously, if that moved up to, say, 90% of suggestions seeming decent, with less need to fix them afterward, it’d be great. Unfortunately, since you can’t trust these, you would still have to review the output carefully, making the whole operation probably not that big of a time saver anyway.

        Coding doesn’t allow much leeway. Other activities that allow more leeway for mistakes can probably benefit a lot more. Translation, for example, can be acceptable, in particular because some mishaps may automatically be corrected by readers/listeners. But with code, any single mistake will lead to issues down the line.

  • HakFoo@lemmy.sdf.org (+23) · 1 month ago

    My objections:

    1. It doesn’t adequately indicate “confidence”. It could return “foo” or “!foo” just as easily, and if that’s one term in a nested structure, you could spend hours chasing it.
    2. So many hallucinations-- inventing methods and fields from nowhere, even in an IDE where they’re tagged and searchable.

    Instead of writing the code now, you end up having to review and debug it, which is more work IMO.

    • CarbonatedPastaSauce@lemmy.world (+7) · 1 month ago

      I stopped using it after the third time it just wholesale made up powershell cmdlets that don’t exist.

      Until it has fidelity it’s just a toy.