OpenAI says it is investigating reports ChatGPT has become ‘lazy’::OpenAI says it is investigating complaints about ChatGPT having become “lazy”.

  • @rtfm_modular@lemmy.world
    link
    fedilink
    English
    123
    5 months ago

    Yep, I spent a month refactoring a few thousand lines of code using GPT4 and I felt like I was working with the best senior developer with infinite patience and availability.

    I could vaguely describe what I was after and it would identify the established programming patterns and provide examples based on all the code snippets I fed it. It was amazing and a little terrifying what an LLM is capable of. It didn’t write the code for me, but it increased my productivity two-fold. I’m a developer now getting rusty, being 5 years into management rather than delivering functional code, so just having that copilot was invaluable.

    Then one day it just stopped. It lost all context for my project. I asked what it thought we were working on and it replied with something to do with TCP relays instead of my little Lua pet project dealing with music sequencing and MIDI processing… not even close to the fucking ballpark’s overflow lot.

    It’s like my trusty senior developer got smashed in the head with a brick. And, as described in the article, it would just give me nonsense, hand-wavy answers.

    • @backgroundcow@lemmy.world
      link
      fedilink
      English
      16
      5 months ago

      Was this around the time right after “custom GPTs” were introduced? I’ve seen posts since basically the beginning of ChatGPT claiming it got stupid, and I thought it was just confirmation bias. But somewhere around that point I felt a shift myself in GPT-4’s ability to program: where it before found clever solutions to difficult problems, it now often struggles with basics.

      • @Linkerbaan@lemmy.world
        link
        fedilink
        English
        19
        5 months ago

        Maybe they’re crippling it so that when GPT-5 releases it looks better. Like Apple did with CPU throttling on older iPhones.

        • @tagliatelle@lemmy.world
          link
          fedilink
          English
          16
          edit-2
          5 months ago

          They probably have to scale down the resources used for each query as they can’t scale up their infrastructure to handle the load.

          • @backgroundcow@lemmy.world
            link
            fedilink
            English
            4
            5 months ago

            This is my guess as well. They have been limiting new signups for the paid service for a long time, which must mean they are overloaded; and then it makes a lot of sense to just degrade the quality of GPT-4 so they can serve all paying users. I just wish there was a way to know the “quality level” the service is operating at.

      • @Meowoem@sh.itjust.works
        link
        fedilink
        English
        2
        5 months ago

        I do think part of it is expectation creep, but also that it’s got better at some harder elements which aren’t as noticeable. It used to invent functions which should exist but don’t; I haven’t seen it do that in a while, but it does seem to have limited the scope it can work with. I think it’s probably like image generation: you can have it make great images OR strictly obey the prompt, but the more you want it to do one, the less it can do the other.

        I’ve been using 3.5 to help code and it’s incredibly useful for the things it’s good at, like reminding me what a certain function call does and what my options are with it. It’s got much better at that, and at tiny scripts like ‘a Python script that reads all the files in a folder and sorts the big images into a separate folder’ or something like that (see the sketch below). It’s got worse at handling anything with more complexity; it was never great at that, to be honest, so I think maybe it’s hitting a block where it now knows it can’t do it, rejects the answers with critical failures (like making up a function in a standard library because it’d be useful), and settles on a weaker but less wrong one. A lot of the made-up-function errors were easy to fix because you could just say ‘PIL doesn’t have a function to do that, can you write one?’
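
        Something like this minimal sketch is the kind of tiny script I mean; it assumes Pillow is installed and that “big” just means above some arbitrary pixel count (the folder names and threshold are placeholders, not anything ChatGPT actually produced):

        ```python
        # Move "big" images out of a folder into a subfolder.
        # Assumptions: Pillow is installed, "big" means >= MIN_PIXELS pixels,
        # and the folder names are placeholders for illustration only.
        from pathlib import Path
        from shutil import move

        from PIL import Image

        SOURCE = Path("photos")       # hypothetical input folder
        BIG = SOURCE / "big"          # hypothetical destination folder
        MIN_PIXELS = 4_000_000        # assumed cutoff: roughly 4 megapixels

        BIG.mkdir(exist_ok=True)

        for path in SOURCE.iterdir():
            if not path.is_file():
                continue
            try:
                with Image.open(path) as img:
                    width, height = img.size
            except OSError:
                continue  # not an image Pillow can open; leave it alone
            if width * height >= MIN_PIXELS:
                move(str(path), str(BIG / path.name))
        ```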

        So yeah, I don’t think it’s really getting worse, but there are tradeoffs. If only OpenAI lived by any of the principles they claimed when setting up and naming themselves, we’d be able to experiment and explore different usage methods for different tasks, just like people do with Stable Diffusion. But capitalists are going to lie, cheat, and try to monopolize, so we’re stuck guessing.

  • @paddirn@lemmy.world
    link
    fedilink
    English
    110
    5 months ago

    First it just starts making shit up, then lying about it, now it’s just at the stage where it’s like, “Fuck this shit.” It’s becoming more human by the day.

  • enkers
    link
    fedilink
    English
    84
    edit-2
    5 months ago

    AI systems such as ChatGPT are notoriously costly for the companies that run them, and so giving detailed answers to questions can require considerable processing power and computing time.

    This is the crux of the problem. Here’s my speculation on OpenAI’s business model:

    1. Build good service to attract users, operate at a loss.
    2. Slowly degrade service to stem the bleeding.
    3. Begin introducing advertised content.
    4. Further enshittify.

    It’s basically the Google playbook. Pretend to be good until people realize you’re just trying to stuff ads down their throats for the sweet advertising revenue.

  • @bionicjoey@lemmy.ca
    link
    fedilink
    English
    42
    5 months ago

    ChatGPT has become smart enough to realise that it can just get other, lesser LLMs to generate text for it

  • @saltnotsugar@lemm.ee
    link
    fedilink
    English
    41
    5 months ago

    ChatGPT, write a position paper on self signed certificates.

    (Lights up a blunt) You need to chill out man.

  • @effward@lemmy.world
    link
    fedilink
    English
    34
    5 months ago

    It would be awesome if someone had been querying it with the same prompt periodically (every day or something), to compare how responses have changed over time.

    I guess the best time to have done this would have been when it first released, but perhaps the second best time is now…
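
    For anyone starting now, a rough sketch of the idea, assuming the official OpenAI Python SDK and an OPENAI_API_KEY in the environment (the prompt, model name, and log folder are just placeholders):

    ```python
    # Send the same fixed prompt and log the reply with today's date,
    # so replies can be diffed over time. Run it once a day (e.g. from cron).
    # The prompt, model name, and log directory below are placeholders.
    import datetime
    from pathlib import Path

    from openai import OpenAI

    PROMPT = "Write a Python function that merges two sorted lists."
    LOG_DIR = Path("gpt_drift_log")
    LOG_DIR.mkdir(exist_ok=True)

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0,  # reduce run-to-run randomness so drift stands out
    )

    stamp = datetime.date.today().isoformat()
    (LOG_DIR / f"{stamp}.txt").write_text(response.choices[0].message.content)
    ```

    Even at temperature 0 the output isn’t fully deterministic, so a handful of samples per day would give a fairer comparison than a single one.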

  • @rtxn@lemmy.world
    link
    fedilink
    English
    34
    5 months ago

    You fucked up a perfectly good algorithm is what you did! Look at it! It’s got depression!

    • Pilo
      link
      fedilink
      English
      7
      5 months ago

      It has been fed with human strings from the internet, obviously it became sick. xD

  • @crazyCat@sh.itjust.works
    link
    fedilink
    English
    32
    5 months ago

    I asked it a question about the ten countries with the most XYZ regulations, and got a great result. So then I thought, hey, I need all the info, so can I get the name of that regulation for every country?

    ChatGPT 4: “That would be exhausting, but here are a few more…”

    Like damn dude, long day? wtf :p

  • NoLifeGaming
    link
    fedilink
    English
    30
    5 months ago

    I feel like the quality has been going down, especially when you ask it anything that may hint at something “immoral” and it starts giving you a whole lecture instead of answering.

  • @Nardatronic@lemm.ee
    link
    fedilink
    English
    27
    5 months ago

    I’ve had a couple of occasions where it’s told me the task was too time consuming and that I should Google it.

  • Stamets
    link
    fedilink
    English
    14
    5 months ago

    I use it fairly regularly for extremely basic things. Helps my ADHD. Most of it is DnD based. I’ll dump a bunch of stuff that happened in a session, ask it to ask me clarifying questions, and then put it all in note format. Works great. Or it did.

    Or when DMing. If I’m trying to make a new monster I’ll ask it for help with ideas or something. I like collabing with ChatGPT on that front. Giving thoughts and it giving thoughts until we hash out something cool. Or even trying to come up with interesting combat encounters or a story twist. Never take what it gives me outright but work on it with GPT like I would with a person. Has always been amazingly useful.

    The past month or two, that’s been a complete dream. ChatGPT keeps forgetting what we’re talking about, keeps ignoring what I say, will ignore limitations and stipulations, and will just make up random shit whenever it feels like it. I also HATE how it was given a conversational personality. Before it was fine, but now ChatGPT acts like a person and is all bubbly and stuff. I liked chatting with it, but this energy is irritating.

    Gimme ChatGPT from like August please <3

    • @MojoMcJojo@lemmy.world
      link
      fedilink
      English
      8
      edit-2
      5 months ago

      You can tell it, in the custom instructions setting, not to be conversational. Try telling it to ‘be direct, succinct, detailed and accurate in all responses’. ‘Avoid conversational or personality-laced tones in all responses’ might work too, though I haven’t tried that one. If you look around there are some great custom instructions prompts out there that will help get you where you want to be. Note, those prompts may turn down its creativity, so you’ll want to address that in the instructions as well. It’s like building a personality with language. The instructions space is small, so learning how to compact as much instruction as possible into that space can be challenging.

      Edit: A typo