Angry users claim they are entitled to delete their own content from the site under the “right to be forgotten,” a legal right most prominently codified in the EU’s General Data Protection Regulation (GDPR). Among other things, the regulation protects consumers’ ability to delete their own data from a website, and to have data about them removed upon request. However, Stack Overflow’s Terms of Service contains a clause carving out Stack Overflow’s irrevocable ownership of all content subscribers provide to the site.
It really irritates me when a ToS simply states that they will act against the law.
It’s not quite that simple, though. GDPR is only concerned with personally identifiable information. Answers and comments on SO rarely contain that kind of information as long as you delete the username on them, so it’s not technically against GDPR if you keep the contents.
You could argue that people can be identified by their writing style. I have no idea how far you’d get with that though.
Frankly I don’t see any way whatsoever that this would fly, and that’s a good thing!
Imagine what it would mean for software development if one angry dev could request the deletion of all their contributions at a moment’s notice by pointing to a right to be forgotten. Documentation is really not meaningfully different from that.
If we can’t delete our questions and answers, can we poison the well by uploading masses of shitty questions and answers? If they like AI we could have it help us generate them.
The poison was there all along. The poison is us.
Inserts Spider-Man meme
Reddit/Stack/AI are the latest examples of an economic system where a few people monetize and get wealthy using the output of the very many.
Technofeudalism
It’s very precisely that.
Take all you want, it will only take a few hallucinations before no one trusts LLMs to write code or give advice
We already have those near constantly. And we still keep asking queries.
People assume that LLMs need to be ready to replace a principal engineer or a doctor or lawyer with decades of experience.
This is already at the point where we can replace an intern or one of the less good junior engineers. Because anyone who has done code review or has had to do rounds with medical interns knows… they are idiots who need people to check their work constantly. An LLM making up a function because it saw something similar on Stack Overflow and never tested it is not at all different from a hotshot intern who copied some code from Stack Overflow and never tested it.
Except one costs a lot less…
This is already at the point where we can replace an intern or one of the less good junior engineers.
This is a bad thing.
Not just because it will put the people you’re talking about out of work in the short term, but because it will prevent the next generation of developers from getting that low-level experience. They’re not “idiots”, they’re inexperienced. They need to get experience. They won’t if they’re replaced by automation.
First a nearly unprecedented worldwide pandemic, followed almost immediately by record-breaking layoffs, and then AI taking over the world. Man, it is really not a good time to start out as a new developer. I feel so fortunate that I started working full-time as a developer nearly a decade ago.
Dude, the pandemic was amazing for devs: tech companies were hiring like mad, and it was really easy to get your foot in the door. Now, between all the layoffs and AI, it is hellish.
People keep saying this but it’s just wrong.
Maybe I haven’t tried the language you have but it’s pretty damn good at code.
Granted, whatever it puts out needs to be tested and possibly edited but that’s the same thing we had to do with Stack Overflow answers.
I’ve tried a lot of scenarios and languages with various LLMs. The biggest takeaway I have is that AI can get you started on something or help you solve some issues. I’ve generally found that anything beyond a block or two of code becomes useless. The more it generates the more weirdness starts popping up, or it outright hallucinates.
For example, today I used an LLM to help me tighten up an incredibly verbose bit of code. Today was just not my day and I knew there was a cleaner way of doing it, but it just wasn’t coming to me. A quick “make this cleaner: <code>” and I was back to the rest of the code.
This is what LLMs are currently good for. They are just another tool like tab completion or code linting
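For instance, a made-up illustration (not the commenter’s actual code) of the kind of “make this cleaner” refactor described above, where the verbose and clean versions do exactly the same thing:

```python
# Verbose version: collect the squares of the even numbers in a list.
def squares_of_evens_verbose(numbers):
    result = []
    for n in numbers:
        if n % 2 == 0:
            square = n * n
            result.append(square)
    return result

# Cleaner equivalent, the sort of rewrite an LLM handles well:
def squares_of_evens(numbers):
    return [n * n for n in numbers if n % 2 == 0]
```

Both return `[4, 16]` for the input `[1, 2, 3, 4]`; only the presentation changes, which is why this kind of task is low-risk to delegate.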
Maybe for people who have no clue how to work with an LLM. They don’t have to be perfect to still be incredibly valuable, I make use of them all the time and hallucinations aren’t a problem if you use the right tools for the job in the right way.
The last time I saw someone talk about using the right LLM tool for the job, they were describing turning two minutes of writing a simple map/reduce into one minute of reading enough to confirm the generated one worked. I think I’ll pass on that.
[…]will only take a few hallucinations before no one trusts LLMs to write code or give advice
Because none of us have ever blindly pasted some code we got off google and crossed our fingers ;-)
We should already be at that point. We have already seen LLMs’ potential to inadvertently backdoor your code and to inadvertently help you violate copyright law (I guess we do need to wait to see what the courts rule, but I’ll be rooting for the open-source authors).
If you use LLMs in your professional work, you’re crazy. I would never be comfortable opening myself up to the legal and security liabilities of AI tools.
If you use LLMs in your professional work, you’re crazy
Eh, we use Copilot at work and it can be pretty helpful. You should always check and understand any code you commit to any project, so if you just blindly paste flawed code (like with Stack Overflow), that’s kind of on you for not understanding what you’re doing.
Why?? Please make this make sense. Having AI help with coding is ideal, and probably the greatest immediate use case. The web is an open resource. Why die on this stupid hill instead of advocating for a privacy argument that actually matters?
Edit: Okay, got it. Hinder significant human progress because a company I don’t like might make some more money from something I said in public, which has been a thing literally forever. You guys really lack a lot of life skills about how the world really works, huh?
Human progress is spending cities’ worth of electricity and water to ask Copilot how to use a library and have it lie back to you in natural language? Please make this make sense.
Why do people roll coal? Why do people vandalize electric car chargers? Why do people tie ropes across bike lanes?
Because a changing world is scary and people lash out at new things.
The coal rollers think they’re fighting a valiant fight against evil corporations too. They invested their effort into being a car guy, and it doesn’t feel fair that things are changing, so they want to hurt people benefitting from the new tech.
Because being able to delete your data from social networks you no longer wish to participate in, or that have banned you, as long as they specifically haven’t paid you for your contributions, is a privacy argument that actually matters, regardless and independent of AI.
Regarding AI, the problem is not with AI in general but with proprietary for-profit AI getting trained on open resources, even those with underlying license agreements that prevent that information from being monetized.
We’re in a capitalist system and these are for-profit companies, right? What do you think their goal is? It isn’t to help you. It’s to increase profits. That will probably lead to massive numbers of jobs being replaced by AI, and we will get nothing for giving them the data to train on. It’s purely parasitic. You should not advocate for it.
If it’s open and not-for-profit, it can maybe do good, but there’s no way this will.
Good to know that Stack Overflow will no longer be a trustworthy place to find solutions.
This sort of thing is so self-sabotaging. The website already has your comment, and a license to use it. By deleting your stuff from the web you only ensure that the AI is definitely going to be the better resource to go to for answers.
I’m not sure about that… in Europe don’t you have the right to insist that a website no longer use your content?
Not when you’ve agreed to a terms of service that hands over ownership of your content to Stack Overflow, leaving you merely licensed to use your own content.
I’d bet strongly that such a ToS is not legally enforceable.
How many trees does a person need to make one coffin…
It’s a metaphor for us killing ourselves in the process of deforestation, not a story of someone actually making a coffin.
It may not have been a wholly serious question.
You’re not a wholly serious person
You wound me.
Removed by mod
I counted around 30-32 in panel 2.
Thank you for your diligence.
I got an email ban.
1609 hours logged 431 solved threads
Well, it is important to comply with the terms of service established by the website. It is highly recommended to familiarize oneself with the legally binding documents of the platform, including the Terms of Service (Section 2.1), User Agreement (Section 4.2), and Community Guidelines (Section 3.1), which explicitly outline the obligations and restrictions imposed upon users. By refraining from engaging in activities explicitly prohibited within these sections, you will be better positioned to maintain compliance with the platform’s rules and regulations and not receive email bans in the future.
ITT: People unable to recognize a joke
Is this a joke?
This is an ironic ChatGPT answer, meant to (rightfully) creep you out.
NGL I read it and laughed at the AI-like response.
Then I felt sadness knowing AI is reading this and will regurgitate it back out.
Nope, it’s the “establishment is cool, Elon rocks” type.
You really don’t need anything near as complex as AI…a simple script could be configured to automatically close the issue as solved with a link to a randomly-selected unrelated issue.
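For illustration, a minimal sketch of what such a script’s core logic might look like; the function names and the issue-number scheme are entirely invented, and a real version would call some issue tracker’s API instead:

```python
import random

def pick_unrelated_issue(all_issues, current_issue):
    """Pick any issue number other than the current one to 'link' as a duplicate."""
    candidates = [i for i in all_issues if i != current_issue]
    return random.choice(candidates)

def close_message(current_issue, all_issues):
    """Build the auto-close message pointing at a randomly chosen unrelated issue."""
    linked = pick_unrelated_issue(all_issues, current_issue)
    return f"Locking as this is a duplicate of #{linked}"
```

No AI required: the only “intelligence” here is `random.choice`.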
The enshittification is very real and is spreading constantly. Companies will leech more from their employees and users until things start to break down. Acceleration is the only way.
Accelerationism is like being on a plane and wishing it crashes when one of the engine fails.
That’s a terrible analogy, implying the wish that everyone on the plane dies if one engine fails.
It’s like an airline that has been complete shit for decades: you want to see them fail fast so that a better airline can take their place.
Eventually, we will need a fediverse version of StackOverflow, Quora, etc.
Those would be harvested to train LLMs even without asking first. 😐
I’d rather the harvesting be open to all than only the company hosting it.
Why does OpenAI want 10 year old answers about using jQuery whenever anyone posts a JavaScript question, followed by aggressive policing of what is and isn’t acceptable to re-ask as technology moves on?
jQuery is still an excellent JavaScript library
Nice try, ChatGPT
At the end of the day, this is just yet another example of how capitalism is an extractive system. Unprotected resources are used not for the benefit of all but to increase and entrench the imbalance of assets. This is why they are so keen on DRM and copyright and why they destroy the environment and social cohesion. The thing is, people want to help each other; not for profit but because we have a natural and healthy imperative to do the most good.
There is a difference between giving someone a present and then them giving it to another person, and giving someone a present and then them selling it. One is kind and helpful and the other is disgusting and produces inequality.
If you’re gonna use something for free then make the product of it free too.
An idea for the fediverse and beyond: maybe we should be setting up instances with copyleft licences for all content posted to them. I actually don’t mind if you wanna use my comments to make an LLM. It could be useful. But give me (and all the other people who contributed to it) the LLM for free, like we gave it to you. And let us use it for our benefit, not just yours.
An idea for the fediverse and beyond: maybe we should be setting up instances with copyleft licences for all content posted to them. I actually don’t mind if you wanna use my comments to make an LLM. It could be useful. But give me (and all the other people who contributed to it) the LLM for free, like we gave it to you. And let us use it for our benefit, not just yours.
This seems like a very fair and reasonable way to deal with the issue.
If this is true, then we should prepare to be shouted at by ChatGPT, asking why we didn’t already know the answer to such a simple error.
You joke.
This would have been probably early last year? Had to look up how to do something in fortran (because fortran) and the answer was very much in the voice of that one dude on the Intel forums who has been answering every single question for decades(?) at this point. Which means it also refused to do anything with features newer than 1992 and was worthless.
Tried again while chatting with an old work buddy a few months back and it looks like they updated to acknowledging f99 and f03 exist. So assume that was all stack overflow.
ChatGPT now just says “read the docs!” to every question
Hey ChatGPT, how can I …
“Locking as this is a duplicate of [unrelated question]”