

It’s also the free market for those corporations to buy a government and use it to outlaw competition.
Yeah, I agree that in the long term those two sentiments are inconsistent, but in the short term we have to deal with allegedly misguided layoffs, and worse user experiences, which I think makes both fair to criticise. Maybe firing everyone and using slop AI will make your company go bankrupt in a few years, and that’s great; in the meantime, employees everywhere can rightfully complain about the slop and the jobs.
But yeah, I don’t think it’s fair to complain about how “inefficient” an early technology is and also call it “magic beans”.
Hah, see that’s what I thought when various family members asked if I had heard about it. Turns out, if our electronics need grounding, so must our bodies…
I have made only factual statements. You can believe I’m arrogant for doing so, you can believe the preference of hundreds of millions of people is “niche” or “few” in number. Those are called opinions.
Which statements have I made that you believe to be my opinion?
Yeah, I understand that you personally choose to disagree with reality; maybe you don’t like what reality has become, but unfortunately that doesn’t make it any less real.
Twitter wasn’t profitable for its entire existence, and it’s often a cesspool of ragebaiters, but clearly it has value: the second it was taken over, everyone insisted on continuing to use it, even choosing to migrate to various clones.
Uber and Lyft have been struggling to be profitable by effectively stealing from their drivers, but millions of people get off a plane and immediately use the services every day. It clearly has value.
Same for DoorDash and Uber Eats.
Your personal distaste for the business practices is valid, but it’s not relevant when discussing the current state of the technology. For many millions of people, ChatGPT has (for better and worse) replaced traditional search engines. Something like 80% of students now regularly use AI for their homework. When DeepSeek released, it immediately jumped to #1 on the Apple App Store.
None of that is because they’re “magic beans” from which no value sprouts. Like it or not, people use AI all. the. time. for everything they can imagine. It objectively, undeniably has value. You can staunchly pretend it doesn’t, but only if you are willfully blind to the voluntary usage patterns of hundreds of millions (possibly billions) of people every hour of every day.
And for the record, I am not in that group. I do not use any LLMs for anything currently, and if anything makes me use AI against my will, I will promptly uninstall it (pun intended).
No opinions whatsoever. I believe I made that clear in my list of things to disregard when considering the objective reality of current AI tech.
Lol this article is very relevant to a lot of scam industries (essential oils, Earthing, 5G protection crystals, etc), but AI is objectively not one of them.
Regardless of how much of a bubble we’re in, regardless of how many bad ideas are being pushed to get VC funding or pump a stock, regardless of how unethical or dystopian the tech is, AI objectively has value. It’s proving to be the most disruptive tech since the world wide web (which famously had a very similar bubble of bad ideas), so to call it “magic beans” is just wishful thinking at best.
“Facebook says it’s not forcing you to use Facebook”
The US leadership right now, maybe, but remember that Trump didn’t win so much as the incumbent lost. Most Americans didn’t vote for right-wing policies, they voted against inflation and housing costs. It’ll only take a year or two before people start realizing that Trump can’t fix the problems either (or won’t, because that would mean eating the rich).
So yeah, probably no alliance in the short term, but the US isn’t even its own ally right now, so we need to see how this all shakes out before we know how we’ll align with the EU in the long term (i.e. beyond this term).
Trump knows this, and he’s also been advised that the one thing that historically restores popularity for a leader is expanding a country’s territory. So my guess is that, the worse Trump’s approval rating is, the more likely it’ll be that he tries to take Greenland or Panama. Which I think is still a huge gamble for his approval rating.
Hah, I was going to say, I do check for updates at least once when I first get it, because I have run into TVs that shipped with HDR bugs in the stock firmware.
For the Chromecast, what happens with yours? Mine randomly restarts, or reconnects to wifi, or sometimes Plex has trouble buffering until I reboot it.
I recently bought a Raspberry Pi 5 to try out FCast, though currently afaik only Grayjay supports it.
I just never connect my TV to the internet and never have any problems. My old Chromecast is showing its age though.
I assume DeepComputing isn’t releasing any of their designs as open source, right? They’re just producing RISC-V compatible chips?
we will not land in a society where the general public profits from not having work. It will be the same owners of capital profiting as per usual.
If we do nothing, sure. I’m suggesting, like the article, that we do something.
The only sentiment I took issue with was the poster above who suggested that somehow the solution would be to delete/destroy illegally trained networks. I’m just saying that’s neither practical nor progressive. AI is here to stay; we just need to create legislation that ensures it works for us, especially when it couldn’t have been built without us.
I didn’t misinterpret what you were saying; everything I said applies to the specific case you lay out. If illegal networks were somehow entirely destroyed, someone would just make them again. That’s my point: there’s no way around that, there’s just holding people accountable when they do it. IMO that takes the form of restitution to the public, proportional to profits.
I understand that you are familiar with the buzzword “LLM”, but let me introduce you to a different one: transformers.
Virtually all modern successful AIs are based on transformers, LLMs included. I agree that LLMs currently amount to a Chinese-room-inspired parlor trick, but the money involved has no doubt advanced all transformer-based AI research, both directly (what works for LLMs may generalize) and indirectly (the market demand for LLMs in consumer products has created demand for power and compute hardware).
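For anyone unfamiliar with the term, the core of a transformer block is scaled dot-product attention. A minimal, toy-sized sketch (pure Python, hypothetical numbers; real models use large tensors and learned projection matrices):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Each output is a weighted mix of the values, where the weights
    come from how well the query matches each key (scaled dot product)."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Toy example: the query aligns with the first key, so the output
# leans toward the first value row.
q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[10.0, 0.0], [0.0, 10.0]]
print(attention(q, k, v))
```

The point is just that the same small mechanism underlies LLMs, protein-structure models, and much of the rest of modern AI.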
We have transformer-based AI to thank for our understanding of the SARS-CoV-2 spike protein, and for developing a safe and effective vaccine in a timely manner.
The massive demand for energy has convinced Microsoft, Meta, and others to invest in their own modern nuclear power plants, representing a monumental step forward in sustainable energy generation that we have been trying to convince the US government to take for decades.
Modern AI is being used to solve the hardest problems of nuclear fusion. If we can finally crack that nut, there’s no telling what’s possible.
But specifically when it comes to LLMs, profitable or not, people obviously find them useful. People aren’t using them in place of search engines, or doing all their homework with them, because they find them useless. My only argument is that any AI trained on public content without consent should be required to effectively buy a license from, or pay royalties to, the public. If McDonald’s is going to replace its front counters with AI trained on public content, then it should have to pay taxes proportional to how much use it gets from that AI.
In the theoretical extreme, if someone trains an AI on the general public’s data, and is able to create an AI that somehow replaces every job on earth, then congrats, we now live in a post-work society, we just need to reach out and take it rather than letting one person capitalize infinitely.
And at the end of the day, if you honestly believe the profits from AI are non-existent, then what are you worried about? All those companies putting all their eggs in the LLM basket are going to disappear overnight when the AI bubble finally pops, right?
Destroying it is both not an option, and an objectively regressive suggestion to even make.
Destruction isn’t possible because even if you deleted every bit of information from every hard drive in the world, now that we know it’s possible, someone would recreate it all in a matter of months.
Regressive because you’re literally suggesting that we destroy a new technology because we’re afraid of what it will do to the technology it replaces. Meanwhile, there’s a very decent chance that AI is our best shot at solving the energy/climate crises through advancing nuclear tech, as well as at surviving the next pandemic via groundbreaking protein-folding tech.
I realize AI tech makes people uncomfortable (for…so many reasons), but becoming old fashioned conservatives in response is not a solution.
I would take it a step further than public domain, though. I would also require any profits from illegally trained AI to be licensed from the public. If you’re going to use an AI to replace workers, then you need to pay taxes to the people, proportional to what you would have been paying those it replaces.
Yeah it’s not a mastodon issue any more than racist speech is an issue with our ability to vocalize as humans.
Similarly, the solution to people saying racist things isn’t for all speech to be policed by a central authority, it’s for societies themselves to learn to identify and reject racism.
Comparing the “racism” present on a federated service to that on a centralized one doesn’t make sense. You can say certain instances of the service fail to adequately moderate racism, but there are so many niche pockets of Mastodon that most people are exposed to completely different content, moderated by completely different groups.
To make a slightly more nerdy analogy, it’s like someone saying “the Windows desktop experience is better than Linux”. Well, Linux doesn’t come with a desktop interface, so that statement doesn’t make sense. Which of the dozens of desktop environments/distros are you talking about? I’m sure the criticism is fair, but it doesn’t contain enough information to make any real claim.
So it’s not unreasonable for one person to say “I see racism on Mastodon” and many others to say “I never see it”, and not just because of the races of the people involved. “Mastodon” refers to software built on a shared protocol (ActivityPub), not to the various ecosystems that use it.
Is there any good LLM that fits this definition of open source, then? I thought the “training data” for good AI was always just: the entire internet, and they were all ethically dubious that way.
What is the concern with only having weights? It’s not arbitrary code execution, so there’s no security risk or lack of computing control, which are the usual concerns open source addresses in the first place.
To me the weights are less of a “blob” and more like an approximate solution to an NP-hard problem. Training is traversing the search space, and sharing a model is just saying “hey, this point looks useful, others should check it out”. But maybe that is a blob, since I don’t know how they got there.
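To make that analogy concrete, here is a minimal sketch (hypothetical landscape and numbers) of “training” as a local search over a bumpy loss surface: the published “weights” are just the best point the search happened to find, with no record of the path taken to reach it.

```python
import random

def loss(w):
    # A bumpy, non-convex 1-D landscape standing in for a hard search space.
    return (w - 3.0) ** 2 + 0.5 * ((w * 7.0) % 2.0)

def train(steps=10_000, seed=0):
    """Greedy hill climb: repeatedly try a small random move in weight
    space and keep it only if the loss improves."""
    rng = random.Random(seed)
    best_w = 0.0
    best_l = loss(best_w)
    for _ in range(steps):
        cand = best_w + rng.uniform(-0.5, 0.5)  # local move
        cand_l = loss(cand)
        if cand_l < best_l:
            best_w, best_l = cand, cand_l
    return best_w  # the "weights" we would publish

w = train()
print(loss(w) < loss(0.0))  # the shared point beats the starting point
```

Sharing `w` is useful without sharing the search history, which is roughly the situation with open-weight models: you get the solution point, not the means to reproduce it.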