• 0 Posts
  • 18 Comments
Joined 10 months ago
Cake day: June 27th, 2023

  • I am not gambling anyone’s life. I have almost no power and can’t do anything to create or delay a crisis. The best I can do with my limited power is try to make sure the organization on the ground is ready to attempt that leftward shift if and when the crisis comes.

    It could end in a fascist dystopia, but I think that’s less likely, at least in the U.S., where fascism never took off even in its heyday, before it carried any stigma. If we’re talking pure speculation with no evidence, the fascist scenario is exactly that; there is at least precedent for a Keynesian New Deal in the U.S. I do recognize it as a possibility, though, which is why I said “probably,” not “with absolute certainty.”

    If the crisis doesn’t happen, we may all die anyway, since neither party seems willing to deal with the climate catastrophe. That outcome seems far more certain to me, given two decades of repeated calls for action falling on deaf ears, than a fascist takeover if a crisis does happen.

    The way I see it, there’s a 90% chance of severe climate catastrophe on the current course; if a crisis does come, maybe a 30% chance of a fascist takeover but a 50% chance of a green new deal. These numbers are completely speculative, but so is any guess about the future, and I like to believe mine has some backing in historical reality.


  • I think you’re overestimating how much deprivation people will tolerate before turning on the system. Past a certain point people reject it, sometimes violently, and seek a new way of organizing society. That’s why the Great Depression didn’t turn into the corporate hellscape you envision, even though companies were just as powerful at the end of the 1920s. Barring some sort of military coup, you can’t subject a majority of the population to slavery and poverty without those people revolting.

    The system relies on at least the tacit consent of the majority of the population; break that and it becomes unstable, and in that instability new ideas can come in. This is why most successful revolutions follow a crisis: one that discredits the current ruling order and allows something new to take its place.

    It can be dangerous, though. That new thing could be FDR or it could be Hitler. But it’s bound to happen eventually, and our best hope now is to lay the groundwork so that when it does, we get a leader ready to usher in a new green economy.


  • It can answer questions as well as any person. Needing to check another source doesn’t mean it didn’t answer the question; it just means you can’t fully trust the answer. If I ask someone who the fourth U.S. president was and they say Jefferson, they still answered the question, they just answered it wrong. And you don’t always have to check another source, any more than you do when asking a person, if the answer sounds right. If that person answered Madison and I faintly recall that being right, I’ll probably take it as fact without checking.

    For example, I once asked ChatGPT for a chocolate chip cookie recipe. I make cookies often enough to know if a recipe seems off, but the one it provided looked good; I followed it and made some pretty good cookies. It answered the question correctly, as demonstrated by the cookies. You could argue it plagiarized, but while the ingredients and steps were close to some recipes I found later, none were a perfect match, which is about as good as you can get with recipes, since they tend to converge on the same thing. The only real difference between most of them is the dumb story at the beginning, which ChatGPT thankfully skips.

    The 7th-grader and plagiarism comments make me think you haven’t played with these models much or really tested them. I’ve had one write contracts, one of which a lawyer reviewed with only minor comments, as well as other letters and documents I needed for my mortgage and home purchase. Professionals looked over all of them, and none realized a bot wrote them. None were plagiarized either, because the parameters I gave it, and the output it produced, were far too specific to appear in its training set.


  • I agree that initially people respond to crisis with conservatism and lean on the current system, but that conservatism runs out. If the system can solve the crisis, or at least show progress toward solving it, it gets re-entrenched. If it proves utterly incapable of solving it, or even perpetuates it, people start to get radical. In 1929 and 1930 many people still believed laissez-faire could fix the Depression, but as conditions stagnated or worsened they started to see its flaws. By 1932 they were ready to give up on it and try anything to end it. 2007 was different: the neoliberal system mustered a response to the speculative financial collapse in the form of bailouts, which did bottom out the recession and start an upward trend.

    The crisis I’d “root for,” as much as I can root for something that would cause immediate suffering to many people, is one that neoliberalism can’t handle, and that therefore discredits it as a governing system. That crisis will come eventually, just as the Depression ended laissez-faire and stagflation ended Keynesianism, and if the pattern holds we’ll probably see a swing to the left this time on this metronome of economic consensus.



  • Are you saying a green new deal would be a bad idea and unpopular, or that triggering a depression to get one would be? The former I’d say is necessary to stop and begin healing both climate change and income inequality, and if it’s anything like the first New Deal it would bring its party into power for a generation and set a new economic consensus. The latter is a bit extreme, but I don’t know any other way to get people to turn completely away from the current system; otherwise it’s just boiling the frog as the planet gets hotter, the rich get richer, and the parties lose popularity but retain power.



  • Autocomplete is not a lossy encoding of a database either; it’s a product of a dataset, just as you are a product of your experiences, but it is not wholly representative of that dataset.

    A wind tunnel is not intelligent, because it doesn’t answer questions or process knowledge/data; it just generates data. A wind tunnel will not answer the question “is this aerodynamic?”, but you can observe one and use your intelligence to process the result and answer the question.

    Temperature and randomness don’t explain hallucinations; hallucinations are a product of inference. If you turned the temperature down to 0 and asked “what happened in the great Christmas fire of 1934?”, the model would still give its best guess at what happened, even though that event is not in its dataset and it can’t look up the answer. Zero temperature just means that between runs it would consistently give the same story, the most statistically probable one, instead of a less probable one pushed up by randomness. Hallucination is inference: taking a premise at face value and then trying to explain it. People do this too: tell someone a lie confidently, then ask them about it, and they will use their intelligence to rationalize a story about what happened.
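    To make the temperature point concrete, here’s a minimal sketch of next-token sampling (the logits are made up; real models have vocabularies of tens of thousands of tokens, but the mechanism is the same): at temperature 0 you always pick the single most probable token, so the output is deterministic, but determinism says nothing about whether that most-probable continuation is true.

    ```python
    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        """Pick a token index from raw scores; temperature -> 0 approaches argmax."""
        if temperature == 0:
            # Greedy decoding: always the single highest-scoring token,
            # fully deterministic between runs.
            return max(range(len(logits)), key=lambda i: logits[i])
        # Softmax with temperature: lower T sharpens the distribution,
        # higher T flattens it and lets less probable tokens through.
        scaled = [score / temperature for score in logits]
        peak = max(scaled)
        exps = [math.exp(s - peak) for s in scaled]  # subtract max for stability
        total = sum(exps)
        probs = [e / total for e in exps]
        return random.choices(range(len(logits)), weights=probs)[0]

    logits = [2.0, 1.0, 0.5]  # invented scores for three candidate tokens
    print(sample_next_token(logits, temperature=0))  # always index 0
    ```

    The hallucinated “Christmas fire” story would simply be whatever chain of tokens scores highest; temperature only decides whether you get that same chain every time.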



  • All inference is just statistical probability. Every answer you give outside your direct experience is you inferring what the answer might be. Even things we hold as verifiable truth but haven’t experienced rest on a guess that whoever told us isn’t lying, or has some proof behind their statement.

    Take a piece of knowledge like “Biden won the 2020 election.” You and I would probably agree this is the truth, but we can’t possibly “know” it or connect it to some verifiable experience; we never counted every ballot or stood at every polling station. We “know” it’s the truth because more people, and more respectable people, told us it was, and our brains make a statistical guess that their answer is right based on their weight. Just like an LLM, other people will hallucinate or bullshit, land on the other side of that guess, assert the opposite, and even make things up to go along with their story.

    This, in essence, is what reasoning is: you weigh the possibilities of either side being correct and pick the one with more weight. That’s why science, an epistemological application of reason, leans so heavily on statistics…


  • This is not how LLMs work; they are not a database, nor do they have access to one. They are a trained neural net, a set of weights on matrices that we don’t fully understand. We do know a model can’t possibly contain all the information in its training set, since training sets (measured in TB or PB) are orders of magnitude bigger than the models (measured in GB). The LLM itself is just what it learned from reading all the training data, much as you don’t memorize every passage of a book you read, just core concepts, relationships, and lessons. So if I ask you “who was Gatsby’s love interest?”, you don’t remember the line and page where the text says he loves Daisy; your brain just has a strong connection of neurons between Gatsby, Daisy, love, longing, etc. that produces the response “Daisy.” The same is true of an LLM: it doesn’t hold the whole of The Great Gatsby in its weights, but it too has a strong association somewhere between Gatsby, Daisy, love, etc. that lets it answer the question.

    What you’re thinking of are older chatbots like Siri or Google Assistant, which do have a set of preset responses mixed with information pulled from a structured database.
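    The size argument above is just arithmetic. Using illustrative round numbers rather than the figures for any specific model: a corpus of terabytes squeezed into gigabytes of weights means the model is holding something like a thousandth of the raw bytes, so verbatim storage of the training set is physically impossible.

    ```python
    # Back-of-the-envelope check: a model orders of magnitude smaller than its
    # training set cannot be storing that set verbatim. Sizes are hypothetical
    # illustrations, not measurements of any particular model.
    TB = 1024**4
    GB = 1024**3

    training_set_bytes = 10 * TB   # assumed corpus size, ~10 TB of text
    model_bytes = 10 * GB          # assumed weight file, ~10 GB

    compression_ratio = training_set_bytes / model_bytes
    print(compression_ratio)  # 1024.0 -> the weights are ~1000x smaller
    ```

    Whatever survives that thousand-fold squeeze has to be generalizations, associations between concepts, not page-and-line quotations.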


  • Computer scientists, neurologists, and philosophers can’t answer that either, or else we’d already have the algorithms we’d need to build human-equivalent A.I.

    I think you’re mixing up sentience/consciousness with intelligence. What consciousness is has no good answer right now, and, like you said, philosophers, computer scientists, and neurologists can’t come to a clear one, though most think LLMs aren’t conscious.

    Intelligence, on the other hand, does have more concrete definitions, at least as computer scientists use it, usually revolving around the ability to solve diverse problems and answer questions outside the entity’s original training set or database. Yes, taking an SAT test with the answer key isn’t intelligent, because the answers are in your “database” and it’s just a matter of copying them over. LLMs don’t do this, though: they don’t look up past SAT questions they’ve seen; they use some process of “reasoning.” Give an LLM an SAT question that was not in its original training set and it would probably still answer it correctly.

    That isn’t to say LLMs are the be-all and end-all of intelligence. There are different types of intelligence, corresponding to the sets of problems each one solves. A plant-identification A.I. is intelligent in that it can identify various plants in different scenarios, but it completely lacks any emotional or conversational intelligence. The same can be said of a botanist, who may also be able to identify plants but lack the artistic intelligence to depict them. Intelligence comes in many forms.

    Different tests measure different forms of intelligence. The SAT measures a couple: reasoning, rhetoric, scientific thinking, etc. The Turing test measures conversational intelligence, and the article you showed doesn’t seem to include a quote from him saying it doesn’t measure intelligence; Turing would probably agree it doesn’t measure some sort of general intelligence, just one facet.






  • Sorry to burst your bubble, but it’s decades away, if it’s possible at all. The current process puts animal cells in a bioreactor (a vat) with nutrients and lets them propagate. It’s hitting hard limits on scaling, though, because the larger the vat, the harder it is to get waste out and nutrients in without some sort of vascular system. Even if it did scale, it’s not producing steaks or even meat chunks; it’s making a slurry of meat cells that gets mixed with a bunch of other ingredients into something like a Beyond Burger, but with some actual “cow” cells, at probably three times the price.
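    The scaling limit is essentially surface-area-to-volume: nutrients and waste cross the vessel’s surface, while the cells that need them fill its volume. A toy calculation for an idealized spherical vat (ignoring the stirring and perfusion tricks real bioreactors use) shows the exchange surface per unit of cell mass collapsing as the vessel grows:

    ```python
    import math

    def surface_to_volume_ratio(radius_m):
        """For a sphere, SA/V = 3/r: it shrinks as the vessel grows."""
        surface = 4 * math.pi * radius_m**2
        volume = (4 / 3) * math.pi * radius_m**3
        return surface / volume

    for r in (0.1, 1.0, 10.0):  # 10 cm bench vessel up to a 10 m tank
        print(r, surface_to_volume_ratio(r))
    # SA/V falls roughly from 30 to 3 to 0.3: a 100x bigger radius means
    # 100x less exchange surface per unit of volume, so diffusion alone
    # can't feed the interior without something like a vascular system.
    ```

    Animals solved this with capillaries reaching within micrometers of every cell; a bare vat has no equivalent.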

    It took evolution billions of years to efficiently build complex multicellular structures; humans are a long way off.