They are right, though. LLMs, at their core, are just determining what is statistically most probable to spit out next.
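A minimal sketch of what "most probable to spit out" means in practice (the tokens and logit values here are made up for illustration): the model assigns a score (logit) to every candidate next token, softmax turns those scores into probabilities, and greedy decoding just picks the top one.

```python
import math

# Hypothetical logits a model might assign to a few candidate next tokens.
logits = {"cat": 2.1, "dog": 1.3, "carburetor": -0.5}

# Softmax turns raw logits into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Greedy decoding emits the statistically most probable token.
next_token = max(probs, key=probs.get)
print(next_token)  # "cat"
```

Real models sample from this distribution (with temperature, top-k, etc.) rather than always taking the argmax, which is part of why the same prompt can yield different, and sometimes hallucinated, continuations.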