Honestly I feel people are using them completely wrong.
Their real power is their ability to understand language and context.
Turning natural language input into commands that can be executed by a traditional software system is a huge deal.
Microsoft released an AI-powered autocomplete text box and it's genius.
Currently you have to type an exact text match in an autocomplete box, so if you type "cats" but the item is called "pets" you'll get no results. Now the AI can find context-based matches in the autocomplete list.
This is their real power.
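Rough sketch of the idea: embed the list items and the query, then rank by similarity. I'm using sentence-transformers as a stand-in here; no idea what Microsoft actually uses, and the model name and threshold are just illustrative.

```python
# Semantic autocomplete sketch: match on meaning, not on exact text.
# Model name and score threshold are illustrative, not what Microsoft ships.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

items = ["pets", "groceries", "car insurance", "home repair"]
item_vecs = model.encode(items, convert_to_tensor=True)

def suggest(query: str, top_k: int = 3, min_score: float = 0.3):
    """Return the autocomplete items semantically closest to the query."""
    query_vec = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_vec, item_vecs)[0]
    ranked = sorted(zip(items, scores.tolist()), key=lambda pair: -pair[1])
    return [(item, round(s, 2)) for item, s in ranked[:top_k] if s >= min_score]

print(suggest("cats"))  # "pets" should rank first despite no exact text match
```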
Also they’re amazing at generating non factual based things. Stories, poems etc.
…they do exactly none of that.
No, but they approximate it, which is fine for most of the use cases the person you're responding to described.
They’re really, really bad at context. The main failure case isn’t making things up, it’s having text or image in part of the result not work right with text or image in another part because they can’t even manage context across their own replies.
See images with three hands, where bow strings mysteriously vanish etc.
Newer models are really good at context; the amount of input that can be given to them has exploded (fairly) recently… So you can give whole datasets or books as context and ask questions about them.
They do it much better than anything you can hard-code currently.
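For what it's worth, that's now a single API call, e.g. with the OpenAI Python SDK. The model name and input file below are placeholders, and a real script should check the model's context-window limit first.

```python
# Asking questions about an entire book by passing it as context.
# Uses the OpenAI Python SDK; model name and file are placeholders,
# and real use should check the model's context-window limit first.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

book_text = open("moby_dick.txt").read()  # hypothetical input file

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder for any long-context model
    messages=[
        {"role": "system", "content": "Answer using only the provided book."},
        {"role": "user", "content": f"{book_text}\n\nQuestion: Who is Queequeg?"},
    ],
)
print(response.choices[0].message.content)
```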
Yes and no. I've had to insert a LOT of meaning to get a story of any substance, and I've had to do a lot of editing to get good images. It's really good at giving me a figure that's 90% done, but that last 10% of touching up still often takes me a day or so of work.
Exactly. The big problem with LLMs is that they're so good at mimicking understanding that people forget they don't actually understand anything beyond language itself.
The thing they excel at, and should be used for, is exactly what you say - a natural language interface between humans and software.
Like in your example, an LLM doesn’t know what a cat is, but it knows what words describe a cat based on training data - and for a search engine, that’s all you need.
Google added context search to Gmail and it's infuriating. I'm looking for an exact phrase that I even put in quotes, but Gmail returns a long list of emails that are only vaguely related to the search words.
That is indeed a poor use. Searching traditionally first and falling back to it would make way more sense.
It shouldn’t even automatically fallback. If I am looking for an exact phrase and it doesn’t exist, the result should be “nothing found”, so that I can search somewhere else for the information. A prompt, “Nothing found. Look for related information?” Would be useful.
But returning a list of related information when I need an exact result is worse than not having search at all.
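The behavior being asked for is simple to sketch: exact match first, and the semantic fallback only as an explicit opt-in. Both search functions below are illustrative stand-ins, not anything Gmail actually exposes.

```python
# Exact match first; semantic fallback only as an explicit opt-in.
# Both search functions are illustrative stand-ins.

def exact_search(phrase: str, emails: list[str]) -> list[str]:
    return [e for e in emails if phrase in e]

def semantic_search(phrase: str, emails: list[str]) -> list[str]:
    # Stand-in for an embedding-based "related" search:
    # here, any shared word counts as vaguely related.
    words = set(phrase.lower().split())
    return [e for e in emails if words & set(e.lower().split())]

def search(phrase: str, emails: list[str]) -> list[str]:
    hits = exact_search(phrase, emails)
    if hits:
        return hits
    # Never fall back silently: report "nothing found" and ask first.
    if input("Nothing found. Look for related information? [y/N] ").lower() == "y":
        return semantic_search(phrase, emails)
    return []
```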
That’s called “fuzzy” matching, it’s existed for a long, long time. We didn’t need “AI” to do that.
No it’s not.
That allows for mistyping etc.; it doesn't allow context-based searching at all. "Cat" doesn't fuzz with "pet". There is no string similarity (quick demo below).
Also it is an AI technique itself.
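Quick demo with Python's built-in difflib: typo-level variants fuzz fine, but "cat" vs "pet" scores near zero, because fuzzy matching only measures string similarity, not meaning.

```python
# Fuzzy (string) similarity tolerates typos but knows nothing about meaning.
from difflib import SequenceMatcher

def fuzz(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

print(fuzz("cats", "cat"))   # ~0.86: a typo-level difference fuzzes fine
print(fuzz("cats", "pets"))  # 0.50: only the shared letters match
print(fuzz("cat", "pet"))    # ~0.33: as strings, almost nothing in common
```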
Bullshit, fuzzy matching is a lot older than these AI LLMs.
Searching with synonym matching is decades old at this point. I worked on it as an undergrad in the early 2000s, and it wasn't new then, just complicated. Google's version improved on other search algorithms for a long time, and then they trashed it by letting AI take over.
Google’s algorithm has pretty much always used AI techniques.
It doesn’t have to be a synonym. That’s just an example.
Typing "diabetes" and getting "medical services" as a result wouldn't be possible with that technique unless you had a database of every disease to search against for all queries.
The point is that with AI you don't have to maintain a giant lookup table of linked items, because the associations are already trained into the model.
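That contrast in a sketch: a hand-built synonym table only knows the pairs someone entered, while an embedding model (sentence-transformers again as an illustrative stand-in) has the "diabetes" to "medical services" association baked in from training.

```python
# Hand-built lookup vs. learned associations.
# Model name is an example; the point is the association comes pre-trained.
from sentence_transformers import SentenceTransformer, util

# Old way: every link has to be entered by hand, for every disease.
synonyms = {"diabetes": ["diabetic", "blood sugar"]}  # no "medical services" link
print("medical services" in synonyms.get("diabetes", []))  # False

# Trained way: relatedness falls out of the model with no lookup table.
model = SentenceTransformer("all-MiniLM-L6-v2")
vecs = model.encode(["diabetes", "medical services", "guitar lessons"],
                    convert_to_tensor=True)
print(float(util.cos_sim(vecs[0], vecs[1])))  # noticeably higher than...
print(float(util.cos_sim(vecs[0], vecs[2])))  # ...an unrelated pairing
```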
That’s why I only use Perplexity. ChatGPT can’t give me sources unless I pay, so I can’t trust information it gives me and it also hallucinated a lot when coding, it was faster to search in the official documentation rather than correcting and debugging code “generated” by ChatGPT.
I use Perplexity + SearXNG, so I can search a lot faster and cite sources, and it also makes summaries of your search, which saves me time when writing introductions and the like.
It sometimes hallucinates too and cites weird sources, but it's faster for me to correct it and search for better sources given the context and the extra ideas. In summary, as long as you're correcting the prompts and doing some searching apart from Perplexity, you've already got something useful.
BTW, I try not to use it a lot, but it’s way better for my workflow.