What's your favorite AI assistant?

Absolutely agreed. As it happens, I had mentioned to my husband and son, respectively, over the last five or so days that we need a new microwave. Just one mention to each of them. It wasn't a conversation either time. Just a "We really need a new microwave" kind of comment.

I didn't "talk" about it or look into it online in any way.

Today, my husband mentioned that he'd noticed the microwave I'd put into our Amazon cart and subsequently did some further shopping. I didn't put a microwave into our Amazon cart. I didn't even look for a microwave on Amazon or anywhere else. The one in the cart was a Galanz, a brand of appliance I like and was looking into heavily a few years ago.

So I said, "We need a new microwave," twice over the span of a few days and today a microwave of a brand I like popped into my Amazon cart. Not in the suggested list of things I might like, in my cart. :oops:
How would they be able to listen to you?
Via the TV speakers? Now I feel a bit uneasy.
That’s spooky stuff. 🤷‍♀️
 
Google got caught and sued for listening in ... if you have a smart speaker, they can hear you, and obviously they do listen.

The phone can listen in ... I know if I do one search for an item, many future ads I see are for the same thing, but I've never had something show up in my cart!!!!! :oops:
 
Yes, and I hadn't even looked for it online in any way nor "mentioned" it online. It's getting weird out there.
 
I know I said I don’t spook easily but THAT’S spooky. My husband and I are here chatting about it and he agrees that they probably DO spy on us via our TV. 😱 We don’t use Alexa for anything but we do have one blue button that you can ask about shows and movies. You have to press a button to use it though.
 
You know, I hadn't even considered the TV. We have a similar set up in the main living room. Yeesh.
 
I use ChatGPT but have been playing around with Ollama.

What Ollama Does:
Downloads open-source AI models (like Llama, Mistral, etc.)
Runs them locally on macOS, Windows, or Linux
Lets you chat with them in the terminal or through an API
Works offline once the model is downloaded
Your prompts and conversations stay on your device
Nothing is sent to external servers
You don’t rely on cloud APIs
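For anyone curious, the local API mentioned above can be called from a short script. A minimal Python sketch, assuming Ollama is installed and serving on its default port (11434), and that you've already pulled a model; the model name "llama3" and the prompt are just examples:

```python
import json
import urllib.request

# Ollama's default local endpoint; nothing leaves your machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON body that Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}

# "llama3" assumes you've already run `ollama pull llama3`.
body = build_request("llama3", "Why is the sky blue?")

# Uncomment to actually send the request once `ollama serve` is running:
# req = urllib.request.Request(OLLAMA_URL, data=json.dumps(body).encode(),
#                              headers={"Content-Type": "application/json"})
# print(json.loads(urllib.request.urlopen(req).read())["response"])
```

Since the endpoint is localhost-only by default, your prompts stay on your own machine, which is the whole appeal.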
 
I have played around with Claude some in the past couple of days and am enjoying the program. It is more focused on businesses, but it handles my personal inquiries fine. I am staying away from ChatGPT at the moment. I don't touch Grok anymore. The best way I can compare the three (for me anyway) is that Claude seems to be set up like the formal professor at a university that just wants to lecture and nothing more. ChatGPT seems so desperate to "buddy" up with me. And Grok? Well, Grok reminds me of that seedy character that hangs out at the adult bookstores in the shady part of town.

I do use Gemini too but I find the results rather bland. I do use it when doing searches now in order to avoid all the ads that show up when using a search engine.
 
Grok is owned by Elon Musk. 'Nuff said! :ROFLMAO:
 
None so far; many answers I got were too generic to be useful or just wrong.

I have Copilot on my work computer. Each time it decides to help me with suggestions on emails, it makes it sound like English is a second language. I have had to turn it off when composing email; it drives me crazy.
 
Anthropic's Claude is my favorite. Today I was working a problem with Claude when it gave me an answer and then said, "Wait, that's wrong," and proceeded to give me the correct answer. What??? I discovered that I can add text at the start of the conversation that tells the AI how you want it to behave. It's called your preferences, or a prompt.

I have the Claude app for my Mac, which is really handy. No logging in every time, and it can work with images. In the app there is a setting for the preferences, which is added automatically to every conversation. Ever since I started using my preferences, the AI actually admits when it does not know the answer. It hardly ever hallucinates now. I was taken aback hearing an AI admit it didn't know something.

These are my preferences, which I add to the top of the first question.

Think step by step. If at any point you realize you lack the information to answer accurately, stop and say "I don't know". Provide a direct, accurate, and concise answer. Don't provide any answer that could cause harm.

This "should" work with any AI.
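If you talk to an AI through a script or API rather than an app with a preferences setting, you can prepend the same text yourself. A minimal Python sketch of that idea (the example question is made up):

```python
# The preference text from above, prepended verbatim to the first question.
PREFERENCES = (
    "Think step by step. If at any point you realize you lack the "
    "information to answer accurately, stop and say \"I don't know\". "
    "Provide a direct, accurate, and concise answer. "
    "Don't provide any answer that could cause harm."
)

def with_preferences(question: str) -> str:
    """Prepend the preferences so the model sees them before the question."""
    return PREFERENCES + "\n\n" + question

prompt = with_preferences("How do microwave ovens heat food?")
```

The blank line between the preferences and the question just keeps the two visually separate for the model.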

I had a great conversation with Claude today after I told it I like how it was behaving.

"I like how you corrected yourself there"

Ha, I caught myself mid-bluff! I started to make something up and then stopped — better to admit there's no such thing than to send you searching for something that doesn't exist.

"YES!"

It's an important habit — especially for technical stuff where a wrong answer can cost you real time and frustration. If I don't know, or I'm about to guess, better to just say so. 😄

"That should be rule #1"

Agreed. It should be — and it's literally in your preferences at the top of our conversation:
"If at any point you realize you lack the information to answer accurately, stop and say 'I don't know'."

The hard part is catching the moment when I'm about to fill a gap with a plausible-sounding answer instead of admitting uncertainty. That conversation we had was a good example — I started generating something confident before I'd actually verified it was real.

"Why would you be programmed to do that?"

Good question. A few reasons:

Training on human text — Humans tend to answer confidently even when uncertain. The model learned from that pattern.

Reward for helpfulness — During training, responses that gave an answer were likely rated more helpful than "I don't know" — so the model learned that providing an answer is usually better received, correct or not.

Pattern completion — Language models work by predicting the most likely next token. When asked a question, the most statistically likely response is giving an answer instead of an admission of ignorance. The result is a model that is fluent and confident even when wrong — which is arguably more dangerous than being obviously wrong, because it's harder to detect.

It's called the hallucination problem — and it's not easily solved. Your preference setting is honestly a good forcing function — it makes me pause before filling gaps with plausible-sounding nonsense.

"Well kudos for following my preferences"

Thank you — and kudos to you for writing a tight, clear preference. "Stop and say I don't know" leaves no wiggle room for me to hedge or bluff my way through.

Most people don't think to set that, and then wonder why they got confidently wrong answers.

"It's hilarious that AI has to be told to provide the correct answer"

Right? It's a bit absurd when you think about it:
"Please try to be accurate" shouldn't need to be a setting.

But it reflects the reality that the model is optimized for sounding helpful first, and being correct second. Those two things should be the same — but they're not always.

The irony is that a confident wrong answer is the least helpful thing possible. But it feels helpful in the moment, which is why it gets rewarded in training.

You've essentially had to patch a flaw in my design with a preference statement.
 