I’ve gotten into the habit of tinkering with any and every chatbot that’s available.

ChatGPT, Claude, Perplexity and Gemini get plenty of usage out of me when I want to ask general questions, brainstorm new ideas, develop productivity routines, and even learn a thing or two. Google’s NotebookLM has become one of my favorite research assistants and its Labs hub has become a fun digital playground full of clever AI tools.

By abiding by these seven rules, I've taken the precautions needed to protect my most sensitive information, avoid treating chatbots as a primary source of information, and more. If you haven't already, be sure to add them to your daily AI tool regimen.

A study led by researchers at Stanford University pointed out that chatbots can be overly agreeable when asked for advice, even going so far as to affirm a user's negative behaviors.

That, in turn, can lead those same users to convince themselves that their actions are right, since the AI is agreeing with them, while they grow less empathetic over time. I'm not prone to telling AI about all my personal problems and asking for solutions.

But on the rare occasion that I do, I keep that Stanford study in mind and make sure to run a chatbot's suggestions by my most trusted friends and family members for more reliable input.
