5 Practices to Avoid with Artificial Intelligence

Jairo G. Sarmiento Sotelo

Artificial intelligence is a powerful tool. It lets us process and analyze information at speeds that were unthinkable just a few years ago, and carry out countless tasks in a matter of seconds.

However, enthusiasm for this technology can turn into blind overconfidence in its capabilities, making us confuse the ability to process text with genuine human comprehension. That confusion can take us onto dangerous ground.

At Datasketch, we are interested in exploring the opportunities AI gives us, but without losing sight of a critical approach to its use. For that reason, here are five everyday practices in which you should avoid blindly delegating work to AI:

1. Medical self-diagnosis

It’s tempting: you enter your symptoms into a chatbot, and it gives you a possible diagnosis. The problem is that AI is not a doctor. It doesn’t know your clinical history, it cannot perform a physical exam, and, most importantly, it is prone to “hallucinating,” that is, inventing information that sounds credible.

Using AI for self-diagnosis carries several risks: it can cause unnecessary worry by suggesting serious illnesses for mild symptoms, or lead you to take remedies and medications “prescribed” by a chatbot. Although this tool has opened countless possibilities for the medical and scientific world, using it on your own instead of going to the doctor is an irresponsible practice that puts your health at risk.

2. AI as a therapist or friend

Applications have emerged that offer “therapy” with AI. While they can offer advice and a form of psychological support, they do not replace the work of a therapist. AI is not trained to handle a mental health crisis safely, it cannot understand the deep context of trauma, and it has no ethical or legal responsibility.

We know that for many people, AI chatbots have become an immediate, accessible, and private space for venting; however, blindly entrusting your feelings and emotions to this technology can have negative effects.

For example, the newspaper El Mundo reported that the parents of a teenager in the United States blamed a chatbot for “encouraging and validating” their son’s suicidal thoughts; he had cultivated an intimate relationship with one of these tools for several months before taking his own life. Meanwhile, authorities and OpenAI are investigating a possible case in which a chatbot may have encouraged a man to murder his mother and then take his own life, as the newspaper El País reported in September.

According to a recent study conducted by OpenAI together with the MIT Media Lab, people who consider ChatGPT a friend are more likely to experience negative effects from using the chatbot.

3. The “AI Financial Advisor”: Making investment decisions

“Which cryptocurrency should I invest in?” “Should I sell my stocks?” Asking a chatbot to design your financial future is a dangerous gamble. AI is not a regulated entity, it is not accountable for its advice, and its answers are based on past patterns, not on a real understanding of your risk profile or of unpredictable future events.

Furthermore, markets are influenced by real-time global events (panic, politics, natural disasters) that a static model cannot foresee. AI has no fiduciary responsibility; it has nothing to lose if its “advice” leads you to ruin. Personal finance requires strategy and a deep understanding of context, not an algorithmic bet.

4. The “AI Lawyer”: Seeking legal advice and strategy

AI can summarize a law, but it cannot craft a legal strategy. There are documented cases of lawyers in the U.S. being sanctioned for presenting arguments in court based on case law that the AI invented entirely. The law requires interpretation, ethics, and knowledge of context, things that AI does not possess.

Law is not just a set of rules; it is their interpretation and analysis within a legal process. AI can summarize a document, but it cannot “interpret the intent” of the legislator. Relying on it for a contract or a defense means basing critical decisions on potentially fabricated information.

5. The substitution of critical thinking

Even though AI saves us hours of work on many tasks, delegating 100% of a decision, or the final answer to a complex problem, to this tool and accepting it without question means abandoning human judgment.

This recurring practice can end in a surrender of our own skills. AI is a tool to speed up a draft or clean thousands of rows of data, but it is the human who must decide what story that data tells. Losing that ability turns us into mere operators, not analysts.
