Why AI is failing at giving good advice

Published April 25th, 2024 · 4 min read

TLDR: ChatGPT generates responses based on the highest mathematical probabilities derived from existing texts on the internet. Popular advice is (for various reasons) seldom good, (by definition) rarely uniquely applicable, and (mostly) not founded on actual experience. You are probably better off taking advice from a real person who can empathize and knows what they are talking about.


When you ask ChatGPT a question, something highly interesting happens:

ChatGPT, which has previously consumed half the internet or more to build its language model, translates your question into a mathematical representation: a vector of numbers.

I don't know in detail how they do it, and I am sure there are layers in between and around that serve specific purposes, but the gist is this: if you google the phrase "How are you?", you can statistically expect a certain range of words and sentences in the results around it. Most sentences following the question will probably sound like "I'm good, thanks" or "Doing great, how about you?". Whereas if you search the internet for all occurrences of "integrated circuit", you will usually find a very distinct set of words and sentences nearby, like "silicon semiconductor," "MOS transistor," or "the voltage requirement is 0.6V".

With so much base data, you can assign a mathematical value (or direction) to every word, adjust it whenever the word appears together with other words (context), and compute an entire, unique direction for a continuous piece of text.
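To make that intuition concrete, here is a toy sketch in Python. The vectors are made up and hand-assigned; real models learn thousands of dimensions per token, and none of this claims to be how OpenAI actually builds its model. It only illustrates the idea that texts from similar contexts end up pointing in similar directions:

```python
# Toy illustration: words get vectors, a piece of text gets the combined
# "direction" of its words, and related texts point in similar directions.
import numpy as np

# Hand-made 3-dimensional "embeddings"; real models learn thousands of dimensions.
embeddings = {
    "how":        np.array([0.9, 0.1, 0.0]),
    "are":        np.array([0.8, 0.2, 0.0]),
    "you":        np.array([0.9, 0.0, 0.1]),
    "doing":      np.array([0.7, 0.1, 0.2]),
    "great":      np.array([0.8, 0.0, 0.3]),
    "integrated": np.array([0.0, 0.9, 0.2]),
    "circuit":    np.array([0.1, 1.0, 0.1]),
}

def text_direction(text: str) -> np.ndarray:
    """Average the word vectors to get one direction for the whole text."""
    vectors = [embeddings[w] for w in text.lower().split() if w in embeddings]
    return np.mean(vectors, axis=0)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: close to 1.0 means 'pointing the same way'."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

question = text_direction("How are you")
print(similarity(question, text_direction("Doing great")))         # high
print(similarity(question, text_direction("Integrated circuit")))  # low
```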

To put it in perspective: anyone who ever had success writing articles (or anything else) on the internet, in the broader sense of the universe, just pressed a particular combination of buttons on their keyboard, and then more biological masses in the world started reading the outcome than other produced texts.

In a strange but very scientific way, when you ask ChatGPT a question, it tries to compute, based on those previously computed values, the exact combination of letters, words, and sentences it thinks you are looking for. Astonishingly enough, that is often a highly useful response in the real world.
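If you want a feel for what "computing the most likely combination of words" means, here is a deliberately crude sketch: it just counts which word follows which in a tiny pretend corpus and greedily appends the most frequent continuation. ChatGPT predicts tokens with a neural network over vastly more data, not raw counts, so treat this purely as an illustration:

```python
# Crude sketch of "pick the statistically most likely continuation".
from collections import Counter, defaultdict

corpus = [
    "how are you i am good thanks",
    "how are you doing great how about you",
    "how are you i am good how about you",
]

# Count which word follows each word in the corpus.
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def most_likely_continuation(word: str, length: int = 5) -> str:
    """Greedily append the most frequent next word, over and over."""
    result = [word]
    for _ in range(length):
        candidates = next_word_counts[result[-1]]
        if not candidates:
            break
        result.append(candidates.most_common(1)[0][0])
    return " ".join(result)

print(most_likely_continuation("you"))  # e.g. "you i am good thanks"
```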

But this approach has problems, especially when you try to give someone good, specific advice:

The outcome is, by definition, mathematical. It's probability, applied to man-made text. The most propagated (related) text on the internet will likely be repurposed in its own words to answer anything you ask. Essentially, that means it might give you a mashed-together answer like the one you would get from the first X results on Google, except that it fills in contextual gaps from other places and makes it more applicable to your specific input.

If most internet texts said the sky was yellow, ChatGPT would say so, too. Similarly, if you ask ChatGPT the infamous question, "How can I make money online quickly?", you will get a shallow, unhelpful response (one that often stays unhelpful even if you drill down into specifics).

[Figure: standard deviation diagram]

This is not to say that everything is particularly "wrong" (although some points are, according to most people's experience); it is just paraphrasing those online bubbles of drop shippers, BuzzFeed listicles, and affiliate boards.

For example, almost everyone who has succeeded with YouTube or affiliate marketing will tell you neither is quick. It takes years of work, dedication, and a fair pinch of scientific user analysis.

Even if you ask it to walk you through making money step by step, it fails. In March 2023 (two days after the release of GPT-4), a tweet caught fire documenting the use of ChatGPT as a business owner, letting it give precise directions to make money (starting with $100).

It did make some money, but with millions and millions of views and even mainstream news covering the endeavor, I am hesitant to attribute the generated income to ChatGPT. The updates died out quickly, and two weeks later, the official confirmation was posted that the project (and, apparently, the site) had been sunset. That's not what a successful attempt to make money looks like in my world.

A Large Language Model can provide accurate answers if it is fed the correct base context (superseding the general knowledge base) and if you ask the right questions. But even then, you first need to find such a chatbot and figure out which questions to ask to get helpful answers (although the latter may apply to many human conversations, too).
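As an illustration of what "feeding the correct base context" can look like, here is a minimal sketch using the OpenAI Python SDK. The model name, the client facts, and the question are placeholder assumptions on my part; the point is only that the system message supersedes the general knowledge base with specifics the model couldn't know otherwise:

```python
# Minimal sketch: hand the model domain-specific context instead of letting it
# fall back on averaged internet advice. Assumes the OpenAI Python SDK (>= 1.0).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical facts the general model would otherwise not have.
base_context = """
You advise freelance software engineers.
Facts about this client: 10 years of backend experience, two existing retainer
clients, 10 hours of free capacity per week, no audience on social media.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any current chat model would do
    messages=[
        {"role": "system", "content": base_context},
        {"role": "user", "content": "How should I raise my rates next quarter?"},
    ],
)

print(response.choices[0].message.content)
```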

At the current state of the internet, almost every piece of educational information and advice is already out there in some form, freely accessible to everybody, and more than anyone could ever act on in a lifetime. Today, the value of providing information is about more than just delivering it; it's about delivering the right information to the right people in the right way. And LLMs fail at the latter.

The bottom line is that AI is not yet capable of what a good teacher or mentor can do: giving actually good, uniquely applicable, empathetic advice. It's much better at explaining things.

PS.: This article was peer-reviewed and approved by ChatGPT. I ignored its suggestion to add examples where it gave helpful advice because that's against my agenda: statistically, with enough advice given, you will just randomly run into occasions where it gives good advice.

PPS.: A big discourse was recently sparked by Pieter Levels, who built a mental-therapist Telegram bot with AI. This article has been sitting in my drafts for almost a year and has absolutely nothing to do with that (I feel that discussion is more ethics-, accountability-, and risk-based anyway, as opposed to this article's core message). The timing of this article's publication just after his tweet is coincidental.


A personal publication by Maxim Zubarev.
I use software as leverage for business.
