3 Comments

This was a great article. It really helped me reframe my thinking about LLMs and the marketing around them.

LLMs are really just statistical predictors of the next word and don't actually think. OpenAI's marketing is also geared toward making them seem human, and therefore making it look like the company is close to solving AGI.

I've had a hunch that OpenAI has some marketing genius behind the way they launched. After the initial announcement, the models seemed to improve almost magically within months. There are some highly intelligent people managing how the company is perceived.


What finally clicked for me recently was the realization that these so-called "AI" models are always trained on the past and cannot learn anything new without being retrained. Apparently they're wasting massive amounts of energy on this process of training and retraining over and over again, but that's beside the point.

The point is that these models are always trained in the past, on old data, and are incapable of coming up with any genuinely new ideas. All they're really doing is REMEMBERING connections from the data set.

Anytime it seems like they're presenting a "new" sentence to us, that's really only a result of the training algorithm itself: the web crawler/weaver that weaves the model, which is eventually packaged up nicely with a clean user interface and presented to the end user for interaction.

The bottom line is that these systems are not capable of "predicting" anything. They do not "predict" the next word; they REMEMBER the next word. There's a huge difference. These are not prediction machines, they are memory machines. The fact is, if you always say NO to these models, they cannot predict what you will do. They can respond, but they will always be stuck in the past.
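
To make that concrete, here is a toy sketch of what "remembering the next word" can look like: a hypothetical bigram model (vastly simpler than a real LLM, and only meant to illustrate the idea) that picks the next word purely by looking up counts it memorized from a fixed training text. The corpus and the predict_next helper are made up for the example.

```python
from collections import Counter, defaultdict

# Toy "training" corpus, frozen at training time.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Remember which words followed which during training (bigram counts).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequently remembered follower of `word`,
    or None if the word never appeared in the training data."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))         # "cat" -- seen most often after "the" in training
print(predict_next("blockchain"))  # None  -- never seen in training, nothing to recall
```

Whatever was not in the corpus at training time simply isn't there to be looked up. Real LLMs generalize over far richer statistics than raw bigram counts, but they are likewise limited to what their training data contained when they were trained.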


AI is definitely sinister.
