What finally clicked for me recently was the realization that these so-called "AI" models are always trained in the past, and cannot learn anything new without another round of training. Apparently massive amounts of energy get wasted on this process of training and retraining, over and over, but that's beside the point.
The point is that these models are always trained in the past, on old data, and are incapable of coming up with any new ideas. All they're really doing is REMEMBERING connections from the data set.
Anytime it seems like they're presenting a "new" sentence to us, that's really only a product of the training algorithm itself: the web crawler/weaver that weaves the model, which is eventually packaged up nicely with a clean user interface and presented to the end user for interaction.
The bottom line is that these systems are not capable of "predicting" anything. They do not "predict" the next word; they REMEMBER the next word. There's a huge difference. These are not prediction machines; they are memory machines. The fact is, if you always say NO to these models, they cannot predict what you will do. They can respond, but they will always be stuck in the past.
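To make the "stuck in the past" part concrete, here's a rough sketch of what happens at use time (using the Hugging Face transformers library and the public gpt2 checkpoint purely as an example, my choice, not anything specific to one product): the weights get loaded from a file that a past training run produced, and nothing updates them while you use the model.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load weights baked in by a training run that finished in the past.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # inference mode: no learning happens from here on

prompt = "The most important invention of the decade is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():  # gradients off: the weights never change
    logits = model(**inputs).logits

# The "next word" is just the highest-scoring token under those
# frozen weights; nothing after the training cutoff can influence it.
next_id = logits[0, -1].argmax().item()
print(tokenizer.decode([next_id]))

Run that on any prompt you like: the output is a function of the frozen weights plus your text, and anything that happened after the training data was collected simply isn't in there.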