An AI doesn't need to think in the same way humans think. It just needs to achieve results (that are better than, or at least equal to, what humans achieve).
The same question has been asked of chess "AI" in the past - that a chess AI isn't thinking, it's "just" searching through all possibilities, etc. And yet the result is that no human can beat a chess AI nowadays.
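To make the "just searching" part concrete, here's a rough sketch (my own toy example, not anything chess-specific): plain minimax over a trivial take-1-or-2-stones game. Real engines layer alpha-beta pruning, evaluation heuristics and nowadays neural networks on top, but the skeleton is the same exhaustive search.

    # Toy minimax: players alternately take 1 or 2 stones; whoever takes the
    # last stone wins. Purely illustrative - the game and code are assumptions
    # for the sake of the example, not how any real engine is written.
    def minimax(stones, maximizing):
        if stones == 0:
            # No stones left: the player who just moved took the last one and won.
            return -1 if maximizing else 1
        values = [minimax(stones - m, not maximizing)
                  for m in (1, 2) if m <= stones]
        # Each side picks the outcome that is best for itself.
        return max(values) if maximizing else min(values)

    print(minimax(3, True))   # -1: with 3 stones the side to move loses
    print(minimax(4, True))   #  1: with 4 stones the side to move wins (take 1)

Scale that same recursion up with pruning and a strong evaluation and you get an engine nobody can beat, without it "thinking" in any human sense.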
"The question of whether computers can think is about as interesting as the question whether submarines can swim" - Dijkstra.
I'm growing weary of all the AI hype being shoved down my throat. Every time I dig into examples of what it can do, the result seems shittier than what a talented human would achieve instead. I'm forming the impression that it's a kludge for the mediocre.
I don't want to be a crotchety old grump about this - I'd love to hear thoughts on truly notable examples where quality far exceeds our existing best in class work.
I think your only problem is too-high expectations.
> examples where quality far exceeds our existing best in class work
I'm no expert, but what percentage of the economy needs, or is, best in class work?
I agree that running up against the limits of these models after reading the marketing material can give the impression they're "a kludge for the mediocre", and they certainly are in part that, but they're also so much more. [0]
[0] https://crawshaw.io/blog/programming-with-llms
> the result seems shittier than what a talented human would achieve instead.
But what about compared to what an average, or below-average, human would achieve?
Chess or Go.
But I agree with the others - where AI currently shines is not at being better than best-in-class humans; it's - sometimes - useful for doing intellectual chores.
> examples where quality far exceeds our existing best in class work.
You would be seeing far more than hype if we were at this point. This level of AI would be world changing.
I feel your expectations are too high. Realise what LLMs are, what they can do and to what standard, and leverage them accordingly. It takes time to determine the scopes in which they increase your productivity. Trying it a few times and then disregarding it seems extremely foolish when so many developers are telling you it has enhanced their productivity to some degree.
It's easier to just tell yourself that everyone else must be mediocre than it is to learn the strengths and weaknesses of a new class of tools.
In the news we seem to have reached Schrödinger's AI: Too dumb to do anything properly, but coming for everyone's jobs due to being too powerful.
https://archive.is/whcKL