02 December 2022

Some thoughts on AI

Here's an inchoate opinion/observation, based on watching the unexpectedly slow progress in efforts to develop truly autonomous driving AI systems, and informed by my reading, as a curious layperson, about computer systems, languages, AI, and so on, including descriptions of the "state of the art," such as IBM's Deep Blue and its successors. Even with the best AI, what you get is incredible speed and very, very clever programming. Machines can be taught to emulate, and even surpass, some of the functions of biological brains. They can master patterns. They can, for example, even compose music which, for a brief period, might fool you into thinking it was really composed by a human being. Until, after a few minutes or even seconds, you realize that it is not music, just pattern-making; the tunes are not only dull, they are deadly dull.

The one thing computers, even the fastest and "smartest" computers, lack is self-awareness, and it is self-awareness, or (more or less the same thing) consciousness, that is the real essence of biological intelligence. So here's my take: humans have invented something entirely new in the world, namely electronic computers. But they are not successful emulators of biological intelligence. They don't use similar architecture, and they don't work in the same way. I am a Spinozist; there is no "magic" to consciousness. It surely emerges somehow from matter and energy, but exactly how has nothing to do with pattern recognition or machine learning. Evolution, the great creator through the cruelty of eliminating failure and trying again, discovered long ago that survival in higher organisms depends on consciousness, and the systems that cause it to emerge evolved over an inordinately long period of time. Creatures that needed it to work superlatively well but lacked the capacity perished, and those that were better and better at it survived; the time-honored way of the world.
But we have not figured out how this works, or replicated it, even on a primitive level. I am certain, from interacting with them, that AIs are fast and amazingly adept at certain functions, but they are not self-aware. At all. I'm guessing machine self-awareness is possible in principle. But it has not yet happened, not even a little bit.

So here's my intuitive opinion: it will never be possible for a computer system to write a decent novel, or to do something as seemingly simple as sit and enjoy a sunset. It may be possible to anticipate enough of the twists, turns, and unexpected developments of something like driving a car for an AI to do it, but it has proven much more difficult than it seemed. I think it could be that even driving requires so much ability to imagine, and not just calculate, future scenarios that it will not really be safe to trust AIs to do it (not that humans are safe at it either, so it's possible the trade-off will indeed favor AI before too long). But real machine consciousness... that is not on the horizon. I would not go so far as to say it is impossible, but we (humans, meaning the smartest of us) are not going about this the way Evolution did, and if the goal really is self-awareness, our results are very unlikely to be successful any time soon.
"If you want the present to be different from the past, study the past."
― Spinoza
