A.I. was born a long time ago.
I don't pretend to be a specialist in this area, but I have spent many years pondering it. In some ways, that in itself proves that, for me at least, artificial intelligence has been around for decades.
Without a doubt, it is the "artificial" part of it, and reducing that part, that is the challenge we face in giving it its full potential. But do we really want to? Will we not be forced to look at ourselves, in some disturbing ways, to achieve real A.I.?
I thought it would be fun to share with you what I've come up with, to see if it generates some opinions.
Computer and video games.
I think one of the clearest examples of the emergence of artificial intelligence comes from video games. In the very early games, it was for a time only the machine's ability to respond faster that gave it the edge to beat us, or to "win" a game.
Simple games, simple data to sift through. Speed becomes everything.
Moore's law would surely bring this to a head at some point.
Nevertheless, quite a bit of effort went into increasingly complex algorithms to make "better decisions", so as not to spend so much time calculating improbable or impractical scenarios. In games like chess or Go, the possible moves are far too many to examine, so we learned to use statistical models to discard unlikely scenarios; basically, scoping what was an intelligent move and what was not.
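To make that pruning idea concrete, here is a minimal sketch in Python. Everything in it (the move features, the scoring weights) is hypothetical; real engines use far stronger evaluations, and more recently learned policy networks, but the principle is the same: rank the candidates cheaply and only ever search the promising few.

```python
import random

# A toy illustration of move pruning: instead of searching every legal
# move, rank them with a cheap heuristic and keep only the most
# promising candidates. All names and features here are made up.

def heuristic_score(move):
    """Cheap stand-in for a position evaluator; higher is better."""
    # A real engine would examine the resulting board position instead.
    return move["material_gain"] + 0.1 * move["mobility"]

def prune_moves(legal_moves, keep=3):
    """Keep only the top-scoring candidates, discarding unlikely ones."""
    ranked = sorted(legal_moves, key=heuristic_score, reverse=True)
    return ranked[:keep]

# Toy data: each "move" is just a dict of features the heuristic can read.
legal_moves = [
    {"name": f"move_{i}",
     "material_gain": random.randint(-3, 3),
     "mobility": random.randint(0, 10)}
    for i in range(20)
]

for move in prune_moves(legal_moves):
    print(move["name"], round(heuristic_score(move), 2))
```

The expensive deep search then only ever sees the pruned list, which is where most of the speed is won.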
So far humans have been very successful at beating machines by making nonsensical moves in those games.
Data isn't everything.
Several works of science fiction have been treatises on this subject: how everything we deal with as intelligent beings can be reduced to data, and perhaps even digitized. I have a strong suspicion that this assumption is one of the major obstacles to achieving true machine intelligence.
We are analog beings. Everything we do and feel is somehow related to something else, and our brain is literally wired from birth to work this way. We are mostly born with the ability to shade and grade each experience against much of everything else we've experienced. Yet that same subjectivity is sometimes very polarised. The concepts of the superlative, the absolute and the hyperbole are very much a part of us. And in this digital age we've become experts at breaking things down to absolute ones and zeroes.
But it is the emergence of relationships between large collections of organized ones and zeroes that purports to create A.I.
This is unnatural for a machine. Relationships between ones and zeroes make little sense even to us. But in a brain, the simple physical location of a brain cell in relation to another helps determine its function and ability.
Statistical or deterministic reality?
Our aptitude for navigating our own reality is largely based on our aptitude for weighing the likelihood of outcomes. If every time we took a walk we came back with a stubbed toe, we'd quickly conclude that it will happen again: that walk = stubbed toe.
And for a great part, that's where we are with A.I. We've taught machines to examine large sets of data and generalize rules or correlations. By teaching them how to relate, not what is or isn't related, we've come a long way toward something very close to artificial intelligence. But there is that annoying "artificial" part that keeps coming through. It is my belief that it is the machine's lack of actual experience of success or failure in determining or judging things that will continue to be its Achilles' heel.
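As a toy illustration of that kind of learned correlation (a hypothetical sketch, not a description of any real system), here is how a machine might "conclude" walk = stubbed toe: by counting outcomes and turning them into a likelihood.

```python
from collections import defaultdict

# A toy "correlation learner": it estimates P(outcome | event) from
# observed frequencies, much like the walk = stubbed-toe intuition
# above. All names and data here are hypothetical.

class OutcomeEstimator:
    def __init__(self):
        # event -> [times outcome happened, total observations]
        self.counts = defaultdict(lambda: [0, 0])

    def observe(self, event, outcome_happened):
        happened, total = self.counts[event]
        self.counts[event] = [happened + int(outcome_happened), total + 1]

    def likelihood(self, event):
        happened, total = self.counts[event]
        # Laplace smoothing: unseen events give 0.5, never divide by zero.
        return (happened + 1) / (total + 2)

est = OutcomeEstimator()
for _ in range(9):
    est.observe("walk", True)   # nine walks, nine stubbed toes
est.observe("walk", False)      # one lucky walk

print(f"P(stubbed toe | walk) ~ {est.likelihood('walk'):.2f}")
```

The number it arrives at is all the machine ever has; the stubbed toe itself is never experienced, which is exactly the gap the paragraph above points at.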
Machines aren't evolving, we are.
When we look at the breakthroughs we are making with our algorithms, it is us adapting our machines to a better description of intelligence. We have so far failed to describe it in a way that makes them able to come to a conclusion that is objectively proven to be better than the previous one. We have remained the judge of what is better or worse. Having no real motivation or determination of their own, machines have remained tame and controlled. Though a few experiments have yielded some shocking results, as everyone remembers the incident on Twitter with an early chatbot that was quickly deactivated after it was "taught" a lot of nonsense, without the ability to discriminate.
We are still very much inventing the rules of parenting A.I., guiding it to what it should and shouldn't conclude. And I believe this is the ethical part that has everyone worried.
The power currently wielded by chatbots and deepfake technology is already able to sway human opinion en masse. The decision to weaponize this power is still in our hands, not in a machine's "decisional scope". Which again proves that we are currently limited, technologically, in what we can do, but perhaps more importantly in what we should do.
What is intelligence, after all?
It's true that this question can quickly devolve into a philosophical debate. But it is my belief that it is the "analog" nature of intelligence that makes it continuously escape proper description. Any model we come up with will lack the nuances, and the control that life exerts on a living being, needed to create a reasonable facsimile.
If we try to enumerate the distinguishing characteristics of intelligence as we know it, we will quickly discover that our own comes with a series of faculties, or "natural" gifts, that are extremely hard to simulate in a device or a collection of machines.
Giving a machine vision isn't hard, but explaining how that experience permeates our existence, and existence itself... much more so!
That is to say, it's not only the input we receive that gives us the power of reason, but perhaps also the power of our own output on our environment and the feedback we get from it. And what to say about our ability to rewrite our own algorithms in response to that feedback? More importantly, about the necessity to do so.
Independently, but also in collaboration with other intelligent beings (of varying levels of competence), we are forced to establish "judgments" and rules that make us perish, survive, or flourish. An elegant trait of any evolved life, I should add. And perhaps an organic limitation that becomes an asset. Survival, perpetuating oneself, is after all the motivation underlying all life as we currently describe it. Well, in my opinion, of course. This duality seems very binary in some ways.
Conclusion.
I believe we are at a crossroads. There is probably enough data flowing through our systems, and enough technological apparatus, to start piecing together a real sensing algorithm: something able to relate many forms of input at once, autonomously, rather than something already somewhat curated, such as the world wide web.
Since there are many brilliant minds working on it daily, a way to describe learning models of self-motivation is bound to emerge. Which in turn brings us to the question: should we do it?
The questions being, ultimately: Is there a safe threshold, a proper baby step we can take, one that improves our machines and helps make our lives better? Will we take it safely, when we ourselves often do not, out of our own human nature, the flaws borne of our own so-called intelligence? Are greed and hypocrisy really part of intelligence, prevalent as they are in us? And safe means safe from whom?
For better or worse, we opened A.I.'s Pandora's box quite a while ago. Whether the next generation is more like us or not should ultimately reflect the fact that we need to know ourselves better, not pretend to, and be ready for improvement. Because if we do create something truly intelligent, I am convinced that we all suspect it will be quick to tell us what is wrong with ourselves.
ChatGPT is very diplomatic about "what is wrong with humanity" and that says a lot about us. Not it.
Our tendency to be easy on our own egos may be the edge that A.I. will forever hold over us. And giving A.I. one (an ego) may just be our downfall.