David Petry
2020-04-08 07:15:45 UTC
Since most of us are locked in our homes with little to do, this might be a good time to have a discussion about the future. The so-called "singularity" is probably not far off, maybe just a few decades away. So what impact will this have on mathematics, and what impact will mathematics have on the singularity?
Here's one thing to consider. It was just a few years ago that Google's AlphaZero program did something rather remarkable. It was given the rules of chess, and then, just by playing against itself and learning as it played, within four hours it reached a level of play far exceeding that of any human player, and even surpassed every computer player that had come before it.
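To make the self-play idea a bit more concrete, here is a rough sketch of that kind of learning loop, shrunk down to a toy game (a Nim variant I picked purely for illustration) and a simple table of move values updated from game outcomes, in place of AlphaZero's neural network and tree search. All the names and parameters here are my own, just to show the shape of the idea:

    # Toy illustration of learning a game by self-play, given only the rules.
    # Game: start with a pile of 10 stones; players alternately remove 1-3;
    # whoever takes the last stone wins.

    import random
    from collections import defaultdict

    PILE = 10          # starting pile size
    MOVES = (1, 2, 3)  # legal moves: remove 1, 2, or 3 stones

    Q = defaultdict(float)   # Q[(pile, move)] = estimated value for the player to move
    EPSILON = 0.1            # exploration rate
    ALPHA = 0.5              # learning rate

    def legal_moves(pile):
        return [m for m in MOVES if m <= pile]

    def choose_move(pile, explore=True):
        moves = legal_moves(pile)
        if explore and random.random() < EPSILON:
            return random.choice(moves)
        return max(moves, key=lambda m: Q[(pile, m)])

    def self_play_episode():
        """Play one game against itself and update the value table from the outcome."""
        history = []          # (pile, move) pairs, alternating players
        pile = PILE
        while pile > 0:
            move = choose_move(pile)
            history.append((pile, move))
            pile -= move
        # The player who made the last move won: reward +1 for the winner's
        # moves and -1 for the loser's, working backwards through the game.
        reward = 1.0
        for state_action in reversed(history):
            Q[state_action] += ALPHA * (reward - Q[state_action])
            reward = -reward  # switch perspective each ply

    for _ in range(20000):
        self_play_episode()

    # After training, the learned policy should typically recover the well-known
    # winning strategy for this game: leave your opponent a multiple of 4
    # (e.g., from a pile of 10, take 2).
    print({pile: choose_move(pile, explore=False) for pile in range(1, PILE + 1)})

The real thing is vastly more sophisticated, of course, but the principle is the same: nothing is given but the rules, and the strength comes from the machine playing itself.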
So, does this not suggest that when the singularity arrives, and we have computers that may be millions of times more intelligent than humans, these artificially intelligent computers will be able to start from the axioms of mathematics and little more, and within a few hours develop mathematics far beyond the level that humans have been able to reach?
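To give a small taste of what "starting from the axioms" already looks like today, here is a tiny example in the Lean proof assistant, where a basic fact about the natural numbers is checked mechanically from the inductive definitions alone (the theorem name is just my own label):

    -- A fact about natural numbers, derived from the definitions of
    -- Nat and of addition (Lean 4 syntax).
    theorem zero_add_example (n : Nat) : 0 + n = n := by
      induction n with
      | zero => rfl
      | succ n ih => rw [Nat.add_succ, ih]

Today a human still has to guide such proofs; the question is what happens when the machine no longer needs the guidance.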
What this suggests is that if mathematicians are not doing something that will benefit humanity before the singularity arrives, then what they are doing will never benefit humanity; an advanced AI will be able to reproduce a mathematician's life work in just a few hours, and will go far beyond it in a few more. And then what mathematicians have been doing for all these years will be mostly consigned to the dustbin of history. That's something to think about, is it not?
We might also discuss what mathematicians could be doing to bring about the singularity. I like to point out that mathematics has a very close connection to artificial intelligence. We can think of mathematics as a rigorous conceptual framework for reasoning about the real world, and likewise, a computer will be intelligent when it can reason about the real world. It would be reasonable to say that mathematics is a rigorous theory of intelligence.
So what I have been arguing is that mathematicians could be developing a rigorous foundation for artificial intelligence. That's something I have always wanted to do. But what mathematicians have chosen as the foundation of their own subject (i.e., ZFC) is quite deficient as a foundation for AI, and mathematicians now seem very opposed to even considering alternate foundations that might have great practical value.
Something else the mathematicians could be doing is to develop a language that facilitates communication between humans and AI. Again, that's something that would be of enormous value to humanity, in the long run.
As I see it, the AI that is being developed is built on a deficient theoretical foundation, and what is being produced will be unavailable to all but a few members of an elite priestly class of experts. Again, as I see it, AI itself holds out enormous potential for the benefit of humanity, but the AI currently being produced will likely be a flawed AI, with its enormous power resting in the hands of the elites, and then the dystopian nightmares that people like Elon Musk are concerned about may very well come to fruition.
People with mathematical talent could be doing something to smooth out the transition to a post-singularity world.
So like I said, I'd like to encourage people to discuss this topic.