What makes people think that NNs have more computational power than existing models?
- by Bubba88
I've read on Wikipedia that neural-network functions defined over arbitrary real/rational numbers (along with algorithmic schemas and the speculative `transrecursive' models) have more computational power than the computers we use today. Granted, it was a page of the Russian Wikipedia (ru.wikipedia.org) and may not be properly sourced, but it's not the only source of such... rumors.
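As far as I can tell, the usual argument (it seems to trace back to Siegelmann and Sontag's work on analog recurrent networks) is that a single real-valued weight carries infinitely many bits of information, e.g. one bit per Turing machine saying whether it halts, and the network can shift those bits out one by one. Here is my own toy sketch of that bit-extraction mechanism; the weight below is a finite rational stand-in, since an actual computer obviously cannot store a non-computable real:

```python
from fractions import Fraction

def extract_bit(w: Fraction, n: int) -> int:
    """Return the n-th bit of the binary expansion of w in [0, 1),
    using only operations a simple neuron update can express."""
    for _ in range(n - 1):
        w = (2 * w) % 1          # shift one bit out of the expansion
    return int(2 * w)            # the bit now in the leading position

# Stand-in 'oracle weight': 0.10110 in binary. In the super-Turing
# argument this expansion would be infinite and non-computable,
# e.g. bit n = 1 iff Turing machine number n halts.
w = Fraction(0b10110, 2 ** 5)
print([extract_bit(w, n) for n in range(1, 6)])   # -> [1, 0, 1, 1, 0]
```

If the expansion really were the halting sequence, simulating such a net would decide the halting problem, which is where the "more powerful than a Turing machine" claim comes from. But the whole claim rests on an infinite-precision weight being available, which is the part I do not buy.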
Now, the thing I really do not understand is this: how can a string-rewriting machine (NNs are string-rewriting machines just as Turing machines are; only the programming language differs) be more powerful than a universally capable U-machine?
Yes, the descriptive formalism is different, but the fact is that any function of this class can be (easily or not) turned into a legal Turing machine. Am I wrong? Am I missing something important?
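To make my point concrete, here is a minimal sketch (my own toy model, with made-up names) of one update step of a recurrent net with rational weights, done in exact arithmetic. Every state is a finite string of digits, so a Turing machine can carry out the same rewriting step by step:

```python
from fractions import Fraction

def sat(x: Fraction) -> Fraction:
    """Saturated-linear activation: clamp the value to [0, 1]."""
    return min(max(x, Fraction(0)), Fraction(1))

def step(W, x):
    """One update x(t+1) = sat(W x(t)), with no rounding anywhere."""
    return [sat(sum(w * v for w, v in zip(row, x))) for row in W]

W = [[Fraction(1, 2), Fraction(1, 3)],
     [Fraction(-1),   Fraction(2)]]
x = [Fraction(1, 4), Fraction(3, 4)]
print(step(W, x))   # -> [Fraction(3, 8), Fraction(1, 1)], exactly
```

So for rational weights my intuition seems safe; the only escape hatch I can see is the infinite-precision real weights from the argument above.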
What is the cause of people saying that? I do know that the phenomenon of undecidability is widely accepted today (though, according to what I've read, not consistently proven), but I do not see the smallest chance of NNs being able to solve that particular problem.
Addendum: by "not consistently proven according to what I've read" I meant that you might want to take a look at the papers of A. Zenkin (a Russian mathematician) from the mid-90s onward, where he persuasively argues against G. Cantor's concepts, including transfinite sets, uncountable sets, the diagonalization method (the method used in Turing's undecidability proof), and maybe others. Even Gödel's incompleteness theorems were only proven properly in the 21st century... All of this is just to plug Zenkin's work into the post, since I don't know how widespread that knowledge is in the CS community, so forgive me if it looks stupid.
Thank you!