RE: Diving deep in deep learning

in #technology · 8 years ago

Hi

A comment on a very general theoretical level. I notice that the algorithms are written in the language of prevailing mainstream pure mathematics, containing trigonometric and other transcendental functions. The problem, and I think it is a deep one and even paradigmatic, is that computers don't handle the "actual infinities" of real numbers etc., but only finite notations for rational numbers, such as floating point, p-adic 'quote notation' and so on. This disparity between the mathematical language used (with its presuppositions) and what computers actually do can easily lead to confusion, e.g. when studying connections between neural networks and more general theories of cognition.
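To make the point concrete, here is a minimal Python illustration of that finiteness (nothing framework-specific, just standard floating point behaviour):

```python
import math

# Floating point is a finite notation: 0.1 and 0.2 have no exact
# binary representation, so their sum is already rounded.
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

# Transcendental functions are likewise finite approximations:
# sin(pi) should be exactly 0, but pi itself is only a float.
print(math.sin(math.pi))  # ~1.2e-16, not 0
```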

A rational and finitist theory of mathematics, such as the one being developed by Norman J Wildberger, would be a more consistent foundation for at least all computer-oriented approaches, as well as easier to communicate. The theoretical implications of such an approach cannot be predicted, but they could be far-reaching and radical. In any case it seems very plausible that algorithms based on a theory of mathematics that is inherently purely computable - rational and finitist - could greatly enhance the efficiency and transparency of machine computation.


My answer will probably not be as profound as your comment deserves, as I'm really not a mathematician. To the first part of your comment: you're right that floating point numbers and transcendental functions are only approximated in computer arithmetic, but I don't think this poses a problem for today's programs/algorithms, because, for instance in the case of neural networks, far more imprecision is introduced by the learning process itself (incorrect labels, noisy inputs). The whole process is a search for local optima, and there are "only" statistical proofs that it should lead to a good outcome (under many conditions that in practice can't be guaranteed). There is a problem with underflowing or overflowing numbers, which in practice is solved by using logarithmic variants of the calculations.
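One standard instance of that logarithmic trick is the log-sum-exp identity, log Σᵢ exp(xᵢ) = m + log Σᵢ exp(xᵢ − m) with m = max(xᵢ). A minimal sketch in plain Python (no particular library assumed):

```python
import math

def log_sum_exp(log_values):
    """Compute log(sum(exp(x) for x in log_values)) stably."""
    m = max(log_values)
    # Shifting by the maximum keeps every exponent <= 0,
    # so exp() can neither overflow nor all-underflow to zero.
    return m + math.log(sum(math.exp(x - m) for x in log_values))

log_probs = [-1000.0, -1001.0, -1002.0]
# Naive math.log(sum(math.exp(x) for x in log_probs)) underflows:
# each exp(-1000) rounds to 0.0 and log(0.0) raises ValueError.
print(log_sum_exp(log_probs))  # ~ -999.59, computed without underflow
```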

And to the second part... I don't know much (or actually anything) about Norman J. Wildberger - you actually introduced him to me - so I can't answer the second part of your comment. I'll try to follow him a bit more; from what I've just seen, I like his view of mathematics where each next step is chained to the previous one.

I'm not a mathematician either. It just seems that the discrepancy between mathematical theory and mathematical practice in self-learning AI systems could be an interesting and promising avenue of research.

Also, on a more practical level, I don't know if there have been attempts to use Quote Notation in addition to, or instead of, the floating point technique. Further links on QN here:
https://steemit.com/programming/@id-entity/quote-notation-blockchain-and-cryptocurrencies
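Purely as an illustration of the exact-arithmetic idea QN aims at (using Python's standard fractions module as a stand-in, not quote notation itself):

```python
from fractions import Fraction

# Exact rational arithmetic: no rounding error ever accumulates.
x = Fraction(1, 10)
print(sum([x] * 10) == 1)      # True: ten exact tenths make exactly 1

# The floating-point analogue drifts, because 0.1 is only approximate:
f = 0.1
print(sum([f] * 10) == 1.0)    # False: sum is 0.9999999999999999
```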
