Hm. I tend to disagree with that statement:

The idea about weights started in the 1940s with Hebb's learning law (Donald Olding Hebb, a Canadian neuropsychologist), also known as "cells that fire together wire together", which says that if one neuron A is causing activity in another neuron B, then their connection gets stronger; otherwise it weakens.

This is somewhat misleading. As far as I remember, Hebb did not write that two neurons need to fire together to increase the efficiency of their connection.
To understand this clearly, it is important to be aware of the concepts of causality and consistency.
He stated that neuron A needs to repeatedly (consistency) take part in firing (causality) neuron B. That is an important distinction to make. The presynaptic neuron needs to fire repeatedly just before the postsynaptic one in order to have a potentiating effect on the synapse. This mechanism is strongly connected to spike-timing-dependent plasticity (STDP), a neurological process responsible for adjusting the strength of neural connections based on the relative timing of a neuron's output and input potentials (spikes).
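To make the timing dependence concrete, here is a minimal MATLAB sketch of a classic exponential STDP curve; the amplitudes and time constant are made-up illustrative values, not something taken from Hebb or from the article:

```matlab
% Minimal sketch of an STDP weight-update curve (illustrative values only).
% dt = t_post - t_pre: positive when the presynaptic spike precedes the
% postsynaptic one (potentiation), negative otherwise (depression).
A_plus  = 0.01;   % maximum potentiation (assumed)
A_minus = 0.012;  % maximum depression (assumed)
tau     = 20;     % time constant in ms (assumed)

dt = -50:1:50;                                    % spike-time differences in ms
dw = zeros(size(dt));
dw(dt > 0) =  A_plus  * exp(-dt(dt > 0) / tau);   % pre fires before post -> strengthen
dw(dt < 0) = -A_minus * exp( dt(dt < 0) / tau);   % post fires before pre -> weaken

plot(dt, dw); xlabel('t_{post} - t_{pre} [ms]'); ylabel('\Delta w');
title('STDP: timing and causality set the sign and size of the change');
```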

Hi @egotheist,

Thank you for your insight, you are right - it is a little bit misleading, but only if it's taken too literally.

Hebb actually never said "cells that fire together wire together", nor did I write that it's stated in his law - it's just part of the "slang" in academic circles because it's easy to remember.

The fact is that the two neurons can't fire at exactly the same time, so the presynaptic neuron has to fire a bit earlier (because of causality, like you said), and that consistency is a big factor in how the weights are set.

I just wanted to adapt it for the average Steemit reader. Personally, I never thought about it that way, that neurons could literally fire at the same time, so that's probably why I didn't see it as an oversight.

Again, you got the point,

Regards:)

I just wanted to adapt it for the average Steemit reader. Personally, I never thought about it that way, that neurons could literally fire at the same time, so that's probably why I didn't see it as an oversight.

I see. The problem is, this kind of imprecise thinking leads to other mistakes. I have already seen this exact example in books. It can't hurt to set it right when writing here.

Good article. Just one suggestion: when you are doing coding stuff, it is better to put the source code and related data on a site like GitHub and link it in the article, so that it will be helpful to others too. 😃

I completely agree with you, it's just that I have never used GitHub before :(

But I certainly plan to learn how to use it, because I would love to start contributing to the @utopian-io community :)

Hi @nikolanikola!

Your post was upvoted by utopian.io in cooperation with steemstem - supporting knowledge, innovation and technological advancement on the Steem Blockchain.

Contribute to Open Source with utopian.io

Learn how to contribute on our website and join the new open source economy.

Want to chat? Join the Utopian Community on Discord https://discord.gg/h52nFrV

My father used to drive me crazy when I was a kid by telling me that he was going to sell our car and buy a Subaru Rex! Puke! Bloody Subaru Rex! I hate it! :D

Anyway, Matlab...

Nice and tidy, I like that :P

Now I'm serious: do you have any experience with data that were already transformed before being used as input for a NN? I will need to solve something with hyperspectral images. I have two options: either use the raw data (matrices, 64k x 4k) and put them into the NN to train the hell out of them, or... would it be better to reduce their dimensionality to 64k by 3 (not 3k, 3) and then feed that as input? I don't like doing the analysis of the analysis of the analysis... Yet, reduction of dimensionality is in my veins :)

Well, I personally haven't had experience with such a problem, but based on what you said, it looks like you have to reduce the number of features without losing any important data. That looks to me like a problem for the Karhunen-Loève expansion; you can check out this article on wiki https://en.wikipedia.org/wiki/Karhunen%E2%80%93Lo%C3%A8ve_theorem and google it further.
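In practice, on finite data this usually boils down to PCA. A minimal MATLAB sketch (with random stand-in data instead of your actual images, and an assumed choice of 3 components) could look like this:

```matlab
% Minimal sketch: project an N x 4000 hyperspectral matrix onto its
% first 3 principal components (the discrete Karhunen-Loève basis).
% X here is random, just so the example runs; replace it with your data.
N = 1000;                          % stand-in for ~64k pixels
X = rand(N, 4000);                 % stand-in for the raw spectra

Xc = X - mean(X, 1);               % center every spectral band
[~, S, V] = svds(Xc, 3);           % top 3 right singular vectors = top 3 PCs
scores = Xc * V;                   % N x 3 matrix to feed into the NN

kept = sum(diag(S).^2) / sum(Xc(:).^2);   % rough fraction of variance retained
```

How many components to keep is of course data dependent; checking the fraction of variance retained (as in the last line) is a quick sanity check before committing to 3.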

I hope that I was helpful :)

Karhunen–Loève theorem
In the theory of stochastic processes, the Karhunen–Loève theorem (named after Kari Karhunen and Michel Loève), also known as the Kosambi–Karhunen–Loève theorem, is a representation of a stochastic process as an infinite linear combination of orthogonal functions, analogous to a Fourier series representation of a function on a bounded interval. The transformation is also known as the Hotelling transform and eigenvector transform, and is closely related to the principal component analysis (PCA) technique widely used in image processing and in data analysis in many fields.

Stochastic processes given by infinite series of this form were first considered by Damodar Dharmananda Kosambi. There exist many such expansions of a stochastic process: if the process is indexed over [a, b], any orthonormal basis of L2([a, b]) yields an expansion thereof in that form. The importance of the Karhunen–Loève theorem is that it yields the best such basis in the sense that it minimizes the total mean squared error.
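For reference, the expansion the excerpt describes can be written compactly as

```latex
X_t = \sum_{k=1}^{\infty} Z_k \, e_k(t), \qquad t \in [a, b],
```

where the e_k are orthonormal functions on [a, b] (for the Karhunen–Loève basis, eigenfunctions of the covariance function of X_t) and the Z_k are pairwise uncorrelated random coefficients; truncating after the first few terms gives the best approximation in total mean squared error, which is exactly the property exploited when reducing dimensionality.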

Yes, yes, there is a whole bunch of similar techniques, each of them good for the extraction/elimination of "noise" based on slightly different statistical parameters.

Thank you!

One technical tip: .csv (comma-separated values) and .txt (text) formats are the same data encoding. The difference is that the file extension tells the operating system what type of program should open the file by default. The MATLAB ecosystem has a csvread function that should help you avoid having to write parsing code in the future.
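For example, something along these lines (the file name and column layout here are just assumptions for illustration):

```matlab
% csvread expects purely numeric, comma-separated data and returns a matrix.
data    = csvread('training_data.csv');   % hypothetical file name
inputs  = data(:, 1:end-1);               % e.g. the feature columns
targets = data(:, end);                   % e.g. the label column
```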

I hope this helps! Your article is a nice walkthrough. My only other recommendation is to add some high-level comments in the code samples on the intent behind the code, but that's more personal taste, as I skip ahead to check those out first!

Thanks for the insight @carback1 :)

I have only had experience with txt files so far, so it was natural for me to transform the data to txt first, but I'll keep your tips in mind for next time. :)

At the risk of spoiling my mentor

No worries, already too late for that 😎

I must say that this is one of the very rare occasions when I read an article from beginning to end and two things happen:

  1. I don't get bored;

  2. I understand everything perfectly, although I knew nothing about the topic prior to reading.

That being said, I think you did awesome work here, just keep it up!

As a person who knows nothing about this, I have just one cliché question - in what ways are neural networks used in our everyday lives, and where can we find them?

Thank you very much, it really means a lot to me when I see that I've managed to spark interest in someone who is not actually from this profession :)

Well, besides using them as a classifier, like I showed in this article with a concrete example of an application, they can find use practically everywhere - for example in medicine, to help determine a patient's disease, or in business, to find correlations in the data, etc.
There is one interesting use - they can be used to reconstruct a 3D function, and that can be helpful for face recognition!
I plan to write about these things more in the future, so stay tuned :)

Wonderful post.

Strong neural power

Not bad



This post has been voted on by the steemstem curation team and voting trail.

There is more to SteemSTEM than just writing posts, check here for some more tips on being a community member. You can also join our discord here to get to know the rest of the community!
