||||||||
Title: Neural Suggestion Post by TenaliRaman on Jul 12th, 2004, 9:54am I am currently in my final year and I want to do a project on neural networks for my final-year project. My teacher and I have been discussing what we should do, and nothing fruitful has come out of the discussion so far. The problem is that we have very little background in NN, and my group is doing the entire study of NN from the ground up (which basically means we are complete novices in this field). I would like to have any suggestion whatsoever on a project under NN. All suggestions are warmly welcomed :). |
||||||||
Title: Re: Neural Suggestion Post by towr on Jul 12th, 2004, 12:15pm Well, neural networks scream 'pattern recognition', and you can go numerous ways with that:
- letter/number recognition (b/w bitmap picture to characters)
- face recognition (f.i. faces of cartoon characters)
- classification (unsupervised: Kohonen maps)
I saw an interesting combination of Markov models and Kohonen networks a while ago to classify/recognize time series, which might be fun to do.. (or try another combination) |
||||||||
Title: Re: Neural Suggestion Post by TenaliRaman on Jul 12th, 2004, 2:12pm Thanks for the suggestions towr! :) I personally have no idea where to start my coding on any of those. I just started reading up on NNs a week or so back, and only know that Kohonen networks are self-organising networks; beyond that I am blank. I read a nice article at IBM, which gave me some idea of how I should start thinking about all this. Though how the network "learns" (through coding) is still a mystery to me!?! The perceptron model discussed in the IBM article doesn't really clarify things for me, because they talk of changing the weight matrix during the learning stage, but exactly how they go about doing it seems unknown!! Anyway, do you have any ideas where I can start afresh on these things!?! Thanking again! TR |
||||||||
Title: Re: Neural Suggestion Post by towr on Jul 12th, 2004, 2:36pm on 07/12/04 at 14:12:42, TenaliRaman wrote:
What you do is this: each item you want to recognize can be represented as a vector, and you can compare vectors for similarity. Generate a number of random vectors (prototypes), compare test items to all the prototypes, and make the prototype that is most like an item even more like it (by a weighted average, f.i. 99% prototype + 1% new item). After learning for a while, the prototype vectors will represent classes of items. It's not really a neural network, though, but it does work similarly (especially once you get into Kohonen maps you'll see the similarities; support vector machines might also be of interest).
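In rough Java, the prototype loop might look like this (just a sketch; all the names are made up, items are assumed to be plain vectors of doubles, and the 99%/1% mix is only the example figure from above):

import java.util.Random;

public class Prototypes {
    static double[][] protos;

    // start with k random prototype vectors of the given dimension
    static void init(int k, int dim, Random rng) {
        protos = new double[k][dim];
        for (double[] p : protos)
            for (int i = 0; i < dim; i++) p[i] = rng.nextDouble();
    }

    // squared Euclidean distance as the (dis)similarity measure
    static double dist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return s;
    }

    // pull the most similar prototype towards the item:
    // 99% old prototype + 1% new item
    static void learn(double[] item) {
        int best = 0;
        for (int j = 1; j < protos.length; j++)
            if (dist(protos[j], item) < dist(protos[best], item)) best = j;
        for (int i = 0; i < item.length; i++)
            protos[best][i] = 0.99 * protos[best][i] + 0.01 * item[i];
    }

    public static void main(String[] args) {
        init(3, 2, new Random());
        learn(new double[]{0.9, 0.1});  // the nearest prototype moves toward this item
    }
}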
Quote:
For this you need an input and to know what the output should be (supervised learning). Once you feed the input to the neural network you'll get some output, but it will differ from the output you want. This 'error' can be used to adjust the weights between the output layer and the layer before it; the goal here is to ensure that next time the output will be more like what you want. If you have multiple layers you'll need to propagate the error backwards and lay blame on the nodes of the hidden layers (f.i. if a certain output node was 0 and should have been 1, and was connected to two nodes in a previous layer with weights 0.1 and 0.7 respectively, then you can propagate that error backwards by blaming the first node for 0.1/(0.1+0.7) * 1, and the other for 0.7/(0.1+0.7) * 1). So this gives new errors for the previous layer, and you can repeat the same process again. (Maybe I should draw it.. :-/)
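Using the numbers from that example, the blame step comes down to this (a sketch only; full back-propagation also multiplies by the derivative of the activation function, which I've left out here):

public class BlameDemo {
    public static void main(String[] args) {
        double error = 1.0 - 0.0;        // output should have been 1, was 0
        double[] w = {0.1, 0.7};         // weights from the two hidden nodes
        double total = w[0] + w[1];
        for (int i = 0; i < w.length; i++) {
            double blame = w[i] / total * error;  // each node's share of the error
            System.out.println("hidden node " + i + " blamed for " + blame);
        }
        // these hidden-node errors are then used to adjust the weights into
        // the hidden layer, and are propagated further back the same way
    }
}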
Quote:
There's another simple learning rule, Hebbian learning, which was inspired by nature. Basically you strengthen the weights between neurons that are (in)active at the same time (and perhaps weaken the bond otherwise). If the activation of a node ranges from -1 to 1, you can just use the rule weight(i,j) = weight(i,j) + [nu] node(i)*node(j), where [nu] is the learning parameter. The weight matrix may be unstable though, some weights could grow to infinity, so it is customary to normalize it (a small sketch of this rule is at the end of this post).

There are a few common pitfalls with neural networks. How well a network learns depends a lot on its architecture. It may have been proven that a three-layered network (input-hidden-output) can represent any function as long as the layers have enough nodes, but in practice the network may never actually learn the function.

Another problem lies with the input. All inputs should lie in the same range (f.i. [0..1]), because if one input ranges over 100-3000 and another over 0.01-0.05, the latter is probably going to be ignored by the network. Or at best it will take a very long time for the network to learn that it is actually important.

A third caution: use simple representations. A network can in theory learn binary coding, so you'd need only 3 nodes to represent inputs/outputs 1..8, but in practice it will work much better if you use 8 inputs (one for each number). Of course in some cases this may not be practical.

To be honest, every time I did anything with neural networks I have been disappointed; I expect too much, too fast (the same problem I have with genetic algorithms, and let's not get started on the unholy combination of the two, so tempting, yet so evil..). As my professor said, it's not magic, don't expect miracles. |
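For the Hebbian rule above, a minimal sketch (the names are made up; activations are assumed to lie in [-1,1], and each row of the weight matrix is normalized to keep weights from blowing up):

public class Hebb {
    static void hebbUpdate(double[][] weight, double[] node, double nu) {
        int n = node.length;
        // strengthen weights between co-active nodes: w(i,j) += nu * node(i)*node(j)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                weight[i][j] += nu * node[i] * node[j];
        // normalize each row so no weight can grow to infinity
        for (int i = 0; i < n; i++) {
            double norm = 0;
            for (int j = 0; j < n; j++) norm += weight[i][j] * weight[i][j];
            norm = Math.sqrt(norm);
            if (norm > 0)
                for (int j = 0; j < n; j++) weight[i][j] /= norm;
        }
    }

    public static void main(String[] args) {
        double[][] w = new double[3][3];
        hebbUpdate(w, new double[]{1, -1, 1}, 0.1);  // [nu] = 0.1
    }
}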
||||||||
Title: Re: Neural Suggestion Post by TenaliRaman on Jul 14th, 2004, 12:11pm Quote:
I will look into that ... I have read in detail about error back-propagation and the Kohonen model from "Neural Networks PC Tools" by Eberhart and Dobbins. Personally I feel error back-propagation is cheating and looks like nothing more than trial-and-error measurement to me (which may not even be perfect). In this respect at least I felt Kohonen maps do well .... at least during training we need not adjust the learning factor manually. Quote:
Hehe, I am currently wading through all the notes that I could find on neural nets online .... though I never would mind learning it from you either :) You can be a better teacher than you think ;) Quote:
Yes, I did guess something like that would happen .... and now it is confirmed by hearing it from someone like you who has programmed a neural network. Quote:
The more I think about it ... the more I tend to agree with your professor!! (Of course that hasn't dampened my spirits yet, noting that neural networks are still a highly unexplored territory.) |
||||||||
Title: Re: Neural Suggestion Post by towr on Jul 14th, 2004, 2:15pm on 07/14/04 at 12:11:06, TenaliRaman wrote:
Quote:
Even if you let the learning parameter decrease as the number of steps increases, the best rate at which to do this is still guesswork (or trial and error), as is the best value to start at.
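For instance, one common schedule (the values here are my example; picking good eta0 and tau is exactly the guesswork part):

public class Decay {
    public static void main(String[] args) {
        double eta0 = 0.5, tau = 1000.0;  // starting rate and decay constant
        for (int step = 0; step <= 5000; step += 1000) {
            double eta = eta0 / (1.0 + step / tau);  // hyperbolic decay
            System.out.println("step " + step + ": eta = " + eta);
        }
    }
}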
Quote:
Here's a link you might find interesting: http://yann.lecun.com/exdb/lenet/ It's quite a different approach to neural networks than you'd usually expect. |
||||||||
Title: Re: Neural Suggestion Post by TenaliRaman on Jul 17th, 2004, 7:09am Quote:
I dunno, maybe it's because, when I first read it, the first thing that came to my mind was "how closely it resembles the difference method of extrapolation". The only difference being that the network tries to adjust the coefficients so as to get close to the answer, while the difference method just gives us a polynomial which might give us a close value. Quote:
Hmm, the method I had thought of for deciding the parameter was something like the Newton-Raphson method, wherein the starting guess would be 1. Quote:
Does the front-page applet implement their method? If yes, then it's oh so cool ..... Of course I haven't read through the literature yet, but it sounds so interesting. Thanks for the link!! |
||||||||
Title: Re: Neural Suggestion Post by towr on Jul 18th, 2004, 7:58am on 07/17/04 at 07:09:11, TenaliRaman wrote:
Learning is basically just 'hill climbing' in the error landscape. Whether you use gradient descent (EBP) or another method (f.i. simulated annealing or a genetic algorithm) doesn't fundamentally make a difference, just as long as you get to a good local minimum, preferably the absolute minimum. Quote:
Quote:
Another important pitfall just came to mind: overtraining a network. It may just memorize, or at least specialize on, the training set. So it's important to have a good, independent test set with which to check how well the neural network generalizes what it learned from the training set to new cases. A new problem here is that inevitably you'll optimize the network to learn from the training set in such a way that it'll optimally perform on the test set, so you've actually trained it on the latter as well, which means you need a third set, the validation set (and if it does much worse there than on the test set, you know you've optimized too much). Of course you often already have a very limited supply of data to learn from, so splitting it into three sets can pose new practical problems ;D |
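A sketch of such a three-way split (the 60/20/20 proportions are just an example of mine, not a rule):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class Split {
    public static void main(String[] args) {
        List<double[]> data = new ArrayList<>();  // assume this gets filled with samples
        Collections.shuffle(data);                // avoid any ordering artifacts
        int nTrain = (int) (0.6 * data.size());
        int nTest  = (int) (0.2 * data.size());
        List<double[]> train = data.subList(0, nTrain);
        List<double[]> test  = data.subList(nTrain, nTrain + nTest);
        List<double[]> valid = data.subList(nTrain + nTest, data.size());
        // train on 'train', tune architecture/parameters against 'test',
        // and look at 'valid' only once, at the very end
    }
}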
||||||||
Title: Re: Neural Suggestion Post by rmsgrey on Jul 18th, 2004, 8:40am Of course, another classic pitfall is mistraining - closely related to the overtraining problem. A classic example is the tank-detection software that trained on a set of pictures taken, for the purpose of the exercise, of the same patch of woodland on two occasions: one occasion had a tank present in all the pictures, and the other had no tanks whatsoever. What no one noticed at the time was that one set of pictures had been taken on a sunny day, while the other was taken under an overcast sky, so the software ended up learning the difference between sunny and cloudy weather... |
||||||||
Title: Re: Neural Suggestion Post by Sir Col on Jul 18th, 2004, 1:45pm I like it, rmsgrey! ;D on 07/18/04 at 07:58:51, towr wrote:
I'm afraid that I am quite ignorant in the area of artificial intelligence, but I've always wondered about this problem (as I see it) with AI and would like to know a little more. To use a crude analogy... Suppose that you are using an AI unit to find the lowest point (relative to sea level) within a given region, and suppose further that you start the unit in a valley. As it roams around it would continue to detect an increase in gradient in all directions, and so after "searching all possible paths" it would conclude that it had located the absolute minimum. The alternative to this is to allow the unit to complete an exhaustive search of the entire landscape before it arrives at a conclusion. However, in a more complex problem this may prove to be impossible within a realistic timescale. So how would a human handle this? I would probably locate a high point and use it to survey the "landscape" and narrow the areas of search. It seems that, in order to complete the task, an AI would need to redesign its optimisation algorithm to do the opposite of what it was programmed to do. Will this ever be possible? |
||||||||
Title: Re: Neural Suggestion Post by towr on Jul 19th, 2004, 2:17am The problem is you can't 'survey' the error landscape, regardless of where you are in it. You can only sample bits of it and maybe make a map. If you're dealing with a real landscape, yeah sure, then you can have the AI use cameras to survey the landscape. But even then the minimum can be hidden from view by the next mountain range. With complex problems, chances are you'll never find the very best solution, but generally you don't need to. Getting a sufficient solution is often good enough, especially when time limits are an issue. Anyway, there are many techniques to avoid getting trapped in local minima. So generally you wouldn't get trapped between grains of sand, only between really big rocks ;) |
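One such technique is simulated annealing, which sometimes accepts a step that makes the error worse, so the search can climb out of small dips (a toy sketch; the landscape and all the values here are made up):

import java.util.Random;

public class Anneal {
    // a toy error landscape with several local minima
    static double error(double x) {
        return Math.sin(3 * x) + 0.1 * (x - 5) * (x - 5);
    }

    public static void main(String[] args) {
        Random rng = new Random();
        double x = rng.nextDouble() * 10, temp = 1.0;
        for (int step = 0; step < 10000; step++) {
            double candidate = x + rng.nextGaussian() * 0.1;
            double delta = error(candidate) - error(x);
            // always accept improvements; accept a worse point with
            // probability exp(-delta/temp), which shrinks as temp cools
            if (delta < 0 || rng.nextDouble() < Math.exp(-delta / temp))
                x = candidate;
            temp *= 0.999;  // cooling schedule
        }
        System.out.println("ended near x = " + x + ", error = " + error(x));
    }
}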
||||||||
Title: Re: Neural Suggestion Post by TenaliRaman on Jul 20th, 2004, 12:08pm After all the hassles, my project guide has settled on a neural network simulator and he seems to be fixated on it. (I am not sure whether to feel happy about it or sad! The idea of a simulator isn't particularly exciting to me!) Anyway, I will of course pester towr whenever I need help on the proceedings :) . Thanks for all the help and guidance so far though!! :) |
||||||||
Title: Re: Neural Suggestion Post by towr on Jul 20th, 2004, 12:43pm I wonder what is meant by a 'neural network simulator'. I don't think many people would consider building a real physical artificial neural network (consisting of physical nodes and links); it's usually software, and inherently a serial simulation of a parallel system. |
||||||||
Title: Re: Neural Suggestion Post by TenaliRaman on Jul 21st, 2004, 3:12pm Yes, of course it's a software simulator .. not a hardware one :). Not the best project around, but at least I can get my basics right here. *sigh* |
||||||||
Title: Re: Neural Suggestion Post by towr on Jul 21st, 2004, 3:38pm on 07/21/04 at 15:12:51, TenaliRaman wrote:
There are many things it could mean. You could make a nice graphical representation of how neurons fire and how activity passes from one neuron to the next. Or you could simulate how noise on the input, or destruction of neurons, affects performance. Any number of aspects you could work with, really.. Quote:
|
||||||||
Title: Re: Neural Suggestion Post by TenaliRaman on Jul 23rd, 2004, 7:32am Nah! It need not be unique (though I wish it were!). We will be implementing some already-known models and learning techniques, and testing and analysing the results. I guess my guide doesn't want to overburden me, with my semester syllabus being a bit bulky this year. Quote:
Hehe! I will try ( :D ). |
||||||||
Title: Re: Neural Suggestion Post by TenaliRaman on Aug 22nd, 2004, 7:28am HELP!!! Two weeks ago I finished with whatever I could read up on neural networks. I got the general feel of what I am expected to do, so I thought I had better start with the implementation. I spent two weeks designing classes (yes, two weeks .... erm, got other work too, you know :P). In these two weeks I considered various different implementations with all their pros and cons, and I finally settled on one two days ago and decided to implement it.

The model was simple as well as generic (to some extent), so it worked out fine, and in no time I could develop a simple multi-layer feed-forward network where each neuron has a sigmoid transfer function. I was overjoyed! I decided to hack in the back-propagation learning algorithm, and this is where everything went wrong :'( .... I think I have implemented it correctly, but it's not learning anything ..... I mean the output is absolute nonsense .....

After two days of banging my head over it and wasting a beautiful weekend on this, I came to the conclusion that I was making some sort of idiotic mistake in the BPN algorithm ..... I looked into the books .... the two books that I currently have with me discuss weight changing for just 3 layers, that is input-hidden-output ... (I have not implemented a simple 3-layer network with BPN yet, but if nothing works I might implement that and see what happens) .. In any case I thought it must be easily extensible .. but it's not working .... I searched the net, but most of the lecture notes discuss simple 2-layer or 3-layer BPN networks :( I need help!!! What should I do? |
||||||||
Title: Re: Neural Suggestion Post by towr on Aug 22nd, 2004, 10:54am I suppose the first question to ask is: what sort of function are you trying to learn? And of course, haven't you set the learning speed too high (or too low; both can be catastrophic)? |
||||||||
Title: Re: Neural Suggestion Post by TenaliRaman on Aug 22nd, 2004, 1:09pm I tried to implement the standard diagram given for XOR (the 3-layer diagram). Now it should work, since all the literature claims so :( but mine does not, so I am definitely making some god-damn mistake in my BPN algorithm, but I dunno where the problem is?? I kept the learning factor at 0.1. |
||||||||
Title: Re: Neural Suggestion Post by towr on Aug 22nd, 2004, 1:46pm Do you have a bias at each node as well? I'm not sure if it's strictly necessary, but it can help.. (And the diagram of the XOR network I have on my wall has it..) Aside from that there's little I can say without seeing your implementation of the algorithm. |
||||||||
Title: Re: Neural Suggestion Post by TenaliRaman on Aug 22nd, 2004, 2:09pm No, I haven't given any bias yet. But the results aren't encouraging enough for me to tell whether the bias might help .. I wanted to show the implementation, but it's distributed across different classes .... so I did not know how to put it up here ?? :( Should I attach the class files? |
||||||||
Title: Re: Neural Suggestion Post by towr on Aug 22nd, 2004, 2:16pm You could zip them together and attach them as a zip-file. |
||||||||
Title: Re: Neural Suggestion Post by towr on Aug 23rd, 2004, 1:53am After sleeping on it, I'm positive the bias is essential for the network to work. You can't even get a proper AND or OR without it. Just try to construct them by hand. Given inputs x and y, what you want is something like AND = f(x+y - 1.5), such that x+y-1.5 is positive when x and y are both 1, and after the sigmoid function it'll be higher than 0.5 (and thus '1'); otherwise x+y-1.5 is negative and after the sigmoid function it'll be lower than 0.5 (and thus '0'). Without the bias of about -1.5 you can't get that. (The weights here don't matter very much; x and y are equally important, so they should have the same weight, and essentially increasing the weights is equal to contracting the x-axis of the sigmoid function, making it more and more like the step function.) |
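You can check that hand-built unit directly (a sketch; nothing is learned here):

public class AndUnit {
    static double sigmoid(double a) { return 1.0 / (1.0 + Math.exp(-a)); }

    public static void main(String[] args) {
        double bias = -1.5;  // remove this and no choice of equal weights will work
        for (int x = 0; x <= 1; x++)
            for (int y = 0; y <= 1; y++) {
                double out = sigmoid(x + y + bias);
                // above 0.5 counts as '1', below as '0'
                System.out.println(x + " AND " + y + " -> " + out
                        + "  (" + (out > 0.5 ? 1 : 0) + ")");
            }
    }
}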
||||||||
Title: Re: Neural Suggestion Post by TenaliRaman on Aug 23rd, 2004, 9:29am OK, I will try the bias ... I need some time to work out where and how to place the bias now .... probably I will be placing it in the neural layer ... Anyway, I have attached the zip file. Sorry though, I have not given any comments ... :( I will attach a small figure I just drew in the next post, which should clarify what I am doing. |
||||||||
Title: Re: Neural Suggestion Post by TenaliRaman on Aug 23rd, 2004, 9:35am Ok here is the diagram .... |
||||||||
Title: Re: Neural Suggestion Post by TenaliRaman on Sep 18th, 2004, 9:28am Some updates: I finally got my BPN working two weeks back. Made XOR, AND and OR and some other networks with 3/4 hidden layers and stuff .... it was fun .... Added the momentum factor bit and found that it enhances learning speed by at least 50% :o (the update rule is sketched at the end of this post). My instructor hasn't given us anything else to do now ... Just thought I'd put this up while I am playing around with many other networks I am trying to build ..... Any idea for a network to distinguish even and odd numbers .... say, over the set 0-16? |
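The momentum trick is roughly this (a sketch; alpha = 0.9 is just a typical value, not necessarily what I used):

public class Momentum {
    // each weight change carries along a fraction (alpha) of the previous change,
    // which smooths the descent and speeds it up along consistent directions
    static void update(double[] w, double[] grad, double[] prevDelta,
                       double eta, double alpha) {
        for (int i = 0; i < w.length; i++) {
            double delta = -eta * grad[i] + alpha * prevDelta[i];
            w[i] += delta;
            prevDelta[i] = delta;  // remembered for the next step
        }
    }

    public static void main(String[] args) {
        double[] w = {0.2, -0.4}, grad = {0.5, -0.1}, prev = new double[2];
        update(w, grad, prev, 0.1, 0.9);
    }
}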
||||||||
Title: Re: Neural Suggestion Post by Grimbal on Sep 18th, 2004, 11:39am Hm... If you represent your numbers in binary, I don't think you need even a single neuron. If you represent one input per number, it looks like a single-layer thing (both cases are sketched below). What would be less obvious is detecting multiples of 3 or 5 in a binary representation of the numbers. |
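Both cases in code (purely illustrative; nothing here is learned, the weights are fixed by hand):

public class Parity {
    public static void main(String[] args) {
        for (int n = 0; n <= 16; n++) {
            boolean evenFromBit = (n & 1) == 0;  // binary coding: just read bit 0

            double[] input = new double[17];     // one-input-per-number coding of n
            input[n] = 1;
            double sum = 0;
            for (int i = 0; i <= 16; i++)
                sum += ((i % 2 == 0) ? 1 : -1) * input[i];  // fixed weights of +1/-1
            boolean evenFromNet = sum > 0;       // a single threshold unit suffices

            System.out.println(n + " even? " + evenFromBit + " / " + evenFromNet);
        }
    }
}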
||||||||
Title: Re: Neural Suggestion Post by raven on Sep 18th, 2004, 1:27pm Hello NN peeps... This isn't really my realm of expertise (to say the least), but it is my brother's. He's been playing with robotics (and now AI) since the late 70's. He wrote a book that covers quite a range of types of AI, and I remember reading (not necessarily comprehending) quite a lot of what you are talking about. I don't know, just a random suggestion, but it might be worth picking up a copy of the book, or dropping a note at his web site. He's busy, but I suspect he would be interested in tossing ideas around on his lifelong subject of interest. Just a thought... Book: "Hands-On AI with Java: smart gaming, robotics, and more" Author: Edwin Wise Author's Site (http://www.simreal.com/twiki/bin/view.pl/Simreal/WebHome) AI Links found on Author's Site (http://www.simreal.com/twiki/bin/view.pl/Simreal/AiLinks) Contact via Site (edwin@simreal.com) ISBN 0-07-142496-2 Copyright 2004, McGraw-Hill Companies. Publisher's Site (http://books.mcgraw-hill.com/getbook.php?isbn=0071424962&template=) Amazon Site (http://www.amazon.com/exec/obidos/tg/detail/-/0071424962/qid=1095538109/sr=1-1/ref=sr_1_1/103-9365030-9290240?v=glance&s=books) Have fun, Raven *Edited for better email address |
||||||||
Title: Re: Neural Suggestion Post by towr on Sep 19th, 2004, 7:26am Just a general note, neural networks are best at pattern recognition, not maths. Sure, there are some neat patterns in numbers, but don't ask a neural net to do things it isn't made to do. |
||||||||
Title: Re: Neural Suggestion Post by TenaliRaman on Oct 5th, 2004, 10:51am Ah Grimbal! Right you are! I realised that later ... raven, thank you ... the site is nice .... will check it when I get time .... :) towr, well, I wouldn't disagree with you, but I was just trying to have some fun with it... ;D |
||||||||