This is a read-only snapshot of the ComputerCraft forums, taken in April 2020.

Artificial Intelligence in ComputerCraft!

Started by Creator, 08 March 2015 - 06:58 PM
Creator #1
Posted 08 March 2015 - 07:58 PM
Hi guys,

recently I saw a friend, who codes in C++, play around with a piece of software that has the ability to learn. I asked him what it was and he said something very unclear about neural networks. When I got home I researched neural networks and found out a lot about them. I took it as a challenge and decided to write my own version for ComputerCraft.

I have been writing it for around two days and also used the opportunity to enhance my OOP skills. The code can be found at the NeuralNet GitHub repository, or you can download it from pastebin:


pastebin get GHHCma5U NeuralNet

At some point, the program will ask you to choose between file input and typed input. Please don't choose the file input option, since I was not able to upload the training file to pastebin due to heavy load. However, if you insist on getting it, you can find it here. Paste it into your computer under this directory: "NeuralNet/Training Files/XOR".

User input structure:

1st: It's in binary.
2nd: A user input for 2 input nodes would look like this: 101 (for the XOR function), where the first two digits are the actual input and the last one the expected result. So the rule is: a number of 1s and 0s corresponding to the number of input nodes, followed by a number of 1s and 0s corresponding to the output nodes and giving the expected result.
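To illustrate the structure above, here is a small sketch of how such a training string could be split into inputs and an expected output. This is a hypothetical helper of my own, not code from the NeuralNet repository — the function name `parseSample` is made up for the example:

```lua
-- Hypothetical sketch: split a training string such as "110" into
-- input bits and expected output bits, given the node counts.
local function parseSample(str, numInputs, numOutputs)
  assert(#str == numInputs + numOutputs, "wrong sample length")
  local inputs, outputs = {}, {}
  for i = 1, numInputs do
    inputs[i] = tonumber(str:sub(i, i))
  end
  for i = 1, numOutputs do
    outputs[i] = tonumber(str:sub(numInputs + i, numInputs + i))
  end
  return inputs, outputs
end

-- "110" with 2 input nodes and 1 output node:
-- inputs are 1 and 1, and the expected XOR result is 0.
local inputs, outputs = parseSample("110", 2, 1)
```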

I hope you enjoy it.

Give me feedback, or else how will I be able to improve it!

~Creator

PS: Training from the file can take extremely long. If you are ready to wait around 5 mins (on my PC from 2009), you are free to do it.
There are no screenshots since it is a command line script. It prints some debugging values as well as the real output.
Edited on 09 March 2015 - 12:34 PM
InDieTasten #2
Posted 08 March 2015 - 10:44 PM
Nice, I actually started a similar project weeks ago :D
But I have too many projects going on at the same time, so I haven't been doing anything with it. Maybe (for performance reasons) you could look over my code. I only implemented net generation and forward-passing, but the structure is really, really raw, which makes it fast. I think performance is a really big problem, and one that can't be resolved using OOP, since OOP tends to slow things down via metatable lookups and calls.

And I did in fact comment some stuff, which is unusual for me, but I think the rawness of the data just forced me to^^

Oh, and yeah, documentation-wise: newNet(5,3,3,2) would generate a net with 5 inputs, 2 hidden layers with 3 nodes each, and an output layer of 2 nodes. The rest should be fairly easy to understand. The data structure in which the weights are stored should become clear when you look at the return value of the call to newNet ;)
On a forward pass, a copy of the net is made and the weights get overridden by the actual state of the nodes (according to the currently tested input set).

Should make sense at least in my head.
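For readers following along, here is roughly what a raw, table-based `newNet(5,3,3,2)` plus forward pass could look like. This is my own reconstruction from the description above, not InDieTasten's actual code — the layout of the weight tables is a guess:

```lua
-- Sketch: newNet(5,3,3,2) builds weights as plain nested tables
-- (no OOP): net[layer][node][prevNode] = weight.
local function newNet(...)
  local sizes = {...}
  local net = {}
  for layer = 2, #sizes do
    net[layer - 1] = {}
    for node = 1, sizes[layer] do
      local weights = {}
      for prev = 1, sizes[layer - 1] do
        weights[prev] = math.random() * 2 - 1  -- random weight in [-1, 1]
      end
      net[layer - 1][node] = weights
    end
  end
  return net
end

local function sigmoid(x) return 1 / (1 + math.exp(-x)) end

-- Forward pass: each layer's node activations become the
-- next layer's inputs.
local function forward(net, inputs)
  local values = inputs
  for _, layer in ipairs(net) do
    local nextValues = {}
    for node, weights in ipairs(layer) do
      local sum = 0
      for i, w in ipairs(weights) do
        sum = sum + w * values[i]
      end
      nextValues[node] = sigmoid(sum)
    end
    values = nextValues
  end
  return values
end
```

Keeping everything in flat tables like this avoids the metatable lookups mentioned above, at the cost of readability.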
RoD #3
Posted 08 March 2015 - 10:47 PM
I don't get how this thing works XD
InDieTasten #4
Posted 08 March 2015 - 11:18 PM
I don't get how this thing works XD
I like to call neural nets adaptable programs.
It allows for classification: you could give it an npaint image with a hand-drawn digit, and it tells you which digit you drew, like handwriting recognition.
It works by changing how the input affects the output over a ton of input sets, trying to minimize the overall failure rate of the net.
At the end you can just export it out of the learning environment and give it some actual handwriting to recognize, and so it will, provided you haven't forgotten anything in the learning period, which happens a lot.
There is plenty of reading material on that topic out on the internet. If it gets too easy, you can combine them with genetic algorithms and have virtual entities evolving over multiple generations towards a specific goal you set xD
But as I've said, there's plenty of room to mess it up, and you sometimes just don't know why it's not working^^
InDieTasten #5
Posted 08 March 2015 - 11:36 PM
And oh yeah, as noted in a comment in my code, you shouldn't have the same input/output pair in your test set twice or more. Just run through every possibility once (in the case of XOR there are 4) and run them again and again against your net, but in random order. That's important; otherwise your net can exploit the ordering to calculate the output correctly, as if it learned the order of the outputs rather than an actual approximation of the XOR function. This is really rare with basic nets, but you will soon want to start with recurrent nets with actual memory capabilities, and then you will forget it and wonder why it isn't working^^
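The shuffling idea can be sketched like this: one epoch covers each of the four XOR cases exactly once, in a random order. The sample strings follow the input format from the opening post; the function name is my own:

```lua
-- The four XOR cases: two input bits followed by the expected bit.
local samples = { "000", "011", "101", "110" }

-- Return the samples in a fresh random order (Fisher-Yates shuffle),
-- so the net cannot exploit a fixed presentation order.
local function shuffledEpoch(samples)
  local order = {}
  for i = 1, #samples do order[i] = i end
  for i = #order, 2, -1 do
    local j = math.random(i)
    order[i], order[j] = order[j], order[i]
  end
  local epoch = {}
  for i, idx in ipairs(order) do epoch[i] = samples[idx] end
  return epoch
end
```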

As a little fun fact, I started to understand how the human brain works by programming/creating neural nets, which is the opposite of what's common, I guess.
Lupus590 #6
Posted 08 March 2015 - 11:50 PM
Can anyone think of anything that you may need a computer to learn in CC?
Or what is the MC application for this?

Path finding perhaps? Maybe rednet relay placing? Although, both of these can be done (probably quicker and easier) using conventional programming.
Edited on 08 March 2015 - 10:50 PM
InDieTasten #7
Posted 09 March 2015 - 12:34 AM
Can anyone think of anything that you may need a computer to learn in CC?
Or what is the MC application for this?

Path finding perhaps? Maybe rednet relay placing? Although, both of these can be done (probably quicker and easier) using conventional programming.
Well, who needs turtles to find a path? It's the same question as "Who needs a terminal that handles handwriting recognition?". You set your own goals in ComputerCraft, and primarily you do things because you think they're cool. And I find neural nets particularly cool. There are actually dozens of fun experiments you can do with them. Embedding them into games, for example: a net that learns to beat you at rock, paper, scissors because it detects the pattern of your choices ;) And the end user doesn't even have to know it. For small applications and predictions of player behaviour you can easily do it in seconds. But as I've said, everyone is passionate about different things. I think it's cool; for you it might not sound as cool. You are perhaps interested in other stuff that I might not enjoy as much as you do ;)
longbyte1 #8
Posted 09 March 2015 - 01:07 AM
How many repetitions does it take for this network to produce reliable data? Also, is the output always a word-by-word version of one of the inputs?
InDieTasten #9
Posted 09 March 2015 - 01:08 AM
How many repetitions does it take for this network to produce reliable data? Also, is the output always a word-by-word version of one of the inputs?
What do you mean by "word-by-word" version?
nitrogenfingers #10
Posted 09 March 2015 - 03:14 AM
I downloaded this and ran it. After specifying the number of nodes in the net (which is a bit weird for the end user), the program crashed when I entered my first command, at line 167 (double expected, got nil). I tried a few different numbers of nodes and had no luck. I'm a little confused about what this is doing; is it trying to predict what the next sequence of tokens will be, given all tokens that have come previously?

Can anyone think of anything that you may need a computer to learn in CC?
Or what is the MC application for this?

So neural networks are bio-inspired learning algorithms designed for the handling of data or information that is too large for a statistical learning algorithm, or when dealing with fuzzy information like unsupervised image classification or handwriting recognition. Fuzzy data is abundant and overwhelming in the real world and we use ML for everything from scanning QR codes on our phones to determining what price to set a product at given its previous sale performance; but there is almost no fuzzy data in minecraft, because it is a discrete world built on comparatively simplistic algorithms. So the only information you can really get has to be provided from the players themselves, but even the most populated of servers would struggle to get the hundreds of thousands of training inputs necessary to have the system be useful.

I've yet to think of an ML problem in computercraft (let alone minecraft) that feels meaningful to solve, but I'd probably recommend using an HMM or another Bayesian system over a net myself; one of my friends here worked on reinforcement learning to teach a computer to play Atari games, which is a problem you could solve in CC given enough time.

But that's just my two cents, don't let my spoil-sport opinions detract from this. It's a cool thing for CC to have :)
Edited on 09 March 2015 - 02:15 AM
Lupus590 #11
Posted 09 March 2015 - 09:59 AM
To sum up then: no practical application in game, but the rule of cool covers that.
Creator #12
Posted 09 March 2015 - 10:13 AM
I forgot to mention that the input follows this structure:
1st: It's binary.
2nd: A user input for 2 input nodes would look like this: 101 (for the XOR function), where the first two digits are the actual input and the last one the expected result. So the rule is: a number of 1s and 0s corresponding to the number of input nodes, followed by a number of 1s and 0s corresponding to the output nodes and giving the expected result.

Moreover, I don't totally get backpropagation, so feel free to look over my code to see how I implemented it (class neuron, method updateweights).

The Nature of Code (chapter 10) is about artificial intelligence and is very well written.

One problem it has is that it never works with the input 110 (remember: binary).

When I come home I'll fix the issues pointed out by the community :)

~Creator

PS: I'm on my phone, so excuse my typing errors.
Edited on 09 March 2015 - 12:35 PM
Creator #13
Posted 09 March 2015 - 05:27 PM
Here is a part of an answer from StackOverflow that I do not understand:

While I don't exactly understand your example, the question of backpropagation is fairly common. In the simplest case with strictly layered feed-forward and one output node:

First you need to propagate the information forwards. It looks like you may have this already, however make sure you keep track of what the value at each node was after the squashing function, lets call this o, and keep one for each node.

Once the forward propagation is done, for backpropagation you need to calculate the error. This is the difference between what was expected and what was given. In addition multiply this by the derivative in order to give a direction for the update later (the derivation of the derivative is complicated, but the use is very simple).

Error[output] = (Expected - Actual) * o(1 - o)

Then propagate the error at each node backwards through the network. This gives an estimate on the 'responsibility' of each node for the error. So the error at each node is the error at all nodes in the next layer weighted by the weights on each link. Again, we multiply by the derivative so we have direction.

Error[hidden] = Sum (Error[output]*weight[hiddenToOutput]) * o(1 - o)

Repeat this for every layer of links (input to hidden, hidden to hidden, hidden to output) as necessary.

Finally, the training occurs by updating the weights on the links. For this we combine all the information we have to get the final update.

Weight[hiddenToOutput] = weight[hiddenToOutput] + learningRate * error[output] * input

Where input is the value that went into the link (that is, 'o' from the previous layer, and error is from the following layer), and learningRate is some small number (eg. 0.01) to limit the size of our updates. Analogous calculation is done for the weight[inputToHidden] etc, layers.

((NB: this assumes the sigmoid squashing function))

Hope this helps. Additional info can be found in lots of places. I learned from Machine Learning by Tom M. Mitchell. It has a good pseudocode section.

If you want to read more, find the article here.
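The quoted update rules can be transcribed into Lua almost line for line. This is my own sketch of the formulas for one hidden layer and a sigmoid squashing function (as the answer assumes); the names mirror the formulas, not the NeuralNet code:

```lua
local learningRate = 0.01  -- small step size, as in the quoted answer

-- Error[output] = (Expected - Actual) * o(1 - o),
-- where o is the node value after the squashing function.
local function outputError(expected, actual)
  return (expected - actual) * actual * (1 - actual)
end

-- Error[hidden] = Sum(Error[output] * weight[hiddenToOutput]) * o(1 - o):
-- each hidden node's share of responsibility for the output errors.
local function hiddenError(outErrors, weightsToOutputs, o)
  local sum = 0
  for i, err in ipairs(outErrors) do
    sum = sum + err * weightsToOutputs[i]
  end
  return sum * o * (1 - o)
end

-- weight = weight + learningRate * error * input,
-- where input is the 'o' value that travelled along the link.
local function updateWeight(weight, err, input)
  return weight + learningRate * err * input
end
```

Running `outputError(1, 0.5)` gives 0.125: the expected output was 1, the net produced 0.5, and the derivative term 0.5 * (1 - 0.5) scales the raw error of 0.5 down by a quarter.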
InDieTasten #14
Posted 09 March 2015 - 05:52 PM
Other than with an example computation, I can't think of a simpler explanation ;) It's not easy to understand initially, but once you get the part about the responsibility of each node for the overall error, you should be good to go ;)
Creator #15
Posted 10 March 2015 - 06:55 AM
What I don't get is how to determine the error for each neuron.

~Creator
cdel #16
Posted 10 March 2015 - 12:02 PM
As an owner on Lua Land, I am considering setting this up at spawn for people to play around with and "teach".