124 posts
Location
Seattle, WA
Posted 03 July 2015 - 04:22 PM
Swarm
a clustered load balancing system for cc servers
~
Ever had a program that crunches numbers for people all day? Hit the limit of how slow CC can be? Is someone attacking your CC computer, making your server slow because of shared resources? I have a solution: Swarm. Swarm is a direct hardware platform for CC servers. It consists of 4 parts:
- swarmd - node.js API used to register/communicate between nodes
- swarm-worker - node.js worker used to communicate with the API
- swarm-bridge - bridging controller, used from CC Comp to API to transparently serve requests.
- swarmc - a ComputerCraft emulator in JS.
Swarm Organization (Github)
This project is not complete; however, in only two days of development it has already gotten quite mature. The method of sharing resources between nodes is still being debated. Currently they will statically share files with one another; aside from that, memory could also be supported.
How it Works
What "Swarm" does is it uses only one in-game computer and uses the HTTP api to communicate with the outside hosted API.
The outside API coordinates with various "nodes", or workers. These workers are used as "Load Balancers" they balance the load based upon CPU usage. Say you have an intense CPU use on one of the machines, the API would automatically forward all traffic to the lesser loaded machine. These nodes would do whatever lua said they'd do, then on any sort of rednet calls they would be transparently passed back over the API to the in game server. Cool, huh?
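To give a rough idea of the balancing half, here's a minimal node.js sketch. Every name in it is made up for illustration (the real swarmd API differs): workers report a normalized CPU load, and the API just forwards to whichever one reported the least.

```javascript
const os = require('os');

// On each worker: report a normalized 1-minute load average
// (0 = idle, 1 = every core busy) to the API periodically.
function currentLoad() {
  return os.loadavg()[0] / os.cpus().length;
}

// On the API: keep the last reported load per worker and
// forward each incoming request to the least loaded one.
const workers = [
  { id: 'node-1', load: 0.82 },
  { id: 'node-2', load: 0.15 },
  { id: 'node-3', load: 0.47 },
];

function pickWorker(list) {
  return list.reduce((best, w) => (w.load < best.load ? w : best));
}

console.log(currentLoad());         // this machine's load
console.log(pickWorker(workers).id); // -> "node-2"
```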
Questions?
Slideshow on Concept
Developers
Edited on 18 April 2016 - 08:18 PM
429 posts
Posted 03 July 2015 - 04:28 PM
Swarm
a clustered load balancing system for cc servers
~
Ever had a program that crunches numbers for people all day? Hit the limit of how slow CC can be? Is someone attacking your CC computer, making your server slow because of shared resources? I have a solution: Swarm. Swarm is a direct hardware platform for CC servers. It consists of 3 parts:
- swarmd - node.js API used to register/communicate between nodes
- swarm-node - node.js worker used to communicate with the API (unpushed to git as of right now)
- swarm-bridge - bridging controller, used from CC Comp to API to transparently serve requests.
- swarmc - a C emulator for CC programs. (Performance is amazing in C)
This project is not complete; however, in only two days of development it has already gotten quite mature. The method of sharing resources between nodes is still being debated. Currently they will statically share files with one another; aside from that, memory could also be supported.
How it Works
What "Swarm" does is it uses only one in-game computer and uses the HTTP api to communicate with the outside hosted API.
The outside API cordinates with various "nodes", or workers. These workers are used as "Load Balancers" they balance the load based upon CPU usage. Say you have an intense CPU use on one of the machines, the API would automatically forward all traffic to the lesser loaded machine. These nodes would do whatever lua said they'd do, then on any sort of rednet calls they would be transparently passed back over the API to the in game server. Cool, huh? It's important to note that all traffic is encrypted using challenge-auth at first, then when it get's to the node level it's encrypted with 3072 bit keys w/ randomly generated 128 len passwords! Security is builtin from the top down, ssl cert support is also built in (just make sure you have it signed).
Questions?
Slideshow on Concept
<coming soon>
Noooooooooo not yet, whyyy have you done this? Now I'm pressured into finishing it!
Also, that's 4 parts.
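For anyone wondering what the challenge-auth bit in that quote means in practice, here's a bare-bones node.js sketch. It's purely illustrative; the shared secret, function names, and the real swarmd handshake are all placeholders. The idea is that the API hands a node a random challenge, and the node proves it knows the secret without the secret ever crossing the wire.

```javascript
const crypto = require('crypto');

// Placeholder: in reality each node would have its own secret.
const sharedSecret = 'preshared-node-secret';

// API side: hand the connecting node a random challenge.
function makeChallenge() {
  return crypto.randomBytes(32).toString('hex');
}

// Node side: answer with HMAC(secret, challenge).
function answerChallenge(challenge) {
  return crypto.createHmac('sha256', sharedSecret)
    .update(challenge)
    .digest('hex');
}

// API side: recompute the HMAC and compare in constant time.
function verify(challenge, response) {
  const expected = crypto.createHmac('sha256', sharedSecret)
    .update(challenge)
    .digest('hex');
  return crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(response));
}

const challenge = makeChallenge();
console.log(verify(challenge, answerChallenge(challenge))); // -> true
```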
Edited on 03 July 2015 - 02:29 PM
389 posts
Posted 03 July 2015 - 04:29 PM
This… is so awesome, so awesome.
124 posts
Location
Seattle, WA
Posted 03 July 2015 - 04:41 PM
This… is so awesome, so awesome.
Thanks! I'm hoping people will use it when it's finished; a lot of work is being put into making it secure and easy to set up :)
124 posts
Location
Seattle, WA
Posted 04 July 2015 - 12:16 AM
Also, for anyone interested in the source, I just published the worker source:
https://github.com/jaredallard/swarm-node (I'll migrate it to an org sometime soon)
389 posts
Posted 04 July 2015 - 08:01 AM
I'm still really stunned by how awesome this project is. Would it be possible/efficient/functional enough to have a load balancing system strictly between in-game computers?
46 posts
Posted 04 July 2015 - 11:23 AM
I'm still really stunned by how awesome this project is. Would it be possible/efficient/functional enough to have a load balancing system strictly between in-game computers?
To my knowledge this wouldn't work, as ComputerCraft is single-threaded. I have previously tried this with a program that generates prime numbers up to 40,000 and distributes the work between two CC computers; the other computer just hangs until the first computer has got the first 20,000 primes. This has no benefit and can be slower.
Edited on 04 July 2015 - 09:24 AM
389 posts
Posted 04 July 2015 - 02:28 PM
I'm still really stunned by how awesome this project is. Would it be possible/efficient/functional enough to have a load balancing system strictly between in-game computers?
To my knowledge this wouldn't work, as ComputerCraft is single-threaded. I have previously tried this with a program that generates prime numbers up to 40,000 and distributes the work between two CC computers; the other computer just hangs until the first computer has got the first 20,000 primes. This has no benefit and can be slower.
Yeah, I was expecting something to be mentioned about threading, but surely one in-game computer might run slower than another if it is executing a while loop rapidly, or are processes sequential?
46 posts
Posted 04 July 2015 - 02:50 PM
I'm still really stunned by how awesome this project is. Would it be possible/efficient/functional enough to have a load balancing system strictly between in-game computers?
To my knowledge this wouldn't work, as ComputerCraft is single-threaded. I have previously tried this with a program that generates prime numbers up to 40,000 and distributes the work between two CC computers; the other computer just hangs until the first computer has got the first 20,000 primes. This has no benefit and can be slower.
Yeah, I was expecting something to be mentioned about threading, but surely one in-game computer might run slower than another if it is executing a while loop rapidly, or are processes sequential?
Yeah, I think that's why you get the yielding error, to stop that from happening. But it's easily bypassed, so it seems like an issue.
124 posts
Location
Seattle, WA
Posted 04 July 2015 - 06:11 PM
To my knowledge this wouldn't work, as ComputerCraft is single-threaded. I have previously tried this with a program that generates prime numbers up to 40,000 and distributes the work between two CC computers; the other computer just hangs until the first computer has got the first 20,000 primes. This has no benefit and can be slower.
That's the main reason I introduced this: independent "threading". Hence I have no plans for an in-game balancer.
46 posts
Posted 04 July 2015 - 10:30 PM
To my knowledge this wouldn't work, as ComputerCraft is single-threaded. I have previously tried this with a program that generates prime numbers up to 40,000 and distributes the work between two CC computers; the other computer just hangs until the first computer has got the first 20,000 primes. This has no benefit and can be slower.
That's the main reason I introduced this: independent "threading". Hence I have no plans for an in-game balancer.
This is what I was considering doing to speed up more computationally intensive applications, and this seems to be an implementation with a lot of potential.
124 posts
Location
Seattle, WA
Posted 06 November 2015 - 06:30 PM
Hello everyone! I recently got a job and have had a truly hard time getting back into the FOSS movement and working on code. However, I am proud to announce the general basis for the worker emulator. It's currently semi-functional and can emulate CraftOS in a terminal! I went from C to JS. This isn't a big change, but it lets me write the code better. Lua is compiled to JS using Emscripten, then run as a VM on top of the JS stack.
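The core trick is pretty small: lua.vm.js is the Lua interpreter compiled to JS by Emscripten, so the worker just feeds it Lua source. Something along these lines, though the exact module shape is from memory (check the lua.vm.js README) and this is not swarmc's actual code:

```javascript
// lua.vm.js = the Lua interpreter compiled to JS via Emscripten.
// NOTE: the export shape below is an assumption -- see the lua.vm.js README.
const { Lua } = require('lua.vm.js');

// One Lua state per emulated computer.
const state = new Lua.State();

// Run Lua source inside the JS process. A real CraftOS environment
// (term, fs, os, ...) would be stubbed into this state before running
// user programs; here we just execute a snippet.
state.execute(`
  local x = 2 + 2
  print("hello from emulated Lua: 2 + 2 = " .. x)
`);
```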
Here it is in action!
I already built the worker-to-"balancer" communication system, so this will be in alpha soon! However, I plan to integrate redis into this stack so that all balancers can share the same Lua stack! Look forward to some cool stuff coming out soon! :)
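The redis idea, roughly: every balancer publishes its state changes and subscribes to everyone else's, so they all converge on one shared view. A hypothetical sketch using the node_redis client's v3-style callback API (channel name and payload are made up):

```javascript
const redis = require('redis');

const pub = redis.createClient();
const sub = redis.createClient();

// Every balancer listens for shared-state updates...
sub.subscribe('swarm:state');
sub.on('message', (channel, msg) => {
  const update = JSON.parse(msg);
  console.log('applying shared state update:', update);
});

// ...and broadcasts its own, so all balancers see one Lua stack.
pub.publish('swarm:state', JSON.stringify({
  computer: 12,
  event: 'fs_write',
  path: '/startup',
}));
```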
389 posts
Posted 19 November 2015 - 05:00 AM
Looks great!
124 posts
Location
Seattle, WA
Posted 13 February 2016 - 06:50 AM
Swarm is on hiatus. I haven't seen much interest, and it requires fairly advanced sysadmin knowledge in places. I may pick it up again if anyone is interested in using it. Feel free to ping me on Twitter, or just about anywhere else.
453 posts
Location
Holland
Posted 13 February 2016 - 12:09 PM
This is really cool!
124 posts
Location
Seattle, WA
Posted 06 April 2016 - 08:55 AM
So, I've dropped pretty much all my other projects on here. But I haven't dropped this one, as I don't really need CC to work on it. swarmc currently seamlessly emulates ComputerCraft's CraftOS!
swarm-node & swarmd are pretty garbage and need to be redesigned.
I'll be working on this as I have time, in between devops & microservice development for my startup!
130 posts
Location
Here
Posted 06 April 2016 - 10:24 PM
FINALLY! I'VE BEEN WAITING FOR THIS FOREVER!! PLEASE DON'T GIVE UP ON IT!!!! <3 <3 <3
124 posts
Location
Seattle, WA
Posted 14 April 2016 - 09:27 AM
More updates! This is just about the only thing I'm working on while on vacation in Oregon for Spring Break, so lots is being done. swarmc has been converted to ES6 (better in the long run) and uses its own fork of lua.vm.js to isolate any issues that lua.vm.js has. swarmd is now set up as a messaging/job queue to allow real-world scaling (message/job queues are used by companies like Twitter, Facebook, and Google, and by just about anything that needs high availability under load). This will make it possible to run multiple workers and, in theory, scale without limit, though to reduce complexity the gateway will not be scalable.
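If you're wondering what a message/job queue actually buys you, here's the pattern in miniature, sketched with redis lists. Names and payloads are invented, not swarmd's real protocol; the point is that the gateway pushes jobs and any number of workers block-pop them, so you scale by adding workers:

```javascript
const redis = require('redis');

// Gateway side: enqueue a job for any free worker to pick up.
const gateway = redis.createClient();
gateway.lpush('swarm:jobs', JSON.stringify({
  task: 'runLua',
  src: 'return 1 + 1',
}));

// Worker side: block until a job arrives (timeout 0 = wait forever),
// handle it, then go back to waiting. Run as many of these as you like.
const worker = redis.createClient();
function waitForJob() {
  worker.brpop('swarm:jobs', 0, (err, reply) => {
    if (err) throw err;
    const [, job] = reply; // reply = [queueName, payload]
    console.log('got job:', JSON.parse(job));
    waitForJob();
  });
}
waitForJob();
```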
In layman's terms: this project is maturing. The workers will use docker to make setup even simpler, and the router is being worked on.