Posted 06 February 2014 - 07:20 PM
Hey guys,
I made an API that controls a RedPower frame machine (yeah, I know, not the latest version…), and it works fine. What I wanted to add is the ability for the computer to resume its work after a shutdown (logging off, etc.). I can launch it to go eat whole chunks, but it stops dead in its tracks if I log off, which is inefficient. I made it remember its position just fine; the problem lies in controlling the flow, so to say. For now, I'd like to keep a shell open that lets me add new tasks to the machine, clear tasks, and check on things via a Lua shell (calling API functions), while the machine keeps running through its list of tasks in the background.
I've tried a parallel approach, using a snippet I've seen on this forum:
local function run()
    shell.run("dogoals")
end

local function runShell()
    shell.run("shell")
end

shell.run("clear")
parallel.waitForAny(run, runShell)
os.shutdown()
The dogoals program loads the API and runs the loop inside it that executes the tasks. What I encountered is that some variables are apparently not shared between the two code portions, i.e. I can modify a global variable but the parallel-ed dogoals doesn't see the change. Since I built persistence for the task list (state is saved to disk and loaded back), I could make the two portions communicate that way, but it seems very sloppy to me. Is there a better way?
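For what it's worth, the sharing problem goes away if both loops live in the same program instead of being launched through shell.run (each shell.run'd program gets its own environment, so "globals" set in one aren't visible to the other). A minimal sketch of that idea, with the task handling reduced to a placeholder (doTask and the queue shape are made up, not your actual API):

```lua
-- Sketch: both coroutines are plain functions in one program, closing
-- over the same local table, so no globals or shell.run environments
-- are involved. Runs inside ComputerCraft only.
local tasks = {}  -- shared task queue

local function worker()
    while true do
        if #tasks > 0 then
            local task = table.remove(tasks, 1)
            -- doTask(task)  -- placeholder for a call into your API
            print("working on: " .. task)
        else
            sleep(1)  -- yield so the other coroutine gets to run
        end
    end
end

local function ui()
    while true do
        write("task> ")
        local input = read()
        if input == "quit" then return end
        table.insert(tasks, input)  -- worker sees this immediately
    end
end

parallel.waitForAny(worker, ui)
```

The trade-off is that you lose the full shell in the foreground and only get this small prompt, but anything typed goes straight into the same table the worker is draining, no disk round-trip needed.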
Maybe I could just physically separate the load: one computer does the machine control, and its task list is updated via an event handler listening for rednet messages from the controlling computer.
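If it helps, the machine-side half of that two-computer setup could look roughly like this (a sketch only: the modem side, the "clear" command, and the plain-string message format are all assumptions for illustration):

```lua
-- Sketch for the machine-control computer: one coroutine updates the
-- task list from rednet messages, the other works through it.
-- Runs inside ComputerCraft only.
rednet.open("back")  -- assuming the modem is on the back

local tasks = {}

local function listen()
    while true do
        local senderId, message = rednet.receive()
        if message == "clear" then
            tasks = {}  -- made-up control command
        else
            table.insert(tasks, message)
        end
    end
end

local function work()
    while true do
        if #tasks > 0 then
            local task = table.remove(tasks, 1)
            -- execute the task via your API here
        else
            sleep(1)  -- idle until something arrives
        end
    end
end

parallel.waitForAll(listen, work)
```

The controlling computer then just rednet.send()s task strings at it, and you keep a full interactive shell on that second machine for free.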
Cheers.