This is a read-only snapshot of the ComputerCraft forums,
taken in April 2020.
Parallel Question -- Add routines to existing parallel call?
Started by surferpup, 08 February 2014 - 11:27 PM
Posted 09 February 2014 - 12:27 AM
I already use parallel.waitForAny and parallel.waitForAll. The basic pattern of the call is parallel.waitForAny(function1,function2,function3).
Is there anyway to add a function to the parallel queue after the call, so that it is in the wait queue with the original functions?
Posted 09 February 2014 - 12:42 AM
Untested, but cool to look through. Thanks for the response.
Posted 09 February 2014 - 12:48 AM
> Untested, but cool to look through. Thanks for the response.
You've actually inspired me to take a look into it again and perform some code cleanup and such.
Posted 09 February 2014 - 12:52 AM
1. In looking at your code, I noticed for the first time (derp) that coroutine.yield essentially is the equivalent of os.pullEventRaw(). Am I wrong?
2. You use eventData to store { coroutine.yield() }. You later do a table.remove(eventData,1). Is that like removing an event from the queue? If not, what is that doing?
Edited on 08 February 2014 - 11:53 PM
Posted 09 February 2014 - 01:02 AM
> 1. In looking at your code, I noticed for the first time (derp) that coroutine.yield essentially is the equivalent of os.pullEventRaw(). Am I wrong?
coroutine.yield is Java-side.
os.pullEventRaw and os.pullEvent are defined in `bios.lua` as the following
function os.pullEventRaw( sFilter )
return coroutine.yield( sFilter )
end
function os.pullEvent( sFilter )
local eventData = { os.pullEventRaw( sFilter ) }
if eventData[1] == "terminate" then
error( "Terminated", 0 )
end
return unpack( eventData )
end
> 2. You use eventData to store { coroutine.yield() }. You later do a table.remove(eventData,1). Is that like removing an event from the queue? If not, what is that doing?
No, it removes the "targeted_event_#" prefix. Basically that was a feature I added where you could do this:
os.queueEvent("targeted_event_2", "mouse_click", 1, 1)
and it would give the mouse_click event only to the coroutine with the ID of 2.
> BTW – your code is a great tutorial in and of itself on coroutines.
Thanks :) Funny that you should say that, writing that was how I learnt coroutines :)
EDIT: Oh this and CCKeyboard; I had a play around with coroutines for that.
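For illustration, a rough sketch of how a manager might route such a targeted event (the routines table and dispatch helper here are made up for the example, not taken from the actual code):
local function dispatch(routines, eventData)
  -- a "targeted_event_<id>" prefix means only one coroutine should see this event
  local target = string.match(eventData[1] or "", "^targeted_event_(%d+)$")
  if target then
    table.remove(eventData, 1) -- strip the prefix, leaving the real event behind it
    coroutine.resume(routines[tonumber(target)], unpack(eventData))
  else
    for _, co in pairs(routines) do -- normal events go to every routine
      coroutine.resume(co, unpack(eventData))
    end
  end
end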
Edited on 09 February 2014 - 12:08 AM
Posted 09 February 2014 - 01:08 AM
I am not sure I completely grasp java side vs bios.lua side. I get that CCraft Lua is running within Java (because Minecraft Client is running in Java). My point on coroutine.yield() is that it must return information which os.pullEvent() is expecting. I am doing a lot of thinking on events and coroutines at the moment as I contemplate the event portion of The Complete Monitor Button and Control Tutorial. I have some ideas I am trying to flesh out. I will work through your code some and see if it helps.
Posted 09 February 2014 - 01:17 AM
> -snip-
Okay, so coroutine.yield is the way to have a Lua program yield; as such, this is implemented in LuaJ (the Java implementation of Lua). Since coroutine.yield does not process events, merely supplies them, dan implemented os.pullEvent, which also processes the events and searches for the terminate event, and upon finding it will terminate your script (unless you've pcall'd the function). I then assume that, since coroutines are a little more difficult for new people to understand, he added os.pullEventRaw as an easier way to understand the difference between it and os.pullEvent. However, it's always easier and better (fewer functions on the call stack) to just use coroutine.yield; it's generally faster too.
Posted 09 February 2014 - 01:22 AM
> However, it's always easier and better (fewer functions on the call stack) to just use coroutine.yield; it's generally faster too.
So what I am hearing is that coroutine.yield can be used in place of os.pullEvent as long as I am willing to do the processing of the event? I am definitely going to have to play with this a bit.
Just tried that. It works! Wow. Learned something new here.
Posted 09 February 2014 - 01:23 AM
> So what I am hearing is that coroutine.yield can be used in place of os.pullEvent as long as I am willing to do the processing of the event? I am definitely going to have to play with this a bit.
You can use coroutine.yield in place of os.pullEvent and os.pullEventRaw anywhere; os.pullEventRaw and coroutine.yield are identical, whereas os.pullEvent checks for a terminate event (commonly fired by CTRL+T) and when discovered ends your program. That's all the difference there is.
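A minimal sketch of that substitution, doing by hand what os.pullEvent normally does (handling terminate yourself):
while true do
  local event, key = coroutine.yield() -- same as os.pullEventRaw()
  if event == "terminate" then
    error("Terminated", 0) -- what os.pullEvent would have done for us
  elseif event == "key" then
    print("key code: "..tostring(key))
    break
  end
end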
Posted 09 February 2014 - 01:35 AM
snap … crackle … zrzzzt – the sound of my mind being slightly blown. Reading this code is opening up a bunch of new possibilities to me. Thanks again.
Posted 09 February 2014 - 01:36 AM
you're welcome :)
Posted 09 February 2014 - 02:00 AM
The reason I had been pondering this issue at all is that a new member had posed the scenario where they have a monitor button which, when pressed, causes a redstone output to pulse at 1-second intervals. When the button is pressed again, the pulses stop. In addition, a user might have other buttons which do other things.
So, I got to thinking about the fact that I always need to be listening for monitor events. I also need to have activities going on with various redstone outputs. Who knows, I may even need to send rednet messages off and even receive some. All of this gets a little overwhelming at first.
That's when I began thinking of pairing the pulse function with a listening function in parallel, and have my listener set variables which would start/stop my pulse function. Of course the listener would also respond to requests from the routine handling the button toggles.
That got me thinking of registering all of these interactive functions so that I could control a variable number of them. Then I got lost.
I think I am beginning to see a way through it all as I study your code.
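A minimal sketch of that pairing (the monitor side, redstone side, and timings are placeholders, not anything from the thread):
local pulsing = false

-- listener: toggle the flag whenever the monitor is touched
local function listener()
  while true do
    os.pullEvent("monitor_touch")
    pulsing = not pulsing
  end
end

-- pulser: while the flag is set, pulse the back output at 1-second intervals
local function pulser()
  while true do
    if pulsing then
      redstone.setOutput("back", true)
      sleep(1)
      redstone.setOutput("back", false)
    end
    sleep(1)
  end
end

parallel.waitForAny(listener, pulser)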
Edited on 09 February 2014 - 01:01 AM
Posted 09 February 2014 - 02:19 AM
Theoretically it could be done without the use of coroutines, but in this situation I do believe that coroutines simplify the matter greatly.
I'm glad that my code was able to be of use.
Posted 09 February 2014 - 03:19 AM
> I'm glad that my code was able to be of use.
If I am reading everything correctly (and from experimenting a bit), there is no such thing as parallel events. The parallel API is simply a coroutine manager. Say I have three routines: A,B, and C. If any code is executing presently in A then B and C must be either dead or in a suspended state, waiting for A to yield. I am still sorting out your PRIORITY events logic, but essentially, you wrote your own version of parallel.
Edited on 09 February 2014 - 02:20 AM
Posted 09 February 2014 - 03:29 AM
> If I am reading everything correctly (and from experimenting a bit), there is no such thing as parallel events. The parallel API is simply a coroutine manager. Say I have three routines: A,B, and C. If any code is executing presently in A then B and C must be either dead or in a suspended state, waiting for A to yield. I am still sorting out your PRIORITY events logic, but essentially, you wrote your own version of parallel.
Precisely, the Parallel API is simply a Coroutine management system. When one is running, all the others are suspended or dead (normally suspended, though, unless their runtime has completed, in which case they're dead).
The Parallel API gathers an event and then gives that information to each coroutine one at a time. Note that this is why you must yield: to allow other coroutines to run, since only one can be running at any given point in time (unlike in other programming where you'd use a thread, which in most cases can run concurrently with other threads). This is why each function used in the Parallel call gets the same events.
What I'm making use of with the priority events is this: assume we have the following event queue
modem_message
rednet_message
turtle_inventory
mouse_click
key
char
key
key
key
char
timer
alarm
Each of those would go to the routines one at a time, starting at the top and working down… However, what I've done by implementing a priority event is to be able to queue an event that skips this line. This means (taking from the above queue) a routine would be resumed once for the modem_message and then resumed again (once all the others have been supplied the modem_message) with whatever you've put in the priority queue.
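To make that concrete, a stripped-down version of the kind of loop the Parallel API runs might look like this (simplified: no filters, no priority queue, and it never stops on its own):
local function run(...)
  local cos = {}
  for _, fn in ipairs({ ... }) do
    cos[#cos + 1] = coroutine.create(fn)
  end
  local eventData = {}
  while true do
    -- hand the current event to every routine, one at a time
    for _, co in ipairs(cos) do
      if coroutine.status(co) ~= "dead" then
        coroutine.resume(co, unpack(eventData))
      end
    end
    -- only then wait for the next event and do another pass
    eventData = { coroutine.yield() }
  end
end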
Posted 09 February 2014 - 03:44 AM
Theoretically, if you knew a message wasn't for a routine, you could just skip sending that message to the routine. For example, if you knew routine A received only messages of type 1 and 2, you could simply bypass resuming A if messages other than type 1 and 2 came by in the queue. This would be especially true if the message was one you generated specifically for routine 5 (like a terminate event specifically for routine 5: {"R_ID5","terminate"}).
I am a bit hazy still on your priority queue – how do you decide what gets placed in the priority queue? It almost seems as if you are determining which routine is a priority routine. Are there priority events – that is events which receive special attention?
I will have to try and run your code and see what it is doing. I am very fuzzy on the priority queue.
Posted 09 February 2014 - 03:52 AM
> Theoretically, if you knew a message wasn't for a routine, you could just skip sending that message to the routine. For example, if you knew routine A received only messages of type 1 and 2, you could simply bypass resuming A if messages other than type 1 and 2 came by in the queue.
Yeah, that was the only thing I had a lapse in judgement on and need to fix. Currently with my script, if you supply a filter on your yield call it will ignore it and resume it with the next message. Take a look at the Parallel API for the implementation of that, but basically you'd gather what they supply as the filter and, when giving out events, make sure the routine actually wants that one before resuming it.
> I am a bit hazy still on your priority queue – how do you decide what gets placed in the priority queue? It almost seems as if you are determining which routine is a priority routine. Are there priority events – that is events which receive special attention?
Function call… Line #133: function queuePriorityTargetedEvent( _id, … )
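For reference, the filter check described above boils down to something like this (names are illustrative; this is roughly what the stock Parallel API does rather than the CMS code itself):
-- filters[id] holds whatever the routine last passed to coroutine.yield;
-- only resume it if it has no filter, asked for this event, or the event is terminate
if filters[id] == nil or filters[id] == eventData[1] or eventData[1] == "terminate" then
  local ok, filter = coroutine.resume(routines[id], unpack(eventData))
  filters[id] = filter -- remember the new filter the routine asked for
end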
Posted 09 February 2014 - 04:05 AM
Some thoughts:
There is one control routine and n number of target routines. There is a table of routines. If a target routine is aware of and has access to its own table member, where such things as state, event filter, etc. are stored, the target routine could modify these parameters on the fly and the control routine would be aware of the changes.
You could actually have the control routine send a request to pause to the target routine, which would then do what it needs to do to get into a paused state and yield. Then the control routine would cease sending events to the now-paused target routine until the control routine decided the target routine needed to wake up. This would take care of the new member's scenario which I brought up earlier regarding the repeating redstone pulse being turned on or off by a monitor button.
I agree that with the filter you would have a replacement for the parallel API, only with a few added features and the ability to register new "parallel" functions on the fly. With a little bit of work, the dead routines could be cleaned up or even restarted if necessary (essentially nil the dead one, re-create a new coroutine for it, and restart it with parameters).
Powerful stuff, indeed.
Edited on 09 February 2014 - 03:06 AM
Posted 09 February 2014 - 04:26 AM
> There is one control routine and n number of target routines. There is a table of routines. If a target routine is aware of and has access to its own table member, where such things as state, event filter, etc. are stored, the target routine could modify these parameters on the fly and the control routine would be aware of the changes.
Oh, I must have accidentally removed that at some point; I did an update months later and clearly forgot what some code did. I had it possible for a routine to query the CMS, and if no ID was provided then it assumed the currently running routine, meaning that a routine could know everything about itself… It would never get the table element, as that would mean it could modify important information and cause the CMS to crash (since tables are pass-by-reference, not pass-by-value).
> You could actually have the control routine send a request to pause to the target routine, which would then do what it needs to do to get into a paused state and yield. Then the control routine would cease sending events to the now-paused target routine until the control routine decided the target routine needed to wake up. This would take care of the new member's scenario which I brought up earlier regarding the repeating redstone pulse being turned on or off by a monitor button.
I was contemplating adding signals in for pausing, resuming, and killing routines (for example SIG_KILL); that way a routine could deal with a change in its lifecycle appropriately (but never be able to stop it).
> I agree that with the filter you would have a replacement for the parallel API, only with a few added features and the ability to register new "parallel" functions on the fly.
It was meant to be a replacement, but due to the lapse in judgement I do have that one major bug which only causes a problem in certain cases, for example this:
print("press any key")
os.pullEvent('key')
> With a little bit of work, the dead routines could be cleaned up or even restarted if necessary (essentially nil the dead one, re-create a new coroutine for it, and restart it with parameters).
Dead routines would eventually be cleaned up by the Java garbage collector, but I will add a dereference in there for dead routines to allow for a 'quicker' cleanup. As for restarting, well, a dead routine cannot be restarted (unless I stored the function pointer to re-create it).
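A rough sketch of what storing that function pointer could look like (the func and co field names are made up for the example, not the actual CMS structure):
-- keep the original function alongside its coroutine so a dead routine can be rebuilt
local function restartRoutine(routine, ...)
  if coroutine.status(routine.co) == "dead" then
    routine.co = coroutine.create(routine.func)
    return coroutine.resume(routine.co, ...) -- start it again with fresh arguments
  end
end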
Posted 10 February 2014 - 10:09 AM
> -snip-
Okay, so after reading the code today, the things you asked for it to have it did actually have; I hadn't removed them accidentally, I was just missing them with the quick skim I was doing…
I have written an update (found on pastebin and github); the code is a little easier to read, and there's only one major bug that I've come across that I'm still working on; it's a difficult one to source.
Posted 10 February 2014 - 01:08 PM
So for pausing, you would send a SIG_PAUSE id in to the function (which would not be sent to the others). The routine would need to catch it and do whatever it needs to do to be put into a paused state (do I leave redstone outputs ON or turn them OFF, etc.), and then (here is what is unclear) either yield back normally and let the pauseRoutine finish processing the yield (set isPaused to true), or have the routine set its own isPaused to true.
Routine:
Event Loop
while true do
local event = { coroutine.yield() }
if event[1] == "SIG_PAUSE_"..myID then
-- do what I need to do to pause
elseif event[1] == "somethingElse" then
-- blah
end
end
CMS –
function pauseRoutine(id)
-- test if not dead
coroutine.resume(routine[id], "PAUSE_"..id)
routine[id].isPaused = true
end
That is what I tried in my little test.
I am also still unclear about how you are handling the event filter for each routine. One of the possibilities is to have a routine set its own filter dynamically. I thought of the idea of using an eventType table and iterating through; however, one could use a space-separated string of events and use string.find – it might be faster.
If my routine typically listens for rednet, key and monitor_touch events, it would set its eventFilter to "rednet key monitor_touch" – knowing full well that it also has to listen for "SIG_KILL" and "SIG_PAUSE" events. If it no longer needs key events, it would reset its filter. This would potentially speed up the CMS, because the routine would not be resumed if the event is not for the routine.
Following each yield in the routine, there would have to be a KILL/PAUSE check before going on to process the rest of the event loop of that yield. It would require a coding style change (not just any routine would work, only routines that implemented SAFE_KILL/PAUSE features). These could be templates, and they could be turned into functions within the routine (PAUSE CHECK, KILL CHECK).
This is all conceptual. And I will probably keep thinking that way until I get more dirty in your code. I am very excited about its potential. The ability to add or remove "threads" at will is pretty powerful.
One final idea – instead of "waitForAny,waitForAll" as in the current parallel, you could also have a "waitForSpecific" where if a specific routine terminates then the program terminates. It could also be a value stored per routine, _causesTermination. You just do a check to see if any of the _causesTermination==true routines are dead, and if they are, end the program. A waitForAll function would set all routines[n]._causesTermination to false, whereas a waitForAny would set them all to true. Using a waitForAll function and then following it with a setting of _causesTermination on specific routines would afford a ridiculous amount of flexibility.
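As a purely illustrative sketch of that check (assuming each routines entry holds its coroutine in a co field alongside the flag):
-- after dispatching an event, stop the manager if any flagged routine has died
local function shouldStop(routines)
  for _, r in ipairs(routines) do
    if r._causesTermination and coroutine.status(r.co) == "dead" then
      return true
    end
  end
  return false
end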
This is close to being a full-on replacement for parallel, and much more useful. Great work!
Edited on 10 February 2014 - 12:43 PM
Posted 10 February 2014 - 06:18 PM
> So for pausing, you would send a SIG_PAUSE id in to the function (which would not be sent to the others). The routine would need to catch it and do whatever it needs to do to be put into a paused state (do I leave redstone outputs ON or turn them OFF, etc.), and then (here is what is unclear) either yield back normally and let the pauseRoutine finish processing the yield (set isPaused to true), or have the routine set its own isPaused to true.
Oops, I knew there was something I was forgetting. It's implemented now. The routine is definitely not in charge of setting its own isPaused; the routine should never be able to pull life-cycle control away from the CMS. It is possible that you can get a routine with getRoutine and be able to resume it manually, but that's the most control that I'll be giving to developers for routine life-cycles, otherwise security concerns start to come into play; imagine using this in an OS and then a program is suddenly able to perform cms.getRoutine(0) and take full control of the life-cycle of the OS's master routine!
> coroutine.resume(routine[id], "PAUSE_"..id)
There's no point adding the id in there; the only routine to get the event is the one that's being resumed, no need to add the unneeded overhead of custom strings.
> I am also still unclear about how you are handling the event filter for each routine. One of the possibilities is to have a routine set its own filter dynamically. I thought of the idea of using an eventType table and iterating through; however, one could use a space-separated string of events and use string.find – it might be faster.
When you yield, whatever you supply as an argument is returned to the coroutine.resume call…
local function tester()
os.pullEvent('event_name')
end
local co = coroutine.create(tester)
print(coroutine.resume(co)) --# this will output `true event_name` (resume returns success first, then the yielded filter)
If you look at lines 190/193 the filter is stored in a table under the routine id… 192 is where the checking is performed to make sure that the routine is ready to be resumed… Setting multiple filters gets a little more difficult, but not impossible; I'd probably do the following
Line 192+
elseif count(filters[id]) == 0 or filters[id][eventData[1]] or eventData[1] == "terminate" then
filters[id] = {} --# clear all the filters, the routine was just resumed
local f = { resume(routine, unpack(eventData)) } --# gather all the filters
--# add the filters to the table for easy checking
for _,v in ipairs(f) do
filters[id][v] = true
end
end
definition of `count`
local function count(t)
local c = 0
for _ in pairs(t) do
c = c + 1
end
return c
end
the above code should allow you to do the following (untested)
os.pullEvent("rednet_message", "key", "monitor_touch")
which would resume only once one of those 3 events fire.
> This would potentially speed up the CMS, because the routine would not be resumed if the event is not for the routine.
It is currently only resuming ones that need to be resumed.
> One final idea – instead of "waitForAny,waitForAll" as in the current parallel, you could also have a "waitForSpecific" where if a specific routine terminates then the program terminates. It could also be a value stored per routine, _causesTermination. You just do a check to see if any of the _causesTermination==true routines are dead, and if they are, end the program. A waitForAll function would set all routines[n]._causesTermination to false, whereas a waitForAny would set them all to true. Using a waitForAll function and then following it with a setting of _causesTermination on specific routines would afford a ridiculous amount of flexibility.
Well, currently the CMS will run until there are no more routines running; I may make some changes and add other functions to mimic the Parallel API as well as this suggestion.
> This is close to being a full-on replacement for parallel, and much more useful. Great work!
Thanks.
Edited on 10 February 2014 - 10:38 PM
Posted 10 February 2014 - 10:58 PM
Trying to understand why some of your code is working, I am running tests. Why does this work?
local nextId
do
local _NEXTID = 0
nextId = function()
local temp = _NEXTID
_NEXTID = _NEXTID + 1
return temp
end
end
for i= 0,4 do
print (nextId())
end
--0
--1
--2
--3
--4
Why does it remember _NEXTID between calls? I would have expected it to resolve temp to 0 rather than continuing a pointer to _NEXTID. To me, this is magic.
Edited on 10 February 2014 - 10:05 PM
Posted 10 February 2014 - 11:47 PM
> Why does it remember _NEXTID between calls? I would have expected it to resolve temp to 0 rather than continuing a pointer to _NEXTID. To me, this is magic.
_NEXTID is incremented each time the function is called; it just starts at 0. When the function is called, a copy of the value in _NEXTID is taken and stored in temp, _NEXTID is then incremented to the next ID, and temp is returned as the ID to use… Basically the system I'm using here is like a UUID, where a new routine will never have the same ID as a previous one; if I didn't care for this I could have easily implemented the function like so
local function nextId()
return #_ROUTINES + 1
end
I have a feeling based on your question, however, that you're unaware of how a `do block` works; as such, here is a code example to try to help…
do
local _NEXTID = 4
print(_NEXTID) --# outputs 4
end
print(_NEXTID) --# outputs nil, _NEXTID doesn't exist in this scope
So basically a do block allows us to create a self-contained scope; that way the only time a localised variable can be accessed is from something within the same scope. What I'm doing in my code is defining nextId in the local scope of the program, then in the `do block` I'm assigning the function to that variable; this means that the nextId function has access to the _NEXTID variable, but nothing else outside of the `do` does…
make sense?
Side-note: thank you for asking this question, you pointed out where my major bug lies, I should have started _NEXTID at 1 not 0…
EDIT: I also implemented the waitForAll and waitForAny for you, they replace run :P
Edited on 10 February 2014 - 11:11 PM
Posted 11 February 2014 - 12:12 AM
> I have a feeling based on your question, however, that you're unaware of how a `do block` works; as such, here is a code example to try to help…
I completely understand that the do block defines a code block with its own scoping. I realize that you declare nextID outside of the block, and reference it in the block by assigning the function to it.
I was surprised that _NEXTID held its value between calls. I realize that it is scoped in the do block yet outside of the function. I also realize that nothing outside of the do block has access to _NEXTID. I just had not seen this technique before.
> make sense?
Mostly. What I am having a problem with is figuring out how I can use this. For example, I want to be able to access a variable from within a function when the variable was defined outside of the function:
Here is my example:
local function myFunc()
print("myVariable is "..tostring(myVariable or "NIL"))
end
local other
do
local myVariable = 42
other = myFunc
end
other()
--myVariable is NIL
Using the logic you described, when I execute other, I would have expected myVariable to be 42. It is NIL. Clearly assigning the function did not work the way I expected.
Side-note: thank you for asking this question, you pointed out where my major bug lies, I should have started _NEXTID at 1 not 0…
Glad I could help.
Posted 11 February 2014 - 12:23 AM
I was trying to give my co-routine function native knowledge of the routine variables (like isPaused and such)
Posted 11 February 2014 - 12:34 AM
> I completely understand that the do block defines a code block with its own scoping. I realize that you declare nextID outside of the block, and reference it in the block by assigning the function to it.
> I was surprised that _NEXTID held its value between calls. I realize that it is scoped in the do block yet outside of the function. I also realize that nothing outside of the do block has access to _NEXTID. I just had not seen this technique before.
Yeah, of course _NEXTID holds its value; the `do` isn't like a function where it's reinitialised, it's only initialised once and is cleaned up only once everything has dereferenced it. It's just like if you were to define a variable at the top of your program.
> make sense?
> Mostly. What I am having a problem with is figuring out how I can use this. For example, I want to be able to access a variable from within a function when the variable was defined outside of the function:
> Here is my example:
> local function myFunc()
>   print("myVariable is "..tostring(myVariable or "NIL"))
> end
> local other
> do
>   local myVariable = 42
>   other = myFunc
> end
> other()
> --myVariable is NIL
> Using the logic you described, when I execute other, I would have expected myVariable to be 42. It is NIL. Clearly assigning the function did not work the way I expected.
Ah, see, the problem here is that other is a pointer to the myFunc function, which has been compiled in a different scope. When it runs it checks for myVariable in its local scope, and when that is not found it then checks in the global scope; it is in neither, so it cannot find it. The easiest way to think about this is: higher-up scopes (parent) have no idea about lower/nested scopes (child) and therefore cannot access them; however, a child scope knows about any of its parent scopes and can access anything from them… If what they're looking for cannot be found in any of said 'parent' scopes then it will fall back to the global scope.
There might be some way to do it with environments (maybe) or, if we were using a language other than Lua (since last I remember they weren't in Lua), you could use, ummm, damn I've forgotten the term for a function that uses the scope it's within instead of the one it was compiled in…
making a little more sense now?
Posted 11 February 2014 - 12:37 AM
Absolutely. I suspected something similar was going on here.
Posted 11 February 2014 - 12:41 AM
> Absolutely. I suspected something similar was going on here.
A way I can immediately say for you to get it to work is to do the following:
local function myFunc(var)
print("myVariable is "..tostring(var))
end
local other
do
local myVariable = 42
other = function() myFunc(myVariable) end
end
other()
but it's not quite the same thing. EDIT: like I said, there may be some way to do it with environments, not too sure; environments are an area of Lua that I've never really had the need to get into (and most people on here also don't use), therefore I haven't really bothered learning it.
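For what it's worth, Lua 5.1 (which ComputerCraft uses) can get close to that with environments via setfenv; a minimal sketch:
local function myFunc()
  print("myVariable is "..tostring(myVariable or "NIL"))
end

-- give myFunc an environment that contains myVariable but still falls back
-- to the real globals (print, tostring, ...) for everything else
setfenv(myFunc, setmetatable({ myVariable = 42 }, { __index = _G }))

myFunc() --# myVariable is 42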
Edited on 10 February 2014 - 11:43 PM
Posted 11 February 2014 - 12:52 AM
I can see how that would work. I will give that a try. Thanks again.