This is a read-only snapshot of the ComputerCraft forums, taken in April 2020.

[Proof of Concept] - High Precision Timer

Started by Antelux, 09 April 2016 - 06:55 PM
Antelux #1
Posted 09 April 2016 - 08:55 PM
So, I've always wondered whether timers could be made to run at a finer resolution than the default 0.05 seconds they're limited to.

After a bit of testing, I've created a program that should allow for much, much higher precision for timers.

Basically, it works by doing as many addition operations as it can within a second, automatically yielding in the same loop if necessary. The resulting count can then be used as a base for further timers.

For example, if it can do 2000 loops in one second, then 1000 should be half a second, and so on.
Except, now that we have such a large number, we can have a "preciseness" of 1 / amount.
In this case, doing 2000 loops gets you a preciseness of 0.0005, as opposed to 0.05, the default value.
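
Here's roughly the idea as a simplified sketch (not the actual pastebin code, which also handles details like yielding partway through longer waits):

local function calibrate()
    -- count how many additions fit into one second
    local count = 0
    local finish = os.clock() + 1
    while os.clock() < finish do
        count = count + 1
    end
    return count -- e.g. ~2000 means each iteration takes roughly 0.0005 seconds
end

local function wait(base, seconds)
    -- busy-wait by repeating similar work to the calibration loop
    -- (in practice the per-iteration work should match the calibration loop
    -- as closely as possible, and long waits should yield periodically)
    local n = 0
    for i = 1, base * seconds do
        n = n + 1
    end
end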

Of course, since this type of timer is based solely on the speed of the CPU, there's a function in the API that will automatically adjust the base number used based on the margin of error for the time passed. Though, just know that this will only work for timers that use a number which is a multiple of 0.05.
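
I won't reproduce the real implementation here, but the adjustment boils down to something like this sketch (building on the one above):

local function adjust(base, target)
    -- target must be a multiple of 0.05, since os.clock() itself only
    -- advances in 0.05-second steps
    local before = os.clock()
    wait(base, target)              -- wait() from the sketch above
    local actual = os.clock() - before
    return base * (target / actual) -- rescale the base by the measured error
end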

Now, how is this useful? Well, with this, you can have an FPS of 60 or more. Well, technically.
You see, even though your program may be running at an FPS of 60, from what I know, the screen will always be capped at 20 FPS. Do prove me wrong though, if you like. I haven't done extensive testing on this.

Another thing that should be noted is that this works the way sleep would: it waits the requested time, and doesn't queue any events. So, that kind of kills some functionality.

All in all, however, I found this to be rather interesting, and I made it simply as a proof of concept. You can find the code here (QY5cjxXL) if you want to mess around with it.

To use it, you simply paste the code at the top of your program.
After that, there are two functions you'll use: HPT.startTimer(number) and HPT.adjustTimer(number).

It's highly recommended that you do something like

HPT.adjustTimer(3)

at the beginning of your code, before you start to use the timer.
This line fine-tunes the timer to make it more accurate.

After that, you can call HPT.startTimer() to use the API.
If you want a specific FPS, you can just do HPT.startTimer(1 / FPS).
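
For example, a simple 60 FPS loop could look like this (drawFrame here is just a stand-in for your own drawing code):

local FPS = 60
HPT.adjustTimer(3)            -- fine-tune the base before relying on it

while true do
    drawFrame()               -- placeholder for your own drawing routine
    HPT.startTimer(1 / FPS)   -- waits roughly 1/60th of a second, like sleep
end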
Edited on 12 April 2016 - 12:59 PM
Bomb Bloke #2
Posted 10 April 2016 - 03:47 PM
This is the sort of thing that could theoretically be useful with the likes of Note, though I have some misgivings.

Note is an NBS player, NBS files being MIDIs which have been converted to a format intended for playback via note blocks. The original idea was to use world editors to place lengthy redstone tracts to play the files back, but ComputerCraft obviously allows such structures to be packed down significantly.

Minecraft registers redstone state changes ten times a second, but the tempo of a given MIDI file typically won't be a factor of ten and so can't be played correctly. ComputerCraft can interact with note blocks as peripherals some arbitrary number of times per second; however, since the timer system only allows measurements within a twentieth of a second, there's still some degree of error relative to the original tunes.

I've managed to get it pretty accurate by going so far as to check os.clock() when resuming and factoring any drift into the duration of the timer set just prior to the next yield, and to my ears I can't even tell it's off-beat with songs I know it must be. Generally the playback duration of a tune will be accurate to within two seconds (assuming a solid server TPS of 20), so it's pretty good as-is, but if I were to improve on my timing then your sort of technique is the way I'd need to go.
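
In rough terms, that compensation looks something like the sketch below (not the actual code in Note; the names are just for illustration):

local due = os.clock()                       -- when the next note is due

local function sleepUntilNext(interval)      -- interval: ideal seconds between notes
    due = due + interval
    local remaining = due - os.clock()       -- absorbs however late we actually woke up
    if remaining > 0 then
        local id = os.startTimer(remaining)  -- still limited to 0.05-second granularity
        repeat
            local _, timer = os.pullEvent("timer")
        until timer == id
    end
end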

I see a couple of problems, though. The first one is that it really doesn't sound server-friendly. If two computers try to use the technique at once (or even if one system uses it and other systems try to rely on 0.05s timers), do you believe there'd be negative effects?

The other is that in my personal tests of seeing how many reps I can get a "for" loop to do within a few seconds, I seldom got the same result twice in a row. I'm not sure how accurate the technique can be even without any other sources of CPU load to speak of. Any recommendations for measuring it at my end?

In any case, even if I'm not sure I'll make use of it (certainly I can't use the actual API you've built around the idea if it interferes with other events), mess around with it I shall, when I find the time.

You see, even though your program may be running at an FPS of 60, from what I know, the screen will always be capped at 20 FPS. Do prove me wrong though, if you like. I haven't done extensive testing on this.

You can certainly request changes to the display more than twenty times per second, but it beats me as to how often Minecraft ends up transmitting packets from the server thread to the client thread in order to get the actual CC displays to change on the user's screen. To further complicate matters, I rather suspect that some builds of ComputerCraft have changed the behaviour over time.
Antelux #3
Posted 10 April 2016 - 04:35 PM
I see a couple of problems, though. The first one is that it really doesn't sound server-friendly. If two computers try to use the technique at once (or even if one system uses it and other systems try to rely on 0.05s timers), do you believe there'd be negative effects?

At first, I didn't think it would. But, after some single player testing (since I don't really have a server to test on), it turns out it sort of slows everything down.
Though, I made a quick change to the yield() function. It used to just yield right then and there rather than wait, which is why it slowed down the other computers. Not the most ideal thing to do, but now it just straight up queues and yields an empty event.
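
The new yield boils down to something like this (the event name is arbitrary; any otherwise-unused name works):

local function yield()
    os.queueEvent("hpt_yield") -- queue a dummy event...
    os.pullEvent("hpt_yield")  -- ...so control goes back to the host briefly,
                               -- then we resume as soon as the event arrives
end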

However, with that change in place, I was able to run four of them (which all gave accurate readings) and MiniatureCraft without problem, which normally does cause a bit of lag to other computers. So, I'll have to update the pastebin with the new code.

Though, I'd like to note that I suspect that programs that lag other computers will cause the timer program to become less accurate.

The other is that in my personal tests of seeing how many reps I can get a "for" loop to do within a few seconds, I seldom got the same result twice in a row. I'm not sure how accurate the technique can be even without any other sources of CPU load to speak of? Any recommendations for measuring it at my end?

Well, I had the same problem at first. But, here are two things I've done to fix the problem:
  • Make sure you're starting your measurement on a fresh timer interval (e.g., for i = 1, 2 do os.startTimer(0.05); coroutine.yield("timer") end local timer = os.time()) so that you begin when the timers are most accurate. Starting in between ticks causes miscalculations, since os.time() isn't very accurate.
  • Running the loop multiple times and grabbing the average or max can also provide a more accurate reading. Though, I find that just doing the above point and running the loop once is enough. There's a rough sketch of both ideas just below.
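
Putting the two together, the measurement could look something like this (my own rough sketch, not the pastebin code):

local function measureOnce()
    for i = 1, 2 do               -- wait out a couple of timer ticks first
        os.startTimer(0.05)
        os.pullEvent("timer")
    end
    local count, finish = 0, os.clock() + 1
    while os.clock() < finish do
        count = count + 1
    end
    return count
end

local function measureAverage(runs)
    local total = 0
    for i = 1, runs do
        total = total + measureOnce()
    end
    return total / runs
end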
You can certainly request changes to the display more than twenty times per second, but it beats me as to how often Minecraft ends up transmitting packets from the server thread to the client thread in order to get the actual CC displays to change on the user's screen. To further complicate matters, I rather suspect that some builds of ComputerCraft have changed the behaviour over time.

Well, I know that much. But when I tried to draw the screen at, say, 60 FPS, it appeared as if the screen couldn't handle it and skipped frames. It only seemed to work at 20 FPS for me. But again, I haven't done extensive testing here yet.
Edited on 10 April 2016 - 07:59 PM