Well, you could run tests like:
…
I briefly tried that, but I missed a few important things (like setting the cursor position before writing the randomized pixels)
(I had to lower the iterations to 100, otherwise I'd get "too long without yielding" errors in some tests)
My buffer API did pretty terribly: it added about a second to the total time (an average of 0.0285 seconds per iteration with it, versus 0.0175 without it).
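For reference, here's a rough sketch of the kind of test I mean (my own reconstruction with assumed details, not the original code): fill the screen with random characters for 100 iterations without yielding, then report the total and average time.

```lua
-- Rough reconstruction of the test (assumed details, not the original code):
-- fill the screen with random characters for 100 iterations, without yielding.
local w, h = term.getSize()
local iterations = 100   -- kept low so the loop finishes before the
                         -- "too long without yielding" watchdog triggers

local start = os.clock()
for i = 1, iterations do
  for y = 1, h do
    term.setCursorPos(1, y)                        -- the step I originally forgot
    local line = {}
    for x = 1, w do
      line[x] = string.char(math.random(32, 126))  -- random printable character
    end
    term.write(table.concat(line))
  end
end
local total = os.clock() - start

term.setCursorPos(1, 1)
print(("total: %.2fs, average per iteration: %.4fs"):format(total, total / iterations))
```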
But my experimental program (which I'm calling 'smooth') reduced it from 1.75 seconds total to 1.6 seconds total ;)
The program intercepts term.write calls and buffers them (with my buffer API), then draws the changes when the program it's smoothing yields.
You may have realized this means the test you provided won't display anything when run by smooth, since it never yields, which is also why the time is reduced.
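In case that's unclear, here's a very simplified sketch of the general idea (illustrative only, not the actual smooth source; the proxy table, flush, and smooth names are just for the example):

```lua
-- Simplified sketch: redirect the program to a proxy terminal that buffers
-- writes, and only draw them on the real screen when the program yields.
local real = term.current()
local buffer = {}       -- pending {x, y, text} writes
local cx, cy = 1, 1     -- cursor position as the wrapped program sees it

-- Proxy terminal: copy the real terminal's methods, then override the ones we
-- buffer (only write is buffered here; clear, blit, etc. go straight through).
local proxy = {}
for k, v in pairs(real) do proxy[k] = v end
proxy.write = function(text)
  text = tostring(text)
  buffer[#buffer + 1] = { x = cx, y = cy, text = text }
  cx = cx + #text
end
proxy.setCursorPos = function(x, y) cx, cy = x, y end
proxy.getCursorPos = function() return cx, cy end

local function flush()
  for _, op in ipairs(buffer) do
    real.setCursorPos(op.x, op.y)
    real.write(op.text)
  end
  real.setCursorPos(cx, cy)   -- keep the real cursor in sync with the proxy's
  buffer = {}
end

-- Run a program function under the proxy, flushing whenever it yields.
local function smooth(program)
  local co = coroutine.create(program)
  local event = {}
  while coroutine.status(co) ~= "dead" do
    term.redirect(proxy)
    local ok, filter = coroutine.resume(co, (table.unpack or unpack)(event))
    term.redirect(real)
    if not ok then error(filter, 0) end
    flush()   -- the program yielded (or finished), so draw its changes
    if coroutine.status(co) ~= "dead" then
      event = { os.pullEventRaw(filter) }
    end
  end
end
```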
I'd argue this is a good thing! Most programs draw their GUI, then wait for user input; this test is kind of an exception.
I'll do some tweaking to my API. It's currently only more efficient when less than half the screen is changed.
I'm using term.write instead of term.blit; I'd expect term.blit to be more efficient when more than half the screen is changed.
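Something like this is what I have in mind for the flush step (the per-line buffer layout here is hypothetical, not my current API): count how many cells changed on each row, write the sparse changes individually, and blit whole rows when most of the row changed.

```lua
-- Hypothetical per-line buffer: characters plus blit colour strings, and the
-- set of columns that changed since the last flush.
local w, h = term.getSize()
local lines = {}
for y = 1, h do
  lines[y] = {
    text  = string.rep(" ", w),
    fg    = string.rep("0", w),  -- "0" is white in blit notation
    bg    = string.rep("f", w),  -- "f" is black
    dirty = {},                  -- set of changed column numbers
  }
end

local function flushLine(y)
  local line = lines[y]
  local count = 0
  for _ in pairs(line.dirty) do count = count + 1 end
  if count == 0 then return end

  if count > w / 2 then
    -- Most of the row changed: one blit for the whole line.
    term.setCursorPos(1, y)
    term.blit(line.text, line.fg, line.bg)
  else
    -- Only a few cells changed: write just those (colour handling omitted).
    for x in pairs(line.dirty) do
      term.setCursorPos(x, y)
      term.write(line.text:sub(x, x))
    end
  end
  line.dirty = {}
end
```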
If anyone has some programs that I could try to 'smooth', I'd love to experiment with them!