This is a read-only snapshot of the ComputerCraft forums, taken in April 2020.

Programming the Atari 2600 & Other Retro Computers/Consoles

Started by nitrogenfingers, 17 December 2012 - 10:32 PM
nitrogenfingers #1
Posted 17 December 2012 - 11:32 PM
I hate to add to the general section with non-CC-related posts (which are now coming to outnumber their indigenous cousins), but this is somewhat related.

I'm battling a bit with trying to make some games for CC, and discovering that optimization for more graphically intensive games is actually quite a challenge. Where in previous games simply clearing and redrawing scenes was easy, with more complex graphics this leads to frame tear and lag. Even now, in many applications here on the forums (my own especially), there are noticeable delays in screen refreshes on slower computers. This, I reiterate, is not at all the fault of ComputerCraft, but the fault of the programmer.

What's interesting is that lower-level access on a real computer, rather than an emulator, would help with these problems. Back in the 80s, when people programmed games for early desktop computers like the Atari ST, Sinclair ZX and QL, I'm informed that pure high-level programs written in BASIC and other languages were almost non-existent; games programming was done strictly in assembly code, much as C++ is used today. It was harder to program in, but the low-level access allowed optimizations simply not possible with higher-level tools. The more abstraction and layers built on top of that core, the less efficient programs become. We can see this in the history of languages, and in why Fortran, C++ and even assembly still see use today over much more convenient languages like Lua, and now, rather astonishingly, I'm seeing it in my and other people's work here on the forums. It's amazing what CC has taught me.

I'm realizing I don't know a lot about this sort of thing, so I'm doing some reading on the topic and trying to apply some 3D rendering techniques I do know something about, but they don't quite fit. What I'm curious about is how computers with far less memory and much slower chips were able to run real-time games.

Enter the Atari 2600. It's known to be quite challenging to program due to the sheer limitations of its hardware, but as the flood of shovelware leading up to the '83 crash suggests, it must be possible to pick up. On no machine was optimization more important, and I'm fascinated to see whether any of those techniques might apply to how we look at programming in CC, or just make myself and others better programmers in general.

So I'm going to look into it. In the weeks to come, on top of my regular job, CC commitments and other odds and ends, I'm going to try to write a game for the grandfather of modern video games. So I guess I'm asking those who perhaps know more about this sort of thing: am I insane? Can it be done? Is it worth the time, or are there better approaches? If it is worthwhile, would you play my game?

Apologies to the admins. This will barring unforeseen circumstances be my only irrelevant general post.
CoolisTheName007 #2
Posted 18 December 2012 - 12:47 AM
This is an interesting topic, given that it generalizes to all dynamic graphical programs out there: GUIs, screensavers, games, etc.
I know for a fact that kazagistar was trying to optimize the LyqydOS window system, i.e. rewriting only the parts of the screen that were changed.
EDIT: I've decided to implement a much more basic task in my scheduler, which will hopefully help basic tasks run faster.
dissy #3
Posted 18 December 2012 - 03:59 AM
The Atari 2600 was indeed a very simplistic machine. With 4K (yes, 4096 bytes) of program ROM and 128 bytes of RAM, you would run most of your game code during the vblank period, when the TV was done drawing a screen and was moving the electron gun back up to the top corner to start drawing a new one.
To squeeze out more performance, you could also run code while the electron beam was drawing a line on the TV.

You would literally update the screen during the time the screen was being drawn by the TV.
There was barely enough time to get a shape drawn correctly, not to mention reading the joystick position and playing sounds.

The concept of sprites did not exist then as it does today. The 2600 is basically a pong console, and the TIA chip that handled talking to the TV was exceptionally limited in what it could do. The "sprites" consisted of a couple of pixels you could turn on/off or set the color of, based on their position.
You had players 1 & 2, missiles 1 & 2, the ball, and half of the background, which the system would mirror onto the other half.
To get a full background you would need to update the line being drawn once it reached halfway, and do this for each scan line.
A lot of complex games ignored the built-in sprites and used nothing but the background, with their own hit detection.

There are a bunch of very excellent programming tutorials over on the Stella site (one of the better 2600 emulators).
It used a form of the 6502 CPU, so if you know assembly for that, it should be easy to adapt to the differences.

Since our displays have not used electron beams in a number of years now, and programming on top of 10 layers of graphics abstraction is still many times faster than the 2600 could run code at all, I'm not sure how well the techniques used on this system will translate over to current technology. But if anything it raises your appreciation while playing an Atari 2600 game.

This is the guide that I learned Atari 2600 programming from almost 10 years ago, and even if you don't plan to do a single bit of coding for the platform, it is a very interesting read and I couldn't recommend it more:
http://www.atariage.com/forums/topic/33233-sorted-table-of-contents/
nutcase84 #4
Posted 18 December 2012 - 12:48 PM
Very interesting reading… I would love to test anyone's programs. As long as they're not dumb and there aren't lots of copies of them, it's fun to test.
BigSHinyToys #5
Posted 18 December 2012 - 01:51 PM
The main slowdown in CC is the term calls. Computing a lot of data doesn't have as much of an effect as a lot of term calls.

Any program that emulates the screen in a table will cause severe problems for term-heavy applications. I was having that problem with mouse File Browser when running on LyqydOS. I made the following changes to optimize it: I don't clear the screen and redraw on every event; instead I have a flag variable that is set to true when the screen needs to be redrawn. I overdraw instead of clearing the screen, as clearing caused flickering. I use term.write directly instead of write or print (this saves a lot of unnecessary code execution). Overall that improved the performance a great deal, whether running on LyqydOS or just normally. Those kinds of optimizations can be made for menus and other programs without heavy graphics.
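
The flag idea looks roughly like this (just a sketch, not the actual File Browser code; "dirty" and "redraw" are made-up names):

local dirty = true

local function redraw()
    -- overdraw: pad lines with spaces to cover the old text instead of
    -- calling term.clear(), which avoids the flicker a full clear causes
    term.setCursorPos(1, 1)
    term.write("File list goes here      ")
    -- ... more term.setCursorPos/term.write calls for the rest of the UI ...
end

while true do
    if dirty then
        redraw()
        dirty = false
    end
    local event = os.pullEvent()
    if event == "mouse_click" or event == "key" then
        -- only mark the screen for a redraw when something actually changed
        dirty = true
    end
end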

Splitter works by making the same call twice, once on the real term and once on the monitor. It doesn't have a table storing the screen, so it can't pick up where it left off on a different monitor without being cleared first to synchronize them. It does run very fast though, and is capable of running the "star wars" program on the terminal and a monitor simultaneously with no flicker.

Links to both of those programs are in my profile page's About Me section.

Which programs are you trying to speed up? I can take a look at them if you would like (can't promise major improvements though).
ETHANATOR360 #6
Posted 18 December 2012 - 02:27 PM
Yes, I agree. I have a very decent PC (8 GB RAM), and yet billysback's dodgers3d, an attempt at a 3D game system, lags extremely even though it's just a simple sprite display.
There are also several good wikibooks for programming older systems.
nitrogenfingers #7
Posted 18 December 2012 - 03:32 PM
The main slowdown in CC is the term calls. *snip*

I came to a similar conclusion. It's trickier with games in particular, as large portions of the screen can require redrawing at any one time. I think on a case-by-case basis there are ways to sectionalize and overdraw specific portions of the screen, but this would be challenging (though I'm looking into it). Scrolling backgrounds, for example, were something I wanted to develop, but that may simply be too demanding.
I've removed flicker by creating a back buffer, though I've yet to determine if this has an effect on frame rate; I'll need testers to help me with this. I'll probably have to produce some benchmarking tools to help with this process.
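
Something as simple as timing a batch of redraws with os.clock() would probably do for a first pass. A rough sketch (drawFrame is just a stand-in fill routine, and os.clock() only ticks in 0.05s steps, so it needs a large batch of frames):

local function drawFrame()
    -- stand-in render routine: fill the whole screen with term.write calls
    local w, h = term.getSize()
    for y = 1, h do
        term.setCursorPos(1, y)
        term.write(string.rep("#", w))
    end
end

local function benchmark(label, iterations, fn)
    local start = os.clock()
    for i = 1, iterations do
        fn()
    end
    print(label..": "..(os.clock() - start).."s for "..iterations.." frames")
end

benchmark("clear + full redraw", 100, function()
    term.clear()
    drawFrame()
end)
benchmark("overdraw only", 100, drawFrame)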


There are a bunch of very excellent programming tutorials over on the Stella site. *snip*

Thanks for your comment! I was aware that a lot of challenges existed for the machine but that puts them into a much more tangible perspective. I'll definitely be taking a look at the materials available at Stella and AtariAge. Thank you again :)

As you say, it's highly unlikely the concepts will be directly applicable, but since the hardware and memory limitations of the system must be at the forefront of a 2600 developer's mind, it would certainly shift the mindset from high-level, abstract graphics programming to a more grounded perspective on the limitations of our own computers.
CoolisTheName007 #8
Posted 18 December 2012 - 10:21 PM
If you look at my GitHub, there's a repository named utils, and in it there's a benchmark.lua file. It's pretty neat and documented (not mine).
billysback #9
Posted 19 December 2012 - 04:56 AM
Something I found out is that not clearing the screen helps a tonne in games (similar to what shiny was saying).
One thing you could do for sprites is make each sprite remember the old state of the pixels beneath it; then, instead of redrawing the background every time you want the sprite to move, you restore the pixels under the sprite's current position and overwrite the character and colour at the new location.
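
For a single-cell "sprite" that might look something like this (just a sketch; a real sprite would store a rectangle of cells, and the background table is only there because the term API can't read the screen back):

-- keep a table of what the background looks like
local w, h = term.getSize()
local background = {}
for y = 1, h do
    background[y] = {}
    for x = 1, w do
        background[y][x] = "."
    end
end

-- a one-cell sprite that remembers what was underneath it
local sprite = { x = 2, y = 2, char = "@", under = "." }

local function moveSprite(newX, newY)
    -- restore the character that was under the old position
    term.setCursorPos(sprite.x, sprite.y)
    term.write(sprite.under)

    -- remember what is under the new position, then draw the sprite there
    sprite.under = background[newY][newX]
    sprite.x, sprite.y = newX, newY
    term.setCursorPos(sprite.x, sprite.y)
    term.write(sprite.char)
end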

Minimizing the proportion of the screen you are updating at a time also helps: instead of updating every pixel on the screen (presuming you have a big proportion of the screen to update, that is), you can split it up into smaller sections.

A method I use in my Java games that I guess could be transferred to CC (though I have not done this) is to add a buffer to the screen updating: instead of making things draw as they are asked to, they draw to a buffer; once this buffer is ready it can replace the current screen and be reset.
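
In CC that could look something like this (a sketch only, using one string per screen line as the buffer; bufferWrite/bufferFlush are made-up names):

-- build each line as a string, then push the whole frame to the
-- terminal in one pass at the end of the frame
local w, h = term.getSize()
local buffer = {}

local function bufferClear()
    for y = 1, h do
        buffer[y] = string.rep(" ", w)
    end
end

local function bufferWrite(x, y, text)
    -- splice text into the line instead of calling term.write straight away
    local line = buffer[y]
    buffer[y] = line:sub(1, x - 1)..text..line:sub(x + #text)
end

local function bufferFlush()
    -- one setCursorPos/write pair per line, rather than one per draw call
    for y = 1, h do
        term.setCursorPos(1, y)
        term.write(buffer[y])
    end
end

-- usage: draw everything into the buffer, then flush once per frame
bufferClear()
bufferWrite(2, 2, "drawn via the buffer")
bufferFlush()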

Also, you can do simple things like only updating when necessary (when things change); this can make little difference in games (physics and animations) but a huge amount of difference in things like menus.

All of these are just small things that I have personally found out when creating games, for whatever platform (though specifically CC).
dissy #10
Posted 19 December 2012 - 05:19 AM
*snip*

I've done something similar before with curses programs (aka Unix terminal ANSI screens).

A sub-thread will keep two screen buffers, current and new. The API to make changes will update the new buffer, and the thread will continually compare the two in a generic way to detect any differences.
If the two are different, another function kicks in to do a binary compare and only update the cells that are different between the two buffers (in the case of curses, this is done on a per-line basis, which speeds up redraws for modem/serial connections).
Finally it updates the current buffer with the contents of new and releases the thread.

The CC term API would fit this model fairly well, and since it has full X/Y control there wouldn't need to be any special handling to update each line together.
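
A rough Lua sketch of that idea (one character per cell; colors and per-line batching left out to keep it short):

-- two buffers: 'current' mirrors what is on screen, 'new' is what we want shown
local w, h = term.getSize()
local current, new = {}, {}
for y = 1, h do
    current[y], new[y] = {}, {}
    for x = 1, w do
        current[y][x] = " "
        new[y][x] = " "
    end
end

local function setCell(x, y, char)
    new[y][x] = char    -- drawing code only ever touches 'new'
end

local function flush()
    for y = 1, h do
        for x = 1, w do
            if new[y][x] ~= current[y][x] then
                -- only make term calls for the cells that actually changed
                term.setCursorPos(x, y)
                term.write(new[y][x])
                current[y][x] = new[y][x]
            end
        end
    end
end

setCell(3, 3, "@")
flush()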
CoolisTheName007 #11
Posted 19 December 2012 - 06:40 AM
snip
I wonder if using functions like term.clearLine and term.scroll, since they are implemented on the Java side, would allow faster operation.
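
For vertical scrolling at least, term.scroll would let the Java side move everything that is already on screen in a single call, so only the row coming into view needs term.write calls. A rough sketch (scrollUp is a made-up name):

local w, h = term.getSize()

local function scrollUp(newBottomRow)
    term.scroll(1)              -- shift the whole screen up one line, Java side
    term.setCursorPos(1, h)
    term.write(newBottomRow)    -- draw just the line that scrolled into view
end

scrollUp(string.rep("-", w))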