This is a read-only snapshot of the ComputerCraft forums, taken in April 2020.

Software Validation API

Started by BigTwisty, 07 September 2013 - 03:52 PM
BigTwisty #1
Posted 07 September 2013 - 05:52 PM
Big Twisty’s Software Validation API

Download: http://pastebin.com/b585vtx3

Background:

Software Validation is a process of using a predefined group of tests to prove functionality of a piece of software. Have you ever had a perfectly working piece of code, but you found some way to make it better? How do you know you didn’t break something else in that massive operating system you’re putting together? This is where software validation comes in. It allows you to quickly and easily go back and test all the functionality that you previously proved out after any major (or minor) software change.

The typical development process, at least for me when I’m lazy, is this:
  1. Write a new piece of cool code that does FOO.
  2. Make sure it does FOO right.
  3. Notice it could also, with a bit of tweaking, do BAR as well.
  4. Tweak it (not twerk it, much different!)
  5. Make sure it now does BAR.
  6. Drink a beer.
  7. Use the code in a bunch of other stuff.
  8. Try to figure out why my other stuff isn’t working, not realizing that my cool code may do BAR now, but FOO got broke and I didn’t know it!
  9. Bang my head on a wall…
  10. Kick my dog…
  11. Write a darn Software Validation API so I don’t do it again!
This API is intended to allow the user to quickly throw together software validation scripts for their creations, changing the process to this:
  1. Write a new piece of cool code that does FOO.
  2. Start a software validation script for the code
  3. Add a step that tests FOO.
  4. …tweak it to do BAR… add a BAR test step…
  5. Run validation test and see that FOO doesn’t work.
  6. Fix it, drink beer, pet dog, smile at wife…

API Features:
  • Test generation
    • Each test is tagged for reference
    • Failed tests include expected vs received data
  • Error reporting support
    • Can test your code’s ability to throw errors when expected
    • Test continues even when errors are thrown
  • Can print failed tests with data to the screen
Usage Tutorial:

Let’s write a quick piece of code to test. Credit goes to theoriginalbit for the original version of this override for the assert() function. I’ve modified it a bit for my purposes, but the idea was his.

-- Raises an error at the requested level when bool is falsy.
-- throwback == 0  : no position info is added to the message
-- throwback == nil: default to level 2 (blame assert's caller)
-- otherwise       : bump the level by one to skip this wrapper frame
function assert(bool, message, throwback)
  throwback = throwback == 0 and 0 or throwback and (throwback + 1) or 2
  if not bool then
    error(message, throwback)
  end
end
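To see what the override does on its own, here is a minimal, self-contained sketch. `myAssert` is a renamed stand-in so the demo does not shadow the global `assert`; it is not part of the API.

```lua
-- Stand-in copy of the override above, under a hypothetical name.
function myAssert(bool, message, throwback)
  throwback = throwback == 0 and 0 or throwback and (throwback + 1) or 2
  if not bool then
    error(message, throwback)
  end
end

-- A passing assertion is silent:
myAssert(true, "never thrown")

-- A failing assertion raises; pcall turns that into false plus the
-- message (in ComputerCraft the caller position shows up as a
-- "pcall: " prefix, as the tests below expect):
local ok, err = pcall(myAssert, false, "boom")
print(ok, err)
```

Note that the error message a caller sees depends on where the throwback level lands, which is exactly what the test script below exercises.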

Now let’s set up a new test.


os.loadAPI("btValidation")
test = btValidation.newTest( "output.txt" )
pline = btValidation.pcallLine()

This sets up a new test that will output all test data to “output.txt”. We’ll see what pline does later.
Now let’s run a small test on it.


test.note("assert")
test.try( "1.0", { assert, true, "test" }, { true } )
test.try( "1.1", { assert, false, "test" }, { false, "pcall: test" } )
test.try( "1.2", { assert, false, "test", 2}, { false, pline.."test" } )

The first line adds a note to the test, so we know what this section is testing.
The test.try() function is declared like this:


try = function( tag, testCall, expected )

tag: gives the test a reference, usually a number, so you know which test you’re looking at in the report.

testCall: A table containing the function and parameters you want run.

expected: A table containing the expected results. The first item is a boolean indicating whether you expect the function call to succeed. If you expect it to fail, the second item is the expected error message. If you expect it to succeed, the remaining items should contain the function’s expected return values.

Normally a well-designed application will not report errors at the point inside the application where they are raised; expected errors should be thrown back to the caller. Here the caller is always pcall, hence the expected error message for test 1.1 (see how handy those tags are?).
Notice that when using assert(bool, msg, 2) the throwback goes back one level further than pcall. The btValidation.pcallLine() function returns the expected error-message prefix for the line in the API where the pcall occurs, for just this purpose. It is probably only useful for testing error-handling code, but it’s there anyway.
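The mechanism behind try() can be pictured with a short sketch. This is not the real btValidation code (that is on the pastebin above); `runCase` is an illustrative name, and the real API also handles tagging and report output.

```lua
-- Illustrative sketch of a pcall-based test step, assuming the
-- {success, results...} convention described above.
local unpack = unpack or table.unpack  -- Lua 5.1 / 5.2+ compatibility

function runCase(testCall, expected)
  local fn = testCall[1]
  -- pcall captures both success/failure and any return values
  -- (or the error message) in one table:
  local results = { pcall(fn, unpack(testCall, 2)) }
  for i = 1, math.max(#results, #expected) do
    if results[i] ~= expected[i] then
      -- a report would log expected vs received here
      return false, i, expected[i], results[i]
    end
  end
  return true
end

local function double(x) return x * 2 end
print(runCase({ double, 21 }, { true, 42 }))  -- true
print(runCase({ double, 21 }, { true, 40 }))  -- false, mismatch at item 2
```

Because the function runs under pcall, a thrown error becomes just another comparable result, which is why the test run continues even when errors are thrown.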

Feel free to play around with this if you like. If not, feel free to kick your dog or whatever…

BigTwisty out…

BigTwisty #2
Posted 08 September 2013 - 01:00 AM
After running an extensive validation test on my base class API, I found many things to fix for both APIs.

btValidation updates:
  • Added tryWrite() and tryRead() to handle attempts at writing to or reading from variables. This is handy for the class structure.
  • Removed need to reference error message throwbacks. Error messages are still validated, but throwbacks are removed if they are thrown back to expected places within the btValidation API code.
  • Made { true } the default return value, overrideable for expected fails or return values.
  • Various bug fixes.
theoriginalbit #3
Posted 08 September 2013 - 05:45 AM
Again, I'll post here like I did in the other thread.
I suggest that you return bool from your assert function (like I have with my override) for this use case:


local handle = assert( fs.open("file", 'r'), "Cannot open file for read", 0)
This works because in Lua any value other than false or nil evaluates to true, while nil evaluates to false.
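The suggestion amounts to adding a return statement to the override, so a truthy value passes straight through to the caller. A minimal sketch, using a hypothetical name and a fake handle in place of fs.open:

```lua
-- Override extended to return its first argument on success;
-- assertReturn is an illustrative stand-in name.
function assertReturn(bool, message, throwback)
  throwback = throwback == 0 and 0 or throwback and (throwback + 1) or 2
  if not bool then
    error(message, throwback)
  end
  return bool
end

-- A truthy value (here a stand-in for a file handle) passes through:
local handle = assertReturn({ readLine = function() return "line" end },
  "Cannot open file for read", 0)
print(handle.readLine())  -- line
```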
BigTwisty #4
Posted 08 September 2013 - 07:54 AM
I implemented that, plus a bit more, in the other class. The purpose of a software validation suite is to be as simple as possible under the hood, as it is rather difficult to validate the validator. That particular code was one of the things I was testing.