This is a read-only snapshot of the ComputerCraft forums, taken in April 2020.

Defined or undefined parameters?

Started by Thefdjurt, 17 February 2015 - 05:17 AM
Thefdjurt #1
Posted 17 February 2015 - 06:17 AM
Functions and methods in many different languages are basically the same thing.
However, one thing I have noticed is that some languages use defined parameters, whilst others use undefined parameters.

Not quite sure what I mean? Here are some examples.
–Defined:
Java, C#, C++, etc.; where if you define a method with parameters, then that method can only be called with arguments matching those parameters, or you'll get an error. I think this might have something to do with the fact that they use methods rather than functions.
–Undefined:
Python, Lua, JavaScript (scripting rather than programming, but who cares), etc.; where if you call a function that has defined parameters without passing the corresponding arguments, you won't get an error until a missing argument is actually referenced and used, at which point it turns out to be null/nil. This is easy to work around with only a few lines of code, or in Python's case while defining the parameters themselves.
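
For instance, here's a quick Lua sketch of what I mean (names made up), plus the usual one-line workaround. Calling greet() doesn't complain about the missing argument at the call itself; the error only shows up on the line where the nil actually gets used:

local function greet(name)
    print("Hello, " .. name)    -- errors here: attempt to concatenate nil
end

greet()                         -- the missing argument itself goes through silently

local function greet2(name)
    name = name or "world"      -- the usual one-line workaround
    print("Hello, " .. name)
end

greet2()                        --> Hello, world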

Now, what do you guys have to say on this topic? Which do you believe is the better form: defined or undefined parameters, or even the possibility of both? Also, I haven't done any research prior to creating this topic, so if I wrote something which is inaccurate I would appreciate it if you would tell me so I may correct myself.
Edited on 17 February 2015 - 05:18 AM
SquidDev #2
Posted 17 February 2015 - 07:47 AM
–Undefined:
Python, Lua, JavaScript (scripting rather than programming, but who cares), etc.; where if you call a function that has defined parameters without passing the corresponding arguments, you won't get an error until a missing argument is actually referenced and used, at which point it turns out to be null/nil. This is easy to work around with only a few lines of code, or in Python's case while defining the parameters themselves.

Whoa, Python? That requires a set number of parameters to be passed. Otherwise, I'm pretty split on this. I think, generally though, this boils down to dynamic vs static typing. And despite my love of Python (and Lua), I still think that static typing is an easier, nicer and more efficient way of doing things.
ElvishJerricco #3
Posted 17 February 2015 - 08:30 AM
Yeah, this definitely just comes down to the static vs dynamic thing. And if I've learned nothing else from Haskell, it's that strong type systems provide much more than just compile-time bug-catching. Types can be an incredibly powerful tool. Monads are so cool…
Bomb Bloke #4
Posted 17 February 2015 - 10:45 AM
Exactly - static types it is.

As for "pros and cons", it boils down to efficiency versus ease-of-use. The more specific your code is, the faster it can run - for example, using term.write() in ComputerCraft executes faster than print(), because print() has to check what you pass it and apply formatting if need be (eg, word-wrap, escape characters, and so on). It also has to concatenate multiple parameters, and calls tostring() on them all to boot. This all adds up to a lot more code being executed, slowing it down - but makes it a lot easier to use and reduces the length of your script. term.write(), on the other hand, requires more effort on your part if you want to use it with eg word-wrap… but you can be specific about the features you need, meaning that the only code being executed is the stuff that has to be executed for your actual use-case.

Lua's tables are another example. Prior to ComputerCraft (my first experience with Lua), I'd only vaguely thought about managing data in the way they do, and never imagined that there would be a language which actually uses a construct that's so inherently memory-inefficient. Now I cringe at the idea of doing it any other way - I still consider them grossly inefficient, but they're just so easy to use that I can code some very complex concepts with a very small amount of effort on my part.
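
The sort of thing I mean (a throwaway example, names invented): one table acting as a record, an array and a nested structure all at once, with no declarations anywhere.

local job = {
    name   = "quarry",
    size   = { x = 16, z = 16 },
    layers = { "dig", "dig", "place torch" },
}
job.owner = "somebody"      -- new fields whenever you feel like it
print(job.layers[3])        --> place torch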

Now consider the difference between a compiler which has to account for multiple data types being available to each initialised variable, and one which knows ahead of time that eg an int is an int and it's going to stay an int. Obviously the first one is much more convenient for the coder to play with, but the second one doesn't have to waste time checking all the possible scenarios that might apply.
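
In Lua terms, if you want the guarantee that a number really is going to stay a number, you end up writing the check yourself, and it runs on every single call - something like this:

local function addNumbers(a, b)
    if type(a) ~= "number" or type(b) ~= "number" then
        error("expected two numbers", 2)    -- the runtime cost of the "guarantee"
    end
    return a + b
end

A static compiler proves that sort of thing once, before the program ever runs, so the check simply isn't there in the compiled code.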

So it's a case of apples and oranges - one way isn't inherently "better" than the other. If you want to write a simple script that does a complex job without much effort on your part, dynamic typing is more likely to be the sort of compiler trait that suits that task. If you want to write a script that maybe requires you to spend a bit more time being specific about what you want, but as a result gets the job done faster once you go to run it, well… you get the idea.

To boil the notion down to its basic components, I suppose you could say it's comparing the traits of a low level language with those of a high level language. Neither is "better", it's simply a case of using the right tool for the job. There's no point in writing pages of machine-code if you're only ever going to run the result once and it's just to eg rename a bunch of files. Likewise, there's no point in writing a full-on SHODAN-level artificial intelligence in Lua, because the time saved scripting it up will be lost the moment you try to execute it.