When you first programmed your model, you gave it something to calculate, and then you verified that the calculations were correct, after which you likely banged on the table and exclaimed “YES!”.
Later you needed to fix something. You discovered a bug, or you had another case to run your model on and you needed to add some feature to handle it. What did you do? Did you repeat the manual verification? Even if you did, one thing is clear: you won’t do it again and again and again. Eventually you’ll resort either to praying that your code won’t break, or to avoiding changing it at all for fear it will break, or to the correct way of working.
Welcome to automated testing.
You create some very simple input for your model—a time series with only five records, a network with only three nodes, etc.—and you solve it by hand, or in a spreadsheet. You write a program that runs your model with that simple input and verifies that it produces the expected output. Then, you just need to press a button (or run a command) and have the thing tested in a few seconds.
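The idea above can be sketched in a few lines of Python. The model here is hypothetical—a simple three-point moving average standing in for whatever your model computes—but the shape is the real thing: a five-record input, an expected output worked out by hand, and an assertion comparing the two.

```python
def moving_average(series, window=3):
    """A toy 'model': the moving average of `series` over `window` records."""
    return [
        sum(series[i:i + window]) / window
        for i in range(len(series) - window + 1)
    ]


def test_moving_average():
    # Five records, solved by hand (or in a spreadsheet):
    # (1+2+3)/3 = 2.0, (2+3+4)/3 = 3.0, (3+4+5)/3 = 4.0
    series = [1, 2, 3, 4, 5]
    assert moving_average(series) == [2.0, 3.0, 4.0]


if __name__ == "__main__":
    test_moving_average()
    print("OK")
```

Running the file is the "press a button" step: it either prints OK in a fraction of a second or stops at the failing assertion.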
Whenever you make a change to your model to handle more cases, you need to create and solve similar small inputs that will trigger the new functionality. Running the testing code will check your model against all such test cases you have created in the past. So you can go and change its code without fear.
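Continuing the sketch: suppose you later extend the model to handle a case it couldn't before—say, a record missing from the time series. The function below is hypothetical (it fills a single gap with the mean of its neighbours), but the point is the test that comes with it: a new small input, solved by hand, sitting next to all the old ones.

```python
def interpolate_gaps(series):
    """Hypothetical new feature: fill a single missing record (None)
    with the mean of the two neighbouring records."""
    result = list(series)
    for i, value in enumerate(result):
        if value is None:
            result[i] = (result[i - 1] + result[i + 1]) / 2
    return result


def test_interpolate_gaps():
    # New case, solved by hand: the gap becomes (2 + 4) / 2 = 3.0
    assert interpolate_gaps([1, 2, None, 4, 5]) == [1, 2, 3.0, 4, 5]
```

With a test runner such as pytest, one command discovers and runs every `test_*` function in the project, so each change is checked against every case you have accumulated so far.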
You might already know, or might have deduced for yourself, everything I’ve written so far. There’s much more to it, however. I will expand next time.