January 4, 2019 • #76 • Programming Theory

Exceptions, Try/Catch 6502


I've been thinking about how to implement exception handling on the C64, for C64 OS. Exception handling would be useful for streamlining the handling of errors, and also for gracefully handling an application crashing under known exceptional conditions, with the ability to peacefully return back to the system's homebase application.

In this post, I'm going to talk about why you would want to have exceptions, how they can improve your program design, and how they might be implemented for C64 OS.

Overview of Basic Program Flow Control

We're all familiar, even just from BASIC, with the usual program flow mechanisms. Execution of a program flows from top to bottom, executing each step one after the next. The next simplest way to change program flow is with a branch, which allows skipping over some code if a certain condition is not true. In BASIC this can be done with an IF statement that jumps past some code to the code that follows.

The next most simple program flow control is the loop. In BASIC, there is a special language construct, FOR/NEXT, which allows the program to return to the top of the loop and iterate some number of times. In assembly, loops must be implemented with branches. But instead of branching forward to skip some code, as in an IF statement, a loop merely branches back to some previous part of the program so that it executes again. Usually the looping code contains a manual counter, and the branch returns to the top of the loop until the counter reaches some value.

Branching and its related offspring, looping, are the most fundamental flow control mechanisms of any program. They are what make a computer "compute" rather than merely calculate: the ability to check the state of a variable and make a decision about whether to continue execution here or there.

The next flow control concept is the routine. A routine is a collection of instructions that are designed to go together, as a conceptual unit, that does something. In BASIC one can jump into a routine with the GOTO command. It redirects execution to the start of a routine, somewhere else. In assembly, the equivalent of GOTO is the JMP instruction. The problem with a JMP is that the end of that routine cannot return execution to the point following where the JMP was made. JMP or GOTO, in terms of flow control, is a one-way transfer of execution. It has its uses, but it isn't always enough.

 

The big brother of GOTO (or JMP) is GOSUB, or JSR in assembly. GOSUB stands for GO TO SUB-ROUTINE, and JSR stands for Jump Saving Return. Both are used for executing a subroutine. A subroutine is just like any other routine, a grouped set of instructions for performing a particular task; however, it is meant to be nested, called from within another routine which itself has not yet completed. Subroutines really unlock the power of a computer, by allowing for a certain level of abstraction. A routine could, for example, consist of nothing but a sequence of calls to other subroutines. And in even more abstract cases, a routine can call itself, resulting in recursion. I love recursion. When it works, and it's suitable, recursion can be very elegant.

But how do subroutines work? How do they know where execution flow should return to when they're finished? At the lowest level, they work by saving the address of the current execution point to the stack before transferring flow control to the subroutine. So, at a GOSUB, BASIC will push onto the stack, say, the current line number and execution offset within the line. JSR, Jump Saving Return, does exactly what it says: it saves the return address onto the stack before doing a JMP.

It's the return mechanism that interests us. When you're in BASIC you have only one way to leave a subroutine (besides drilling deeper, or leaking stack memory): the RETURN command. Under the hood, RETURN restores the current line number and offset into the line by pulling those values off the stack, and then continues on from there. In BASIC, though, all of the stack manipulation is completely abstracted. There is no way in BASIC to leave the current subroutine other than returning to the point just past the GOSUB that called it. Any alternative flow paths are impossible, because RETURN is the only command available and its implementation doesn't support more complex behavior.

Let's look at a quick visualization to show this standard behavior, and what BASIC's GOSUB/RETURN pair can and cannot do.

Flow control block diagram.

On the left side we have the standard flow. The red block at the top is a routine; we can think of it as our whole program. The program starts at the top of the first block and ends at the bottom of the first block. At the point of the black arrow, the program GOSUBs to a subroutine, the top of the purple (second) box. However, halfway down the purple routine, it GOSUBs again at the next black arrow, to the top of the blue (third) routine. The third box executes all the way through and at the bottom it RETURNs. The return path is indicated by following the green arrow back up to the tip of the second black arrow, midway through the purple box. The rest of the purple routine then continues until the bottom of the purple box. There it RETURNs, following the next green arrow back again to the tip of the red box's black arrow. Flow then continues to the end of the red box, where the program ends.

That's a pretty standard, and totally possible, BASIC program flow using GOSUB to call and RETURN to come back from two nested subroutines. Now let's look at the right hand side of that block diagram.

It's very similar, except for that new pink arrow. The pink arrow starts part way through the blue (third) routine, possibly as the result of an error encountered during the execution. The pink arrow transfers flow control out of the blue routine, but not back to the purple routine. Instead it completely bypasses the purple (middle) routine and returns all the way back to somewhere inside the red routine, just past where the initial call to the purple routine was made. BASIC cannot do this. It cannot do this because there is no language construct that permits it, and RETURN doesn't behave this way. And so, you just can't do it. (Short of calling a custom assembly routine.)

But this kind of flow adjustment is exactly what happens in other languages with the TRY/CATCH language construct, which is used for exception handling. BASIC can't do it, but the question is, is there some way we can get our assembly language programs to do this?

Program Flow, with Exceptions

Let's switch gears and look at a higher level language that supports exceptions, and see how they are used to modify the typical call/return flow.

Javascript is a very popular language, and it supports a simple-to-use TRY/CATCH language construct.
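The Javascript listing that the next few paragraphs walk through didn't survive the conversion of this page to plain text. Here is a hedged reconstruction based purely on the description: the helper names and the deterministic `coin` parameter (used here in place of a random flip, for the sake of a reproducible example) are my assumptions.

```javascript
// Stand-in for real networking code. The connection is always invalid
// here, so the exceptional path can be demonstrated.
function openServerConnection() {
  return { valid: false, messageCount: 0, getMessage: () => "server msg" };
}

function getMessageFromServer() {
  const conn = openServerConnection();
  if (!conn.valid) {
    // Exceptional: no crash, no bug, but this routine can't do its job.
    throw new Error("could not open server connection");
  }
  if (conn.messageCount === 0) {
    // A different exceptional situation: nothing to retrieve.
    throw new Error("no messages on server");
  }
  return conn.getMessage();
}

// Decides where the message should come from, but is too generic to know
// what to do about server errors -- so it does nothing about them at all.
function getMessage(coin) {
  if (coin) {
    return getMessageFromServer(); // may throw; blindly returns otherwise
  }
  return "a locally stored message";
}

function main(coin) {
  let message;
  try {
    message = getMessage(coin);
  } catch (e) {
    // main decides how an exception is handled: substitute gracefully.
    message = "a default message";
  }
  return message.toUpperCase(); // "displayed", and the program ends
}
```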

As we go through this code, we'll see that the execution flow is very much like the block diagram on the right hand side above. We have three routines. Main is the whole program. It gets a message and displays it in a dialog box and then ends.

In order to get a message, it calls the getMessage routine. GetMessage obscures whether the message comes from a server (remote, with possibility of failure) or is just locally available. I did this with a "coin flip" to indicate that main cannot predict the source of the message.

GetMessage is what decides where the message should come from, but it is sufficiently generic that it doesn't know what to do if an error were to occur when trying to get a message from the server. As a consequence of its ignorance, it doesn't do anything at all for handling errors. Fifty percent of the time it just plows ahead, calls getMessageFromServer and blindly returns whatever it is given. The other half of the time, it returns a locally stored message.

The third routine is the very specific getMessageFromServer. This routine calls yet other routines (not shown here) to accomplish its job. The problem is that some of those calls could fail. First it gets a server connection. But if the server connection is invalid, what can it do? It can't proceed to check for messages. This is an exceptional situation. The program hasn't crashed, but this routine cannot fulfill its intended purpose. It also has no idea who has called it, nor with what intentions. Instead it uses the language construct "throw" to throw an exception, which ends its own control of execution flow.

Alternatively, though, maybe the server connection is established, so it proceeds. Next it asks the server for a messageCount. Uh-Oh. What happens if there are zero messages on the server to retrieve? Again, the program hasn't crashed, there is no bug, but the function is unable to do what it's supposed to do, because the server has no messages. This is a different kind of exceptional situation, but still exceptional. It throws again, but with a different exception string.

Lastly, it is possible that no exceptional situation arises. If the server connection opened, and there are messages to retrieve, it grabs one and returns it. The return statement necessarily returns flow control to the routine that called this routine. So we're back to getMessage, which returns the message back to main. Main converts the message to upper case and displays it, and the program ends.

This is curious though. What actually happens to the program flow when getMessageFromServer executes a "throw"? Execution leaves getMessageFromServer; no further instructions in it get executed. However, it doesn't just return to getMessage: getMessage has also finished executing, and no more of its instructions will be called. Flow goes all the way back to main, because the call to getMessage was made inside a "try" block. Execution does not just continue after the call to getMessage, though; instead it bypasses everything else in the try block and starts executing the "catch" block instead. In this case, the main routine is in control of how an exception should be handled, and gracefully replaces the message with something of its own. The program continues as normal, and the message gets displayed.

What would happen if there were no try/catch? What would happen if the exception was never caught by anything and it just shot all the way back up to the top of the call stack? Well, this depends on the environment. In Javascript, in a web browser, not much happens. The exception gets logged in the error console, and that's about it. But in a program written in a compiled language that is running directly on the CPU, under the control of the operating system, things don't go quite so smoothly. The operating system itself catches the exception and acts on it, typically by forcibly terminating the program. Usually you would say, at this point, that the program crashed.
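A few lines of Javascript (the names are mine) make this bubbling concrete: neither middle nor outer handles the exception, so it travels to the top of the call stack, where the outermost caller (standing in for the environment) gets the last word.

```javascript
function inner()  { throw new Error("boom"); } // the exception's origin
function middle() { return inner(); }          // no handler: passes it up
function outer()  { return middle(); }         // no handler here either

// Standing in for the environment (browser console, or an OS), which is
// what finally deals with an exception that nobody caught.
function environment() {
  try {
    outer();
    return "program completed";
  } catch (e) {
    return "program terminated: " + e.message;
  }
}
```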

Stack Manipulations in 6502

It can at first be a bit unclear what the advantage of exception handling really is, but with time and experience it becomes more obvious how exceptions can help make programs cleaner and more robust. We'll return to some practical examples at the end.

The more pressing issue is: how can exceptions be implemented in 6502 assembly? First, how are they implemented in Javascript? Javascript is mostly an interpreted language. So, who the hell really knows how it's implemented. Further, it's probably implemented differently by every different Javascript engine. What about in lower level languages? Unfortunately, this gets complicated fast. Exception handling is often dependent on certain hardware features of the CPU, features that we don't really have. On the 6502, the BRK instruction is the only likely candidate for raising a hardware-level exception.

However, I don't want to use the BRK instruction. And, even if we were to use it, we still have the problem of how to register exception handling routines. These are the blocks of code defined by the "catch".

Even if we can register a try scope, and its catch block, there is the further technical issue of understanding how execution flow is actually transferred out of the current routine and into the exception handling routine. To understand how this can be possible, it is necessary to better understand how subroutines actually work.

In a high level language, like Javascript, it is not possible to move the execution point without using the standard flow control constructs of the language. IF/ELSE, FOR, SWITCH, RETURN, etc. There is simply no way to get execution to bounce from inside one defined routine into the middle of some other routine. The language makes it impossible by not providing any constructs to do it. And that's good, because the result would typically become an unholy mess of spaghetti code.

In assembly though, we are working at a much lower level. From the CPU's perspective (the 6502) there are only a handful of registers. The program counter, the stack pointer, the status register, plus the accumulator, and X and Y index registers. That's it! The program counter holds a memory address for where execution currently is, and it always increments as the CPU executes, unless an instruction explicitly modifies it. From the CPU's perspective, there really is no concept of being inside a routine. Routines are purely an organizational fantasy of the programmer.

Sort of. Almost. There is also the stack. The JSR and RTS instructions (as well as interrupts and the RTI instruction) make automatic use of the stack to store and restore execution addresses. This allows the CPU to run "subroutines." But, from a certain perspective, everything is still flatly laid out. It is only the stack pointer, an 8-bit number, that indicates the call depth. But there are also instructions for manipulating the stack pointer. You have to be very careful, of course, but just think about how it works:
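The listing this walkthrough refers to is missing from the extracted page. Here is a hedged reconstruction based on the description in the following paragraphs; the label names, the coinflip routine, and the assembler syntax are my assumptions.

```asm
main        jsr getMessage   ; pushes return address, jumps to getMessage
            jsr display      ; X/Y now point at the message to show
            rts

getMessage  jsr coinflip     ; hypothetical: returns carry set for "heads"
            bcc local
            jsr getOtherMsg  ; heads: JSR deeper into another subroutine
            jmp done
local       ldx #<msg1       ; tails: X/Y -> locally stored message
            ldy #>msg1
done        jsr lowercase    ; lowercase the message (not shown)
            rts              ; back to main to display the message

getOtherMsg pla              ; pull and discard the return address back
            pla              ; to getMessage (2 bytes). Dumb, on purpose.
            ldx #<msg2       ; X/Y -> the alternative message
            ldy #>msg2
            rts              ; goes straight back to main, bypassing the
                             ; rest of getMessage (incl. the lowercase)

msg1        .null "Hello from a local message"
msg2        .null "The other message"
```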

Now, I've done some incredibly dumb things here, so this isn't how you would ever write your code; it's done like this to illustrate a point. I've also simplified away the idea of calling to a server. But there are still three routines: main, getMessage and getOtherMsg. Main starts by calling getMessage. At this point, the return address to the middle of main is automatically pushed to the stack, and execution jumps to the start of getMessage. The flip-a-coin is still there to illustrate that some alternatives exist here. If heads, we'll JSR deeper into another subroutine. Otherwise we just grab a pointer to msg1 with the X and Y registers. The last step of getMessage is to call a routine to lowercase our message (implementation not shown here), and then it returns to main to display the message.

Inside getOtherMsg, typically we would grab a pointer to an alternative message and do an RTS. The RTS would return us to getMessage, which would then JMP down to done, run the lowercase routine, and RTS again, returning the pointer to the alternative message back to main. That's the typical flow; it's what you'd expect in a high-level language.

But instead we do something way more dangerous and way less orthodox; again, it's to illustrate a point. Upon calling getOtherMsg, the return address back to the middle of getMessage is pushed onto the stack. What getOtherMsg does is pull two bytes off the stack and throw them away. Those were the two bytes that would direct the CPU back to getMessage, the calling routine. The only thing left on the stack now is the return address back to main. So we set our X/Y pointer to a message and RTS. But this RTS completely bypasses anything else that might have happened in getMessage, including the call to lowercase the message text, and delivers the X/Y pointer straight back to main to be displayed.

Is it crazy to do that? Yeah, it's pretty crazy. Especially if these routines are far apart, and even worse if they're maintained by different people. If getMessage pushes something temporary to the stack, with the expectation of pulling it back off later before doing its own RTS, then getOtherMsg's assumption that only the return address is on the stack, and hence pulling a fixed number of bytes (2) from the stack, would be catastrophic. The RTS at the end of getOtherMsg would send the CPU off to some random place and it would be game over.

But. BUT. This illustrates an important point. A routine in Javascript cannot arbitrarily decide to return to the routine that called the routine that called it. It's just not possible. Whereas in assembly, it's trivial: you just remove some bytes from the stack. Because everything is flat, and it is only the stack pointer that indicates call depth.

Exceptions in 6502

The most interesting part about our crazy manoeuvre, pulling bytes off the stack before the RTS, is that it demonstrates that execution flow can be manipulated by modifying the stack. The trick then is to find a safe and reliable way to modify the stack.

Every routine called with JSR pushes a minimum of two bytes onto the stack: the return address of the point from which the call was made. However, as mentioned above, we can't simply pull 4 bytes to skip back 2 call levels, because we can't be certain that those routines didn't push additional bytes to the stack for their own local variables. A routine must pull exactly the number of bytes from the stack that it pushed before doing its own RTS. But we can't know or predict what those intermediate routines might push. Furthermore, it's a bit of a stretch to know how many routines back we ought to return. Take the example of getOtherMsg, above. It assumes that between itself and main there is just one routine, getMessage. But if getMessage were refactored as two routines, then getOtherMsg would be 3 calls deep rather than 2, and it would have no way of knowing that.

There are a couple more twists to all of this before we talk about solutions.

The flow control mechanism of exception handling doesn't just cause a routine to return back multiple depths of calls in one step, but it redirects execution to the start of the special "catch" block within the routine that's several calls back.

And the last twist: what happens if an exception is thrown inside the catch block of a routine? As it turns out, this is fairly standard behavior. Perhaps a routine's job is to open a server connection, and then download some content and put it in a file. It opens the connection in a try block. Then it writes the data to a file with a file writing routine, but the file writing routine is inside its own try/catch. The file fails to be written, maybe the disk is out of space, or the media has been ejected, so it throws an exception. The networking code then catches it, because it needs to try to close the network connection, but it doesn't know how to handle the failed file write, so it throws the file write exception again to pass it up the chain. Somehow, try/catch blocks have to support being nested.


Now let's talk about my proposed implementation. Something that will handle all of these situations.

My proposal is that the operating system (C64 OS) support exception handling by providing 3 system routines, 2 macros to streamline their use, plus a fixed-length table for registering try/catch blocks. The table is entirely abstracted from the user, but is used by the 3 routines to support the implementation.

The three routines are: try, endtry and exception. With macros for try and endtry that allow them to be called with an argument in just one line each. Here's how it would look to code with them.
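The example listing for this section is also missing from the extracted page. The following sketch reconstructs it from the walkthrough that follows; the macro syntax, routine names like openConnection, and the carry-flag error convention are assumptions on my part.

```asm
main        #try catch         ; register catch as this try's handler
            jsr getMessage     ; abstract, unreliable; may raise
            jsr toUpperCase    ; still inside the try block
            #endtry endcatch   ; unregister, then skip the catch block

catch       ldx #<generic      ; only runs if an exception was raised:
            ldy #>generic      ; fall back to a generic message
endcatch    jsr display
            rts

getMessage  jsr coinflip       ; hypothetical: carry set = use the server
            bcc local
            jsr getServerMessage
            rts
local       ldx #<msg1
            ldy #>msg1
            rts

getServerMessage
            jsr openConnection ; returns with carry set if it failed
            bcc opened
            jmp exception      ; raise; this routine gives up control
opened      jsr messageCount   ; message count returned in X
            cpx #0
            bne have
            jmp exception      ; no messages; raise for that too
have        jsr fetchMessage   ; sets the X/Y pointer to the message
            rts
```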

This code brings back some of the complexity of trying to retrieve messages from a server. And it also assumes some behavior about how you might interact with the server code. This is completely made up for the example, but we'll just run with it to show how the exception handling works.

Starting in main, we'll get a message, display it, and the program ends. However, getting a message is an abstract and unreliable process, so we want it in a "try" block. We thus wrap the contentious code with two macros, #try and #endtry. These are both one-liners, so from a code clarity standpoint it's pretty light. But we need to define a "catch" block too: the block that will only get executed if an unhandled exception is raised after executing the #try but before executing the #endtry. The catch block is wrapped merely with two labels. I've used catch and endcatch, but those labels are arbitrary. The label at the start of the catch block is specified in the #try macro; the label at the end of the catch block is specified in the #endtry macro. If you have multiple try/catch pairs inside a single naming scope, such that the labels catch and endcatch can't be reused, the pairs can be uniquely labeled by numbering them, or whatever your style is.

I said above the catch is executed only if an "unhandled exception" occurs between #try and #endtry. What do I mean by unhandled? Inside our try block we call getMessage. getMessage could define its own try/catch, and any exception that happens downstream from that could get handled by its catch and never get propagated up to the try/catch in main.

Inside main's try it calls getMessage. getMessage does a coin flip to decide if it should get a message from a server or just return a pointer to a local message. If it's a local message, no exception can be raised and a pointer is returned. But let's walk down the path of getting a message from the server instead.

Inside getServerMessage there are a few steps, two of which are prone to failure: opening the server connection and checking to see if any messages exist. First we call openConnection. We can assume, for simplicity's sake, that this code configures some underlying state, and that if it cannot open a connection it returns with the carry set. If the carry is clear we branch to continue on. Otherwise we "JMP exception". Exception is one of the 3 system routines that implement the whole mechanism. Note that we are JMPing to exception, not JSRing; this immediately ends this routine's control of execution flow. We'll come back to how this all works below.

Assuming the connection opened, we'll call another routine to get a message count from the open connection. And we can assume this works by referencing the underlying state that was setup when the connection was opened. The number of messages is returned in the X register, which we check to see if it's zero. If it's zero, there are no messages so we JMP exception for that too. Otherwise, we call a routine to get the message, which sets a pointer to the message and we return to getMessage, which returns the pointer back to main, which is still inside the try block.

The contents of the try block continue to execute, in this case a call to toUpperCase, which we presume will produce an uppercase version of the message. Then the #endtry macro is encountered, with the endcatch label as an argument. The endcatch label is used to indicate where execution should continue next. That is, because no exception was raised, endtry unregisters the try, skips over the catch block, and carries on from the endcatch label. The message gets displayed and the program ends.

Let's consider for a moment instead what would happen if one of the two exceptions was raised in getServerMessage. Execution doesn't proceed beyond the JMP exception in getServerMessage, execution never returns to getMessage, and when it returns to main it begins at the catch label. In our simple example program the pointer gets set to a generic message and the program continues, the message gets displayed and the program ends.

How do #try, #endtry and exception work?

Above is how you would use try/catch and exception raising in your code, but it doesn't explain how they work. So, here's how they work.

The system maintains a table for registering exception handlers. Each table entry is 3 bytes, and the maximum number of entries is fixed. The length of the fixed table determines how many levels deep try/catch's can be nested inside one another. I would decide this depth arbitrarily. Just as the 6502's stack is 256 bytes big, and C64 OS's screen compositor has 4 layers, and the event model can buffer 3 mouse events, and the keyboard buffer can hold 10 characters, etc. The number of nested try/catch blocks might have a fixed limit of say, 3, or maybe 5, I don't know yet what is reasonable, but it'll be less than 10.

The #try macro pushes an entry onto the end of this table. The #endtry macro removes the last entry from this table. The question is, what does #try physically put in the table? For starters, the macros just wrap calls to the try_ and endtry_ system routines with an inline .word argument. (The trailing underscores are so the macro name and routine name don't collide.)

UPDATE: July 17, 2019

Since the time of this writing, I realized that it doesn't make sense to have the macros make calls to try_+service or endtry_+service. These would be hard references to the service module's offset. It would only be suitable for calls made by other system code. For more information about this, see the post Distributed Jumptable.

The actual implementations of try and endtry are so small that they have been moved to workspace memory. So, they are no longer part of the service module. The C64 OS booter now handles installing the exception handling routines into a much lower level part of the system. To be honest, this feels like it makes more sense anyway, as exception handling is a very low-level feature.

The call to try_ has an inline argument, which is a pointer to the catch block. This is the address where execution should be directed if an exception is raised. That's two bytes of the table entry. The third byte is the current stack pointer. Now, the JSR to try_ pushes two bytes onto the stack, and pushing moves the 6502's stack pointer down. Therefore, try_ needs to grab the current stack pointer, adjust it by two to cancel out those return-address bytes, and then insert it into the table along with the pointer to the catch block. Then try_ returns, and the code immediately following the #try macro continues to execute as normal.

Until the code encounters the #endtry macro. This is a call to the endtry_ routine with an inline pointer to the endcatch label. (As with try_, it must be a JSR, so that endtry_ can use the return address to locate its inline argument.) The endcatch label is where execution should continue, because the try has ended without an exception and we want to skip the catch block. Endtry_ therefore pulls the last entry off the exceptions table and discards it. (It can do this very simply, by just decrementing the index variable into the exceptions table.) Then it uses the inline pointer to endcatch and JMPs there. And that's it. The try block is done, so the reference to the catch block is removed from the exceptions table, and flow continues at the endcatch label. Very simple.

The final piece of the puzzle is the exception routine. When we're in a routine that encounters an error which leads to an exceptional situation, we JMP exception. Not every routine needs to raise an exception; plain error returns are still reasonable when the routine you've just called is the one that directly produces the error. Exceptions are handy so you don't need to pass that error condition back through multiple levels of routines. In our example, the getServerMessage routine makes a call to openConnection. OpenConnection returns an error status. But the abstract routine that wraps opening the connection raises an exception if openConnection returns an error. Where execution goes after that depends entirely on what is in the exceptions table.

To raise the exception we "JMP exception". The exception routine then looks at the exceptions table. It pulls the stack pointer from the exceptions table and sets the real stack pointer to the pulled value. It pulls the pointer to the catch block, then it decrements the exceptions table index variable to remove the entry from the table. And it JMPs to the catch block. And that is it. It's very clean.

Any number of intermediate routines, and any number of inline variables they may have set, are immediately rolled back from the stack simply by restoring the stack pointer to the value that's in the exceptions table. And jumping to the catch block moves execution back to the middle of that routine, with the stack ready for that routine to do its regular RTS. It's like magic.
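Putting the description above into code, a minimal sketch of the three routines might look something like the following. The zero-page pointer, the table layout, the label names, and the nesting limit are all my assumptions, not C64 OS's actual source.

```asm
maxtry  = 5               ; assumed nesting limit
extab   .fill maxtry*3    ; entries: catch lo, catch hi, saved SP
exidx   .byte 0           ; byte offset of the next free entry
ptr     = $fb             ; assumed free zero-page pointer

try_    pla               ; pull our return address; it points one
        sta ptr           ; byte before the inline catch pointer
        pla
        sta ptr+1
        tsx               ; return address already pulled, so X is now
        txa               ; the stack pointer as it was before the JSR
        ldx exidx
        sta extab+2,x     ; third byte of the entry: saved stack pointer
        ldy #1
        lda (ptr),y       ; copy the inline catch pointer
        sta extab,x       ; into the table entry
        iny
        lda (ptr),y
        sta extab+1,x
        txa
        clc
        adc #3            ; advance the table index one entry
        sta exidx
        lda ptr           ; resume past the 2-byte inline argument:
        clc               ; push ptr+2, so RTS lands at ptr+3
        adc #2
        tay
        lda ptr+1
        adc #0
        pha               ; high byte first...
        tya
        pha               ; ...then low byte
        rts               ; "returns" to the code following #try

endtry_ sec
        lda exidx
        sbc #3            ; discard the newest table entry
        sta exidx
        pla               ; return address points one byte before
        sta ptr           ; the inline endcatch pointer
        pla
        sta ptr+1
        ldy #1
        lda (ptr),y       ; load the inline endcatch pointer
        tax
        iny
        lda (ptr),y
        sta ptr+1
        stx ptr
        jmp (ptr)         ; continue at endcatch, skipping the catch

exception                 ; (a real implementation would first check
        sec               ; exidx for zero: no handler means killapp)
        lda exidx
        sbc #3            ; pop the newest entry
        sta exidx
        tax
        lda extab,x       ; catch block pointer -> ptr
        sta ptr
        lda extab+1,x
        sta ptr+1
        lda extab+2,x     ; saved stack pointer
        tax
        txs               ; unwind all intermediate calls at once
        jmp (ptr)         ; execution resumes at the catch block
```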

Nested exceptions

Now you might be wondering: this seems a bit brutal, to just wrench the flow of execution past a bunch of intermediate routines. Sure, their stack variables get unrolled, but what about static state that they manage? How does that get undone? What happens if a file gets opened, but it needs to get closed? So let's look at that example. Let's open a file, log our message, and close the file before returning the message to be displayed.
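Once more reconstructing the missing listing from the prose (openLog, closeLog, logMessage and the numbered labels are my inventions):

```asm
getMessage  jsr openLog          ; open logfileref for write/append
            #try catch2          ; register a nested handler
            jsr getServerMessage ; may raise an exception
            #endtry endcatch2

catch2      jsr closeLog         ; clean up our own static state...
            jmp exception        ; ...then re-raise, propagating to the
                                 ; next handler: the catch back in main
endcatch2   jsr logMessage       ; no exception: log the message,
            jsr closeLog         ; close the file,
            rts                  ; and return the X/Y pointer to main
```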

Once again, this is kinda dumb, I wouldn't do it like this in a real context, but it is just for the sake of demonstrating how try/catch can be nested.

As before, main has a try/catch block. This time, though, getMessage has some extra code for appending the message to a log file. The log file is specified by a log file reference, the creation of which I'm not showing. The first thing getMessage does is open the logfileref, and it does this with flags for write and for append.

The problem we now face is that if we go into getServerMessage, an exception could be raised, which would take execution back to main without giving us an opportunity to close that log file. To deal with this, we can wrap the getServerMessage in a try block. This pushes a catch block in getMessage onto the exceptions table. If getServerMessage raises an exception, the first handler found in the exceptions table is the last one pushed on. As this catch gets pulled and used, the catch in main remains in the exceptions table. The getMessage catch closes the log file, and then, before the endcatch it does a JMP exception again!

The file is safely closed. But the original exception gets propagated to the next handler, which is in main. And everything proceeds as before, no message got logged, but no files got left dangling open either. So that's pretty good.

Exception Objects

There is one thing we haven't covered yet. Usually when an exception occurs an exception object, a structure that defines the type of exception and other essential properties, gets generated and propagated through the exception handlers.

In most event models, the current event object is passed from event handler to event handler. For example, when you have a hierarchy of display nodes, nested, parent to child to child to child. The user clicks on a deeply nested child which receives the event first. But without knowing what to do with it, it passes the event to its parent. And this process continues until one of the parents has something it needs to do when it's clicked.

However, there is only ever one currently active event. And, in fact, in Javascript, even though the event is passed as an argument from node to node to node, the event object is simultaneously available as a global variable called "event". In effect the "passing" is only a passing of control; no actual event data is being copied with each pass. This is how event passing works in C64 OS as well. A toolkit widget's event handler does one of two things: if it can process the event, it reads the current event from the input module; otherwise, it calls its parent's event handler. The event object itself doesn't get copied anywhere while one widget calls the next widget's handler, all the way up the hierarchy.

This is the model I propose for passing exception objects. In C64 OS, there is only ever one current exception object at a time. The code that first encounters an error which requires it to raise an exception can point a global pointer at the exception object, just prior to the "JMP exception". An intermediate exception handler, such as in the last example above, where getMessage catches the exception from getServerMessage and uses the opportunity to close the logfile, isn't actually producing an exception itself. It just intercepts an exception, does some cleanup, and then propagates it. In this case, the catch in getMessage doesn't need to modify the global exception pointer, nor does it even need to read that object or care what it is. It does the cleanup, it does JMP exception, and boom, execution flow moves to the next registered catch.
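In code, raising an exception with an attached exception object might look like this sketch; curexc, the type code, and the struct layout are hypothetical names, not actual C64 OS definitions:

```asm
            lda #<myexc       ; point the global current-exception
            sta curexc        ; pointer at our exception struct
            lda #>myexc
            sta curexc+1
            jmp exception     ; then raise. Any catch can read curexc;
                              ; a catch that merely cleans up re-raises
                              ; without touching it at all.

myexc       .byte EXC_NETWORK ; hypothetical exception type code
            .word msgtext     ; plus whatever properties are useful
```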

Benefits of Exceptions

I searched for this question on StackOverflow, and found the following answer that I find the most compelling:

The advantage of exceptions are two fold:
  • They can't be ignored. You must deal with them at some level, or they will terminate your program. With error codes, you must explicitly check for them, or they are lost.
  • They can be ignored. If an error can't be dealt with at one level, it will automatically bubble up to the next level, where it can be. Error codes must be explicitly passed up until they reach the level where it can be dealt with.
StackOverflow — James Curran — Oct 13, 2008

Unhandled exceptions close the program. How would this manifest in C64 OS? If your code never calls #try, then no catch will be registered in the exceptions table, and the index into the exceptions table will be zero. If some code then JMPs to the exception routine, the exception routine finds that there are no registered catches, and the program is done. Before C64 OS first runs an application, it saves the stack pointer. When exception finds no registered catches, it simply JMPs to a killapp routine. This restores the stack pointer, frees the application's allocated memory, drains the event queues, possibly closes all of the open files and open network connections (I'm not sure yet how best to cleanly handle these last two), and then begins the process of loading and running the current Homebase app.

This is why exceptions can't be ignored. Any unhandled error conditions don't just leave your program in an inconsistent and unreliable state. If you don't handle an exception, the OS quits your program for you. This may be annoying, but it's better than having the whole program hang or wander off into the wilderness to die, screwing up who-knows-what-else along the way.

Another benefit: exceptions clean up the code by separating the error handling from the main logic. This is what it means that exceptions can be ignored. Intermediate routines that don't know how to handle errors, that aren't responsible for deciding what to do with an error, literally don't have to do anything related to errors. All of these routines get dramatically simplified. They can live on in a blissful world, pretending that errors don't exist. If something they call produces an error, then something that called them will deal with it. That's really great.


A Few Practical Examples

I said I'd come up with a few practical examples. But I admit it's hard to come up with anything that doesn't seem as contrived as the working examples I gave above.

The general idea is that you have a routine with a name, and its name describes the thing it is supposed to accomplish. Often a routine is designed to return a result. Sometimes the result is what you want, sometimes it's something you don't want. But in other cases it's neither what you want nor what you don't want; it's something else entirely, something more-or-less unrelated to the named purpose of the routine, because some intermediate step failed. This is the occasion for an exception.

It's kind of a judgement call, in my opinion, when an exception is appropriate. If the routine is "open a file," and the response is either "okay, the file is open" or "sorry, the file can't be opened," then to me this is not worthy of an exception. But suppose the routine is "get the current mouse speed from settings," and internally that requires finding a file on a device, opening the file, finding a setting, closing the file, validating the setting, and returning it. What you expect is a number representing the mouse speed. If what you get instead is a serial bus error, that's best dealt with by an exception, because it's not the sort of thing that can reasonably be expected to be handled by the code that made the request.

I took the time to look this up on StackOverflow again, and found a great answer.

My personal guideline is: an exception is thrown when a fundamental assumption of the current code block is found to be false.

Example 1: say I have a function which is supposed to examine an arbitrary class and return true if that class inherits from List<>. This function asks the question, "Is this object a descendant of List?" This function should never throw an exception, because there are no gray areas in its operation - every single class either does or does not inherit from List<>, so the answer is always "yes" or "no".

Example 2: say I have another function which examines a List<> and returns true if its length is more than 50, and false if the length is less. This function asks the question, "Does this list have more than 50 items?" But this question makes an assumption - it assumes that the object it is given is a list. If I hand it a NULL, then that assumption is false. In that case, if the function returns either true or false, then it is breaking its own rules. The function cannot return anything and claim that it answered the question correctly. So it doesn't return - it throws an exception.

This is comparable to the "loaded question" logical fallacy. Every function asks a question. If the input it is given makes that question a fallacy, then throw an exception. This line is harder to draw with functions that return void, but the bottom line is: if the function's assumptions about its inputs are violated, it should throw an exception instead of returning normally.

The other side of this equation is: if you find your functions throwing exceptions frequently, then you probably need to refine their assumptions.

StackOverflow — The Digital Gabeg — Nov 6, 2008

That really puts it succinctly. When you ask to open a file, there is a tacit assumption that the file may not be openable. So, it's not an exceptional situation, it's not a violated assumption, to be told that the file couldn't be opened.

The routine that gets your mouse settings, on the other hand, makes the tacit assumption that the IEC bus will work properly. If that assumption is violated, then the question the function asks cannot be answered. Boom. Raise an exception.


I haven't actually implemented this yet. But I think I'm going to. Feedback is welcome!

  1. I don't know if that's technically correct, I can't seem to find a good definition of the difference between "compute" and "calculate." But you get the idea. []
  2. More or less, I'm skipping some degree of detail here, for the sake of simplicity, and because the details vary slightly depending on the language and specific CPU. []


Greg Naçu — C64OS.com
