Should a unit test incorporate tests for error conditions? For example, what happens if the database connection fails? Recall that line-by-line debug testing should already have exercised all exception- and error-handling code at least once. So if the developer steps through all of the code, including the error handlers, there is at least some confidence that the error-handling code has been tested and works. The weakness of that approach is that it is not verifiable; the developer's word has to be taken for it. For most organizations, this is usually adequate.
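One way to make such a test verifiable is to inject the failure at the test boundary. The sketch below is illustrative, not a prescribed design: the function and its parameters (`fetch_user`, `connect`) are hypothetical, and `unittest.mock` stands in for a failed database connection so the error-handling branch actually runs.

```python
from unittest import mock

# Hypothetical production function (illustrative names): returns None
# when the database connection fails instead of propagating the error.
def fetch_user(connect, user_id):
    try:
        conn = connect()
        return conn.query("SELECT * FROM users WHERE id = ?", user_id)
    except ConnectionError:
        return None  # error-handling branch we want coverage for

# The unit test forces the error path by injecting a connect() that
# raises -- no production code or environment is modified.
def test_connection_failure_returns_none():
    failing_connect = mock.Mock(side_effect=ConnectionError("db down"))
    assert fetch_user(failing_connect, 42) is None

test_connection_failure_returns_none()
```

Because the failure is injected through a parameter the code already accepts, the test documents the error behavior in a repeatable way rather than relying on the developer's word.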
For organizations that require 100% verified unit test coverage, developers must find ways to make the software fail in order to prove that the error-handling code works properly. I question the cost/benefit ratio of unit tests that exercise error handlers. The difficulty is that it is often hard, and sometimes close to impossible, to trigger error conditions without modifying the code or the environment; and once the code or environment is changed, it can be argued that the actual production code is no longer being tested. The only value of testing error handlers is that the actual error-handling code can be verified as correct.
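Test-time patching is a middle ground: the production code and the environment stay untouched, and the substitution exists only for the duration of the test. A minimal sketch, assuming a hypothetical `load_config` function that falls back to defaults when the file cannot be read:

```python
from unittest import mock

# Hypothetical loader (illustrative names): returns a default value when
# the file cannot be read -- that fallback is the handler to verify.
def load_config(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        return "defaults"  # error-handling branch

# Patch open() only while the test runs: the production source is not
# modified, yet the error path executes and its result is asserted.
with mock.patch("builtins.open", side_effect=OSError("disk error")):
    result = load_config("app.cfg")
assert result == "defaults"
```

Whether this counts as "testing the real code" is exactly the debate above: the handler's logic is verified, but the failure itself is simulated rather than real.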
For organizations that insist on 100% code coverage, the easiest approach to testing the error-handling code might be to include a test function within the class that throws the appropriate error when called. The drawback is that the code has to be modified to call this function. A better way may be for the developer to find a way to make the code fail without modifying any code or the environment. For example, if an exception is thrown when opening a file that does not exist, it is fairly easy to pass in a file name that does not exist or is locked for exclusive access. Other cases are not so easy, however, and devising a genuine test that makes the software fail can be problematic, time-consuming, and costly.
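The file-name example above is the cheap case: the real environment produces the failure, with no mocks and no code changes. A short sketch, with illustrative names, of what that test looks like:

```python
import os
import tempfile

# Hypothetical function under test: returns None when the file is
# missing, which is the error handler we want to exercise.
def read_file(path):
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return None  # error-handling branch

# Build a path that should not exist, then let the real environment
# raise the exception -- nothing is mocked or modified.
missing = os.path.join(tempfile.gettempdir(), "no-such-file-d41d8cd9.txt")
assert not os.path.exists(missing)
assert read_file(missing) is None
```

Failures like a full disk, a dropped network connection, or exhausted memory cannot be provoked this cheaply, which is where the cost/benefit question becomes serious.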