This post is part of my collaborative research with Shinsei Bank on highly-evolvable enterprise software. It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license. I am indebted to Jay Dvivedi and his team at Shinsei Bank for supporting this research. All errors are my own.
In an earlier post, I observed in a footnote that Jay is more interested in detecting errors than in guaranteeing correct output, because once an error has been detected, the problem can be contained and the cause of the error can be identified and eliminated. I suggested that this approach is far easier than guaranteeing correctness, the problem tackled by some research on reliable computing. I recently had an opportunity to speak with Dr. Mary Loomis about this research, and she was intrigued by this idea and encouraged me to explore it further. At this point, I’m not aware of any academic work on the topic, but please let me know if you know of any academic papers that might be relevant.
Anyway, I thought it might be interesting to state the idea more precisely with the aid of a toy model. Let’s assume that we need to perform a computation, and that the cost of acting on an incorrect output is unacceptably high, while the cost of inaction or delaying action is relatively low. This might be the case for a decision to make a loan: making a loan to an unqualified borrower could be very costly, while turning away a potential borrower or delaying disbursement of the loan carries a far smaller opportunity cost. Let us further assume that any system component we deploy will be unreliable, with a probability e of producing incorrect output.
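To make the asymmetry concrete, here is a minimal simulation sketch of the toy model (the component and its duplication strategy are my own illustrative construction, not anything from Jay's systems): each component produces a wrong answer with probability e, and rather than trying to make any one component correct, we run two independent components and act only when their outputs agree. A disagreement is a detected error, and we take the cheap option of withholding action.

```python
import random

E = 0.1  # the model's error probability e for a single component

def unreliable_add(a, b, e=E):
    """A toy unreliable component: usually returns a + b,
    but with probability e returns an incorrect output."""
    if random.random() < e:
        return a + b + random.choice([-1, 1])  # corrupted result
    return a + b

def checked_add(a, b, e=E):
    """Run two independent copies of the component and compare.
    Agreement -> act on the result; disagreement -> return None,
    i.e. detect the error and withhold action (the cheap outcome)."""
    r1 = unreliable_add(a, b, e)
    r2 = unreliable_add(a, b, e)
    return r1 if r1 == r2 else None

# Estimate how often we act on a wrong answer versus merely delay.
random.seed(0)
trials = 100_000
wrong_actions = 0  # acted on an incorrect output (the costly case)
delays = 0         # error detected, action withheld (the cheap case)
for _ in range(trials):
    r = checked_add(2, 3)
    if r is None:
        delays += 1
    elif r != 5:
        wrong_actions += 1
```

Under the (strong) assumption that the two components fail independently, acting on a wrong answer requires both to err identically, which happens with probability on the order of e², here about 0.005 versus e = 0.1 for a single component. The price is a much larger fraction of runs (roughly 18% here) that end in detection and delay rather than action, which is exactly the trade the model's cost assumptions favor.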