Category Archives: Highly-evolvable enterprise software

Deletion and the blind sweeper

This post is part of my research on highly-evolvable enterprise architectures.  It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license.  I am indebted to Jay Dvivedi for sharing with me the ideas developed here.  All errors are my own.

One challenge in designing enterprise systems is how to think about the deletion of records.  I discussed the issue with Jay yesterday, and he had several suggestions.

First, to prevent fraud, sabotage, and accidents, the ability to delete records should be restricted to a very small group of senior personnel or forbidden entirely.  When a person attempts to delete a record, the system should require approval from another person.

Second, the system must be designed to cleanse itself of unneeded records that contain potentially damaging confidential information.  These situations occur frequently in service industries where companies handle confidential information belonging to their customers.  For example, a bank or health insurance firm handles personal information about customers, and a management consulting firm handles confidential information belonging to client companies.  Such information must be destroyed when it is no longer needed.

The system should function much like a blind sweeper that cleans conference rooms after meetings end and the participants have left.  The sweeper cannot access the content of any forgotten or abandoned papers, so he simply burns them all.  Such a system must keep track of who needs the information.  Each of these people can be thought of as a person in the conference room.  Only when all the people have left can the blind sweeper enter and destroy the information in the room.  To ensure consistency, destruction of information should occur in two phases.  First, the information to be destroyed is isolated.  Then, when all information to be destroyed has been accounted for, the destruction should take place.

The blind sweeper approach is similar to garbage collection in software, where references to a piece of information lock it in place.  When there are no references to a piece of information, the system deletes it.
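
To make the mechanism concrete, here is a minimal sketch of a blind sweeper, assuming a simple reference-counting registry; the class and method names are my own, not drawn from Shinsei’s systems.  The sweeper never reads the payloads it destroys, and destruction happens in the two phases described above: records are first isolated, then destroyed together.

```python
class ConfidentialStore:
    """Tracks who still needs each record; sweeps blindly once the room is empty."""

    def __init__(self):
        self._records = {}      # record_id -> opaque payload (the sweeper never reads it)
        self._occupants = {}    # record_id -> set of parties still using the record
        self._to_destroy = set()

    def register(self, record_id, payload, parties):
        self._records[record_id] = payload
        self._occupants[record_id] = set(parties)

    def release(self, record_id, party):
        """A party leaves the conference room."""
        self._occupants[record_id].discard(party)
        if not self._occupants[record_id]:
            # Phase 1: isolate the record; nothing is destroyed yet.
            self._to_destroy.add(record_id)

    def sweep(self):
        """Phase 2: once all doomed records are accounted for, destroy them together."""
        for record_id in self._to_destroy:
            del self._records[record_id]
            del self._occupants[record_id]
        self._to_destroy.clear()
```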

Applying industrial engineering to information technology

This post is part of my collaborative research with Shinsei Bank on highly-evolvable enterprise software.  It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license.  I am indebted to Shinsei Bank for supporting this research and to Jay Dvivedi for mentoring me in the art of enterprise systems.  All errors are my own.

The fourth edition of Herbert Simon’s Administrative Behavior contains a brief section titled “Applying information technology to organization design”.  In the industrial age, Simon says, organization theory was concerned mainly with how to organize for the efficient production of tangible goods.  Now, in our post-industrial society, problems of physical production have diminished in importance; the new challenge is how to organize for effective decision-making.  Simon characterizes the challenge as follows:

The major problems of organization today are not problems of departmentalization and coordination of operating units.  Instead, they are problems of organizing information storage and information processing–not division of labor, but factorization of decision-making.  These organizational problems are best attacked, at least to a first approximation, by examining the information system and the system of decisions it supports in abstraction from agency and department structure. (1997, 248-9)

In essence, Simon proposes that we view the organization as a system for storing and processing information–a sort of computer.  Extending the computer metaphor, organizations execute software in the form of routines (March and Simon call them performance programs).  Department structure, like the configuration of hardware components in a computer, has some relevance to the implementation of decision-making software, but software architects can generally develop algorithms without much concern for the underlying hardware.

Parity and reconciliation

This post is part of my collaborative research with Shinsei Bank on highly-evolvable enterprise architectures.  It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license.  I am indebted to Jay Dvivedi and his team at Shinsei Bank for sharing with me the ideas developed here.  All errors are my own.

The systems in a large bank handle huge volumes of transactions collectively worth many millions or billions of dollars, so errors or sabotage can cause serious damage in short order. For this reason, Jay often compares running bank systems to flying jumbo jets. To make matters worse, the jets must be maintained and modified in flight. Jay’s approach, which he demonstrated to great effect when he successfully migrated Shinsei from mainframes to inexpensive servers, focuses on building up new systems one component at a time in parallel with existing systems and achieving functional parity. Once parity has been reached, the old systems can be shut down.
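
One way to picture the parity requirement is a parallel run in which the new component shadows the old one and every discrepancy is flagged for reconciliation before cutover.  The sketch below is my own illustration of that idea, not Shinsei’s code; the function and parameter names are assumptions.

```python
def parity_run(transactions, old_system, new_system):
    """Feed the same transactions to both systems and collect any mismatches.

    `old_system` and `new_system` are callables mapping a transaction to a result;
    the old system remains authoritative until the mismatch list stays empty.
    """
    mismatches = []
    for txn in transactions:
        expected = old_system(txn)   # still the system of record
        candidate = new_system(txn)  # shadow run; results are not acted upon
        if candidate != expected:
            mismatches.append((txn, expected, candidate))
    return mismatches

# Cutover rule: only when repeated parity runs return no mismatches
# can the old component be shut down.
```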

Ensuring correctness vs. detecting wrongness

This post is part of my collaborative research with Shinsei Bank on highly-evolvable enterprise software.  It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license.  I am indebted to Jay Dvivedi and his team at Shinsei Bank for supporting this research.  All errors are my own.

In an earlier post, I observed in a footnote that Jay is more interested in detecting errors than in guaranteeing correct output, because once an error has been detected, the problem can be contained and the cause of the error can be identified and eliminated.  I suggested that this approach is far easier than the problem of guaranteeing correctness tackled in some research on reliable computing.  I recently had an opportunity to speak with Dr. Mary Loomis about this research, and she was intrigued by this idea and encouraged me to explore it further.  At this point, I’m not aware of any academic work on the topic, but please let me know if you know of any academic papers that might be relevant.

Anyway, I thought it might be interesting to state the idea more precisely with the aid of a toy model.  Let’s assume that we need to perform a computation, and that the cost of acting on an incorrect output is unacceptably high, while the cost of inaction or delaying action is relatively low.  This might be the case for a decision to make a loan: making a loan to an unqualified borrower could be very costly, while turning away a potential borrower or delaying disbursement of the loan carries a far smaller opportunity cost.  Let us further assume that any system component we deploy will be unreliable, with a probability e of producing incorrect output.
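
To state the intuition a little more formally, here is a hedged sketch of the toy model: run the computation through two independently built components and act only when their outputs agree.  The redundancy scheme, the independence assumption, and the names below are mine, not part of the original argument.

```python
def act_or_delay(compute_a, compute_b, inputs):
    """Run two independently built components and act only when they agree.

    If each component errs with probability e, and errors are independent and
    unlikely to coincide on the same wrong answer, the chance of acting on a
    wrong output falls to roughly e**2, while the chance of a (cheap) delay is
    roughly 2*e.  Detecting wrongness requires only a comparison; guaranteeing
    correctness would require proving each component right.
    """
    a, b = compute_a(inputs), compute_b(inputs)
    if a == b:
        return ("act", a)        # outputs agree: act on the result
    return ("delay", None)       # disagreement detected: contain and investigate
```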

Finishing information goods

This post is part of my collaborative research with Shinsei Bank on highly-evolvable enterprise architectures.  It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license.  I am indebted to Jay Dvivedi and his team at Shinsei Bank for sharing with me the ideas developed here.  All errors are my own.

When I introduced the idea of information assembly lines, I noted Jay’s emphasis on separating work-in-progress from completed work as a distinguishing characteristic of the architecture:

Just like an assembly line in a factory, work-in-progress exists in temporary storage along the line and then leaves the line when completed.

This sounds straightforward enough, but it turns out to have some profound implications for the way we frame information in the system.  In order to clearly separate work-in-progress from finished goods, we need to shift our conceptualization of information.  Instead of seeing an undifferentiated store of variables that might change at any time (as in a database), we must distinguish between malleable information on the assembly line and a trail of permanent, finished information goods.  We might imagine the final step of the assembly line to be a kiln that virtually fires the information goods, preventing them from ever changing again.  To underscore the point: finished goods are finished; they will never be worked on or modified again, except perhaps when they become damaged and require some repair to restore them to their proper state.

Separation of work-in-progress from finished goods allows us to divide the enterprise software architecture into two separate sub-problems: managing work-in-progress, and managing completed goods.  Managing work-in-progress is challenging, because we must ensure that all the products are assembled correctly, on time, and without accidental error or sabotage.  Fortunately, however, on a properly designed assembly line, the volume of work-in-progress will be small relative to the volume of completed goods.

Managing completed goods is much simpler, though the volume may be extremely large.  Since completed goods cannot be modified, they can be stored in a write-once, read-many data store.  It’s much easier to maintain the integrity and security of such a data store, since edit operations need not even exist.  Scaling is also easy: when a data store fills, just add another.  Because existing records are never modified, there is no interdependence between new and existing records (dependence can run only one way, from existing to new).  Finally, access times are likely to matter much less for completed goods than for work-in-progress.
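
As an illustration, a write-once, read-many store can be sketched in a few lines; the interface below is my own and deliberately offers no update or delete operation.

```python
class FinishedGoodsStore:
    """Append-only repository for finished information goods.

    There is deliberately no update or delete: once a record has been 'fired in
    the kiln', the store only ever reads it back.  When one store fills up, a
    new one is opened alongside it; existing records never depend on new ones.
    """

    def __init__(self, capacity):
        self._capacity = capacity
        self._records = []

    def append(self, record):
        if len(self._records) >= self._capacity:
            raise OverflowError("store is full; open a new store instead")
        self._records.append(record)
        return len(self._records) - 1   # position serves as a permanent id

    def read(self, index):
        return self._records[index]
```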

The idea sounds attractive in principle, but how can this design cope with an ever-changing world?  A simple example shows how shifting our perspective on information makes this architecture possible.  Many companies have systems that keep track of customers’ postal addresses.  Of course, these addresses change when customers move.  Typically, addresses are stored in a database, where they can be modified as necessary.  There is no separation between work-in-progress and completed goods.

Information assembly lines solve the problem differently.  A customer’s postal address is not a variable, but rather our record of where the customer resides for a period of time.  Registering a customer address is a manufacturing process involving a number of steps: obtaining the raw information, parsing it into fields, perhaps converting values to canonical forms, performing validity checks, etc.  Once the address has been assembled and verified, it is time-stamped, packaged (perhaps together with a checksum), and shipped off to an appropriate finished goods repository.  When the customer moves, the address in the system is not changed; we simply manufacture a new address.

The finished goods repository contains all the address records manufactured for the customer, and each record includes the date that the address became active.  When retrieving the customer’s address, we simply select the address that became active most recently.  If an address is manufactured incorrectly, we manufacture a corrected address.  Thus instead of maintaining a malleable model of the customer’s location, we manufacture a sequence of permanent records that capture the history of the customer’s location.
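
A hypothetical rendering of the address example might look like the following; the field names and validation steps are assumptions on my part, and the packaging and checksum steps are omitted for brevity.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)          # frozen: the finished record can never be modified
class AddressRecord:
    customer_id: str
    postal_code: str
    street: str
    effective_from: date         # the date the address became active

def manufacture_address(customer_id, raw, effective_from):
    """Assemble and verify an address, then 'fire' it as an immutable record."""
    postal_code = raw["postal_code"].strip()
    street = " ".join(raw["street"].split())        # canonicalize whitespace
    if not postal_code:
        raise ValueError("validation failed: missing postal code")
    return AddressRecord(customer_id, postal_code, street, effective_from)

def current_address(records, customer_id):
    """Retrieve the address that became active most recently for this customer."""
    candidates = [r for r in records if r.customer_id == customer_id]
    return max(candidates, key=lambda r: r.effective_from, default=None)
```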

In this way, seeing information from a different perspective makes it possible to subdivide the enterprise software problem into two loosely-coupled and significantly simpler subproblems.  And cleverly partitioning complex problems is the first step to rendering them tractable.

Data warehousing vs. information assembly lines

This post is part of my collaborative research with Shinsei Bank on highly-evolvable enterprise architectures.  It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license.  I am indebted to Jay Dvivedi and his team at Shinsei Bank for sharing with me the ideas developed here.  All errors are my own.

Reporting and analysis are important functions for enterprise software systems.  Information assembly lines handle these functions very differently from data warehousing, and I think the contrast may help clarify the differences between Jay’s approach and traditional design philosophies.  In brief, data warehousing attempts to build a massive library where all possibly useful information about a company is readily available–the ideal environment for a business analyst.  By contrast, information assembly lines manufacture reports and analyses “just-in-time” to meet specific business needs.  One might say that data warehousing integrates first and asks questions later, while information assembly lines do just the opposite.

The Wikipedia article on data warehouses illustrates this emphasis on comprehensive data integration.  Data warehouses seek a “common data model for all data of interest regardless of the data’s source”.  “Prior to loading data into the data warehouse, inconsistencies are identified and resolved”.  Indeed, “much of the work in implementing a data warehouse is devoted to making similar meaning data consistent when they are stored in the data warehouse”.

There are at least two big problems with the data warehousing approach.

First, since data warehousing integrates first and asks questions later, much of the painstakingly integrated data may not be used.  Or they may be used, but not in ways that generate sufficient value to justify the cost of providing the data.  The “build a massive library” approach effectively rules out granular investment decisions based on the return from generating specific reports and analyses.  To make matters worse, since inconsistencies may exist between any pair of data sources, the work required to identify and resolve inconsistencies will likely increase with the square of the number of data sources (with n sources, there are n(n-1)/2 pairs that may disagree).  That sets off some alarm bells in a computer scientist’s brain: in a large enterprise, data warehousing projects may never terminate (successfully, that is).

Second, data warehousing ignores the relationship between the way data are represented and the way they are used.  I was introduced to this problem in my course on knowledge-based systems at MIT, where Professor Randall Davis emphasized the importance of choosing knowledge representations appropriate to the task at hand.  Predicate logic may be a great representation for reasoning about mathematical conjectures, but it may prove horribly cumbersome or even practically unusable for tasks such as finding shortest paths or detecting boundaries in images.  According to Davis and his colleagues, awareness of the work to be done (or, as Jay would say, the context) can help address the problem: “While the representation we select will have inevitable consequences for how we see and reason about the world, we can at least select it consciously and carefully, trying to find a pair of glasses appropriate for the task at hand.”1

The problem, then, is that data warehousing does not recognize the importance of tuning data representations to the task at hand, and thus attempts to squeeze everything into a single “common data model”.  Representations appropriate for analyzing user behavior on web sites may be poorly suited to searching for evidence of fraud or evaluating possible approaches to customer segmentation.  Consequently, data warehousing initiatives risk expending considerable resources to create a virtual jack-of-all-trades that truly satisfies no one.

The information assembly lines approach focuses on manufacturing products–reports or analyses–to satisfy the needs of specific customers.  In response to a need, lines are constructed to pull the data from where they live, machine the data as necessary, and assemble the components.  Lines are engineered, configured, and provisioned to manufacture specific products or product families, so every task can be designed with an awareness of how it serves the product’s intended purpose.
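
A line built for one specific report might then look like a chain of small, reusable machines.  The stations below and the report itself are invented for illustration, but the sketch shows how each task is designed with the finished product in view.

```python
def build_line(*stations):
    """Compose workstations into an assembly line for one specific product."""
    def line(raw_material):
        product = raw_material
        for station in stations:
            product = station(product)   # each station does one simple, routine task
        return product                   # the finished report rolls off the line
    return line

# Hypothetical stations for a monthly exposure report; each is simple enough
# to be reused on other lines.
def parse_rows(raw_lines):
    return [dict(zip(("loan_id", "amount", "fx_rate"), ln.split(","))) for ln in raw_lines]

def to_common_currency(rows):
    return [{**r, "amount_jpy": float(r["amount"]) * float(r["fx_rate"])} for r in rows]

def assemble_report(rows):
    return {"total_exposure_jpy": sum(r["amount_jpy"] for r in rows), "detail": rows}

monthly_exposure_line = build_line(parse_rows, to_common_currency, assemble_report)
report = monthly_exposure_line(["L001,1000,1.0", "L002,500,110.0"])
```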

If the data required for business decision-making change very slowly over time and the conceivable uses for the data are relatively stable and homogeneous, then developing a unified data model and building a data warehouse may make sense.  Needless to say, however, these are not the conditions faced by most enterprises: the data environment evolves rapidly, and different parts of the business require varied and ever-changing reporting and analysis capabilities.

It’s actually kind of hard to see why anyone (other than system vendors) would choose the data warehousing approach.  Information assembly lines are modular, so they can be constructed one at a time, with each line solving a specific problem.  Performance criteria are well-defined: do the products rolling off the line match the design?  Since information assembly lines decompose work into many simple, routine tasks, tools developed for use on one line (a machine that translates data from one format to another, for example) will likely be reusable on other lines.  Thus the time and cost to get a line up and running will decrease over time.

1 Davis, Shrobe & Szolovits, “What is a Knowledge Representation?”.  This paper provides an insightful introduction to the problem.

Comments policy

I’m new to posting my research findings in a blog format, and I’m finding that the format results in some interesting challenges, including how to handle comments.  Although I am very grateful to commenters for raising questions and suggesting alternative perspectives, some kinds of comments are not appropriate and will not be accepted.

  • Ad hominem attacks.  This is a blog about ideas, so feel free to criticize ideas, but not individuals.
  • Non-public information concerning the matters I am investigating.  Online discussions are well-suited for debating ideas, but they are methodologically problematic as a mechanism for gathering empirical evidence.

If in doubt, you may contact me directly at djb@davidjamesbrunner.org

The actual and the possible

This post is part of my collaborative research with Shinsei Bank on highly-evolvable enterprise software.  It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license.  I am indebted to Jay Dvivedi and his team at Shinsei Bank for supporting this research.  All errors are my own.

Judging from comments on earlier posts, there seems to be some confusion about whether these posts articulate design principles that Shinsei seeks to follow or describe how Shinsei’s systems actually work in practice.  Jay has mentioned to me on several occasions that he believes current implementations capture only “15%” (I think the number should be understood as an impressionistic estimate rather than a precise value) of the design principles.  From studies of specific systems at Shinsei, I’ve observed implementations that adhere to these design principles to greater or lesser degrees, but none that could be characterized as complete, reference implementations.

Thus, the ideas developed here should be understood as a sketch of systems as they might be (and, at least in some cases, are in the process of becoming), rather than as descriptive research about existing systems.  That said, audited cost estimates suggest that Shinsei may be capable of building systems at a small fraction (perhaps less than one tenth) of the cost of traditional vendors, and I think it’s likely that these architectural principles–even if only partially implemented–contribute to this performance differential.

In conclusion, then, this research falls squarely in the domain of theory building rather than theory testing.  Ultimately, the value of such research is perhaps a metaphysical question; I refer skeptics to my defense here.

The event processing perspective

This post is part of my collaborative research with Shinsei Bank on highly-evolvable enterprise software.  It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license.  I am indebted to Jay Dvivedi and his team at Shinsei Bank for supporting this research.  All errors are my own.

When I adopted the information factory metaphor, I found that Caltech professor K. Mani Chandy had been there first.  Chandy and his colleagues have written a number of papers and a book1 about system architectures, and I’d like to introduce his perspective here and draw some connections to the theory that I’m developing.  Chandy and his colleagues propose a typology of system architectures based on the way that subsystems interact.  They distinguish three modes of interaction2:

  • Time-driven or schedule: “Groups of components interact at scheduled times.”
  • Request-driven or pull: “A component requests information from other components, which then reply to the requests.”
  • Event-driven or push: “A component sends information to other components when it discovers state changes relevant to its listeners.”

According to Chandy, time-driven and request-driven architectures are relatively common, while event-driven architectures are relatively new to the enterprise computing landscape.  In their book, Chandy and Schulte give two reasons for the rising interest in event-driven systems: first, “an explosion of event ‘streams’ flowing over corporate networks” facilitates programmatic access to events, and, second, “Companies today are operating at a faster pace, so early notification of emerging business threats and opportunities is increasingly important” (xii).

On a practical level, event-driven systems encounter several difficulties.  First, what constitutes an event?  Chandy et al. posit that “An event is a significant change in the state of the universe. A significant state change is one for which an optimal response by the system is to take an action.”  This definition makes events sound objective, but as students of Simon we know that an organization (or any other complex system operating in the real world) can never determine an optimal response to any situation.  Significant changes are therefore in the eye of the beholder: an event is a change in the state of the universe in response to which a system believes that action should be taken.

This leads to a second difficulty.  If our systems are to be decentralized, how can we ensure consistent interpretation of changes in the universe?  One component’s event might be another’s noise.  This, I think, is why Chandy et al. note the importance of shared models:

An agent that is responsible for initiating a response takes action based on its estimates of the state of the universe. Its estimates are based on a model of the universe which, in turn, is based on models that it shares with agents that provide it with information.

If there were objective criteria for defining events, shared models would not be necessary: all (properly functioning) system components would agree on which changes in the state of the universe represent events (to which the system should respond).  Since no such objective criteria exist, components must possess models sufficiently similar to yield complementary behavior.  As enterprise systems become increasingly complex, I suspect that the problem of inconsistent models will pose ever-larger challenges, since this is essentially the old organizational problem of differentiation and integration playing out in a new domain3.

So are Shinsei’s information assembly lines event processing systems?  I think that they may be, but they differ in an important way from the systems described by Chandy and Schulte.  Many of Shinsei’s systems are event driven, in the sense that events (e.g., customer requests for banking services) trigger the production of an information product (e.g., a credit decision or a funds transfer).  In the language of the factory metaphor, Shinsei manufactures custom products on demand rather than standard products in advance of demand.

Shinsei does not, however, seem to have a role for abstract events-as-messages that distribute information to listeners.  Instead, events propagate along the assembly line rather than through meta-level communication channels.  The manufacturing metaphor may help clarify the distinction.  Consider the management of parts inventory on an assembly line.

The event-driven architecture described by Chandy and Schulte would use a sensor to detect when the supply of parts at a workstation declines below a certain level.  The sensor would respond by generating an “almost out of parts” event, which would be sent to the factory’s order management department, which would respond to the event by placing an order with the appropriate supplier.  Two distinct interaction channels can be distinguished: a physical channel composed of trucks and parcels, and a communication channel composed of sensors and computer networks.

By contrast, Shinsei’s approach resembles the Japanese kanban system.  The physical channel for movement of parts is designed to trigger the replenishment of inventory, so no “out-of-band” communication is necessary.  Events are detected and handled implicitly through the design of the process.  In this approach, there is only one interaction channel.
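
The contrast might be caricatured in code as follows; both snippets are my own illustration rather than anything taken from Chandy and Schulte’s book or from Shinsei’s systems.

```python
# Explicit event channel (Chandy & Schulte style): a sensor publishes an
# "almost out of parts" event on a separate communication network, and a
# listener in the order-management department reacts to it.
def check_bin_and_publish(bin_level, reorder_point, publish):
    if bin_level < reorder_point:
        publish({"type": "ALMOST_OUT_OF_PARTS", "level": bin_level})

def order_management_listener(event, place_order):
    if event["type"] == "ALMOST_OUT_OF_PARTS":
        place_order()

# Kanban style: taking the last container *is* the reorder signal;
# replenishment is triggered in-band by the same channel that moves the parts.
def take_container(bin_containers, place_order):
    container = bin_containers.pop()
    if not bin_containers:    # the empty bin itself travels back as the order card
        place_order()
    return container
```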

This distinction, between systems that internalize event handling and those that handle events explicitly using dedicated communication networks external to the underlying process, may be a useful dimension for classifying enterprise system architectures.

The relative effectiveness of the two approaches may be an empirical question, and it could depend on the goals of the system.  For straightforward business processes, explicitly modeling events and creating a communication network to detect and process them seems likely to add unnecessary complexity, which could render systems more difficult to modify and ultimately decrease agility.  However, if businesses want to run data fusion algorithms over event streams (Chandy and Schulte use the term “complex event processing”), then it may be necessary to have explicit event representations and dedicated event processing systems.  Hybrids of the two models may also be possible.

1 An abridged edition is available free of charge from Progress Software.  All page number references are to the abridged edition.

2 Definitions from Chandy et al., “Towards a Theory of Events”, 2007.

3 When organizations develop differentiated subunits, the decision-makers in these subunits tend to focus on achieving subunit goals.  This reduces complexity, but often results in suboptimization.  I can see no reason why enterprise systems should not suffer from similar problems.

Context as constraints in search space

This post is part of my collaborative research with Shinsei Bank on highly-evolvable enterprise architectures.  It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license.  I am indebted to Jay Dvivedi and his team at Shinsei Bank for sharing with me the ideas developed here.  All errors are my own.

It’s hard to have a conversation with Jay without the issue of “context” coming up–often in a sharp rebuke along the lines of “the problem with him/her/them/it/you is that the context is missing”.  Clearly context plays a critical role in Jay’s design philosophy, but I’ve had considerable difficulty understanding what this important, easily-overlooked ingredient actually is.  In the last iteration of my theory, I equated context with elemental subsystems where tasks are performed (in this iteration, I’ve relabeled these elemental subsystems workstations and defined them slightly differently).  I’m not sure the context-as-elemental-subsystem formulation is entirely misguided, but I now think it’s only halfway correct (at best) and possibly misleading.  So, in this post, I take another stab at articulating the meaning of context with respect to enterprise software architecture.

To describe the concept of context, Jay often uses the example of a computer booting up and recognizing its configuration.  The operating system detects the processor, memory, storage devices, network interfaces, and other peripherals that define its physical structure.  Thus the operating system becomes aware of its context; that is, the environment within which the system operates.

During my last visit to Shinsei, I had a discussion with Pieter Franken that helped clarify the role of context in enterprise systems.  Pieter walked me through the architecture of the funds transfer system.  The behavior of this system, he explained, must be decided within a sequence of environmental constraints.  First, there are the rules and regulations that define the space of allowable transfers.  Then, there are the restrictions on the sender, which depend on the characteristics of the sender and the bank’s relationship with him or her.  Next, there are restrictions on the recipient, similarly contingent on the recipient’s identity and relationship to the bank.  Finally, there are restrictions associated with a given transfer depending on characteristics of the transaction.  These constraints, Pieter explained to me, represent the context within which an actual transfer of funds takes place (or not).

Thus the funds transfer process must “boot up” much like a computer, becoming aware of its context by recognizing first the regulatory environment within which it operates, then the sender who seeks to invoke the process, then the proposed recipient, and then the characteristics of the transaction.  Only once the process has constructed an orderly model of its environment is it prepared to carry out the work.

Defined in this way, context represents a set of constraints on the action space within which a system operates.  In the case of an operating system, awareness that the computer is not connected to a wired network invalidates a large number of possible actions, such as sending or receiving data over the wired network interface.  Similarly, in the case of the funds transfer system, awareness that regulations forbid transfers to a particular country rules out actions associated with performing such transfers.
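
One way to render this in code is to treat context as an ordered series of constraint layers that prune the action space before any work is attempted.  The layering follows Pieter’s funds-transfer example, but the action names, the embargo list, and the overall interface are my own assumptions.

```python
def build_context(transfer, layers):
    """Apply constraint layers in order (regulation, sender, recipient, transaction).

    Each layer takes the transfer and the set of still-allowed actions and
    returns a smaller set.  If the set becomes empty, the work is refused
    before it ever starts, rather than caught by filters afterwards.
    """
    allowed = {"execute_transfer", "schedule_transfer", "refer_to_compliance"}
    for layer in layers:
        allowed = layer(transfer, allowed)
        if not allowed:
            return set()
    return allowed

# Hypothetical regulatory layer: embargoed destinations invalidate execution outright.
def regulatory_layer(transfer, allowed):
    if transfer["destination_country"] in {"XX"}:   # placeholder embargo list
        return allowed - {"execute_transfer", "schedule_transfer"}
    return allowed
```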

Restricting the action space has several possible benefits.  First, the system can figure out what to do more easily, because it searches for an appropriate action within a smaller and simpler space.  Second, errors and rework may be avoided, since invalid actions are ruled out a priori rather than caught after the fact by downstream filters.  Third, simplifying the action space reduces the number of potential security vulnerabilities.

It seems, then, that awareness of context may be understood as the possession of a well-structured model of how environmental conditions constrain or otherwise influence the space of actions available to the system.