
Finishing information goods

This post is part of my collaborative research with Shinsei Bank on highly-evolvable enterprise architectures.  It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license.  I am indebted to Jay Dvivedi and his team at Shinsei Bank for sharing with me the ideas developed here.  All errors are my own.

When I introduced the idea of information assembly lines, I noted Jay’s emphasis on separating work-in-progress from completed work as a distinguishing characteristic of the architecture:

Just like an assembly line in a factory, work-in-progress exists in temporary storage along the line and then leaves the line when completed.

This sounds straightforward enough, but it turns out to have some profound implications for the way we frame information in the system.  In order to clearly separate work-in-progress from finished goods, we need to shift our conceptualization of information.  Instead of seeing an undifferentiated store of variables that might change at any time (as in a database), we must distinguish between malleable information on the assembly line and a trail of permanent, finished information goods.  We might imagine the final step of the assembly line to be a kiln that virtually fires the information goods, preventing them from ever changing again.  To underscore the point: finished goods are finished; they will never be worked on or modified again, except perhaps if they become damaged and require some repair to restore them to their proper state.

Separation of work-in-progress from finished goods allows us to divide the enterprise software architecture into two separate sub-problems: managing work-in-progress, and managing completed goods.  Managing work-in-progress is challenging, because we must ensure that all the products are assembled correctly, on time, and without accidental error or sabotage.  Fortunately, however, on a properly designed assembly line, the volume of work-in-progress will be small relative to the volume of completed goods.

Managing completed goods is much simpler, though the volume may be extremely large.  Since completed goods cannot be modified, they can be stored in a write-once, read-many data store.  It’s much easier to maintain the integrity and security of such a data store, since edit operations need not even exist.  Scaling is easy: when a data store fills, just add another.  Because existing records are never modified, new records cannot become entangled with existing ones; dependence can run only one way (new records may build on existing ones, but existing records never come to depend on new ones).  Also, access times are likely to be much less important for completed goods than for work-in-progress.
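To make this concrete, here is a minimal sketch of such a write-once, read-many store.  It is illustrative only (the class and method names are my own, not Shinsei’s): records can be appended and read, but no update or delete operation exists, and a full store is handled simply by opening another.

```python
import hashlib
import json


class WriteOnceStore:
    """A minimal write-once, read-many store: no edit operations exist."""

    def __init__(self, capacity=100_000):
        self.capacity = capacity
        self._records = []

    def is_full(self):
        return len(self._records) >= self.capacity

    def append(self, record):
        """Fire a finished good: freeze it as text and checksum it."""
        if self.is_full():
            raise IOError("store full; open a new store instead")
        frozen = json.dumps(record, sort_keys=True)
        self._records.append((frozen, hashlib.sha256(frozen.encode()).hexdigest()))

    def read(self, index):
        """Return a copy of a record, verifying that it was not damaged."""
        frozen, checksum = self._records[index]
        if hashlib.sha256(frozen.encode()).hexdigest() != checksum:
            raise ValueError("record damaged; repair needed to restore it")
        return json.loads(frozen)
```

When a store fills, a coordinator simply opens a new one; because existing records never change, nothing in the old stores needs to be touched.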

The idea sounds attractive in principle, but how can this design cope with an ever-changing world?  A simple example shows how shifting our perspective on information makes this architecture possible.  Many companies have systems that keep track of customers’ postal addresses.  Of course, these addresses change when customers move.  Typically, addresses are stored in a database, where they can be modified as necessary.  There is no separation between work-in-progress and completed goods.

Information assembly lines solve the problem differently.  A customer’s postal address is not a variable, but rather our record of where the customer resides for a period of time.  Registering a customer address is a manufacturing process involving a number of steps: obtaining the raw information, parsing it into fields, perhaps converting values to canonical forms, performing validity checks, etc.  Once the address has been assembled and verified, it is time-stamped, packaged (perhaps together with a checksum), and shipped off to an appropriate finished goods repository.  When the customer moves, the address in the system is not changed; we simply manufacture a new address.
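As a sketch of what such a manufacturing process might look like (the fields, steps, and names here are invented for illustration, not Shinsei’s actual design):

```python
import datetime
import hashlib
import json


def manufacture_address(raw_address, customer_id, effective_date):
    """Assemble a finished address record through a sequence of steps."""
    # Step 1: parse the raw information into fields (a malformed address
    # raises an error here and goes back for rework).
    street, city, postal_code = [part.strip() for part in raw_address.split(",")]

    # Step 2: convert values to canonical forms.
    postal_code = postal_code.replace(" ", "").upper()

    # Step 3: validity checks.
    if not postal_code:
        raise ValueError("postal code missing")

    # Step 4: time-stamp and package the finished good.
    record = {
        "customer_id": customer_id,
        "street": street,
        "city": city,
        "postal_code": postal_code,
        "effective_date": effective_date,  # when the address becomes active
        "manufactured_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    # The checksum seals the package; later tampering becomes detectable.
    record["checksum"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record
```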

The finished goods repository contains all the address records manufactured for the customer, and each record includes the date that the address became active.  When retrieving the customer’s address, we simply select the address that became active most recently.  If an address is manufactured incorrectly, we manufacture a corrected address.  Thus instead of maintaining a malleable model of the customer’s location, we manufacture a sequence of permanent records that capture the history of the customer’s location.
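Retrieval is then a read-only query over the accumulated records.  A sketch, assuming records shaped like those manufactured above, with ISO-formatted dates:

```python
def current_address(records, customer_id, as_of):
    """Select the address that became active most recently, as of a date.
    Nothing is modified; incorrect records are superseded, not edited."""
    candidates = [
        r for r in records
        if r["customer_id"] == customer_id and r["effective_date"] <= as_of
    ]
    return max(candidates, key=lambda r: r["effective_date"]) if candidates else None
```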

In this way, seeing information from a different perspective makes it possible to subdivide the enterprise software problem into two loosely-coupled and significantly simpler subproblems.  And cleverly partitioning complex problems is the first step to rendering them tractable.

Data warehousing vs. information assembly lines

This post is part of my collaborative research with Shinsei Bank on highly-evolvable enterprise architectures.  It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license.  I am indebted to Jay Dvivedi and his team at Shinsei Bank for sharing with me the ideas developed here.  All errors are my own.

Reporting and analysis are important functions for enterprise software systems.  Information assembly lines handle these functions very differently from data warehousing, and I think the contrast may help clarify the differences between Jay’s approach and traditional design philosophies.  In brief, data warehousing attempts to build a massive library where all possibly useful information about a company is readily available–the ideal environment for a business analyst.  By contrast, information assembly lines manufacture reports and analyses “just-in-time” to meet specific business needs.  One might say that data warehousing integrates first and asks questions later, while information assembly lines do just the opposite.

The Wikipedia article on data warehouses indicates the emphasis on comprehensive data integration.  Data warehouses seek a “common data model for all data of interest regardless of the data’s source”.  “Prior to loading data into the data warehouse, inconsistencies are identified and resolved”.  Indeed, “much of the work in implementing a data warehouse is devoted to making similar meaning data consistent when they are stored in the data warehouse”.

There are at least two big problems with the data warehousing approach.

First, since data warehousing integrates first and asks questions later, much of the painstakingly integrated data may not be used.  Or they may be used, but not in ways that generate sufficient value to justify the cost of providing the data.  The “build a massive library” approach actually rules out granular investment decisions based on the return from generating specific reports and analyses.  To make matters worse, since inconsistencies may exist between any pair of data sources, the work required to identify and resolve inconsistencies will likely increase with the square of the number of data sources (n sources yield n(n−1)/2 pairs to reconcile).  That sets off some alarm bells in a computer scientist’s brain: in a large enterprise, data warehousing projects may never terminate (successfully, that is).

Second, data warehousing ignores the relationship between the way data are represented and the way they are used.  I was introduced to this problem in my course on knowledge-based systems at MIT, where Professor Randall Davis emphasized the importance of choosing knowledge representations appropriate to the task at hand.  Predicate logic may be a great representation for reasoning about mathematical conjectures, but it may prove horribly cumbersome or even practically unusable for tasks such as finding shortest paths or detecting boundaries in images.  According to Davis and his colleagues, awareness of the work to be done (or, as Jay would say, the context) can help address the problem: “While the representation we select will have inevitable consequences for how we see and reason about the world, we can at least select it consciously and carefully, trying to find a pair of glasses appropriate for the task at hand.”1

The problem, then, is that data warehousing does not recognize the importance of tuning data representations to the task at hand, and thus attempts to squeeze everything into a single “common data model”.  Representations appropriate for analyzing user behavior on web sites may be poorly suited to searching for evidence of fraud or evaluating possible approaches to customer segmentation.  Consequently, data warehousing initiatives risk expending considerable resources to create a virtual jack-of-all-trades that truly satisfies no one.

The information assembly lines approach focuses on manufacturing products (reports or analyses) to satisfy the needs of specific customers.  In response to a need, lines are constructed to pull the data from where they live, machine the data as necessary, and assemble the components.  Lines are engineered, configured, and provisioned to manufacture specific products or product families, so every task can be designed with an awareness of how it serves the product’s intended purpose.
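As a rough illustration (all names invented), a line for one specific report might be composed of small, single-purpose stages, each designed around the product it serves:

```python
def pull(source):
    """Pull raw data from where they live (a stub standing in for a real feed)."""
    return list(source)


def machine(rows):
    """Machine the data as this report requires: drop blanks, normalize case."""
    return [row.strip().lower() for row in rows if row.strip()]


def assemble(rows):
    """Assemble the components into the finished report."""
    return {"line_count": len(rows), "lines": rows}


def report_line(source):
    # The line is engineered for this product; each stage knows its purpose.
    return assemble(machine(pull(source)))


print(report_line(["  Alpha ", "", "Beta"]))  # {'line_count': 2, 'lines': ['alpha', 'beta']}
```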

If the data required for business decision-making change very slowly over time and the conceivable uses for the data are relatively stable and homogeneous, then perhaps developing a unified data model and building a data warehouse may make sense.  Needless to say, however, these are not the conditions faced by most enterprises: the data environment evolves rapidly, and different parts of the business require varied and ever-changing reporting and analysis capabilities.

It’s actually kind of hard to see why anyone (other than system vendors) would choose the data warehousing approach.  Information assembly lines are modular, so they can be constructed one at a time, with each line solving a specific problem.  Performance criteria are well-defined: do the products rolling off the line match the design?  Since information assembly lines decompose work into many simple, routine tasks, tools developed for use on one line (a machine that translates data from one format to another, for example) will likely be reusable on other lines.  Thus the time and cost to get a line up and running will decrease over time.

1 Davis, Shrobe & Szolovits, “What is a Knowledge Representation?”.  This paper provides an insightful introduction to the problem.

Workstations

This post is part of my collaborative research with Shinsei Bank on highly-evolvable enterprise architectures.  It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license.  I am indebted to Jay Dvivedi and his team at Shinsei Bank for sharing with me the ideas developed here.  All errors are my own.

This is the first of what I intend to be a series of short posts focusing on a few important aspects of the information factory perspective that I’m starting to develop.  In the previous iteration of this work, I defined Contexts as elementary subsystems where tasks are performed.  In this iteration, in keeping with the information assembly line metaphor, I’ve decided to replace Contexts with workstations.  The basic idea doesn’t change: a workstation is an elementary subsystem where a worker, in a role, performs a task.  I’d like to add a few nuances, however.

First, at least for the time being, I’m going to rule out nesting of workstations.  Workstations can be daisy-chained, but not nested.  A hierarchical structure similar to nesting can be achieved by grouping workstations into modular sequences, but these groupings remain nothing more or less than sequences of workstations.  Conceptually, workstations divide the system into two hierarchical levels: the organization level (concerned with the configuration of workstation sequences) and the task level (concerned with the performance of tasks within specific workstations).  This conceptual divide resembles, I think, the structure of service-oriented architectures, in which the system level (integration of services) is conceptually distinct from the service level (design and implementation of specific services).
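One way to make the no-nesting rule concrete (a sketch with invented names): a “group” of workstations is just a named, flat sequence, and composing groups flattens them back into a single chain rather than creating a hierarchy.

```python
def sequence(*parts):
    """Compose workstations and groups into one flat chain: no nesting."""
    chain = []
    for part in parts:
        # A group is nothing more or less than its sequence of stations.
        chain.extend(part if isinstance(part, list) else [part])
    return chain


validate = ["check_format", "check_range"]  # a reusable, modular grouping
line = sequence("receive", validate, "record", "ship")
print(line)  # ['receive', 'check_format', 'check_range', 'record', 'ship']
```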

The purpose of the workstation is simply to provide a highly structured and controlled environment for performing tasks, thereby decoupling the management of task sequences (organization level) from the execution details of specific tasks (task level).  Workstations are thus somewhat analogous to web servers: they can “serve” any kind of task without knowing anything about the nature of its content.  Each workstation is provisioned with only those tools (programs, data, and personnel) required to perform the task to which it is dedicated.  The communication protocol for a workstation is a pallet interface, by which the workstation receives work-in-progress and then ships it out to the next workstation.  Pallets may also carry tools and workers to the workstation in order to provision it.

An implementation of the workstation construct requires an interface for pallets to enter and leave the workstation, hooks for loading and unloading tools and workers delivered to the station on pallets, and perhaps some very basic security features (more sophisticated security tools can be carried to the workstation on pallets and installed as needed).
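A minimal sketch of the construct, under the assumptions above (the interface names are mine, and a real implementation would add the security features):

```python
class Workstation:
    """A structured, controlled environment that serves one task without
    knowing anything about the nature of its content."""

    def __init__(self):
        self.tools = {}   # programs, data, and personnel for the dedicated task
        self.task = None

    def provision(self, pallet):
        """Hook for loading tools and workers delivered on pallets."""
        self.tools.update(pallet.get("tools", {}))
        self.task = pallet.get("task", self.task)

    def receive(self, pallet):
        """Pallet interface: take in work-in-progress, perform the dedicated
        task, and return the pallet for shipment to the next workstation."""
        pallet["payload"] = self.task(pallet["payload"], self.tools)
        return pallet
```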

Information assembly lines

This post is part of my collaborative research with Shinsei Bank on highly-evolvable enterprise architectures.  It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license.  I am indebted to Jay Dvivedi and his team at Shinsei Bank for sharing with me the ideas developed here.  All errors are my own.

In my previous post, I explained my (admittedly somewhat arbitrary) transition from version zero to version one of my architectural theory for enterprise software.  The design metaphor for version one of the theory is the high-volume manufacturing facility where assembly lines churn out large quantities of physical products.  Design metaphors from version zero of the theory (the zoo, the house, the city, and the railway) will probably appear at some point, but I’m not yet exactly sure how they fit.

Jay often describes business processes at Shinsei as computer-orchestrated information assembly lines.  These lines are composed of a series of virtual workstations (locations along the line where work is performed), and transactions move along the line from one workstation to the next on virtual pallets.  At each workstation, humans or robots (software agents) perform simple, repetitive tasks.  This description suggests that the salient features of the information factory1 include linear organization, workstations, pallets, and finely-grained division of labor.

How does this architecture differ from traditional approaches?  Here are a few tentative observations.

  • No central database. All information associated with a transaction is carried along the line on a pallet.  Information on a pallet is the only input and the only output for each workstation, and the workstation has no state information except for log records that capture the work performed.  In essence, there is a small database for each transaction that is carried along the line on a pallet.  In keeping with the house metaphor, information on the pallet is stored hierarchically; a minimal pallet sketch follows this list.  (More thoughts about databases here.)
  • Separation of work-in-progress and completed work. Just like an assembly line in a factory, work-in-progress exists in temporary storage along the line and then leaves the line when completed.
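As promised above, a minimal pallet sketch.  A pallet here is nothing but a small hierarchical record that travels with the transaction (the fields are invented for illustration):

```python
# A pallet: the only input and the only output of each workstation.
pallet = {
    "transaction_id": "txn-0001",
    "context": {                      # information travels in its context
        "product": "savings-account-opening",
        "line": "retail-onboarding",
    },
    "payload": {                      # stored hierarchically, as in the house metaphor
        "customer": {"name": "A. Example", "address": {"city": "Tokyo"}},
        "status": "in-progress",
    },
}
```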

In order to make the system robust, Jay adheres to the following design rules.

  • Information travels in its context. Since workstations have no state, the only ways to ensure that appropriate actions are taken at each workstation are to either (a) have separate lines for transactions requiring different handling or (b) have each pallet carry all context required to determine the appropriate actions to take at each workstation.  The first approach is not robust, because errors will occur if pallets are misrouted or lines are reconfigured incorrectly, and these errors may be difficult to detect.  Thus, all pallets carry information embedded in sufficient context to figure out what actions should be taken (and not taken).
  • All workstations are reversible. In order to repair problems easily, pallets can be backed up when problems are detected and re-processed.  This requires that all workstations log enough information to undo any actions that they perform; that is, they must be able to reproduce their input given their output.  These logs are the only state information maintained by the workstations.  (A sketch of such a reversible workstation follows this list.)
  • Physical separation. In order to constrain interdependencies between workstations and facilitate verification, monitoring, isolation, and interposition of other workstations, workstations are physically separated from each other.  More on this idea here.
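Here is the reversible-workstation sketch promised above.  It is a toy (names invented): the undo log is the station’s only state, and it is exactly what lets the station reproduce its input given its output.

```python
class ReversibleWorkstation:
    """A workstation that logs enough information to undo what it does."""

    def __init__(self, name, task):
        self.name = name
        self.task = task   # the forward transformation of the pallet
        self.log = []      # undo log: the station's only state

    def receive(self, pallet):
        before = dict(pallet)            # snapshot for the undo log
        after = self.task(dict(pallet))
        self.log.append((before, after))
        return after

    def back_up(self):
        """Reverse the most recent action: return the pallet as it arrived,
        ready to be re-processed once the problem has been repaired."""
        before, _after = self.log.pop()
        return before


station = ReversibleWorkstation("canonicalize", lambda p: {**p, "city": p["city"].title()})
out = station.receive({"city": "tokyo"})     # {'city': 'Tokyo'}
assert station.back_up() == {"city": "tokyo"}
```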

The following diagram depicts the structure of an information assembly line.  The line performs six tasks, labeled a through f.  The red arrows indicate logical interdependencies.  The output of a workstation is fully determined by the output of the preceding workstation, so the dependency structure resembles that of a Markov chain.  Information about a transaction in progress travels along the line, and completed transactions are archived for audit or analysis in a database at the end of the line.  Line behavior can be monitored by testing the output of one or more workstations.

[Figure: Information assembly line]
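Moving a pallet down such a line is then nothing more than a sequential hand-off from one station to the next (assuming stations expose the receive interface sketched in the Workstations post above):

```python
def run_line(stations, pallet):
    """Carry a pallet along the line, one workstation at a time; the
    completed transaction is then archived at the end of the line."""
    for station in stations:
        pallet = station.receive(pallet)
    return pallet
```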

By contrast, here is a representation of a system designed according to the traditional centralized database architecture.  The system has modules that operate on the database to perform the same six tasks.  Although the logical interdependency structure is the same in theory, the shared database means that every module depends on every other module: if one module accidentally overwrites the database, the behavior of every other module will be affected.  Moreover, all transactions are interdependent through the database as well.  It’s difficult to verify that the system is functioning properly, since database operations by all six modules are interleaved.

[Figure: Traditional system architecture with centralized database]

Clearly, the information assembly line architecture requires more infrastructure than the traditional database approach: at a minimum, we need tools for constructing pallets and moving them between workstations, as well as a framework for building and provisioning workstations.  In addition, we need to engineer the flow of information so that the output can be computed using a linear sequence of stateless workstations.  There are at least two reasons why this extra effort may be justified.  At this stage, these are just vague hypotheses; in future posts, I’ll try to sharpen them and provide theoretical support in the form of more careful and precise analysis.

First, the linear structure facilitates error detection and recovery.  Since each workstation performs a simple task on a single transaction and has no internal state, detecting an error is much simpler than in the traditional architecture.  The sparse interdependency matrix limits the propagation of errors, and reversibility facilitates recovery.  For critical operations, it is relatively easy to prevent errors by using parallel tracks and checking that the output matches (more on reliable systems from unreliable components here).

Second, the architecture facilitates modification and reconfiguration.  In the traditional architecture, modifying a component requires determining which other components depend on it and how, analyzing the likely effects of the proposed modification, and integrating the new component into the system.  If the number of components is large, this may be extremely difficult.  By contrast, in the information assembly line, the interdependency matrix is relatively sparse, even if we include all downstream dependencies.  Perhaps more importantly, the modified component can easily be tested in parallel with the original component (see the figure below).  Thus, the change cost for the system should be much lower.

[Figure: Parallel operation in an information assembly line]
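A sketch of that parallel testing idea (invented names): feed copies of the same pallets to the original and the modified station, and flag any divergence before trusting the change.

```python
def run_parallel(original, modified, pallets):
    """Run a modified station alongside the original and compare outputs."""
    mismatches = []
    for pallet in pallets:
        out_original = original(dict(pallet))   # each station gets its own copy
        out_modified = modified(dict(pallet))
        if out_original != out_modified:
            mismatches.append((pallet, out_original, out_modified))
    return mismatches


# Example: a reimplemented normalization station checked against the original.
original = lambda p: {**p, "name": p["name"].upper()}
modified = lambda p: {**p, "name": p["name"].strip().upper()}
print(run_parallel(original, modified, [{"name": " ada "}]))
```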

1 A search for the term “information factories” reveals that others have been thinking along similar lines.  In their paper “Enterprise Computing Systems as Information Factories” (2006), Chandy, Tian and Zimmerman propose a similar perspective.  Although they focus on decision-making about IT investments, their concept of “stream applications” has some commonalities with the assembly-line-style organization proposed here.