Tag Archives: Jay Dvivedi

Spiral development

It has been a while since I’ve posted anything.  One reason (in addition to being distracted by other projects and travel) is that I’ve had a great deal of difficulty figuring out how to fit the ideas from my recent conversations with Jay into the conceptual structure that I’ve been developing in my last dozen or so posts.  In the interim, it happens that I’ve been spending a lot of time with my undergraduate advisor, who always reminds me of the importance of spiral development.

So, in keeping with the spiral development philosophy, I’ve decided that it’s time to declare version zero of my architectural theory complete (woefully fragmentary and immature though it be) and move on to version one.  The new version emphasizes a different metaphor, which I hope may be more fruitful and amenable to formal theoretical treatment.  Some of the concepts from version zero, such as the zoo metaphor and mutually verifying dualism, may remain (though perhaps, I hope, with less unwieldy labels), others may persist as echoes of their former selves (Contexts and Interacts are likely candidates), and others may vanish.

If you feel that there are troubling inconsistencies between the versions, please do not hesitate to bring them to my attention.  They will most likely indicate areas where my thinking has evolved or progressed; as such, addressing them explicitly may help to deepen the ideas.  Similarly, if you believe some ideas from version zero deserve more prominence in version one, please let me know.

Nesting Contexts

This post is part of my collaborative research with Shinsei Bank on highly-evolvable enterprise architectures.  It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license.  I am indebted to Jay Dvivedi and his team at Shinsei Bank for sharing with me the ideas developed here.  All errors are my own.

Simon emphasizes the importance of hierarchy in managing complexity.  In Jay’s architecture, hierarchy manifests itself in the nesting of Contexts.  A Context is a logical location where an agent, in a role, goes to perform an action.  Nesting these Contexts enables the creation of specialized locations for different kinds of actions, while hiding the complexity associated with specialization from the rest of the system.

Jay uses the metaphor of a house to explain the nesting of Contexts.  A house is where a family performs most of its daily activities.  Within the house, there are multiple rooms — kitchens, bedrooms, dining rooms, living rooms, bathrooms — designed to accommodate different kinds of tasks.  Within the rooms, there are multiple furnishings specialized for a variety of purposes, such as bookshelves, stoves, refrigerators, showers, beds, tables, and desks.  Some of these furnishings are further subdivided: trays in the refrigerator designed for storing eggs, drawers in the desk designed for storing hanging files, etc.

The cost of building separate spaces and the inconvenience of moving between them limits the extent of nesting.  There probably are no houses with separate washbasins designed specifically and exclusively for hand washing, shaving, and tooth brushing (although washbasins in kitchens, bathrooms, and garages are often specialized for their respective tasks).  For computer systems, however, the cost of building separate spaces and the effort required to move between them is extremely low, to the point that most actions with meaningful interpretations at the business process level can probably be separated within hierarchically nested contexts.
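To make the nesting idea a little more concrete, here is a toy sketch in Python of the house metaphor: Contexts nested inside Contexts, where the rest of the system can locate a specialized location without ever inspecting its internals. The tree contents and function name are purely illustrative.

```python
# Toy sketch of nested Contexts as a tree, mirroring the house metaphor:
# specialized locations nested inside more general ones.
house = {
    "kitchen": {"refrigerator": {"egg_tray": {}}, "stove": {}},
    "study": {"desk": {"file_drawer": {}}},
}

def find_context(tree, name, path=()):
    """Return the path to a named Context without exposing its internals."""
    for key, subtree in tree.items():
        if key == name:
            return path + (key,)
        found = find_context(subtree, name, path + (key,))
        if found:
            return found
    return None

# The egg tray is reachable by name; how the refrigerator is organized
# internally stays hidden from everything outside it.
assert find_context(house, "egg_tray") == ("kitchen", "refrigerator", "egg_tray")
```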

Physical constraints on symbolic systems

This post is part of my collaborative research with Shinsei Bank on highly-evolvable enterprise software.  It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license.  I am indebted to Jay Dvivedi and his team at Shinsei Bank for supporting this research.  All errors are my own.

One of Jay’s design rules to which he attaches great importance is physical separation of software modules (i.e., Contexts) and physical motion of information between them.  According to this rule, software modules should be installed on physically separated computers.

Yesterday, I had the opportunity to discuss Shinsei’s architecture with Peter Hart, an expert on artificial intelligence and the founder and chairman of Ricoh Innovations, Inc.  Peter was very intrigued by Jay’s use of design rules to impose physical constraints on software structure.  I’d like to acknowledge Peter’s contribution to my thinking by introducing his perspective on possible implications of such physical constraints.  Then, I’ll describe my follow-up conversation with Jay on the topic, and conclude with some of my own reflections.  Of course, while I wish to give all due credit to Peter and Jay for their ideas, responsibility for any errors rests entirely with me.

Peter’s perspective

Peter approached the issue from a project management perspective.  Why, he asked me, are software development projects so much more difficult to manage than other large-scale engineering projects, such as building a bridge or a factory? The most plausible explanation he has found, he told me, is that software has many more degrees of freedom.  In contrast to mechanical, chemical, civil, or industrial engineering, where the physical world imposes numerous and often highly restrictive constraints on the design process, there are hardly any physical constraints on the design of software.  The many degrees of freedom multiply complexity at every level of the system, and this combinatorial explosion of design parameters makes software design an enormously complex and extraordinarily difficult problem.

Thus, Peter suggested that artificial imposition of physical constraints similar to those found in other engineering domains could help bring complexity under control. These constraints might be designed to mimic constraints encountered when performing analogous physical tasks in the real world. There is a tradeoff, since these constraints close off large swathes of the design space; however, if the goal of the designer is to optimize maintainability or reliability while satisficing with respect to computational complexity, then perhaps the benefit of a smaller design space might outweigh possible performance losses.

Jay’s perspective

After my conversation with Peter, I asked Jay why he places so much importance on physical separation and physical movement.

To begin with, he said, it is difficult to create and enforce boundaries within a single computer.  Even if the boundaries are established in principle, developers with “superman syndrome” will work around them in order to “improve” the system, and these boundary violations will be difficult to detect.

Work is made easier by keeping related information together and manipulating it in isolation.  Jay uses the analogy of a clean workbench stocked with only the necessary tools for a single task.  Parts for a single assembly are delivered to the workbench, the worker assembles the parts, and the assembly is shipped off to the next workstation.  There is never any confusion about which parts go into which assembly, or which tool should be used. Computer hardware and network bandwidth can be tuned to the specific task performed at the workstation.

Achieving this isolation requires physical movement of information into and out of the workstation.  Although this could be achieved, in theory, by passing data from one module to another on a single computer, designers will be tempted to violate the module boundaries, reaching out and working on information piled up in a motionless heap (e.g., shared memory or a traditional database) instead of physically moving information into and out of the module’s workspace.

When modules are physically separated, it becomes straightforward to reconfigure modules or insert new ones, because flows of information can be rerouted without modifying the internal structures of the modules. Similarly, processes can be replicated easily by sending the output of a workstation to multiple locations.
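A toy sketch may help show why rerouting becomes trivial once modules only communicate by shipping information. Below, each "workstation" is reachable solely through an inbox queue (standing in for a physically separate machine), and a routing table decides where finished work is shipped. The workstation names and routing-table format are my own illustrative assumptions, not Shinsei's actual design.

```python
from queue import Queue

# Each workstation is reachable only through its inbox queue, standing in
# for a physically separate machine.
inboxes = {"scan": Queue(), "verify": Queue(), "archive": Queue(), "audit": Queue()}

# Routing table: where each workstation ships its finished work.
routes = {"scan": ["verify"], "verify": ["archive"]}

def ship(source, item):
    """Move an item from `source` to every downstream workstation."""
    for destination in routes.get(source, []):
        inboxes[destination].put(item)

ship("scan", {"doc": 42})
assert inboxes["verify"].get() == {"doc": 42}

# Replication: send verify's output to audit as well, by editing the
# routing table rather than any workstation's internal code.
routes["verify"] = ["archive", "audit"]
ship("verify", {"doc": 42, "ok": True})
assert inboxes["archive"].get() == inboxes["audit"].get()
```

Reconfiguring or replicating a process is a change to `routes` alone; no module internals are touched.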

Finally, physical separation of modules increases system-level robustness by ensuring that there is no single point of failure, and by creating opportunities to intervene and correct problems.  Inside a single computer, processes are difficult to pause or examine while operating, but physical separation creates an interface where processes can be held or analyzed.

Concluding thoughts

The idea of contriving physical constraints for software systems seems counterintuitive.  After all, computer systems provide a way to manipulate symbols largely independent of physical constraints associated with adding machines, books, or stone tablets. The theory of computation rests on abstract, mathematical models of symbol manipulation in which physical constraints play no part.  What benefit could result from voluntarily limiting the design space?

Part of the answer is merely that a smaller design space takes less time to search.  Perhaps, to echo Peter’s comment, software development projects are difficult to manage because developers get lost in massive search spaces.  Since many design decisions are tightly interdependent, the design space will generally be very rugged (i.e., a small change in a parameter may cause a dramatic change in performance), implying that a seemingly promising path may suddenly turn out to be disastrous1.  If physical constraints can herd developers into relatively flatter parts of the design space landscape, intermediate results may provide more meaningful signals and development may become more predictable.  Of course, the fewer the interdependencies, the flatter (generally speaking) the landscape, so physical separation may provide a way to fence off the more treacherous areas.
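The link between interdependence and ruggedness can be illustrated with a toy NK-style model (in the spirit of the Levinthal and Rivkin work cited in the footnote below): each of N binary design parameters contributes a fitness value that depends on itself and K other parameters, and we count local peaks in the resulting landscape. This is my own illustrative sketch, not a faithful reproduction of any particular published model.

```python
import itertools
import random

def make_landscape(n, k, seed=0):
    """Return a fitness function over n binary parameters, each of which
    interacts with k randomly chosen other parameters."""
    rng = random.Random(seed)
    deps = [[i] + rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
    tables = [{} for _ in range(n)]  # memoized random contributions

    def fitness(bits):
        total = 0.0
        for i in range(n):
            key = tuple(bits[j] for j in deps[i])
            if key not in tables[i]:
                tables[i][key] = rng.random()
            total += tables[i][key]
        return total / n

    return fitness

def count_local_peaks(n, fitness):
    """Count configurations at least as fit as all one-bit-flip neighbors."""
    peaks = 0
    for bits in itertools.product((0, 1), repeat=n):
        f = fitness(bits)
        neighbors = (bits[:i] + (1 - bits[i],) + bits[i + 1:] for i in range(n))
        if all(f >= fitness(nb) for nb in neighbors):
            peaks += 1
    return peaks

# With no interdependence (k=0) each parameter can be optimized separately,
# so the landscape has a single peak; raising k makes it rugged.
smooth = count_local_peaks(8, make_landscape(8, 0))
rugged = count_local_peaks(8, make_landscape(8, 5))
```

On the smooth landscape, any path of improving steps reaches the global optimum; on the rugged one, a climber can get stuck on any of many local peaks, which is one way to read the "seemingly promising path turns out to be disastrous" problem.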

Another part of the answer may have to do with the multiplicity of performance criteria.  As Peter mentioned, designers must choose where to optimize and where to satisfice.  The problem is that performance criteria are not all equally obvious.  Some, such as implementation cost or computational complexity, become evident relatively early in the development process.  Others, such as modularity, reliability, maintainability, and evolvability, may remain obscure even after deployment, perhaps for many years.

Developers, software vendors, and most customers will tend to be relatively more concerned about those criteria that directly and immediately affect their quarterly results, annual performance reviews, and quality of life.  Thus, software projects will tend to veer into those areas of the design space with obvious short-term benefits and obscure long-term costs.  In many cases, especially in large and complex systems, these design tradeoffs will not be visible to senior managers.  Therefore, easily verifiable physical constraints may be a valuable project management technology if they guarantee satisfactory performance on criteria likely to be sacrificed by opportunistic participants.

Finally, it is interesting to note that Simon, in The Sciences of the Artificial, emphasizes the physicality of computation in his discussion of physical symbol systems:

Symbol systems are called “physical” to remind the reader that they exist as real-world devices, fabricated of glass and metal (computers) or flesh and blood (brains).  In the past we have been more accustomed to thinking of the symbol systems of mathematics and logic as abstract and disembodied, leaving out of account the paper and pencil and human minds that were required actually to bring them to life.  Computers have transported symbol systems from the platonic heaven of ideas to the empirical world of actual processes carried out by machines or brains, or by the two of them working together. (22-23)

Indeed, Simon spent much of his career exploring the implications of physical constraints on human computation for social systems.  Perhaps it would be no surprise, then, if the design of physical constraints on electronic computer systems (or the hybrid human-computer systems known as modern organizations) turns out to have profound implications for their behavioral properties.

1 When performance depends on the values of a set of parameters, the search for a set of parameter values that yields high performance can be modeled as an attempt to find peaks in a landscape the dimensions of which are defined by the parameters of interest.  In general, the more interdependent the parameters, the more rugged the landscape (rugged landscapes being characterized by large numbers of peaks and troughs in close proximity to each other).  For details, see the literature on NK models such as Levinthal (1997) or Rivkin (2000).

More on Contexts, and a critique of databases

This post is part of my collaborative research with Shinsei Bank on highly-evolvable enterprise architectures.  It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license.  I am indebted to Jay Dvivedi and his team at Shinsei Bank for sharing with me the ideas developed here.  All errors are my own.

In an earlier post, I posited that Contexts serve as elementary subsystems in Shinsei’s architecture. What does this claim entail?

If Contexts are to be effective as elementary subsystems, then it must be possible to describe and modify the behavior of the system without examining their internal mechanics.  At least three conditions must be satisfied in order to achieve this goal.

  1. The normal behavior of the Context is a simple, stable, and well-defined function of its input1.
  2. Errors can be detected, contained, and repaired without inspecting or modifying the contents of any given Context.
  3. Desired changes in system behavior can be made by reconfiguring or replacing Contexts, without modifying their internal mechanics.

The first condition requires that a Context be a highly specialized machine, a sort of “one-trick pony”.  This renders the behavior of the Context more predictable and less sensitive to its input.  For example, using a mechanical analogy, a drilling machine may drill holes of different depths or sizes, or it may wear out or break, but it will never accidentally start welding.  The narrower the range of activity modes possessed by a component, the more predictable its behavior becomes.  The Context also becomes easier to implement, since developers can optimize for a single task.  In this respect, Contexts resemble the standard libraries included in many programming languages that provide simple, stable, well-defined functions for performing basic tasks such as getting the local time, sorting a list, or writing to a file.

The second condition–that errors can be detected, contained, and repaired at the system level–depends on both component characteristics and system architecture2.  To detect errors without examining the internal mechanics of the Contexts, the system must be able to verify the output of each Context. Since errors are as likely (or perhaps more likely) to result from incorrect logic or malicious input as from random perturbations, simply running duplicate components in parallel and comparing the output is unlikely to yield satisfactory results. In an earlier post, I describe mutually verifying dualism as a verification technique. To contain errors, thereby ensuring that a single badly behaved component has limited impact on overall system behavior, output must be held and verified before it becomes the input of another Context.  Finally, repair can be enabled by designing Contexts to be reversible, so that an erroneous action or action sequence can be undone.  All outputs should be stored in their respective contexts so that the corresponding actions can be reversed subsequently even if reversal of downstream Contexts fails.
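A minimal sketch can make the second condition concrete: output is held at the boundary and verified before it is allowed to become another Context's input, and every action is recorded twice, once from the person's perspective and once from the Context's, so the two logs can be cross-checked. The Context name, the check function, and the log format here are all illustrative assumptions.

```python
# Two independent records of every action: the person's log and the
# Context's log (mutually verifying dualism).
person_log = []   # "I performed this action in that Context"
context_log = []  # "this action was performed here by that person"

def perform(context, agent, action, payload):
    person_log.append((agent, context, action))
    context_log.append((context, agent, action))
    return {"context": context, "payload": payload}

def verify_and_forward(output, check, forward):
    """Hold output at the boundary; forward it only if the check passes."""
    if not check(output):
        # The bad output is contained here rather than propagated.
        raise ValueError(f"held at boundary: {output}")
    forward(output)

delivered = []
out = perform("AddressChange", "alice", "update", {"zip": "108-0072"})
verify_and_forward(out, check=lambda o: "zip" in o["payload"], forward=delivered.append)

# The dual logs can be cross-checked entry by entry.
assert person_log == [(a, c, x) for c, a, x in context_log]
```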

To allow for changes in system behavior without modifying the internal mechanics of Contexts requires only that the system architecture permit replacement and reordering of Contexts.  For an example of such an architecture, let us return to the programming language analogy and consider the case of software compilers.  Compilers allow reordering of function calls and replacement of one function call with another.  Equipped with appropriate function libraries, programmers can exert nuanced control over program behavior without ever altering the content of the functions that they call.

From the preceding discussion, it becomes clear that our goal, in a manner of speaking, is to develop a “programming language” for enterprise software that includes a “standard library” of functions (Contexts) and a “compiler” that lets designers configure and reconfigure sequences of “function calls”.  The limits of the analogy should be clear, however, both from the characteristics of Contexts described elsewhere and from the error detection, containment, and recovery mechanisms described above.

In conclusion, it seems worthwhile to highlight why traditional software design does not satisfy these requirements.  The most important reason is probably the use of centralized databases, the core component of most applications and enterprise systems (note that Contexts store their own data, so Jay’s architecture has no central database).  The database provides a data storage and retrieval module with a well-defined interface and several desirable properties.  Yet the database can by no means be considered an elementary subsystem: the design of its tables, and sometimes even its indices, are directly linked to almost all aspects of system-level behavior.  Although the interface is well-defined, it is by no means simple; indeed, it consists of an entire language with potentially unlimited complexity.  Errors can be reversed midway through a transaction, but they are often difficult to detect or repair after a transaction has completed.  Significant changes in system-level behavior almost always require modifications to the structure of the database and corresponding modifications to the internal mechanics of many other components.  Indeed, even seemingly trivial adjustments such as changing the representation of years from two digits to four can become herculean challenges.

1 In computer science terms, this function defines the interface of the Context and serves to hide the implementation-specific details of the module (see Parnas 1972).

2 The seminal work on this problem is, I think, von Neumann’s 1956 paper “Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components”.  Fortunately, the problem faced here is somewhat simpler: while von Neumann was seeking to build organisms (systems) that guarantee a correct output with a certain probability, I am concerned only with detecting and containing errors, on the assumption that the errors can be corrected subsequently with the aid of additional investigation.  Thus it is sufficient to detect and warn of inconsistencies, which is a far easier task than attempting to resolve inconsistencies automatically based on (potentially incorrect) statistical assumptions about the distribution of errors.

Contexts as elementary subsystems

This post is part of my collaborative research with Shinsei Bank on highly-evolvable enterprise architectures.  It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license.  I am indebted to Jay Dvivedi and his team at Shinsei Bank for sharing with me the ideas developed here.  All errors are my own.

Contexts are the elementary building blocks in Jay’s system architecture.  I’ll define Contexts precisely below, but let me begin with a passage from The Sciences of the Artificial that provides a frame for the discussion.

By a hierarchic system, or hierarchy, I mean a system that is composed of interrelated subsystems, each of the latter being in turn hierarchic in structure until we reach some lowest level of elementary subsystem.  In most systems in nature it is somewhat arbitrary as to where we leave off the partitioning and what subsystems we take as elementary.  Physics makes much use of the concept of “elementary particle,” although particles have a disconcerting tendency not to remain elementary very long.  Only a couple of generations ago the atoms themselves were elementary particles; today to the nuclear physicist they are complex systems.  For certain purposes of astronomy whole stars, or even galaxies, can be regarded as elementary subsystems.  In one kind of biological research a cell may be treated as an elementary subsystem; in another, a protein molecule; in still another, an amino acid residue.

Just why a scientist has a right to treat as elementary a subsystem that is in fact exceedingly complex is one of the questions we shall take up.  For the moment we shall accept the fact that scientists do this all the time and that, if they are careful scientists, they usually get away with it. (Simon, 1996, 184-5)

For Jay, the Context is the elementary subsystem.  Like an atom, the Context is in fact a complex system; however, designed properly, the internal structure of the Context is invisible beyond its boundary.  Thus, system architects can treat the Context as an elementary particle that behaves according to relatively simple rules.

What is a Context?

A Context is a logical space designed to facilitate the performance of a small, well-defined set of actions by people acting in a small, well-defined set of roles.  Metaphorically, Contexts are rooms in a house: each room is designed to accommodate certain actions such as cooking, bathing, sleeping, or dining. Contexts exist to provide environments for action.  Although Contexts bear some resemblance to functions or objects in software programs, they behave according to substantially different design rules (see below).

Defining the Context as the elemental subsystem enables us, by extension, to define the elemental operation: a person, in a role, enters a Context, performs an action, and leaves the Context.  All system behavior can be decomposed into these elemental operations (I’ll label them Interacts for convenience), in which a person in a role enters, interacts with, and leaves a Context.  The tasks performed by individual Interacts are very simple, but Interacts can be daisy-chained together to yield sophisticated behavior.
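As a rough illustration of the elemental operation and of daisy-chaining, here is a toy Python sketch. The class name, fields, and the banking example are my own illustrative inventions, not Shinsei's actual implementation.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Interact:
    """One elemental operation: a person, in a role, enters a Context,
    performs its action, and leaves carrying the result."""
    person: str
    role: str
    context: str
    action: Callable[[Any], Any]

    def run(self, data):
        return self.action(data)

def run_chain(interacts, data):
    """Daisy-chain Interacts: each one's output becomes the next one's input."""
    for interact in interacts:
        data = interact.run(data)
    return data

chain = [
    Interact("alice", "teller", "CaptureDeposit", lambda d: {**d, "captured": True}),
    Interact("bob", "checker", "VerifyDeposit", lambda d: {**d, "verified": True}),
]
result = run_chain(chain, {"amount": 100})
assert result == {"amount": 100, "captured": True, "verified": True}
```

Note that the sophistication lives in the chaining, not in any single Interact; each step stays trivially simple.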

Design rules for Contexts

Creating Contexts that can be treated as elementary subsystems requires adhering to a set of design rules.  Below, I describe some of the design rules that have surfaced in my conversations with Jay.  These rules may not all be strictly necessary, and they are probably not sufficient; refining these design rules will likely be an essential part of developing a highly-evolvable enterprise software architecture based on Jay’s development methodology.

  1. Don’t misuse the Context. Allow only those actions that the Context was designed to handle; do not cook in the toilet or live in the warehouse, even if it is possible to do so.  Similarly, maintain the integrity of roles: allow a person to perform only those actions appropriate to his or her role.  The repairman should not cook; guests should not open desk drawers in the study.
  2. Physically separate Contexts. Locate Contexts on different machines.  Never share a database among multiple Contexts.
  3. Only Interacts connect a Context to the rest of the system. Data enter and leave a context only through Interacts, carried in or out by a person in a role.
  4. There is no central database. Every Context maintains its own database or databases as necessary.
  5. Each Context permits only a limited set of simple, closely related actions. Contexts should be like a European or Japanese house where the toilet, bath, and washbasin are in separate rooms, rather than like a US house where all three are merged into a single room.  If a Context must handle multiple modes of operation or multiple patterns of action, it should be decomposed into multiple Contexts.
  6. Avoid building new Contexts. If a required behavior does not appear to fit in any existing Contexts, decompose it further and look for sub-behaviors that fit existing Contexts. Build new Contexts only after thorough decomposition and careful consideration.
  7. Only bring those items–those data–into the Context that are required to perform the task at hand.
  8. Control entry to the Context. Ensure that only appropriate people, in appropriate roles, with appropriate baggage (data) and appropriate intentions can enter.
  9. Log every Interact from the perspective of the person and the Context. The person logs that he or she performed the action in the Context, while the Context logs that the action was performed in the Context by the person.  This creates mutually verifying dualism.
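Several of these rules can be made concrete in a toy sketch: each Context keeps its own store (rule 4), admits only appropriate roles and actions (rules 1 and 8), and logs every Interact from both perspectives (rule 9). The class, the kitchen example, and all field names are illustrative assumptions of mine.

```python
class Context:
    def __init__(self, name, allowed_roles, allowed_actions):
        self.name = name
        self.allowed_roles = set(allowed_roles)
        self.allowed_actions = allowed_actions  # action name -> function
        self.store = {}  # rule 4: no central database; each Context owns its data
        self.log = []    # rule 9: the Context's side of the dual log

    def interact(self, person, role, action, data, person_log):
        # rules 1 and 8: control entry by role and intended action
        if role not in self.allowed_roles or action not in self.allowed_actions:
            raise PermissionError(f"{person} ({role}) may not {action} in {self.name}")
        result = self.allowed_actions[action](self.store, data)
        self.log.append((action, person))       # the Context's record
        person_log.append((action, self.name))  # the person's record
        return result

kitchen = Context("Kitchen", {"cook"},
                  {"cook_meal": lambda store, d: store.setdefault("meals", []).append(d)})
alice_log = []
kitchen.interact("alice", "cook", "cook_meal", "soup", alice_log)
assert kitchen.store["meals"] == ["soup"]

# The repairman should not cook: entry is refused at the boundary.
try:
    kitchen.interact("ray", "repairman", "cook_meal", "stew", [])
except PermissionError:
    pass
```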

Why bother?

The purpose of establishing the Context as an elementary subsystem is to simplify the task of system design and modification.  As Simon points out, “The fact that many complex systems have a nearly decomposable [i.e., modular], hierarchic structure is a major facilitating factor enabling us to understand, describe, and even “see” such systems and their parts.” (1996, 207) Establishing the Context as an elementary subsystem in enterprise software is a technique for rendering enterprise software visible, analyzable, and comprehensible.

Bounding and restricting the Context vastly simplifies the work of implementors, enabling them to focus on handling a small family of simple, essentially similar actions.  The Context can be specialized to these actions, thereby reducing errors and  increasing efficiency.

Contexts hide the complexity associated with data and problem representations, databases, programming languages, and development methodologies, enabling system architects to focus on higher-level problems.  In discussions with Jay, he almost never mentions hardware, software, or network technologies, since he can generally solve design problems without considering the internal structures of his Contexts and Interacts.

Since myriad organizational processes are assembled from a relatively small library of simple actions combined in different ways, systems that support these processes exhibit similar redundancy.  Thus, Contexts designed to handle very simple actions can be reused widely, decreasing the cost and time required to develop new systems.

Finally, it is possible that Contexts, by explicitly associating people and roles with all actions, may help clarify accountability as organizational action develops into an increasingly complex mixture of human and computer decision-making.

Concluding thoughts

In essence, Contexts and Interacts are artificial constructs intended to allow high-level system design problems to be solved independently of low-level implementation problems.  The extent to which the constructs achieve this goal depends on the effectiveness of the design rules governing the constructs’ behavior.  Positing Contexts and Interacts as the elementary subsystems in Jay’s development methodology establishes a theoretical structure for further inquiry, but it neither guarantees their fitness for this purpose nor implies the impossibility of other, perhaps more effective elementary subsystem constructs.

On several occasions, I’ve been asked how this approach differs from service-oriented architectures.  I’ll explore this question in a subsequent post.

Creating computer-orchestrated knowledge work

This post is part of my collaborative research with Shinsei Bank on highly-evolvable enterprise software.  It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license.  I am indebted to Jay Dvivedi and his team at Shinsei Bank for supporting this research.  All errors are my own.

Jay recently introduced me to Pivotal Tracker, a “lightweight, free, agile project management tool”.  It looks like a promising step toward computer-orchestrated knowledge work.  To explain what I mean, let’s start by thinking through the relationship between structured work, unstructured work, and computers.

Using computers to orchestrate highly structured work is relatively straightforward, because structure translates relatively directly into software algorithms1.  Much knowledge work, and especially sophisticated knowledge work at the core of modern economies such as research, design,  product development, software development, strategic analysis, financial modeling, and general management, is relatively unstructured.  Can computers support such general knowledge work?

One way to leverage computers in unstructured work is to decompose the work and factor out structured subproblems that can be delegated to computer systems.  Done effectively, this enables the structured subproblems to be solved more rapidly, reliably, and inexpensively.  In cases where performance on structured and unstructured subproblems complement each other, computerization of unstructured subproblems may lead to qualitative improvements in overall problem-solving performance.  In other words, computerization may result not only in efficiency gains but also in qualitatively better output.

For example, computer-aided design software and spreadsheets make possible more sophisticated building designs and financial models by efficiently solving critical structured subproblems.  Factoring the (relatively) unstructured task of designing a building or modeling the growth of a new business into unstructured, creative subproblems (sculpting the contours of the building, selecting the parameters in the model) and structured, algorithmic subproblems (mathematical calculations, visualizing data, storing and retrieving work in progress) enables architects and business analysts to focus their attention on creative tasks while computers handle routine processing.

If the complementarity between structured and unstructured subproblems is sufficiently strong, factoring out and computerizing structured subproblems will increase human employment.  If architects can deliver better designs at lower cost, demand for architects will rise.  If business analysts can deliver deeper insight faster, demand for business analysts will rise.  The degree of complementarity depends to some extent on inherent characteristics of the problem domain, but problem factoring and computer system design influence the degree of complementarity as well2.  Thus, advances in computer-orchestrated work may have significant implications for firm performance and economic growth.

Seen from this perspective, the Pivotal Tracker is an intriguing technology.  Its design is premised on the agile programming technique of structuring software development as a series of short (one to four week) iterations.  Development work is further decomposed into a large number of small, modular “stories” which (as far as I understand the methodology) describe bits of functionality that deliver incremental value to the customer.  During each iteration, the development team implements a number of stories.

Although originally intended for managing software development, Pivotal Labs, the company behind Pivotal Tracker, proposes using the tool for managing just about any kind of project.  From the FAQ:

A project can be anything that you or your team works on that delivers some value, and that is large enough to benefit from being broken down into small, concrete pieces. For example, a project may be to develop software for an e-commerce web site, build a bridge, create an advertising campaign, etc.


Pivotal Tracker screen shot. The active stories for the current iteration are shown on the left, and the backlog is on the right.

The reason Pivotal Tracker (PT) represents a step forward in the computerization of knowledge work is that the tool goes beyond simply tracking progress on a collection of tasks.  To begin with, PT enables quantitative planning and analysis by asking users to rate the complexity of each story on a point scale.  Several scales are available, including a three point scale. Constrained scales enforce discipline in problem decomposition: for example, using a three point scale, stories cannot be rated accurately if their complexity exceeds three times the complexity of the simplest (one point) stories.

PT uses these complexity ratings to measure the rate of progress in terms of points completed per iteration (termed velocity) and to estimate the time remaining until project completion.  According to Pivotal, estimates of future progress based on historical velocity prove relatively accurate.  PT orchestrates the work by maintaining a queue of active stories to be completed in the current iteration and a prioritized backlog of stories for completion in future iterations.  After an iteration ends, PT moves stories from the backlog to the active queue.  PT manages the active queue to keep the project moving forward at a constant velocity (complexity points per iteration), helping the team stay on schedule and avoid last-minute dashes.  All of this occurs transparently, without burdening the team members.
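As a rough illustration, the velocity arithmetic described above can be sketched in a few lines of Python. The function names and the three-iteration averaging window are my own assumptions, not PT's actual implementation:

```python
import math

def velocity(history, window=3):
    """Average points completed per iteration over the last `window` iterations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def iterations_remaining(backlog_points, history):
    """Estimate iterations left, rounding up: a partial iteration still costs one."""
    return math.ceil(backlog_points / velocity(history))

# Hypothetical example: 18, 21, and 24 points over the last three iterations,
# with 60 points of stories remaining in the backlog.
eta = iterations_remaining(60, [18, 21, 24])  # velocity 21.0 -> 3 iterations
```

With a trailing velocity of 21 points per iteration, a 60-point backlog implies roughly three more iterations; this is the kind of estimate PT surfaces automatically.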

PT also handles simple work flow for each story.  Team members take ownership of stories by clicking a start button on the story, and then deliver them to the requester for approval when finished.  This clearly delineates accountability and enforces separation of worker and approver.
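The story work flow just described might be sketched as a small state machine. This is purely illustrative (the class and state names are mine, not PT's data model), but it shows how the tool can enforce separation of worker and approver:

```python
class Story:
    """A toy story with the start -> deliver -> accept work flow."""

    def __init__(self, title, requester):
        self.title = title
        self.requester = requester
        self.owner = None
        self.state = "unstarted"

    def start(self, user):
        # Clicking "start" takes ownership of the story.
        self.owner = user
        self.state = "started"

    def deliver(self, user):
        if user != self.owner:
            raise PermissionError("only the owner can deliver a story")
        self.state = "delivered"

    def accept(self, user):
        # Separation of worker and approver: only the requester,
        # and never the worker, may approve the finished story.
        if user != self.requester or user == self.owner:
            raise PermissionError("only the requester (not the worker) can accept")
        self.state = "accepted"
```

The permission checks make accountability explicit: the same person cannot both deliver and approve a story.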

Technologies for computer orchestration of knowledge work are still relatively primitive, but Pivotal Tracker seems to represent a significant step forward.

1 Work is structured to the extent that it can be executed predictably using specialized routines.  More here.  For a rigorous study of the tasks amenable to computerization, see Autor, Levy & Murnane, 2003.

2 Regarding the importance of how problems are factored, see von Hippel, 1990.  On the implications of computer system design, see Autor, Levy & Murnane, 2002, and Zuboff, 1989.

Computer-orchestrated work

This post is part of my collaborative research with Shinsei Bank on highly-evolvable enterprise software.  It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license.  I am indebted to Jay Dvivedi and his team at Shinsei Bank for supporting this research.  All errors are my own.

In my research on computer-assisted organizing, I set out to understand how computers alter the fabric of organizations.  Here’s how I framed the problem in my dissertation:

In the computer age, complex information processing tasks are divided between humans and computers.  Though designed and developed by humans, computers are autonomous agents that function as independent decision-makers.  The dynamics of electronic information processing influence the dynamics of organizing and organizations in ways that cannot be understood in purely human terms. (Brunner, 2009)

My dissertation focused primarily on two aspects of this transformation: how computers drive further specialization in information processing work, and how computer-assisted work increases business scalability.  A third aspect of the transformation had been on my mind ever since my days as a management consultant: it seems that computers and people are trading places within organizations.

In the past, humans created organizational structure through their patterns of interaction, while computers were plugged in to this structure to perform specific tasks. Increasingly, these roles are reversed: computers create organizational structure, while humans plug in to the computer system to perform specific tasks.  Shinsei Bank’s mortgage loan operations provide an elegant example of the phenomenon. Rather than human credit approvers managing the loan application process from beginning to end and using computers to perform calculations or look up policies, a loan application system manages the process, calling on human appraisers, data entry clerks, analysts, or supervisors to provide input as necessary.
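The role reversal can be made concrete with a toy sketch in which the system owns the process and humans serve as pluggable task handlers. The step names and the `dispatch` interface are hypothetical, not Shinsei's actual design:

```python
# The system, not a human credit approver, drives the loan application
# from beginning to end, calling on humans in specific roles for input.
MORTGAGE_STEPS = [
    ("data_entry", "enter application data"),
    ("appraiser", "appraise the property"),
    ("analyst", "assess credit risk"),
    ("supervisor", "approve or reject"),
]

def orchestrate(application, dispatch):
    """Drive an application through each step, calling on a human in the
    appropriate role via dispatch(role, task, application)."""
    for role, task in MORTGAGE_STEPS:
        application[task] = dispatch(role, task, application)
    return application
```

Note the inversion: the humans never hand work to each other directly; the orchestrating system decides who is needed, when, and for what.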

In Jay’s words, the computers orchestrate the work.  The Oxford English Dictionary defines orchestrate as follows:

To combine harmoniously, like instruments in an orchestra; to arrange or direct (now often surreptitiously) to produce a desired effect.

The word seems apt. In computer-orchestrated work, computers arrange and direct business processes in order to “combine harmoniously” the work of individuals.  Much like an assembly line, computer-orchestrated work enables individuals to focus on simple, well-defined tasks, while computers handle the coordination and integration of these fragmentary outputs. As bureaucracy1 eliminated the reliance of organizations on specific individuals by defining roles, so computer-orchestrated work enables organizations to survive without the patterns of human interaction that define and sustain the structure of traditional organizations.

Computer-orchestrated work may greatly increase organizational performance. By lowering the marginal cost of coordination and integration to nearly zero, computer-orchestrated work makes possible greater specialization, which accelerates learning and increases efficiency. Moreover, computer-orchestrated work lowers the costs of monitoring and metering, potentially reducing agency costs. Computer-orchestrated work is easier to analyze and modify, which facilitates innovation and increases the returns to highly skilled human labor (cf. Zuboff, 1989). Although the design challenges are significant, computer-orchestrated work may be an essential tool for creating more intelligent organizations.

1 In the Weberian sense, as a highly effective organizing technology.

Design metaphors: zoo, house, railway and city

This post is part of my collaborative research with Shinsei Bank on highly-evolvable enterprise architectures.  It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license.  I am indebted to Jay Dvivedi and his team at Shinsei Bank for sharing with me the ideas developed here.  All errors are my own.

How to deal with the extreme complexity of enterprise software? The traditional approach relies on a set of abstractions related to data models and databases, interfaces, processes, and state machines.  These conceptual tools are rooted in the theory of computer science. Jay takes a different approach: he attempts to mimic physical systems that solve analogous problems in the real world.  Much as modern operating systems mimic the file and folder model that people use to organize paper documents, Jay’s software architectures mimic zoos, houses, railways and cities.

Enterprise software is filled with virtual things that come into existence, experience a variety of transformations, and finally disappear or solidify into permanent log records.  These things may be users, customers, transactions, and so forth.  To Jay, these things are the animals in a virtual zoo.  They each have their own life cycles and their own needs.  Cages must be used to separate different kinds of animals in order to care for them and prevent them from interfering with each other.  Processes specific to each kind of animal must be implemented for rearing, feeding, healing, and disposal. The first step in designing a system is to identify the animals that will inhabit the system, separate them, and cage them.
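A toy rendering of the caging idea, with entirely invented names: each kind of "animal" (customers, transactions, and so on) gets its own cage with its own life-cycle routines, and a cage refuses animals of any other kind:

```python
class Cage:
    """A specialized enclosure for exactly one kind of virtual animal."""

    def __init__(self, kind, lifecycle=None):
        self.kind = kind
        self.lifecycle = lifecycle or {}  # e.g. {"feed": ..., "dispose": ...}
        self.animals = []

    def admit(self, animal):
        # Different kinds must never share a cage, so mixing is an error.
        if animal["kind"] != self.kind:
            raise TypeError(f"a {animal['kind']} cannot enter the {self.kind} cage")
        self.animals.append(animal)

# Step one of design: identify the animals, separate them, cage them.
zoo = {kind: Cage(kind) for kind in ("customer", "transaction")}
zoo["customer"].admit({"kind": "customer", "name": "Alfred"})
```

The point of the sketch is the hard boundary: complexity specific to one kind of entity stays inside its cage, hidden from the rest of the system.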

The house metaphor complements and overlaps the zoo metaphor.  Just as people live in houses in the physical world, representations of users and customers live in virtual houses. Users store their virtual belongings–records, certifications, etc.–in their houses, which they access using virtual keys.  Houses have different rooms which are used for different tasks, and each room has equipment appropriate for the tasks to be performed there.  Users, as represented in the system, must be aware of their context so that they perform the appropriate tasks in the appropriate places. Jay emphatically forbids “cooking in the toilet.” Users must also be aware of their roles: a guest in the house behaves differently from a plumber, and a plumber differently from the owner of the house.
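The room-and-task discipline might be sketched as follows (the rooms and tasks are my own invented examples, not Shinsei's): each room is equipped for certain tasks, and "cooking in the toilet" is rejected outright:

```python
# Each room lists the tasks its equipment supports.
HOUSE = {
    "kitchen": {"cook"},
    "study": {"read", "sign_documents"},
    "toilet": {"wash"},
}

def perform(room, task):
    """Perform a task only if the room is equipped for it."""
    allowed = HOUSE.get(room, set())
    if task not in allowed:
        raise ValueError(f"cannot {task} in the {room}")
    return f"{task} done in {room}"
```

Making context explicit in this way turns "the wrong task in the wrong place" from a latent bug into an immediate, detectable error.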

When users are needed outside their houses to perform tasks, they travel on virtual trains to stations where operations are performed.  They are aware of their destination from the outset, so they take with them only those belongings required to complete the task. After completing the task, they return to their houses.  All of this happens transparently to the actual user: the architecture of the system does not dictate the structure of the user interface.

Together, the houses, trains, and stations make up a virtual city that models the work of the bank.  This metaphor seems rather distant from the mechanics of an enterprise software application, at least compared to the familiar desktop metaphor, and I’m still putting together the pieces–so please consider this post a tentative first step toward articulating the idea.  I’ll revisit the topic and add more detail in future posts. In any case, the key takeaway seems to be that the intricate and highly modular division of labor in the real world may be a useful metaphor to guide the design of modular systems in virtual worlds.

Mutually verifying dualism

This post is part of my collaborative research with Shinsei Bank on highly-evolvable enterprise architectures.  It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license.  I am indebted to Jay Dvivedi and his team at Shinsei Bank for sharing with me the ideas developed here.  All errors are my own.

For more than a year, Jay and I have been carrying on a dialogue about his methodology for developing enterprise software. This methodology appears to depart in many ways, sometimes radically, from traditional approaches.  As a first step toward characterizing this methodology, I’m going to begin by writing a series of blog posts about the guiding principles that Jay has described in our conversations.

These principles are not necessarily mutually exclusive or collectively exhaustive, but they capture the essence of the methodology.  At this point, I don’t fully understand how these principles fit together, and Jay has encouraged me to focus on thoroughly understanding each principle in isolation before attempting to assemble them. Consequently, these blog entries may seem fragmentary or disconnected. As the research progresses, I hope to integrate and synthesize these principles into a coherent set of theoretically grounded design rules. Let’s dive into a principle.

Mutually verifying dualism: Model all operations as pairs of reciprocal actions between two agents in autonomous, reciprocal roles.

In this context, the term autonomous means that neither agent can dictate the behavior of the other; the two agents belong to separate control hierarchies and maintain their own records.  Transactions provide a simple example, since they naturally lend themselves to dualism.  Without dualism, we might model a transfer of a security from Alfred to Bernard as a unitary transfer operation recorded in a centralized transfer log. Such centralization may be convenient from a design perspective, but it has several drawbacks.  First, an agent that controls the transfer functionality can perform transfers without consulting Alfred, Bernard, or any other user.  Second, fraudulent or erroneous transactions may be impossible to detect, because the centralized transfer log is the sole source of transfer information and cannot be cross-checked. Third, transfer records are lumped together in a single log, so retrieving transfers performed by a particular user requires extracting them from a large database that may contain billions of records for millions of users. (The magic of modern databases makes this extraction of needles from a haystack possible and even straightforward, but it seems like a lot of infrastructure for an essentially simple task.  As I’ll discuss in a future post, Shinsei’s philosophy is not to drop the needles into the haystack in the first place.)

Alternatively, following the principle of mutually verifying dualism, we can break the transfer into a pair of reciprocal operations between Alfred, in the role of provider, and Bernard, in the role of recipient.  The transfer occurs only if both Alfred and Bernard cooperate, and the transfer can be verified by comparing Alfred’s record of the transfer with Bernard’s and ensuring that the records match.

Mutually verifying dualism requires that all operations be modeled in this way, entailing a shift in the way we conceptualize many operations.  For example, consider Carl logging in to a savings account online.  This operation would traditionally be modeled by the system as a unitary login event and recorded in a centralized log.  To apply the mutually verifying dualism principle, we break the login operation into a pair of reciprocal operations between Carl and Carl’s Savings Account (assuming, for the moment, that he is logging in to his own account). Carl, modeled in the system as a user with its own identity and records, requests access to Carl’s Savings Account and records the result.  Inversely, Carl’s Savings Account, similarly modeled as an independent entity with its own identity and records, grants (or denies) access to Carl and records the result. Using this approach, fraudulent or erroneous logins can be detected (sometimes) by reconciling the user and account records.
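A minimal sketch of the dual-record idea, assuming invented data structures: each party keeps its own log of the reciprocal operation, and verification consists of reconciling the two independent records:

```python
def transfer(provider, recipient, security):
    """Record the transfer twice: once in each party's own log.
    Neither party controls, or can rewrite, the other's record."""
    event = {"security": security, "from": provider["name"], "to": recipient["name"]}
    provider["log"].append({**event, "role": "provided"})
    recipient["log"].append({**event, "role": "received"})

def reconcile(provider, recipient):
    """Verify past transfers by comparing the two independent records."""
    fields = ("security", "from", "to")
    provided = [{k: e[k] for k in fields} for e in provider["log"]]
    received = [{k: e[k] for k in fields} for e in recipient["log"]]
    return provided == received
```

A fraudulent or erroneous entry on either side shows up as a reconciliation mismatch; defeating the check requires tampering with both logs symmetrically.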


Shinsei visualizes its network of mutually verifying operations as a geodesic sphere. Image by Theon via Wikimedia Commons, used under Creative Commons license.

In principle, mutually verifying dualism resembles double-entry bookkeeping, which helps detect errors by breaking unitary financial flows into dual credit/debit operations.  As in double-entry bookkeeping, mutually verifying dualism breaks down if a saboteur takes control of the systems on both sides of the interaction (e.g., the user system and the savings account system), or if symmetric errors occur in both systems.  The probability of these relatively unlikely events can be further reduced by designing a network of mutually verifying dualisms. The imagery Shinsei uses to describe the approach is a geodesic sphere, where the location of every vertex can be verified from multiple, independent perspectives.

Perhaps the most important implication of mutually verifying dualism is that records of past events can be reconstructed if any system component breaks irreparably.

Mutually verifying dualism creates some complications for system design.  Since interacting agents must belong to separate control hierarchies, centralized designs are infeasible a priori. Duality implies redundancy, so changes need to be propagated to all concerned agents. Modularity probably precludes the possibility of unified or universal data models.

What is enterprise software?

This post is part of my collaborative research with Shinsei Bank on highly-evolvable enterprise software.  It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license.  I am indebted to Jay Dvivedi and his team at Shinsei Bank for supporting this research.  All errors are my own.

Since this research project focuses on improving the architecture of enterprise software, it seems like a good idea to explain what I mean by the term “enterprise software” and why I think that enterprise software architecture represents such a challenging problem. I’m on the lookout for a well-developed software typology to leverage here, but I haven’t found one yet. The Wikipedia entry on enterprise software is pretty much devoid of insight. So, what follows is my own take on the issue. Definitions always need a fair amount of batting around to get them into shape, so consider this “iteration zero”. Please feel free to suggest ideas, complications, or references.

Enterprise software refers to programs for which organizational considerations fundamentally influence both design and function.  This influence has several dimensions, not all of which are unique to enterprise software:

Many users: In contrast to single-user applications such as word processors or spreadsheets, enterprise software is used simultaneously by tens, hundreds, or thousands of users.

Many, diverse and interrelated roles: Enterprise software allows users to be associated with roles that determine the ways in which they can interact with the system and with each other.  This contrasts with many multi-user applications such as social networking, online collaboration, or communications applications that support only one or a small number of roles.  In principle, the role dimension of enterprise software resembles the groups and permissions functionality in multi-user operating systems, but the universe of roles in enterprise software is often far more elaborate. As I’ll describe in a subsequent post, roles figure centrally in Shinsei’s software design principles.

Conflicting interests: Organizations are riven with conflicting interests, from inter-departmental battles to team-level skirmishes to individual rent-seeking. Enterprise software becomes another means for pursuing these conflicting interests.  Thus users routinely and strategically misuse or attempt to misuse the system (“misuse”, of course, being in the eye of the beholder). In our conversations at Shinsei, Jay often emphasizes the importance of assuming that users will hijack the systems and use them to pursue their own goals at the expense of the organization.  This contrasts with many collaborative multi-user systems, such as shared spreadsheets or social networking services, which generally assume that users will only invite or “friend” others who share their interests, at least within the domain of the program’s activity.  If these interests are found to diverge, the offending party is uninvited or “unfriended”.

Modeling many, diverse and interdependent phenomena as they unfold over time, often spanning months or years: The purpose of an enterprise software application is to track the state of the business.  Generally this means modeling contracts, perquisites, and transactions; stocks and flows of money, people, and materials; and budgets and forecasts, among other phenomena.  These phenomena often span hundreds or thousands of classes, depend on each other in complex ways, and persist for many years.  The complexity of this modeling task is probably several orders of magnitude greater than that of more narrowly focused applications that handle a small class of largely independent tasks, often relatively stateless and completed in a few minutes, hours, or days.

Highly politicized: Enterprise software directly influences the dynamics of power and decision-making in its host organization.  For example, the design of a system may constrain or relocate decision-making authority by forbidding certain actions (e.g., price discounts over a certain level) or requiring additional approvals to complete a task. Consequently, the design and modification of enterprise software is a political rather than a purely technocratic endeavor, and often requires the involvement of senior managers. This contrasts with the design of systems further down the technology stack (e.g., operating systems or databases) or function-specific systems that do not span group boundaries.

Regulatory interdependencies: Since enterprise software directly influences organizational decision-making and records the consequences of organizational activity, its design depends on regulatory constraints in these domains, and use or misuse of the software may have regulatory implications.

It seems to me that these characteristics clearly distinguish enterprise software from other application types, while also highlighting the difficulty of the design challenge faced by enterprise software architects. The goal of this research, then, is to develop some design rules to render this challenge more tractable.