Every programmer who is passionate about their job wants to write better code. This may be why I keep looking into the fundamental principles of programming languages and software design, searching for universal ingredients of reliable and efficient software. In this article, I take a closer look at one of them, namely extensibility. It is quite a general and ambiguous concept, understood here as the potential to add new functionality or adapt to a new environment (external requirements) as quickly and painlessly as possible. It necessarily begins with breaking the codebase up into chunks (files, modules, classes), generally referred to as components. Extensibility, therefore, addresses the need to add new components or replace existing ones.

I wrote this as a summary of my thoughts on the topic, aiming at a brief and coherent system which, hopefully, can serve as guidelines for people who write software with power users in mind, where scripting, automation, or extensibility is an essential design goal. I also look forward to discussion and feedback, in case I missed something.


I would like to start from the end, introducing the most expressive system first and discussing related matters later. There is a wide variety of extensibility mechanisms, but it turns out that they share a common ground, a very simple and general idea — aliasing — using aliases to designate connections between components; in other words, delegating some functionality to the component behind an alias. Thus, extensible program code operates with aliases; the components they actually point to are determined later, either statically or dynamically at runtime. As an example, think of a logger alias (available as a parameter, a global variable, a keyword, etc.) used to output logs regardless of implementation details. Electrical outlets and appliances may serve as a physical analogy: the outlet is a fixed designated place, analogous to an alias, into which we can plug some appliance. One can find examples everywhere, such as using domain names instead of IP addresses, or file system links and shortcuts instead of real filenames.
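As a minimal sketch of the logger example (all names here are hypothetical), the alias can simply be a parameter; the component writes through it, and the concrete implementation behind the alias is chosen by the caller:

```python
# A component writes logs through the "logger" alias only; the actual
# implementation behind the alias is supplied later, by the caller.
def copy_items(items, logger):
    for item in items:
        logger(f"copying {item}")
    return len(items)

# Bind the alias to different components without touching copy_items.
messages = []
copy_items(["a.txt", "b.txt"], logger=messages.append)   # collect logs
copy_items(["c.txt"], logger=lambda msg: None)           # discard logs

print(messages)  # ['copying a.txt', 'copying b.txt']
```

The point is that `copy_items` never names a concrete logger; rebinding the alias is enough to change behavior.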

Perhaps one of the earliest examples in software development was the use of symbolic labels in place of addresses in assembler instructions; an actual address was determined for every label automatically during translation (i.e. statically). Similarly, a closure, as a variable holding the address of an actual function, serves as a (dynamically bound) alias within the caller's context. Objects in the OOP paradigm carry dispatch tables, with each slot holding an actual function address; such tables serve as compound aliases.
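A dispatch table of this kind can be sketched directly (a hand-rolled stand-in for a vtable; the slot names are made up for illustration):

```python
# Each slot of the table holds an actual function; the table as a whole
# serves as a compound alias through which all calls are routed.
def text_render(data):
    return f"<p>{data}</p>"

def text_size(data):
    return len(data)

component = {"render": text_render, "size": text_size}

# Callers go through the alias only; slots can be rebound at runtime.
rendered = component["render"]("hi")
print(rendered)  # <p>hi</p>
```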

Sometimes the same predefined set of aliases is used for all components. One important example is Unix shell standard streams: the three of them (stdin, stdout, stderr) enable the user to connect console utilities into a processing chain. Please note that two connections (serving as an input and an output) are enough to construct arbitrarily complex systems.
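The "two connections suffice" observation can be imitated with generators: each stage has exactly one input (an iterable) and one output (its yielded values), much like stdin and stdout in a shell pipeline (stage names are invented for the sketch):

```python
# Each stage has exactly two connections: an input iterable and an
# output stream of yielded values. Stages compose into arbitrary chains.
def to_upper(lines):
    for line in lines:
        yield line.upper()

def add_lineno(lines):
    for n, line in enumerate(lines, 1):
        yield f"{n}: {line}"

source = ["alpha", "beta"]
result = list(add_lineno(to_upper(source)))
print(result)  # ['1: ALPHA', '2: BETA']
```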


Often extensibility rests on the idea of declaring a set of extension points, with other components subscribing, or registering, as participants in a particular extension point. As an example, suppose that a MyFancyBookReader module registers for the OpenEPUB extension point to declare its ability to render EPUB content. An extension point delegates some functionality to all of its participants, or to at least one of them. Another variation is an ordered sequence of participants executed one after another until one of them performs a useful action; this pattern is referred to as chain of responsibility.
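A toy registry illustrates both ideas at once (the extension point name and participants are hypothetical); in the chain-of-responsibility variant, participants are tried in registration order until one handles the request:

```python
# Hypothetical extension-point registry: components register as
# participants; handle() asks each one in order until a useful action.
registry = {}

def register(point, participant):
    registry.setdefault(point, []).append(participant)

def handle(point, request):
    for participant in registry.get(point, []):
        result = participant(request)
        if result is not None:      # first useful action wins
            return result
    return None

# MyFancyBookReader-style participants for a document-opening point.
register("open_document", lambda p: f"epub:{p}" if p.endswith(".epub") else None)
register("open_document", lambda p: f"pdf:{p}" if p.endswith(".pdf") else None)

print(handle("open_document", "book.pdf"))  # pdf:book.pdf
```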

Extension points serve as aliases with one distinction: a whole group of participants is hidden behind a single alias, i.e. we see a one-to-many relation. However, a set can be represented as a linked list, so a (possibly very long) chain of one-to-one linked participants may be used instead. Therefore, formally, one-to-many variations add nothing in terms of expressiveness and flexibility to single-alias extensibility; in practice, however, they may be extremely useful from other points of view, and often express design goals very naturally.

AOP (aspect-oriented programming) is also a curious extensibility mechanism, in that it allows extending the set of aliases itself. Additional code can be attached (or woven, in AOP terminology) at every "join point", extending the main functionality. Every join point serves as a potential alias referring to an "aspect" that may step in. On one hand, there was a hope that the ability to extend functionality in unanticipated ways would prove valuable. On the other hand, this is often the very focus of criticism, for control flow becomes obscured and the whole is "too flexible". Even for debugging, where we are not concerned with performance and reliability, it is always possible to reserve an additional alias, such as logger, dedicated to debug output.
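A Python decorator gives a lightweight imitation of weaving (a sketch, not full AOP): the function entry acts as the join point, and the tracing code is the aspect that steps in there:

```python
# A decorator "weaves" extra behavior around every call; function entry
# plays the role of an AOP join point, the tracing code the aspect.
calls = []

def traced(func):
    def wrapper(*args, **kwargs):
        calls.append(func.__name__)   # aspect steps in at the join point
        return func(*args, **kwargs)
    return wrapper

@traced
def compute(x):
    return x * 2

assert compute(3) == 6
print(calls)  # ['compute']
```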


Aliasing, capturing the idea of connecting components in any combination through predefined connection points, is just one ingredient of the extensibility recipe. Obviously, not all components are interchangeable, and not all combinations make sense. Therefore, we need a validation device to check correctness, one that lets through only good combinations and discards everything else, i.e. associates components with types to ensure compatibility. Please note that a weak type system and the lack of formal verification are, paradoxically, detrimental to extensibility; think of the system as becoming too loose. As the codebase grows, almost any change will break the implicit existing expectations, leading to subtle regressions and crashes. One can compare this with the scene from Gravity (2013) where Clooney's character died because of loosely coupled tethers.

Think of the network of connected components as one consisting of providers, associated with certain features, and consumers, expecting particular features from providers; validation should ensure that the expectations of consumers toward the connected providers are met. Let us use the term alphabet to refer to the set of all such features or characteristics taken into account during validation. Consider a tool designed to index files on a file system; the various file-type-specific extractors may form an alphabet as follows:

{HtmlExtractor, PdfExtractor, SvgExtractor, PngExtractor, DjvuExtractor}

Now we want to declare a FullTextSearch expectation for a full-text search engine that accepts content from all document extractors except djvu:

DocExtractors = {HtmlExtractor, PdfExtractor, DjvuExtractor}
FullTextSearch = DocExtractors AND NOT({DjvuExtractor})

We can see that a fully expressive, total validation system allows selecting any subset of the alphabet as the good one. Conversely, less expressive systems offer less precise means of expressing subsets, limited to some major classes only, such as only text or only image extractors in the example above. From set theory we know of at least one way to build a total system:

  • Directly enumerating appropriate symbols from the alphabet.
  • Combining already defined subsets using set-theoretic operations, such as AND, OR, NOT.

Please note that in software design, new requirements are generally expressed using conjunction only.
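The extractor example maps directly onto set operations; here is the same alphabet and expectation written with Python sets, where validation is plain membership testing:

```python
# The alphabet of features populated by providers.
alphabet = {"HtmlExtractor", "PdfExtractor", "SvgExtractor",
            "PngExtractor", "DjvuExtractor"}

# Subsets built by enumeration and by set-theoretic operations.
doc_extractors = {"HtmlExtractor", "PdfExtractor", "DjvuExtractor"}
full_text_search = doc_extractors - {"DjvuExtractor"}   # AND NOT

# Validation reduces to membership tests.
assert "PdfExtractor" in full_text_search
assert "DjvuExtractor" not in full_text_search
print(sorted(full_text_search))  # ['HtmlExtractor', 'PdfExtractor']
```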


Interfaces, as sets of functions with strictly defined semantics, oblivious to implementation details, are quite a common way to express requirements for components to comply with. Please note that the term 'interface' is used here in a general sense, whereas concrete programming languages offer their own syntactic devices to this end, such as typeclasses in Haskell. Software writers define new interfaces either by specifying all member functions explicitly, or by further constraining existing interfaces, inheriting from them and adding requirements.

Consider a Location interface consisting of the functions GetLatitude() and GetLongitude(). Another interface, GpsLocation, extends it with two new required functions, GetAltitude() and GetSpeed(). Clearly, GpsLocation further constrains Location; therefore not all components supporting Location support GpsLocation as well. In other words, GpsLocation defines a subset among all components providing Location functionality. An interface that extends several "parent" interfaces corresponds to a conjunction (intersection) of the sets defined by the parents individually.
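The Location/GpsLocation pair can be sketched with structural typing (method names adapted to Python convention; the CellTowerFix class is invented for the example):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Location(Protocol):
    def get_latitude(self) -> float: ...
    def get_longitude(self) -> float: ...

@runtime_checkable
class GpsLocation(Location, Protocol):   # further constrains Location
    def get_altitude(self) -> float: ...
    def get_speed(self) -> float: ...

class CellTowerFix:                      # provides Location only
    def get_latitude(self):  return 50.4
    def get_longitude(self): return 30.5

# The GpsLocation subset is strictly smaller than the Location set.
assert isinstance(CellTowerFix(), Location)
assert not isinstance(CellTowerFix(), GpsLocation)
```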

By analogy, we can interpret the disjunction of two interfaces as the union of the sets of components providing either functionality. Is it useful in practice, and when? Consider the VNC protocol, used to remotely control another computer and share its screen. VNC was designed to tolerate different methods (encodings) of transmitting the framebuffer, due to performance and bandwidth considerations. Server and client first negotiate an exact method of transmitting graphic primitives out of several available options.

Therefore, interface disjunction captures the idea that a common purpose (such as screen sharing) may be achieved by conceptually different approaches of varying efficiency in different circumstances (such as sending bitmaps versus vector primitives), each formalized by a different interface. Contrary to interface conjunction, which requires support of all parent interfaces, it is enough to implement at least one parent interface to support an interface disjunction.

However, cases where provider and consumer both support more than one alternative interface result in ambiguity: it is unclear which one to actually use. Resolution requires a notion of preference, allowing to determine which method is better in a given situation. This is one reason why conventional type systems do not allow disjunctive (meta)interfaces, for it goes beyond passive validation toward active adjustment that takes performance considerations into account as well.
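A VNC-style negotiation can be sketched in a few lines (the encoding names are real VNC encodings, but the negotiation function itself is a simplification): an ordered preference list on one side resolves the ambiguity when several alternatives are shared.

```python
# When both sides support several alternative interfaces, an ordered
# preference list picks the best shared one.
def negotiate(client_prefs, server_supported):
    for encoding in client_prefs:        # client order encodes preference
        if encoding in server_supported:
            return encoding
    raise ValueError("no common encoding")

client = ["tight", "zrle", "raw"]        # best first
server = {"raw", "zrle"}
print(negotiate(client, server))  # zrle
```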


In the example with file extractors, we built the consumer's expectations from an alphabet populated by providers' features. Consider the opposite situation: a developer writing a new plugin, such as a new file type extractor, intending to embed it into an existing codebase represented by a set of expectations, such as the FullTextSearch expectation required by the indexing engine to process a file. In essence, this amounts to classification: expressing the properties of a new component by means of the expectations it meets. Although there are many classification systems, they mostly belong to one of two branches:

Hierarchy. All classified objects are grouped into categories, which in turn are grouped into more general categories, and so on. Each subcategory contains a strict subset of the objects belonging to its parent category. Common disadvantages include ambiguity when an object does not fit exactly into any one category, or can be placed into several categories with comparable confidence.

List of tags. Each tag corresponds to the set of classified objects holding that particular property. A list of tags serves as a conjunction of all the individual tags' sets. The common problem is difficult navigation and exploration of similar objects.
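The tag variant is easy to make concrete (file names and tags are invented for the sketch): querying by several tags intersects the corresponding sets.

```python
# Tag classification: each tag denotes a set of objects; a query over
# several tags is the conjunction (intersection) of those sets.
tagged = {
    "report.pdf":  {"document", "text"},
    "photo.png":   {"image", "raster"},
    "diagram.svg": {"image", "vector", "text"},
}

def query(*tags):
    return {name for name, ts in tagged.items() if set(tags) <= ts}

print(sorted(query("image", "text")))  # ['diagram.svg']
```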

A more expressive system, namely multiple hierarchies classification, which allows placing an object simultaneously into several hierarchical categories describing different aspects, appears to be quite a good generalization, for it combines features of both elementary approaches. One curious example from zoology is the platypus, which is classified as a mammal (so we put it under the category `.. / chordates / mammals`) and at the same time has enough similarity with reptiles; we preserve this information by additionally classifying it under `reproduction / egg-laying`.


Classification, in broad terms, is a way to express new things by means of existing definitions. From the extensibility perspective, thinking of components' expectations and requirements (function signatures, interfaces) as "existing definitions" allows us to interpret classification as a way to seamlessly introduce new components into an existing codebase. Taken literally, classification is often equated with describing an object as what it "is": a linked list, for instance, is indeed a container. Another approach, however, is to classify the object's behavior, rather than the object itself, in a few desired contexts only. Consider a library working with graphs, operating over nodes and edges to calculate metrics such as distance, as follows (in declarative ASP syntax):

distance(A, A, 0):- node(A).       % distance between a node and itself is zero

distance(A, B, 1):-                % distance between a node and its neighbours is one
  edge(A, B);
  node(A); node(B).
distance(A, B, 2):-                % have one node in between
  edge(A, C); distance(C, B, 1);
  node(A); node(B); A!=B.

On the other side, suppose there is a database of people and the relations between them:

person(bob; mike; tom).
knows(bob, mike). knows(bob, tom).

Now we would like to find out whether two given persons have friends in common. To solve this, we can view people and their relations as a social graph and check whether the distance between them in the graph equals 2:

node(Person):- person(Person).
edge(A, B):- knows(A, B).
edge(A, B):- knows(B, A).

haveMutualFriends(A, B):- distance(A, B, 2).

This demonstrates local, or context-dependent, classification. We "temporarily" declare an object (a person in the example) as having specific properties (being a node) within a specific context (module, file, etc.) to solve a particular task. One can think of this approach as a variation of the multiple hierarchies classification system.
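The same context-dependent view can be rephrased in Python (a sketch mirroring the ASP rules above, not a general graph library): within this context only, persons are treated as nodes and the symmetric closure of `knows` as edges.

```python
# Local classification: inside this context, persons act as graph nodes
# and "knows" relations as (undirected) edges.
knows = {("bob", "mike"), ("bob", "tom")}

def edges(a):
    # undirected view over the "knows" relation
    return ({y for (x, y) in knows if x == a}
            | {x for (x, y) in knows if y == a})

def have_mutual_friends(a, b):
    # distance 2 in the social graph: a shared neighbour exists
    return a != b and bool(edges(a) & edges(b))

assert have_mutual_friends("mike", "tom")     # both know bob
```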

In practice, languages that do not support this sort of typing introduce ambiguity. For example, a great variety of string processing functions exists; in OOP languages, should all of them be members of the string class or not? They are directly related, of course, but for any single task only a small selection of them is needed.


Modern software solutions often consist of separate tools and technologies, with a mix of pieces written in statically typed as well as dynamic, scripting languages. Besides, there are projects, such as the computer and network monitoring software Munin, that consciously pursue the design goal of allowing functionality to be extended with plugins written in any language. This situation raises interesting questions about communication and validation at the inter-component level. Plain text formats prove their usefulness again and again (one of the best known examples is the HTTP protocol), for basic string manipulation routines are present in any language no matter what.

Since in a heterogeneous environment we can no longer rely on built-in validation mechanisms, such as the type systems provided by individual languages, the idea is to build string-based validation by requiring each component to have a signature — an arbitrary string describing its features. Signatures are then matched against a specification — text encoding a subset of compatible components. Additionally, we require the validation mechanism to be implemented as a fixed and preferably very simple algorithm, taking specification and signature strings as its input and deciding compatibility using basic routines. What is the most expressive validation system these requirements leave us with? Several algorithms, ordered by complexity, are considered below:

  • Prefix matching. Probably the simplest algorithm is to test whether the specification matches the beginning of the signature, i.e. is a prefix of the signature. A prefix check can serve as single-hierarchy validation, checking a single feature or a whole category of features.
  • Subsequence matching. In contrast to prefix matching, it tests whether the specification is a subsequence of the signature, i.e. the signature contains all symbols from the specification, in order, possibly with additional symbols in between. This algorithm is expressive enough to serve as multiple-hierarchy validation.
  • Regex matching. It turns out that regular expressions are powerful enough to encode a total validation system. In other words, it is possible to construct a regexp matching any required combination of conjunctions, disjunctions, or negations of individual features or their classes.
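The three algorithms can be sketched against a made-up signature string (the `extractor/document/pdf` naming scheme is invented for illustration); note how lookaheads give the regex variant conjunction and negation:

```python
import re

signature = "extractor/document/pdf"

# 1. Prefix matching: a single-hierarchy check.
assert signature.startswith("extractor/document")

# 2. Subsequence matching: every symbol of the spec appears in order.
def is_subsequence(spec, sig):
    it = iter(sig)
    return all(ch in it for ch in spec)

assert is_subsequence("exdoc", signature)

# 3. Regex matching: lookaheads express conjunction and negation,
# alternation expresses disjunction, yielding a total system.
spec = r"^(?=.*document)(?!.*djvu).*(pdf|html)"
assert re.search(spec, signature)
assert not re.search(spec, "extractor/document/djvu")
```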


Now we can recognize the concept of extensibility as a joint enterprise of two opposite forces — flexibility and rigidity — each playing an important role.

A flexible system enables components to be connected in any desired way. For total flexibility, it is sufficient to fulfill, perhaps indirectly, the following criteria:

  • Ability to declare a fixed set of aliases for components, which can be viewed as connection slots or ports, such as inputs and outputs; these essentially constitute an ingrained part of a component's formal description and documentation.
  • Ability to attach (replace, detach) components in a desired way, statically or dynamically. One useful variation is based on one-to-many connections.

A rigid system, on the other hand, ensures that all possible combinations of components are correct and optimal. It features the following properties:

  • Ability to define a shared alphabet as a set of terms for consumers and providers to negotiate with before the actual communication: for the former, to communicate their expectations and formal requirements (such as interfaces); for the latter, to communicate their properties and features. Such an alphabet constitutes another ingrained part of the formal description.
  • Ability to derive new requirements by conjunction, disjunction (which may require performance considerations), or negation of existing ones.
  • Ability to provide classification means as a way to declare support of one or more requirements, globally or locally (within a specific context).