The Design of Everyday Code

There is something peculiar about our industry – we love memes: short acronyms we cannot forget, like DRY, GRASP and (the most famous one) SOLID – the hero of today's ramblings. SOLID is considered a foundation of modern software development practice, next to clean code, test-driven development and probably a few others. What bothers me is that we have become a bit dogmatic here, without grasping the essence of those principles. What does SOLID really stand for? Why are those practices essential? What's so appealing about them?

SOLID deconstruction

Let's start with a bit of history. The principles themselves were formulated in the early 2000s by Robert C. Martin (better known as Uncle Bob), while the acronym SOLID was coined later by Michael Feathers. Regardless of naming, it's all about five distinct rules: Single Responsibility, Open–Closed, Liskov Substitution, Interface Segregation and Dependency Inversion.

Just to recap: SOLID is an acronym made from the first letter of each of the principles, which together constitute the basis of Object-Oriented Design. In its original form SOLID stands for[1]:

  • The Single Responsibility Principle – a class should have one, and only one, reason to change. To be precise, a reason here means a concept: a class should encapsulate a single concept.
  • The Open–Closed Principle – you should be able to extend a class's behaviour without modifying it.
  • The Liskov Substitution Principle – derived classes must be substitutable for their base classes. In plain English: we must be able to plug in different behaviours based on the same base type.
  • The Interface Segregation Principle – make fine-grained interfaces that are client-specific. Like single responsibility, but for interfaces.
  • The Dependency Inversion Principle – depend on abstractions, not on concretions, which refers us back to the substitution principle.
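To make the last two principles a bit more tangible, here is a minimal Java sketch – the `Notifier` interface and its implementations are invented for illustration, not taken from any real library. A high-level class depends only on an abstraction (Dependency Inversion), and any conforming implementation can be swapped in (Liskov Substitution):

```java
// Abstraction the high-level code depends on (Dependency Inversion).
interface Notifier {
    String notify(String message);
}

// Two substitutable implementations (Liskov Substitution).
class EmailNotifier implements Notifier {
    public String notify(String message) {
        return "email: " + message;
    }
}

class SmsNotifier implements Notifier {
    public String notify(String message) {
        return "sms: " + message;
    }
}

// High-level policy: knows only the abstraction, never a concrete class.
class AlertService {
    private final Notifier notifier;

    AlertService(Notifier notifier) {
        this.notifier = notifier;
    }

    String raise(String message) {
        return notifier.notify(message);
    }
}
```

`AlertService` never changes when a new kind of notifier appears – a new implementation is simply plugged in through the constructor.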

Kevlin Henney did a great job drilling into those principles in his talk "SOLID deconstruction"[2] – so I don't think I need to elaborate on this further.

Essentially, with Kevlin's approach, we are left with the Single Responsibility Principle (SRP), as all the others are either no longer relevant (Open–Closed in its original sense), very language-dependent (Interface Segregation) or repetitive (Dependency Inversion being a way of implementing SRP together with Liskov substitution).

We can obviously go further – and that's what Kevlin essentially did in his talk. We can ponder what _single_ exactly stands for in the Single Responsibility Principle. Kevlin's interpretation of _single_ related to cohesion, in the sense of Tom DeMarco's definition, or of Glenn Vanderburg's article on the topic.[3]


"(...) cohesion is a measure of the strength of association of elements inside a module."
-- Tom DeMarco, Structured Analysis and System Specification


We can also leave it like this and try to tackle the S in SOLID slightly differently.

Entering the Design 101

With the technical aspects of SOLID so neatly described and understood, there isn't much space left for 'another clean code article'. Is there any purpose in merely referencing those great talks and articles (which, apparently, I have already done)? So let me take a different approach and give SOLID, clean code and the rest a psychological twist. Let's ponder why clean, SOLID code, flavoured with appropriate test coverage, is easier to work with, easier to maintain and easier to extend. I've never seen anyone doubt that it is.

Our work, the code we write, the quality of the software we deliver – all of it is driven by various principles we hold deep in our hearts. But we hardly ever question them, or ask what is so appealing about them. Where does this traction come from?

This is how we enter the uncharted waters of behavioural psychology and cognitive science.

As ridiculous as this may sound, it appears to be critical for understanding the importance of the SOLID principles. Some well-known papers and studies in this area form the foundation of every designer's creative work. Let's see whether they apply to programmers' creative work as well. In the next few paragraphs we will look at the essence of physical object design, at affordances and signifiers, at how they impact our perception, and at how the design of physical objects takes advantage of the way our brain and memory work.

Design of physical things

"(...) *affordance* refers to the perceived and actual properties of the thing, primarily those fundamental properties that determine just how the thing could possibly be used."
-- Donald A. Norman, The Design of Everyday Things

The term affordance originated with the perceptual psychologist James J. Gibson and was brought into the design world by Donald Norman in "The Design of Everyday Things", first published in 1988.[4] One might wonder how on earth it is related to programming.

Let me show this by example. Have a look at, and compare, the two listings below: a single Sql class, and an Sql class hierarchy (a strategy). Both do pretty much the same thing – abstract particular SQL operations (probably not in the best possible way). But it's not the leaky abstraction I'd like to focus on – it's the intention these two snippets reveal.

The first breaks multiple principles of OOP, starting with single responsibility and being closed for modification. A new requirement, say an SQL update operation, will require changing the class, adding a new method (probably following the existing coding standards) and… running away. That's not a situation we would like to find ourselves in.

On the other hand, the latter snippet, the SQL class strategy, invites a completely opposite attitude towards modification of the code. The limited scope of required changes (or the safe extension of existing code) makes us feel far less anxious when it comes to this legacy.

public class Sql {
   public Sql(String table, Column[] columns);
   public String create();
   public String insert(Object[] fields);
   public String selectAll();
   public String fieldByKey(String keyColumn, String keyValue);
   private String columnList(Column[] columns);
   private String valuesList(Object[] fields, final Column[] columns);
}


abstract public class Sql {
   public Sql(String table, Column[] columns);
   abstract public String generate();
}
public class CreateSql extends Sql {
   public CreateSql(String table, Column[] columns);
   @Override public String generate();
}
public class SelectSql extends Sql {
   public SelectSql(String table, Column[] columns);
   @Override public String generate();
}
public class InsertSql extends Sql {
   public InsertSql(String table, Column[] columns);
   @Override public String generate();
   private String valuesList(Object[] fields, final Column[] columns);
}
public class FindKeyBySql extends Sql {
   public FindKeyBySql(String table, Column[] columns, String keyColumn, String keyValue);
   @Override public String generate();
}
public class ColumnList {
   public ColumnList(Column[] columns);
   public String generate();
}
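To see the open-for-extension property in action, here is a runnable distillation of the strategy shape above. It is a sketch under assumptions: the classes are renamed (`SqlStatement` instead of `Sql`, to avoid clashing with the listing) and the method bodies are invented for illustration, since the original shows signatures only. The point it demonstrates: a new requirement means a new class, not an edit to an existing one.

```java
// Minimal, self-contained variant of the strategy-shaped Sql hierarchy.
abstract class SqlStatement {
    protected final String table;

    SqlStatement(String table) {
        this.table = table;
    }

    abstract String generate();
}

class SelectSqlStatement extends SqlStatement {
    SelectSqlStatement(String table) { super(table); }

    @Override
    String generate() {
        return "SELECT * FROM " + table;
    }
}

// A new requirement - update - becomes a new class; existing
// classes stay untouched (open for extension, closed for modification).
class UpdateSqlStatement extends SqlStatement {
    UpdateSqlStatement(String table) { super(table); }

    @Override
    String generate() {
        return "UPDATE " + table + " SET ...";
    }
}
```

Calling `new UpdateSqlStatement("users").generate()` works without any change to `SqlStatement` or `SelectSqlStatement` – which is exactly what keeps the anxiety level down.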


In design theory, affordances provide strong clues for how to operate things – physical things.

Surprisingly, when it comes to the virtualised world of computer programs, judging just by those two code listings, it tends to be exactly the same. Where affordances are taken advantage of, the user knows what to do at a glance: no comments, no instructions needed.

This is based on two premises, two fundamental principles of Design for Understandability and Usability:

  • provide a good conceptual model
  • make things visible

The conceptual model allows us to predict the effects of our actions, to understand them and to reason about them. It is the model we have built through experience, training and instruction – like design patterns, conventions and best practices (things extremely popular in our industry). Affordances support conceptual models: by making the clues visible, they give users clear information on how to operate everyday things. Affordances act as indications, visual signals that can be meaningfully interpreted and that make the relevant parts visible.

We can apply similar rules to both physical objects and code (especially at the micro scale – blocks, methods, functions, classes): if simple things need labels, or instructions, or pictures, the design has failed. The difference is that we are not addressing a user in the sense of a consumer of an end product; it is rather the fellow developer who sits next to us and who will literally use the code that has been written.
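A tiny, hypothetical illustration of that failure and its fix (the `AccountReport` class and its methods are invented for this example): the first method needs a label – a comment at every call site – to be understood, while the second carries the same information in its signature.

```java
class AccountReport {
    // Needs an instruction: what do the two booleans mean at the call site?
    // print(true, false) tells the reader nothing.
    static String print(boolean activeOnly, boolean asCsv) {
        return (activeOnly ? "active" : "all") + "/" + (asCsv ? "csv" : "plain");
    }

    // Self-describing: the names are the affordance, no comment required.
    enum Scope { ACTIVE_ONLY, ALL }
    enum Format { CSV, PLAIN }

    static String print(Scope scope, Format format) {
        return (scope == Scope.ACTIVE_ONLY ? "active" : "all")
                + "/" + (format == Format.CSV ? "csv" : "plain");
    }
}
```

`print(Scope.ACTIVE_ONLY, Format.CSV)` reads as an instruction in itself; `print(true, false)` needs a trip to the documentation – the label that signals the failed design.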

Problems start when things get complex – inherently complex by the nature of the problem domain being solved – when the number of available actions exceeds our limited brain capacity. In fact… what are those brain limits?


Can't remember all these

The answer is simple and might surprise you. It is seven (more or less).

This number hasn't come out of nowhere – it comes from one of the most highly cited papers in psychology: "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information" by George Miller.[5] In essence, the paper states that immediate memory (or working memory, which might be a more familiar term) imposes severe limitations on the amount of information people are able to receive, process and remember. Whether it is seven, or five, or any other number is disputable and research-dependent. Some theories argue against measuring memory capacity in discrete numbers at all. Others suggest that the capacity can be as low as three (three words, three concepts, three numbers).

These are all valid points – but regardless of the exact number, they essentially tell us not to overwhelm users with information. Be kind: limit the number of clues and elements required to understand the code that has just been written. Whether they are method parameters, class attributes or the number of interfaces being implemented, these are all prerequisites (an up-front investment) for grasping the essential complexity of the problem. Properly structured information (separate concepts), in bite-size portions (small entities) – that is exactly what the 'Functions' chapter of the Clean Code book was about.[6]
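A hypothetical sketch of that idea applied to method parameters (the `Address` and `Shipping` classes are invented for illustration): a long parameter list makes the reader hold each argument in working memory separately, while chunking them into one named concept leaves a single item to keep in mind.

```java
// Before (signature only): seven loose arguments compete for the
// reader's working memory at every call site:
//   shipTo(name, street, city, zip, country, express, giftWrap)
// After: the address fields are chunked into one concept.
class Address {
    final String name, street, city, zip, country;

    Address(String name, String street, String city, String zip, String country) {
        this.name = name;
        this.street = street;
        this.city = city;
        this.zip = zip;
        this.country = country;
    }
}

class Shipping {
    // One chunk ("an address") plus one flag - well within the budget.
    static String shipTo(Address address, boolean express) {
        return (express ? "express to " : "standard to ") + address.city;
    }
}
```

This is the same move Miller's subjects made when memorising digits: grouping items into larger, meaningful chunks so that fewer slots of working memory are occupied.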

But what happens if the number of elements or concepts exceeds the magical number, no matter whether it's seven, five, four or three? That's where the learning process needs to start.

How does memory work?

Let me refer to another psychology paper, this time from the late sixties: "Human memory: A proposed system and its control processes" by Richard Atkinson and Richard Shiffrin. It suggests that memory is made up of a series of stores and describes it in terms of information flow through the system (named, after its authors, the Atkinson–Shiffrin model).

The good news is that we are not limited to a single, limited-capacity working memory. There are multiple stores (or registers) which are used in different situations (and under different stimuli) – as described below.



The model describes human memory as three separate components:

  • sensory register
  • short-term store
  • long-term store

Obviously, since the research was published, much discussion and criticism has arisen – nevertheless, the model has had a significant influence on subsequent memory research.

From our programming perspective, the most important bit is the rehearsal process, but we will get to that later. We don't have much interest in sensory memory: it has a short duration (less than half a second) but a large capacity, and it is not something that influences how programmers work. Interesting things start when sensory information is passed to the short-term store.

This is a store of limited capacity (seven items, seven chunks of information – which brings us back to Miller's work on working memory) and limited duration (up to 30 seconds, depending on the research).

This becomes crucial in software development, where building complex abstractions is a natural process. It all takes place in our brains while writing new code or trying to understand existing code, and it heavily uses short-term memory – a very fragile store, where information can be lost through distraction or the mere passage of time. We have all experienced that fragility: a single desk visit can ruin the whole model we had in mind. Avoiding this is straightforward and is the practical application of well-known principles: build small entities (classes) with meaningful visual clues (good naming), so that the abstraction needed for understanding remains concise.

Otherwise, we will be forced to continuously rehearse the information – which ultimately means memorising things, putting them into long-term memory, learning them. Long-term memory is a store of unlimited capacity and duration, but it is not easy to write to. Writing to it requires significant effort (for many people) – it has a huge write latency (think of optical drives: the save always works, but it takes time). If the code we write is easy to understand and doesn't require up-front learning, the whole development cycle becomes much faster, and becoming effective takes far less time.


So what is the point of knowing all that? Well, there is an amazing number of things we need to remember on a daily basis: classes, methods, functions scattered around the different packages of our applications. We have found ways to handle all that complexity, but as I've shown, this learning comes at a price.

Thankfully, there are ways to make working with code easier, and the key to them is held by the psychology of human thought and cognition – by how our mind works. There are instructions embedded in the things we interact with, both physical and virtual (remember the code snippets?). These helpful signifiers live in the code itself and come from authors' ability to make their intent clear and to leverage things people are expected to know (like design patterns).

With every line of code, a micro-scale design is done. It will either confuse fellow programmers or make their lives much easier. These physical-object design principles can help us understand the essence of the coding rules we might have become dogmatic about – SOLID, DRY and many others. And ultimately, they will help us become better programmers – more empathetic programmers.


[1] The original essays from Uncle Bob's "Clean Code" book are available on the old Object Mentor blog, together with a 2005 discussion that reveals some details

[2] Kevlin Henney, "SOLID Deconstruction", NDC 2016 (video)



[3] Glenn Vanderburg, "Cohesion" (article)

[4] Donald A. Norman, The Design of Everyday Things (first published 1988)

[5] George A. Miller, "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information" (1956), available at Classics in the History of Psychology

[6] Robert C. Martin, Clean Code, chapter 3: "Functions"