Software design aspects

Software design: The art of fitting a software system to a context

Software design happens on many levels. On one hand we have the small: functions, classes and the like. On the other hand we have the large: integration into an organisation, interplay with other software and hardware, interplay with larger systems - cars, aircraft, phones and their users. It is the designer's job to make the system fit its context as well as possible within the given limits - and to propose that the limits may be incorrect when the need arises.

This includes software architects, software developers, software testers (and potentially others, depending on the roles in your organisation) and can include roles from the context, e.g. business architects, operations engineers, end users.

Due to the large number of people commonly involved in the design of a large system and its parts, process and communication problems can occur: one role ends up with a lot of trouble that could easily have been avoided by a change made by another role. It takes experience to recognize when this happens - and to do something about it. I'll discuss some of the scenarios I've run into under each aspect.

Aspects

Software design is often divided into aspects. Common divisions include functional and non-functional (the latter called quality attributes in testing circles), or, as in the case of ISO 9126: Functionality, Reliability, Portability, Efficiency, Maintainability and Usability, with other aspects treated as part of one of these six. ISO 25010 uses eight main aspects. There are many possible breakdowns; I will not use any particular one, but rather explore some aspects that I personally have experience with in the rest of this article.

But first, an aside on interaction between aspects.

Some aspects are weakly connected, or should be, unless you explicitly design in a connection. An example would be modularity and correctness: how correct a system is should not depend on how modular it is.

Others are strongly connected, unless you explicitly design for de-coupling. An example would be testability and changeability: if a system is easy to test, it also becomes easier to change, since you can just change it and test for differences.

Pretty much all aspects have at least some impact on most others, indirectly if not directly.

A trap that business people in particular sometimes fall into is thinking that these aspects are all disconnected. As a developer you might hear things like "Portability is not important, don't spend any money on that" - when in fact, spending just a couple of hours making sure you understand the portability consequences of your design choices will spare you a whole world of trouble later, when requirements change.

In fact, I would argue that it is your professional duty as a designer to keep all design aspects in mind when designing, partly because nobody else will, and partly because stakeholders have expectations about how much effort is required to change something in the system. This mindset helps you set those expectations correctly and communicate them to stakeholders. Note 1

Testability

Software testability - Ease and simplicity of testing software (automatically)

Executive summary

If you can't show that your code does what you intended, what good is it? Make it easy to test, and you spend less time testing and writing tests - and, in turn, less time worrying about whether it works or not.

Example 1

Using Selenium for UI testing is fairly common, since it is one of the more stable ways to test UIs - at least compared to image recognition. A simple and useful practice when using Selenium is to add custom attributes to elements in order to find them easily in the tests. An extension of this is adding new elements or tags purely for testing purposes (e.g. marking up numbers so that there is no need for pattern matching).
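As a minimal sketch of the idea (in Python with Selenium), assume the page renders a button as <button data-testid="submit-order">Place order</button>; the attribute name "data-testid" and the URL are illustrative assumptions, not a standard:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.com/checkout")

    # Locate by the test-only attribute instead of a brittle XPath or CSS path
    submit = driver.find_element(By.CSS_SELECTOR, '[data-testid="submit-order"]')
    submit.click()

    driver.quit()

Because the locator targets an attribute that exists purely for testing, the test keeps working even if the button's text, styling or position in the DOM changes.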

This practice makes the tests more resilient to UI changes and turns what could be hours of tester work into minutes or seconds of developer work.

I highly recommend that testers simply ask a developer to put in custom attributes and tags for ease of testing, since it is very simple to do in most cases. If allowed, testers can add said custom attributes themselves, provided they know how.

Example 2

Splitting code into decision and dependency parts makes it easier to unit test. Functional core, Imperative shell is one pattern for this. The style can be used effectively at the unit level and, to some degree, in higher-level tests. The main reason it makes testing easier is that it gets rid of test doubles in favour of pure values - something useful for API tests as well. It also aids maintainability, since there are no test doubles per se that need maintenance.
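A minimal sketch of the split, with hypothetical names (apply_discount, checkout, load_order, save_order) chosen purely for illustration:

    # Functional core: a pure decision, unit-testable with plain values.
    def apply_discount(total: float, is_member: bool) -> float:
        return total * 0.9 if is_member else total

    # Imperative shell: the thin layer that talks to the outside world.
    def checkout(order_id: str, db) -> None:
        order = db.load_order(order_id)                               # dependency (I/O)
        order.total = apply_discount(order.total, order.is_member)    # decision
        db.save_order(order)                                          # dependency (I/O)

    # The core is tested without any test doubles:
    assert apply_discount(100.0, is_member=True) == 90.0
    assert apply_discount(100.0, is_member=False) == 100.0

All the interesting logic lives in the pure function, so the tests need nothing but values; only the thin shell ever touches the database.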

TDD

Test Driven Development forces testability by design - since you write the tests first. While it is certainly possible to write the same tests and code as you would without TDD, the design pressure is to make the code easier to test in the first place.
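A tiny sketch of the rhythm, using a hypothetical leap_year() function as the example:

    # Written first: the test states what we want and how we want to call it.
    def test_leap_year():
        assert leap_year(2000) is True
        assert leap_year(1900) is False
        assert leap_year(2024) is True
        assert leap_year(2023) is False

    # Written second: just enough implementation to make the test pass.
    def leap_year(year: int) -> bool:
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    test_leap_year()

Because the call site exists before the implementation, awkward-to-test designs (hidden dependencies, tangled side effects) get pushed back on immediately.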

In fact I would consider this one of the primary benefits of TDD.

Continuous Integration

If you are doing continuous integration (in the sense of integrating into a common place at least once a day), testing - and testability by extension - becomes much more critical. If it is easy to write and maintain tests, you can have more tests for the same amount of work - and more (reliable) tests tend to mean smoother integration with the rest of the codebase.

In fact, I would go so far as to say that I've never seen a codebase that was not testable by design work well in CI: it either results in lots of broken builds (that the rest of the development team has to put up with) or in "fake" CI (running a CI daemon and calling it good, but still keeping lots of long-lived feature branches).

Performance

Software efficiency: The amount of resources used for a given amount of useful work.
Response time: The time from input to output (success or failure) in a system.

Performance is not an aspect in itself, but it is a useful aggregation (in my opinion). Response time, or time behaviour, is usually what is primarily meant when talking about software performance, but resource utilisation is a close second.

Software performance is always a trade-off between money spent (development time, hardware purchased or rented) and user experience. Thus, talking to a UX expert can be very useful for determining limits when designing UIs. Consider that if your system is fast enough, you don't need a progress/loading indicator.
If your system is not directly user-facing, talk to your users or a business representative (e.g. a Product Owner) about reasonable performance limits.

I like to split performance into external and internal, because how a software designer should treat them can be very different.

Internal performance is what I call CPU and RAM utilisation; disk access can potentially fall here too, but that depends on your execution environment.

External performance is everything else: network-attached resources (databases, disks, etc.) and attached hardware (sensors, serial & parallel buses, printers, etc.).

Internal

Generally speaking, you can wait to optimize internal performance until after profiling, that is, after the first design & implementation. This has big advantages: you can address hot spots first and spend only as much time optimizing as is actually needed, and nothing more.
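A minimal profiling sketch using Python's built-in cProfile; hot_path() is a hypothetical stand-in for the real workload:

    import cProfile
    import pstats

    def hot_path():
        return sum(i * i for i in range(1_000_000))

    # Profile the workload and dump the stats to a file
    cProfile.run("hot_path()", "profile.out")

    # Show the ten most expensive call sites - the candidates worth optimizing
    stats = pstats.Stats("profile.out")
    stats.sort_stats("cumulative").print_stats(10)

The point is the order of work: measure first, then spend effort only where the report says the time actually goes.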

Caveat: you need to consider how difficult it will be to multithread or distribute the workload. It is a good idea to design systems with clear internal boundaries that make it possible to split the work across multiple threads. When choosing data structures, consider that mutable data makes multithreading much more difficult.
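A small sketch of why immutable data helps: a pure function over immutable inputs parallelises cleanly because there is no shared state to lock. The score() function and the sample items are hypothetical stand-ins:

    from concurrent.futures import ProcessPoolExecutor

    def score(item: tuple) -> int:
        # Pure: reads its input, touches nothing shared
        name, value = item
        return value * value

    items = [("a", 1), ("b", 2), ("c", 3)]   # immutable tuples, no locks needed

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(score, items))
        print(results)   # [1, 4, 9]

If score() instead mutated a shared structure, the same split would require locks or careful coordination, and the clean internal boundary would be gone.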

Second caveat: A profiler will not directly show you if you are using an inefficient algorithm. Note 2

External

If you are lucky, you can get away with designing, implementing and profiling here as well. But sometimes that is not feasible - or profiling is harder than calculating ahead of time. The timing of low-speed serial buses or sensors can in many cases be reasonably calculated up front, down to microseconds. A profiler and a simple test program can help you figure out how fast a device is; then you can extrapolate for the different possible designs and compare.
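A back-of-the-envelope sketch of such an up-front calculation; the 9600 baud rate, the 10-bit frame and the 512-byte payload are illustrative assumptions:

    baud_rate = 9600          # bits per second
    bits_per_byte = 10        # 8 data bits + start + stop bit
    payload_bytes = 512

    seconds = payload_bytes * bits_per_byte / baud_rate
    print(f"~{seconds * 1000:.1f} ms per transfer")   # ~533.3 ms

Even a rough number like this tells you whether a design that polls the device in a tight loop is plausible, before you have written any of it.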

If you have a domain expert for the attached resource, use them. This includes DBAs, hardware experts, operations experts, etc. They should know roughly what the performance impact of your intended designs will be. Talk to them, see if they know a better way, and find out what constraints that puts on your design. You will need to bring them something tangible, though - e.g. an SQL query & expected load to a DBA.

Testing

Performance is testable by a number of different kinds of performance tests. Do the types of tests that make sense for your application. Using a profiler while performance testing can improve your understanding of what is being tested. Be aware that a profiler can affect the performance of your software negatively, so don't always run with a profiler attached.
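As one very small example of such a test, a response-time check against a budget; the 200 ms budget and handle_request() are illustrative assumptions, not real requirements:

    import time

    def handle_request():
        time.sleep(0.05)   # stand-in for the code under test

    start = time.perf_counter()
    handle_request()
    elapsed = time.perf_counter() - start

    assert elapsed < 0.200, f"response took {elapsed * 1000:.0f} ms, budget is 200 ms"

Load tests, soak tests and the like build on the same idea: measure against an agreed limit, automatically, and fail loudly when the limit is exceeded.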

Note 1: This is one of the reasons Domain Driven Design works well: it aligns the software with the domain, so that a hard change in the domain is hard-ish in the software, and an easy change in the domain is easy in the software.

Note 2: Algorithms are a complete field of study in themselves, one worth knowing at least the basics of, even if you only design web apps.

First version: 2020-07-05

Added intro: 2020-08-??

Added performance: 2020-08-17

by Peter Lindsten.