Designing infrastructures and services is a very challenging task: designers must take into account the maintainability of the infrastructure or service, while also guaranteeing quick delivery when service requests come in and staying compliant with security requirements.

While designing infrastructures or services, professionals should:

  • take into account existing corporate workflows and processes
  • comply with corporate standards
  • comply with regulatory requirements
  • comply with the expected Service Level Agreements
  • plan a practical path for keeping everything current, limiting or even avoiding service interruptions while patching and upgrading
  • design the backup and restore processes
  • design and implement business continuity, if necessary

You may be tempted to think that the difficulty lies in having a thorough knowledge of design patterns and protocols, but that is still just scratching the surface. Although it seems that the designer should “only” turn the requirements into reality, in the real world things go differently: professionals are often engaged with very few, ill-defined requirements, coming from people who are closer to the business than to IT.

This means that in practice it is up to the designer to ensure that the provided requirements cover everything the use case needs, adding further ones if necessary.

In addition, the professional should design the whole process, not only the infrastructure or services: in a lean environment that relies on service requests, the designer should also take care to design all the workflows that integrate what they are designing into the existing set of workflows. This means that before designing the resource, the designer should anticipate the workflows that let users request access to the resource or withdraw from it. These workflows must comply not only with corporate standards but also with security regulations. Most of all, they should be easy to use, since they are targeted at users who already have plenty to do and no time to learn inefficient processes. I think this part, which is probably the most challenging, is often underestimated, with the outcome of an overall delay in service delivery, wasting time and money on projects and slowing down time to market.

Documenting the project is a critical part of the design process: since there are both business and technical audiences, it is not easy to draw up something that looks appealing and clear to the business while also providing enough implementation detail for the technical people.

For this reason, the traditional top-down approach is to write:

  • the High-Level Design Document: this document is mostly business oriented, describing the project in business terms and highlighting costs and benefits, but it must also provide the bare minimum of technical detail to let business-oriented people figure out the final outcome, as well as the details necessary to start writing a more detailed technical document. Personally, I like the approach described in the NASA Systems Engineering Handbook: the High-Level Design Document gets improved and updated while the technical documents that address the low-level requirements are being written, since only during that stage are a lot of hidden problems unveiled and addressed; this also enables more accurate estimates, by the way.
  • the Software Design Document: this is the document actually used to implement the project, so it must be clear enough and provide enough detail to enable the people working on the project's implementation to work as independently as possible, avoiding pointless meetings that only waste their time and delay the final delivery. Mind that this document is quite agnostic and provides only a few hints about deployment - deployments are managed through a different document (each deployment is an instance, so a dedicated document is necessary). If the deployment is small, a diagram with a few details and a testing report are probably enough, but on huge deployments it is necessary to write a Deployment Plan Document.
Agile purists will object that the top-down approach has been superseded by bottom-up and that Agile transferred this part of the job to the teams. That is the theory - in my personal experience the Agile bottom-up approach relies on a lot of meetings to make people aware of what is happening. This works well with small projects, but it tends to become less effective with mid-size projects and does not work at all with big projects. The outcome of that approach is poorly written documentation and components designed in the differing styles of the people involved, which may hide problems that surface later (especially from the operating or maintenance point of view) because of the limited point of view available to each of them. In addition, on big projects it is not as cost effective as they say, if you consider the cost of the increased number of meetings and the number of participants - remember that the cost of a team member is their actual pay plus the missed earnings from not having them producing. So, long story short: I have nothing against using bottom-up design in Agile, if it is properly used in the correct use cases, but blindly using it will only lead your projects into a mess. As someone else already said, "Agile comes with no brain, please use yours".

Writing both of the above documents is not an easy task at all, so I wrote this post to provide a template with a use case: an SDD (Software Design Document) template for a Kubernetes service - more specifically, a microservice of an existing service.

As you will see, this document leverages and complies with the company-wide Software Design Document: that global document provides the taxonomy, standards and best practices every project in the corporation must comply with.

Read more >

The aim of this post is to show a tidy way to structure a C or C++ project, managing the build lifecycle with GNU Make and packaging the result as an RPM.

The post demonstrates a full-featured C project managed by Make and packaged as an RPM, showing how to set up a tidy structure and how to develop and package a C application with its own shared objects that reads its configuration from a file, validates settings, logs events to a file and handles error conditions by printing to standard error and properly setting the shell return code.
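To give an idea of what such a structure can look like, here is a minimal Makefile sketch. It is not taken from the post itself: the layout and names such as src/, include/, libfoo.so and myapp are hypothetical placeholders for an application linked against its own shared object.

    # Hypothetical layout: sources in src/, public headers in include/, artifacts in build/
    CC      := gcc
    CFLAGS  := -Wall -Wextra -fPIC -Iinclude
    PREFIX  ?= /usr/local

    all: build/myapp

    # Build the shared object from the library sources
    build/libfoo.so: src/foo.c include/foo.h
    	mkdir -p build
    	$(CC) $(CFLAGS) -shared -o $@ src/foo.c

    # Link the application against the shared object
    build/myapp: src/main.c build/libfoo.so
    	$(CC) $(CFLAGS) -o $@ src/main.c -Lbuild -lfoo

    # DESTDIR allows staged installs, which the RPM %install stage relies on
    install: all
    	install -D -m 0755 build/myapp $(DESTDIR)$(PREFIX)/bin/myapp
    	install -D -m 0755 build/libfoo.so $(DESTDIR)$(PREFIX)/lib/libfoo.so
    	install -D -m 0644 include/foo.h $(DESTDIR)$(PREFIX)/include/foo.h

    clean:
    	rm -rf build

    .PHONY: all install clean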

This post is certainly useful not only to developers, but to anybody who wants to learn how to build third-party C or C++ software, since it clearly describes the compilation and linking process. In addition, we also learn how to create the product certificate that can be exploited by subscription-manager to know that the product is installed on the system.

The application is then packaged not only as a gzipped tarball but also as an RPM, creating the application package, a package with the development resource files (the C include files) and a package with the debug information that can be used with a debugger to troubleshoot things.
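As a hint of how those subpackages are laid out, here is a minimal spec file sketch; the package and file names are hypothetical and match the Makefile sketch above rather than the post. Note that a real spec would use %{_libdir} for the shared object, and that on a modern rpmbuild the -debuginfo subpackage is generated automatically when the binaries carry debugging symbols.

    Name:           myapp
    Version:        1.0.0
    Release:        1%{?dist}
    Summary:        Example C application with a shared object
    License:        GPLv3
    Source0:        %{name}-%{version}.tar.gz
    BuildRequires:  gcc, make

    %description
    Example C application, packaged as an RPM.

    # Subpackage carrying the development resources (the C include files)
    %package devel
    Summary:        Development files for %{name}
    Requires:       %{name} = %{version}-%{release}

    %description devel
    Header files needed to build software against %{name}'s shared objects.

    %prep
    %setup -q

    %build
    make %{?_smp_mflags}

    %install
    make install DESTDIR=%{buildroot} PREFIX=%{_prefix}

    %files
    %{_bindir}/myapp
    %{_prefix}/lib/libfoo.so

    %files devel
    %{_includedir}/foo.h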

This post is focused on the C programming language, but most of the concepts related to the build lifecycle managed with GNU Make apply to C++ too: I chose C only to show a way of doing things that works even with a legacy (yet still powerful) programming language. In addition, be aware that I strive to cover most scenarios: this means I am showing things that are not always necessary in every use case.

Read more >

Infrastructures are the foundations used to provide services: since services are subject to confidentiality and availability requirements, infrastructures must be designed to provide several confidentiality and availability tiers. This way a service can be placed on the part of the infrastructure that meets the availability and confidentiality requirements of its use case. This means that one of the very first things to do when designing infrastructures is defining the corporate standard tiers.
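As a purely illustrative example (the names and figures below are hypothetical, not taken from the post), a corporate tier catalog might look like this:

  • Tier 1 - mission critical: 99.99% availability target, multi-site business continuity, restricted-confidentiality data allowed
  • Tier 2 - business critical: 99.9% availability target, single site with off-site backups, internal-confidentiality data allowed
  • Tier 3 - best effort: no formal availability target, development and test workloads, public or internal data only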

Read more >