DevOps, the melding of Software Development with IT Operations, has revolutionized how software is built today. But creating excellent software doesn’t just start at the development phase. The real work is done in the requirements phase because code simply represents the details of the requirements.
Gathering solid, accurate requirements leads to better software and significantly higher project success rates. The opposite is also true… vague or bad requirements contribute to more project delays and outright failures than anything else.
But in order to gather solid requirements we need to understand the business’s domain and sub-domains to ensure implemented solutions meet its needs.
Therefore we need a methodology to integrate requirements gathering with software development so domain expertise is turned into code as soon as possible. The importance of this cannot be overstated because practically all projects utilize multiple levels of translation when converting business domain knowledge into code.
Think about the steps for a moment:
- Domain expertise is obtained from the Subject Matter Experts (SMEs) and translated into requirements.
- Requirements are translated into a system design.
- A system design is translated into code.
Unfortunately, information is lost whenever a translation is required; no translation is ever 100% accurate because we have yet to find a method to flawlessly communicate knowledge between disparate parties. Therefore, rather than accepting that loss, we need to minimize the number of necessary steps and thus the number of required translations.
As such, I’ve come up with a system to do just that: DevReqs.
The idea is to quickly break the business’s domain into sub-domains then strategically decompose and map the relevant sub-domains to loosely-coupled, cohesive functions that can be composed to form an application — starting immediately in the requirements workshops, where SMEs present their requirements, and incrementally building all the way to deployment using CI/CD.
Side Note
Cohesive functions have one, well-defined purpose and only do that one thing. This allows them to be automatically tested and validated more easily than functions that do myriad things. Further, single-purpose functions allow their arguments to be validated in a more robust manner.
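As a tiny plain-Magik sketch of what that looks like (the name and the validation style are illustrative, not taken from any particular library):

```magik
_block
	# A cohesive, single-purpose function: it counts poles and nothing else,
	# so its one argument is easy to validate and the proc is easy to test.
	_local count_poles <<
		_proc @count_poles(p_poles)
			_if p_poles _is _unset
			_then
				condition.raise(:user_error, :string, "p_poles must not be unset")
			_endif
			_return p_poles.size
		_endproc

	write(count_poles({:pole_1, :pole_2, :pole_3}))   # writes 3
_endblock
```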
DevReqs combines the Development and Requirements phases (adding IT Ops at the end). It includes a set of processes and tools that methodically turn business domain knowledge into functioning programs, and although it can be shoehorned to work with just about any programming paradigm, it works seamlessly and best with Functional Programming – so that’s what I highly recommend using.
The tenets of DevReqs are…
- Simplicity: focus on making everything intuitive and easy to understand.
- Minimalism: make things as small as possible, but no smaller.
- Functionality: separate functionality into cohesive, loosely-coupled units.
- Testability: ensure every non-trivial piece of code can be automatically tested.
- Automation: automate everything that’s possible.
These all work together, so while it may be possible to make a function smaller (Minimalism) by using tricks that decrease readability and increase complexity, we wouldn’t do that because it breaks the Simplicity tenet. The goal is to find a suitable balance between tenets.
Of course what we’re really interested in are the benefits, so here are a few…
- Minimizes the time from requirements gathering to deployment.
- Minimizes the effort necessary to create a functioning application.
- Produces reliable, robust, high-quality code.
- Automatically tests every non-trivial unit of code.
- Eliminates major bugs and minimizes overall bugs.
- Creates easy-to-maintain-and-enhance applications.
- Eliminates (or significantly reduces) manual testing.
- Automatically transfers knowledge throughout the organization.
In short, DevReqs combines proven tools, paradigms and techniques so projects can deliver rock-solid, bug-free, easy-to-maintain-and-enhance code.
Old Technology?
Now there’s some debate surrounding Smallworld GIS in general, and Magik in particular, being old technology, but that’s somewhat of a mischaracterization. While the tech hails from the 1990s, with the MagikFP library and the GSSKit Framework, it’s quite easy to use the latest paradigms and tools (including Functional Programming, Property-based automated testing, Reactive Programming and MVVM) to develop robust, reusable Smallworld components that can be connected in various ways to build well-architected applications.
The problem is Magik developers have been taught to use outdated methodologies and paradigms, so the resulting applications become brittle monoliths with state scattered throughout. If you create software like that in any environment or language you’ll run into trouble as the system gets bigger.
What’s needed is a way to give developers proven tools, techniques and reusable libraries to help them build solid, bug-free code.
I’ve discussed Functional Programming, Observables, Monads, Property-Based Testing, Prototypal Objects, Closures and Currying in other articles, so I won’t repeat them here. However all these concepts are foundational to DevReqs and promote good coding practices that eliminate entire bug classes and catch bugs at the earliest possible moment.
Keep in mind, the goal is not to switch everything at once, but to move incrementally in the right direction whenever new development (or updates to legacy code) is required. Make one positive change and pat yourself on the back. Then at the next opportunity make another change, then another and another and so on…
If you keep doing that, you’ll naturally be improving code quality and system reliability.
Starting Out
DevReqs includes everything needed to produce better software starting immediately in the requirements phase.
The requirements workshop is a great place to begin because it’s one of the only times we have most of the necessary people in one place and thus can get questions answered instantly while the application is at the top of everyone’s minds, rather than trying to track down resources months later when they may have forgotten some of the details.
It’s also the point where we’re typically starting with a blank sheet.
Usually, as the requirements are being fleshed out, they’re recorded somewhere (using Word documents, spreadsheets and other office tools for example). DevReqs adds another recording mechanism: actual code.
What?
That’s right. I said it. We start writing code immediately.
I can hear you now… “That’s Crazy!”
Well, hold on a moment there bucko and hear me out…
We’re not actually writing detailed code just yet, but creating the scaffolding that defines what the application should look like. So we’re setting up the source control system, automatically creating the folder structure and defining high level modules and source files (of course most of these artifacts will be empty at first, but the goal is to build a skeleton we can flesh out as the process unfolds).
The overarching idea is to document the requirements in code as well as in the usual manner (with documents and spreadsheets). As we move ahead, we continually reconcile the code with the documentation to ensure everything is in sync.
A key feature is that only a few people (sometimes only one) need to know about the code, so everyone else thinks the workshops are being run in the traditional way.
MagikFP makes creating this scaffolding simple by including the create_magik_fp_app function that automatically generates a sample Functional Programming project in a specified folder with standard subfolders to hold business logic, library functions, application functions, front and back end components, automated tests and module.def files. And it does it in just a few seconds. You can then directly create a Git repo within a matter of minutes.
One important function that’s automatically created is run(), which includes a template with a safe_pipe, a default error handler and default return values. That’s where the application will start and it’s where we begin to define the requirements.
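As a rough illustration only (the real template generated by create_magik_fp_app will differ in its details, and the safe_pipe composition is mentioned only in a comment because its exact signature isn’t shown in this article), run() starts out as little more than an error-handled shell:

```magik
_block
	_local run <<
		_proc @run(p_input)
			## Application entry point. Eventually the top-level requirement
			## functions get composed here (via MagikFP's safe_pipe); for now
			## it only documents intent and supplies a default on error.
			_try
				write("run(): requirement functions will be wired in here")
				_return p_input
			_when error
				write("run(): default error handler invoked")
				_return _unset          # default return value
			_endtry
		_endproc

	run("workshop input")
_endblock
```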
Other important artifacts are the automated test files (one for each generated function). I’ll have more to say about these later, but for now, keep in mind writing automated tests is an integral part of DevReqs. Testing is something we think about before writing code, not something we do at the end of development.
Requirements Gathering
Gathering accurate requirements is the single most important thing to get right. If we get code wrong or we mess up a test, these can usually be fixed without too much fallout. However, recording an incorrect requirement or, worse, missing one entirely can cause a domino effect through every subsequent phase of the software development cycle.
To minimize erroneous and missed requirements, DevReqs uses a variation of the Event Storming Process popularized by Domain Driven Design. Its goal is to explore a business’s domain and subdomains in order to fully understand the requirements (or, in the case of legacy applications, how they work).
There are 6 distinct steps…
- Exploration.
SMEs (and others) write down whatever required Domain Events they can think of in an unstructured manner. Domain Events might include, “Design Rejected,” “Design Approved,” and “Design Submitted.”
Note Domain Events are always referenced using the past tense because they describe events that have already occurred.
At this point it doesn’t matter what the various participants come up with, it’s basically a brainstorming session, so chronological order and duplication are ignored. The idea is to discover as much about the business domain as possible – we can always sort events and eliminate duplicates later, but if something is missed, it won’t be implemented in the code.
The overriding question is, “what are all the possible things that can happen in this domain?”
- Chronological Order.
The next step takes the results from step 1 and puts them into chronological sequences. We start with the simplest, “sunny day,” path and then build out to account for alternative scenarios such as errors or failed approvals.
In the example above, the “sunny day” path is represented by the top sequence and the bottom path represents the “rainy day” sequence of Domain Events.
At this point we have a sequence of events (or more likely a set of sequence of events) that describe how Domain Events chronologically link together in order to fulfill specific requirements.
We also use this step to remove duplicate Domain Events, consolidate events described at too low a level, eliminate things that aren’t required and double-check we didn’t miss anything.
- Triggers.
Now we’re interested in how Domain Event sequences are kicked off. We need to determine what or who triggers them and how they’re triggered. Therefore we define commands (such as “Submit Design” or “Publish Design”) and actors (such as a designer, or a process like a task scheduler).
We associate each command with its sequence of events. So, for example, a requirement to submit a design will have the “Submit Design” command associated with its sequence of Domain Events. We also note the actor (a designer in the example above) responsible for triggering the sequence.
- Policies.
In many instances a sequence of events is automatically triggered by a policy (there’s no specific actor that initiates the sequence). An example might be that when a design is approved, it is automatically moved through a series of events and published. Therefore the “Published” sequence might be automatically triggered by the “Approved Design” policy using its associated command (“Publish Design”), rather than by an actor.
In the example above we see the “Design Approved” event triggers the “Publish Design” command via the “Approved Design” policy, which kicks off the “Design Published” sequence.
- External Systems.
If data are required from an external system (or data must be sent to an external system) then these are represented as special entities that allow remote systems to trigger commands (or ensure remote systems are notified when appropriate) via, say, REST API calls.
The example above shows how the “Design Approved” event causes the “Approved Design” policy to trigger the “Publish Design” command, which starts the “Design Published” event and sends the published design to an “External System.”
And here we see the reverse scenario, where an “External System” triggers the “Create Design” command to start the “Design Created” sequence of events.
- Mapping to Cohesive Functions.
At this point we should have defined all relevant Domain Events (reddish boxes), ordered them as chronological sequences, inserted triggering commands (bluish boxes) and denoted what invokes the trigger (which must be either an Actor, a Policy or an External System). Actors are represented in yellowish, Policies are brown and External Systems are green.
Now it’s simply a matter of mapping each box to a function.
Command boxes are directly mapped to high-level functions describing “what” the requirement does (e.g. “Create Design”) while Domain Event boxes (e.g. “Design Submitted”) map to lower-level sub-functions that are called via a command function’s safe_pipe and perform the necessary actions to fulfill the business logic associated with the sequence (see the sketch after this list).
Policies are mapped to high-level functions that also have a top-level safe_pipe which will eventually be implemented by policy sub-functions.
External Systems representations are similar to Command and Policy functions — mapped to high-level functions containing a safe_pipe. However these functions will usually end up being hooked into GSSKit so they can communicate with the appropriate external system.
Actors are a different animal. They are mapped to temporary functions that won’t be implemented. Think of these as placeholders representing the user or process that will eventually replace the function.
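As a concrete, deliberately simplified illustration of step 6, here is a plain-Magik sketch in which the function names come straight from the Event Storming boxes. In a real MagikFP project the command function would compose its Domain Event sub-functions with safe_pipe rather than calling them directly, and each function (plus its test file) would live in its own source file; the skeletons below just announce the requirement they will eventually implement.

```magik
_block
	# Domain Event boxes map to lower-level sub-functions...
	_local design_submitted <<
		_proc @design_submitted(p_design)
			write("Requirement: record that a design has been submitted")
			_return p_design
		_endproc
	_local design_approved <<
		_proc @design_approved(p_design)
			write("Requirement: record that a design has been approved")
			_return p_design
		_endproc

	# ...and the "Submit Design" Command box maps to a high-level function
	# that runs the sequence (composed with safe_pipe in a real MagikFP project).
	_local submit_design <<
		_proc @submit_design(p_design)
			_return design_approved(design_submitted(p_design))
		_endproc

	submit_design("pole replacement design")
_endblock
```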
After these 6 steps have been completed, we should have a set of functions that define an application which in turn implements the requirements taken directly from the SMEs. This is the initial Feature List.
Deeper Mappings to Functions
Once the Requirements Gathering phase is complete, we have a set of cohesive skeleton functions, appropriately named to represent what they do, as well as sub-functions that represent how the logic will be implemented (along with their associated automated test files). We then plug them into the framework under run(). This ensures our code clearly documents each specific requirement.
Keep in mind, at this stage, the functions will be nearly empty (usually containing brief comments and code to write out messages stating which requirements they’ll be implementing) and the associated test files will consist mainly of generic template code.
We’ve defined sub-functions to capture specific Domain Events but as new information is gleaned about a particular requirement, we create additional sub-functions to capture this information and further decompose how the top-level function accomplishes its task.
Basically we continue to decompose requirements into sub-requirements (and map each one to a sub-function) until we can’t decompose any further.
At the end of the Requirements Gathering and Function Mapping phases, we end up with a set of cohesive top-level functions (each documenting one high-level requirement) that call one or more sub-functions (each describing how to implement a particular piece of the high-level requirement). Sub-functions can also call further sub-functions – and so on until an end is reached.
The emphasis on creating functions that declaratively describe each sub-domain, Domain Event, sequence of Domain Events, Command, Policy and External System makes it easier to understand and implement the underlying code.
Once this is done, we automatically have our final Feature List that defines the items we need to implement in order to satisfy the requirements.
Additionally, we have a comprehensive set of functions we can map directly to stories in an automated fashion. So not only do we have a functioning bare-bones application, we also have a set of skeleton Agile stories that can be later refined and estimated.
Notice what we’ve done here… we’ve decomposed the requirements directly into functions and recomposed those functions into an application – albeit a barely functioning application, but an application nonetheless.
We’re still not out of the requirements phase and yet we have:
- A Feature List.
- Skeleton Functions that will implement the feature list.
- The Business Domain Logic translated directly into executable functions.
- Actual code representing the domain’s events, sequences, commands, policies and external systems organized into cohesive functions and separated into business logic, application logic, frontend and backend components.
- Skeleton automated test files for each function.
- A working application that follows standardized best practices including loosely-coupled components, separation of business logic and best-of-breed folder and file structure.
- A Git repository.
- A set of skeleton Agile stories.
- The MagikFP library containing lots of reusable components and functions.
- A built-in Property-Based automated testing library (MagikCheck).
- The optional GSSKit Framework containing a loosely-coupled and automatic way to integrate with GSS if necessary.
Even at this early point, we’re far ahead of the usually vague descriptions that make up requirements. This approach makes us think logically about how a requirement can be implemented and forces us to confront real-world issues instead of glossing over them, as is the tendency when recording requirements in English. And because the auto-generated application template is based on proven best practices, we simply plug our code into the right places to obtain standardized, well-architected software.
Even better, DevReqs significantly reduces the number of translations required to obtain functioning code. Missing information can be brought up to the group immediately for resolution, rather than finding out about it months later when coding usually begins.
Further, as data mapping and definitive formats are defined during the design phase, we can add these to the code so we have a functioning prototype that can read or write the specific formats required by the application. Sometimes these may just be placeholders, but even so, we know exactly where they will be located in the code.
And when actual input and output formats (such as JSON payloads) have been specified, we can immediately implement them in the relevant functions and produce actual results for workshop participants to review. (I’ll get to this a bit later, but because we use a standard Prototypal Object to pass data between functions and components, reading and writing various formats is already built-in.)
We can also capture additional items that aren’t yet expressed in code (such as a comment to remove redundant code that appears in multiple requirements and refactor it into a generalized library function).

Once requirements (and sub-requirements) have been mapped to functions, we run our application and its associated tests to ensure it executes without errors – this will confirm the folder structure, ensure product.def, module.def, load_list.txt and other application files are valid, run the automated tests and then write out information statements about what each requirement should produce.
If that works as expected, we now have a functioning bare-bones application that captures and identifies all the requirements.
The next step is to compare information written by the bare-bones app to the requirements captured in the Word documents and spreadsheets – which we should have been doing anyways.
If they match, great.
If not, determine which piece needs updating and update the relevant part (but usually you’ll discover it’s the Word documents/Spreadsheets that are missing information because writing code forces you to think in a more detail-oriented manner than simply writing sentences where words may have multiple meanings).
As the requirements workshops eventually give way to design workshops, where details are fleshed out, we’re in a position to start thinking about how to implement our skeleton functions. At this point there are two general choices:
- Custom code.
- Existing functions and methods (such as built-in MagikFP library functions or reusable functions and methods from other applications or the core product).
For (2), we simply hook in the functionality by calling the appropriate functions or methods.
For (1), we create higher-level code – but we want to resist the urge to get into the lower-level details at this point (after all, we’ve got to leave something for the developers to do during the build phase – but we can certainly define high-level logic in our functions). Of course we implement basic automated tests for any code we’ve written.
Focus on the Business Logic
As we’ve seen, the create_magik_fp_app auto-generated template separates application logic (including frontend and backend code) from business logic, so as we define and map requirements, it’s important to ensure associated code is put in the proper places.
This is crucial in order to create independent, loosely-coupled, pluggable components (so, for example, general application logic, such as writing a file to a folder, should be separated from business logic, such as calculating how many wood poles are in a specific region).
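To make that separation concrete, here is a small plain-Magik sketch (the names are illustrative) where the business logic is a pure calculation and the application logic is the impure file-writing wrapper around it:

```magik
_block
	# Business logic: a pure calculation that is easy to test in isolation.
	_local count_poles_in_region <<
		_proc @count_poles_in_region(p_poles, p_region)
			_local l_count << 0
			_for l_pole _over p_poles.fast_elements()
			_loop
				_if l_pole[:region] = p_region _then l_count +<< 1 _endif
			_endloop
			_return l_count
		_endproc

	# Application logic: an impure wrapper that deals with the outside world.
	_local write_pole_report <<
		_proc @write_pole_report(p_poles, p_region, p_filename)
			_local l_stream << external_text_output_stream.new(p_filename)
			_protect
				l_stream.write("Poles in ", p_region, ": ",
					       count_poles_in_region(p_poles, p_region), %newline)
			_protection
				l_stream.close()
			_endprotect
		_endproc

	_local l_poles << {property_list.new_with(:region, "north"),
			   property_list.new_with(:region, "south"),
			   property_list.new_with(:region, "north")}
	write(count_poles_in_region(l_poles, "north"))   # writes 2
_endblock
```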

But there’s more to it than just the significant benefits of loose coupling. Business logic is the reason we create software. Having an award-winning GUI everyone fawns over or a robust, scalable, fast database flawlessly serving millions of records is meaningless if we don’t implement the correct business logic because the application will not meet the business’s needs and thus fail to accomplish its goal.
And that’s the main reason we separate business logic… so we can focus on it independently of everything else. Think of it this way: everything else, while necessary, is there to support the business logic. Without the business logic, everything else is unnecessary.
There are 3 general types of business logic.
- Core: usually complex logic businesses require to solve their problems. This is unique to a particular business and gives it a competitive advantage (an example is bundling work tasks into physically nearby locations so crews can minimize travel time). MagikFP requires code that implements core logic be placed in the application’s .app file(s).
- Supporting: usually simpler logic or convenience functions used to support the Core business logic. It is unique to the business, so there are no common off-the-shelf solutions, but it does not provide any competitive advantage (an example is retrieving specific data from a third-party source and integrating it into the application. It doesn’t matter whether the data are obtained via a file import, REST API or some other means, but a custom ETL process is required). Code that implements supporting logic is placed in the .fns file(s).
- Generic: can be simple or complex, but is based on widely available solutions or standards. It may be more effective to use a generic solution than to re-invent the wheel and create a customized one (an example would be authentication and authorization). Code that implements generic logic is placed in the application’s .lib file(s). The idea is to customize an available solution for use in the business domain. Completely generic logic is generally located in the MagikFP .lib files. (A short sketch of this placement follows below.)
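As a quick sketch of that placement (the function names are made up and the bodies are deliberately trivial; only the .app/.fns/.lib placement described above matters here):

```magik
_block
	# Core logic (unique, competitive advantage)      -> the application's .app file(s)
	_local bundle_nearby_work_tasks <<
		_proc @bundle_nearby_work_tasks(p_tasks)
			_return p_tasks          # the real clustering logic would go here
		_endproc

	# Supporting logic (unique, no competitive edge)  -> the application's .fns file(s)
	_local import_third_party_tasks <<
		_proc @import_third_party_tasks(p_source)
			_return rope.new()       # the custom ETL for the external source would populate this
		_endproc

	# Generic logic (widely available, customised)    -> .lib file(s)
	_local authenticate_user <<
		_proc @authenticate_user(p_user, p_password)
			_return _true            # wrap an off-the-shelf auth solution here
		_endproc

	write(bundle_nearby_work_tasks(import_third_party_tasks("work_management_system")))
_endblock
```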
MagikFP’s overarching goal is to allow developers to spend the majority of their time writing business logic (and the majority of that time focusing on core logic). It does this by providing reusable functions and components that can be easily assembled to create a template application into which custom business logic can be plugged.
create_magik_fp_app automatically helps us by creating the appropriate folders and files to separate out business logic – so we just need to ensure we put the different types of code in the correct places.
Developing the Code
Once the requirements and design phases are complete, we can move to the development phase. This is usually where development-related tasks are set up… but since we’ve used DevReqs, that part has already been done. There’s a functioning skeleton application that can be executed. And there are automated test files. And there are basic stories for each function. And there’s a source control repo… well, you get the idea.
Of course we’ll probably have to hook these things into the CI/CD pipelines and perform other tasks, but for the most part, the development environment is ready to go.
So developers aren’t starting from a blank page. Their main tasks are to implement each existing function according to the specifications captured in the code and comments of that function, write tests, refactor when necessary and update the relevant stories.
And because they’re working with cohesive functions within an application that has already been well-architected, they can focus on implementing one small piece of functionality rather than worrying about how to fit their work into the overall architecture. It also allows developers with various degrees of experience to contribute at their level because junior developers can be assigned simpler functions while more experienced ones work on the more complex stuff.
If necessary they can also refer back to the Word documents/spreadsheets, but since we should have verified the skeleton application is in sync with the traditional documentation, developers normally would only need to work from the skeleton application.
An added advantage is that because the application has already been decomposed into logical functions, developers can tackle one function at a time – which forms a cohesive unit of work. And since we’re using the Functional Programming development paradigm, the goal is to create as much functionality as possible in the form of Pure Functions. I’ve written extensively about why functional programs (and pure functions) are better as well as how to implement functional programming in Magik, so I won’t repeat any of it here other than to give the following recommendations…
- Try to keep functions as pure as possible.
- Separate pure functions from impure functions. Pure functions form the core part of the application and should be used to implement most of the business logic. Some application and library functions will, by necessity, have to be impure and these will wrap the pure core to insulate it from the outside world (such as databases, files, GUIs, APIs and other necessary code that relies on side effects). A sketch of this split appears after this list.
- Develop code using a test-first mentality. This could be Test Driven Development (TDD) or something else, but the important idea is to ensure automated tests are written for every piece of non-trivial functionality when that functionality is created.
- Once the business logic is completed, wire up any frontend, backend or interfaces (via GSSKit) to the business logic. If it was done correctly, business logic will be completely independent of the backend which will be independent of the frontend which will be independent of GSS.
In fact, since we view non-business-logic code simply as implementation details, we’re able to plug and unplug these components as necessary. For example, our frontend could just as easily use ReactJS as a SWIFT GUI and our backend could use a NoSQL database as easily as it uses VMDS.
The reason is that MagikFP functions generally take beeble_objects as input and return beeble_objects as output. This makes it very easy to pass data and functionality around the application’s components in a standardized manner. The beeble_object is really a glorified concurrent_hash_map with extra behaviour and syntactic sugar added to make reading and writing code easier. It also ensures no logic leaks out and pollutes other components.
- Most backend functionality should be implemented in a reactive way using observables to retrieve and save data. This allows the application to plug in various backends in a loosely-coupled manner. If we decide to change a backend, we simply use an observable that knows how to communicate with that new backend. Since the observable interface doesn’t change and the business logic code is using this interface, the business logic code doesn’t need to change.
- Use beeble_objects and beeble_observables to ensure business logic is completely isolated from frontend and backend code and no business logic leaks into these components.
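Here is a small plain-Magik sketch of the pure-core / impure-shell split described in the first two recommendations; a plain property_list stands in for MagikFP’s beeble_object, whose real API isn’t shown here.

```magik
_block
	# Pure core: business logic with no side effects.
	_local approve_design <<
		_proc @approve_design(p_data)
			_local l_result << p_data.copy()
			l_result[:status] << "approved"
			_return l_result
		_endproc

	# Impure shell: wraps the pure core and talks to the outside world.
	_local save_design <<
		_proc @save_design(p_data)
			_local l_approved << approve_design(p_data)
			write("Saving design ", l_approved[:name],
			      " with status ", l_approved[:status])   # stand-in for a database write
			_return l_approved
		_endproc

	save_design(property_list.new_with(:name, "feeder_extension", :status, "submitted"))
_endblock
```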
As we’ll see, one of the fundamental principles of DevReqs is comprehensive, automated testing. But what’s even better is to proactively adopt practices that eliminate entire classes of bugs and minimize others. Functional Programming, with its emphasis on pure functions and immutability, fills that role in DevReqs. If we can eliminate certain types of bugs (such as, say, concurrent processing errors), there’s no need to write tests for them. And that’s a win for everyone.
Functional Programming is also far more conducive to achieving higher code coverage rates for automated tests than, say, Object Oriented Programming because of the linear nature of function composition. However high code coverage by itself does not necessarily mean better code, so DevReqs combines good coding practices with comprehensive automated tests and higher code coverage to produce robust, solid code.
Model-View-ViewModel (MVVM)
One problem I’ve noticed after working with Smallworld for more than two decades is that most applications tightly couple the GUI (frontend) code with the business logic. Even when attempting to separate concerns using the GUI/Engine concept, frontend logic leaks into engine logic and vice versa.
There are many reasons for this, but suffice it to say, modern programming principles tell us that completely decoupling the frontend is a good thing. So we’ll just go with that and I won’t attempt to justify the reasoning here.
Of course there are a number of ways to decouple the frontend, but my favourite is MVVM. I have specific reasons for this choice but, given this is already a very lengthy piece, I won’t delve into them here. Keep in mind, however, that if you have your own favourite – such as MVC for example – feel free to use it instead of MVVM. The concepts should be the same.
Okay, back to MVVM.

In general, the View interacts with the user and passes data to the ViewModel, which then writes or reads data to/from the Model. The ViewModel then processes that data into a set of bound properties. The View updates itself after observing (via the Observer pattern) changes to the bound properties of the ViewModel.
The View simply contains visual elements (such as buttons, input boxes and drop down lists). It is responsible for GUI layout, animation and initializing GUI components but does not contain any application or business logic.
The ViewModel is the layer between the View and the Model and can be thought of as a canonical representation of the View. So the ViewModel provides a set of attribute interfaces, each representing a GUI component in the View. GUI components are bound to the ViewModel attributes – so when an attribute changes in the ViewModel, that change is automatically reflected in the View and vice versa.
The upshot is we never have to deal directly with the View. We simply update the ViewModel and the View automatically changes, or when a user changes a value in the View the ViewModel’s associated attribute is updated and an event is triggered.
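A stripped-down Magik sketch of a ViewModel might look like the following; the actual binding and change-notification machinery between View and ViewModel is deliberately omitted because it depends on the framework being used.

```magik
# Simplified ViewModel: the View binds its widgets to these attributes and
# is never referenced directly from the logic below.
def_slotted_exemplar(:design_view_model,
	{
		{:design_name, _unset, :writable},
		{:status,      "new",  :writable}
	})
$

_method design_view_model.new()
	## Simple constructor for this sketch.
	_return _clone
_endmethod
$

_method design_view_model.submit()
	## Invoked when the bound "Submit" button fires.
	_if .design_name _is _unset
	_then
		.status << "error: no design name"
	_else
		.status << "submitted"
	_endif
_endmethod
$
```

An automated test (or the application code itself) can now drive the GUI logic purely by reading and writing design_name and status, without ever touching an actual widget.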
But why the heck would we want to do this? Wouldn’t it be simpler to eliminate the ViewModel layer and just update the View?
Sure it would. And that’s why Magik developers have been doing it that way for decades. But this simplicity comes at a huge cost – it makes it very difficult to automatically test GUIs.
Of course if you’re not into automated testing, then tightly coupling GUI and business logic code may not seem problematic to you (although there are other bad things that happen even if you take automated testing out of the mix).
However you are not that kind of developer. You’re a DevReqs developer, so you want automated testing built in at every level and for every component – including the GUI.
And that’s what the ViewModel enables. It makes it extremely easy to test GUIs because your automated test code can simply read and write from/to the ViewModel attributes to programmatically test GUI code. It doesn’t matter what the actual GUI looks like because the ViewModel decouples the GUI (which is hard to programmatically read from or write to) from the business logic.
If you find yourself having trouble testing GUIs, try MVVM and see how easy it is to create frontend automated tests.
Automated Testing
In DevReqs, there are two types of automated testing…
- Example-based testing using MUnit.
- Property-based testing using MagikCheck.
If you’re not familiar with MUnit, the Smallworld online help has good information on it. For MagikCheck, I wrote a comprehensive article about it and how it can be integrated with MUnit.
As such I won’t cover the details of either here, but keep in mind DevReqs uses both testing libraries to ensure code is tested via two different and complementary approaches.
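As a sketch of the example-based side (it assumes MUnit’s usual xUnit-style conventions of a test_case parent, test_ method naming and an assert_equals helper, and it reuses the hypothetical design_view_model from the MVVM section; check the Smallworld online help for the exact MUnit API in your version):

```magik
def_slotted_exemplar(:design_view_model_test,
	{},
	{:test_case})
$

_method design_view_model_test.test_submit_sets_status()
	## Example-based test: drive the GUI logic through the ViewModel only.
	_local l_vm << design_view_model.new()
	l_vm.design_name << "feeder_extension"
	l_vm.submit()
	_self.assert_equals("submitted", l_vm.status)
_endmethod
$
```

A MagikCheck property-based test would complement this by asserting a property over many generated inputs (for example, that submit() never leaves status unset, whatever design_name contains) rather than checking a single hand-picked example.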
The fundamental idea is that code without proper automated tests is incomplete, because tests are what verify the system actually satisfies its specifications. If you’re not already doing so, I’d encourage you to think about how easy it is to automatically test the code you write. Code that depends on hidden values, mutable state spread throughout the codebase and tight coupling is difficult to test. Code following Functional Programming best practices is easy to test. Remember, if you make your code hard to test, you won’t test it.
When writing automated tests, there are two main camps.

There’s the TDD approach of writing a test first, then making it Green by implementing code so that the test passes. Then write the next test and make it Green by implementing the code for that test. Keep repeating these steps (refactoring when necessary to remove duplication and/or make other changes that result in better code) until you’ve implemented the desired functionality.
There’s nothing wrong with this development methodology and it has advantages (just search online for, “TDD advantages” and you’ll come up with lots of examples).
However, I’m not a fan of TDD. Again, there’s nothing wrong with it but I’ve been writing programs for more than four decades and my brain doesn’t work that way. There’s just something uncomfortable about writing the test before the code. I don’t like it. It doesn’t bring me joy so, as Marie Kondo would say, “if it doesn’t bring you joy, get rid of it.”
But if you’re a fan and TDD brings you joy, then knock yourself out. You do you. It’s a fine way to develop code.
So… what camp do I subscribe to?
I simply reverse the process. I write a cohesive bit of code then immediately write the test. If the test fails, I revise the code (or the test) until the test passes (i.e. turns Green). Then I move onto the next cohesive bit of code and repeat the process until all the desired functionality has been implemented.
The end result is the same, so pick your poison and don’t get into a religious war about testing. Just ensure you do it.
Another benefit of doing it this way is that, because tests are written as part of the coding task, there is no separate line item in the project plan for writing automated tests. As a result, nobody can apply pressure to shorten the testing phase in order to reach an arbitrary deadline, because the testing effort is split amongst many different functions.
And recall, when using the Functional Programming paradigm, most of the code should be Pure, so it makes writing automated tests much easier. If we add MVVM to the mix, it makes writing automated tests for the GUI much easier.
And since we’ve decomposed the requirements into cohesive functions, we simply have to test each of these functions individually and when we compose them to form the application (using a pipe or safe_pipe), no additional testing is required. If the individual functions passed then the composition of those functions will also pass. Composition does not introduce additional errors. Pretty neat eh? Try that with your Object Oriented Programming!
Remember: There are many ways to implement a requirement but always choose designs that make your code understandable, easy to maintain and easy to test.
Using Assertions for Real-time Testing
Back in the 1990s I read, “Writing Solid Code,” by Steve Maguire. It described how to write bug-free code and was eye-opening to say the least. One of the main points in the book was the use of assertions to add assumptions to code.
Assertions take an expression that evaluates to a Boolean value that should always be true unless there is a bug in the code — such as: assert(p_name _isnt _unset). If the value is false, then the assertion raises an assert_error condition (where the default behaviour of printing a traceback and aborting can be modified using higher order functions).
Assertions ensured I explicitly thought about what assumptions I was making. I liked the idea and started using them. However, from my perspective, assertions were more of a thing to use for debugging, rather than testing. I’ve since changed my mind about that and now believe assertions are extremely important for testing.
Why?
Well, think about what we’ve discussed so far — especially related to automated testing.
These techniques and strategies are limited to the phases before deployment into production. After code is put into production, and run continually against live data, none of the strategies monitor the code for bugs.
But assertions do.
They’re there monitoring for bugs in real-time, on real data and while the code is running in a real business scenario. They’re also there during the development phase. It’s another layer of protection that helps create better software.
Side Note
To understand when to use an assertion it is necessary to understand the difference between an error and an illegal condition. The former is expected to occur as code runs (e.g. trying to read from a non-existent file) but the latter is never expected to occur (e.g. passing _unset to a function that is going to try to add a number to it) because if it does, then that is a bug. Assertions must only be used to uncover illegal conditions, never to handle errors.
The idea is to ensure important functions do a minimal amount of bug checking (such as validating parameters) and log any illegal conditions that appear. MagikFP includes an assert function that fires if its first argument evaluates to false and can automatically write a message (and optional traceback) to a log file using the built-in logging library (the Zaphod Logger). Both assert and the Zaphod Logger can be configured in myriad ways via higher order functions.
If you simply use assert to do nothing more than validate arguments passed to the most important functions, your codebase will be continually checking itself in real-time all day long. But you can also use it to ensure specific conditions are correct before calling other functions, verify the state of a system, check a function’s return value or perform bounds checks on indexed collections, among other things.
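A short sketch of that idea follows, assuming MagikFP’s assert is loaded and using only the bare assert(<expression>) form shown earlier (the logging and higher-order configuration options are left out):

```magik
_block
	_local add_tax <<
		_proc @add_tax(p_amount, p_rate)
			# Illegal conditions (bugs in the caller) are checked with assert...
			assert(p_amount _isnt _unset)
			assert(p_rate >= 0)
			_return p_amount * (1 + p_rate)
		_endproc

	_local read_default_rate <<
		_proc @read_default_rate(p_filename)
			# ...whereas an expected error, like a missing file, is handled
			# with normal error handling and never with assert.
			_try
				_return external_text_input_stream.new(p_filename).get_line()
			_when error
				write("Could not read ", p_filename, "; using built-in default")
				_return "0.05"
			_endtry
		_endproc

	write(add_tax(100.0, 0.05))    # the amount including tax
_endblock
```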
However there are some things to keep in mind.
- assert must never be used to handle errors. It is there simply to monitor code for bugs. Real errors that must be handled are handled by the error handling code. Assertions only check for things that should never occur in an application, so removing an assertion should never change the way a piece of code works — if it does, then you’re using it incorrectly.
- Expressions contained in assertions should not cause side effects. Assertions are meant to be minimally intrusive and must never change the state of an application if true (of course if the assertion fails, an assertion_error condition will be raised).
- assert is a function, so it does slightly change the execution environment compared with not having it there. For 99.9% of cases this is not a problem. However, if your code is doing some sort of low-level or fine-grained, time-sensitive processing, keep this in mind (as assert may change the way that code operates — see point 1 above).
- There will be a very small performance hit each time assert is called. So if you’ve added lots of assertions, or call a function that contains an assertion millions of times in a loop, that might impact your application’s performance. And if an assertion is triggered, additional time may be spent logging data or invoking functions. Just as with anything else, use assertions judiciously, when they make sense and where they can provide the most value.
Targeted use of assertions in important parts of code allow us to continuously monitor our applications for bugs (in development, test and production environments) during actual usage. Spending a small amount of time strategically placing assertions into our code to immediately find and eliminate bugs, before they’re committed into the main branch, saves substantial time down the road by not having to fix them during the testing phase (or worse, in production) — at a far greater cost of effort, time and money. Fixing a bug immediately also ensures we learn from our mistakes and don’t keep repeating them because we weren’t aware of the problem until after development completed.
If you want to write bug-free code, you must thoroughly test all your code — and assertions add another layer of testing to your toolkit.
Alright, I’m going to wrap up this section by reiterating what I’ve mentioned twice already, because it is very important: If an error can occur during the course of running an application, assert must never be used to check for it — rather use standard error handling techniques. Use assert only to check for illegal conditions that should NEVER occur in bug-free code.
Code Reviews
Once code development and automated tests have been completed, the next step is a code review. We require this in order to find bugs, transfer knowledge and ensure conventions and best practices were followed.
The best way to perform code reviews is to have the developer and reviewer go over the code line by line (either for all the code if it’s new functionality or just the diffs if enhancements were made to existing code) where the reviewer asks questions, provides suggestions and generally drives the review.
The developer answers questions and provides justification for anything brought up by the reviewer. Developers are also responsible for recording required changes and then implementing whatever was agreed upon.
The reviewer treats automated tests as part of the code, so the rules that apply to application code, also apply to automated tests.
Better results generally come from a two-way conversation between developer and reviewer, rather than the reviewer simply reading through the code and pointing out areas of concern. As the dialog progresses, both parties ensure they are on the same page and it allows them to understand the code at a deeper level (for example, the reviewer might bring up a point the developer did not think about or the developer might teach the reviewer something he or she did not know).
If a bug is found, a discussion should ensue asking what steps, processes or tests would have found it prior to the code review. Was the bug a result of a risky coding practice (such as mutable state or hidden dependencies) or were, say, the automated tests insufficient? Then classify the bug and have a discussion about how similar bugs can be eliminated or found earlier. This will incorporate valuable feedback into the development cycle and teach all parties how to build better code.
But perhaps the biggest benefit of a code review is it allows other developers to understand what the code does. Too often we see cases where most, or sometimes all, of the knowledge about an application is locked in the brain of one person. If that person leaves, the knowledge transfer task is huge and rarely effective. By exposing multiple developers to what everyone else is doing in an interactive and incremental fashion, knowledge transfer is automatically propagated throughout the organization.
A final benefit of code reviews comes down to human nature. People simply write cleaner, clearer code when they know someone else will be reviewing it in detail. They’ll clean up the dreaded TODOs, ensure their comments are correct, comply with coding standards and put more effort into making their code better and easier-to-read than if they think nobody will pay any attention to it.
And, although most code reviews will consist of two people, multiple reviewers can participate if necessary, but it’s probably best to limit the number of reviewers to a maximum of three.
Well-thought-out code reviews not only help create a stronger codebase and provide a second set of eyes to verify code, but they also build cooperation between team members… and that’s why code reviews are an integral part of DevReqs.
Bugs
The sad fact of life in software development is that everyone expects bugs in applications.
Developers aren’t surprised when a bug rears its head, Project Managers build time into the schedule to search for and fix bugs and customers put terms into contracts stating how many bugs are acceptable in Production.
With thinking like that, it’s no wonder bugs are prevalent in software.
The American philosopher William James said, “Thoughts become perception, perception becomes reality. Alter your thoughts, alter your reality.”
What would happen if developers were truly surprised when a bug was found in their code?
What if Project Managers didn’t allocate time to fix bugs?
And what if contracts stated that no bugs were acceptable in Production?
That would certainly change the thinking surrounding software projects and how they are run… and, more importantly, emphasis would be placed on developing code that minimized the risk of bugs — whether through using less risky designs, more comprehensive automated testing or better processes and techniques.
Unfortunately the “bugs will always be with us” mentality runs deep, and my feeling is quite a few folks reading this will never believe bugs can be completely eradicated.
However I think they can be.
Side Note
When I refer to bug-free code, I mean significant bugs that cause incorrect results or application failure. Low severity bugs, such as misspelled or grammatically incorrect GUI labels, will still have to be found by humans — at least until we have high-functioning AI testing systems.
We’ve already discussed a number of methods and tools that can be used to find bugs as early as possible or eliminate entire bug classes altogether, but for them to work, development practices that have been in vogue for decades need to be rethought. Buy-in from all parties to use paradigms such as Functional Programming and comprehensive automated testing is required.
That’s a big shift in mindset. And it won’t happen overnight — or, for some, perhaps ever, because Smallworld developers have highly ingrained habits and many find it difficult to change. But if we attempt to move in that direction and our projects start to see a significant reduction in bugs, then others will be interested in what we’re doing. And when we demonstrate bug-free implementations, followed by more of them, that will most definitely elicit interest.
As they say in the novel-writing world, “show, don’t tell.” And the way to ensure bug-free-development proliferates is to show people we can write bug-free code rather than just explaining it to them. Just as Roger Bannister showed a man could run a sub-four-minute mile, we need to show Smallworld Developers that bug-free-development can be done. Then others will start doing it.
DevReqs provides a framework for getting started. However there’s one more thing… we need to continuously feed information back into the DevReqs process.
To do that, I’ve listed five steps to follow if a bug is found outside unit testing.
- Understand the bug. In order to fix a bug we need to understand its root cause. In many cases a bug causes a symptom and it’s that symptom that is fixed in order to let a test case pass. This is not a good idea in much the same way stitching up a bleeding person without first fixing the underlying cause of the bleeding is not a good idea.
- Fix the bug. Once we understand the root cause, we need to fix it the right way. There are many ways to fix a bug and sometimes, due to time constraints or other pressures, developers take the easy way out and implement a kludge. This simply incurs technical debt and must never be done.
- Classify the bug. Keep a list of general bug classifications (such as “bad arguments passed to a function,” “concurrency problems,” “missed requirement,” or “insufficient automated test cases”). See if the current bug falls into an existing classification and, if not, create a new classification for it. This allows us to group bugs together and apply fixes that should take care of future bugs in a particular category.
- Ask, “how could this bug have been automatically found?” Perform a deep dive to uncover how you could have automatically caught this type of bug.
- Implement an automated way to catch the bug in the future. Once you have an answer to the question in (4), add the solution to your growing arsenal so these types of bugs are automatically found going forward.
DevReqs is an iterative process that starts with concrete things to do and techniques to adopt, but it also uses a data-driven approach to continue improving its ability to find and eliminate bugs as early as possible.
Continuous Integration (CI)
At this point DevReqs just piggybacks on DevOps’s coattails. There’s nothing new here.
Developers merge changes to the main branch as often as it makes sense. After the build, all automated tests are run.
Recall that developers run their own automated tests during the development phase and these tests are run again along with integration tests and additional tests – such as infraction checks, customer-specific tests or, say, tests to ensure a Smallworld session opens and the data model is at the correct version.
This minimizes the chance of integration problems that can occur when implementing lots of code in one big-bang release.
After all tests have successfully passed, artifacts are packaged for deployment.
Continuous Integration depends heavily on automated testing to ensure the application works as expected. And since automated testing is a foundational piece of DevReqs, where every single non-trivial function has an associated test, it fits seamlessly with CI.
Continuous Delivery (CD)
Continuous Delivery extends CI by making code changes deployable to a staging or production environment after the build stage, with the deployment itself triggered manually (by clicking a button). Additional solution tests might also be incorporated at this step.
After all automated tests have passed, application artifacts can be automatically deployed whenever it makes sense.
The idea is to allow the business to deploy changes when it wants to. By deploying early and often, we don’t have to deal with wading through large amounts of code to debug problems if something goes wrong.
Continuous Deployment (also CD)
Continuous Deployment is basically Continuous Delivery with automatic deployment to the target environment (i.e. there’s no manual intervention required – such as clicking a button).
As long as changes pass all automated tests, deployment automatically occurs. Human oversight is reduced to simply verifying things went well and end users are generally using the latest and greatest codebase.
In order to succeed, however, we must have great faith in our automated tests. Usually that’s not the case because most projects don’t implement comprehensive tests that have high code coverage. Many projects have some basic example-based unit tests coupled with a smattering of integration tests. And a large percentage of those don’t have decent tests at all – but rather add fairly useless tests simply to check boxes in contracts that require automated tests, so extensive manual testing is still necessary.
However DevReqs requires all non-trivial functions to have an associated test. And since DevReqs aims to create mostly pure functions and uses MVVM for GUIs, tests are easier to write and more effective – so we can be far more confident code that makes it through DevReqs has been properly tested and is more likely to do well with Continuous Deployment.
So not only do end users benefit from the latest code, but developers can witness their work in production very soon after finishing it.
DevReqs Summary
We’ve covered a lot of ground in this article but still only touched upon some fundamental concepts (which I’ve linked resources to for further reading).

As we’ve seen, DevReqs is a system that applies existing tools and processes in a highly disciplined manner to create fully tested, robust, stable software using automation whenever possible. Individual parts build on one another and interact in synergistic ways, pushing each to an extreme and making them better in the process, to form a framework for creating great software.
The ultimate goal is to be proactive and find automatic ways to gather accurate requirements and immediately turn them into code as well as ensure applications are thoroughly tested.
We shouldn’t constantly be reacting to bugs and missed requirements in a project but need to anticipate issues, follow a solid, well-thought-out plan and mindfully drive the process in the right direction.
Simply checking boxes in a project plan and churning out code willy-nilly may ultimately get a project implemented, but it’s unlikely to get it done in the shortest timeframe, with a minimum of effort and with the highest quality, easiest-to-maintain-and-enhance software possible.

Smallworld developers have lagged the industry in best practices and continue to develop software like it’s 1999 (I think Prince wrote a song about that – or perhaps it was about partying, but regardless, building software like we did in 1999 is passé and riddled with errors and bugs — not to mention time-consuming manual processes).
It’s time to join the Facebooks, Googles, Microsofts and Amazons of the world and start developing Magik code using modern principles and paradigms. Coding practices that made sense back in the 1990s, when hardware was less powerful and talking about a cloud usually meant a strong possibility of rain, may not make sense today given the huge technological advances we’ve experienced in the intervening two decades. We need to systematically reduce buggy code by using safer designs and techniques — which are readily available and used every day in the broader software development realm. DevReqs brings these things to Smallworld.
However, don’t expect decades of legacy processes and code to change overnight. Focus on making small, incremental improvements every time you touch an application, and eventually those improvements will accumulate and spread to create a paradigm shift that brings Smallworld development into the modern era.