Smallworld GIS DevReqs

Mark Hing · Feb 12, 2023

DevOps, the melding of Software Development with IT Operations, has revolutionized how software is built today. But creating excellent software doesn’t start at the development phase.

The real work is done in the requirements phase because code simply represents the details of the requirements. Gathering solid, accurate requirements leads to better software and significantly higher project success rates. The opposite is also true… vague or bad requirements contribute to more project delays and outright failures than anything else.

What’s needed is a methodology for integrating and automating requirements gathering with software development so domain expertise is turned into code as soon as possible.

The importance of this cannot be overstated because practically all projects utilize multiple levels of translation when moving from business domain knowledge to code.

Think about the steps for a moment:

  1. Domain expertise is obtained from the Subject Matter Experts (SMEs) and translated into requirements.
  2. Requirements are translated into a system design.
  3. A system design is translated into code.

But information is lost whenever a translation is required — no translation is ever 100% accurate because we have yet to find a method to flawlessly communicate knowledge between disparate parties. Therefore, rather than accepting that loss at every step, we need a system that minimizes the number of necessary steps and thus the number of required translations.

As such, I’ve come up with a system to do just that: DevReqs.

The idea is to quickly implement loosely-coupled, cohesive functions that can be composed to form an application — starting immediately in the requirements workshops, where SMEs present their requirements, and incrementally building it out all the way to deployment using CI/CD.

DevReqs combines the Development and Requirements phases (adding IT Ops at the end). It includes a set of processes and tools that quickly and methodically turn business domain knowledge into functioning programs, and although it can be shoehorned to work with just about any programming paradigm, it works seamlessly and best with Functional Programming – so that’s what we’ll be using in this article.

The tenets of DevReqs are…

  • Simplicity: focus on making everything intuitive and easy to understand.
  • Minimalism: make things as small as possible, but no smaller.
  • Functionality: separate functionality into cohesive, loosely-coupled units.
  • Automation: automate everything that’s possible.

These all work together, so while it may be possible to make a function smaller (Minimalism) by using tricks that decrease readability and increase complexity, we wouldn’t do that because it breaks the Simplicity tenet. The goal is to find a suitable balance between tenets.

Of course what we’re really interested in are the benefits, so here are a few…

  • Minimizes the time from requirements gathering to deployment.
  • Minimizes the effort necessary to create a functioning application.
  • Produces reliable, robust, high-quality code.
  • Automatically tests every non-trivial unit of code.
  • Eliminates major bugs and minimizes overall bugs.
  • Creates easy-to-maintain-and-enhance applications.
  • Eliminates (or significantly reduces) manual testing.
  • Automatically transfers knowledge throughout the organization.

Keep reading to see how your project can achieve these benefits and more…

Old Technology?

Now there’s some debate surrounding Smallworld in general, and Magik in particular, being old technology, but that’s somewhat of a misconception. While the tech hails from the 1990s, the MagikFP library and the GSSKit Framework make it quite easy to use the latest paradigms and tools (including Functional Programming, Property-based automated testing, Reactive Programming and MVVM) to develop robust, reusable Smallworld components that can be connected in various ways to build easy-to-maintain-and-enhance applications.

The problem is Magik developers have been taught to use outdated methodologies and paradigms, so the resulting applications become brittle monoliths with state scattered throughout. If you create software like that in any environment or language you’ll run into trouble as the system gets bigger.

I’ve discussed Functional Programming, Observables, Monads, Property-Based Testing, Prototypal Objects, Closures and Currying in other articles, so I won’t repeat them here. However all these concepts are foundational to DevReqs, so it’s a good idea to become at least somewhat familiar with them.

Keep in mind, the goal is not to instantly switch everything at once, but to move incrementally in the right direction whenever new development (or updates to old code) is required. Make one positive change and pat yourself on the back. Then at the next opportunity, make another change, then another and another and so on and so on and so on…

If you keep doing that, you’ll naturally be improving code quality and system reliability.

Starting Out

We start the process during the requirements workshops. It’s a great place to begin because it’s one of the few times we have most of the necessary people in one place and thus can get questions answered instantly while the application is at the top of everyone’s minds, rather than trying to track down resources months later when they may have forgotten some of the details.

It’s also the point where we’re typically starting with a blank sheet.

Usually, as the requirements are being fleshed out, they’re recorded somewhere (using Word documents, spreadsheets and other office tools for example). DevReqs adds another recording mechanism: actual code.

What?

That’s right. I said it. We start writing code immediately.

I can hear you now… “That’s Crazy!”

Well, hold on a moment there bucko and hear me out…

We’re not actually writing detailed code just yet, but creating the scaffolding that defines what the application should look like. So we’re setting up the source control system, creating the folder structure and defining some high level modules and source files (of course most of these artifacts will be empty at first, but the goal is to build a skeleton we can flesh out as the process unfolds).

The overarching idea is to document the requirements in code as well as in the usual manner (with documents and spreadsheets). As we move ahead, we continually reconcile the code with the documentation to ensure everything is in sync.

A key feature is that only a few people (sometimes only one) need to know about the code, so everyone else thinks the workshops are being run in the traditional way.

MagikFP makes creating this scaffolding simple by including a create_magik_fp_app function that automatically generates a sample Functional Programming project in a specified folder with standard subfolders to hold business logic, library functions, application functions, automated tests and module.def files. And it does this in just a few seconds. You can then directly create a Git repo from this within a matter of minutes.
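
For example, generating the scaffolding might look something like the call below. The argument shown is an assumption for illustration only; check the MagikFP documentation for the actual signature.

    # Hypothetical call (the argument list is an assumption, not the real API).
    # Generates subfolders for business logic, library functions, application
    # functions, automated tests and module.def files under the target folder.
    create_magik_fp_app("C:/projects/pole_audit")
    $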

One important function that’s automatically created is the run() function that includes a generalized template with a safe_pipe, default error handler and default return values. That’s where the application will start and it’s where we begin to define the requirements.
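
To give a feel for it, the generated run() might look roughly like the sketch below. The safe_pipe signature, the default error handling and the requirement function names are all assumptions used for illustration; the real template will differ.

    _pragma(classify_level=basic)
    # Illustrative sketch of a generated run() (not the actual MagikFP template).
    _global run <<
    _proc @run(input)
        # Compose the top-level requirement functions. safe_pipe is assumed to
        # short-circuit on errors and hand them to a default error handler.
        _local app << safe_pipe(gather_pole_data,       # hypothetical requirement
                                summarise_pole_counts,  # hypothetical requirement
                                write_summary_report)   # hypothetical requirement
        _return app(input)
    _endproc
    $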

Other important artifacts are the automated test files (one for each generated function). I’ll have more to say about these later, but for now, keep in mind writing automated tests is an integral part of DevReqs. Testing is something we think about before writing code, not something we do at the end of development.

As new cohesive requirements are identified, we create top-level functions, appropriately named to represent what they do (along with their associated automated test files), and then plug them into the framework under run(). This ensures our code clearly documents each specific requirement.

Keep in mind, at this stage, the functions will be nearly empty (usually containing brief comments and code to write out messages stating which requirements they’ll be implementing) and the associated test files will consist mainly of generic template code.
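
A skeleton captured during a workshop might be as simple as the sketch below (REQ-07 and the function name are hypothetical examples, not output generated by MagikFP).

    _pragma(classify_level=basic)
    # Skeleton requirement function recorded during the workshop.
    # REQ-07 is a hypothetical requirement identifier.
    _global summarise_pole_counts <<
    _proc @summarise_pole_counts(data)
        # REQ-07: count wood poles per region, grouped by inspection status.
        write("REQ-07: summarise_pole_counts() will count wood poles per region")
        _return data  # pass-through until the design phase fills in the detail
    _endproc
    $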

As new information is gleaned about a particular requirement, we create sub-functions to capture this information and specify how the top-level function accomplishes its task. Basically we continue to decompose requirements into sub-requirements (and map each one to a sub-function) until we can’t decompose further.

As we progress, we end up with a number of cohesive top-level functions (each documenting one high-level requirement) that call one or more sub-functions (each describing how to implement a particular piece of the high-level requirement). Sub-functions can also call further sub-functions – and so on until an end is reached.
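
To make the decomposition concrete, the skeleton from the earlier sketch might eventually delegate to sub-functions, each capturing one sub-requirement (all names remain hypothetical).

    # Hypothetical decomposition: each sub-function documents one sub-requirement.
    _global summarise_pole_counts <<
    _proc @summarise_pole_counts(data)
        _return report_counts_by_region(
                    count_poles_per_region(
                        select_wood_poles(data)))
    _endproc
    $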

Once this is done, we automatically have our Feature List that defines the items we need to implement in order to satisfy the requirements. Further, we have a comprehensive set of functions we can map directly to stories in an automated fashion. So not only do we have a functioning bare-bones application, we also have a set of skeleton Agile stories that can be later refined and estimated.

Notice what we’ve done here… we’ve decomposed the requirements into functions and recomposed those functions into an application – albeit a barely functioning application, but an application nonetheless.

We’re still not out of the requirements phase and yet we have:

  • A Feature List.
  • Skeleton Functions that will implement the feature list.
  • The Business Domain Logic translated directly into executable functions.
  • Skeleton automated test files for each function.
  • A working application that follows standardized best practices including loosely-coupled components, separation of business logic and best-of-breed folder and file structure.
  • A Git repository.
  • A set of skeleton Agile stories.
  • The MagikFP library containing lots of reusable components and functions.
  • A built-in Property-Based automated testing library (MagikCheck).
  • The optional GSSKit Framework containing a loosely-coupled and automatic way to integrate with GSS if necessary.

Even at this early point, we’re far ahead of the usually vague descriptions that make up requirements. Capturing requirements in code makes us think logically about how a requirement can be implemented and forces us to confront real-world issues instead of glossing over them, as is the tendency when recording requirements in English. It also significantly reduces the number of translations we need to perform in order to obtain functioning code. Missing information can be raised with the group immediately for resolution, rather than being discovered months later when coding usually begins.

Further, as data mapping and specific formats are defined during the design phase, we can add these to the code so we have a functioning prototype that can read or write the specific formats required by the application. Sometimes these may just be placeholders, but even so, we know exactly where they will be located in the code.

And once specific input and output formats (such as JSON payloads) have been defined, we can immediately implement them in the relevant functions and produce actual results for workshop participants to review. (I’ll get to this a bit later, but because we use a standard Prototypal Object to pass data between functions and components, reading and writing various formats is already built-in.)

We can also capture items that aren’t yet expressed in code at this point (such as a comment noting that code duplicated across multiple requirements should be refactored into a generalized library function).

Image: Requirements Definition Workshop

Once the requirements (and sub-requirements) have been defined, we run our application and its associated tests to ensure it will execute without errors – this will confirm the folder structure is correct, ensure product.def, module.def, load_list.txt and other application files are valid, run the automated tests and then write out information statements about what each requirement should produce.

If that works as expected, we now have a functioning bare-bones application that captures and identifies all the requirements.

The next step is to compare the information written by the bare-bones app to the requirements captured in the Word documents and spreadsheets – which we should have been doing anyway.

If they match, great.

If not, determine which piece needs updating and update the relevant part (but usually you’ll discover it’s the Word documents/Spreadsheets that are missing information because writing code forces you to think in a more detail-oriented manner than simply writing sentences where words may have multiple meanings).

As the requirements workshops eventually give way to design workshops, where details are fleshed out, we’re in a position to start thinking about how to implement our skeleton functions. At this point there are two general choices:

  1. Custom code.
  2. Existing functions and methods (such as built-in MagikFP library functions or reusable functions and methods from other applications or the core product).

For (2), we simply hook in the functionality by calling the appropriate functions or methods.

For (1), we can create higher-level code – but we will want to resist the urge to get into the lower-level details at this point (after all, we’ve got to leave something for the developers to do during the build phase – but we can certainly define high-level logic in our functions). Of course we implement basic automated tests for any code we’ve written.

It’s also important to note the MagikFP library separates application logic (including frontend and backend code) from business logic, so as we define requirements, we must ensure we put the associated code in the proper places.

This is crucial in order to create independent, loosely-coupled, pluggable components (as a simple example, general application logic, such as writing a file to a folder, should be separated from business logic such as calculating how many wood poles are in a specific region).

But there’s more to it than just the significant benefits of loose coupling. Business logic is the reason we create software. We may have an award-winning GUI everyone fawns over and a robust, scalable, fast database flawlessly serving millions of records, but if we don’t implement the correct business logic, the application will not meet the business’s needs and thus fail to accomplish its goal.

And that’s the main reason we separate business logic… so we can focus on it independently of everything else. Think of it this way: everything else, while necessary, is there to support the business logic. Without the business logic, everything else is unnecessary.

MagikFP’s overarching goal is to allow developers to spend the majority of their time writing business logic.

create_magik_fp_app automatically helps us by creating the appropriate folders and files to separate out business logic – so we just need to ensure we put the different types of code in the correct folders and files.

Developing the Code

Once the requirements and design phases are complete, we can move to the development phase. This is usually where development-related tasks are set up… but since we’ve used DevReqs, that part has already been done. There’s a functioning skeleton application that can be executed. And there are automated test files. And there are basic stories for each function. And there’s a source control repo… well, you get the idea.

Of course we’ll probably have to hook these things into the CI/CD pipelines and perform other tasks, but for the most part, the development environment is ready to go.

So developers aren’t starting from a blank page. Their main tasks are to implement each existing function according to the specifications captured in the code and comments of that function, write tests, refactor when necessary and update the relevant stories.

If necessary they can also refer back to the Word documents/spreadsheets, but since we should have verified the skeleton application is in sync with the traditional documentation, developers normally would only need to work from the skeleton application.

An added advantage is that because the application has already been decomposed into logical functions, developers can tackle one function at a time – which forms a cohesive unit of work. And since we’re using the Functional Programming development paradigm, we want to create as much functionality as possible in the form of Pure Functions. I’ve written extensively about why functional programs (and pure functions) are better as well as how to implement functional programming in Magik, so I won’t repeat any of it here other than to give the following recommendations…

  • Try to keep functions as pure as possible.
  • Separate pure functions from impure functions. Pure functions form the core part of the application and should be used to implement most of the business logic. Some application and library functions will, by necessity, have to be impure, and these will wrap the pure core to insulate it from the outside world (such as databases, files, GUIs, APIs and other necessary code that relies on side effects). A sketch of this split appears after this list.
  • Develop code using a test-first mentality. This could be Test Driven Development (TDD) or something else, but the important idea is to ensure automated tests are written for every piece of non-trivial functionality when that functionality is created.
  • Once the business logic is completed, wire up any frontend, backend or interfaces (via GSSKit) to the business logic. If it was done correctly, business logic will be completely independent of the backend which will be independent of the frontend which will be independent of GSS.

    In fact, since we view non-business-logic code simply as implementation details, we’re able to plug and unplug these components as necessary. For example, our frontend could just as easily use ReactJS as a SWIFT GUI and our backend could use a NoSQL database as easily as it uses VMDS.

    The reason is that MagikFP functions generally take beeble_objects as input and return beeble_objects as output. This makes it very easy to pass data and functionality around the application’s components in a standardized manner. The beeble_object is really a glorified concurrent_hash_map with extra behaviour and syntactic sugar added to make reading and writing code easier. It also ensures no logic leaks out and pollutes other components.
  • Most backend functionality should be implemented in a reactive way using observables to retrieve and save data. This allows the application to plug in various backends in a loosely-coupled manner. If we decide to change a backend, we simply use an observable that knows how to communicate with that new backend. Since the observable interface doesn’t change and the business logic code is using this interface, the business logic code doesn’t need to change.
  • Use beeble_objects and beeble_observables to ensure business logic is completely isolated from frontend and backend code and no business logic leaks into these components.
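
Here is a minimal sketch of that split, under stated assumptions: the [:key] indexing and copy() on beeble_object are guesses based on its hash-map-like description, and the function names continue the hypothetical pole example from earlier.

    _pragma(classify_level=basic)
    # Pure core: no side effects, so it is trivial to test.
    # The [:key] indexing and copy() on beeble_object are assumptions.
    _global count_poles_per_region <<
    _proc @count_poles_per_region(payload)
        _local result << payload.copy()
        result[:pole_count] << payload[:poles].size
        _return result
    _endproc
    $

    # Impure edge: performs the side effect (writing a report line) and keeps
    # the pure core insulated from the outside world.
    _global report_counts_by_region <<
    _proc @report_counts_by_region(payload)
        write("Wood poles found: ", payload[:pole_count])
        _return payload
    _endproc
    $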

As we’ll see, one of the fundamental principles of DevReqs is comprehensive, automated testing. But what’s even better is to proactively adopt practices that eliminate entire classes of bugs and minimize others. Functional Programming, with its use of pure functions and immutability, fills that role in DevReqs. If we can eliminate certain types of bugs (such as, say, concurrent processing errors), there’s no need to write tests for them. And that’s a win for everyone.

Functional Programming is also far more conducive to achieving higher code coverage rates for automated tests than, say, Object Oriented Programming because of the linear nature of function composition. However high code coverage by itself does not necessarily mean better code, so DevReqs combines good coding practices with comprehensive automated tests and higher code coverage to produce robust, solid code.

Model-View-ViewModel (MVVM)

One problem I’ve noticed after working with Smallworld for more than two decades is that most applications tightly couple the GUI (frontend) code with the business logic. Even when attempting to separate concerns using the GUI/Engine concept, frontend logic leaks into engine logic and vice versa.

There are many reasons for this, but suffice it to say, modern programming principles tell us that completely decoupling the frontend is a good thing. So we’ll just go with that and I won’t attempt to justify the reasoning here.

Of course there are a number of ways to decouple the frontend, but my favourite is MVVM. I have specific reasons for this choice but, given this is already a very lengthy piece, I won’t delve into them here. Keep in mind, however, that if you have your own favourite – such as MVC for example – feel free to use it instead of MVVM. The concepts should be the same.

Okay, back to MVVM.

Image: Model-View-ViewModel (MVVM)

In general, the View interacts with the user and passes data to the ViewModel, which then writes data to or reads data from the Model. The ViewModel then maps that data onto a set of bound properties. The View updates itself after observing (via the Observer pattern) changes to the bound properties of the ViewModel.

The View simply contains visual elements (such as buttons, input boxes and drop down lists). It is responsible for GUI layout, animation and initializing GUI components but does not contain any application or business logic.

The ViewModel is the layer between the View and the Model and can be thought of as a canonical representation of the View. So the ViewModel provides a set of attribute interfaces, each representing a GUI component in the View. GUI components are bound to the ViewModel attributes – so when an attribute changes in the ViewModel, that change is automatically reflected in the View and vice versa.

The upshot is we never have to deal directly with the View. We simply update the ViewModel and the View automatically changes, or when a user changes a value in the View the ViewModel’s associated attribute is updated and an event is triggered.
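
As a rough illustration, a tiny ViewModel might look like the sketch below. The observer wiring is hand-rolled here purely for clarity; it is not MagikFP’s actual binding mechanism, and all names are assumptions.

    _pragma(classify_level=basic)
    # Minimal hand-rolled ViewModel sketch (not MagikFP's binding mechanism).
    def_slotted_exemplar(:pole_report_view_model,
        {
            {:pole_count, 0},
            {:observers, _unset}
        })
    $

    _method pole_report_view_model.new()
        _return _clone.init()
    _endmethod
    $

    _method pole_report_view_model.init()
        .observers << rope.new()
        _return _self
    _endmethod
    $

    _method pole_report_view_model.add_observer(an_observer_proc)
        # The View registers a procedure to be told when a bound property changes.
        .observers.add_last(an_observer_proc)
    _endmethod
    $

    _method pole_report_view_model.pole_count
        # Bound-property reader used by the View and by automated tests.
        _return .pole_count
    _endmethod
    $

    _method pole_report_view_model.pole_count << new_value
        # Writing the bound property notifies every observing View.
        .pole_count << new_value
        _for an_observer _over .observers.elements()
        _loop
            an_observer.invoke(:pole_count, new_value)
        _endloop
    _endmethod
    $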

But why the heck would we want to do this? Wouldn’t it be simpler to eliminate the ViewModel layer and just update the View?

Sure it would. And that’s why Magik developers have been doing it that way for decades. But this simplicity comes at a huge cost – it makes it very difficult to automatically test GUIs.

Of course if you’re not into automated testing, then tightly coupling GUI and business logic code may not seem problematic to you (although there are other bad things that happen even if you take automated testing out of the mix).

However you are not that kind of developer. You’re a DevReqs developer, so you want automated testing built in at every level and for every component – including the GUI.

And that’s what the ViewModel enables. It makes it extremely easy to test GUIs because your automated test code can simply read and write from/to the ViewModel attributes to programmatically test GUI code. It doesn’t matter what the actual GUI looks like because the ViewModel decouples the GUI (which is hard to programmatically read from or write to) from the business logic.
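
With that in place, an automated GUI-level test never touches a widget; it just drives the ViewModel. Reusing the hypothetical pole_report_view_model sketched above:

    _block
        # Drive the ViewModel directly; no widgets are involved.
        _local vm << pole_report_view_model.new()
        vm.pole_count << 42
        write(vm.pole_count = 42)  # a View bound to :pole_count would update itself too
    _endblock
    $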

Automated Testing

In DevReqs, there are two types of automated testing…

  1. Example-based testing using MUnit.
  2. Property-based testing using MagikCheck.

If you’re not familiar with MUnit, the Smallworld online help has good information on it. For MagikCheck, I wrote a comprehensive article about it and how it can be integrated with MUnit.

As such I won’t cover the details of either here, but keep in mind DevReqs uses both testing libraries to ensure testing is done via two different and complementary approaches.
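
As a flavour of the example-based side, an MUnit test for the pure function sketched earlier might look roughly like this. It assumes the standard test_case exemplar; the assert_equals() argument order may differ in your MUnit version, and a MagikCheck property test would sit alongside it in the same file.

    _pragma(classify_level=basic)
    # Example-based MUnit test sketch. count_poles_per_region() and the
    # beeble_object indexing are the assumed examples from earlier in this article.
    def_slotted_exemplar(:count_poles_per_region_test, {}, {:test_case})
    $

    _method count_poles_per_region_test.test_empty_input_gives_zero_count
        _local payload << beeble_object.new()
        payload[:poles] << rope.new()
        _local result << count_poles_per_region(payload)
        _self.assert_equals(0, result[:pole_count])
    _endmethod
    $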

The fundamental idea is that code without proper automated tests is incomplete because tests verify a system’s specifications are complete, so ensure you have at least a high-level understanding of both testing paradigms before you start your actual DevReqs journey.

When writing automated tests, there are two main camps.

Image: Test Driven Development (TDD)

There’s the TDD idea of writing a test first, then making it Green by implementing just enough code so that test passes. Then write the next test and make it Green by implementing the code for that test. Keep repeating these steps (refactoring when necessary to remove duplication and/or make other changes that result in better code) until you’ve implemented the desired functionality.

There’s nothing wrong with this development methodology and it has advantages (just search online for “TDD advantages” and you’ll come up with lots of examples).

However, I’m not a fan of TDD. Again, there’s nothing wrong with it but I’ve been writing programs for more than four decades and my brain doesn’t work that way. There’s just something uncomfortable about writing the test before the code. I don’t like it. It doesn’t bring me joy so, as Marie Kondo would say, “if it doesn’t bring you joy, get rid of it.”

But if you’re a fan and TDD brings you joy, then knock yourself out. You do you. It’s a fine way to develop code.

So… what camp do I subscribe to?

I simply reverse the process. I write a cohesive bit of code then immediately write the test. If the test fails, I revise the code (or the test) until the test passes (i.e. turns Green). Then I move onto the next cohesive bit of code and repeat the process until all the desired functionality has been implemented.

The end result is the same, so pick your poison and don’t get into a religious war about testing. Just ensure you do it.

Another benefit of doing it this way is that, because tests are written as part of the coding task, there is no separate line item in the project plan for testing. That means nobody can apply pressure to shorten the testing phase in order to reach an arbitrary deadline, because testing is split amongst many different functions.

And remember, when using the Functional Programming paradigm, most of the code should be Pure, so it makes writing automated tests much easier. If we add MVVM to the mix, it makes writing automated tests for the GUI much easier.

And since we’ve decomposed the requirements into cohesive functions, we simply have to test each of these functions individually, and when we compose them to form the application (using a pipe or safe_pipe), no additional testing is required. If the individual functions pass, then the composition of those functions will also pass. Composition does not introduce additional errors. Pretty neat eh? Try that with your Object Oriented Programming!
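
Concretely, once select_wood_poles(), count_poles_per_region() and report_counts_by_region() each have passing tests, composing them needs no test of its own. As before, pipe’s exact signature and the empty input payload are assumptions used purely for illustration.

    _block
        # Compose individually-tested functions; the composition itself is not retested.
        _local summarise << pipe(select_wood_poles,
                                 count_poles_per_region,
                                 report_counts_by_region)
        _local result << summarise(beeble_object.new())  # empty payload, illustration only
        write(result[:pole_count])
    _endblock
    $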

Code Reviews

Once code development and automated tests have been completed, the next step is a code review. We require this in order to find bugs, transfer knowledge and ensure conventions and best practices were followed.

The best way to perform code reviews is to have the developer and reviewer go over the code line by line (either for all the code if it’s new functionality or just the diffs if enhancements were made to existing code) where the reviewer asks questions, provides suggestions and generally drives the review.

The developer answers questions and provides justification for anything brought up by the reviewer. Developers are also responsible for recording required changes and then implementing whatever was agreed upon.

The reviewer treats the automated tests as part of the code, so the rules that apply to application code, also apply to automated tests.

Better results generally come from a two-way conversation between the developer and reviewer, rather than the reviewer simply reading through the code and pointing out areas of concern. As the dialog progresses, both parties ensure they are on the same page and it allows them to understand the code at a deeper level (for example, the reviewer might bring up a point the developer did not think about or the developer might teach the reviewer something he or she did not know).

But perhaps the biggest benefit of a code review is it allows other developers to understand what the code does. Too often we see cases where most, or sometimes all, of the knowledge about an application is locked in the brain of one person. If that person leaves, the knowledge transfer task is huge and rarely effective. By exposing multiple developers to what everyone else is doing in an interactive and incremental fashion, knowledge transfer is automatically propagated throughout the organization.

A final benefit of code reviews comes down to human nature. People simply write cleaner, clearer code when they know someone else will be reviewing it in detail. They’ll clean up the dreaded TODOs, ensure their comments are correct, comply with coding standards and put more effort into making their code better and easier-to-read than if they think nobody will pay any attention to it.

And, although most code reviews will consist of two people, multiple reviewers can participate if necessary, but it’s probably best to limit the number of reviewers to a maximum of three.

Well-thought-out code reviews not only help create a stronger codebase and provide a second set of eyes to verify code, but they also build cooperation between team members… and that’s why code reviews are an integral part of DevReqs.

Continuous Integration (CI)

At this point DevReqs just piggybacks on DevOps’s coattails. There’s nothing new here.

Developers merge changes to the main branch as often as it makes sense. After the build, all automated tests are run.

Recall that developers run their own automated tests during the development phase and these tests are run again along with integration tests and additional tests – such as infraction checks, customer-specific tests or, say, tests to ensure a Smallworld session opens and the data model is at the correct version.

This minimizes the chance of integration problems that can occur when implementing lots of code in one big-bang release.

After all tests have successfully passed, artifacts are packaged for deployment.

Continuous Integration depends heavily on automated testing to ensure the application works as expected. And since automated testing is a foundational piece of DevReqs, where every single non-trivial function has an associated test, it fits seamlessly with CI.

Continuous Delivery (CD)

Continuous Delivery extends CI by making code changes deployable to a staging or production environment after the build stage, with the actual deployment triggered manually (by clicking a button, for example). Additional solution tests might also be incorporated at this step.

After all automated tests have passed, application artifacts can be automatically deployed whenever it makes sense.

The idea is to allow the business to deploy changes when it wants to. By deploying early and often, we don’t have to deal with wading through large amounts of code to debug problems if something goes wrong.

Continuous Deployment (also CD)

Continuous Deployment is basically Continuous Delivery with automatic deployment to the target environment (i.e. there’s no manual intervention required – such as clicking a button).

As long as changes pass all automated tests, deployment automatically occurs. Human oversight is reduced to simply verifying things went well and end users are generally using the latest and greatest codebase.

In order to succeed, however, we must have great faith in our automated tests. Usually that’s not the case because most projects don’t implement comprehensive tests that have high code coverage. Many projects have some basic example-based unit tests coupled with a smattering of integration tests. And a large percentage of those don’t have decent tests at all – but rather add fairly useless tests simply to check boxes in contracts that require automated tests, so extensive manual testing is still necessary.

However DevReqs requires all non-trivial functions to have an associated test. And since DevReqs aims to create mostly pure functions and uses MVVM for GUIs, tests are easier to write and more effective – so we can be far more confident code that makes it through DevReqs has been properly tested and is more likely to do well with Continuous Deployment.

So not only do end users benefit from the latest code, but developers can witness their work in production very soon after finishing it.

DevReqs Summary

We’ve covered a lot of ground in this article but still only touched upon some fundamental concepts (I’ve linked to resources throughout for further reading).

Image: DevReqs Phases

As we’ve seen, DevReqs is really a system that applies existing tools and processes in a highly disciplined manner to create fully tested, robust, solid software using automation whenever possible.

The ultimate goal is to be proactive and find automatic ways to gather accurate requirements and immediately turn them into code as well as ensure applications are thoroughly tested.

We can’t constantly be reacting to bugs and missed requirements in a project but need to anticipate issues, follow a solid, well-thought-out plan and mindfully drive the process in the right direction.

Simply checking boxes in a project plan and churning out code willy-nilly may ultimately get a project implemented, but it’s unlikely to get it done in the shortest timeframe, with a minimum of effort and with the highest quality, easiest-to-maintain-and-enhance software possible.

Image: a Windows 98-era IBM PC. Cutting-edge 1999 technology; we’ve moved on from this setup when creating Smallworld applications, so why are we still using development practices from that era?

Smallworld developers have lagged the industry in best practices and continue to develop software like it’s 1999 (I think Prince wrote a song about that – or perhaps it was about partying, but regardless, building software like we did in 1999 is passé and riddled with errors and bugs — not to mention time-consuming manual processes).

It’s time to join the Facebooks, Googles, Microsofts and Amazons of the world and start developing Magik code using modern principles and paradigms. Don’t expect decades of legacy processes and code to change overnight; instead, focus on making small, incremental improvements every time you touch an application and eventually those improvements will accumulate and spread to create a paradigm shift that brings Smallworld development into the modern era.


About Mark

Mark Hing, a Value Investing and Smallworld GIS specialist, created the Automatic Investor software and is the author of "The Pragmatic Investor" book.


As President of Aptus Communications Inc., he builds cutting-edge FinTech applications for individual investors. He has also used his software expertise to architect, develop and implement solutions for IBM, GE, B.C. Hydro, Fortis BC, ComEd and many others.

When not taking photographs or dabbling in music and woodworking, he can be found on the ice playing hockey -- the ultimate sport of sports.
