One of the most frequently asked questions I’ve heard regarding Smallworld 5 (SW5) is, “why does starting a session take so long compared to version 4?”
It’s a fair question, and one with some complex answers. So I’m going to explain some of the reasons and provide basic tips for improving startup performance that will have your Smallworld 5 installation humming along.
In order to do that, however, we’ll need to look at the differences between versions 4 and 5. So let’s do that now…
Smallworld GIS Versions 4 and 5
Smallworld 5 requires more resources (such as CPU and memory) because it runs on the Java Virtual Machine (JVM) and, quite frankly, the JVM is a resource hog.
Starting a new JVM instance, loading and verifying bytecodes, performing other overhead tasks that weren’t necessary in Smallworld 4 (SW4), and executing the extra code introduced by SW5-specific implementation details all take time.
And that in a nutshell is why SW5 generally takes longer to start than SW4.
However there are quite a few benefits because the JVM has been around for a long time, is stable, optimized, secure, runs in myriad environments, receives consistent updates and can execute code quickly once its initial bootstrap is complete.
With that preamble out of the way, let’s take a look at what’s going on under the covers in order to identify and work around issues that may affect performance.
While version 5 is mostly backwards compatible with version 4 at the Magik level, the core technology beneath is different.
How different?
Think about the difference between a Volkswagen Jetta TDI diesel and a Tesla Model 3. Sure, they’re both cars you drive in similar ways, but under the hood, the propulsion mechanisms are worlds apart.
That’s akin to the differences between Smallworld 4 and 5 under the hood.
So first, let’s explore the Smallworld 4 GIS architecture to understand how it works.
Smallworld GIS Version 4
SW4 compiles Magik code into a bytecode format that runs on a proprietary Virtual Machine (VM) written in C specifically for Smallworld. It uses images and compiled Magik files to load and execute pre-compiled code.
Magik code is lexed (i.e. tokenized) and parsed (i.e. a parse tree is generated) using the magik_lexical_scanner and magik_parser classes respectively. Bytecodes are then generated and executed on the Smallworld VM.
Since Smallworld controls all parts of the process and the VM is specifically tuned for the Smallworld environment, components are built to work with one another in an efficient manner.
Think of how Apple tailors its OS to work specifically with its hardware versus how Microsoft’s Windows OS needs to support multiple underlying hardware platforms and you’ll have some idea of how SW4 differs from SW5.
The upshot is SW4 startup overhead is minimal because pre-compiled code can be immediately loaded and executed without going through a number of additional layers.
Further, because the VM is specialized and specific to Smallworld, it requires fewer resources to run than version 5 (which requires the generalized JVM).
Smallworld GIS Version 5
On the other hand, SW5 compiles Magik code to Java bytecodes that run on the JVM. Precompiled code is usually saved in Java Archive (JAR) files so the lexing, parsing and static compilation steps can be skipped in subsequent invocations.
SW4 images are no longer used because Java requires all code to be loaded via its Class Loader, and the Class Loader doesn’t understand Smallworld images. Without images, SW5 uses Sessions instead and requires that all code be loaded into the JVM, and all initialization code be executed, when starting an application, which incurs additional overhead at startup.
Further, SW5 does not use built-in Magik classes to parse Magik source and generate bytecodes. Rather, a grammar file describing the Magik language is fed to an external tool, the ANTLR parser generator, which generates a lexer and parser. SW5 uses these generated tools to create parse trees, which are used to produce Intermediate Representations that a backend finally transforms into Java bytecodes the JVM can execute.
The result is Smallworld no longer has complete control over how bytecodes are generated or executed, and this has particular implications when it comes to performance because additional overhead is introduced in the process.
Smallworld 5 Overhead
Additional overhead in SW5 takes many forms, including low-level callbacks to Java for type conversion (a very slow operation), memory alignment on 64-bit boundaries (which can be slow on certain platforms) and the Java bytecode verifier (which checks all bytecodes before execution to ensure instructions are legal, and can add significant overhead).
Interaction between datastore collection streams and Magik (now running on Java) is also slower than in SW4, so any code that touches the datastore may incur a performance penalty. This isn’t specifically related to the JVM, but is a consequence of the current SW5 implementation.
Some of these items can be somewhat mitigated by refactoring custom application code. However there are some parts that are outside developers’ control.
But there is an upside to all this because unlike the SW4 VM, the JVM can dynamically optimize executing code.
There are two types of compilation. Static compilation happens before execution: Magik source code is compiled to Java bytecodes. Dynamic compilation happens at run time: the JVM constantly analyzes running code looking for hotspots (code executed more times than a threshold) and uses the Just-in-Time (JIT) compiler to compile that code to a native format and apply various optimizations, such as in-lining method bodies or using registers rather than memory to temporarily store values.
Dynamic compilation is a multi-threshold process, so if code that was previously dynamically compiled passes further thresholds, the JIT applies further optimizations and replaces the existing natively compiled code with the newly optimized one.
Once the code has been natively compiled and optimized, it’s cached in the JVM’s code cache, so subsequent invocations use the optimized code rather than interpreting bytecodes again and again. Keep in mind the code cache has a default maximum size, and when it’s full, no further dynamic compilation is done. It’s therefore important to understand how an application behaves (and how much code is dynamically compiled) in order to set the maximum code cache size appropriately.
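As a rough illustration: on HotSpot-based JVMs (including the OpenJDK builds typically used with SW5), the code cache can be sized and inspected with standard flags such as the following. The 512m value is just an example, and how JVM options are passed to an SW5 session depends on your launcher, so check your installation’s documentation.

-XX:ReservedCodeCacheSize=512m
-XX:+PrintCodeCache

The first flag raises the maximum code cache size; the second prints code cache usage when the JVM exits, which helps show whether the cache is filling up.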
Of course these optimizations only apply to the current executing JVM process, and when that process ends all optimizations are lost — so new JVM processes will go through the optimization steps again.
Side Note
There’s nothing in the specification that requires a JVM to implement a JIT compiler, so it’s entirely possible to have a fully compliant JVM that doesn’t include a JIT and therefore doesn’t perform any dynamic optimization. However every major JVM implementation includes a JIT compiler, so we’ll just assume the ones used with SW5 will have one.
In essence, SW5 Magik is compiled to Java Classes (bytecodes), loaded by the JVM’s Class Loader, verified by the Bytecode Verifier and executed in the Execution Engine — which initially interprets each bytecode but also profiles the code and invokes the JIT when necessary to optimize and cache frequently used code.
The JVM also includes a Security Manager and one or more Garbage Collectors.
As you might imagine, the JVM is a complex piece of software that takes some time to get up to speed, but once there, it’s stable and relatively fast.
So while the end result can be better overall application performance (especially after a session has been running for some time), it can also lead to longer initialization and startup times while the overhead is executed or the JIT pauses to dynamically compile code.
Another important point: since the JVM and JIT give SW5 dynamic optimization that SW4 didn’t have, it’s important to write Magik code in ways that can take advantage of these new facilities.
Although it’s beyond the scope of this article to delve deeply into such topics, a general rule of thumb is that shorter, simpler Magik methods and procedures are better than longer ones (a good software design principle regardless of JVM optimizations). For example, because there is a bytecode length limit on in-lined code, shorter methods allow optimizations that longer ones don’t (see the sketch below).
As such, Magik developers would do well to learn how the JVM operates in order to write better performing code.
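To make the rule of thumb concrete, here’s a minimal Magik sketch; the rectangle class and its width and height slots are hypothetical, purely for illustration.

_method rectangle.area
    ## A short, single-purpose method: its small bytecode body makes it
    ## a good candidate for JIT in-lining at hot call sites.
    _return .width * .height
_endmethod
$

A longer method that computed the area as one step among many would be far less likely to be in-lined.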
Garbage Collection
Another potential performance issue surrounds garbage collection. In SW4, developers could force garbage collection to better manage performance (for example, ensuring lots of smaller garbage collections were run rather than fewer very large ones).
With SW5, developers no longer have that option. They can still ask the system to perform a garbage collection (i.e. system.gc(_true)), but the JVM does not necessarily have to comply. (The JVM does provide flags to control various aspects of garbage collection, but on a practical level this is difficult to do manually and is best left for the JVM to manage.)
The result is if a large garbage collection task runs, the session may pause for a significant amount of time until it completes.
To get around this, some JVM implementations use multi-threaded garbage collectors in an attempt to avoid completely pausing an application in a “stop the world” event. Examples include the Z Garbage Collector (ZGC) and the Garbage First (G1) collector.
However different JVMs are free to implement garbage collection as they see fit, as long as it conforms to the specification, so there is no guarantee how garbage collection will be done or when it will take place.
Keep in mind that different types of applications may perform better with a particular garbage collector, and most Java distributions can be configured (via command line flags) to use a garbage collector other than the default. In fact, garbage collection can be one of the biggest causes of performance problems.
If garbage collection is an issue in your application, it may be worthwhile to profile your application and then research JVMs and the garbage collectors they support.
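As a starting point, here are some standard HotSpot options for JDK 11, shown as generic java options (again, how they’re supplied to an SW5 session depends on your launcher):

-XX:+UseG1GC
-XX:+UnlockExperimentalVMOptions -XX:+UseZGC
-Xlog:gc*:file=gc.log:time

The first selects G1 (already the default in JDK 11). The second pair enables ZGC, which in JDK 11 is experimental and Linux-only. The last writes detailed, timestamped GC logs to a file so pause times can be analyzed.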
Moreover, SW5 is now dependent on how the JVM executes code, which can lead to further overhead. Different versions of the Java Runtime Environment (JRE) can have drastically different effects on performance.
The upshot is code that ran adequately on SW4 may have to be analyzed and refactored in order to perform satisfactorily on SW5 – even with the additional resources required by SW5 – and some details that could be managed in SW4 are now managed by the JVM outside the Smallworld environment.
Java
As we know, SW5 requires Java to run. However there can be some confusion about the various distributions and versions available.
Java has a number of distributions provided by companies such as Oracle, IBM, Amazon and others.
Historically, users could download either a JRE (the runtime containing the code necessary to run Java applications such as SW5) or a Java Development Kit (JDK), which includes a JRE plus additional development tools such as the Java compiler. Today, most distributions provide a JDK/JRE combination without a separate JRE.
For the most part, different distributions contain the same functionality and are usually compiled from the same base source files, so SW5 should run correctly on any JDK that is compliant with the JDK 11 standard. However GE has not tested SW5 with all distributions.
Therefore, Oracle distributions should be used. Oracle provides two different distributions:
- OracleJDK: this version requires a licensing fee for Production deployments and includes updates as well as support.
- OpenJDK: this version can be freely used in Production deployments, but updates for the current version aren’t released once a new version is unveiled – and there is no commercial support.
The choice most likely comes down to whether your organization requires commercial support or not, but since SW5 runs on both distributions, most installations tend toward OpenJDK.
Furthermore, different JDK versions can significantly affect performance. Substantial performance improvements were incorporated into JDK 13 and later.
A nice feature is that later JDK versions are usually backwards compatible with earlier ones, so it’s best to keep your installation on the latest stable version if possible.
In addition, Java should be installed locally and the JAVA_HOME environment variable set to point to the required JDK distribution.
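For example, on Windows this can be done from a command prompt; the path below is illustrative, so substitute your actual JDK installation directory.

setx JAVA_HOME "C:\Program Files\Java\jdk-11.0.2"

Note that setx only affects new command prompts, where you can confirm the setting with echo %JAVA_HOME%.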
With that said, let’s now turn our attention to some standard recommended practices that can improve performance.
JAR Files
When Magik code is entered at the command prompt or read from a file, it is automatically lexed, parsed and compiled to Java bytecodes, after which it is executed. This can take a significant amount of time. To skip these steps, compiled code can be stored in JAR files.
If JAR files are present for products and modules, they are loaded and executed rather than compiling from source files.
To improve performance, JAR files should always be used and they should be placed on a fast file server connected to machines running SW5 client sessions.
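As an illustrative sketch (the module names are hypothetical), a product with compiled code typically carries a libs folder holding the compiled JARs alongside its usual folders:

cambridge_db\
    libs\
        cam_db_application.jar
        cam_db_objects.jar
    config\
    modules\

If the JARs are missing, the session falls back to compiling from source, so it’s worth verifying they actually exist where you expect them.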
Database Context
When a session opens a database, time is spent creating information about data collections and records in Magik. This process is repeated each time the database is opened. A Database Context provides a means of storing the required information so the creation steps don’t have to be subsequently repeated and the database opens much faster.
To improve performance, Database Context files should be used and the database context folder should be placed on a fast file server (preferably as a sub-folder in the SW5 Product folder) connected to machines running SW5 client sessions. Ensure the SW_DB_CONTEXT_DIR environment variable is set to point to the appropriate database context folder.
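Since gis_aliases stanzas can set environment variables (as we’ll see later with DISABLE_MF), one convenient option is to set it there; the folder path below is illustrative.

cambridge_db_open:
    title   = Start Smallworld Cambridge DB Application
    session = cambridge_db:cambridge_db_open
    product = cambridge_db
    SW_DB_CONTEXT_DIR = %SMALLWORLD_GIS%/../cambridge_db/db_context
    args    = -cli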
Serialization
When a session starts, it must scan product folders to find the modules that make up the application, which can be time consuming. It is therefore recommended that serialized product and module definitions be created so that, on subsequent session starts, the definitions can be read directly from the serialized file, eliminating the scanning steps.
To improve performance, Serialized product files should be stored on a fast file server connected to machines running SW5 client sessions.
Core Smallworld Product
As with all files required by SW5, the Core Smallworld Product and custom products should be installed on a fast file server connected to machines running SW5 client sessions. This is particularly important because products contain resources that need to be accessed in an efficient manner.
Sessions
Once the standard performance recommendations, described above, have been implemented, the next step is to analyze SW5 Sessions.
Recall that Sessions have replaced SW4 images. They are defined via Session Definitions (usually contained in products’ configuration modules) that load the appropriate product’s (and sub-products’) code and set up required startup actions (such as opening a database).
SW5’s architecture is more complex than SW4’s and consequently incurs additional overhead at startup. It’s therefore important to analyze a product’s session definition to see whether modules loaded at startup are actually required or can be deferred (for example, perhaps the Google mapping module can be loaded only when it is used rather than automatically loaded into all sessions at startup), and whether functionality in startup procedures can be optimized or eliminated.
To aid in tracing session startup, a procedure named start_session() can be used.
After setting an appropriate logging level (e.g. magik_session.log_level << 1), the following code (taken from the SW5 help documents) can be run.
Magik> start_session("cambridge_db", "cambridge_db_open")
$
Product cambridge_db is already loaded from C:\alchemy\camdb789\cambridge_db
LOGGING(1): Looking for a sub-product of type config_product for product 'cambridge_db'
LOGGING(1): Found sub-product 'cambridge_db_config' of type config_product
LOGGING(1): Loading session 'cambridge_db_open'
LOGGING(1): [cambridge_db_open] Looking for parent session 'cambridge_db_closed' in the same product
LOGGING(1): [cambridge_db_closed] Looking for parent session 'swaf' in product 'sw_core'
LOGGING(1): [cambridge_db_closed] Looking for a sub-product of type config_product for product 'sw_core'
LOGGING(1): [cambridge_db_closed] Found sub-product 'sw_core_config' of type config_product
LOGGING(1): [swaf] Running :load_code_proc
LOGGING(1): [cambridge_db_closed] Running :load_code_proc
LOGGING(1): [cambridge_db_open] Running :load_code_proc
LOGGING(1): [cambridge_db_open] Running :open_database_proc
$
Note how information is displayed showing each sub-product, session and startup procedures as they are encountered during session initialization. This procedure can be helpful in determining which products, modules and startup procedures to analyze for a SW5 application.
It’s a high-level way to profile code.
However it’s just a starting point and additional in-depth profiling should be done using more powerful tools.
Profiling Tools
An important component for improving performance is to profile code, because profiling can identify parts of an application that don’t perform well. In many instances it’s difficult to understand how much time is spent in various routines and the amount of resources used simply by looking at the source code.
Profiling allows us to gather relevant metrics we can then use to address performance issues by refactoring our code.
I’ve listed a few options below, but keep in mind there are many tools available. Some are easy to use and others require significant technical expertise. For now, here are some of the easier to use ones…
Java VisualVM is a useful (and free) tool for understanding what is happening in the JVM. Following are some recommendations on what to look for.
- On the Sampler tab, click CPU to gather information on which Java methods are using the most time. The underlying Java methods can be mapped to Magik methods using the naming convention described in the SW5 online help documentation. Sorting by Self-time (CPU) generally gives the most useful information.
- Use Monitor to look at heap, classes and threads when CPU is high.
- Click Heap Dump to take a heap dump when memory usage is high.
- The Heap graph displays the total heap size and how much of the heap is currently used.
- The Threads graph displays an overview of the number of live and daemon threads in the application’s JVM.
- Use the Thread Dump button to take a thread dump to capture and view exact data on application threads at a specific point in time.
- Unfortunately the Profiling tab does not work with SW5, so you will have to do profiling another way.
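A note on connecting: VisualVM automatically detects JVMs running on the local machine, so a locally started SW5 session should simply appear in its Applications list. To inspect a session on another machine, the JVM can expose a JMX port via standard properties like these (values are illustrative; enable authentication and SSL for anything beyond a test environment):

-Dcom.sun.management.jmxremote.port=9010
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false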
Realworld Diagnostics is a tool that can be used in a variety of ways; however, in the context of upgrades and migrations it is helpful because it captures many internal metrics that aren’t easily available by other means.
It also collects and displays information on a dashboard of interactive graphs and charts that can be clicked to drill down. There is a dashboard designed specifically for profiling SW5 session startup, plus other tools for understanding how code is executed by the JVM.
Diagnostics can be used for far more than simply improving performance, but it requires a paid license.
In addition, there are a number of tracing and logging tools built into the Smallworld environment and myriad open source and paid Java tools that can be used to obtain metrics and performance data. Magik scripts can also be written to simulate loads and monitor hardware responses.
Disable the Method Finder
The method finder loads as soon as the first piece of Magik code is loaded. However since it is usually only used by developers, it doesn’t need to be started for, say, end-user or job server sessions.
You can disable it, so it doesn’t start, by setting the method_finder.auto_start? shared variable to _false.
This will save some startup time and not load unnecessary code. If disabled, the method finder can be started manually when needed.
One warning: make sure this is the first thing your session does. If other Magik loads first, the method finder will already have been loaded automatically before your code runs to disable it.
Generally, sessions are registered in register.magik files, so you could put the statement at the start of the block in the register.magik file that’s responsible for starting everything, as shown in the following example.
In the Cambridge application, the main register.magik file is located in the folder:
…\cambridge_db\config\magik_sessions\source
#% text_encoding = iso8859_1
_package sw
!global_auto_declare?! << _true
$

_block
    method_finder.auto_start? << _false

    _local open_database_proc <<
        _proc@open_database_proc( a_session )
            ace_dir << a_session.open_database()
            write( "Database Opened: ", ace_dir )
        _endproc
    .
    .
    .
_endblock
$
You can find which code is loaded first by viewing the buffer when a session starts. For the Cambridge session example below, the final “Loading …” line shows the path to the register.magik file that’s loaded first.
Smallworld Core Spatial Technology Version Information:
  Core Product:
    Release No.      5.2.7.0
    Release Spin No  462 (22/03/2021 10:03:00)
    Gis C Utils      5.5.9
    Datastore C      7.8.2
    Datastore Magik  10.1.2
  Layered Products:
    sw_core               5.2.7.0  Core product with engine and generic GUI classes
    sw_kernel             5.2.7.0  Low-level kernel classes
    sw_addon_google_maps  5.2.7.0  Smallworld Addon for Google Maps(TM)
Found smallworld_registry: D:\Smallworld\smallworld_registry (using environment variable SMALLWORLD_GIS)
Loading D:/Smallworld/cambridge_db/config/magik_sessions\source\register.magik
.
.
.
Keep in mind, however, some applications may load other Magik code before loading register.magik, so look at the load_list.txt file responsible for loading register.magik and ensure register.magik is listed first. If it’s not, create a new file (e.g. disable_method_finder.magik), put the block of code in that file and ensure that file is referenced first in load_list.txt.
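As a minimal sketch, such a file needs nothing more than the same statement used earlier:

#% text_encoding = iso8859_1
_package sw
$

_block
    # Loaded before any other Magik (first in load_list.txt) so the
    # method finder never auto-starts.
    method_finder.auto_start? << _false
_endblock
$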
And if you want to control, per session, whether or not the method finder starts, an environment variable can help.
In the gis_aliases file, add an environment variable (let’s call it disable_mf) and set it to true in the session stanzas where you want to disable the method finder. The following example sets the variable via the DISABLE_MF entry in the cambridge_db_open stanza.
#
# Smallworld Core Windows Platforms Standard Core Product Aliases
#
# The Smallworld Product's standard aliases file should not be edited
# by hand. A user can have personal aliases by placing them in a file
# named `gis_aliases' in his/her home directory, but the alias names
# must be chosen not to clash with those in the standard aliases file.
#
cambridge_db:
    title   = Start Smallworld Cambridge DB Closed
    session = cambridge_db:cambridge_db_closed
    product = cambridge_db
    args    = -cli

cambridge_db_open:
    title   = Start Smallworld Cambridge DB Application
    session = cambridge_db:cambridge_db_open
    SW_CONSTRUCTION_PACK_DIR = C:/Temp
    SW_ACE_DB_DIR = %SMALLWORLD_GIS%/../cambridge_db/ds/ds_admin
    splash_screen = %SMALLWORLD_GIS%\sw_core\resources\base\bitmaps\smallworld_gis_splash.png
    product = cambridge_db
    args    = -cli
    DISABLE_MF = true

cambridge_db_open_no_auth:
    title   = Start Smallworld Cambridge DB Application with no authorisation
    session = cambridge_db:cambridge_db_open_no_auth
    SW_CONSTRUCTION_PACK_DIR = C:/Temp
    SW_ACE_DB_DIR = %SMALLWORLD_GIS%/../cambridge_db/ds/ds_admin
    splash_screen = %SMALLWORLD_GIS%\sw_core\resources\base\bitmaps\smallworld_gis_splash.png
    product = cambridge_db
    args    = -cliq

start_gss:
    title   = Smallworld Geospatial Server Application
    session = gss:gss_closed
    CONFIG_FOR_STARTUP = magikCLIConfig
    product = gss
Then, in the code block back in register.magik, add the _if statement shown near the top of the _block below…
#% text_encoding = iso8859_1
_package sw
!global_auto_declare?! << _true
$

_block
    _if system.getenv("disable_mf") = "true"
    _then
        write("DISABLING MF")
        method_finder.auto_start? << _false
    _endif

    _local open_database_proc <<
        _proc@open_database_proc( a_session )
            ace_dir << a_session.open_database()
            write( "Database Opened: ", ace_dir )
        _endproc

    magik_session.register_new(
        "cambridge_db_closed",
        :parent_session, "sw_core:swaf",
        :optional_products, { :sw_core_lp, :cambridge_db_lp },
        :load_modules, { :cam_db_swaf_professional_application, :cam_db_swift_view_application } )

    magik_session.register_new(
        "cambridge_db_open",
        :parent_session, "cambridge_db_closed",
        :startup_proc, :startup,
        :open_database_proc, open_database_proc )

    magik_session.register_new(
        "cambridge_db_open_no_auth",
        :parent_session, "cambridge_db_closed",
        :startup_options, { :authorisation, :none },
        :startup_proc, :startup,
        :open_database_proc, open_database_proc )

    magik_session.register_new(
        "gss_closed",
        :parent_session, "sw_core:swaf",
        :add_products, { :service_framework, :web_plot, :web_apps },
        :load_modules, { :gss_basic_vertx_application, :gss_admin_application },
        :post_build_proc, hide_unwanted_applications
        # under msf was {:munit_core_mods, :munit_xml, :magik_mock, :msf_test_services_application}
    )

    magik_session.register_new(
        "gss_camdb_vertx_open",
        :parent_session, "gss_closed",
        :add_products, { :cambridge_db },
        :load_modules, { :gss_basic_camdb_vertx_application },
        :package, :user,
        :startup_proc, :startup_proc_no_cli,
        :open_database_proc, open_database_proc,
        :startup_options,
        {
            :ds_environment_options,
            {
                :nslots, 10000,
                :npcls,  65536
            }
        } )
_endblock
$
This will only disable the method finder if the disable_mf environment variable is set to “true” (which we did only for the cambridge_db_open stanza). All other sessions won’t have this variable set, so the method finder will continue to start by default.
That way you can control whether the method finder starts, or not, from gis_aliases.
Next Steps…
Performance tuning is a complex task that must take into account all components that can affect SW5 applications (such as the network, storage, CPU, memory, virtualization, software versions and others).
However, in the majority of cases, performance issues are application/software related. Therefore it makes sense to learn how to use specific tools to profile code and uncover where problems lurk. These tools may also help you understand the underlying SW5 technologies, so code can be developed in a manner that takes advantage of them.
I’ve touched upon a few improvements you should implement if you’ve not already done so. Hopefully that will give you a good start in turning your SW5 system into a well-oiled, faster running machine.