
Agile development is now standard practice. In this context, agile means getting feedback on new features from customers as quickly as possible. From an IT perspective, it means implementing changes at the lowest possible cost and risk: the budget should go into the actual development work, not into the rollout. The only way to achieve this is to automate all of the IT processes involved. As a result, application runtime environments are becoming increasingly fine-grained, which makes it possible to hand over maintenance and configuration work to external providers.

In the 1990s, applications still ran on dedicated computers, and users had to maintain and configure all of the hardware and software themselves. Virtual machines, which became widespread in the 2000s, abstracted away the hardware and thereby eliminated its configuration and maintenance; only the operating system still had to be set up as a software component. The container technology in use today abstracts away both the hardware and the software. The same container will typically run on any system, regardless of whether it uses Windows or Linux, Intel's x64 or Apple's M1 chip, as long as a suitable container runtime environment is available (different CPU architectures are usually served via multi-arch images).

So far, so good. But what exactly are containers? A container is a running instance of a container image: a software package that bundles an application together with all of its dependencies (external libraries or tools) and is largely isolated from its environment.
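
To make this concrete, a container image is typically described in a Dockerfile. The following minimal sketch packages a hypothetical Java application; the base image and the file name app.jar are illustrative assumptions, not taken from a specific project:

    # Base image: provides the operating system layer and the Java runtime
    FROM eclipse-temurin:17-jre
    # Copy the application and its bundled dependencies into the image
    COPY target/app.jar /opt/app/app.jar
    # Command executed when a container is started from this image
    ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]

Building this file produces the image; starting the image then creates a container, in other words a concrete running instance of it.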

Containers and IaC – a match made in heaven

Containers only unlock their full potential in combination with Infrastructure as Code (IaC). IaC is a method in which every step of the recurring set-up of an IT system is documented as source code and stored in a source control management system (SCM) such as Git. Source code here includes shell scripts as well as configuration files for configuration management tools like Ansible.
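
What such source code can look like is shown in the following minimal sketch: an Ansible playbook that installs a container runtime and ensures it is running. The package name docker.io (Debian/Ubuntu) and the host selection are illustrative assumptions:

    ---
    # playbook.yml: prepares a host so that it can run containers
    - name: Prepare container host
      hosts: all
      become: true
      tasks:
        - name: Install the container runtime
          ansible.builtin.apt:
            name: docker.io
            state: present
            update_cache: true
        - name: Ensure the runtime is running and starts on boot
          ansible.builtin.service:
            name: docker
            state: started
            enabled: true

Checked into Git, this file documents the set-up step transparently and can be replayed on any number of hosts with ansible-playbook.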

The aim of IaC is to automate the installation and configuration of IT systems and to make both steps transparent. For a purely local system this may be of little relevance, since the program inside the container runs self-sufficiently and can be started by external users just as easily as by the developers. If you deploy the container in the cloud, however, you suddenly face a great deal of configuration work if the system is to operate reliably and scale. To make the system resilient, containers are usually run redundantly, and a container orchestrator such as Kubernetes helps to keep a constant overview of them. Depending on the traffic generated by the application, a load balancer may also make sense. On top of that, resources such as DNS entries, TLS certificates, alerting for unexpected system errors and logging all have to be configured. The container image quickly becomes only a small part of the overall system, and without the necessary infrastructure, nobody else can operate that system on their premises.

What is needed, then, is a way to store the container image together with the configuration of its infrastructure, so that the system can be ported as a whole. With IaC, this is no longer an issue, because package managers like Helm make exactly that possible: they store the actual container image reference together with a template of the required infrastructure and thus make containerised applications easy to exchange with others.

With GitOps, the infrastructure that actually exists (the current state) is continuously compared with the infrastructure declared in the source code (the target state), which protects the system from untraceable changes. The operated system itself has only read access to the repository, and configuration changes are made exclusively via pull requests to the source code, so every change remains traceable.
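
As a rough sketch of how Helm combines the container image reference with an infrastructure template (the chart layout is standard, while the names and values are illustrative assumptions):

    my-app/
      Chart.yaml          # name and version of the package
      values.yaml         # default configuration, e.g. image reference and replica count
      templates/
        deployment.yaml   # infrastructure template, filled in from values.yaml

    # templates/deployment.yaml (excerpt)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ .Release.Name }}
    spec:
      replicas: {{ .Values.replicaCount }}  # redundant instances for resilience
      selector:
        matchLabels:
          app: {{ .Release.Name }}
      template:
        metadata:
          labels:
            app: {{ .Release.Name }}
        spec:
          containers:
            - name: app
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

A helm install would create these resources directly; in a GitOps set-up, a tool such as Argo CD or Flux instead watches the chart in Git and continuously reconciles the cluster against it.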

Advantages of containerisation in a business setting

Thanks to container technology, neither service providers nor their customers have to tend to the underlying hardware or software, which saves both sides time and money. As service providers, software developers can focus their full attention on their core task: designing and programming the requested functionality. During development, they can exchange the software among themselves as a container image without running into problems such as missing or incorrect program libraries. Installing the software as a turnkey product also becomes easier, provided the client has already set up the same container runtime environment. The customer, in turn, no longer needs additional specialised personnel to maintain the underlying software or hardware; thanks to the high degree of standardisation of container runtime environments, this task can easily be handed over to a third party.
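
This exchange boils down to a few standard Docker CLI calls; the registry address and image name below are invented examples:

    # Build the image from the project's Dockerfile
    docker build -t registry.example.com/team/app:1.0 .
    # Push it to a shared registry so that colleagues and clients can pull it
    docker push registry.example.com/team/app:1.0
    # Anyone with a compatible container runtime starts the identical software
    docker run -d -p 8080:8080 registry.example.com/team/app:1.0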

How can containerisation be effectively deployed in the insurance industry?

Insurance companies tend to operate highly customised monolithic systems that have grown and expanded over many decades. The core was typically developed by individual technical specialists and implemented in a programming language that is now long outdated. The problem with this is that monolithic systems are often difficult to maintain or extend. Legacy systems of this kind are frequently poorly documented or not documented at all; in many cases, only the original developers really understand them, and those developers have often long since left the company. New employees are hard to find on the market, because hardly anyone wants to work with such outdated programming languages any more.

Nevertheless, digitalisation continues to gather pace in the insurance industry, even though it is being slowed down by the legacy systems that insurers operate. A microservice architecture, which allows software to be developed in a modular fashion, is the perfect fit if you want to use the technologies described above. Thanks to IaC and GitOps, changes and upgrades to the architecture become transparent and easy for everyone to understand. Defined API interfaces allow individual containers to be swapped out during operation without the overall system becoming unstable. In this way, knowledge about the application landscape is consolidated and remains in the company long into the future, even after the technical specialists who created it have left.
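
To make the idea of a defined API interface concrete, here is a minimal, purely illustrative OpenAPI fragment (the path and fields are invented): any container that fulfils this contract can replace the previous one during operation.

    openapi: 3.0.3
    info:
      title: Contract service
      version: "1.0"
    paths:
      /contracts/{id}:
        get:
          summary: Return a single insurance contract
          parameters:
            - name: id
              in: path
              required: true
              schema:
                type: string
          responses:
            "200":
              description: The contract as JSON
              content:
                application/json:
                  schema:
                    type: object
                    properties:
                      id:
                        type: string
                      status:
                        type: string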

Conclusion

Do defined APIs make the IT team's work easier or better? In some cases, yes, but not always. Although it has become easier to set up individual subcomponents of a system, the overall system has become more complex: it consists of significantly more individual components that interact far more frequently. Defined APIs are great, but let's not kid ourselves: if the system is to run smoothly, everyone involved needs a much broader knowledge of the application landscape that the company operates.



Author: Marvin Forstreuter

Marvin Forstreuter is a Java trainee in the Line of Business Insurance at adesso's Hanover location. His focus is on software development with Java and web development. He is also interested in topics relating to natural language processing and artificial intelligence.
