Embracing and Managing the Inevitable Change

Architects deal with change. Change is something that occurs regardless of how it is approached. Try to control it, and it takes unpredictable forms. Direct it, and it will slip through your fingers, surprising you with momentum you never accounted for. Try to suppress it, and it will come from the outside. Ignore it, and you will be left outside. It’s an endless balancing act between systemic risks and opportunities.

This one is for leaders who are not just managing technology, but shaping the future of their organisations through strategic decisions around IT evolution.

If it works, there is a risk in changing it

Automation is not necessarily something performed purely by machines. From an enterprise perspective, any stable and predictable activity or function can be considered automated, because it just means there is no need to pay attention to it. When circumstances are right, things just happen automatically, even if a significant amount of manual work is involved.

Stability creates a baseline that helps organisations achieve things. Changing something functional requires attention at a minimum: attention that is taken away from something else. It also introduces an element of systemic risk that can propagate through the enterprise in unpredictable ways. Large-scale business environments are so complex that it is no wonder compartmentalising parts of them through modularisation has become the norm.

Slips still happen. Banks renewing their information systems and losing messaging history or completely trashing their ability to process payments are not unheard of. Was it really worth it to renew a core system with layers and layers of legacy in it?

In hindsight, it’s easy to find cases where a more conservative treatment, such as isolating functionality into modules and refactoring them, would have been the sensible choice. On the other hand, if the environment has evolved into the infamous overcooked spaghetti, biting the bullet might be just the thing to do.

Every now and then, a discussion gravitates towards the idea of accepting that a critical mass has been achieved. It means that any significant change in the environment will result in consequences that far outweigh any imaginable benefit. These consequences for businesses might relate to inertia and dependencies associated with, for example, information systems, operations, personnel, or even customers. Often, it is a combination of several factors.

The sensible solution in these cases is to go into maintenance mode. The focus is on cost control and protecting the existing revenues. This typically happens by building barriers that prevent competitors from challenging the status quo. These barriers tend to increase the inertia rather than provide customers with something new and innovative. Cases such as making contract termination ridiculously expensive or holding customer data hostage are not unheard of. The result is an increased number of stakeholders with their own interests to protect, and these structures tend to make change even harder. Sometimes, a conscious decision to stagnate can be just the right thing to do.

Despite this dynamic, things still change. New entrants in the marketplace challenge conventions. They offer new services and products with lucrative value propositions. We sometimes consider the idea of running the existing business in a purely exploitative mode and starting a new company that would compete in the same market, but with fewer restrictions and more freedom to change.

While the idea may sound appealing, there are challenges associated with this approach. There is the obvious problem of running two companies instead of one. These companies need to operate in very different ways and have their own unique cultures. To truly separate them, they should be completely different companies, which means the new company does not enjoy the established market position the old one has. It’s tempting to share services between the companies to get the best of both worlds, but with that kind of approach you’ll also introduce the old cost structure and processes into the new venture. Ouch. It’s the classic situation of having your cake and eating it too. Cannibalising your own business might sound like a good idea, but in practice, it has proven to be very hard.

Don’t change things that are not meant to be changed

There is a huge difference between configuring and tailoring in tech. Tailoring in this context means modifying a product to function in ways the baseline product does not; it does not refer to tailored solutions that are built from scratch for a specific need. The modification might be an individual feature or set of features, or a more radical change of behaviour.

There is a considerable risk that the tailoring does not deliver the business benefits that justify the related lifecycle costs of the solution. That’s a diplomatic way of saying it. Sometimes, tailoring perfectly good software feels like swapping the accelerator and brake pedals just because the customer prefers them that way. The change seems superficial: it’s just switching two pedals, which would make the customer buying an expensive car happy. Unfortunately, it makes very little sense on many levels. Instead of tinkering with the internal mechanisms of a car to meet this kind of request, “tailoring products” should aim towards things like preprogramming the car radio with the local stations.

SaaS approaches have fortunately driven some sense into these endeavours. There are also numerous ways to extend solutions instead of modifying their core. If we keep thinking about cars, there is really no need to rebuild the engine to replace worn-out windshield wipers. Everyone who has been in the game for some time knows software does not always follow this logic. Some solutions offer a plugin approach for extensions, and when done well, it’s a nice way to enhance functionality. Some take the platform approach and allow programming against an API. Either way, there is a built-in isolation mechanism that keeps the extension loosely coupled.
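To make the loose coupling concrete, here is a minimal sketch of a plugin-style extension point: the host product owns a small, typed contract, and extensions register against it instead of reaching into the host’s internals. All names below (ReportPlugin, PluginHost, the sample plugin) are hypothetical and not tied to any specific product’s extension API.

```typescript
// A hypothetical, minimal plugin contract owned by the host product.
// Extensions only see this interface; they never touch host internals.
interface ReportPlugin {
  /** Unique identifier so the host can list and manage plugins. */
  name: string;
  /** Transform a report row; must not mutate the input. */
  enrich(row: Record<string, unknown>): Record<string, unknown>;
}

// The host keeps plugins at arm's length behind a small registry.
class PluginHost {
  private plugins: ReportPlugin[] = [];

  register(plugin: ReportPlugin): void {
    this.plugins.push(plugin);
  }

  // The host decides when and how plugins run, and can isolate failures.
  run(row: Record<string, unknown>): Record<string, unknown> {
    return this.plugins.reduce((acc, plugin) => {
      try {
        return plugin.enrich(acc);
      } catch (err) {
        console.warn(`Plugin ${plugin.name} failed, skipping`, err);
        return acc; // a misbehaving plugin degrades gracefully
      }
    }, row);
  }
}

// Example extension: adds a field without modifying the host's core.
const vatPlugin: ReportPlugin = {
  name: "vat-enricher",
  enrich: (row) => ({ ...row, vatIncluded: true }),
};

const host = new PluginHost();
host.register(vatPlugin);
console.log(host.run({ orderId: 42, total: 100 }));
```

The point is the boundary: the host controls when extensions run, and one that misbehaves can be skipped instead of taking the core down with it.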

While these kinds of setups are not inherently broken, they are not issue-free. There’s the old saying about making things idiot-proof, only for a better idiot to be built. The amount of trouble to expect in UX and functionality grows with the number of deployed plugins. The problem with programming against APIs is that it allows the creation of arbitrary code that can introduce performance issues, vulnerabilities, funky behaviour, maintenance load, or simply increase the amount of plain old WTF.

One test of the changes is how much rework they will require when a system update occurs. Ideally, they will pass regression testing with flying colours. It is highly recommended to take the rocket scientist approach here: always expect things to blow up.
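One way to make the “expect things to blow up” stance concrete is a small contract check that runs after every platform update: call each extension with known inputs and compare against expected outputs. The snippet below is a self-contained sketch with hypothetical names and data, not tied to any particular product or test framework.

```typescript
import { strict as assert } from "node:assert";

// Hypothetical extension under test: a pure transform on report rows.
const vatPlugin = {
  name: "vat-enricher",
  enrich: (row: Record<string, unknown>) => ({ ...row, vatIncluded: true }),
};

// Known input/output pairs that encode the behaviour we depend on.
// Re-run these after every upgrade of the underlying platform.
const contractCases = [
  {
    input: { orderId: 42, total: 100 },
    expected: { orderId: 42, total: 100, vatIncluded: true },
  },
];

for (const { input, expected } of contractCases) {
  const actual = vatPlugin.enrich(input);
  assert.deepEqual(actual, expected, `${vatPlugin.name} broke after upgrade`);
  // The input must remain untouched: mutation is a classic upgrade landmine.
  assert.deepEqual(input, { orderId: 42, total: 100 });
}

console.log("Extension contract still holds after the update.");
```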

Built to be replaced: Embrace the intentional stagnation

How do you build and run an environment that is reasonably cost-effective and flexible? There are some universal principles to consider. That’s the tricky thing about them: they are to be considered. They might lead to different outcomes in different situations. There’s no escaping the need to be context-aware. Sometimes, going for a monolithic approach and executing it in a waterfall fashion makes perfect sense. Sometimes it is a recipe for catastrophe.

Consider data handling as an example. Applications and even platforms tend to have their own ways of managing data. Due to the cloud, just deciding which system handles the main data is not that straightforward. These systems offer APIs for data access, but often with limitations. Using packaged SaaS solutions may come with costs related to resource usage: API calls have limits, and additional costs are incurred after exceeding a certain tier. This is not ideal for heavy operational use. Some solutions and platforms even guard their data from outside systems and encourage “native” data use within their own products and services. Some vendors kindly take over data management from their clients and build barriers around it.
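To give a feel for what those limits mean in practice, here is a rough sketch of pulling data out of a SaaS API while respecting a call budget. The endpoint, page size, and quota values are made up for illustration; real products document their own quotas, pagination, and error codes.

```typescript
// Hypothetical quota: e.g. 100 calls per minute before the next pricing tier.
const CALLS_PER_MINUTE = 100;
const MIN_INTERVAL_MS = 60_000 / CALLS_PER_MINUTE;

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Page through a (made-up) SaaS endpoint without exceeding the call budget.
async function exportCustomers(baseUrl: string, apiKey: string) {
  const rows: unknown[] = [];
  let page = 1;

  while (true) {
    const started = Date.now();
    const response = await fetch(
      `${baseUrl}/customers?page=${page}&pageSize=200`,
      { headers: { Authorization: `Bearer ${apiKey}` } },
    );

    // Many APIs answer 429 when the quota is exhausted: back off and retry.
    if (response.status === 429) {
      await sleep(MIN_INTERVAL_MS * 10);
      continue;
    }
    if (!response.ok) throw new Error(`Export failed: ${response.status}`);

    const batch = (await response.json()) as unknown[];
    if (batch.length === 0) break; // no more pages
    rows.push(...batch);
    page += 1;

    // Pace the calls so heavy exports stay inside the cheaper tier.
    const elapsed = Date.now() - started;
    if (elapsed < MIN_INTERVAL_MS) await sleep(MIN_INTERVAL_MS - elapsed);
  }
  return rows;
}

// exportCustomers("https://api.example-saas.com/v1", process.env.API_KEY ?? "");
```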

Don’t let your data become a hostage

One way to own your data is to create microservices that handle it. Instead of relying solely on an application’s database, any operation on the data should also be reflected in a separate service. It works both ways: updates made in these services should be reflected in the relevant application databases. Some even take it so far that the applications don’t have their own data storage, but operate solely on top of these services. It’s something to consider, but it comes with additional considerations – security, to begin with.
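A minimal sketch of the idea, with hypothetical names: a small customer-data service owns the core record, and applications subscribe to changes so their local databases stay in sync instead of each quietly becoming the “real” source.

```typescript
// Core record owned by the data service, not by any single application.
interface Customer {
  id: string;
  name: string;
  email: string;
}

type CustomerListener = (customer: Customer) => void;

// Hypothetical core data service: the single place where customer data lives.
class CustomerDataService {
  private store = new Map<string, Customer>();
  private listeners: CustomerListener[] = [];

  // Applications register to mirror changes into their own local databases.
  onChange(listener: CustomerListener): void {
    this.listeners.push(listener);
  }

  upsert(customer: Customer): void {
    this.store.set(customer.id, customer);
    // Propagate the update so application-local copies never drift silently.
    this.listeners.forEach((listener) => listener(customer));
  }

  get(id: string): Customer | undefined {
    return this.store.get(id);
  }
}

// An application keeps only a local, non-critical read copy.
const crmLocalCache = new Map<string, Customer>();

const customers = new CustomerDataService();
customers.onChange((c) => crmLocalCache.set(c.id, c));
customers.upsert({ id: "c-1", name: "Acme Oy", email: "info@acme.example" });

console.log(customers.get("c-1"), crmLocalCache.get("c-1"));
```

In real life the service sits behind an API and propagation happens over events or change feeds, but the ownership question is the same: the service is the system of record, and the application databases are replaceable copies.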

Security here refers to the availability, integrity, and confidentiality of information. Done well, a limited set of main data services offers unified access to core data. Applications can still have their own local data that enables more elaborate functionality; just make sure it is not mission-critical. Separating data from functionality is one of the key ideas behind data fabric architectures. Replacing things becomes ridiculously easy compared to organically evolved application landscapes.

Embrace the change wisely

Let’s end with some practical questions to consider in the context of your organisation:

  • “How are we currently assessing the systemic risks of IT change in our enterprise?”

  • “Are there areas where we are inadvertently falling into maintenance mode due to legacy constraints, and what are the long-term business implications?”

  • “How are we ensuring data independence and preventing data from becoming a hostage in our cloud deployments?”

  • “What is our strategy for 'built to be replaced' architectures?”

About the Authors

Juho Jutila is a Business Architect with over 20 years of experience in building competitive strategies and leveraging both emerging and proven technologies to help global organisations succeed. Before Vuono Group, he worked as a consultant at Accenture, Columbia Road and Futurice, to name a few.

Antti Lammi is a Tech Leader who enjoys challenges and thrives in an innovative, lean and open work environment. He is currently the Head of Technology at Futurice, leading technology competence in Finland and responsible for over 200 developers and tech specialists.


If you enjoyed this, check out the previous articles in the series.
