
When COBOL developers built insurance data processing systems on mainframes back in the ’70s, few would have predicted that their code would still be running 50 years later. Yet many insurers still operate these systems today. Hardware and operating systems have been upgraded many times over, paper tape has given way to SSDs, new APIs provide integration with other systems, and plenty of the business logic has been rewritten. But the core of the original application is still recognizable.
With newer tech, though, we have grown accustomed to the idea that software inevitably becomes obsolete. With client/server, the original network computing model, the shortcomings were limited scalability and dependence on desktop operating systems subject to forced obsolescence every few years.
N-tier systems with web front ends emerged to solve these problems, but they depended on proprietary application servers and vendors whose fortunes waxed and waned. Monolithic architectures compounded the problem, making components difficult to upgrade without the impacts rippling across the entire solution.
Side effects of distributed computing were technology diversity and role specialization. IT departments needed specialists for the client technology, the application server, and the database. Coordinating groups of specialists required heavyweight processes and formal organizational structures to ensure alignment. IT departments became more complex as they organized around the increasingly complex technology portfolios they had to support.
Rethinking the “Good Ole Days”
Contrast this with early IT. Back then, just about everybody was a developer, and developers often doubled as analysts, working closely with the business units they supported. Many tested their own code. There were also computer operators and managers, but there were fewer roles overall and less specialization. What IT specialization did exist most often ran along business lines.
This post is not a nostalgia piece reminiscing about the good ole days (of which I was never a part, anyway). Mainframes were expensive and limited then, and they still are today. Much of the code running on them is a mess, poorly understood, and difficult to maintain. There are good reasons companies try to sunset these legacy systems over and over again.
What is the lifespan of modern replacements? Historically, “modern” systems last somewhere between eight and ten years before they are replaced at great expense: the prior solution is discarded and rebuilt from the ground up in the technology du jour. Replacement is wasteful, since the asset and the original investment are lost, even the parts that work well, not to mention the risk the rebuild carries. But what is the alternative? Modern technology that has aged into legacy status is even harder to support than the old legacy.
Looking Ahead—Beyond Application Turnover
Several trends suggest that we may be exiting the cycle of application turnover that we have been stuck in for 30 years. Whether this will pan out is hard to say. However, it is reassuring that some of these ideas are recycled from approaches that built durable applications in the past.
- Cloud platforms and services. Much like the mainframe, cloud platforms decouple applications from the underlying hardware and software, allowing older applications to take advantage of newer technology advances without requiring a rewrite.
- Standing, product-oriented teams. A return to standing teams that support a solution across its entire lifespan is a significant development emerging from DevOps. Much like the mainframe applications of the past, applications supported by standing teams tend to evolve faster than applications handed off to a separate maintenance team, which helps prevent deterioration. Standing team members are stakeholders in avoiding workarounds and shortcuts, paying down technical debt, and keeping the technology stack current.
- Business alignment. Aligning teams with the business rather than with a technology specialization is also reemerging. IT teams are becoming more federated, in some cases merging with the business units that prioritize their efforts.
- Better architectures. Modern architectures deemphasize traditional tiering in favor of vertical alignment. Microservice architectures encourage components that can be modified and deployed independently without impacting other parts of the solution.
- APIs, services, and service marketplaces. Services and APIs have come of age, allowing for highly decoupled architectures where components can be upgraded or replaced independently. The explosion of third-party services offered by software vendors allows applications to be composed rather than built from scratch. When new, better services become available, they can be swapped in with little disruption, as the sketch after this list illustrates.
- Refactoring support. Application refactoring has matured tremendously. Modern applications are built with high levels of automation, especially around testing, which allows them to be refactored aggressively at relatively low risk.
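To make the last two points concrete, here is a minimal Python sketch. All of the names (PaymentGateway, AcmePayGateway, settle_invoice, and so on) are hypothetical, invented for illustration rather than taken from any real vendor or library. The idea: business logic depends only on a small interface, so the third-party implementation behind it can be swapped without touching the callers, and an in-memory fake lets automated tests support aggressive refactoring.

```python
from abc import ABC, abstractmethod


class PaymentGateway(ABC):
    """The application codes against this contract, not a vendor SDK."""

    @abstractmethod
    def charge(self, account_id: str, amount_cents: int) -> str:
        """Charge an account and return a transaction reference."""


class AcmePayGateway(PaymentGateway):
    """Adapter for a fictional third-party service. If a better vendor
    appears, only this adapter is replaced; callers are untouched."""

    def charge(self, account_id: str, amount_cents: int) -> str:
        # Real code would call the vendor's API here.
        return f"acme-{account_id}-{amount_cents}"


class FakeGateway(PaymentGateway):
    """In-memory stand-in used by the automated test suite."""

    def __init__(self) -> None:
        self.charges: list[tuple[str, int]] = []

    def charge(self, account_id: str, amount_cents: int) -> str:
        self.charges.append((account_id, amount_cents))
        return f"fake-{len(self.charges)}"


def settle_invoice(gateway: PaymentGateway, account_id: str,
                   amount_cents: int) -> str:
    """Business logic depends only on the interface, so the service
    behind it can change without this function knowing."""
    return gateway.charge(account_id, amount_cents)


def test_settle_invoice() -> None:
    # Tests exercise the logic against the fake, giving the fast,
    # reliable feedback that makes aggressive refactoring low risk.
    fake = FakeGateway()
    ref = settle_invoice(fake, "acct-42", 1999)
    assert fake.charges == [("acct-42", 1999)]
    assert ref == "fake-1"
```

In a real system the seam would more likely be a service boundary (an HTTP API rather than a class), but the decoupling principle is the same: the contract stays put while the implementation behind it turns over.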
Does this mean we might see an end to enterprise software obsolescence? We are probably still some way off from that, but the trend is definitely in the right direction. It is encouraging that these developments better support the continuous evolution of applications rather than the “boom and bust” replacement life cycle that has dominated IT for the last 30 years.