I’m not an application architect, directly. I do have experience from the infrastructure side, where the “app guys” talk to the “business guys,” draw up all their requirements, then hand us a list of specifications that is way overkill for the use case and completely unrealistic in terms of expected costs and timeline. It’s always interesting, because year over year the various technologies they leverage change so rapidly that the technology they built out to be the new “base” technology is now obsolete. And as proof that the IT gods have a sense of irony, even the idea of “app guys” has fallen by the wayside. The new fancy “DevOps” model means the business guys talk directly to us…of course, their ideas on cost and schedule remain unrealistic. For this blog installment, I want to dive into a very specific piece of our assigned readings: the Gartner paper on application integration guidelines (G00245391), and even more specifically, the paper’s first recommendation.
Take a holistic approach to integration, and avoid focusing on individual point solutions.
This is much, much easier said than done. It’s probably easy if you’re building an app from the ground up, but that is a rare case. Instead, with integrations, you’re working with multiple systems, often supporting multiple internal factions (with their own divergent goals and workflows), where one system ends up being prioritized over another. I’ve also seen integrations struggle with simply the wrong technology. Seriously: because the CIO wrote on his blog that we’re going to leverage <X> technology, now everything must be built using <X>, performance and cost be damned. I think that’s why microservices are going to be such a big deal in the coming years. Now, all of a sudden, you CAN tailor the platform/middleware to a specific application without any of the overhead or cost issues that would have plagued such an endeavor in the past.
Another downside to perceived slights from the business side (the optics of putting specific tech above the wants of the guys who think they’re responsible for all the revenue) is the growth of so-called “shadow IT.” This is when people who know only enough to be dangerous spin up a POC system, show it to a bunch of execs, and start relying on it…with the expectation that IT has to fix it when it breaks. So, you may ask: what’s the harm? If the business wants to architect its own IT solutions (probably due to a sense that IT is too slow or too expensive, see above), why not just let them? Because you end up with an architecture with more hidden silos than NORAD.
Even though we spent hundreds of millions of dollars on the infrastructure…we were spending it anyway. That’s the fallacy in what most people think. When it’s being spent in departments and in divisions, the money is being spent. It’s just not being seen.
The above quote from Charlie Feld is especially relevant. Various departments enacting IT solutions all on their own is not saving you money; it’s disguising cost…and that’s not even factoring in the cost to maintain or fix “shadow IT” infrastructure, because the summer intern who set it up wasn’t thinking past a three-month timeline. This situation often arises because the business believes it can do it cheaper or better than IT. And maybe the dollar amount IS less in the short term, but it shows very little foresight.