Blog 02: Part 03 – Architecting the Cart Before the Horse

I’m not an application architect, strictly speaking. I do have experience from the infrastructure side, where the “app guys” talk to the “business guys” and draw up all their requirements, then hand us a list of specifications that are way overkill for the use case and completely unrealistic in terms of expected costs/timeline. It’s always interesting, because I feel that year over year, the technologies they leverage change so rapidly that the technology they built out to be the new “base” technology is now obsolete. And as proof that the IT gods have a sense of irony, even the idea of “app guys” has fallen by the wayside. The new fancy “DevOps” model means the business guys talk directly to us…of course their ideas on cost and schedule remain unrealistic. For this blog installment, I want to dive into a very specific piece of our assigned readings, specifically the Gartner paper on application integration guidelines (G00245391), and even more specifically the paper’s first recommendation:

Take a holistic approach to integration, and avoid focusing on individual point solutions.

This is much, much easier said than done. It’s probably easy if you’re building an app from the ground up, but that is a rare case. Instead, with integrations, you’re working with multiple systems, often supporting multiple internal factions (with their own divergent goals and workflows), where one system ends up being prioritized over the other. I’ve also seen integrations struggle simply because of the wrong technology. Seriously, because the CIO wrote on his blog that we’re going to leverage <X> technology, now everything must be built using <X>, performance and cost be damned. I think that’s why microservices are going to be such a big deal in the coming years. Now, all of a sudden, you CAN tailor the platform/middleware to a specific application without the overhead or cost issues that would have plagued such an endeavor in the past.

Another downside to perceived slights from the business side (the optics of putting specific tech above the wants of the guys who think they’re responsible for all the revenue) is the growth of so-called “shadow IT.” This is when people who only know enough to be dangerous spin up a POC system, show it to a bunch of execs, and start relying on it…with the expectation that IT has to fix it when it breaks. So, you may ask…what’s the harm? If the business wants to architect their own IT solutions (probably due to a sense that IT is too slow/too expensive, see above), why not just let them? Because you end up with an architecture with more hidden silos than NORAD.

IST 495 student?

 

Even though we spent hundreds of millions of dollars on the infrastructure…we were spending it anyway.  That’s the fallacy in what most people think.  When it’s being spent in departments and in divisions the money is being spent.  It’s just not being seen.

The above quote from Charlie Feld is especially relevant. Various departments enacting IT solutions all on their own is not saving you money, it’s disguising cost…and that’s not even factoring in the cost to maintain or fix “shadow IT” infrastructure, because the summer intern that set it up wasn’t thinking past a three-month timeline. But this situation often arises because the business believes it can do it cheaper/better than IT. And maybe the dollar amount IS less in the short term, but it shows very little foresight.

Blog 02: Part 02 – ERPs, CRMs, and MADs, Oh my!

Metaphor for post-merger app architecture.

ERPs and CRMs are great examples of the end goal of enterprise architecture: single, scalable systems that can be used as databases of record and provide a holistic set of operational services across the entire enterprise. They are wonderful in theory, but in practice, in my experience, it can be difficult to condense to a single system due to internal organizational changes as well as mergers, acquisitions, and divestitures (MAD). Several years ago I worked for an organization which had just been purchased by a larger company. The larger company was trying to get a foothold in the industry and did so by acquiring several smaller companies in the same industry in a short period of time. There were five companies total, but since we were the largest of the five, my physical site location became the headquarters of the new division, and the onus was on us to consolidate the MANY redundant systems we now had in the environment. See, a few of the recently acquired companies had made acquisitions and mergers of their own in the recent past, so all in all, there were something like 25 unique ERP systems in active use across the new business unit.

They ranged architecturally from Oracle, DB2, SAP, Solomon, and Baan to a hacked-together system using the 1996 version of Apple FileMaker, and my favorite: a 30-year-old system written in FORTRAN and running on DEC Alpha hardware that still occasionally throws errors that say “If you see this, call Bob.” Bob retired a long time ago. Most of you are probably slapping your heads; I know I did when I first discovered this. But to understand how the architecture got to that state, you have to remember the mad sprint of acquisitions. Just because there is a massive reorg going on doesn’t mean day-to-day operations can stop. Customers still require service. So it was an early game of taking inventory of everything we had, doing triage, and determining what would go and in what order. Let’s just say that the integration efforts in these cases were…not good.

Solomon vs Baan. The Marquis of Queensberry wasn’t born yet when these technologies came on the scene.

It wasn’t just the IT ecosystem that was fragmented, either. The rapid string of acquisitions left a lot of the personnel on edge, as many folks were let go to avoid duplicate roles. In the IT space, everyone thought that the way their legacy organization handled things was the ONLY way to do things, and even today there exists some animosity between employees who were around during the merger. There was a lot of fighting, a lot of strong, differing opinions. Now, almost 15 years later, they are down to a single ERP. Basically. There are still several of the original ERPs floating around out there in read-only mode, because some historical data wasn’t migrated over but is required for regulatory reasons. The more tenured folks still complain that the ERP system isn’t as good as their home-grown FileMaker solution, but there is peace, more or less.

Architecting integrations between this eclectic bunch of systems was incredibly challenging. In Part 03 of this week’s blog I’ll take a look at the architecture of individual applications.

Blog 02: Part 01 – The aaS Revolution

I have a decidedly love-hate relationship with Software/Platform as a Service offerings. I’m slightly biased, since my specialty for the last few years has been mainly infrastructure projects. I also believe that in many cases people use the term “cloud” without really knowing what it means. They may as well say “Æther,” because it’s this magical technology that just makes everything better.

Marketing department reacting to “the cloud”

My organization has been non-stop consolidating the various minor data centers/computer rooms sprinkled throughout our facilities worldwide into single regional strategic data centers, one each for the Americas, EMEA, and ASPAC regions. This even includes minor infrastructure one would typically find at a local site, e.g. file/print servers, domain controllers, TACACS/ACS appliances. The file/print servers are being replaced with cloud storage options; in our case we have a private instance of Box.com. The domain controllers are simply being removed outright. The security advantages of this are greatly touted, but uptime of network circuits is now absolutely critical. Business applications and supporting databases which may be running on site are being either 1) decommissioned, 2) migrated to AWS, or 3) hosted in a strategic DC. The strategic direction is pretty clear: shared services are the future. Very legacy shop floor systems notwithstanding (and I mean LEGACY. Like, DEC Alpha hardware running OpenVMS/VAX, a production database someone built in a version of Apple FileMaker that went EOL in 1998, and an eclectic bunch of control software for shop floor PLCs that still require physical licensing dongles), the directive is that all new applications should be built in AWS, barring any export control considerations.

I’d like to type out some thoughts on a few of the recommendations from the two Gartner readings on application architecture.

  • Build a core competency in the application architecture discipline – Strongly agree, and by extension, there needs to be a standard across the enterprise in terms of acceptable technologies with which to build solutions. Tech stacks are not a new thing, but they lag somewhat in being kept current. This is what causes “shadow IT” to happen, i.e. unsanctioned systems/processes built by people on the business side who know enough to be dangerous, or cobbled together over the summer by an intern. These “solutions” may be great proofs of concept, but they are rarely scalable and rarely meet any sort of security/risk compliance standards. (Interns, for whatever reason, LOVE to store company IP in “free” third-party cloud storage offerings we have no control over.)
  • Leverage separation of concerns and create discrete functional components to enable dynamic scale and improved stability, and to minimize component dependency – Modularity and seamless communication between systems is key. We have so many systems where the “integration” is a “swivel chair,” i.e. a physical person manually handling the data transfer. Standardization of data inputs and outputs makes swapping out system components much, much easier. For whatever reason, lots of off-the-shelf solutions love to use proprietary data formats, which should never be allowed to happen, IMO (see the first sketch after this list for what I mean by standardizing the interface).
  • Utilize in-memory data management’s improved performance to enable rapid, dynamic navigation of application capabilities – The bane of my existence is end-user complaints that something is “slow.” There are so many possible things that can contribute to an end user’s perception of latency, but in my experience everyone loves to blame the network, especially the application teams responsible for the “slow” application. In-memory data management can help, and with the application and database in EC2 and RDS instances respectively, it can actually be doable where in the past it would have been too expensive. But in many cases throwing CPU or RAM at something is simply masking the root cause. It’s amazing how lazy programmers can be; dirty fixes become permanent, etc. I’ve fixed so many “network” issues simply by having someone rewrite a DB query to NOT pull in a few million superfluous records beyond what was actually needed (the second sketch after this list shows the kind of before/after rewrite I mean).
  • Application architects must work collaboratively with enterprise, technical, business and information architects as part of the organization’s solution architecture efforts to ensure that the systems built conform to the requirements, principles and models that support the enterprise’s change agenda, as captured in the gap plan and transition road map – This right here is the reason why EA initiatives fail. Too often the business and IT use the “college students working on a group project” framework, where communication is kept to the bare minimum, everyone goes off and does their own piece of the overall problem, and then everything is cobbled together at the end right before the deadline. Not only is it critical for all stakeholders to be engaged throughout the process, it is also critical that they speak the same language. I’ve sat in meetings where representatives from both groups go to battle over essentially the same outcome, just phrased in a different way.
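
To make the point about standardized inputs and outputs concrete, here is a minimal sketch of the adapter idea: each source system gets a small adapter that converts its proprietary export into one canonical record, so downstream consumers never have to care which vendor produced the data. The record fields and the legacy export layout below are hypothetical, not taken from any system I’ve described.

```python
import json
from dataclasses import dataclass, asdict
from datetime import date

# Canonical record every downstream consumer reads, regardless of which
# source system produced the data. Field names here are hypothetical.
@dataclass
class PurchaseOrder:
    order_id: str
    customer: str
    total_usd: float
    ordered_on: str  # ISO 8601 date string

def from_legacy_export(row: dict) -> PurchaseOrder:
    """Adapter for one (made-up) proprietary export format.

    Each source system gets its own small adapter like this; downstream
    code only ever sees PurchaseOrder, so swapping a component out means
    writing one new adapter, not touching every consumer."""
    return PurchaseOrder(
        order_id=row["ORDNO"].strip(),
        customer=row["CUSTNAME"].title(),
        total_usd=float(row["TOT_AMT"]),
        ordered_on=date(int(row["YR"]), int(row["MO"]), int(row["DY"])).isoformat(),
    )

if __name__ == "__main__":
    legacy_row = {"ORDNO": " A-1042 ", "CUSTNAME": "ACME TOOLING",
                  "TOT_AMT": "1299.50", "YR": "2016", "MO": "4", "DY": "22"}
    # Emit the canonical record as plain JSON rather than a vendor format.
    print(json.dumps(asdict(from_legacy_export(legacy_row)), indent=2))
```

The point isn’t the specific fields; it’s that the proprietary weirdness gets quarantined at the edge instead of leaking into every integration.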
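
And on the “it must be the network” front, here is a rough before/after sketch of the kind of query rewrite I’m talking about, using an in-memory SQLite table as a stand-in for the real database (table and column names are made up):

```python
import sqlite3

# Tiny in-memory stand-in for a real order database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, "EMEA" if i % 3 else "AMER", i * 1.5) for i in range(10_000)])

# Before: pull every row back to the application, then filter in code.
# Against a real table with millions of rows, this is the query that
# gets blamed on "the network."
rows = conn.execute("SELECT id, region, total FROM orders").fetchall()
emea_total_slow = sum(total for _, region, total in rows if region == "EMEA")

# After: let the database do the filtering and aggregation, so only a
# single row ever crosses the wire.
(emea_total_fast,) = conn.execute(
    "SELECT SUM(total) FROM orders WHERE region = 'EMEA'"
).fetchone()

assert round(emea_total_slow, 2) == round(emea_total_fast, 2)
print(emea_total_fast)
```

Same answer either way; the only difference is how much data gets dragged across the network before someone opens a ticket about latency.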