Blog 04: Part 02 – I&O Leader by Committee

In my last post a few days ago I alluded to being assigned to a new project, and the Gartner reading on the things you should do during your first 100 days as a new Infrastructure & Operations leader (G00201291) is turning out to be pretty topical.  GE is an interesting organization to work within due to its size and the number of industries it operates in.  Building gas or steam turbines may be similar to building an aircraft engine, but very different from building a locomotive, MRI machine, or water treatment plant.  Not to mention the sales and service aspects; supporting these products is very different as well.  Because of this, EA initiatives were handled mainly at the level of these so-called “Tier 1 Businesses.”  This meant that GE Healthcare could tailor their architecture towards their specific business needs and goals, and GE Power could do the same.  The net effect, though, is a loose confederacy of architectures when looking at GE as a whole.  And I’m being kind.  There are certain systems, e.g. HR and some IT shared services such as email, collaboration, and end user support, that are leveraged across the enterprise, but for the most part each of the high level businesses operates within its own silo.

So why is this important?  Well, within the last two years GE Digital was created, and since then all IT personnel formerly working for the tier 1, tier 2, and tier 3 businesses have been reorganized into this new Information Technology/Operations Technology business unit.  IT is now a completely horizontal function.  And the transformation, which also came with a voluntary job reduction package that a lot of employees took advantage of, was a bit disruptive in terms of our current ability to support a lot of these legacy systems.  There are now a few cases where the only folks who were familiar with an application’s architecture are gone…and knowledge transfer was minimal or non-existent.  So what does all of this rambling have to do with this week’s topic?

With the creation of the new centralized, horizontal IT organization, coupled with the removal of IT personnel within the “businesses,” the main goal is to create one single architecture across all of these previously siloed tier 1 businesses.  But while GE Digital does have a CIO (and a CEO!), the actual architecture designs are being left to the EA team to design, the solutions architecture team to build, and the business folks to just kind of…accept, I guess.  It is going to be an interesting next few months as this gets fleshed out; I know for a fact that the businesses that formerly had IT teams of their own are feeling that loss, so this is just as much a shift in culture as it is a shift in technical architecture.  Rereading this post, I didn’t do a very good job of tying my story back to the reading, but the key takeaway I want to leave here is that there is not always a single IT/OT leader in a position to make decisions all on their own.

Blog 04: Part 01 – Getting Stuck in the Weeds

I want to remind everyone that my background is in IT, specifically the infrastructure (boring!) side, so when the Gartner reading (Robertson, G00160635) stresses the importance of not concentrating solely on enterprise technology architecture, I can’t help but agree that this is a pretty common mistake of EA teams.  Part of it isn’t our fault.  In every professional organization that I’ve been a part of, the EA team is part of the IT organization.  The architects are usually highly technical people with education in IT disciplines.  As such, while acknowledging that they are creating a holistic architecture, there is a tendency to overly focus on the “lower abstraction” levels of the design.  Arguing over an ISP circuit versus an MPLS circuit, for example.  Yes, they’re different, cost different amounts, and have implications for the rest of the design…but at the end of the day either one only represents a communications channel.  Choosing one or the other is a stupid hill to die on, in the grand scheme of things, especially if you are going to need that political capital later on.

This is all exacerbated by overly technical conversations driving away stakeholders from the business side.  Literally.  I have been on calls where, over a few sessions, the non-IT folks simply stopped coming because nothing within their scope was being discussed.  Even if there is a highly technical facet of the architecture that MUST be discussed in a cross-functional meeting, you have to put the technology in terms that your business partners can understand, which usually means time and money.

Any “moratorium on ETA” approach will make people angry. Get used to it. EA isn’t about taking the path of least resistance. Persevere. But also challenge yourself and others to do things differently than before.

I had this situation happen to me just this morning.  I’ve just been assigned a new project: assisting in migrating several applications off infrastructure that is owned by a business unit that has been divested.  The TSA clock is ticking.  On the call were a few IT project managers, solutions architects, enterprise architects, and all of the executive sponsors from the business side for the ~15 applications that have to move.  The conversation very quickly got into how Chef scripts could be written to move some of these apps over to AWS…and you could just tell that 75% of the people on the call were mentally checking out.  Fact is, those folks DO NOT CARE about how their applications work or what needs to happen to migrate them, so long as they are moved without disruption or incident and without breaking the bank.

Blog 03: Part 03 – Disaster Recovery

Going to depart from my original plan for entry three this week to talk about disaster recovery, since I experienced a hard drive failure on my work laptop on Wednesday, and DR is an important aspect of data architecture, so it still fits thematically with the other entries.  First though, some context.  I would say that my organization has an unstructured data problem.  We have tons and tons of unclassified data sitting on end user hard drives, network file shares, and cloud storage environments.  Tons of it.  Duplicate data.  Incorrect data.  Corrupt data.  End users who bother to back up their personal data do it poorly: it’s not uncommon for someone’s personal file share to be filled with manual redundant copies, e.g. Jan, Jan-Feb, Jan-Mar, Jan-Apr, etc.  Some of them even encrypted their data, which sounds good on the face of it, but they didn’t use a properly managed encryption system, so not if but WHEN they lose or forget their password, nobody is able to unlock it for them.  The rest of the user base doesn’t back up at all.

Nobody cares about backups until they need them…then it’s always IT’s fault that they don’t exist, and everyone is scrambling, and paying a lot of money, to recover the data.  I think my favorite story was a sales guy who spilled wine on his laptop.  The hard drive was toast, but we have an agreement with data recovery vendors who can actually recover data from fairly destroyed drives, as long as we pay through the nose.  The sales guy insisted he had important data that needed recovery, so away the drive was sent.  When the ~$8,000 bill arrived, it prompted some questions: he had to justify the expense.  Turns out, the data he was after was for his fantasy football team.  Whoops.

For the last several years, end user PCs have been backed up to the cloud (much like our servers).  It happens automatically, many times a day, and incrementally: only files that have changed are backed up.  And that’s great for the end user, but all of these backups of (questionably useful) data take up bandwidth, and bandwidth isn’t free.  In fact, due to the expense of bandwidth plus the cost of the backup service itself increasing from $4 per user per month to $9.50 per user per month, the company made the decision to end the cloud backup offering.  At the same time, for data loss prevention reasons, all writing to externally mounted volumes is now blocked as well.  The only means for end users to back up their data is by manually copying it to internal cloud storage services designed for collaboration, not archival, purposes.  This has generated a great deal of animosity towards IT; however, it has saved quite literally millions of dollars, since we have ~300K employees globally.
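A quick back-of-the-envelope sketch shows how it gets to “millions” fast.  This uses only the per-user price and headcount figures mentioned above; bandwidth isn’t even counted, so if anything it understates the savings:

```python
# Rough back-of-the-envelope estimate of the annual backup spend that went away.
# The per-user price and headcount come from the post above; bandwidth costs are
# left out entirely, so this is a floor, not a precise number.
users = 300_000                  # ~300K employees globally
price_per_user_month = 9.50      # service cost after the increase from $4.00

service_cost_per_year = users * price_per_user_month * 12
print(f"Backup service alone: ${service_cost_per_year:,.0f} per year")
# -> Backup service alone: $34,200,000 per year
```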


Blog 03: Part 02 – Mastering the Data

After last post’s discussion of some of the pitfalls of leaping into a big data initiative unprepared, I’d like to focus now on some of the less technical reasons why an organization may struggle with data management.  It’s the annoying little brother of Big Data and the subject that makes everyone’s eyes glaze over when it’s brought up: Governance!  In this case, Master Data Management (MDM).  At its core, MDM is about maintaining a single master record, a single point of reference, that links all enterprise data together.  This is critical, especially in larger organizations with lots and lots of (dare I say BIG) data, where the data is shared across different business functions and discrepancies in the data can be problematic, especially with non-integrated or poorly integrated applications.
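To make the “single point of reference” idea a little more concrete, here is a minimal sketch of a golden record linking the same customer across a few siloed systems.  The systems, IDs, and fields are all made up for illustration:

```python
# Minimal, hypothetical illustration of the MDM "golden record" idea: several
# siloed systems each hold their own version of the same customer, and the
# master record is the single point of reference that links them all together.

# The same customer as three different source systems see it (made-up data).
source_records = {
    "erp": {"cust_id": "C-1001", "name": "ACME Corp.",    "city": "Cincinnati"},
    "crm": {"cust_id": "ac-17",  "name": "Acme Corp",     "city": "Cincinnatti"},  # note the typo
    "mes": {"cust_id": "0042",   "name": "ACME CORP INC", "city": None},
}

# The master record: one canonical version of the attributes, plus cross-references
# back to each source system so the silos stay linked instead of drifting apart.
master_record = {
    "master_id": "MDM-000123",
    "name": "ACME Corp.",
    "city": "Cincinnati",
    "source_ids": {system: rec["cust_id"] for system, rec in source_records.items()},
}

print(master_record["source_ids"])
# {'erp': 'C-1001', 'crm': 'ac-17', 'mes': '0042'}
```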

The Gartner reading (G00214129) does a great job of highlighting the pitfalls of paying lip service to MDM.  The most common way I’ve seen this happen is putting the onus of governance on the data entry folks.  Gartner states the issues with that are:

  • Loss of morale as some users leave the team. IT shared services is not a desirable career move for a lot of “power users” that could have seen line management as their future career path.
  • Realization that the movement of “governance” from line of business to shared services creates a vacuum in business as those users are removed from any responsibility for data governance. The shared services team “loses touch” with business and so “governance” starts to erode and break down.
  • The end state results in the accidental separation of “data entry” (which works well in shared services) and data stewardship that breaks down, since the expectation is that shared services can execute this, when in fact they cannot, so it does not happen.

For bullet two, this happens even faster than most people think, because oftentimes the shared service that data entry is farmed out to is considered IT, or is contracted out entirely; it’s severed from the business from the outset.

The key takeaway here is that a successful BIG DATA program isn’t simply about storing large amounts of data on silicon. Data should be considered an enterprise resource and it must be maintained so it remains an asset, not a liability, to the business.

Blog 03: Part 01 – BIG DATA

“Big Data” is one of those terms that I rank up there with “Cloud” and “Internet of Things” as one of those IT buzzwords people like to throw around to sound like they’re actually saying something of substance.  It’s true that one of the great use cases of computers is the organization (and analysis) of data.  But now data is BIG.  We keep it in a lake.  Or a warehouse.  And we have entire teams of analysts whose sole job is generating business value from interpreting the data…but as it turns out, that is much easier said than done.  Because it is much, much easier to collect and store data than it is to draw meaningful conclusions from it.

IBM has a nice graphic here on the so-called “Four Vs” of data and I agree this makes for a nice alliterative categorization.  I’ll talk about each in turn.

  • Volume – As I mentioned in the intro, in terms of technical expertise and physical technology, it is easier and cheaper to store data today than ever, and organizations seem to choose the digital pack rat methodology: when in doubt, keep it!  Even at the individual level, people hoard data…in this case unstructured data, but data nonetheless.  Despite my organization having a fairly draconian document retention policy, it is rarely enforced, and on average each employee has about 3GB worth of data per year of service.  But just like no one can ever pull out that file they KNOW they have SOMEWHERE in a timely manner, enterprise data is often at the mercy of poorly written database queries or business analysts who must have been sleeping during STAT 200.
  • Variety – All data is equal, but some is more equal than others.  And to manage this inequality it is important to have a data classification model to assist in prioritizing how data should be accessed and secured.  The general rule of thumb, from least to most important, is: External, Internal, Confidential, and Restricted.  Equifax should take note.
  • Velocity – This is one of the chief contributors to the Big Data problem: the speed with which data is created.  BAs work at a pretty high level, generating dashboards that aggregate and distill everything down to a few charts or bullet points, but very often the devil is in the details.  Lots of shop floor equipment now comes standard with logging capability and there is a ton of operating data available, but much of it is fluff.  Who cares if your machine can log operating temperature to within a tenth of a degree when temperature doesn’t impact any production metric?
  • Veracity – This is the killer.  The trouble with a better-“save”-than-sorry mentality towards data retention is that very often you have conflicting sources for the same data.  This is especially problematic when integrating multiple systems together: which system shall be the source of record?  And how do we reconcile the rest?  (A minimal sketch of that reconciliation follows this list.)
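Here is a minimal sketch of the “which system wins?” question from the Veracity bullet, assuming, purely for illustration, that a precedence order has already been agreed on:

```python
# Hypothetical sketch of the "which system wins?" problem: the same attribute
# lives in multiple systems, so an agreed-upon precedence order (the system of
# record first) decides which value survives when they disagree.

SOURCE_PRECEDENCE = ["erp", "crm", "legacy_filemaker"]   # most to least authoritative

def reconcile(values_by_system):
    """Return the value from the most authoritative system that actually has one."""
    for system in SOURCE_PRECEDENCE:
        value = values_by_system.get(system)
        if value:                        # skip systems with missing or blank data
            return value
    return None

ship_to = reconcile({
    "erp": None,                                  # the system of record is missing the field
    "crm": "1 Neumann Way, Cincinnati, OH",
    "legacy_filemaker": "1 Neumann Way, Evendale, OH",
})
print(ship_to)   # falls back to the CRM value: '1 Neumann Way, Cincinnati, OH'
```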

Blog 02: Part 03 – Architecting the Cart Before the Horse

I’m not an application architect, directly.  I do have experience from the infrastructure side, where the “app guys” talk to the “business guys” and draw up all their requirements, then hand us a list of specifications that are way overkill for the use case and completely unrealistic in terms of their expected costs/timeline.  It’s always interesting, because year over year the technologies they leverage seem to change so rapidly that the technology they built out to be the new “base” is obsolete almost immediately.  And as proof that the IT gods have a sense of irony, even the idea of “app guys” has fallen by the wayside.  The new fancy “DevOps” model means that the business guys talk directly to us…and of course their ideas on cost and schedule remain unrealistic.  For this blog installment, I want to dive into a very specific piece of our assigned readings, specifically the Gartner paper on application integration guidelines (G00245391), and even more specifically the paper’s first recommendation.

Take a holistic approach to integration, and avoid focusing on individual point solutions.

This is much, much easier said than done.  It’s probably easy if you’re building an app from the ground up, but that is a rare case.  Instead, with integrations you’re working with multiple systems, often supporting multiple internal factions (with their own divergent goals and workflows), where one system ends up being prioritized over the other.  I’ve also seen integrations struggle simply from the wrong technology.  Seriously: because the CIO wrote on his blog that we’re going to leverage <X> technology, now everything must be built using <X>, performance and cost be damned.  I think that’s why microservices are going to be such a big deal in the coming years.  Now, all of a sudden, you CAN specifically tailor the platform/middleware to a specific application without the overhead or cost issues that would have plagued such an endeavor in the past.

Another downside to perceived slights from the business side (the optics of putting specific tech above the wants of the guys who think they’re responsible for all the revenue) is the growth of so-called “shadow IT.”  This is when people who only know enough to be dangerous spin up a POC system, show it to a bunch of execs, and start relying on it…with the expectation that IT has to fix it when it breaks.  So, you may ask…what’s the harm?  If the business wants to architect their own IT solutions (probably due to a sense that IT is too slow/too expensive, see above), why not just let them?  Because you end up with an architecture with more hidden silos than NORAD.

IST 495 student?


Even though we spent hundreds of millions of dollars on the infrastructure…we were spending it anyway.  That’s the fallacy in what most people think.  When it’s being spent in departments and in divisions the money is being spent.  It’s just not being seen.

The above quote from Charlie Feld is especially relevant.  Various departments enacting IT solutions all on their own is not saving you money; it’s disguising cost…and that’s not even factoring in the cost to maintain or fix “shadow IT” infrastructure, because the summer intern that set it up wasn’t thinking past a three-month timeline.  But this situation often arises because the business believes it can do it cheaper/better than IT.  And maybe the dollar amount IS less in the short term, but it shows very little foresight.

Blog 02: Part 02 – ERPs, CRMs, and MADs, Oh my!

Metaphor for post-merger app architecture.

ERPs and CRMs are great examples of the end goal of enterprise architecture: single, scalable systems that can be used as databases of record and provide a holistic set of operational services across the entire enterprise.  They are wonderful in theory, but in practice, in my experience, it can be difficult to condense down to a single system due to internal organizational changes as well as mergers, acquisitions, and divestitures (MAD).  Several years ago I worked for an organization which had just been purchased by a larger company.  The larger company was trying to get a foothold in the industry and did so by acquiring several smaller companies in the same industry in a short period of time.  There were five companies total, but since we were the largest of the five, my physical site location became the headquarters of the new division and the onus was on us to consolidate the MANY redundant systems we now had in the environment.  See, a few of the recently acquired companies had made acquisitions and mergers of their own in the recent past, so all in all there were something like 25 unique ERP systems in active use across the new business unit.

They ranged architecturally across Oracle, DB2, SAP, Solomon, Baan, a hacked-together system using the 1996 version of Apple FileMaker, and my favorite: a 30-year-old system written in FORTRAN and running on DEC Alpha hardware that still occasionally throws errors that say “If you see this, call Bob.”  Bob retired a long time ago.  Most of you are probably slapping your foreheads; I know I did when I first discovered this.  But to understand how the architecture got to that state, you have to remember the mad sprint of acquisitions.  Just because there is a massive reorg going on doesn’t mean day-to-day operations can stop.  Customers still require service.  So it was an early game of taking inventory of everything we had, doing triage, and determining what would go and in what order.  Let’s just say that the integration efforts in these cases were…not good.

Solomon vs Baan. The Marquis of Queensberry wasn’t born yet when these technologies came on the scene.

It wasn’t just the IT ecosystem that was fragmented, either.  The rapid string of acquisitions left a lot of the personnel on edge, as many folks were let go to avoid duplicate roles.  In the IT space, everyone thought that the way their legacy organization handled things was the ONLY way to do things, and there exists even today some animosity between employees who were around during the time of the mergers.  There was a lot of fighting, a lot of strong, differing opinions.  Now, almost 15 years later, they are down to a single ERP.  Basically.  There are still several of the original ERPs floating around out there in read-only mode, because some historical data wasn’t migrated over but is required for regulatory reasons.  The more tenured folks still complain that the ERP system isn’t as good as their home-grown FileMaker solution, but there is peace, more or less.

Architecting integrations between this eclectic bunch of systems was incredibly challenging.  In Part 03 of this week’s blog I’ll be taking a look at the architecture of individual applications.

Blog 01: Part 03 – Is Amazon Taking Over the World?

I started off this blog post by typing “Is Amazon Taking Over the World?” and it turns out there is quite a bit of debate on this.  Some analysts say yes, others say no.  But one thing these articles all get wrong is the entire context of what Amazon seems to be doing.  My analysis?  Yeah, they’re taking over…but not how analysts seem to think.

Most of the articles seem to be talking about how online retailers are causing brick and mortar stores to become extinct.  This DOES appear to be happening, but this is not the greatest threat.  For example, everyone is making a big deal about the recent Amazon purchase of Whole Foods.  “Oh no!” everyone says, “Now Amazon is going to destroy supermarkets.  Now I’ll only be able to buy bread, milk, and eggs by having them directly delivered to my doorstep via drone!”  No, I don’t think Amazon actually cares about food delivery.  Or books.  Or any of the other products they sell.  The real money is in their cloud platform, Amazon Web Services.  Amazon doesn’t care about selling you products directly; they want to be EVERY OTHER business’s back end.

There are so many small businesses out there that are hamstrung in the IT services and logistics department.  They simply aren’t large enough to have IT staff.  They probably don’t deal in volumes large enough to warrant shipping contracts that provide the same value as “Free Two Day Shipping.”  But Amazon is going to sell them that capability.  Soon small businesses will be able to handle all of their back office operations via Amazon.  Consumers will be able to purchase their products from far away and have them show up at their doorstep at no additional cost.  All these articles reference some future battle between Amazon and Walmart, likening it to a battle of the titans.  Big-box stores have spent the last 25 years pricing mom and pop out of business, leveraging their size and infrastructure to do so.  It won’t be Amazon directly that destroys Walmart; it will be thousands and thousands of small businesses who are able to compete again.

I don’t mean to sound anti-Amazon here.  If anything, it’s admiration for how they’ve positioned themselves to corner the cloud market.  They’re already not the ONLY cloud player; Microsoft’s Azure offering is a real competitor.  I think in Microsoft’s case they’re currently a bit too focused on the front end with Office, Skype for Business, and OneDrive, but I also expect more players to emerge to challenge Amazon in this space.


Thoughts on Micro-services: GE Predix

The Russinovich blog on Azure Microservices did a good job at outlining the concept, but I wanted to take a little time to put on my company hat and talk about GE Predix.

Over the last few years, General Electric has been undergoing pretty rapid change.  They’ve shed their financial services business, spinning off the retail banking division as its own independent company, Synchrony Financial, and selling everything else to Wells Fargo.  They also divested their consumer appliances business last year and, more recently, their Oil & Gas division to Baker Hughes.  So what’s going on?  Well, the stated goal is to be a top software company in the world by 2020, and their plan to get there is to dominate what they call the “industrial internet” with a service known as Predix.

At a high level, Predix is a framework consisting of a portfolio of microservices, available in an “App Store” of sorts, that are specifically tailored for industrial applications.  The advantage of Predix is its heavy emphasis on data and analytics, as well as robust security, which stems from the fact that the platform was designed from the ground up to work together.  There’s nothing particularly groundbreaking here about the technology, but the idea is to leverage GE’s already sizable installed base in the “operations technology” space, where there is currently very little competition.


Blog 01: Part 02 – How I Learned to Stop Worrying and Love the Disruption

Everyone’s looking to be the next disruptor.  The next big thing.  Often, though, it’s not merely envisioning the next great technology, but also being in the right place at the right time.  There’s a degree of randomness to it.  I’m reminded of the huge failure of the Apple Newton, which was released almost a decade prior to the Palm Pilot.  IBM’s OS/2 losing out to Windows 95.  Ars Technica did a great write-up on how and why that battle shook out; it’s great to read over your coffee some morning.  The iPod wasn’t the first portable MP3 player by any stretch, but certainly the most popular.  Head back to the late ’70s/early ’80s and you had the shift from time-sharing mainframes to PCs.  That’s disruptive.  And in that last example, it’s ironic that with cloud computing and “as a service” offerings we’ve come full circle.

But there is no denying the impact the “cloud” has been making as of late, and it’s a bit of a wake-up call for an infrastructure-oriented IT professional, given the proliferation of application, platform, and even infrastructure as a service options now available.  In my particular situation, a lot of my day to day involves assisting business partners with migrating their applications over to EC2/RDS instances in Amazon’s cloud.  This has been going on for the last few years, but has really been building up a head of steam in recent months.  This was the chief driver for my entering the EA MPS program.  True, there will always be jobs for knowledgeable IT infrastructure people, but they are becoming increasingly pigeonholed as it becomes cheaper for organizations to purchase these services from third parties.  I’m betting it will be easier for someone with a strong technical background to learn the business side of things, rather than the converse.

“I’ve come up with a set of rules that describe our reactions to technologies:
1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you’re thirty-five is against the natural order of things.”

-Douglas Adams, The Salmon of Doubt

There was a great set of readings on cloud in this lesson; I especially enjoyed the anti-cloud pieces.  Not because I myself am anti-cloud per se (I’m only just turning 35 this November, so I don’t yet have the animosity some of my more senior colleagues harbor for it), but because the cloud simply is NOT the magic bullet, the solution for everything.  And the subscription model is NOT always cheaper.  I’m going to go through the Gartner Top 10 SaaS Myths article and talk about those points and why I agree.

Myth 03: It’s Cheaper – Probably the big one, right?  Cost is king, after all.  But in the case of cloud, this is not always true.  For example, in our portfolio we have some applications which I will describe as “very legacy” (to be charitable).  These applications have dependencies on older, EOL operating systems and often run on hardware that may be a decade out of vendor support.  You may recognize this as a horrible situation, but these applications exist in that odd state where they are both too critical to the business to decommission, yet not important enough to spend any money to maintain or upgrade.  And yet, in the zeal for migrating everything without a hardware dongle over to the cloud, application owners were SHOCKED to find out that they were now incurring charges for these systems which previously were “FREE.”  You and I might see this as simply paying down technical debt, but I know the appearance of these bills broke the brains of quite a few folks from the business side of the shop.  Another thing that has happened with some of our newer systems: some of the savings have been eaten up by mismanagement of cloud resources.  For example, I can poke around any random AWS account and find, in some cases, several hundred gigs of orphaned storage: unattached EBS volumes and forgotten S3 buckets.  Sure, it’s only a few cents per GB per month in charges, but in the aggregate we’re talking a sizable amount.
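Finding that orphaned storage isn’t even hard.  Here’s a hedged sketch of the kind of check I mean (this is not our actual tooling, and it assumes AWS credentials are already configured for the account in question):

```python
# Hedged sketch, not our actual tooling: use boto3 to total up unattached EBS
# volumes in one region, the kind of orphaned storage that quietly eats cloud
# savings. Assumes AWS credentials are already configured for the account.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Volumes whose status is "available" are not attached to any instance.
paginator = ec2.get_paginator("describe_volumes")
orphaned_gib = 0
for page in paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}]):
    for volume in page["Volumes"]:
        orphaned_gib += volume["Size"]             # Size is reported in GiB

# ~$0.10 per GiB-month is used purely as an illustrative gp2-era price.
print(f"{orphaned_gib} GiB unattached, roughly ${orphaned_gib * 0.10:,.2f}/month")
```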

Myth 10: It’s All Integrated – Now, I know the Gartner article is specifically speaking to ERP systems as a service and I’m speaking more generally, but it’s not good to EVER assume integrations.  I know of several cases where an application has been migrated to AWS, but the web front end and database are still running on prem.  Depending on the type of integration, I would say straddling the cloud makes things even tougher.  For example, I know of one integration between an ERP system and the shipping PCs in a warehouse that is set up like this: an ERP system running on Windows, hosted internally in a data center in Georgia, has a script which every hour exports shipping data as a CSV file to a Samba share on a Unix box in a data center in Cincinnati.  The shipping PCs, again running Windows, map to this share, and the FedEx shipping software is configured to pull the data from the CSV file and print labels.  It’s kludgy as hell.  But it works.  (The backstory to WHY it’s set up this way is that the shipping PCs used to reach directly into a file share for the data, but those PCs were managed by FedEx, and the shippers liked to surf some of the more questionable sections of the Internet in their downtime, which let a rather nasty piece of software crawl in and take down several domain controllers one Friday afternoon.)  And since it works, there is zero interest in fixing it.  But now it’s causing some issues with the pressure to move everything to the cloud.  Again, you can’t avoid paying that technical debt forever.
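For flavor, here is a purely illustrative sketch of the kind of hourly export described above.  The DSN, query, table, and share path are all hypothetical; the point is just how thin the “integration” really is:

```python
# Purely illustrative sketch of the hourly "integration" described above: dump
# today's shipments from the ERP database to a CSV on the network share that the
# shipping PCs have mapped. The DSN, query, and share path are all hypothetical.
import csv
import pyodbc   # assumes an ODBC driver for the ERP's database is installed

SHARE_PATH = r"\\cin-unix-01\shipping\export\shipments.csv"   # the Samba share

conn = pyodbc.connect("DSN=erp_prod;UID=svc_ship;PWD=********")
cursor = conn.cursor()
cursor.execute(
    "SELECT order_no, ship_to_name, address, city, state, zip "
    "FROM shipments WHERE ship_date = CAST(GETDATE() AS DATE)"
)

with open(SHARE_PATH, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cursor.description])   # header row
    writer.writerows(cursor.fetchall())

# A scheduler kicks this off every hour; the FedEx software on the shipping PCs
# just reads whatever CSV it finds on the mapped drive and prints labels.
```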

Despite these edge cases, which I’m sure any company that has existed for more than a few years will have, the cloud model is certainly the future.


Blog 02: Part 01 – The aaS Revolution

I have a decidedly love-hate relationship with Software/Platform as a Service offerings.  I’m slightly biased, since my specialty for the last few years has mainly been infrastructure projects.  I also believe that in many cases people use the term “cloud” without really knowing what it means.  They may as well say “Æther,” because it’s this magical technology that just makes everything better.

Marketing department reacting to “the cloud”

My organization has been non-stop consolidating the various minor data centers and computer rooms sprinkled throughout our facilities worldwide into single regional strategic data centers, one each for the Americas, EMEA, and ASPAC regions.  This even includes minor infrastructure one would typically find at a local site, e.g. file/print servers, domain controllers, TACACS/ACS appliances.  The file/print servers are being replaced with cloud storage options; in our case we have a private instance of Box.com.  The domain controllers are simply being removed outright.  The security advantages of this are heavily touted, but now uptime of network circuits is absolutely critical.  Business applications and supporting databases which may be running on site are being either 1) decommissioned, 2) migrated to AWS, or 3) hosted in a strategic DC.  The strategic direction is pretty clear: shared services are the future.  Very legacy shop floor systems notwithstanding (and I mean LEGACY.  Like, DEC Alpha hardware running OpenVMS, a production database someone built in a version of Apple FileMaker that went EOL in 1998, and an eclectic bunch of control software for shop floor PLCs that still require physical licensing dongles), the directive is that all new applications should be built in AWS, barring any export control considerations.

I’d like to type out some thoughts on a few of the recommendations from the two Gartner readings on application architecture.

  • Build a core competency in the application architecture discipline – Strongly agree, and by extension there needs to be a standard across the enterprise in terms of acceptable technologies with which to build solutions.  Tech stacks are not a new thing, but they lag somewhat in being kept current.  This causes “shadow IT” to happen, i.e. unsanctioned systems/processes built by people on the business side who know enough to be dangerous, or cobbled together over the summer by an intern.  These “solutions” may be great proofs of concept, but they are rarely scalable and rarely meet any sort of security/risk compliance standards.  (Interns for whatever reason LOVE to store company IP in “free” third-party cloud storage offerings we have no control over.)
  • Leverage separation of concerns and create discrete functional components to enable dynamic scale and improved stability, and to minimize component dependency – Modularity and seamless communication between systems is key.  We have so many systems where the “integration” is a “swivel chair,” i.e. a physical person manually handling the data transfer.  Standardization of data inputs and outputs makes swapping out system components much, much easier.  For whatever reason, lots of off-the-shelf solutions love to use proprietary data formats, which should never be allowed to happen, IMO.
  • Utilize in-memory data management’s improved performance to enable rapid, dynamic navigation of application capabilities – The bane of my existence is end user complaints that something is “slow.”  There are so many possible things that can contribute to an end user’s perception of latency, but in my experience everyone loves to blame the network.  Especially the application teams responsible for the “slow” application.  In-memory data management can help, and with the application and database in EC2 and RDS instances respectively, it’s actually doable where in the past it would have been too expensive.  But in many cases throwing CPU or RAM at something is simply masking the root cause.  It’s amazing how lazy programmers can be; dirty fixes become permanent, etc.  I’ve fixed so many “network” issues simply by having someone rewrite a DB query to NOT pull in a few million more records than were actually needed.  (A minimal before/after sketch of that kind of fix follows this list.)
  • Application architects must work collaboratively with enterprise, technical, business and information architects as part of the organization’s solution architecture efforts to ensure that the systems built conform to the requirements, principles and models that support the enterprise’s change agenda, as captured in the gap plan and transition road map – This right here is the reason why EA initiatives fail.  Too often the business and IT use the “college students working on a group project” framework, where communication is kept to the bare minimum, everyone goes off and does their own piece of the overall problem, and everything is cobbled together at the end right before the deadline.  Not only is it critical for all stakeholders to be engaged throughout the process, but also that they are speaking the same language.  I’ve sat in meetings where representatives from both groups go to battle over essentially the same outcome, just phrased in different ways.
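And since I brought up the “slow network” query fix above, here is a minimal before/after sketch of what that usually looks like, demonstrated on a throwaway SQLite table (the table and column names are made up):

```python
# Hypothetical before/after of the "the network is slow" fix mentioned in the
# in-memory bullet above, demonstrated on a throwaway SQLite table: the slow
# version drags every row back and filters in application code; the fix pushes
# the filter into SQL so only the needed rows cross the wire.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (order_no TEXT, plant TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?)",
    [(f"SO-{i}", "CIN" if i % 50 == 0 else "GRB",
      "OPEN" if i % 10 == 0 else "CLOSED") for i in range(100_000)],
)

# Before: pull the whole table across the "network," then filter client-side.
everything = conn.execute("SELECT * FROM transactions").fetchall()
open_cin_slow = [row for row in everything if row[1] == "CIN" and row[2] == "OPEN"]

# After: let the database do the filtering and return only the rows needed.
open_cin_fast = conn.execute(
    "SELECT * FROM transactions WHERE plant = 'CIN' AND status = 'OPEN'"
).fetchall()

print(f"{len(everything)} rows fetched the slow way, {len(open_cin_fast)} the fast way")
```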