Topic 6 – Understanding the Emerging Business Architecture Layer

Post 1 – Help! We Don’t Have A Formal Business Architecture

Hardly a week passes in which some organization or another isn't forced to reduce payroll expenditures. As a long-time employee of a very large organization I have seen this play out more times than I can count. In my experience it plays out like this: 1) The decision to make payroll reductions is made by C-level executives, who then provide guidance to middle managers on the distribution of these cuts across organizational divisions and departments. These decisions are often subjective, based on the knowledge, experience and gut feelings of those with P&L responsibilities. 2) After a round of cuts, there is generally a scramble to figure out how to distribute the tasks of those cut among the survivors. It is generally a messy process, and oftentimes the gaps in business process coverage aren't discovered until the last moment, forcing a crisis or near-crisis event. 3) Once the dust settles the staff is leaner, and in the short term the organization's balance sheet improves. However, there is business process spaghettification: people end up taking on tasks that fall outside of their expertise and/or become overburdened and work less efficiently.

Notice that there is no mention of anyone consulting the Enterprise Business Architecture (EBA). Some organizations have no formal EBA, and therefore make decisions informed by experience and best intentions. In some cases this works out, but almost every such scenario would be improved by consulting the EBA. Gartner's definition of EBA is as follows (Burton, 2008):

“That part of the enterprise architecture process that describes — through a set of requirements, principles and models — the future state, current state and guidance necessary to flexibly evolve and optimize business dimensions (people, process, financial resources and organization) to achieve effective enterprise change.”

EBA is a tool for understanding the business, its processes, and the relationships between people, departments and the technology that binds them. For mid-to-large-sized organizations, it's a critical tool for ensuring that business decisions are carefully vetted and that far-reaching implications are understood. Having studied the models of business processes and the networks of information flow and relationships among the various EBA artifacts, decision makers can make informed choices. Agile organizations would be well served to work with their Enterprise Architects to begin building an Enterprise Business Architecture to inform business decisions and to serve as a means of understanding the business's current and future states.

EBA can be considered a "channel for engagement" with the business (Burton, 2009). In other words, it's a great segue for Enterprise Architects to engage the business on a topic in which the business will have a great deal of interest. If the business lacks a formal business architecture, it would be no surprise to me if it were eager to meaningfully engage on this topic and become a motivated participant in EBA codification.


Burton, Betsy. (2008). Understand Enterprise Business Architecture to Realize your Future State. Gartner. ID: G00158306

Burton, Betsy. (2009). Use EBA to Effectively Manage Business Transformation, Turmoil and Evolution. Gartner. ID: G00165663



Post 2 – Is Our Organization Ready for Enterprise Business Architecture?

I'm not above doing a bit of lobbying to realize my goal of becoming a practicing Enterprise Architect. I found an opening in my organization's recent reorganization effort, which entailed breaking down the old functional divisions and matrixing them together to provide integrated services. Another element of this reorganization effort is to streamline business processes across the organization. The organization has announced the creation of a Transformation Management Office, and has also re-established an Enterprise Architecture team. Thankfully, my politicking hasn't gone unnoticed (nor have the exorbitant bills for my grad school tuition). I was recently promised a role in this business process re-engineering effort – possibly as the second member of the EA group. However, getting what I've asked for may come bundled with a great many headaches. As I reflect on what this effort may entail, I have to wonder if the organization is ready for it. What will this effort entail, and do we have the resources to dedicate to doing the job right?

Organizations aren't born ready for Enterprise Business Architecture (EBA). There are a number of prerequisites for determining whether an effort to implement EBA is likely to succeed: availability of information, processes and skills (DeGennaro & Miers, 2012). All of these are challenges for my organization. Availability of information is a concern in that, as an organization that grows through acquisition, there are a great many information silos to sort through to access all of the information needed to fully understand the business information flows. A second challenge is that the organization has no formal business architecture. A key executive told me during an interview for another course that others in the organization "probably don't know what Business Architecture is". This speaks not only to the lack of codified business processes, but also indicates significant training needs that must be met before a standardized business process can be implemented across the organization.

One specific challenge to address is that the organization lacks true business analysts – the employees in these roles are more akin to software systems experts who help the business train employees in the use of software, provision new users, and prepare requirements documentation for new projects. However, since they are not embedded in the business, these analysts do not contribute meaningfully to whatever business architecture may exist, nor do they advocate on behalf of the business. Any architects assigned to work on business architecture development would be well served to take care not to be morphed into business architects or business relationship managers by virtue of their efforts (DeGennaro & Miers, 2012).

However, the organization also sports a number of very positive attributes that will ease the business process re-engineering effort. For one, the organization has a well-defined strategy thanks to its collaboration with a well-known management consulting firm. Moreover, the organization's top executives appear to be genuinely supportive of the effort, as indicated by regular emails from the C-suite outlining planned activities and future direction.

So, on one hand we have some technical, people and process challenges; on the other, a dedicated executive team. How does this bode for the organization's likelihood of successfully implementing business architecture? I suspect that, so long as the executive support does not waver as the complaints from middle managers start pouring in, the project will succeed. The dedication of management, I believe, will inspire others downstream to follow. Management must effectively communicate the benefits of establishing and codifying the organization's Business Architecture and how it will pay dividends in various other areas of the overall reorganization effort. Part of the organization's goal entails moving to a more mature operating model, which will be eased by this Business Architecture effort. Other initiatives that would be better informed and more efficiently implemented include SaaS adoption, Big Data projects and even upcoming IoT efforts – all of which will benefit from the knowledge gained through a codified Business Architecture.


DeGennaro, T. & Miers, D. (2012). Assessing Organizational Readiness for a Business Architecture Program. Forrester Database.


Post 3 – My First Experience in Business Architecture Modeling

The image to the right represents my very first effort at business process modeling with Aris Express using BPMN. I now consider it crude compared to some of my later modeling efforts, but I'm proud of it nonetheless because it represents my first real experience in attempting to understand a business process and model both it and the underlying technical architecture. I learned a great deal from this exercise, and I'd like to summarize my experience in order to give a general sense of what this process might look like for others endeavoring to do the same.

First – a warning. This is no trivial process! Initially I expected to sit down and complete this exercise in an afternoon. Ultimately it took me a couple of weeks to compile a list of all of the people, tools and business process steps required to create a business work product. I scheduled countless meetings with various individuals who carry out the modeled tasks – many of whom treated me with suspicion or indifference, or flat-out avoided me. Some of the people that I interviewed knew very little about the overall business process and preferred to stay within the safety of their sub-domain. Those few who had holistic knowledge of the process were particularly difficult to track down. In order to elicit cooperation I reviewed the help-desk queue and found that some key participants had outstanding technical issues – which I happily worked to resolve – and then, BAM!, I hit them with my questions.

My first interview was with a quite willing participant: my organization's EA group is a team of one. He was thrilled that I might be willing to offer up unsolicited (and unpaid) business process modeling in the area of "Customer Lifecycle Management", which spanned all business contact points with the customer. I was asked to take on the subset of tasks running from the creation of sales proposals through account setup. I was provided with a high-level summary of the groups and technologies involved in this process. Using it, and years of organizational knowledge, I embarked upon setting up interviews with subject matter experts in marketing, sales and ERP.

As previously mentioned, I went through some trials and tribulations to acquire the information I needed to model the current state of the process. However, as I embarked on actually creating the model, I realized that there were serious discrepancies between what had been described to me at a high level by the Enterprise Architect and what was described by the SMEs. I discovered that different tool sets were being used for the same process. I also discovered that none of these tools had been integrated, and so there were distinctly different processes in place across the organization. Additional interviews revealed that some of these tools had been adopted through shadow IT efforts and were not available across the organization. IT was only made aware of these tools thanks to my final report and recommended future-state model.

To create my model I used Aris Express to create BPMN models of the entire process. I had an overarching process (depicted above), and then I created a number of sub-models that articulated the individual business steps taken by each actor. I had three distinct current state models – one for each of the sets of tools being used to take customers through the first contact, proposal generation and account setup processes. My magnum opus was a future state that incorporated the "best of" – the most economical and functionally sound tools in use – as well as a plan for integrating each of the technologies.
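For readers curious what sits underneath such a diagram: BPMN 2.0 models are ultimately XML documents, and a skeleton can even be generated programmatically. Here is a minimal Python sketch (task names are illustrative stand-ins for the proposal-to-account-setup steps; a tool like Aris Express builds this for you graphically):

```python
import xml.etree.ElementTree as ET

BPMN_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"

def build_process(task_names):
    """Emit a minimal BPMN 2.0 process: start event -> tasks -> end event."""
    ET.register_namespace("", BPMN_NS)
    defs = ET.Element(f"{{{BPMN_NS}}}definitions")
    proc = ET.SubElement(defs, f"{{{BPMN_NS}}}process", id="proposalToAccountSetup")
    node_ids = ["start"] + [f"task{i}" for i in range(len(task_names))] + ["end"]
    ET.SubElement(proc, f"{{{BPMN_NS}}}startEvent", id="start")
    for i, name in enumerate(task_names):
        ET.SubElement(proc, f"{{{BPMN_NS}}}task", id=f"task{i}", name=name)
    ET.SubElement(proc, f"{{{BPMN_NS}}}endEvent", id="end")
    # sequence flows wire each node to the next one in order
    for i in range(len(node_ids) - 1):
        ET.SubElement(proc, f"{{{BPMN_NS}}}sequenceFlow", id=f"flow{i}",
                      sourceRef=node_ids[i], targetRef=node_ids[i + 1])
    return ET.tostring(defs, encoding="unicode")

xml_out = build_process(["Generate proposal", "Approve proposal", "Set up account"])
```

This only covers the happy path; real models add gateways, swimlanes and diagram layout information on top of this skeleton.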

In my enthusiasm for the project I had dared to hope that the outputs of my effort would be utilized (or at least reviewed) by others within the organization. Unfortunately, I was informed that I had created some nice models, but that my suggested future state had already been independently arrived at by the PMO, and that others were working on a comparable solution. Oh well, at least I had validation that I was on the right track!


Topic 5 – Enterprise Security Architecture

Post 1: Security in Software-Defined Data Centers

Software-Defined Data Centers (SDDC) are a form of virtualized networking technology that shifts the burden of networking logic from hardware to software. Essentially, SDDC replaces the capabilities normally handled by hardware at OSI layers 3-7 with "just in time" software definitions of the same. There are several advantages: 1) SDDC allows the dynamic allocation of routing and switching protocols; 2) open source tools are available that provide standardized "hooks", installed on dedicated controllers, that enable traffic management and also expose APIs that let programmers incorporate packet forwarding and control into their code; and 3) SDDC can be added as a feature to existing networking hardware (MacDonald, 2012). The last point is important because it implies that SDDC could extend the useful life of existing equipment and/or prevent the need to replace it in order to accommodate new system designs. Another key benefit of SDDC is that it enables on-demand fulfillment of various services, giving apps the capability to be continuously available.
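To make point 2 a bit more concrete, here is a rough Python sketch of the kind of declarative flow rule such controller APIs accept. All field names are hypothetical, loosely modeled on OpenFlow-style match/action rules rather than any specific controller's schema:

```python
import json

# Sketch of a declarative flow rule a software-defined controller might
# accept: the match criteria decide which packets the rule applies to,
# and the action says what to do with them. Field names are illustrative.
def make_flow_rule(priority, src_cidr, dst_port, action):
    return {
        "priority": priority,  # higher priority wins when rules overlap
        "match": {"ipv4_src": src_cidr, "tcp_dst": dst_port},
        "actions": [{"type": action}],
    }

# e.g., forward HTTPS traffic from the app subnet along the normal path
rule = make_flow_rule(100, "10.0.1.0/24", 443, "OUTPUT_NORMAL")
payload = json.dumps(rule)
# In practice this JSON would be POSTed to the controller's REST API
# (endpoint hypothetical), e.g.:
#   requests.post(f"{controller_url}/flows", data=payload)
```

The appeal is that forwarding behavior becomes data that programs can generate and revise on demand, instead of configuration locked inside individual switches.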

Thus far, SDDC has not been widely adopted. Part of the reason is that the adoption rate of cloud computing, and the influx of new technologies such as SDDC itself, feed uncertainty around making data center investments (Gill & Cappuccio, 2016). In addition, there is a great deal of pressure on IT generally to keep up with the business, which is happy to take on shadow IT projects in order to gain the capabilities it believes will bring about a rapid and decisive competitive advantage. SDDC's relative newness is another factor: its value proposition is clear to organizations with deep expertise in infrastructure and operations, but less so to traditional businesses such as construction and banking.

Having established a basis for generally understanding SDDC, it is appropriate to discuss the security aspects of this new technology. First, since we are essentially overriding the underlying hardware infrastructure – which has long provided hardware-based security – we must take over the burden of that security in software as well (MacDonald, 2012). Consider all of the hardware-based security typically encountered in data centers today: stateful packet inspection, malware inspection, firewalls, etc. This is no small undertaking to replace with code! Moreover, organizations must ensure support for the security capabilities needed to secure their newly software-defined data centers. These new software-defined controls must be aware of changes in the infrastructure around them, establish adaptive trust zones, and separately manage and configure security policies (MacDonald, 2012).

There are several risk areas to address as well. First, the "software-defined" revolution is in its infancy: there are no best practices, nor extensive histories or white-paper repositories, to support organizations that choose to take the leap. Chances are that if an organization does decide to implement SDDC, the effort will be supported by a vendor. That vendor becomes a critical partner and might not be easily replaced should things go poorly; hence the second risk, vendor lock-in (Russell & Scott, 2017). Finally, much as with cloud vendors, there is little standardization of the APIs exposed. These are formidable risks, but not insurmountable ones, especially for an edge organization willing to take them on for the potential of breakthrough technological innovation.


Gill, B., Cappuccio, D. (2016). 2016 Strategic Roadmap for Data Center Infrastructure. Gartner. ID: G00308122

MacDonald, Neil. (2012). The Impact of Software-Defined Data Centers on Information Security. Gartner. ID: G00229660

Russell, D., Scott, D. (2017). Should Your Enterprise Deploy a Software-Defined Data Center? Gartner. ID: G00316477


Post 2: Cloud Security – How Does an Organization Mitigate The Risks?

So, are the risks of moving enterprise resources to the cloud as severe as I've been led to believe? My research is leading me toward the opinion that cloud security is achievable when a collaborative process exists between the client and the cloud vendor. I submit that in many cases an organization's data might be more secure in the cloud. Why? Because setting up cloud platforms is what these vendors do – all day long, 24/7. Does your organization? Probably not. Most organizations have one or a few things that they do really well. If your organization is very good at providing medical services or design & build contracting, are you also very good at IT security? Maybe, but for many cloud providers, maintaining and securing infrastructure is the key deliverable and the reason for their existence. With some due diligence, organizations may find that cloud security isn't bullet-proof, but that it is no more risky than setting up apps on your own unpatched servers!

So, let's dig a little deeper. What are the hallmarks of successful attacks on data centers, generally speaking? They include misconfiguration, mismanagement, missing patches and all manner of human error (Heiser, 2016). These are issues to which any Internet-connected device is subject, and they can be mitigated with robust IT security management. From a buyer's perspective, we would be wise to investigate this before signing any contracts – more on that a bit later.

There are general and widespread concerns about cloud security, but in those instances where something bad did happen, were the cloud vendors fully at fault? Those of us who work in project-based organizations routinely deal with poorly defined or changing business requirements. Could this not also be true of customers of cloud services? Of those organizations that moved some business systems to the cloud, was it 100% clear who would own them? Was it clear who would be responsible for provisioning users? Lack of agreement on who owns such responsibilities is common (Heiser, 2013). The following graphic helps illustrate the segregation of responsibilities.

Figure 1: Cloud Ownership –

Another issue with cloud platforms is that of limited client control mechanisms for systems moving to the cloud (Heiser, 2016). APIs made available by vendors tend to have limited capabilities that might force an organization to compromise on security – and this is a legitimate concern. One workaround is to utilize third-party vendors that specialize in the space between organizations and their cloud-based apps. For virtualization security, organizations might consider Cloud Workload Protection Platforms (CWPP); for SaaS governance, they might consider Cloud Access Security Brokers (CASB).

To mitigate most cloud risks, a robust security assessment is in order. If an organization is to fully understand the risks of moving vital data to the cloud, the first questions to ask may be the following (Heiser, 2013):

  • Where will the data be physically housed?
  • What are the laws in that area?
  • What are the contractual practices in that area?
  • Is our contract enforceable in that area?
  • Who will be exposed to the data?
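One lightweight way to operationalize this list is to track each question's answer per vendor and surface the open items before any contract is signed. A minimal Python sketch (the sample answers are invented for illustration):

```python
# Due-diligence checklist mirroring the questions in the post above.
DUE_DILIGENCE_QUESTIONS = [
    "Where will the data be physically housed?",
    "What are the laws in that area?",
    "What are the contractual practices in that area?",
    "Is our contract enforceable in that area?",
    "Who will be exposed to the data?",
]

def open_items(answers):
    """Return the questions a vendor has not yet answered."""
    return [q for q in DUE_DILIGENCE_QUESTIONS if not answers.get(q)]

# Hypothetical partially-completed assessment for one vendor
vendor_answers = {
    "Where will the data be physically housed?": "EU-West region, Dublin",
    "Who will be exposed to the data?": "Vendor ops staff with background checks",
}
remaining = open_items(vendor_answers)  # the three jurisdiction questions remain
```

Anything still in `remaining` is a blocker worth escalating before signature, not after.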

These are the "due diligence" questions I referenced above. It behooves an organization to ask them before going any further. From there, an organization should inquire whether the vendor allows the buyer to execute penetration testing. If not, determine whether an audit was carried out by a third party. Get written evidence of all security and quality claims from the vendor before signing any agreements, and research any certification claim (Heiser, 2013).

Finally, it is important that the following risks are considered. The first is that not all cloud technologies are equal, nor are they standardized. The cloud service offered by Amazon Web Services differs from that of Google Cloud, which differs from the specialty SaaS vendor that's hosting your CRM or back-office business process. The security configurations and the breakdown of responsibilities between vendor and buyer will vary for each. Another issue is the lack of best practices in the assessment of cloud service risks (Heiser, 2013). Each organization must pursue risk assessments in the way it feels necessary and sufficient. Finally, not all cloud providers are willing to be forthcoming with what they feel to be proprietary information (Heiser, 2013).

Hopefully this gives you some context for better assessing the question of cloud security. Got any comments, questions or nasty remarks? I welcome your feedback. See the comment box below!


Heiser, J. (2013). Analyze the Risk Dimensions of Cloud and SaaS Computing. Gartner. ID: G00247629

Heiser, J. (2016). Predicts 2017: Cloud Security. Gartner. ID: G00296116


Post 3: The Role of Governance in Organizational Digital Security

It occurs to me that governance is a great way of managing the complexity of organizational digital security efforts. Gartner advocates an approach that I've outlined in the mindmap above. It appeals to me because of its simplicity and because it reminds me a bit of the TOGAF ADM. The recommended activities are concise, and each contributes meaningfully to the overall goal of digital security. Does your organization thoughtfully address each of the elements of a security program above?

There are so many points of attack to defend that the task would be overwhelming without an organized approach. I'm coming to appreciate through my research that governance plays a key role in the realization of good security. As evidenced by recent newsworthy security failures, there are likely a great many organizations with gaps in their security. My guess is that these organizations have taken a transactional approach to governance rather than using it to innovate and defend.

There have been several recent security breaches at organizations such as Target and Equifax, and even at tech companies like LinkedIn and Yahoo. Might stronger security governance have prevented them? The Target breach was enabled by an HVAC contractor that allowed credentials for several of its customers to be stolen. Equifax failed to patch a web application in a timely manner. Yahoo and LinkedIn – technology companies, mind you – failed to implement encryption schemes in line with those in use by their technological peers at the time. None of these breaches was unavoidable. What they have in common, in my mind, is that these organizations had the tools, capabilities and expertise sufficient to fully understand the risks, but maintained the status quo rather than actively defending against and deflecting attacks through a rigorous security program.

Another area of failure in my view is that of providing appropriate funding to meet the substantial global digital security risks. According to Gartner, “investment in security processes lags investment in other IT disciplines by three to five years” (Scholtz, 2013). To me, this is astounding given the well-publicized nature of notable corporate security breaches.


Scholtz, Tom (2013). Creating a Security Process Catalog. Gartner. ID: G00246890


Topic 4 – Enterprise Technology Infrastructure Architecture

Post 1 – Enterprise Technical Architecture Modeling

Having addressed Application Architecture and Information Architecture, I'd like to explore the basis of each: Enterprise Technical Architecture (ETA). An organization's technical architecture is the physical technical infrastructure upon which all PCs, servers, network hardware, applications and data, IP telephony, and video conferencing components run. In other words, it is the physical "guts" of an organization's IT infrastructure. It isn't static, but rather a dynamic, changing thing that must accommodate the business as it evolves to compete in an increasingly challenging environment.

Like other EA development activities, the ETA should be developed incrementally rather than through a "Big Bang" approach. Efforts to create monumental models and/or documentation of current states will likely waste significant time and produce artifacts that are never used. There are two key ETA deliverables: technical pattern models and technical services models. Technical pattern models serve as blueprints that application architects use to create applications. An example would be the use of Service-Oriented Architecture (SOA) rather than a monolithic client/server application architecture. Technical service models outline the actual services encapsulated in an SOA pattern model. To reiterate the incremental nature of this approach: rather than rebuilding all legacy apps to follow the technical patterns and services promoted by the EA group, those patterns and services instead become the basis for future development.

The technical pattern and service models promoted by the EA group should be modeled at each of the following levels of detail: portfolio, conceptual, logical and implementation (Robertson, 2007). These models vary by level of detail such that:

  • the portfolio-level model is a very simple model understandable by non-technical business stakeholders
  • the conceptual model captures key differentiating features
  • the logical model contains all architecture and technology details
  • the implementation model has the highest level of detail, including model information, device counts and configuration details
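The hierarchy above can even be captured as data, which helps keep a team honest about which details belong at which level. A small Python sketch (views per Robertson, 2007; the audience and detail descriptions are paraphrased, and the lookup helper is purely illustrative):

```python
# The four ETA model views, ordered from least to most detailed.
ETA_VIEWS = [
    {"view": "portfolio",      "audience": "non-technical business staff",
     "detail": "very simple, orientation only"},
    {"view": "conceptual",     "audience": "business and IT leadership",
     "detail": "key differentiating features"},
    {"view": "logical",        "audience": "architects",
     "detail": "all architecture and technology details"},
    {"view": "implementation", "audience": "implementation teams",
     "detail": "model information, device counts, configuration details"},
]

def view_for(audience_keyword):
    """Return the first view aimed at an audience matching the keyword."""
    return next(v["view"] for v in ETA_VIEWS if audience_keyword in v["audience"])
```

A structure like this also makes it easy to check that a project has produced a design artifact at every required level before review.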

Ideally, when these models are utilized by project teams to implement solutions, the resulting product designs would be substantially similar to the models they're patterned after, so that the two can be compared at each level (Robertson, 2007). If this sounds like a lot of work, you'd be right! However, the benefit of the exercise of EA, and of ETA modeling, is that it enables an organization to better execute its strategy.


Robertson, B. (2007). Toolkit Tactical Guideline: Use a Common Iterative Approach to Create ETA Models and Project Designs. Gartner. Retrieved October 3, 2017. ID: G00154007



Post 2 – So You’re on the IT Management Track

My purpose for maintaining this blog is two-fold. First, it is a requirement for my EA874 course in Enterprise IT Architecture. Secondly, I've found it to be a useful sounding board for my ideas and for discovering and addressing knowledge gaps...albeit in a very public way. One such knowledge gap: how does one transition from being an IT contributor to being an IT leader? Thankfully, as with all things IT, Gartner has us covered. In the article Toolkit Framework: Plan Your First 100 Days as an I&O Leader, Jay Pultz lays out a roadmap for an IT Infrastructure & Operations executive. This article provides the basis for the suggestions below – peppered with my own insights, of course!

Some of this information may be somewhat intuitive, but so often people complain that there's no "manual for that" (e.g. for child rearing). For those of us who can use a bit of guidance as we foray into the mysteries of IT management, this is your guide! The Gartner article outlines a 100-day new-hire-to-contributor roadmap with three parts: 1) learning the enterprise, 2) getting to know the key stakeholders, and 3) devising and implementing an enterprise-level I&O strategy. These are lofty goals, indeed, but stay with me because there is a method to the madness.

As advocated in the article, the first 30 days or so should be spent getting to know the organization, the IT infrastructure and the organizational SMEs, stakeholders and executives. It is rightfully described as a fact-finding tour. In my humble experience, IT will not suffer fools – unless you're a noob, in which case you have a finite amount of time to get up to speed. In this 30-day window you'll be expected to learn the role and the full scope of responsibilities bestowed upon you. This is your best and only window of opportunity to do so, as you'll be forever screaming toward the finish line from this point forward. Don't make changes right off the bat – instead you should:

  • Meet the CFO, the CIO, the Chief Enterprise Architect, all of your direct reports and key line of business managers
  • Develop an advisory team
  • Identify critical dates and milestones
  • Review all ongoing projects that you’ve inherited
  • Learn about vendors (past and current)

In the next phase of, say, 40 days, you'll want to take everything you've learned in the first phase and devise a strategy. Aim for a five-year strategy, work with all of the aforementioned stakeholders to vet your plan, and then get it ratified. Focus on areas where you can advance organizational strategy, and also look for opportunities to run lean, lean, lean! Baseline the current environment and develop the metrics by which you'll gauge your progress going forward.

In the final phase, you'll begin to implement your plan. Meet with all of your direct reports and explain your motivations and goals. Get your staff in alignment with your plans and make any staffing changes necessary to give your new strategy the best opportunity for success. This sounds cold, but I've seen this process take place in my own experience. My organization took on a new CIO. For the first several months of his tenure he learned the organization and the IT staff and capabilities. Ultimately, he determined where the bottleneck lay, and he made some difficult decisions. They weren't popular, but in retrospect it is clear to all parties involved that it was for the best. IT has changed for the better and all of those affected have landed on their feet. Hopefully dramatic changes like the ones at my organization won't be necessary, but that's what IT management entails. Any comments? Suggestions? Nasty remarks!?! Leave me a comment below.


Pultz, Jay (2010). Toolkit Framework: Plan Your First 100 Days as an I&O Leader. Gartner. Retrieved October 3, 2017. ID: G00201291



Post 3 – Data Management

When I was a kid, the first computer that I was able to actually spend time on and abuse was the Mac Plus. It required some rather tedious floppy disk management skills in order to carry out most tasks, but man, I was really smitten by it. The floppy drive had an 800 KB capacity. I recall this for two reasons: 1) my teacher at the time made a big deal about it – he thought it was truly wonderful; 2) my teacher was also my football coach. I'm not sure that the second point is relevant, but I'll never forget that I was taught BASIC by my football coach. This all happened in 1987 or 1988, which made my life flash before my eyes as I contemplated how long ago that was!

That 800 KB was a miracle then, but fast forward to the present day. Now I work for an organization that has its own data center. As I understand it, the nightly ETL processes approximately 10-20 GB of transactional data per day – and the growth rate of transactional data is accelerating. Now consider that my organization is investigating the use of IoT to monitor various aspects of facilities management across roughly 1 billion square feet of real estate. Clearly, the costs of housing all of this data are substantial and poised to grow – and we haven't touched on all of the dark data accruing in documents, emails and other miscellaneous sources across the WAN.
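A back-of-the-envelope projection shows why this matters. The sketch below assumes the 15 GB/day midpoint of the ETL range above and an invented 20% annual growth rate; both numbers are illustrative, not measured figures:

```python
# Project cumulative raw transactional storage over several years,
# compounding the daily ingest rate annually.
def projected_storage_tb(daily_gb, annual_growth, years):
    total_gb, rate = 0.0, daily_gb
    for _ in range(years):
        total_gb += rate * 365          # one year at the current daily rate
        rate *= 1 + annual_growth       # rate compounds year over year
    return total_gb / 1024              # GB -> TB

five_year = projected_storage_tb(daily_gb=15, annual_growth=0.20, years=5)
# roughly 40 TB of raw transactional data accumulated over five years
```

Even before IoT telemetry and dark data enter the picture, the raw feed alone accumulates into tens of terabytes, which is the point of the cost discussion that follows.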

There are several strategies for managing all of this data. To manage costs, organizations can deploy storage efficiency technologies wherever possible (Monroe, 2012), for example software-defined storage, hybrid arrays or flash storage devices. Now, let's say that an organization has done everything possible to make its data center as energy- and storage-efficient as possible. Wouldn't it also make sense to implement a data management program? Quite a lot of stored data has a finite shelf life, and there's little point in allowing it to take up space unnecessarily. Organizations should also look at strategies for reducing redundant data (Monroe, 2012). Another option is to move some or all of this data to the cloud, which can reduce the costs associated with data maintenance and storage.
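As one illustration of redundancy reduction, here is a minimal sketch of content-based deduplication, the idea behind many of the storage efficiency technologies mentioned above: identical blocks are stored once and referenced by their hash thereafter. (This toy version ignores chunking strategies and hash-collision handling that real systems must address.)

```python
import hashlib

class DedupStore:
    """Toy content-addressable store: duplicate blocks cost nothing extra."""

    def __init__(self):
        self.blocks = {}  # sha256 hex digest -> block bytes

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(digest, data)  # store only if not seen before
        return digest

store = DedupStore()
a = store.put(b"quarterly report, final")
b = store.put(b"quarterly report, final")   # duplicate content
c = store.put(b"quarterly report, draft")
# a == b, so only two unique blocks are physically stored
```

The same trick scales from file blocks to whole backup images, which is why deduplication can cut redundant-data footprints so dramatically.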

Since we have all of this data, why not make good use of it? Take the case of my organization. We’re looking at establishing IoT to identify operational efficiencies. By using big data analytics, all of the data collected by these devices can inform how to further optimize operations. A great many possibilities for improvement can come from exploiting the data stream generated by the IoT devices. Business architects and engineers can identify opportunities for improving and/or automating various business processes (Laney and Steenstrup, 2013). This will entail establishing a Big Data analytics program as well – but that will be addressed in another post! Thanks for reading.
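As a toy illustration of where such analytics might start, here is a hedged Python sketch (the sensor values, window and threshold are all invented) that flags readings spiking above a trailing-window average – the sort of simple anomaly check an IoT data stream makes possible:

```python
from collections import deque


def flag_spikes(readings, window=5, threshold=1.5):
    """Flag readings that exceed the trailing-window average by `threshold` times.

    `readings` is any iterable of numeric sensor values; the window size and
    threshold are illustrative, not tuned for a real deployment.
    """
    recent = deque(maxlen=window)
    flags = []
    for value in readings:
        if len(recent) == window and value > threshold * (sum(recent) / window):
            flags.append(value)
        recent.append(value)
    return flags
```

Even something this simple, run over equipment health metrics, could surface a failing pump or HVAC unit before it becomes a crisis.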

Laney, D. and Steenstrup, K. (2013). The Confluence of Operational Technology and Big Data. Gartner. Retrieved October 5, 2017. ID: G00246899

Monroe, J., et al. (2012). Predicts 2013: Managing Infinite Data From Every Direction. Gartner. Retrieved October 5, 2017. ID: G0025437

Topic 3 – Enterprise Data Architecture

Post 1 – Enterprise Information Architecture

In my previous discussion regarding Application Architecture, I mentioned the four pillars of Enterprise Architecture: business architecture, application architecture, technical architecture and data (information) architecture. Information architecture (IA) concerns the dissemination of data across the organization and, for the enterprise architect, all of the modeling, principles, and current- and future-state planning that goes into supporting the organization’s business strategy. Like the practice of Enterprise Architecture itself, IA should not take on too much at one time or it will be doomed to failure. There’s just too much data across a modern organization to tackle all of it at once. Therefore, one would be well advised to take a phased approach to IA based on the areas of urgent need and stakeholder pain.

One critical aspect of IA is modeling. Modeling is important because it helps one distill thoughts and efforts and lay them out in an organized manner. For IA modeling efforts there are three basic levels of abstraction: 1) conceptual models, 2) logical models, and 3) implementation models. Modeling informs a number of subsequent activities including architecture, design, implementation and management (Newman et al., 2008). Moreover, each level of abstraction speaks to a different audience. At the conceptual level, in my way of thinking, the philosophy or purpose of IA is made clear. The logical level of abstraction establishes a formal argument: a detailed model of the entities and relationships to be established. The implementation model is the detailed specification that would be utilized by solution and application architects in project-level work.
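To make the logical/implementation distinction concrete, here’s a small Python sketch of my own (the Facility/SensorReading entities are an invented example, not from the Gartner piece): the logical level captures entities and their relationship independent of technology, while the implementation level commits to concrete, vendor-flavored DDL.

```python
from dataclasses import dataclass

# Logical level: entities and their relationship, independent of storage technology.
@dataclass
class Facility:
    facility_id: int
    name: str

@dataclass
class SensorReading:
    facility_id: int  # each reading belongs to exactly one Facility
    value: float

# Implementation level: the same model expressed as concrete (SQLite-flavored) DDL.
IMPLEMENTATION_DDL = """
CREATE TABLE facility (
    facility_id INTEGER PRIMARY KEY,
    name TEXT
);
CREATE TABLE sensor_reading (
    facility_id INTEGER REFERENCES facility(facility_id),
    value REAL
);
"""
```

The conceptual level would sit above both – a one-line statement like “we track readings from our facilities” that any stakeholder can understand.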

IA is a vital component of EA because it paves the way to establishing a “single source of truth”, and simultaneously encourages data reuse and consistency of data across the organization. This is an exercise in breaking down data silos based on legacy technical architecture and upgrading to modern, agile data services and sources (Newman et al., 2008). The IA planning done by enterprise architects, if embraced by the organization, becomes the basis of a full-fledged Master Data Management program. One catch, however, is that not all information can be architected. Proprietary systems, for example, are unlikely to expose their data structures and data storage methodology for scrutiny.


Newman, D., Gail, N., & Lapkin, A. (2008). Gartner Defines Enterprise Information Architecture. Gartner Database. Retrieved September 18, 2017. ID Number: G00154071

Post 2 – Data Quality and Data Silos

Data silos and poor-quality data have been part of my daily existence for the past decade. In my organization, when anyone mentions the quality of our data, most especially that originating from our ERP, they are met with knowing sneers. Data quality is a widely known issue, and nothing has been done to address it. Most of us who have been around for a while still sport the wrong titles in our AD records because nobody has ever bothered to update them…and even if you ask, there’s no incentive to jump on the request. This is the norm. But why? Well, because developers simply reinvent the wheel with many or most of the applications that are designed, developed and tested in-house. They have created their own “sources of truth”: databases that they have developed from a hodgepodge of sources. There is no culture of quality, no organizational QMS, no attempt to address the issue in any substantive way. A new CIO recently came on board, bringing with him big plans for modernization and a cadre of new managers, and now they’re asking questions. Fingers have been pointed and the developers are feeling the heat.

So what can an organization do to get around such an issue? Obviously, it can’t start from scratch and rework every in-house application currently in service. To get a grasp of the problem, the organization should take an in-depth look at the issue, start building a legitimate data taxonomy, and allow this exercise to inform future EIA activities and development. The company would be well served to start planning, and the EA team should spearhead the effort. The EA team is well positioned to look at the existing information architecture and the IT systems and business processes that rely on it. By applying an Enterprise Information Architecture (EIA) framework, the EA group can start building out a map of the data, the data silos and the information flows between applications. They should start with the biggest pain point – and in the instance of the organization in question, that means starting with the ERP.

One recommendation that I encountered in my readings was to start EIA planning activities by documenting and modeling the critical-to-business processes as identified by executive management. By understanding the pain points of these critical business processes, an architect can identify the systems upon which each business process relies, the information that is created in that business process, and how that information is stored and disseminated (Cullen, 2005). Work would then progress to identifying sources of structured and unstructured data and establishing data taxonomies focused around these critical-to-business processes. It would then be possible to map the data and data entities to the target applications and/or repositories. In this exercise, one would gain an understanding of the current state, which would inform how to go about implementing the desired future state and an enhanced, more effective and reliable business process.
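As a toy example of what that current-state map might look like once captured, here is a Python sketch (the system names are hypothetical) that records information flows between applications and answers a basic architectural question: which systems sit downstream of a given source, such as the ERP?

```python
# Hypothetical current-state map: which systems feed data to which consumers.
DATA_FLOWS = {
    "ERP": ["Reporting DB", "HR Portal"],
    "Reporting DB": ["Dashboard"],
    "HR Portal": [],
    "Dashboard": [],
}


def downstream_of(system, flows=DATA_FLOWS):
    """All systems that (transitively) consume data originating in `system`."""
    seen, stack = set(), list(flows.get(system, []))
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(flows.get(node, []))
    return seen
```

Even a crude adjacency map like this makes data-quality blast radius visible: if the ERP’s data is suspect, everything `downstream_of("ERP")` returns inherits the problem.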

Ideally, this effort would lead to a business-led push to establish a full-fledged Master Data Management program which would systematically break down the data silos that are preventing the organization from modernizing. Painful, yes – but worth the effort, because refusing to acknowledge the issue will, at some point, prevent the organization from realizing its growth goals or responding to future business disruptions.

Cullen, A. (2005). Simplifying Information Architecture. Forrester. Retrieved September 16, 2017.

Topic 3 – Master Data Management

Having a Master Data Management (MDM) practice is something that appeals to me. My logic goes like this: all companies have governance for large expenditures, and almost all have governance for project activity, but not all organizations have governance over one of their most valuable assets – data. I think that the biggest reason for this is that data management is hard. Also, there’s often no organizational entity that takes ownership of it. The business assumes that IT should be handling it. But should IT be in charge of data management and quality? Would IT be in the best position to determine data semantics? They might try – but the business has the in-depth knowledge of the sources of data and of the business processes that utilize the data. Therefore, it would make sense for the business to take ownership, with IT playing an active but complementary role.

What I’m advocating is establishing an MDM governance framework. This would assure data uniformity, accuracy, semantic persistence, stewardship and accountability for the organization’s master data (White, 2011). The scope of this governance framework would be enterprise-wide, and it should be undertaken, owned and managed by the business. An MDM program would be composed of a data governance council, data stewards, and a data maintenance role – often taken on by an organizational shared-services group. There are two keys to making this work: 1) the governance group should include, and have the full backing of, executive management, and 2) the framework must be enforced and informed by knowledgeable and active data stewards (Friedman, 2007).

MDM is no trivial undertaking, of course, and will entail a sustained effort and a great deal of hard work. Establishing a “single source of truth” means defining master data complete with a set of identifiers, attributes and hierarchies (White, 2011). Once master data standards are agreed upon by the governance group, the data stewards will be charged with assuring that master data adheres to those standards, and they should be held accountable for the integrity of the data going forward.
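As a tiny, hypothetical Python sketch of what “adhering to the agreed standards” might look like in practice (the required attributes and the two-letter country rule are my own invented example, not White’s), a steward’s validation check could be as simple as:

```python
# Hypothetical master-data standard for a "customer" record.
REQUIRED_ATTRIBUTES = {"customer_id", "legal_name", "country_code"}


def violates_standard(record):
    """Return the standard violations a data steward would flag for one record."""
    problems = []
    missing = REQUIRED_ATTRIBUTES - record.keys()
    if missing:
        problems.append(f"missing attributes: {sorted(missing)}")
    if "country_code" in record and len(str(record["country_code"])) != 2:
        problems.append("country_code must be a 2-letter ISO code")
    return problems
```

The real work, of course, is in agreeing on the standard in the first place; automating the check is the easy part.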

This effort will not, by itself, bring about a competitive advantage. However, it will improve business process integrity, govern the people who generate and use the data, and will generally bring improved business outcomes (White, 2011).


White, A. (2011). Scoping An Effective MDM Governance Framework. Gartner Database. Retrieved September 19, 2017. ID Number: G00214129

Friedman, T. (2007). Best Practices For Data Stewardship. Gartner Database. Retrieved September 18, 2017. ID Number: G00153470


Topic 2 – The Enterprise Application Architecture

Post 1 – Memcached

I read an article recently that made me think about Moore’s law. According to Moore’s law, roughly, the number of transistors that can be packed into a small piece of silicon doubles every two years or so (often quoted as every 18 months). We’re currently edging toward transistors as small as 10nm, which will be a magnificent achievement. Whether we’ll be able to keep up this pace of miniaturization remains to be seen. I recall the debate from the 2000s about whether we would be able to breach the 25nm barrier. Clearly, we got past that obstacle; perhaps we can get past the 10nm threshold as well.

The latest innovations in computing have pushed the envelope on computer processing: the proliferation of mobile devices, IoT, cloud computing, big data, etc. It seems to me that, even with continued advances at the pace of Moore’s law, our processing capabilities could actually be outstripped by the mountains of data that need to be processed. In the case of mobile applications, bursty data connections over mobile networks prevent apps from operating at peak efficiency (Golden, 2010). Thankfully, there are smart folks out there who figured out how to harness the power of unused or under-utilized resources to help make up for the inherent limitations of working on a mobile platform.

Memcached is just such a technology. It’s conceptually simple, and implementation appears to be relatively painless. It is a method of pooling unused memory. In any given cluster of servers, each server has its own memory cache. Under normal circumstances each cache is a distinct, limited instance. However, when Memcached is deployed on a cluster, the cache of each machine is pooled into a single logical cache shared among all the servers (Dormando, 2009). The advantage is that Memcached can improve application performance by keeping data in fast memory that might otherwise have to be retrieved or stored by means of slower channels (e.g. Web services, databases, etc.).
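The pooling works because the client, not the servers, decides where each key lives. A minimal Python sketch of that idea follows – the server names are hypothetical, and real Memcached clients typically use consistent hashing so that adding or removing a node remaps as few keys as possible:

```python
import hashlib

SERVERS = ["cache-01", "cache-02", "cache-03"]  # hypothetical cluster nodes


def pick_server(key, servers=SERVERS):
    """Deterministically map a cache key to one server in the pool.

    Because every client hashes the same way, they all agree on which node
    holds a given key, and the cluster's combined memory behaves like one
    logical cache.
    """
    digest = hashlib.md5(key.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]
```

No coordination between servers is needed; each node just answers get/set requests for the keys the clients route to it.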

Another notable aspect of Memcached is the way in which the cache is handled. It uses a “Least Recently Used” (LRU) paradigm, in which the items that have gone longest without being accessed are evicted first once the cache fills (Dormando, 2016). This eliminates the need for Java-like garbage collection and keeps the cache from getting bloated with superfluous data. These features are particularly useful for mobile, as mentioned, but also for addressing the challenges posed by the application architectures needed to support cloud computing and big data.
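For illustration, LRU eviction can be sketched in a few lines of Python using an ordered dictionary – a deliberate simplification, since the real Memcached accounts for memory in slabs and bytes rather than item counts:

```python
from collections import OrderedDict


class LRUCache:
    """Tiny illustration of least-recently-used eviction (capacity in items, not bytes)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)  # touching an item makes it "recent" again
        return self.items[key]

    def set(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the least recently used item
```

The key property is that eviction is automatic and cheap: no background sweep or garbage-collection pause is needed to keep the cache within bounds.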


Dormando. (2009, April 3). Retrieved September 9, 2017, from

Dormando (2016, May 27). Memcached Overview. Retrieved September 7, 2017, from

Golden, B. (2010, January 19). Cloud Computing: The Future of IT Application Architectures. Retrieved September 9, 2017, from–the-future-of-it-application-architectures.html


Post 2 – Application Integration: Cost Increases Associated with Mobile Computing

Like any other business cost-center manager, CIOs are driven to reduce costs rather than seek to spend more. It takes a brave CIO, indeed, to push for additional spending. However, with the proliferation of mobile devices, cloud computing, the integrated nature of enterprise computing, IoT and the associated big data, it may not be possible to keep up with business computing demands and also save money.

There are several reasons for this. First, supporting mobile devices simply costs more. Quite a lot of work goes into the development of mobile applications. Specialized skills and tools are required, including the management of mobile users, server-side logic implementation, UX development, data requirements and data handling (see my earlier post about Memcached), and even specialized testing efforts (Tarcomnicu, 2017). As a QA Analyst I know all too well the challenges, time and expense of assuring that mobile applications work correctly across platforms, after OS updates, and on various mobile browsers (feel free to take a peek at my LinkedIn profile). Moreover, enterprise mobile apps will likely need to access data from legacy enterprise apps – and to be able to operate without significantly draining battery life in the process (Altman et al., 2012). All of these considerations add to the cost of mobile application development.

There are a number of upsides that CIOs can use to sell this increase in investment (Tarcomnicu, 2017):

  • Greater accessibility
  • Improved customer engagement
  • Reinforced brand recognition
  • Improved value propositions
  • Additional sales channel

Interestingly, there are mobile platform-based cost factors for CIOs to consider as well. Android applications tend to cost 2-3 times more than iOS apps, and iOS apps tend to serve as a bellwether for application success (Tarcomnicu, 2017). This might be because iOS is the older, and perhaps more stable, of the two platforms, with a significantly larger and more robust app store than Android’s. It might also be the case that its development tools are more mature, leading to quicker development times and lower costs.


Altman, R., Pezzini, M., Sholler, D., Thompson, J., Gall, N., & Wilson, N. (2012). Predicts 2013: Enterprise Application Architecture Will Emulate Big Web Sites. Gartner Database. Retrieved September 10, 2017. Document ID: G00245997

Tarcomnicu, F. (2017, February 20). The Real Costs of Building a Mobile App for iOS and Android. Retrieved September 7, 2017, from

Post 3 – Application Architecture Roles

If ever I mention that I’m studying Enterprise Architecture, I am invariably asked, “What is Enterprise Architecture?” I must admit, for those first few months after joining the EA program at PSU, I struggled to come up with a concise definition. I’ve found some value in describing it in terms that are more easily assimilable – at least for IT folks. My approach is to describe EA as an amalgam of business architecture, application architecture, data architecture and technical architecture. I’ve heard this referred to as the “Four Pillars of EA” – although I’ve also seen articles referring to three- and five-pillar variants as well. I’ll go with four, since the folks at the Open Group put a lot of work into codifying it. Over the past year-plus, I have spent significant time unraveling the mysteries of business, data and technology architecture – but application architecture hasn’t received equal treatment within the curriculum.

So who does application architecture, and what does it entail? It turns out that application architecture is carried out at both the enterprise level and the project level. At the enterprise level, solutions architects are responsible for the concept planning work behind creating reusable assets, and they work towards the adoption of application development standards and solution patterns (Blechar & Robertson Pt 1, 2010). Solutions architects also coordinate the activities that will bring the organization from its current state to its planned future state. Also at the enterprise level, application architects are responsible for designing the solutions defined by the solutions architect. Application architects are more immersed in the technical specifications of existing systems, and lend the insights and technical design decisions that facilitate the implementation of proposed solutions and standards (Blechar & Robertson Pt 1, 2010).

The outputs of enterprise-level solutions and application architects become the frameworks within which the project-level solutions and application architects work. Project-level solutions architects assure that project work is carried out in accordance with the standards and solutions guidance provided by the enterprise-level solutions architects, but do so with a focus on the project to which they are assigned. Project-level solutions architects also assure that the right resources are in place to carry out implementation activities (Blechar & Robertson Pt 2, 2010). Project-level application architects carry out the work of designing reusable application interfaces and software services, such as those used by organizations following the SOA development paradigm (Blechar & Robertson Pt 3, 2010). Application architects work closely with business analysts and other subject matter experts to assure that security, business rules and business requirements are well understood among all stakeholders and project team participants.

This all assumes, of course, that the organization is sufficiently resourced and technologically sophisticated to require such services. As a matter of practicality, a small-to-medium enterprise may not break out these roles as described.


Blechar, M., & Robertson, B. (2010). Application Architecture Overview, Part 1: General Context and Scope. Gartner Database. Article ID: G00208946

Blechar, M., & Robertson, B. (2010). Application Architecture Overview, Part 2: Enterprise-Level Scope and Roles. Gartner Database. Article ID: G00208666

Blechar, M., & Robertson, B. (2010). Application Architecture Overview, Part 3: Project-Level Scope and Roles. Gartner Database. Article ID: G00208668

Topic 1 – Digital Disruption and Stack Overview; Cloud Technologies’ Influence

Post 1 – Application Architecture and Microservices

A colleague recently encouraged me to investigate microservices as an emerging trend. This week’s reading assignment gave me just that opportunity. Having read Microservices: An Application Revolution Powered by the Cloud and 4 Ways Docker Fundamentally Changes Application Development, I came away with the powerful impression that software development has made an evolutionary leap.

My organization’s IT department has recently begun to experiment with Agile software development, and more holistically, has taken steps to evolve into a DevOps mode of operation. This change has been purposefully gradual, with hiccups and new problems being encountered on the way. It has been a learning experience nonetheless, as process reviews are carried out at regular intervals in order to discuss and document lessons learned. However, it occurred to me that our reliance on monolithic application models may become our next challenge to overcome.

The typical software development project in my world closely follows the monolithic application model: a three-tier infrastructure separated into Web, Business Logic and Data components that are load-balanced and pre-scaled to support anticipated peak loads (Russinovich, 2016). This all seemed ideal until we started moving applications to third-party clouds. A great deal of integration complexity must now be tamed.

This is where the adoption of microservices may offer relief on many fronts. First, as we follow the DevOps path, incorporating a container technology such as Docker will facilitate closer collaboration between the development and operations groups (Carlson, 2014). Utilizing a containerized application architecture will help realize the goal of continuous integration (especially given the newfound reliance on cloud applications), and will address scaling issues at the same time.

The separation of applications from the infrastructure supporting them will bring other benefits. For one, the use of microservices can improve software performance by minimizing network overhead and complexity (Russinovich, 2016). Application testing will be simplified, since code releases will be restricted to the target microservices – a segregation of duties that allows testers to focus on newly released code and significantly reduces the need for broad regression testing. The scaling abilities of a microservices architecture will also reduce performance risks due to user loads.
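To illustrate how small a single-responsibility service can be, here’s a hedged Python sketch of a health-check microservice built on the standard library – a stand-in for whatever framework a real deployment would use, and something that would in practice be packaged into a container image:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class HealthHandler(BaseHTTPRequestHandler):
    """One endpoint, one responsibility: the kind of small, independently
    deployable unit a microservice decomposition produces."""

    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet


def make_server(port=0):
    """Bind on the given port (0 = ephemeral); the caller runs serve_forever."""
    return HTTPServer(("127.0.0.1", port), HealthHandler)
```

Because the service owns nothing but its own endpoint, a new release touches only this process, which is exactly the testing and scaling isolation described above.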

The upsides of adopting a microservices architecture seem to be a good fit in this context. This is a theme that I’m sure to revisit over the course of the semester.


Carlson, L. (2014, September 18). 4 ways Docker fundamentally changes application development. Retrieved August 28, 2017, from

Russinovich, M. (2016, March 17). Microservices: An application revolution powered by the cloud. Retrieved August 28, 2017, from


Post 2 – Digital Disruption in the Facilities Maintenance Industry

This week I read several articles on digital disruption, and to put the topic into a comfortable context for my own reflection, I considered it through the lens of the industry I currently work in. The facilities maintenance industry is on the cusp of digital disruption through IoT service augmentation. There are now inexpensive devices capable of monitoring building usage patterns, and even facilities equipment usage and health metrics, that can reduce costs through informed targeting of activities. Examples of their use might be monitoring foot traffic in strategic areas for targeted cleaning or security attention. Washroom consumables and even wastebasket fill levels can be tracked to optimize operations. With this knowledge, resources can be more strategically positioned, and overall labor costs scaled to the right fit for the job (France, 2016).
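As a toy example of the “informed targeting” idea, a few lines of Python (the locations and the 80% threshold are invented) can turn raw fill-level readings into a dispatch list:

```python
# Hypothetical IoT readings: wastebasket fill levels (percent) by location.
FILL_THRESHOLD = 80


def bins_needing_service(readings, threshold=FILL_THRESHOLD):
    """Return locations whose latest fill reading crosses the service threshold,
    so crews are dispatched where they are actually needed rather than on a
    fixed rotation."""
    return sorted(loc for loc, pct in readings.items() if pct >= threshold)
```

The business value isn’t in the code, which is trivial, but in replacing fixed cleaning rotations with data-driven dispatch.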

However, these technologies have not yet been rolled out at large scale, so there is an opening for a significant market-share grab by a bold innovator. Failure to take advantage of this known, but not yet fully exploited, IoT innovation will force incumbents to react to this potential digital disruption rather than lead it. As outlined by VanZeebroeck & Bughin, incumbent reactions take on two dimensions: 1) the response to the new digital entrant and the digital disruption itself, and 2) the response to the actions of other incumbents (VanZeebroeck & Bughin, 2017).

The largest facilities management organizations, such as Aramark, ABM Industries, and Sodexo Group, are long-established multi-channel organizations that have the benefit of persistent consumer behaviors (Willmott & Olanrewaju, 2013). Digital disruption would have a profound impact on these incumbents because it would be an industry first and, with no industry precedent, a potentially painful and expensive endeavor to endure. Rather than a mere addition to existing capabilities, this disruptive effort must be driven by a well-defined corporate strategy that fully enables the new mode of operations (VanZeebroeck & Bughin, 2017).

The first entrant has a critical advantage: it can prepare and execute a comprehensive business and technical architectural re-engineering effort designed to support all aspects of IoT – implementation, data collection and analysis. This is no small undertaking, and it would take significant time for an incumbent to react. New capabilities would have to be studied, an emergency response investment strategy devised, and a technical architecture implemented to support the influx of data, along with big data analysis tools and bolstered technical know-how. Finally, new and existing business processes would have to be fully integrated so that the new processes could be executed concurrently with the existing, manual, business processes.

The lesson is clear. It is much better to be a new digital entrant than to play catch-up.


France, A. (2016, December 15). The Internet of Things and the Big Brother of Clean. Retrieved August 28, 2017, from

VanZeebroeck, N., & Bughin, J. (2017). The best response to digital disruption. MIT Sloan Management Review, 58(4), n/a-86. Retrieved from

Willmott, P., Olanrewaju, T. (2013, November). Finding your digital sweet spot. Retrieved August 28, 2017, from


Post 3 – The Problems with SaaS

Moving ERP to the cloud will resolve all problems with scale, reduce maintenance, upgrade and operating costs, and reduce software customization. That and, darn it, everyone else is doing it too!

Not so fast.

This may not be the universal experience of moving ERP to the cloud. There is a great deal of vendor hype that makes moving an expensive and labor-intensive ERP to the cloud seem appealing. However, there are a number of SaaS myths and misperceptions that might lead organizations to hasty, regrettable decisions if they don’t carefully consider their business strategy and its suitability to a cloud-based ERP.

The reality of moving ERP to the cloud is that it may require more sacrifices than an enterprise initially anticipates. For one, it is a strategic risk for an organization to hand over its data and critical-to-business systems to be managed, maintained and secured by a third party (Elmonem et al., 2016). Organizations would be prudent to do extensive research and risk assessment around data loss. Does the cloud ERP vendor have sufficient disaster recovery contingencies in place? Might organizational data get confiscated in an unrelated law-enforcement case?

There are other issues to consider. Organizations must be working towards, or willing to accept, some amount of business process standardization. If a great deal of customization is required, the system becomes less maintainable in a cloud context, and costs may rise to the point that any financial benefits are offset. Moreover, any integrations with a cloud-based ERP could entail extensive rework of existing business and information systems. The cloud system’s software architecture will be dictated by the vendor, so IT flexibility is required.

Most importantly, however, the majority of the time spent on ERP implementations goes to process design, organizational change management, training, data migration and testing (Hestermann & Rayner, 2015). For this reason, a cloud-based implementation may not deliver all of the benefits that might be hoped for. For small to mid-sized organizations, however, a cloud-based ERP may offer affordable access to a sophisticated ERP tool-set that, before the advent of the cloud, was the domain of large, deep-pocketed organizations.


Elmonem, M. A., Nasr, E. S., & Geith, M. H. (2016). Benefits and challenges of cloud ERP systems – A systematic literature review. Future Computing and Informatics Journal, 1(1-2), 1-9. doi:10.1016/j.fcij.2017.03.003

Hestermann, C., & Rayner, N. (2015). The Top 10 SaaS ERP Myths. Gartner Reports. Retrieved August 29, 2017. ID: G002894654