Topic 2 – The Enterprise Application Architecture

Post 1 – Memcached

I read an article recently that made me think about Moore’s law. According to Moore’s law, roughly, the number of transistors that can be packed into a given piece of silicon doubles every 18 months. We’re currently edging toward transistors as small as 10nm, which will be a magnificent achievement. Whether we’ll be able to keep up with this pace of miniaturization remains to be seen. I recall the debate from the 2000s about whether we would be able to breach the 25nm barrier. Clearly, we got past that obstacle; perhaps we can get past the 10nm threshold as well.

The latest innovations in computing have pushed the envelope on computer processing: the proliferation of mobile devices, IoT, cloud computing, big data, etc. It seems to me that, even with processing power advancing at the pace of Moore’s law, our processing capabilities could actually be exceeded by the mountains of data that need to be processed. In the case of mobile applications, bursty data connections over mobile networks prevent apps from operating at peak efficiency (Golden, 2010). Thankfully, there are smart folks out there who have figured out how to harness unused or under-utilized resources to help make up for the inherent limitations of working on a mobile platform.

Memcached is just such a technology. It’s conceptually simple, and implementation appears to be relatively painless. It is a method of pooling unused memory. In any given cluster of servers, each server has its own memory cache. Under normal circumstances each cache is a distinct, limited instance. However, when Memcached is deployed on a cluster, the cache of each machine is pooled into a single cache shared among all of the servers (Dormando, 2009). The advantage is that Memcached can improve application performance by storing data locally that might otherwise have to be retrieved or stored by means of slower channels (e.g., web services, databases, etc.).
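To make the pooling idea concrete, here is a minimal sketch (the node names are made up) of how a Memcached-style client maps each key to exactly one server in the pool, so that every application server agrees on where a given value lives without any central coordinator:

```python
import hashlib

# Hypothetical pool of cache nodes; a real deployment would list host:port pairs.
NODES = ["cache-a:11211", "cache-b:11211", "cache-c:11211"]

def pick_node(key: str, nodes=NODES) -> str:
    """Hash the key and map it to one node, deterministically.

    Because the hash is stable, every client computes the same answer,
    which is what turns many separate caches into one logical pool.
    """
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]
```

Note that the simple modulo scheme above is only illustrative; production clients typically use consistent hashing so that adding or removing a node remaps only a fraction of the keys instead of nearly all of them.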

Another innovation of Memcached technology is the way in which the cache is handled. It uses a “Least Recently Used” (LRU) policy, evicting data that doesn’t get used over some specified timeframe (Dormando, 2016). This eliminates the need for Java-like garbage collection and keeps the cache from getting bloated with superfluous data. These features are particularly useful for mobile, as mentioned, but they also help address the demands that cloud computing and big data place on application architectures.
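A toy illustration of the LRU idea (this is not Memcached’s actual slab-based implementation, just the eviction policy in miniature): when the cache is full, the entry touched least recently is the one discarded.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal sketch of least-recently-used eviction.

    `capacity` stands in for memory pressure: once it is exceeded,
    the stalest entry is evicted rather than garbage-collected later.
    """
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)        # mark as recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict the least recently used entry
```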


Dormando. (2009, April 3). Retrieved September 9, 2017, from

Dormando. (2016, May 27). Memcached Overview. Retrieved September 7, 2017, from

Golden, B. (2010, January 19). Cloud Computing: The Future of IT Application Architectures. Retrieved September 9, 2017, from–the-future-of-it-application-architectures.html


Post 2 – Application Integration: Cost Increases Associated with Mobile Computing

Like any other business cost-center manager, CIOs are driven to reduce costs rather than seek to spend more. It takes a brave CIO, indeed, to push for additional spending. However, with the proliferation of mobile devices, cloud computing, the integrated nature of enterprise computing, IoT and the associated big data, it may not be possible to keep up with business computing demands and also save money.

There are several reasons for this. First, supporting mobile devices simply costs more. Quite a lot of work goes into the development of mobile applications. Specialized skills and tools are required to support it, including the management of mobile users, server-side logic implementation, UX development, data requirements and data handling (see my earlier post about Memcached), and even specialized testing efforts (Tarcomnicu, 2017). As a QA Analyst, I know all too well the challenges, time and expense of assuring that mobile applications work correctly across platforms, after OS updates, and on various mobile browsers (feel free to take a peek at my LinkedIn profile). Moreover, enterprise mobile apps will likely need to access data from legacy enterprise apps – and to be able to operate without significantly draining battery life in the process (Altman, et al., 2012). All of these considerations add to the cost of mobile application development.

There are a number of upsides that CIOs can use to sell this increase in investment (Tarcomnicu, 2017):

  • Greater accessibility
  • Improved customer engagement
  • Reinforced brand recognition
  • Improved value propositions
  • Additional sales channel

Interestingly, there are mobile platform-based cost factors for CIOs to consider as well. Android applications tend to cost 2-3 times more than iOS apps, and iOS apps tend to serve as a bellwether for application success (Tarcomnicu, 2017). This might be because iOS is the older, and perhaps more stable, platform of the two, with a significantly larger and more robust app store than Android’s. It might also be that the development tools are more mature, leading to quicker development times and lower costs.


Altman, R., Pezzini, M., Sholler, D., Thompson, J., Gall, N., & Wilson, N. (2012). Predicts 2013: Enterprise Application Architecture Will Emulate Big Web Sites. Gartner Database. Retrieved September 10, 2017. Document ID: G00245997

Tarcomnicu, F. (2017, February 20). The Real Costs of Building a Mobile App for iOS and Android. Retrieved September 7, 2017, from

Post 3 – Application Architecture Roles

If ever I mention that I’m studying Enterprise Architecture, I am invariably asked, “What is Enterprise Architecture?” I must admit, for those first few months after joining the EA program at PSU, I struggled to come up with a concise definition. I’ve found some value in describing it in terms that are more easily assimilable – at least for IT folks. My approach is to describe EA as an amalgam of business architecture, application architecture, data architecture and technical architecture. I’ve heard this referred to as the “Four Pillars of EA” – although I’ve also seen articles referring to three- and five-pillar variants as well. I’ll go with four, since the folks at The Open Group put a lot of work into codifying it. Over the past year and more, I have spent significant time unraveling the mysteries of business, data and technology architecture – but application architecture hasn’t received equal treatment within the curriculum.

So who does application architecture, and what does it entail? It turns out that application architecture is carried out at both the enterprise level and the project level. At the enterprise level, solutions architects are responsible for the conceptual planning behind reusable assets and drive the adoption of application development standards and solution patterns (Blechar & Robertson Pt 1, 2010). Solutions architects also coordinate the activities that will bring the organization from its current state to its planned future state. Application architects at the enterprise level, in turn, are responsible for designing the solutions defined by the solutions architect. Application architects are more immersed in the technical specifications of existing systems, and contribute insights and the technical design decisions that facilitate the implementation of proposed solutions and standards (Blechar & Robertson Pt 1, 2010).

The outputs of the enterprise-level solutions and application architects are the frameworks within which the project-level solutions and application architects work. Project-level solutions architects assure that project work is carried out in accordance with the standards and solutions guidance provided by the enterprise-level solutions architects, but with a focus on the project to which he or she is assigned. Project-level solutions architects also assure that the right resources are in place to carry out implementation activities (Blechar & Robertson Pt 2, 2010). Project-level application architects carry out the work of designing reusable application interfaces and software services, such as those used by organizations following the SOA development paradigm (Blechar & Robertson Pt 3, 2010). Application architects work closely with business analysts and other subject matter experts to assure that security, business rules and business requirements are well understood among all stakeholders and project team participants.

This all assumes, of course, that the organization is sufficiently resourced and technologically sophisticated to require such services. As a matter of practicality, a small-to-medium enterprise may not break these roles out as described.


Blechar, M., & Robertson, B. (2010). Application Architecture Overview, Part 1: General Context and Scope. Gartner Database. Article ID: G00208946

Blechar, M., & Robertson, B. (2010). Application Architecture Overview, Part 2: Enterprise-Level Scope and Roles. Gartner Database. Article ID: G00208666

Blechar, M., & Robertson, B. (2010). Application Architecture Overview, Part 3: Project-Level Scope and Roles. Gartner Database. Article ID: G00208668

Topic 1 – Digital Disruption and Stack Overview; Cloud Technologies’ Influence

Post 1 – Application Architecture and Microservices

A colleague recently encouraged me to investigate microservices as an emerging trend. This week’s reading assignment gave me just that opportunity. Having read Microservices: An Application Revolution Powered by the Cloud and 4 Ways Docker Fundamentally Changes Application Development, I came away with the powerful impression that software development has made an evolutionary leap.

My organization’s IT department has recently begun to experiment with Agile software development and, more holistically, has taken steps to evolve into a DevOps mode of operation. This change has been purposefully gradual, with hiccups and new problems encountered along the way. It has been a learning experience nonetheless, as process reviews are carried out at regular intervals to discuss and document lessons learned. However, it occurred to me that our reliance on monolithic application models may become our next challenge to overcome.

The typical software development project in my world closely follows the monolithic application model: a three-tier architecture separated into Web, Business Logic and Data components that are load balanced and pre-scaled to support anticipated peak loads (Russinovich, 2016). This all seemed ideal until we started moving applications to third-party clouds. A great deal of integration complexity must now be tamed.

This is where the adoption of microservices may offer relief on many fronts. First, as we follow the DevOps path, incorporating a container technology such as Docker will facilitate closer collaboration between the development and operations groups (Carlson, 2014). Utilizing a containerized application architecture will help realize the goal of continuous integration (especially given the newfound reliance on cloud applications), and will address scaling issues at the same time.

The separation of applications from the infrastructure supporting them will bring other benefits. For one, microservices can improve software performance by minimizing network overhead and complexity (Russinovich, 2016). Application testing will be simplified, since code releases will be scoped to the target microservices, giving testers a segregation-of-duties contract that lets them focus on newly released code and significantly reduces the need for broad regression testing. The scaling abilities of a microservices architecture will also reduce performance risks due to user loads.
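To give a feel for how small a single-responsibility service can be, here is a hedged sketch of a hypothetical “pricing” microservice built with nothing but Python’s standard library. The service name, route, and response shape are all invented for illustration; a real service would add real lookup logic, error handling, and health checks.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PriceHandler(BaseHTTPRequestHandler):
    """One narrowly scoped responsibility: answer price lookups as JSON."""
    def do_GET(self):
        # Treat the request path as a SKU (illustrative stand-in for real lookup logic).
        body = json.dumps({"sku": self.path.lstrip("/"), "price": 9.99}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def start_service(port=0):
    """Start the service on a background thread; port 0 lets the OS pick one."""
    server = HTTPServer(("127.0.0.1", port), PriceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def get_price(base_url, sku):
    """What a consuming service would do: a plain HTTP call, no shared code."""
    with urllib.request.urlopen(f"{base_url}/{sku}") as resp:
        return json.loads(resp.read())
```

The point of the sketch is the boundary: the consumer knows only a URL and a JSON contract, which is exactly what lets each microservice be containerized, released, and scaled independently of the others.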

The upsides of adopting a microservices architecture seem to be a good fit in this context. This is a theme that I’m sure to revisit over the course of this semester.


Carlson, L. (2014, September 18). 4 ways Docker fundamentally changes application development. Retrieved August 28, 2017, from

Russinovich, M. (2016, March 17). Microservices: An application revolution powered by the cloud. Retrieved August 28, 2017, from


Post 2 – Digital Disruption in the Facilities Maintenance Industry

This week I read several articles on digital disruption, and felt that to put it into a comfortable context for my own reflection, I would frame it in terms of the industry I currently work in. The facilities maintenance industry is on the precipice of digital disruption through IoT service augmentation. There are now inexpensive devices capable of monitoring building usage patterns, and even facilities equipment usage and health metrics, that will reduce costs through informed targeting of activities. Examples of their use might be to monitor foot traffic in strategic areas for targeted cleaning or security attention. Washroom consumables and even wastebasket bin levels can be tracked to optimize operations and inform planning. With this knowledge, resources can be more strategically positioned, and overall labor costs scaled to the right fit for the job (France, 2016).
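A simple sketch of the kind of logic such a system might run (the sensor locations and the fill threshold are hypothetical): dispatch crews only to the bins that actually need attention, rather than on a fixed route.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """One reading from a hypothetical fill-level sensor (0.0 empty .. 1.0 full)."""
    location: str
    fill_level: float

def plan_service_stops(readings, threshold=0.8):
    """Return only the locations whose bins exceed the fill threshold,
    so labor is targeted by measured need instead of a fixed schedule."""
    return [r.location for r in readings if r.fill_level >= threshold]
```

Even this trivial rule illustrates the disruption: the scarce resource (labor) is allocated from live sensor data, and the same pattern extends to equipment health metrics and foot-traffic counts.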

However, these technologies have not yet been rolled out at a large scale, so there is an opening for a significant market-share grab by a bold innovator. Failure to take advantage of this known, but not yet fully exploited, IoT innovation will force incumbents to react to this potential digital disruption rather than lead it. As outlined by VanZeebroeck & Bughin, incumbent reactions take on two dimensions: 1) the response to the new digital entrant and the digital disruption itself, and 2) the response to the actions of other incumbents (VanZeebroeck & Bughin, 2017).

The largest facilities management organizations, such as Aramark, ABM Industries, and Sodexo Group, are long-term multi-channel organizations that have the benefit of persistent consumer behaviors (Willmott & Olanrewaju, 2013). Digital disruption would have a profound impact on these incumbents because it would be an industry first and, with no industry precedent, a potentially painful and expensive endeavor to endure. Rather than an addition to existing capabilities, this disruptive effort must be driven by a well-defined corporate strategy that will fully enable the new mode of operations (VanZeebroeck & Bughin, 2017).

The first entrant has a critical advantage: it can prepare and execute a comprehensive business and technical architectural re-engineering effort designed to support all aspects of IoT: implementation, data collection and analysis. This is no small undertaking, and it would take an incumbent significant time to react. New capabilities would have to be studied, an emergency response investment strategy devised, and a technical architecture implemented to support the influx of data, along with big data analysis tools and bolstered technical know-how. New and existing business processes would have to be fully integrated so that the new processes could be executed concurrently with the existing, manual business processes.

The lesson is clear. It is much better to be a new digital entrant than to play catch-up.


France, A. (2016, December 15). The Internet of Things and the Big Brother of Clean. Retrieved August 28, 2017, from

VanZeebroeck, N., & Bughin, J. (2017). The best response to digital disruption. MIT Sloan Management Review, 58(4), n/a-86. Retrieved from

Willmott, P., Olanrewaju, T. (2013, November). Finding your digital sweet spot. Retrieved August 28, 2017, from


Post 3 – The Problems with SaaS

Moving ERP to the cloud will resolve all problems with scale, reduce maintenance, upgrade and operating costs, and reduce software customization. That and, darn it, everyone else is doing it too!

Not so fast.

This may not be the universal experience of moving ERP to the cloud. There is a great deal of vendor hype that makes moving an expensive and labor-intensive ERP to the cloud seem appealing. However, there are a number of SaaS myths and misperceptions that might lead organizations to hasty, regrettable decisions if they don’t carefully consider their business strategy and its suitability to a cloud-based ERP.

The reality of moving ERP to the cloud is that it may require more sacrifices than an enterprise initially anticipates. For one, it is a strategic risk for an organization to hand over its data and business-critical systems to be managed, maintained and secured by a third party (Elmonem, et al., 2016). Organizations would be prudent to do extensive research and risk assessment around data loss. Does the cloud ERP vendor have sufficient disaster recovery contingencies in place? Might organizational data be confiscated in an unrelated law-enforcement case?

There are other issues to consider. Organizations must be working towards, or willing to accept, some amount of business process standardization. If a great deal of customization is required, the system becomes less maintainable in a cloud context, and costs may rise enough to offset any financial benefits. Moreover, any integration with a cloud-based ERP could entail extensive rework of existing business and information systems. The cloud system’s software architecture will be dictated by the vendor, so IT flexibility is required.

Most important, however, is that the majority of the time spent on ERP systems goes to process design, organizational change management, training, data migration and testing (Hestermann & Rayner, 2015). For this reason, a cloud-based implementation may not offer all of the benefits that might be hoped for. For small to mid-sized organizations, however, a cloud-based ERP may offer affordable access to a sophisticated ERP tool-set that, until the advent of the cloud, might otherwise have been the domain of large, deep-pocketed organizations.


Elmonem, M. A., Nasr, E. S., & Geith, M. H. (2016). Benefits and challenges of cloud ERP systems – A systematic literature review. Future Computing and Informatics Journal, 1(1-2), 1-9. doi:10.1016/j.fcij.2017.03.003

Hestermann, C., & Rayner, N. (2015). The Top 10 SaaS ERP Myths. Gartner Reports. Retrieved August 29, 2017. Document ID: G002894654