What’s the ROI?

Evaluating any technology can be challenging in a corporate environment, but it’s even more challenging when considering emerging technologies.

Why is this the case? It’s largely because outcome predictability is normally one of the key decision-making criteria. There is an expectation that before an investment is made, the outcome is known. That could manifest itself in terms of financial benefits, productivity, or customer satisfaction.

With emerging technologies, it is sometimes necessary to take a “leap of faith” and move forward without having all of the answers. In larger organizations, this often creates a level of anxiety that inhibits progress.

There is a balance that needs to be struck when evaluating these types of technologies and investments. Through continued technology education, organizations will become more comfortable making these types of trade-off decisions.

To Gen AI or not to Gen AI?

It’s been fascinating to see how quickly Generative AI has entered our collective consciousness. In a matter of months, it has become one of the hottest buzzwords in technology, and it shows no signs of slowing down. To be fair, the implications and opportunities of Gen AI are tremendous. But what I’ve found so interesting is how a large organization such as mine has started to embrace it.

There are two paths that we seem to be following when it comes to Gen AI: guidelines for how employees can leverage it internally, and how it can be leveraged in our products and services. For internal users, there has been significant effort to drive awareness and education. Guidelines have been established for what can and can’t be done to protect company IP. While it’s still very early, there are strong indicators that it will be a game-changer for internal productivity.

On the external side, the key question being asked is how Gen AI can enhance our products and services. There are many considerations and trade-offs to weigh, as the longer-term implications of employing some of this technology are unknown in a lot of use cases. With that said, I’m excited to see how we can drive more customer value through this technology and shorten the overall time-to-value equation.

Architecting for Success

One of the challenges that I have found in advancing Enterprise Architecture in my organization is the confusion regarding different types of architecture roles. Some of the readings from this module speak to this very challenge, with confusion regarding the difference between solution architecture, business architecture, and enterprise architecture. As EA continues to evolve, it is important to find ways to move past these challenges and drive clarity across the enterprise.

The challenge partially arises from the distributed nature of work within the organization. Because development activities are separated amongst distinct Centers of Excellence, each team has a desire to “own” its respective architecture. This leads to divergent solutions and, often, redundant effort. Too often, these gaps do not reveal themselves until late in the process, which can result in a significant amount of rework.

To help address this, we have already started to leverage the Enterprise Solution Architecture (ESA) principles around solutions, patterns, and portfolios.

  • Solutions – Developing holistic solutions requires us to work not only across our COEs, but also our shared service towers such as infrastructure and cybersecurity. We are starting to look at ways to deploy more integrated architectural design reviews to bring everyone to the table and identify potential areas of concern sooner.
  • Patterns – In the integration space, patterns have become an important component of how we look at opportunities to drive API reusability. With the volume of point-to-point interfaces that exist in our environment, patterns are a way for us to identify opportunities to develop once but use many times.
  • Portfolios – From a portfolio perspective, my organization already leverages the Gartner TIME model (Tolerate, Invest, Migrate, Eliminate) to drive an understanding of the appropriate disposition for each application; a simple sketch of this classification follows the list. This needs to be extended to our infrastructure footprint to drive a more holistic and consistent approach.
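
To make the portfolio point more concrete, here is a minimal sketch of how an application could be mapped to a TIME disposition based on business value and technical fitness scores. The scoring scale, thresholds, and application names are illustrative assumptions, not our actual model.

```python
from dataclasses import dataclass
from enum import Enum


class TimeDisposition(Enum):
    TOLERATE = "Tolerate"    # low business value, high technical fitness
    INVEST = "Invest"        # high business value, high technical fitness
    MIGRATE = "Migrate"      # high business value, low technical fitness
    ELIMINATE = "Eliminate"  # low business value, low technical fitness


@dataclass
class Application:
    name: str
    business_value: int     # 1 (low) to 10 (high) - illustrative scale
    technical_fitness: int  # 1 (low) to 10 (high) - illustrative scale


def classify(app: Application, threshold: int = 5) -> TimeDisposition:
    """Map an application to a TIME quadrant using simple score thresholds."""
    high_value = app.business_value > threshold
    high_fitness = app.technical_fitness > threshold
    if high_value and high_fitness:
        return TimeDisposition.INVEST
    if high_value:
        return TimeDisposition.MIGRATE
    if high_fitness:
        return TimeDisposition.TOLERATE
    return TimeDisposition.ELIMINATE


if __name__ == "__main__":
    # Hypothetical portfolio entries
    portfolio = [
        Application("Legacy Reporting", business_value=3, technical_fitness=2),
        Application("Customer Portal", business_value=9, technical_fitness=8),
    ]
    for app in portfolio:
        print(f"{app.name}: {classify(app).value}")
```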

While we are early in this journey, we believe these steps will help us drive better solutions for the business, and greater clarity on the role of EA within the organization.

Business Capabilities: Lessons Learned

Business capability modeling is a technique that my organization has been leveraging to better articulate the as-is landscape of our business so that we can develop stronger future-state models. Some of the learnings we’ve had in the process include:

  • Focus Efforts – Due to the complexity of the business landscape, it is important not to try to “boil the ocean” and address all capabilities at the same time. We have chosen to focus on specific portions of the business, work through the respective capabilities, and move forward as we drive better definition.
  • Business Partnership – As recommended by the readings, we are engaging with our business partners in this journey, but have found that the EA organization needs to lead the efforts. Close alignment with the business is critical to ensure accuracy, but it is also important to ensure ongoing buy-in.
  • Measure Progress – It is important to find ways to measure progress on the journey. We have chosen to establish different levels of business capability definition maturity and plot progress as we move forward, as sketched below.
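
As a simple illustration of the “measure progress” point, the sketch below tracks a hypothetical maturity level per business capability and summarizes the distribution. The maturity levels and capability names are assumptions for illustration, not our actual model.

```python
from collections import Counter
from enum import IntEnum


class MaturityLevel(IntEnum):
    # Illustrative maturity scale; real definitions would be agreed with the business.
    NOT_STARTED = 0
    DRAFTED = 1
    BUSINESS_VALIDATED = 2
    MAPPED_TO_APPLICATIONS = 3


# Hypothetical snapshot of capability definition maturity
capabilities = {
    "Order Management": MaturityLevel.MAPPED_TO_APPLICATIONS,
    "Demand Planning": MaturityLevel.BUSINESS_VALIDATED,
    "Field Service": MaturityLevel.DRAFTED,
    "Warranty Management": MaturityLevel.NOT_STARTED,
}


def progress_summary(snapshot: dict) -> None:
    """Print the count of capabilities at each maturity level, plus the average."""
    counts = Counter(snapshot.values())
    for level in MaturityLevel:
        print(f"{level.name:<25} {counts.get(level, 0)}")
    average = sum(snapshot.values()) / len(snapshot)
    print(f"Average maturity: {average:.1f} / {max(MaturityLevel).value}")


progress_summary(capabilities)
```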

While we are still early in the overall process, we continue to make measurable progress and are looking forward to leveraging the increased business capability visibility and definition to drive better go-forward decisions.

Log4j – A Challenging Vulnerability to Contain!

Does anyone remember Log4j? This was a critical vulnerability in the widely used Log4j Java logging library, discovered in late 2021, that allowed attackers to execute code remotely. While it was patched a couple of weeks after it was discovered (though the variant/patch cycle repeated itself many times), the biggest challenge was identifying where the vulnerability lived in our applications.

Since Java is ever-present in both custom and off-the-shelf packages, the discovery of the vulnerability set off a mad cat-and-mouse chase through our portfolio to identify and remediate the applications that were impacted. We leveraged our application portfolio tool, LeanIX, both to identify applications with potential vulnerabilities and to keep track of remediation status.
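
As a simplified illustration of the discovery problem, here is a sketch of the kind of filesystem scan that can flag bundled log4j-core jars by version. It only checks jar file names (a real effort would also need to look inside fat jars, wars, and vendor packages, and would feed results into a portfolio tool for tracking), and the version cutoff reflects the publicly documented 2.x fix line.

```python
import re
from pathlib import Path

# 2.x versions below 2.17.1 were affected by Log4Shell or one of its follow-on CVEs.
FIXED_VERSION = (2, 17, 1)
JAR_PATTERN = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")


def scan_for_log4j(root: str) -> list:
    """Walk a directory tree and return log4j-core jars older than the fixed version."""
    findings = []
    for path in Path(root).rglob("log4j-core-*.jar"):
        match = JAR_PATTERN.search(path.name)
        # Note: pre-release versions (e.g. 2.0-beta9) would need extra handling.
        if match and tuple(int(part) for part in match.groups()) < FIXED_VERSION:
            findings.append(path)
    return findings


if __name__ == "__main__":
    for jar in scan_for_log4j("/opt/apps"):  # hypothetical application root
        print(f"Potentially vulnerable: {jar}")
```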

In the end, we were able to successfully identify and remediate the applications that were exposed to the vulnerability. But the entire exercise was a lesson in the importance of alignment between the application and cybersecurity teams, and of taking a business-focused approach to managing these types of situations. By prioritizing the most business-critical workloads and taking a more proactive approach to vulnerability management, my expectation is that we’ll be better prepared for future situations of this nature.

Managing Cloud and SaaS Risk

Reading the article about analyzing the risk dimensions of cloud and SaaS computing, I started to think about our internal processes to ensure cybersecurity compliance and risk management for cloud/SaaS providers. One of the biggest challenges, in my opinion, is getting an opportunity to complete this type of assessment early enough in the process. With how quickly SaaS applications can be spun up, it’s not always easy to know when a new application is entering the ecosystem.

To help address this, my organization has implemented a third-party assessment process. The intention is to inventory, assess, monitor, and off-board vendors to minimize organizational risk. One of the key success factors for this process is how it is tied to our procurement process. For any new providers, this assessment must be completed before a purchase order or contract can be issued. This is a control that is built into the process to ensure that we are not trying to address this critical activity after the fact, and we can clearly identify concerns and gaps in a timely fashion.
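
As a minimal sketch of the control described above, the idea is that a purchase order cannot be created until the vendor’s assessment is approved. The status values and function names are hypothetical, not our actual procurement system.

```python
from enum import Enum


class AssessmentStatus(Enum):
    NOT_STARTED = "not_started"
    IN_PROGRESS = "in_progress"
    APPROVED = "approved"
    REJECTED = "rejected"


class VendorNotApprovedError(Exception):
    """Raised when a purchase order is requested for an unassessed or rejected vendor."""


def create_purchase_order(vendor: str, assessments: dict) -> str:
    """Gate purchase-order creation on a completed third-party risk assessment."""
    status = assessments.get(vendor, AssessmentStatus.NOT_STARTED)
    if status is not AssessmentStatus.APPROVED:
        raise VendorNotApprovedError(
            f"Cannot issue PO for {vendor}: assessment status is {status.value}"
        )
    return f"PO-{vendor.upper().replace(' ', '')[:8]}-0001"  # placeholder PO identifier


# Hypothetical usage
assessments = {"Acme SaaS": AssessmentStatus.APPROVED, "NewTool": AssessmentStatus.IN_PROGRESS}
print(create_purchase_order("Acme SaaS", assessments))
# create_purchase_order("NewTool", assessments) would raise VendorNotApprovedError
```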

Any governance process is often met with concerns about “slowing things down.” However, it is important to have mechanisms such as this third-party assessment in place to help protect the organization and its data.

I&O Improvements – Don’t Forget the Marketing!

As I reviewed the “Five Steps Toward a Faster, Better, Cheaper I&O”, a thought dawned on me: what happens if you execute these steps, or a similar model, and still find yourself feeling as if you’re running in place? I find that a key component of demonstrating progress in this space is effective communication and promotion, the “marketing” articulated in the article. Following are some considerations to keep in mind:

  • Memories are Short – It’s amazing how easily people forget how things were in the past. In many cases, how bad things were. 😊 It’s important to take the time to understand your baseline and starting position so you can articulate progress over time.
  • Measure, Measure, Measure! – In order to articulate progress, it’s important to be able to quantify results. Establish clear metrics, whether they be cost-related or performance-focused (a minimal sketch follows this list).
  • …but Don’t Underestimate the Optics – Don’t underestimate the role that optics can play. Gaining support from key stakeholders and having them articulate how things feel from a performance perspective is sometimes as important as the metrics themselves!
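
To illustrate the measurement point, here is a minimal sketch that compares current metrics against a recorded baseline; the metric names and values are purely hypothetical.

```python
# Hypothetical baseline vs. current I&O metrics (lower is better for all three)
baseline = {"monthly_run_cost_usd": 420_000, "p1_incidents": 14, "mean_time_to_restore_hrs": 9.5}
current = {"monthly_run_cost_usd": 355_000, "p1_incidents": 6, "mean_time_to_restore_hrs": 4.0}


def improvement(before: float, after: float) -> float:
    """Percentage improvement for metrics where lower is better."""
    return (before - after) / before * 100


for metric, before in baseline.items():
    after = current[metric]
    print(f"{metric}: {before} -> {after} ({improvement(before, after):.0f}% better)")
```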

My organization has made tremendous progress in driving better outcomes in the I&O space. Effectively communicating this progress is a critical component of ensuring that success!

To Outsource or Not to Outsource?

I found the article around “Key Considerations When Thinking About Insourcing or Changing IT Service Providers” to be a very interesting and timely read, as a large part of my role over the past few years has revolved around a significant outsourcing of application managed services. My organization made the decision to outsource application support, and we have wrestled with many of the challenges outlined in the article.

Following are a few of the things I learned through this journey, which could be helpful to others looking to embark upon the same path:

  • Contractual Fortitude – I have found that prospective partners will promise just about ANYTHING during the ramp-up to a signed contract. In fact, many of these items made it into the contract as well. However, it is very important to ensure there are clear SLAs and measurements in place to hold providers accountable for delivering these services. Even in areas we thought we had covered completely, the lack of clear metrics made the terms very challenging to enforce (a sketch of such a measurement follows this list).
  • Balanced Competition – We made the decision to award work to two providers, using a similar contractual structure. The intent was to create competitive pressure and avoid putting all of our eggs into a single proverbial basket. We partially achieved this objective, but for a variety of reasons, the work was awarded in more of an 80%/20% split. As a result, I don’t think we gained all of the benefits of having multiple providers, as the provider with the smaller share has had a challenging time driving the level of cost efficiency we expect due to lack of scale.
  • Scope Changes – Things change, and sooner or later, you’ll be in a position where you need to add or remove services. It is important to have clear guidelines for these adjustments. On the incoming side, a clear cost estimation model is critical to ensure that the cost efficiencies gained through the original contract are not eroded. On the outgoing side, a clear model for shedding costs as applications are removed is critical.
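
Reflecting the contractual fortitude point above, here is a minimal sketch of the kind of SLA measurement that makes enforcement possible. The ticket data, priorities, and resolution targets are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical resolution-time SLA targets (hours) by ticket priority
SLA_TARGET_HOURS = {"P1": 4, "P2": 8, "P3": 24}


@dataclass
class Ticket:
    ticket_id: str
    priority: str
    resolution_hours: float


def sla_compliance(tickets: list, priority: str) -> float:
    """Return the percentage of tickets resolved within the SLA target for a priority."""
    in_scope = [t for t in tickets if t.priority == priority]
    if not in_scope:
        return 100.0
    met = sum(1 for t in in_scope if t.resolution_hours <= SLA_TARGET_HOURS[priority])
    return met / len(in_scope) * 100


# Hypothetical ticket sample
tickets = [
    Ticket("INC001", "P1", 3.5),
    Ticket("INC002", "P1", 6.0),
    Ticket("INC003", "P2", 7.0),
]
for priority in SLA_TARGET_HOURS:
    print(f"{priority}: {sla_compliance(tickets, priority):.0f}% within SLA")
```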

Even with some of the challenges we have encountered, the decision to outsource these services was the right one for our organization. We look forward to continuing to optimize the model as we go forward.

Creating a Center of Excellence around Data

Reading the article about “must have” data roles made me think more about how we have decided to attack data within my organization. We have created a center of excellence around analytics and automation, which is one of the foundational CoEs along with ERP, Digital Factory, Integration, and Digital Sales & Service. This structure was created, in part, to bring together our capabilities to scale outcomes in these disciplines.

The model has created both positive outcomes, as well as opportunities for improvement:

Positives

    • Leadership – Aligning Analytics & Automation (A&A) under a senior leader has demonstrated to the organization the importance of this practice and its impact on the broader organization.
    • Consolidation – Bringing together individuals across the organization who are experts in this space has provided scalability and focus.

Opportunities

    • Silo Effect – Separating COEs into distinct organizational structures has led to situations where solutions do not “play nice with others” or are built without consideration of capabilities that exist out of the box in other platforms.
    • Master Data Management – The group’s focus has tended to be on driving reporting and visibility, with not enough focus on foundational master data management and how that needs to be modeled within the organization.

Overall, the establishment of the Analytics & Automation CoE has been a game-changer for our organization, and a structure that we will continue to improve in the future.

LeanIX – Foundation for Application Data

I was recently at the LeanIX Connect Conference in New York City, where a lot of the focus was on how the platform can be leveraged to drive Enterprise Architecture initiatives. For those who are unfamiliar, LeanIX is a SaaS-based EA management tool. We began using it a few years ago, initially to catalog our application portfolio, and have since increased our maturity, continuing to document our broader IT ecosystem.

The heart of the platform is data, so I wanted to highlight some of the ways it has helped my organization drive more effective data visibility and governance:

    • 360 Degree View – LeanIX has helped integrate application data from disparate sources (such as ServiceNow, our CMDB), which has helped provide a single, unified view of our landscape (see the sketch after this list).
    • Visualization – The platform has built-in dashboarding and visualization capabilities that allow us to model complex environments in a consumable way.
    • Collaboration – We provide broad read-only capabilities to the platform, and use “crowd-sourcing” capabilities to keep data up to date by distributing the ownership across application leaders.
    • Governance – Built-in capabilities ensure that data access is controlled, and individuals can only see the information that they are permitted to access.
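
To illustrate the 360 degree view idea in a tool-agnostic way, the sketch below merges application records exported from two sources (for example, a CMDB and the EA repository) into one unified record per application. It does not use the LeanIX or ServiceNow APIs; the field names and records are assumptions for illustration.

```python
def merge_application_views(cmdb_records: list, ea_records: list) -> dict:
    """Merge per-application records from two sources into a single view, keyed by name."""
    unified = {}
    for record in cmdb_records + ea_records:
        name = record["name"]
        merged = unified.setdefault(name, {})
        # Later sources fill in fields the earlier ones were missing; existing values win.
        for field, value in record.items():
            merged.setdefault(field, value)
    return unified


# Hypothetical exports
cmdb = [{"name": "Customer Portal", "owner": "J. Smith", "hosting": "AWS"}]
ea_repo = [{"name": "Customer Portal", "lifecycle": "Active", "business_capability": "Sales"}]

for app, attributes in merge_application_views(cmdb, ea_repo).items():
    print(app, attributes)
```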

Having a platform like LeanIX has given us a foundation to build from in terms of application data. I’m looking forward to continuing to mature our capabilities so that we can leverage the platform to make better decisions.