Topic 4 b) – Technology Infrastructure Architecture

This article presents nine strategic technology trends for 2023. Of the nine, four interest me, and I hope to use them to sharpen our competitive edge in the coming year.

  1. Digital Immune System: A strong digital immune system can pave the way for a less vulnerable digital infrastructure by proactively identifying and addressing threats. This will not only safeguard operations but also enhance customer trust in the brand.
  2. Applied Observability: Applying observability across the organization’s systems and infrastructure can deepen understanding and enable proactive issue resolution. This, in turn, will help improve the overall customer experience in online shopping.
  3. Platform Engineering: The organization can realize the benefits of Platform Engineering through the creation of scalable and sustainable digital platforms. This has the potential to enhance customer interaction, streamline operations, and foster innovation.
  4. Adaptive AI: The retail organization stands to benefit from the implementation of Adaptive AI systems, which can offer highly personalized shopping experiences and improve data-driven decision-making, enhancing business operations.

=====================================================

I have heard of “Platform Engineering” (PE) many times and was never sure where it stood among Software Engineering (SWE), DevOps Engineering (DevOps), and Full-Stack SWE. This YouTuber provided the historical background of why PE became imperative to developing the application architecture.

Platform engineers are like builders who work behind the wall to meet “nonfunctional requirements.” The role appeals to me because I could set standards, such as which tools, technologies, and protocols other engineers should follow.

At least no one will tell me “what to do”.

 

Topic 4 a) – Technology Infrastructure Architecture

Ambition is expensive, very much so.

I am always learning new concepts, and this time it was “Digital Ambition.” The term was not very clear at first, but it refers to an organization’s strategic vision and goals for leveraging digital technologies and capabilities to drive significant improvement, growth, and innovation within the business.

The same article discusses three levels of ripple effects when the value proposition is changed.

  1. Creating a New Value Proposition:
    • Requires a new business model and capabilities to deliver the changed value proposition, a new operating model and resources aligned toward service delivery, and a shift in the financial model to balance short-term profitability and long-term scalability.
  2. Expanding the Value Proposition:
    • Necessitates changes to the business and operating models to accommodate an expanded value proposition, development of new capabilities and resources to deliver the expanded proposition, and adjustment of the financial model and incentives to align with the new value proposition.
  3. Improving the Value Proposition:
    • Focuses on enhancing operational efficiency without changing the business model, the introduction of new capabilities such as automation and intelligence, and improvement of the operating model and processes to support the enhanced value proposition.

I am excited about digital transformation, but it will also require adjustments to the current value proposition. I will be very cautious, because even improving the value proposition can send a wave through the core business model and capabilities.

It is essential to have ambition, but at the same time, I must also open my heart and listen to advice to keep people from being swept away and perishing in the waves.

Topic 3 b) Looking for an Insightful AI

Data, Pattern, and Meaning (of life)

I’ve learned a few exciting nuances of data analytics through my experience with Python Pandas and Jupyter Notebook. Suddenly, a bunch of numbers transformed into a story of past, present, and future.
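To make that concrete, here is the kind of toy exercise I mean, in Pandas (the figures and column names are invented for illustration):

```python
import pandas as pd

# Hypothetical monthly revenue figures -- invented for illustration only.
sales = pd.DataFrame({
    "month": pd.date_range("2023-01-01", periods=6, freq="MS"),
    "revenue": [120, 135, 128, 150, 162, 171],
})

# Past and present: summary statistics of what has already happened.
print(sales["revenue"].describe())

# Future: a naive projection from the average month-over-month change.
avg_growth = sales["revenue"].diff().mean()
print(f"Naive next-month estimate: {sales['revenue'].iloc[-1] + avg_growth:.0f}")
```

A handful of lines, and the numbers already tell a story.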

In this article, over ten different roles are discussed in relation to a successful data-driven business architecture. Some of the notable roles are:

Data/AI Engineer: Data/AI Engineers build and manage data pipelines, ensuring the availability of relevant data for various organizational roles. They are responsible for curating data sets, operationalizing data delivery, and maintaining data governance and security compliance.

Data Scientist: Data Scientists model business problems, discover insights through quantitative disciplines, and use visualization techniques to present findings. They contribute to the development of the organization’s data infrastructure and provide insights for decision-making processes, often through predictive and prescriptive analytics.

Artificial Intelligence/Machine Learning Developer: AI/ML Developers infuse applications with AI capabilities, integrating and deploying AI models developed by data scientists or provided by service providers. They are skilled in data collection and preparation for model training, API management, and containerization.

Insight, Insight, Insight

All three roles require drawing insights from data to varying degrees, but I would say the Data Scientist role is the most intensively focused on gaining insights. Data Scientists use advanced statistical, algorithmic, and data-mining techniques to model complex business problems, discover insights, and present these findings for decision-making. They specialize in transforming data into actionable knowledge, which often involves creating predictive or prescriptive models to guide business strategies.

This article discusses two different mindsets in data analytics practice.

Deductive reasoning is a top-down approach. Professionals using this approach typically start with a clear understanding of the organization’s objectives and strategy, formulate performance indicators, and then ask specific business questions. They design a data model structured to answer these questions, gather and load the necessary data into that model, and then use this model to answer the questions. This approach is rigorous and systematic, resulting in a solid strategy review that can validate if the organization is on the right track.
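As a minimal sketch of this top-down style (the KPI, targets, and figures are all invented), the question is fixed first, and the data model exists only to answer it:

```python
import pandas as pd

# The business question is decided in advance:
# "Did each region hit its quarterly revenue target?"
facts = pd.DataFrame({
    "region":  ["East", "West", "East", "West"],
    "quarter": ["Q1", "Q1", "Q2", "Q2"],
    "revenue": [410, 380, 455, 401],
})
targets = {"East": 400, "West": 400}

# The data model is shaped specifically to answer that question.
kpi = facts.groupby(["region", "quarter"], as_index=False)["revenue"].sum()
kpi["hit_target"] = kpi["revenue"] >= kpi["region"].map(targets)
print(kpi)
```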

Inductive reasoning, on the other hand, is a bottom-up approach. It begins with sets of data, with the goal of discovering unknown insights within them. This approach does not start with a specific question or a rigid data model, but rather it explores the data using various types of analytics, looking for meaningful results. The results can include noise, obvious findings, and new insights, often emerging from the combination of different types of data. Data science and new styles of data management, such as data lakes, utilize this approach.
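A matching sketch of the bottom-up style (columns and values again invented): no question is fixed in advance; the data is simply scanned for structure, and some of what surfaces will be noise or the obvious:

```python
import pandas as pd

# Whatever data happens to exist, loaded as-is.
raw = pd.DataFrame({
    "web_visits":    [100, 150, 130, 170, 160, 210],
    "support_calls": [30, 28, 35, 22, 25, 18],
    "revenue":       [120, 135, 128, 150, 162, 171],
})

# A correlation scan may surface relationships nobody thought to ask about.
print(raw.corr().round(2))
```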

Becoming philosophical, all of a sudden.

During my college years, I was attracted to a particular branch of philosophy: Existentialism. It posits that existence precedes essence, suggesting that we exist first and only later define our essence or meaning through our actions.

I realized Inductive Reasoning aligns more closely with the existentialist viewpoint. Here, existence (data or evidence) precedes essence (conclusions or theories). Inductive reasoning starts with specific observations or data – raw, undefined existence. Through analysis of this data, patterns are observed, and conclusions are drawn, creating the “essence” or meaning. This reflects the existentialist view of forging our essence through our experiences or actions. We exist first, experience life (gather and analyze data), and from those experiences, derive our essence or meaning (conclusions or theories).

Insightful AI for Meaningful Interpretation

Essentially, we are looking for hidden meanings in data. An insightful data scientist will notice a complex pattern and pull all the evidence together to grant it significance. Is AI capable of assigning meaning to data? And, to that extent, will it convey purpose to an individual life once it has collected data from five billion people on Earth?

Would you mind sharing the answer with me?

 

Topic 3 a) Data and Gravity

Gravity in Data

Data Gravity: It was already there, and it has been there since creation. It just seemed too natural to notice its presence. Reading this article, I came across the concept of data gravity for the first time, and I saw why I had so many problems managing data, particularly migrating it between different geographical locations, technologies, and cloud environments.

Just as a planet’s gravitational pull increases with its mass, attracting more objects toward it, the same concept applies to data. As data accumulates and its “mass” grows within a particular environment (such as a data center or cloud storage), it attracts more services and applications. This is because those services and applications often perform better when they are close to the data they need to access, just as objects in space move faster as they approach a planet under its increasing gravitational pull.

The factors that accelerate this “gravitational pull” in a data context are latency and throughput. Latency is the delay before a transfer of data begins following an instruction for its transfer, and throughput is the amount of data transferred in a given amount of time. Lower latency and higher throughput mean faster data access and processing, making services and applications more effective and efficient. This can be visualized as services and applications accelerating toward the data, moving faster the closer they get.
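A back-of-the-envelope calculation (all numbers invented, and latency simplified to a single fixed delay) shows how strongly these two factors reward proximity to the data:

```python
def transfer_time(size_gb: float, latency_ms: float, throughput_gbps: float) -> float:
    """Rough transfer time in seconds: a fixed latency plus size over throughput."""
    return latency_ms / 1000 + (size_gb * 8) / throughput_gbps

# Hypothetical comparison: an app colocated with the data vs. one a region away.
print(f"Local:  {transfer_time(10, latency_ms=1,  throughput_gbps=10):.1f} s")
print(f"Remote: {transfer_time(10, latency_ms=80, throughput_gbps=1):.1f} s")
```

The closer and better-connected the application, the faster it “falls” toward the data.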

As more data accumulates in a given environment, it becomes harder to move, owing to factors such as the cost and time of transfer, bandwidth limits, and the growing number of applications that depend on it.

Overcoming Gravity

This article gave me a few ideas on how to overcome data gravity. A Data Hub approach can potentially counteract it by decentralizing data processing.

Instead of bringing all data to a central repository for processing (which can introduce latency and strain on network resources), the Data Hub architecture can bring the processing to the data, creating integrated views of the data where and when needed. This increases scalability and operability and aligns with the shift in data architectures from collecting data to connecting data.
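A toy sketch of the idea (the hubs and figures are hypothetical): each location computes its own small aggregate next to the data, and only the aggregates, not the raw records, cross the network:

```python
# Each "hub" holds its own records locally.
hub_data = {
    "us-east":  [4.1, 3.8, 5.0],
    "eu-west":  [2.9, 3.3],
    "ap-south": [4.6, 4.4, 4.9, 5.1],
}

def local_aggregate(records):
    """Runs next to the data; only (sum, count) crosses the network."""
    return sum(records), len(records)

# The integrated view combines small aggregates instead of moving raw data.
parts = [local_aggregate(records) for records in hub_data.values()]
total, count = (sum(values) for values in zip(*parts))
print(f"Global average from {count} records: {total / count:.2f}")
```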

 

It is probably not 100% feasible to construct a data-gravity-free architecture. The job of the data architect is to redirect this undeniable force toward stability and interconnection while minimizing the pull that keeps data from moving without hindrance.

Topic 2 b) – Systems Development & Integration

Custom Built or Pre-Packaged API?

I felt the power, as if I had everything I needed, when I first built a customized API with Python 3 during my Cisco DevNet training. Without an API, my program was useless, since all the data I needed to process was external and out of reach.

System integration with APIs plays a crucial role during system development by enabling disparate software applications, systems, and services to communicate and work together. Specifically, APIs define the methods and data formats that different software components should use for interaction, essentially serving as a bridge between different software systems.
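A minimal sketch of that bridge, using only Python’s standard library (the endpoint and payload are invented for illustration): the API fixes the method (GET), the path, and the data format (JSON), and any system that speaks HTTP can interact with it:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class InventoryAPI(BaseHTTPRequestHandler):
    """A one-endpoint API: GET /stock returns a JSON document."""

    def do_GET(self):
        if self.path == "/stock":
            body = json.dumps({"sku": "ABC-123", "on_hand": 42}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), InventoryAPI).serve_forever()
```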

 

Integrating SaaS applications like ServiceNow or Salesforce into an existing platform typically requires the use of APIs, along with, potentially, middleware platforms or integration tools. Beyond the vendor-specific API toolkits, there is a wide range of commercial integration platforms to choose from:

  • MuleSoft Anypoint Platform: MuleSoft is owned by Salesforce and is deeply integrated with it, but it can also be used to connect to other applications like ServiceNow.
  • Dell Boomi: Boomi is a multi-tenant platform that provides cloud-based integration, API management, and Master Data Management services. It supports integration with a wide range of applications including ServiceNow and Salesforce.
  • Jitterbit: Jitterbit rapidly connects SaaS, on-premises, and cloud applications and can infuse artificial intelligence into business processes.

 

Now we can even build self-integrating services! 

A self-integrating application is an application that uses a combination of automated service discovery, metadata extraction and mapping, and machine learning to integrate itself into an existing application portfolio with minimal human interaction. It leverages technologies like machine learning, natural language processing, and AI to automate the integration process, reducing the need for manual labor and potentially speeding up the process.
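One piece of this can be sketched in deliberately crude form (the field names are invented, and simple fuzzy string matching stands in for the ML/NLP a real self-integrating application would use): mapping a newly discovered service’s metadata onto the fields the platform already understands:

```python
import difflib

# Fields our platform already understands.
known_fields = ["customer_id", "order_total", "shipping_address"]

# Fields reported by a newly discovered service (hypothetical metadata).
discovered = ["custId", "orderTotalAmount", "shipAddr", "loyaltyTier"]

# Crude automated mapping; anything without a close match is escalated.
for field in discovered:
    match = difflib.get_close_matches(field.lower(), known_fields, n=1, cutoff=0.4)
    print(f"{field:18} -> {match[0] if match else 'needs human review'}")
```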

 

Self-integration is one of the modern integration trends to weigh when devising an integration strategy, and these trends can be adapted to system development for Azure Stack HCI.

  1. Enable Self-Service Use of Integration Platforms: As part of Azure Stack HCI system development, the new trends suggest shifting from a centralized model to a self-service model. This allows more agility and faster delivery of integration use cases by involving various roles in the integration process under an Integration Strategy Empowerment Team (ISET).

  2. Reuse Existing APIs: It’s suggested to avoid creating new APIs if existing ones can solve the integration issue. API-led integration is crucial: APIs are essential for data exchange, but they also require integration for data to be sent across interfaces.

  3. Hybrid Integration Platform (HIP) for Simplifying Hybrid and Multicloud Integration Scenarios: The article encourages using a HIP to address the challenges posed by hybrid and multicloud environments. With Azure Stack HCI, I should be able to leverage hybrid deployment, hybrid patterns, hybrid roles, and hybrid endpoints.
  4. Limited Use of SaaS-Embedded Integration Tools: While the embedded integration features in SaaS offerings can provide quick solutions, it’s advised to use them only when they significantly reduce time to value. Combining Azure Stack HCI with tools embedded in various SaaS applications might leave monitoring fragmented, escalate costs, and duplicate skills.

Nervous System vs. API

I see an API as a human nervous system. The nervous system responds to changes both inside and outside the body, sending signals to elicit appropriate responses. Similarly, an API can detect changes in the data or state of one application and communicate those changes to another, triggering appropriate responses.
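The reflex is easy to sketch (all names invented): one system “fires a signal” when its state changes, and the subscribed systems respond:

```python
# A state change in one system triggers responses in others,
# the way a nerve impulse elicits a reflex.
subscribers = []

def on_stock_change(callback):
    """Register a 'nerve ending' that reacts to stock changes."""
    subscribers.append(callback)
    return callback

@on_stock_change
def notify_warehouse(sku, on_hand):
    print(f"warehouse: restock check for {sku} (on hand: {on_hand})")

@on_stock_change
def update_storefront(sku, on_hand):
    print(f"storefront: {sku} is {'in stock' if on_hand else 'sold out'}")

def stock_changed(sku, on_hand):
    """The impulse: broadcast the change to every subscriber."""
    for callback in subscribers:
        callback(sku, on_hand)

stock_changed("ABC-123", 0)
```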

Just as I maintain my peripheral nerves and central nervous system by following new exercise trends, I should protect and improve my APIs.

Topic 2 a) – Application Architecture Layer

New Trends, Cloud Services, and BODEA

Are the new trends finally here?

While I was reading the article, I sensed that most of the major cloud service providers, including Azure and AWS, are already delivering these services. To verify my assumption and validate the user experience, I researched the current SaaS architecture on Azure and AWS.

 

1. Mesh App and Service Architecture (MASA): MASA’s flexibility is paramount in handling the fast-paced evolution of Systems of Engagement. By utilizing Azure Kubernetes Service and Azure Service Fabric, organizations can form an adaptive system that supports the proliferation of engagement applications, enhancing both employee productivity and the speed of business processes.

 

2. API Platform: With the increasing number of SaaS applications and their integral role in business operations, managing the interactions between these systems becomes critical. Amazon API Gateway provides a secure and efficient way to handle the APIs that connect Systems of Record and Systems of Engagement. This harmonizes the interactions between the two types of systems, leading to better integration and cohesion.

 

3. Event Processing: The necessity of speed in business and the emphasis on employee productivity underscore the need for real-time event processing and analytics. Azure Event Grid and Amazon Kinesis Data Analytics support rapid action in response to real-time events, minimizing latency and facilitating business operations.
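Product specifics aside, the underlying pattern can be sketched in a few lines (the events and the rule are invented): act on each event as it arrives rather than waiting for a batch:

```python
# A stand-in for a live stream from a service like Event Grid or Kinesis.
events = [
    {"user": "a", "action": "add_to_cart"},
    {"user": "b", "action": "checkout"},
]

def handle(event):
    """React immediately -- e.g., queue a fraud check on every checkout."""
    if event["action"] == "checkout":
        print(f"real-time: fraud check queued for user {event['user']}")

# A real consumer would loop over the live stream instead of a list.
for event in events:
    handle(event)
```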

An Enterprise is an outcome-sensitive organization. When integrating these trends, it’s crucial to have a clear understanding of the expected business outcomes, whether it’s increasing operational efficiency, improving customer experiences, or enabling new revenue streams: Business Outcome Driven Enterprise Architecture.

 

Finally, I ask myself as a technology solution architect “What skills do I still need to develop to deliver ideas that will lead to these holistic and tangible business capabilities?” This article indicates that I have a considerable amount of work ahead of me, which I agree with.

Top Three Skills that I need to improve:

  1. Technical Skills (Hard Skills): Solution and technology architects need a solid foundation of technical skills. These include proficiency in areas such as business architecture, solutions architecture, design and integrations, cloud solutions, infrastructure, networking, and security.

 

  2. Communication Skills (Soft Skills): The article highlights the importance of soft skills, particularly communication, for solutions and technology architects. These professionals need to articulate complex technical concepts clearly to different stakeholders. They also need leadership, collaboration, influencing, and presentation skills to drive project success. Soft skills such as creativity, verbal communication, and relationship management also enhance their contribution to agile team efforts.

  3. Strategic and Business Skills: While heavily technical, the role of a solutions and technology architect also requires strategic thinking and an understanding of business processes. The ability to align technical solutions with business goals is key. Skills like business process understanding, planning, and resourcing are essential.

To my fellow Solution Architects: “I do not feel that I have reached my destination yet. But I forget those things that are behind me, and I strive to achieve what is ahead.”