Week 7 – Innovation, Skills and Emerging Technologies

Navi Radjou's TED Talk "Can Limited Resources Lead to Better Innovation?" is very insightful because it discusses how innovation can emerge from places, people, and situations that lack resources. He studied entrepreneurs in China, India, and South America and observed that the streets are important labs: scarcity of resources demands creativity from these people to bring solutions to their own needs and problems. He calls "frugal innovation" the approach of finding simple, cheap, and efficient solutions that are technically and economically viable enough to be adopted at scale. It also looks for what is most readily available, like mobile phones and everyday resources, to be reinvented and improved to solve new challenges.

He also contrasts this with the Silicon Valley approach, which constantly looks for the next-generation technology ("more for more") and ends up creating a disconnect between the technologies created and people's real ability to acquire these devices and technologies.

I live in Brazil, and in fact we notice that companies here usually have fewer people covering more areas and activities. In richer countries it is very common to see highly specialized IT people supporting a few activities and technologies in much more depth, while here it is more common to find roles that cover more technologies and areas.

Important principles for frugal innovation

  1. Keep it simple – easy enough to use.
  2. Do not reinvent the wheel – leverage existing and accessible resources.
  3. Think and act horizontally – companies usually scale up and centralize operations, but if you want to be agile, scale out horizontally to capture more customer demand.

Connecting with the Forbes article "16 Key Steps To Better Business Decision Making", agility and simplicity are key factors to innovate with a clear proposal; that means being inclusive and removing barriers, clarifying and simplifying problems, and putting the customer at the center.

The article also says that emotional intelligence, learning from past situations, seeking diverse perspectives, and getting input from multiple departments contribute to better choices. In the same way, understanding personal biases, creating quick experiments, and taking emotions out of decisions are essential, while suppressing ego and encouraging collaboration fosters effective decision-making.

Embracing strategic alignment, providing guidance, and leveraging AI-driven data analytics can enhance decisions, but many companies still don't know how to be really data-driven, or don't trust the insights they gather from data. As the article "Executives still mistrust insights from data and analytics" points out, the majority of business leaders lack confidence in the insights generated from data and analytics. This lack of trust stems from concerns about data quality and accuracy, incomplete or inconsistent data, a lack of understanding of data and analytics, organizational silos, previous bad experiences, fear of change, and cultural resistance. To foster trust, organizations must focus on improving data quality, governance, and integration, provide data literacy programs for executives, promote a data-driven culture, and demonstrate the value and reliability of data-driven insights through successful case studies and pilot projects.

So starting simple, fast, and with the resources you have, then scaling quickly, gathering customer feedback, and running cycles of improvement seem to be important best practices for agile innovation.

Week 5 – The Enterprise Security Architecture

Security and compliance are job zero for any company nowadays. It is important that Enterprise Architects keep security in mind across all layers, taking a defense-in-depth approach: not only protecting the borders, but designing security into every layer. The Zero Trust approach has become an important concept, especially for customers adopting hybrid and cloud solutions.

Gartner's article "Analyze the Risk Dimensions of Cloud and SaaS Computing" from 2013 discusses some challenges about security in the cloud, and I would like to revisit these challenges considering what we know now in 2023.

  • The analysis of cloud computing risks is complicated by a lack of best practices on the method and content of a cloud services risk assessment.

Cloud providers have evolved a lot in the security space. Zero Trust architecture is a security model that assumes no implicit trust, requiring verification for all users and devices attempting to access resources. It enforces strict access controls, continuous monitoring, and authentication, reducing the risk of data breaches and unauthorized access. The principle is to trust no one and always verify. In the cloud, taking a Zero Trust approach is important, and cloud services have evolved to improve identity, just-in-time access, and just-enough access; Landing Zone patterns also help to correctly define multi-environment network and governance approaches.
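The "trust no one and always verify" principle can be sketched as a per-request policy check. The sketch below is my own illustration (the names `AccessRequest`, `is_allowed`, and the policy rules are hypothetical, not any provider's API): every request is denied by default and only allowed when identity, device posture, and least-privilege permissions all pass.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool   # e.g., valid token plus MFA
    device_compliant: bool     # device posture check passed
    network_trusted: bool      # deliberately NOT sufficient on its own
    resource: str
    role: str

# Hypothetical role -> resources mapping (least privilege)
PERMISSIONS = {
    "analyst": {"reports"},
    "admin": {"reports", "billing"},
}

def is_allowed(req: AccessRequest) -> bool:
    """Deny by default; every request must re-verify identity and device."""
    if not req.user_authenticated:
        return False
    if not req.device_compliant:
        return False
    # Being on a "trusted" network never grants access by itself:
    # authorization still requires explicit, least-privilege permissions.
    return req.resource in PERMISSIONS.get(req.role, set())
```

Note that `network_trusted` is never consulted to grant access: that is the difference from a perimeter-only model, where being inside the border implies trust.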

  • Nontransparency of service providers presents challenges to security professionals charged with assessing the risk of cloud services.

Visibility and observability tools have evolved, providing audit logs of everything that happens in the cloud, from authentication through the launch of services. Service health monitoring has also become more complete, and SIEM/SOAR services have become more popular and widely adopted in the cloud space.

SIEM (Security Information and Event Management) is a technology that collects and analyzes security event data from various sources, providing real-time threat detection and incident response capabilities. Companies in the industrial space, for example, have used SIEM to gain visibility not only into IT but also into their OT networks, converging logs and signals from both IT and OT.

SOAR (Security Orchestration, Automation, and Response) is a system that streamlines incident response workflows by automating repetitive tasks and orchestrating actions across security tools, enhancing the efficiency and effectiveness of incident handling. SOAR is a great approach for the cloud because everything there can be launched or changed using scripts through Infrastructure as Code (Terraform, CloudFormation, ARM, etc.), and you can react to alarms by automating actions such as denying a firewall rule, isolating a compromised machine or environment, or automating backup and disaster recovery, even in another region in case of problems. Also, using the on-demand capability of the cloud, once an incident or something suspicious happens, it is possible to launch a brand new resource to re-establish the environment while keeping the compromised resource for deeper investigation.
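The SOAR idea of mapping alerts to automated responses can be sketched as a simple playbook dispatcher. This is an illustrative, self-contained toy (the alert types and response actions are hypothetical; in a real SOAR flow each action would call cloud or IaC APIs rather than return strings):

```python
def deny_firewall_rule(alert):
    # Real version might push an updated security-group or NACL rule.
    return f"denied rule {alert['rule_id']}"

def isolate_instance(alert):
    # Real version might move the instance into a quarantine security
    # group and snapshot it, keeping the evidence for forensics.
    return f"isolated {alert['instance_id']}, kept for investigation"

# Playbooks: alert type -> ordered list of automated actions
PLAYBOOKS = {
    "port_scan": [deny_firewall_rule],
    "compromised_host": [isolate_instance],
}

def handle_alert(alert):
    """Run every automated action registered for the alert type."""
    actions = PLAYBOOKS.get(alert["type"], [])
    return [action(alert) for action in actions]
```

The value of the pattern is that the response is codified and repeatable: adding a new automated reaction is just appending a function to a playbook list, not paging a human.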

I would argue that nowadays the cloud provides even more visibility and transparency than on-premises environments.

  • Poorly defined business requirements on the part of cloud services buyers create additional areas of potential risk.

It's important to clearly define security requirements and goals, and also to consider security as job zero in all areas, such as DevSecOps, hardening of VMs, identity and access, networking, and data visibility and accessibility, following best practices as part of every project.

The article also discusses something very important: the responsibility model, which varies according to the type of service used.

In the first column we see the dedicated IT environment, where the company is responsible for all layers, from physical to application. In the third column we see public IaaS, where the cloud provider is responsible for the physical security of data centers, the hypervisor, and the underlying network, and the customer's responsibility starts at the VM and operating system. So the customer is responsible for correctly configuring virtual networking, routing, and firewalls, and for OS updates and patching after launch. For PaaS, the cloud provider is also responsible for the OS and the platform service, and the customer is responsible for correct usage, least-privilege access, and application access. For SaaS, the customer is responsible for data, access, and identities.
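The shifting boundary described above can be captured in a few lines of code. This is a deliberately simplified sketch (the exact boundary varies by provider and service, and details like the customer still configuring virtual networking under IaaS are glossed over), just to make the pattern explicit:

```python
# Stack layers from bottom to top
LAYERS = ["physical", "hypervisor", "network", "os", "platform", "application", "data"]

# First layer the CUSTOMER owns, per service model (simplified)
CUSTOMER_STARTS_AT = {
    "on_premises": "physical",   # customer owns everything
    "iaas": "os",                # provider: physical, hypervisor, underlying network
    "paas": "application",       # provider also runs the OS and platform
    "saas": "data",              # customer keeps data, access, and identities
}

def customer_layers(model: str) -> list[str]:
    """Layers the customer is responsible for under a given service model."""
    start = LAYERS.index(CUSTOMER_STARTS_AT[model])
    return LAYERS[start:]
```

The takeaway is that the customer's slice shrinks from the bottom up as you move from on-premises to SaaS, but it never shrinks to zero: data, access, and identities always remain the customer's job.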

It's important to keep these different models in mind to monitor and approach security correctly.

Week 6 – Understanding the Emerging Business Architecture Layer

William Ulrich's article "Essence of Business Architecture" emphasizes the importance of business architecture as a tool for understanding and improving the organization, and it clarifies the role of the business architect in facilitating this process.

The text discusses the practice of business architecture and the role of the business architect, highlighting the concept of a blueprint in architecture and comparing it to business architecture.

Business architecture is a blueprint of the enterprise that provides a common understanding of the organization and aligns strategic objectives and tactical demands. The blueprint analogy is used by Ulrich to explain how blueprints in other forms of architecture (building, city, ship) provide a foundation for understanding and making changes. Business architecture allows business professionals to visualize the business from various perspectives and understand the interconnectedness of different parts.

The business architecture blueprint provides readily understood views of the enterprise to a wide range of business professionals, including executives, project planners, risk managers, and IT architecture counterparts. Executives can use business architecture to tackle complex issues that cross functional silos by understanding the cause of the problem and potential solutions.

The business architecture should be able to convey information to executives in a readily understood manner and lead to strategic decisions and proposed solutions. Further analysis and detailed understanding are required for drilling down into impacted business units, decomposing business capabilities, identifying redundancies, and understanding IT interfaces. Based on visualizations and analysis from business architecture, executives may request views of the target business architecture and a transition plan for moving towards a solution.

From this, we can find guidance on how to build a roadmap that communicates strategy and informs better IT and business decisions in Gartner's article "Create Roadmaps That Support Decision Making and Communicate Strategy Effectively", written by James McGovern.

But what is a roadmap? Still according to McGovern:

Roadmap

A roadmap is a graphical representation that is used to illustrate the milestones and deliverables required to manage change to a future state from the current state over a specific period. It:
  1. Uses time as the primary dimension
  2. Shows milestones and the deliverables that need to occur over time to achieve the future state
  3. Uses a level of abstraction appropriate for the audience and the intended purpose
  4. May also show additional influencing factors

What called my attention was the roadmap McGovern describes as a heat map, shown as an illustrative example for the oil and gas industry. It doesn't address the How, but it is very useful for showing a broad view of recommendations, also conveying the risk of not implementing through colors: red for high, yellow for medium, and green for low. The size of the circles, showing the impact and potential benefit of each identified item across the dimensions explained in the graph, is also super useful and visual. The timeline of years is also very nice, helping to provide a When view.

I thought it was a very good view to show what is most important to address and how to prioritize considering impact: a very visual and clear way to organize a roadmap.
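The prioritization logic behind such a heat map (risk color, impact size, target year) can be sketched with a simple scoring rule. This is my own illustration of one plausible ordering, not McGovern's actual method:

```python
RISK_SCORE = {"red": 3, "yellow": 2, "green": 1}  # risk of NOT implementing

def prioritize(items):
    """Sort roadmap items: highest risk first, then highest impact, then earliest year."""
    return sorted(items, key=lambda i: (-RISK_SCORE[i["risk"]], -i["impact"], i["year"]))

# Hypothetical roadmap items
items = [
    {"name": "Data platform", "risk": "yellow", "impact": 8, "year": 2025},
    {"name": "IAM modernization", "risk": "red", "impact": 6, "year": 2024},
    {"name": "Green datacenter", "risk": "green", "impact": 9, "year": 2026},
]
```

Running `prioritize(items)` puts the red, high-risk item first even though another item has a bigger impact circle, which matches the reading of the heat map: color answers "how urgent", size answers "how much benefit", and the year answers "when".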

Week 4 – The Enterprise Technology Infrastructure Architecture

Arun Chandrasekaran (2022), in Gartner's material "Compute Evolution: VMs, Containers, Serverless Which to Use When?", discussed the evolution of the compute layer. New application architectures have brought advancements at the computational level, with the introduction of concepts like application containers and serverless functions in recent years. This has enabled further virtualization of the computational layer, going beyond just virtualizing servers. As a result, the compute layer has become more application-focused, offering benefits such as agility, scalability, and automation.

Software is becoming increasingly prevalent in all industries, and modern digital businesses are using software to analyze data and develop applications to gain a competitive edge.

So how important is it to correctly choose between VMs, containers, and serverless? Is it more important to choose correctly or to move fast? To be 100% sure of when and why to use each one? Or to understand what scenarios they enable for the business?

Cloud computing has enabled startups to be born and grow. And startups, in turn, helped cloud computing grow, gain traction, and mature. Why? Because the cloud enabled these ideas to start with low IT investment, gain experience and market presence with a reduced IT team, and also scale fast and on time to meet exponentially increasing demand without losing momentum. So the cloud was key to enabling new business paradigms.

If you look at your cellphone right now, at the most common apps you use, like Netflix, Spotify, Instagram, Uber, and Airbnb, what do they have in common? They recreated the paradigm in their industries. Airbnb revolutionized the way people travel and how cities handle tourist capacity, and Uber became a concept in itself, taking to another level the discussion of owning a car versus commuting easily without the burden of maintaining one; we saw "uberization" everywhere.

So Enterprise Architects look for ways to empower the business. As stated by Ian Cox, Noah Rosenstein, and Monika Sinha (2021) in Gartner's toolkit "IT Strategy Template — Embedding Information and Technology in Business Strategy", "strategy seeks to clarify how the enterprise will compete and succeed in its chosen markets or, for the public sector, how the enterprise will achieve its mission. Enterprises should have only one strategy — the business strategy — and information and technology (I&T) must be a core part of it. This means that the effort of creating an IT strategy must shift away from creating a separate document focusing on the IT organization toward creating a set of inputs, or key chapters, that are embedded directly in the business strategy".

If we consider, for example, Nubank, a fast-growing fintech in Brazil: they used the cloud to enable their solution, treated the software and the app as part of their core business, developed a strong concept with easy-to-use apps and services, and made banking services more accessible to people. They focused on their product, first on VMs, and at some point they were spending too long waiting for EC2 instances to boot and containers to start, so adopting Kubernetes became compelling for their business, because it made sense for the agility and resiliency they were looking for. If you want to know more, see Nubank | Cloud Native Computing Foundation (cncf.io).

So Enterprise Architects are key to enabling the company to achieve its business goals and to helping drive the best choice, considering the agility the company is looking for, resiliency, and costs.

Serverless is a great way to start fast, agile, and light, as you can start from code. But as the solution's requirements evolve, for some features, modules, or systems, the flexibility of containers and their more efficient use of infrastructure can help you achieve better results. The decision to shift from VMs to containers to serverless also depends on the team's proficiency and learning curve.
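"Start from code" is exactly what makes serverless light: the unit of deployment is just a handler function. Below is a minimal sketch using the AWS Lambda-style `handler(event, context)` signature; the greeting logic is, of course, a placeholder for real business logic:

```python
import json

def handler(event, context):
    """Minimal Lambda-style handler: read the input event, do the work,
    return a response. There is no server, container image, or OS for
    the developer to manage in the function itself."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

The trade-off discussed above shows up when this grows: once you need long-running processes, custom runtimes, or tighter control over resource utilization, packaging the same logic into a container often becomes the better fit.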

Rui Zhang, Kristin Moyer, and Hung LeHong (2020), in Gartner's document "Digital Transformation Starts With Redefining Your Value Proposition", bring a very interesting case from the TRATON Group (formerly Volkswagen Truck & Bus). Historically, the company focused on selling products, so it decided to launch a digital logistics platform called RIO to connect the transportation and delivery value chain. They explain that RIO is an open, cloud-based platform that combines data about tractor units, trailers, superstructures, drivers, and orders with traffic, weather, and navigation data to provide real-time recommendations. To launch this new market segment, the idea had to be delivered fast, tested, and scaled quickly as the market demanded, and a cloud-based architecture was the enterprise technology infrastructure that enabled their intent.

So the best architecture and underlying technologies are not only a matter of the best technical decision; they also involve team skills, how fast you can learn and move, and how software can evolve and take advantage of new paradigms to achieve the objectives. Who could have imagined one year ago that everything we knew about chatbots would change so fast? Generative AI has changed the game and opened room for new use cases, and using what can benefit the business is more important than assuming your application cannot change or move.

Week 2 – Post 2 – Application Architectures and Systems Development

In week 2 we discussed application architectures, considering monolithic, service-oriented architecture (SOA), microservices, and the MASA model (Mesh App and Service Architecture).

Recently, the Amazon Prime Video team published a case called "Scaling up the Prime Video audio/video monitoring service and reducing costs by 90%", discussing how Prime Video effectively handled a growing volume of content while achieving a 90% reduction in costs by moving to a monolithic architecture, which enabled them to consolidate the various components of their system into a single cohesive unit.

In this case, the monolithic architecture was beneficial for Prime Video for a few reasons. First, by consolidating the various components into a single cohesive unit, it simplified the development, deployment, and management processes. This allowed for seamless communication and data sharing between modules, improving overall efficiency. Additionally, the monolithic architecture scaled well for this workload, handling increased volumes of content without significant performance issues. Moreover, it enabled Prime Video to leverage existing infrastructure and technologies, reducing the need for additional resources and lowering costs. Overall, the monolithic architecture proved to be a suitable choice for their audio/video monitoring service in terms of simplicity, scalability, and cost-effectiveness.

It caused a lot of discussion and noise about monolithic versus microservices. But, as always, the only certainty we have is that one size never fits all in the IT world.

While it has its advantages, such as simplicity and easier development, there are scenarios where other architectural approaches may be more appropriate.

During our class, Dave Hollar, our professor, shared a very interesting diagram from Gartner that shows the Mesh App and Service Architecture. MASA promotes the use of loosely coupled services and APIs, allowing for easier integration and reusability of components. It encourages the development of microservices and event-driven systems, enabling greater agility and responsiveness to changing business needs. By adopting MASA, organizations can build resilient, scalable, and distributed applications that adapt to evolving technology landscapes and effectively leverage cloud-native capabilities.
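The loose coupling that MASA encourages can be illustrated with a tiny in-process event bus: publishers and subscribers share only event names, never direct references to each other. This is a toy sketch of the event-driven pattern, not a production mesh:

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe bus: services register handlers for event
    names and never call each other directly."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_name, handler):
        self._handlers[event_name].append(handler)

    def publish(self, event_name, payload):
        # Every subscriber reacts independently to the same event.
        return [handler(payload) for handler in self._handlers[event_name]]

# Two independent "services" reacting to the same business event
bus = EventBus()
bus.subscribe("order_placed", lambda order: f"billing charged {order['id']}")
bus.subscribe("order_placed", lambda order: f"shipping scheduled {order['id']}")
```

The design point is that a new capability (say, a loyalty-points service) can be added by subscribing to `order_placed` without touching billing or shipping, which is exactly the agility argument for event-driven decoupling.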

In large-scale systems with complex and rapidly evolving requirements, a monolithic architecture can become cumbersome and difficult to maintain. It can hinder agility and scalability, since any change or update requires modifying the entire system. In such cases, a microservices or distributed architecture might be more suitable. I worked in the past with internet banking in a monolithic approach. After the fast growth of mobile and internet usage, the deployments became bigger and harder to test each time, and if one thing failed, everything needed to be rolled back. So breaking the system into services and more business- and feature-driven components helped expedite feature and fix launches in parallel, reducing the blast radius when something broken was delivered.

Microservices architecture allows for modular and independent services that can be developed, deployed, and scaled independently. This offers flexibility, scalability, and fault isolation, but it also introduces complexities in managing the interactions between services.

Also, with more than 20 years in the IT area, I have seen big companies with huge legacy environments that keep supporting the business. With APIs and event-driven patterns it is possible to decouple systems while still keeping some legacy and third-party applications that are more complex and require a longer time to be modernized.

The Prime Video experience teaches us that in some cases a monolith addresses the most important thing the business needs. The Prime Video team realized that the distributed approach wasn't bringing many benefits to their specific use case, a tool for audio/video quality inspection, so they packed all of the components into a single process. This eliminated the need for the S3 bucket as intermediate storage for video frames, because the data transfer now happened in memory. They kept the same components, but in a monolithic approach instead of the distributed architecture.

So here we have the good old "it depends" statement that architects use a lot. Considering the requirements of scalability, cost, the feature itself, and the goals to address is key to determining which architecture best addresses our needs.

Week 1 – Post 2 – Digital Disruption

In the first week of EA 874 we studied digital disruption. These are my reflections on the suggested readings.

The AWS Well-Architected Framework establishes pillars for better taking advantage of the power of the cloud: Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization, and Sustainability. It brings meaningful and important guidelines for taking advantage of cloud computing: elasticity, cost optimization, using only the capacity you need, when you need it, in an automatic way, and many more best practices. And this is fairly agnostic to whichever cloud you intend to adopt, as these are foundational principles for all of them.

The cloud can make Docker's impact even more powerful. As Lucas Carlson states in "Docker and its Impact", it creates a more concise and flexible way to deliver applications, especially microservices. The cloud makes infrastructure and DevOps tools available out of the box as soon as they are needed, providing a self-service, just-in-time way to provision resources. In the same way, it is easier to create composable applications and microservices, as you can define the basis of the environment using Infrastructure as Code and reuse reference architectures to expedite new solutions, with security, company guidelines, and rules enforced through an Infrastructure as Code (IaC) reference script.

Connecting with Gartner's Composable Reference Architecture described in our Post 1: once we define our business goals and which business contexts are important, the cloud helps expedite the creation and evolution of each Packaged Business Capability, makes it continuously integrated and delivered through DevOps, and expedites the journey by defining the integration layers through IaC scripts.