Migration Math – You can’t afford to ignore it

They say that “if it’s not broken, don’t fix it.” The trouble is that sometimes things are working ‘just fine’ while the competition has jumped up the evolutionary ladder to take advantage of efficiencies and capabilities that aren’t available to you. In response, we have to change our perspective.

An archaic system or process may not be inherently broken, but its use to solve your business needs may well be broken in light of newer, faster, and more powerful alternatives.

If your enterprise is like most, then it has come to depend upon one or perhaps several core operational systems that run on legacy platforms. These systems have served you well, but are increasingly expensive to support. What you may not realize is that math and the sheer rate of innovation in computing are both working in your favor. Migrating to a modern platform has never made as much sense as it does today.

Enterprise computing used to be expensive

Consistent with Moore’s Law, we have seen computing power grow over time as costs continue to plummet. My first computer was a 286 AT, which topped out at 12 MHz and cost several hundred dollars in 1991. Last month I bought my wife a Chromebook for less than that 286, and it came with a 1.7 GHz processor (141 times the clock speed, for less money).

What about enterprises? The most advanced edition of the IBM z900 series was released in the second quarter of 2002.  For just over $5,000,000, you got a state-of-the-art mainframe system.  But what constituted cutting-edge processing in 2002 is trivial in 2014 (and surprisingly inexpensive).

Your phone might be more powerful than your legacy system

I don’t know what kind of phone you have or how dated your legacy system is, but it’s quite possible that you have more processing power in your pocket than you do in that aging legacy system you have been nursing in your data center.

Let’s compare the legacy system to modern computing capabilities.  We’ll use millions of instructions per second (MIPS), a crude but reasonably effective measure of processing power.  Consider the following:

  • Legacy system – The same z900 system from 2002 ($5 million) mentioned above could accomplish 3,192 MIPS. (z900 Specs)
  • Laptop – The Intel i7 chip, common in most laptops today, ranges between 92,000 and 177,000 MIPS. Thus, your laptop ($1000-$2000) is between 28x and 55x as powerful as your z900 in terms of processing speed. (Chipset Specs)
  • Smartphone – The A7 chip in the iPhone 5s ($200-$850) clocks over 20,000 MIPS. So even your phone is 6x as powerful as your z900. (iPhone Specs)
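The ratios above are simple division, and easy to check. Here’s a quick back-of-the-envelope sketch in Python using the MIPS figures cited above (treat them as rough, order-of-magnitude inputs):

```python
# Rough MIPS figures cited above (order-of-magnitude estimates).
Z900_MIPS = 3_192                 # IBM z900 (2002), ~$5M
I7_MIPS = (92_000, 177_000)       # Intel i7 range, ~$1,000-$2,000 laptop
A7_MIPS = 20_000                  # Apple A7 (iPhone 5s)

def speedup(mips: float, baseline: float = Z900_MIPS) -> float:
    """How many times faster than the z900 baseline?"""
    return mips / baseline

print(f"Laptop:     {speedup(I7_MIPS[0]):.1f}x - {speedup(I7_MIPS[1]):.1f}x")
print(f"Smartphone: {speedup(A7_MIPS):.1f}x")
# Laptop:     28.8x - 55.5x
# Smartphone: 6.3x
```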

This phenomenal growth in processing power has been experienced across the entire spectrum of computing (multiple cores, more and faster RAM, etc.) and has been extended even further through the massive growth in virtualization.  Hardware today is orders of magnitude more powerful than what was available when your legacy system was deployed.

You don’t carry a brick phone anymore for a reason

What was considered a large deployment in 2002 is considered trivial now.  With a couple of relatively cheap blades and a redundant storage system behind it, you could replace your legacy system with a modern and far more capable platform for less than $150,000 in hardware investment.  Moreover, the resulting solution would require less physical space, less power, and less cooling.

Migrating your legacy system to a modern platform offers a slew of benefits, including the following:

  • Faster processing
  • Higher throughput
  • More flexible processing through virtualization
  • Access to more productive and powerful software and infrastructure platforms
  • Lower maintenance costs
  • Faster deployment cycles
  • Smaller physical footprint
  • Lower operational costs

Time to trade up

Now that you’re all out of excuses, it’s time to seriously consider migrating your legacy system to a modern platform. For the cost of two to three years’ worth of maintenance on your mainframe, you can migrate to a modern, supportable, and flexible platform that will more readily adapt alongside your changing business. It may not be broken yet, but relying upon outdated technology is a broken strategy that needs to change.
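If you want to run the migration math for your own shop, the break-even arithmetic is straightforward. A minimal sketch with purely hypothetical figures; substitute your actual maintenance, migration, and operating costs:

```python
# All figures below are hypothetical placeholders -- plug in your own.
annual_mainframe_maintenance = 1_500_000   # yearly legacy support cost
migration_cost = 3_500_000                 # one-time migration investment
modern_annual_operations = 400_000         # yearly run cost of the new platform

yearly_savings = annual_mainframe_maintenance - modern_annual_operations
break_even_years = migration_cost / yearly_savings
print(f"Break-even in {break_even_years:.1f} years")  # ~3.2 years with these inputs
```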


Move over, Reliability – Resilience has arrived

[This article was originally written as a guest post for Puppet Labs and published at their blog on January 9th, 2014.]

If you haven’t yet noticed that prioritization of non-functional requirements (NFRs) is changing amongst your user base, you will soon. For decades, we have held to the same familiar set of NFRs. Every team had its own definition and particular spin on NFRs, but the usual suspects are accessibility, availability, extensibility, interoperability, maintainability, performance, reliability, scalability, security, and usability.

But new priorities have surfaced, as IT has experienced a sea change over the past few years. Some organizations have even adopted completely new NFRs. The rise of DevOps has coincided with these changes, and the movement’s principles enable IT teams to more readily adapt to rapidly changing requirements.

Your grandfather’s mainframe was very reliable

Historically, IT system designs were praised for reliability. Robust and stable systems could “take a licking and keep on ticking.” As computing became more pervasive, scalability became the watchword. Systems should be able to grow and expand to meet increasing demands.

Scalability as an NFR priority represents just a slight shift from reliability. Both operate from the mindset that the original system design was valid: reliability ensures that the system continues to provide the stated functionality over time, and scalability ensures that it can do so under increasing demand.

Roughly 10 years ago, things began to shift as more and more organizations embraced movements like agile or XP, and architectural models like Service Oriented Architecture (SOA). These initiatives promoted adaptation and response to change as desirable system qualities. Next, cloud computing introduced us to the notion of elasticity, further promoting the values of flexibility and responsiveness to change.

A resilient system is a happy system

The state of the art for system design is always evolving, and we see noticeable leaps forward every few years. The current phase of evolution is toward resilient systems.

Legacy system designs relied upon expensive infrastructure with multiple-redundant-hot-swappable-live-backup-standby-continuity-generators (or whatever vendors are peddling lately). In contrast, resilient system designs embrace failure and promote the use of cheap, commodity hardware, coupled with distributed data management, parallel processing, eventual consistency, and self-healing operational nodes.

Some portion of your system is likely to go down at some point, and resilient systems are designed with that expectation. Resilient systems and resilient processes are able to continue operation (albeit at diminished capacity) in the face of failure.
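To make that mindset concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the node names, failure rate, and `fetch` call are invented for illustration): rather than trusting a single ‘reliable’ node, the client assumes any node can fail and degrades gracefully instead of crashing.

```python
import random

NODES = ["node-a", "node-b", "node-c"]   # hypothetical replica endpoints

def fetch(node: str) -> str:
    """Simulated call to one replica; any node may fail at any time."""
    if random.random() < 0.4:            # simulate a 40% per-node failure rate
        raise ConnectionError(node)
    return f"result from {node}"

def resilient_fetch(nodes=NODES) -> str:
    """Try replicas in random order; degrade rather than fail outright."""
    for node in random.sample(nodes, len(nodes)):
        try:
            return fetch(node)
        except ConnectionError:
            continue                     # expected: route around the failure
    return "degraded: serving a cached/default response"

print(resilient_fetch())
```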

The prioritization of resilience over reliability as an NFR can be seen within the DevOps movement, the development of the Netflix Simian Army, and the rise of NoSQL data management solutions.

DevOps and resiliency

DevOps is a multi-headed beast, more a movement guided by a set of principles than a tangible and well-defined construct. While organizations are free to adopt aspects of DevOps that suit their needs, one common thread is that of resilience. Failure is seen as an opportunity to improve processes and communication, rather than as a threat.

The principles of continuous integration and continuous delivery that are core to most DevOps practices exemplify a resilient mindset. Where the classic waterfall model relies upon detailed front-end design and planning with an all-or-nothing development phase and late-stage testing, DevOps teams are more agile, embracing a “fail early, fail often” model. This approach results in more resilient and adaptable applications.

Netflix Simian Army

Netflix gained world renown when the company broadcast details of its Simian Army work in 2010 and 2011. Through the automated efforts of Chaos Monkey, Chaos Gorilla, and a slew of other similar utilities, failure is simulated in order to develop more resilient processes, tools, and capabilities.

John Ciancutti of Netflix writes, “If we aren’t constantly testing our ability to succeed despite failure, then it isn’t likely to work when it matters most — in the event of an unexpected outage.”
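The actual Simian Army is a set of Netflix tools, but the core idea is easy to sketch. A toy chaos test (all names and numbers hypothetical): randomly terminate instances, then verify the service still answers.

```python
import random

# A toy fleet: instance name -> alive?
fleet = {f"instance-{i}": True for i in range(6)}

def chaos_monkey(fleet: dict, kill_fraction: float = 0.3) -> list:
    """Randomly terminate a fraction of instances, Chaos Monkey style."""
    victims = random.sample(list(fleet), k=int(len(fleet) * kill_fraction))
    for name in victims:
        fleet[name] = False
    return victims

def service_is_up(fleet: dict) -> bool:
    """In this toy model, the service survives while any instance is alive."""
    return any(fleet.values())

killed = chaos_monkey(fleet)
print(f"Terminated: {killed}; service still up: {service_is_up(fleet)}")
```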

NoSQL

A third illustration of the growing fascination with resilient, self-healing systems is the transformation now going on in the data realm. Data and metadata management have evolved considerably from the relational databases of yore. Modern data management strategies tend to be distributed, fault-tolerant, and in some cases even self-heal by spawning new nodes as needed. Examples include Google FS / Bigtable, in-memory datastores like Hazelcast or SAP’s HANA, and distributed data management solutions like Apache Cassandra.
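The fault-tolerance trick behind many of these stores is quorum replication: with N replicas, writes wait for W acknowledgments and reads consult R replicas, and choosing W + R > N guarantees every read overlaps the latest write. A minimal pure-Python illustration of the idea (a sketch of the general technique, not any particular product’s API):

```python
import time

replicas = [{}, {}, {}]   # three replicas: key -> (timestamp, value)
W, R = 2, 2               # quorums; W + R > N=3 tolerates one dead replica

def write(key, value, alive=(0, 1, 2)) -> bool:
    """A write succeeds once W replicas have acknowledged it."""
    stamped = (time.time(), value)
    acks = 0
    for i in alive:
        replicas[i][key] = stamped
        acks += 1
        if acks >= W:
            return True
    return False          # not enough live replicas: the write fails

def read(key, alive=(0, 1, 2)):
    """Read from R replicas and return the newest version seen."""
    versions = [replicas[i][key] for i in alive[:R] if key in replicas[i]]
    return max(versions)[1] if versions else None

write("user:42", "Alice")                 # lands on replicas 0 and 1
print(read("user:42", alive=(1, 2)))      # replica 0 is down, still "Alice"
```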

Miko Matsumura of Hazelcast notes, “Virtualization and scale-out power new ways of thinking about system stability, including a shift away from ‘reliability,’ where giant expensive systems never fail (until they do, catastrophically), and towards ‘resiliency,’ where thousands of inexpensive systems constantly fail—but in ways that don’t materially impact running applications.”

Keeping pace with the cool kids

It’s often said that the only constant is change. The DevOps movement positions organizations to embrace change, rather than fear it. Continuous integration, continuous delivery, and continuous feedback loops between dev teams and ops teams facilitate an enhanced degree of agility and responsiveness.

As business and society evolve, our system design priorities must adapt in parallel. The cool kids will change the game again at some point, but for right now, “change” means designing systems and supporting processes that are responsive and adaptable by prioritizing resilience over reliability.


Socrates and Enterprise Agility

Back in my debate days, I was introduced to the Socratic Method, a line of reasoning popularized by Socrates in which seemingly “fundamental” concepts that appear to defy definition are explored to either bring clarity or reveal one or more false assumptions. Far too often in the business community, I find that we toss around jargon without digging any deeper to unpack terms and concepts that are regarded as ‘fundamental’ or ‘self-defining’. One such concept that I think warrants some attention is the term ‘agile’. If Socrates were a business consultant today, he might well ask: What does it mean for an enterprise to be agile?

Dispelling Myths about Agility

To start with, I’d like to address some common misconceptions associated with agility.

  1. Agility does not mean you simply work faster.
  2. Cutting out entire categories of work isn’t agile. (You still need to conduct analysis, design, and testing, you just do it differently.)
  3. Agility is not purely the domain of startups; even massive enterprises can be nimble and responsive.
  4. If your development team is the only thing you have transformed, then you are only halfway there. Your entire value chain must become agile, from strategy all the way through operations (this is where ‘DevOps‘ comes into play).
  5. Brick-and-mortar businesses can benefit as much as anyone, or even more, from agile practices and innovative uses of emerging methods and automation technologies.

Crystallizing the Goal – Competitiveness

Having rejected several common and flawed understandings of agility, we are left with a clean slate to build upon. As with any analysis, we should “begin with the end in mind” and set our sights on the purpose of agility in an enterprise context. In their paper on “Dynamic Organizations”, Lee Dyer and Jeff Ericksen describe the importance that agility and responsiveness play in maintaining a competitive edge:

Dynamic organizations (DOs) operate in business environments characterized by frequent and discontinuous change. For them, competitiveness is a moving target, a constant pursuit of proactivity and adaptability in the marketplace, preferably undertaken as a matter of course rather than with great travail.

We have to stop being surprised by change. We have to stop getting bent out of shape when leadership changes priorities, the marketplace changes directions, or our customer base changes their needs. Change is a given. Successful enterprises embrace a dynamic and responsive approach to their business, complete with the requisite process, people, and technology to operate in such an environment.

The Elements of Agility

To be a dynamic, agile organization, you have to be willing to try things that may not work. Be prepared to fail early, fail often, learn quickly from your mistakes, and respond when new information presents itself. Remarkable athletes are credited with attempting more shots than most players successfully make; they are also sometimes derided for significant miss percentages. The moral of the story is to persevere rather than trying to line everything up for that ‘perfect shot’. Dyer and Ericksen (referenced earlier) outline four elements of dynamic, agile organizations:

  • Explore – innovate, try new things, take risks, constantly improve
  • Exploit – find something that works and run with it
  • Adapt – change directions as new information arises, even returning to the ‘explore’ phase if necessary
  • Exit – failure helps you identify what won’t work; fail quickly and as often as possible, then repeat the cycle to find something that does work

[Figure: Cycle of Agile Innovation]

What are you doing within your organization to embrace a dynamic, agile approach to operation? If the answer is ‘not much’, then you are at the mercy of any competitor that chooses to embrace a lean and adaptive model of operations – unless your business, like Socrates, wants to be regarded as a historical relic. Agility is key to competitiveness and to relevance.


Braving the DevOps Frontier

“There’s gold in them clouds!!” – The gold rush for business value and pace of change is on, and the latest golden child is DevOps, that delicious portmanteau of Development and Operations. But much like the Wild West, our rush to charge into this brave new frontier, blending two disciplines that have been separate for decades, is not without issue.

The Good

  • Developers must take ownership of their code all the way into production
  • Ops staff embrace the pace of business and the rhythm of feature delivery and business change

The Bad

  • Dev Teams are starting to drift away from a long-term strategy for their software design, embracing the expediency of delivering features ASAP
  • Ops Teams are losing a means of vocalizing when business and development decisions put the organization in a precarious position, because they are seen as being obstructionist to the new way of life

The Ugly

  • The critical role of quality assurance (QA) can sometimes get lost in the continuous development and delivery shuffle. Where is the dedicated and intentional effort to fold a thorough, professional examination of quality into the DevOps equation?
  • Although development teams have been moving incrementally toward agile and lean startup strategies for years, operations teams are very much finding themselves behind the eight ball. There is a significant shift in mindset, skills, and tooling required for these groups to come up to speed.
  • Development teams have a sizable knowledge gap as well. Although my family has trouble grasping it, it is not true that if you ‘know technology’ then you are equally adept at all things technical. Many developers have lived with an “if it compiles, then it’s good” mentality for much or all of their careers. They’ve never really needed to understand the production world…until now.

DevOps, and the overall shift toward Development teams and Operations teams working together to deliver business value early and often, is a good thing. Developers becoming aware of, sensitive to, and ultimately responsible for operationalizing their code is a good thing. Ops folks partnering with developers to deliver new and ever-changing business features is also a good thing. But if, in an effort to embrace this new normal, we lose all of the uniqueness that proper Development, QA, and Operations disciplines offer us, then we’ve swung too far to the latest extreme. Teams need to come together as partners for business value delivery, but not at the expense of long-term strategy and robust quality.


The journey of a thousand miles in architecture education begins with one step – TOGAF 9 training

OK, so maybe TOGAF education (currently, TOGAF 9.1 is the latest version) is not exactly what Lao Tzu had in mind when describing the importance of taking initiative toward completing a monumental task, but it certainly is an apt metaphor.

Reason #1 – The field of architecture is broad and deep

  1. There are multiple architecture methods (Zachman, TOGAF, FEAF, DoDAF, MoDAF, TRAK, etc.)
  2. There are multiple modeling standards (UML, ArchiMate, SysML, etc.)
  3. There are multiple architecture styles (SOA, MDM, MOM, EAI, etc.)
  4. There are various architecture modeling tools (Sparx EA, MagicDraw, various Rational tools, Troux, etc.)

Reason #2 – TOGAF offers the most comprehensive approach as a starting point

  1. Offers a comprehensive method – the Architecture Development Method (ADM)
  2. Offers a slew of techniques (gap analysis, business scenarios, migration planning, stakeholder management, capability-based planning, etc.) and concrete artifacts (catalogs, matrices, and diagrams)
  3. Covers the four primary architecture domains (business, data, application, and technology) as well as supporting sub-domains (security and interoperability)
  4. Provides two reference models (TRM and III-RM)
  5. Defines a generic governance framework
  6. Defines a capability framework to address the skills, roles, responsibilities, and team structure side of things
  7. Defines a content framework with an underlying meta-model to support relationships and linkages across architectural model elements

Reason #3 – TOGAF Supports Customization

  1. Encourages alignment with other frameworks and methods (TOGAF and, not TOGAF or)
  2. Defines an explicit place in the life cycle where tailoring makes sense (Preliminary Phase) and logical intersection opportunities with other project / program management disciplines (Preliminary, Architecture Vision, and also Migration Planning)
  3. Defines 3 categories of customization (process, content, terminology)

Reason #4 – Certification is Available

  1. Certification provides confidence that you have really absorbed the body of knowledge
  2. Certification increases confidence in hiring
  3. Certification can be applied to tool selection as well

Important Caveats

  1. TOGAF might be a good FIRST STEP in your EA journey, but there are many other things to learn, experiences to gain, and practical skills to develop BEYOND the core of TOGAF.
  2. Learning TOGAF or even getting certified does not suddenly make you an architect, and it certainly doesn’t mean that TOGAF is the right way to solve every problem.  You still need to add in experience, other education, and an experienced practitioner who can guide you in tailoring TOGAF to apply it in a way that is lightweight and packed with value.
  3. Every approach (EA and TOGAF are no different) must be balanced with common sense principles such as ‘what makes sense’ and ‘what is going to drive value’.  Blindly following principles and processes is a recipe for disappointment.

Architecture Education is a Journey

TOGAF is a great first step in your architectural journey of a thousand miles.  But it is only a first step.  Never stop learning frameworks, styles, and patterns of architecture.  The state of the art can and will change, so be ready.  Get rolling with it, apply elements that make sense, and use it as a solid platform for growing and expanding your architectural toolbox of capabilities.


Business Architecture – Functional Model

Across my client base, one of the most misunderstood architecture domains is that of Business Architecture.  This is terribly unfortunate, because Business Architecture is the cornerstone for any successful Enterprise, Solution, or Project Architecture.  Ultimately, it all needs to be grounded in the business needs and target objectives.  Otherwise, what’s the point?

To crystallize the concept of business architecture, it helps to examine the models and artifacts that are used to elaborate a particular organization’s business architecture.  In this post, I will explore one of the core aspects of business architecture – the functional model.

Functional Models answer the WHAT (and also sometimes the WHY)

A functional model provides a strategic view of how the business architecture delivers capabilities that align with business goals and drivers. This model answers the question:

“WHAT functions does the business require and how does the architecture align to this functionality?”

A functional model provides a macro-level view of what the business does so that the technology organization can support that functionality through processes, data exchanges, integrated systems, and enabling infrastructure.  It gives a complete picture of what the enterprise does and plans to do.  Moreover, it provides a mechanism for articulating how the business will evolve – new business functions, modified functions, outsourced functions, obsolete functionality, and any other defined changes that are required in order to realize the desired future state of functional business delivery.

A secondary question answered by the model is:

“WHY do these functions exist?”

This is where the question of value or business driver comes into play.  Each identified function must tie back to something the business cares about in order to be a valid part of the functional model.

Two Sub-Models

Typically a functional model includes a construct that identifies business functions or capabilities and maps those to business motivators such as drivers, goals, or objectives.  At its core, a functional model aims to answer two questions: the primary question – WHAT does the business provide or deliver to customers? – and a secondary question – WHY does that matter to the business (i.e., what’s the value?).  The bulk of a functional model deals with the WHAT (the functions), but these should always be mapped against the WHY (the business motivators), as illustrated in Figure 1.

Figure 1: Two Functional Models
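Before diving into the two sub-models, here is a minimal sketch of that WHAT-to-WHY linkage in Python (the function and driver names are purely hypothetical). The essential integrity check, per the discussion above, is that every function ties back to at least one motivator:

```python
# Hypothetical business functions mapped to the motivators they serve.
function_to_drivers = {
    "Claims Processing":   ["Customer Retention", "Cost Reduction"],
    "Policy Underwriting": ["Revenue Growth"],
    "Fraud Detection":     [],   # orphaned: no motivator, so flag it
}

# Validate the model: any function with no WHY is suspect.
orphans = [fn for fn, drivers in function_to_drivers.items() if not drivers]
if orphans:
    print(f"Functions lacking a business motivator: {orphans}")
```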

Business Motivation Model

The first model captures business motivations and links them to business units and/or business functions (part of the second model).  The aim of this model is to capture what the business wants to achieve and link it with the enabling mechanism within the enterprise that delivers on that vision.  At one level, this helps to ensure that all of the elements of the business’s target outcomes are accounted for within the business architecture.  Conversely, it also ensures that every business function and enabling support within the enterprise model is explicitly tied to a goal or objective that the business wants to achieve.

There are multiple approaches in the industry for modeling business motivations:

  • TOGAF’s Goal / Objective / Service Model – TOGAF defines a Motivation extension to its Content Metamodel that defines Drivers (internal / external motivating condition), Goals (high-level statement of intent or direction), and Objectives (time-bounded milestone to demonstrate progress toward a goal).  The Goal / Objective / Service model specifically creates a linkage between business targets (goals & objectives) and the business services that enable the fulfillment of these targets. Source: http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap35.html
  • OMG’s Business Motivation Model (BMM) – Identifies Ends (Vision, Goal, Objective) and links those to the Means (Mission, Strategy, Tactic, Directive) by which you achieve those Ends.  Finally, the Means are further linked to Internal Influencers (Strengths and Weaknesses) and External Influencers (Opportunities and Threats) and then ultimately mapped to other business model elements such as Organization Units, Business Processes, and Business Rules. Source: http://www.omg.org/spec/BMM/1.1/PDF/

Business Capability Model

The second element of a functional model is the elaboration of functionality in the form of a portfolio of business capabilities, business processes, business functions, or business services.  The decision on which construct to use depends largely upon the overall direction that the organization is going strategically.

  • Capability models support business functionality threads that run through multiple lines of business
  • Process models enable orchestration and workflow styles of integration
  • Service models drive toward reuse, composite solutions, and promote contract-driven interfaces
  • Function models promote a componentized view of the enterprise and support modularity

To a certain extent, every business has capabilities, processes, services, and functions.  By selecting one of these to represent the second half of the functional model, you are making a statement as an organization regarding what the emphasis will be within your architecture.  In any case, we are aiming to capture what the business does in order to deliver on the vision articulated in the motivation model.  These two models can be kept distinct and loosely coupled, or you may choose to weave them together into an aggregate model that demonstrates the linkage between the business’s aims and the business’s capabilities to deliver on those targets.

Here again, a number of approaches exist for modeling business functions, services, processes, and/or capabilities.

  • TOGAF’s Business Footprint Diagram – Describes the links between business goals, organizational units, business functions, and services, and maps these functions to the technical components delivering the required capability. A Business Footprint diagram provides clear traceability between a technical component and the business goal that it satisfies, while also demonstrating ownership of the services identified. Source: http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap35.html
  • TOGAF’s Business Service / Function Catalog – Identifies organizational capabilities and demonstrates the relationship between business services and technology functions.  Furthermore, it can be mapped against organizational units to demonstrate ownership of business services. Source: http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap35.html
  • TOGAF’s Functional Decomposition Diagram – Provides a graphical depiction of the information captured in the Business Service / Function Catalog (see above).  Also provides a very natural mechanism for eliciting the technical capabilities necessary to fulfill business needs.  One unique facet of this diagram is that it can easily illustrate shared components (which are more challenging to represent with a catalog artifact). Source: http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap35.html
  • OMG’s Business Capabilities View – Describes business activities, aligned against the organization that delivers that function (much like the other capability mechanisms).  One unique element of this view is the categorization and depiction of functions as customer-facing, supplier-facing, management-focused, and execution-centric. Source: http://www.omgwiki.org/bawg/doku.php
  • MODAF’s Strategic Views – Define a whole range of capability artifacts.  Typically MODAF is a better fit for highly technical engineering environments with heavy system integration requirements, and a non-defense meta-model would have to be crafted to retrofit it for a commercial enterprise.  The extensive set of integrated capability artifacts is still worth exploring, either for direct usage or to inspire a custom solution for the enterprise. Source: http://www.modaf.org.uk/

One purpose of the capability model is to identify the spectrum of business functionality now, in the near-term, and in the long-term future.  It can be used to assist with strategic architecture planning to highlight what capabilities will persist, those that will be modified, capabilities that will be added, and those that will be removed as the organization progresses to the desired, future state.  A formal capability-based planning approach could even be adopted, leading to a need to map an organization’s initiatives, programs, and project portfolios against the set of capabilities that these efforts will impact. Source: http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap32.html

Business Architecture (It’s like a real career or something!)

Business Architecture has come a long way in the last several years; its maturity and scope extend well beyond merely analyzing business requirements and creating some process models (those are actually the domain of a business analyst or a business modeler). It is now a full-blown architecture discipline with real models and meta-models supporting it.

In trying to help organizations come to grips with the realities of business architecture, I will often tell them the following:

While not every organization formally recognizes it, your business has an architecture.  It’s either the intentional one that has been created by the steady hand of one or more architecture practitioners, or it is the ad-hoc one that you have stumbled into based upon tactical decisions made over the last 10-15 years.

So it’s not a question of whether or not your organization should do Business Architecture.  It’s a question of whether you want to be strategic and intentional about how you structure the architecture for your business.


I passed the TOGAF cert, and all I got was this lousy piece of paper…

Fog a mirror? Check.  Study and pass the TOGAF certification? Check.  Congratulations, you are now an architect.  Wait, what?

If you have ever wondered what the TOGAF certification gets you (or any cert for that matter), you may be disappointed to discover that it does not grant you preferred seating at restaurants, airline upgrades, or even a guarantee of employment.  All you definitely get is a piece of paper (or more likely a PDF that you could choose to print on a piece of paper in the event that you hate trees).  I have written previously about the speculative value of architecture certification.

All of this raises the question: assuming you HAVE gotten TOGAF certified, what should you do next?

Top 5 Things To Do After Getting TOGAF Certified

(Note: It requires every ounce of my self-control to provide you with real answers here and not do a satirical list in the format of The Late Show with David Letterman.)

  1. Resolve the building block concept
  2. Unpack each architecture domain
  3. Compile a list of reference models
  4. Learn another framework or approach
  5. Tailor TOGAF to the essentials

1. Resolve the Building Block Concept

I have written previously about the mythical creation known as the TOGAF “Building Block”. Getting a solid handle on this and crafting a viable model for how you will elaborate building blocks is essential to being able to actually use TOGAF on a real project.

2. Unpack each Architecture Domain

Define for yourself (or your organization) the exact role of each type of architect (business, data / information, application, infrastructure, security, etc.).  Additionally, you will need to define distinctions that exist between analysts, SMEs, and architects within each domain.

Example 1: Most individuals, when asked to define a ‘Business Architect’, will define it essentially the same way they would define a ‘Senior Business Analyst’.

Example 2: Very few organizations truly have a data architecture discipline.  Instead, they typically have some basic data modeling skills and a heavy emphasis upon database administration.  These capabilities are a far cry from real data architecture.

I have explored the subject of architecture domain knowledge previously.

3. Compile a list of reference models

Reference Models play an important part in promoting an effective architecture practice, because they serve as a starting point for creating new artifacts and they can be used to validate and critique artifacts that have already been created.

Places to look for RM artifacts

  • Other frameworks (FEAF has a huge set of RMs)
  • Standards organizations (Open Group, OMG, OASIS, etc.)
  • Industry organizations (ACORD for Insurance, TeleManagement Forum for Telecom, etc.)

4. Learn another framework or approach

The Open Group is actually quite clear on this advice.  It is TOGAF and, not TOGAF or. You can, and should, learn as many frameworks, methods, and ontologies as you can. Knowing more architectural methods will give you a broader perspective and equip you to pull together a best-of-breed approach.

Frameworks / methods to consider:

  • EA Frameworks such as Zachman, FEAF, and TRAK
  • Solution Architecture styles such as BPM, SOA, MDM, etc.
  • PMI
  • Six Sigma / Lean

5. Tailor TOGAF to the Essentials

TOGAF is….

  • A 700+ page specification
  • A massive set of artifacts, deliverables, guidelines, techniques, models, and meta-models that are designed to be:
    • Comprehensive
    • General
    • Adapted to your needs

TOGAF is huge. Trying to implement all of it is a sure-fire way to produce an architecture approach that is bulky and unwieldy.  Instead, embrace the agile principle of ‘just enough’.  In fact, many of my clients are turning toward the notion of Agile EA.

Moving Beyond Certification

Getting certified is a great first step in the journey toward architecture enlightenment. Treat it as the beginning of your journey rather than the end, and then proceed with concrete steps to apply what you have learned.  TOGAF, in particular, provides an enormous set of potential resources.  Start small and build your practice incrementally if you want any hope of preserving your sanity or grabbing ahold of that elusive architecture ROI.


Big Data May Have Big Problems in 2013

Unless you’ve been hiding under a rock, you are aware of the hype around Big Data and Predictive Analytics.  The potential is mind-blowing, but organizations looking to pursue Big Data must be cautious and consider the enabling pieces that need to be in place in order to be successful.  Otherwise, irrational exuberance may well lead to a Big Investment in a Big Disappointment.

Earlier this month, I was interviewed for the Cloud Computing Journal as part of a survey of IT pundits on the subject of Big Data predictions for 2013.  While many were extolling the virtues of Big Data, I was the proverbial party-pooper.  My cautionary tale is repeated here for ease of access and in hopes of facilitating a dialogue on the subject:

  1. 2013 holds the potential for Big Data and Analytics to either generate real returns for the next wave of adopters or potentially ‘jump the shark’ and be chalked up as yet another hype-fueled collection of promises and ethereal ROI.
  2. The risk for firms adopting Big Data and implementing analytical capabilities lies in the fundamental lack of clean and congruent data as a starting point, combined with a general ambiguity surrounding what a given enterprise should be looking for in their proverbial haystack of information.
  3. Our modern world is so awash in data that organizations will find themselves at a critical juncture in 2013 as they aspire to capture the potential of these emerging disciplines while combating the realities surrounding data management, data stewardship, and effective data analysis. Those firms that are able to overcome these obstacles stand to win in 2013 in a very big way.
  4. Other significant challenges that organizations will face in 2013 are in the areas of staffing and identification of best practices surrounding the new arsenals of information that are in the hands of the enterprise. There is a considerable shortage of qualified personnel and a relatively large gap in terms of knowledge transfer and skills development capabilities surrounding Information Architecture, Big Data, and Analytics within many organizations.
  5. Additionally, there is a considerable lack of dialogue surrounding what constitutes best practice in these domains. Selecting an appropriate analytics technique and/or supporting technology toolset for a given problem is a critical decision point that few organizations are currently equipped to make. In short, organizations will need to invest in equipping their team’s toolbox with Information Architecture tools and techniques in order to ensure success with any Big Data or Analytics initiative in 2013.

Agile Enterprise Architecture

Old News.
Long Shorts.
Jumbo Shrimp.
Agile EA.

What’s the common thread? They are all oxymorons. But does it have to be that way? Does the implementation of Enterprise Architecture, or for that matter Solution Architecture, necessarily have to be a cumbersome and heavyweight initiative? In my experience with countless organizations from a wide range of industries, the answer is a definitive NO. Your architecture practice can be robust and still be nimble.

Bigger is not always Better

In spite of what you may have been led to believe, a bigger EA initiative is not always better.  In fact, in my experience it is often a liability.

“Big Bang” EA has been proven time and time again not to work — you know what I’m talking about.  This is the epic enterprise architecture initiative that involves one or more of your best and brightest being sequestered in an ivory tower to get the EA approach “figured out”.  Then, once a framework has been selected, artifacts and templates defined, comprehensive enterprise models created, governance gate criteria elaborated, a five-year road map developed, and a comprehensive training program established, the entire tome of documentation can be brought down from the mountain to share with the commoners.  Inevitably, the end result is a theoretical model that has yet to see the light of day after 18+ months of time, money, and energy, with no end in sight for the EA initiative to even break even on its investment.

“Boil-the-Ocean” EA is also a recipe for disaster — here we have a variation on a theme.  In this case, the EA initiative may or may not be an 18+ month epic, but its sheer scope (i.e., the ENTIRE enterprise) is enough to cripple its chances of gaining any real traction in the foreseeable future. It simply is not practical to attempt to roll out ANY substantial architectural changes across the whole of the enterprise at the same time.  There is too much risk, and far too many stakeholders whose expectations must be managed simultaneously.

Simple, Practical Steps to a more Nimble EA

“Start Small and Grow Incrementally” is the only thing we have seen work consistently — Bite off a reasonably-sized piece (i.e. something big enough to matter, but small enough to be successful) of the business.  Achieve success with this initial piece, incorporate lessons learned, and then repeat this pattern.

Specifically, we use the following strategy with our clients at Web Age Solutions:

  • Step 1 – Establish a thin EA at the highest level of the organization. This provides the strategy, high-level roadmap, principles, guidelines, etc.
  • Step 2 – Identify a slice of the enterprise where you are about to embark upon a moderately sized initiative (replacing a key system that is not mission-critical, integrating two LOBs, reworking one or more significant business processes, expanding into new business markets, etc.). Alongside that initiative, engage in Solution Architecture (Portfolio-level) and Technical Architecture (Project-level) in alignment with the over-arching EA initiative. As you are engaged in this effort, begin to populate the repository with enterprise models, governing artifacts, and start documenting guidelines and best practices.
  • Step 3 – Assess the results, adjust your governance approach, re-align your strategy, etc.
  • Step 4 – Select another slice of the enterprise and repeat steps 2 and following.

Taking a more agile approach to developing your architecture practice dramatically reduces risk and shortens the window for realizing value.


Classic Approaches to Architecture Maturity are Broken

I just delivered a webinar describing the gaps that exist in classic, CMMI-style approaches to maturity when applied to architecture practices. If you’d like the deck, you can find it here:
Architecture Maturity with Rubrics

I’ll post a link to the recording once it is available on Web Age’s webinar page.

If you’d like to discuss the topic, feel free to comment here or give me a shout on LinkedIn.
