Architecture

Conclusion: CIOs need to decide whether they will invest in the practice of enterprise architecture and, if so, how to approach it. Many CIOs choose to invest in enterprise architecture for the wrong reasons: because other organisations are doing it, or because a consultant says it is “best practice”. Instead, CIOs should consider which enterprise architecture functions would provide specific benefits, given the functions that are already provided in the organisation.

In the last four years the mobile device space has undergone a major transformation as Apple redefined the market, first with the iPhone and then the iPad. In that period Apple created a mobile device business with revenues that exceed the whole of Microsoft’s revenues!

Microsoft, long the dominant desktop software vendor, has struggled in the mobile device market and has fallen out of favour with both consumers and the enterprise for mobile devices. A recent survey of the smartphone installed base in the US shows the iPhone has 34% of the market, Android 51% and Windows Mobile 4%.

Conclusion: Direct dependencies between services represent one of the biggest mistakes in the adoption of a service oriented architecture. An event driven approach to service design and service orchestration is essential for increasing agility, for achieving reuse and scalability, and for simplifying application deployment. Complex Event Processing offers a gateway to simplicity in the orchestration of non-trivial service supply chains.
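
To make the contrast concrete, the following is a minimal, illustrative Python sketch (the service and event names are hypothetical) of services coordinating through an event bus rather than calling each other directly. Each service only knows about event types, so producers and consumers can be added, replaced or scaled independently.

```python
from collections import defaultdict

class EventBus:
    """A trivial in-process publish/subscribe bus (a stand-in for real middleware)."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()

# The order service publishes an event; it holds no reference to downstream services.
def place_order(order_id, amount):
    bus.publish("OrderPlaced", {"order_id": order_id, "amount": amount})

# Downstream services react to events; each can be deployed and scaled separately.
def billing_service(event):
    print(f"Billing: invoicing order {event['order_id']} for ${event['amount']}")

def shipping_service(event):
    print(f"Shipping: scheduling dispatch for order {event['order_id']}")

bus.subscribe("OrderPlaced", billing_service)
bus.subscribe("OrderPlaced", shipping_service)

place_order("A-1001", 250)
```

A Complex Event Processing engine would sit behind the same kind of bus, correlating streams of such events (for example, flagging orders placed and then cancelled within minutes) without the producing services needing to change at all.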

Conclusion: In the last two years VMware’s desktop vision has undergone a profound transformation from a narrowly focused VDI (a centralised, virtualised desktop) strategy to a broader Dynamic Desktop strategy that supports physical and virtual desktops, Software as a Service, and mobile applications. Despite this change, for the next 18 months VMware will continue to trail Citrix, which has greater desktop experience and has had all the elements of a Dynamic Desktop since 2009.

Conclusion: Governments across Australia have been engaged in Shared Services initiatives for almost a decade. Decisions taken over the last two years to abandon, de-scope or rethink Shared Services by these same governments demonstrate that the traditional model has not worked and a different perspective is needed. Perhaps knowledge and wisdom can be drawn not from other government shared services initiatives, but from a completely different business model such as franchising. In franchising, success is about having a great product with repeatable and standardised business processes, great customer service and a growth strategy.

Conclusion: IBM’s launch of its PureSystems line of hardware completes the vendor line-up for Integrated Systems. While this does not dramatically change the market, it does further solidify our 2009 prediction that IT infrastructure is transitioning to a new procurement and deployment model. However, due to internal barriers, adoption rates are modest and this transition will only happen slowly over the next seven years.

On the next major IT infrastructure refresh, especially storage, IT organisations should review their approach to procuring and delivering infrastructure. This may require challenging the established infrastructure dogma in order to accurately evaluate the benefits of Integrated Systems.

Conclusion: For organisations that use digital content distributors, telecoms suppliers and social media, the Convergence Review is an important stage in how policy and regulation will evolve. The review sought to update the regulations in a sector that has changed rapidly. Although the review did not focus on digital players, there were elements in the digital arena that indicate where change may lead.

It is probably inevitable that more regulation will enter the digital content and distribution sector. Controls will be imposed to facilitate market competition and foster new ventures, and also to protect individuals. That means an unregulated market is not possible if the goals of increasing local content, commerce and technology innovation are to be achieved. Organisations may have a special interest perspective depending on their role within the content, communications, technology development and social media sectors.

Conclusion: Einstein said that “everything should be made as simple as possible, but not simpler.” This is true in enterprise architecture and project management. CIOs know that simple solutions have many benefits over complex ones. Highly complex projects have high failure rates, as do highly complex architectures. However, many CIOs unwittingly encourage and reward complexity. Reducing complexity must be treated as a primary lever for cutting the cost and risk associated with large projects. CIOs should understand some of the key steps that can lead to reduced complexity in projects and systems.

The topic of Bring Your Own Device (BYOD) has resurfaced this year. While this is an important trend that needs to be examined by IT organisations, be careful to separate the facts from the hype. Here are the four most common myths that I keep hearing.

Conclusion: As the market for Board Portals rapidly matures, IT organisations are being asked to assist in selecting and implementing a solution. This is a golden opportunity to raise the IT Organisation’s profile with some of the most influential people in the company.

The CIO must ensure that technical staff do not overcomplicate the project and must find an Executive sponsor who can manage the Board members’ requirements and expectations.

Conclusion: The speed and disruptive effects of consumerisation in the mobile market surprised many organisations that were looking back, not forward. Even mobile providers did not anticipate the rate of change and must now invest millions to remain competitive.

Over the next three to four years providers will face stark realities in a fully developed and oversupplied mobile market. They will have to manage costs, improve service delivery and raise user revenue. That is not an easy set of objectives to achieve. The effect on users of raising revenue and managing costs could be disruptive, as users seek to maintain the price and service levels they have enjoyed for some time. Organisations may have to manage another round of change when it comes.

CIOs, architects and managers responsible for IT systems often wonder – how did we end up with this mess? There’s no decent documentation. No-one seems to be responsible for the apparent lack of any rational architecture. A lot of stuff is “due to historical reasons”. Of course this would never have happened under your watch, but now it’s your responsibility to make some sense out of it. If your system represents a substantial investment, it stands to reason that you’ll want to understand why it was designed the way it is before you take any radical action to change it.

Conclusion: In spite of changes over the last decade, Microsoft Windows Server licensing is still rooted in the physical machine era of the ’90s. However, most organisations run the majority of their x86 workloads in virtual machines. Microsoft’s disconnect with the virtualisation realities of the last five years can result in licensing confusion. Organisations that choose the wrong licensing approach will either greatly over-spend on Microsoft licences or, more likely, not be compliant.
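
As a hedged illustration of why the choice matters, the sketch below compares per-VM licensing against a per-host “unlimited virtualisation” edition using hypothetical prices and entitlements (not actual Microsoft terms). The crossover point depends entirely on VM density per host.

```python
# Hypothetical figures for illustration only -- not actual Microsoft pricing or terms.
STANDARD_PRICE = 900          # per licence, assumed to cover 2 VMs on a host
STANDARD_VMS_PER_LICENCE = 2
DATACENTER_PRICE = 4800       # per host, assumed to cover unlimited VMs

def cheapest_option(vms_on_host):
    """Return the lower-cost licensing approach for a single virtualised host."""
    licences_needed = -(-vms_on_host // STANDARD_VMS_PER_LICENCE)  # ceiling division
    standard_cost = licences_needed * STANDARD_PRICE
    if standard_cost <= DATACENTER_PRICE:
        return "Standard", standard_cost
    return "Datacenter", DATACENTER_PRICE

for density in (4, 8, 12, 20):
    edition, cost = cheapest_option(density)
    print(f"{density:2d} VMs per host -> {edition} edition, ${cost}")
```

With these assumed figures, per-VM licensing wins at low VM densities while the per-host edition wins once a host carries more than about ten VMs. The real analysis must also consider every host a VM is allowed to migrate to, which is often where compliance gaps arise.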

Conclusion: In spite of some benefits in security, remote access and speed of deployment, VDI has remained a niche product. This has largely been due to the higher complexity and much greater capital cost compared with a Full Desktop. However, as VDI infrastructure innovations continue to close the gap, the adoption of VDI will increase beyond this small base. Due to the risks and costs of switching from a well understood model to a relatively unknown one, adoption will increase at a moderate rate and there never will be a “year of VDI”.

Circa 1960: The “Hard theory of platforms”

In the early days of information technology, hardware was THE platform. Companies such as IBM and DEC provided the big iron. Business software was THE application. In those days even software was as hard as stone. The term application platform was unheard of.

Conclusion: No, and there never will be “the year of VDI”. However, now that the capital cost of VDI is close to that of a Full Desktop, the adoption of VDI will begin to increase beyond its current small niche. The large capital cost and complexity of replacing the existing desktop fleet, the perceived risks in moving to a new desktop approach, and a general lack of experienced staff will ensure adoption of VDI proceeds slowly.

For the next 5-7 years organisations will continue to use a range of desktop deployment techniques (such as Full Desktop, Laptop, Remote Desktop Services aka Terminal Server) with VDI being just one of many.

Conclusion: Emerging Technologies (such as those relating to Tablets, Cloud, Social Media and Big Data) threaten to complicate and disrupt the work of enterprise architecture. As enterprise architects struggle to understand, simplify and bring governance to heterogeneous technology environments, new and emerging technologies get in the way.

Emerging technologies cannot be ignored. They promise tantalising new benefits and bring a vision of hope to CIOs struggling with increasing costs and stagnant budgets.

Enterprise architects must understand what is possible with new technology and match that to the specific needs of an organisation whilst reducing technology sprawl.

Conclusion: The foundation of any BYO device initiative is a robust BYO device policy. The policy must set the boundaries for acceptable use, costs and security. Ensure device security is driven by business stakeholders and is based on pragmatic risk analysis rather than technical concerns from IT staff, or FUD from vendors who are anxious to sell their wares.

Robust policy, strong corporate culture and proper training can be more effective than technology in securing corporate data and controlling costs and risk. Use policy, culture and training to drive compliance, minimising the need for complex and expensive technological controls.

Conclusion: The idea of Bring-Your-Own (BYO) Laptop has been bandied about for the last seven years, but it is not as common as the press implies. Few ANZ organisations have BYO laptops; however, some have implemented BYO smartphones and many intend to do so in the next 18 months.

The driver of BYO device in the organisation is not avoidance of the capital costs but rather the need to accommodate users’ expectations of technology, which have been significantly increased by the consumerisation of IT, and largely driven by the iPhone and iPad.

Conclusion: Oracle will continue to excel in the Application, Middleware and Database markets, but it also intends to radically transform and simplify IT infrastructure. Oracle’s strategy is to eliminate complexity, create significantly greater business value and reduce infrastructure costs using an Integrated Systems approach. The objective is to enable customers to focus on applications, instead of infrastructure, in the hope they consume more Oracle software.

IT executives should keep abreast of Oracle’s infrastructure innovations, as well as the competitors’, and be prepared to rethink their existing infrastructure approach if an Integrated System can create a significant new opportunity for the business.

Conclusion: Gainshare models have started to emerge as a way of evolving IT and BPO outsourcing and increasing the measurable financial benefits of outsourcing. Gainshare is immature and not without challenges, but can be a proof point of a mature outsourcing philosophy within an organisation.

Conclusion: The instincts of greed and ambition can sometimes blindside the architects of IT Shared Services (ITSS) initiatives. Thinking too grandiosely and without sufficient regard for the consequences of ITSS can doom such ventures from the outset. Conversely, taking more level-headed approaches, tempered by the honest counsel of those who aren’t necessarily management sycophants, can have the opposite effect.

Conclusion: The discipline of Enterprise Architecture has evolved from the need to articulate and maintain a big picture overview of how an organisation works, covering organisational structure, processes, and systems. Whilst Enterprise Architecture can assist in implementing industry best practices, several-fold improvements in productivity and quality are only possible if the organisation makes a conscious effort to attract and retain top-level subject matter experts, and if it commits to a so-called Domain Engineering / Software Product Line approach to the strategic analysis of market needs and the design of products and services.

Conclusion: Poor quality and incomplete requirements continue to be a leading cause of IT project failure. While the more widespread use of iterative project management techniques is minimising the impact of bad requirements, it is still not addressing the underlying cause. Accountability for improving the quality of requirements remains elusive. Enterprise architects must take a stronger role in the validation of requirements, and be prepared to intervene when necessary.

Observations: The saying goes that you cannot create a symphony by asking a hundred people to give you ten notes each. This is an apt description of the way requirements may be developed on large IT projects. The result is often a disjointed set of wishful ideas, concepts and assumptive solutions without any intrinsic integrated design or consistent rationale. Given this profoundly flawed starting point, it is not surprising that subsequent project implementation activities that rely on correct and consistent requirements will be inherently challenged.

Challenges in defining requirements: Understanding of the term “requirement” differs among stakeholders. Requirements can be variously perceived as user wish-lists, detailed product feature sets, or complex business process descriptions. The language used to express these requirements is often loose and ambiguous, instead of concise, testable statements of conformance. Requirements often focus on functional behaviour and ignore important non-functional aspects such as performance, security and operational concerns.

Commonly the task of establishing a set of requirements is somewhat blithely described as “requirements gathering”, which implies that requirements already exist, fully formed, and simply need to be harvested like cherries from a tree. Such a perception is a dangerous attitude – especially among senior executives.

The reality is that high-quality requirements are difficult to create. Unless there is a very clear and concrete understanding of the objectives of the system, and ready access to explicit and accurate supporting information about all relevant dependencies, the process of defining requirements can become a messy and imprecise affair. Common challenges include:

  • conflicting understanding of the underlying business problems between stakeholders

  • limited access to key subject matter experts

  • organisational politics that hinder contribution and create divergent objectives

  • changing circumstances that render requirements obsolete

  • time pressures that cause analysis to be incomplete and poorly formed

Dealing with poor quality requirements: Delivery pressures tend to force poor requirements to be accepted unchallenged. In the face of impending (or missed) deadlines, there is acute pressure to have the requirements ‘signed-off’ regardless of the quality. Project governance checkpoints tend to measure when a project milestone has been completed, but not the quality of the actual work products. If requirements are identified as lacking, this advice can be ignored, or dismissed as rework that can occur in later project phases.

The best way to guard against poor quality requirements is to have them validated early and often. Requirements can be quickly tested against some very simple heuristics to gauge the quality and completeness of their definition. Simple tests include the following (a rough automated screen is sketched after the list):

  • Cohesive – does the requirement address a single, simple business function?

  • Precise – is the requirement completely unambiguous and stated using concise, simple, plain language?

  • Verifiable – can conformance with this requirement be easily proven in the testing phase?

  • Traceable – are all requirements linked back to a clear business need or objective, and are all business needs covered by a comprehensive set of requirements?
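
Some of these checks can be partially automated. The sketch below is a minimal, illustrative screen (the ambiguous-term list and the requirement record format are assumptions, not a standard) that flags imprecise wording and missing traceability or acceptance criteria before a human review.

```python
# Illustrative only: a crude first-pass screen, not a substitute for architectural review.
AMBIGUOUS_TERMS = {"fast", "user-friendly", "flexible", "appropriate", "etc", "robust"}

def screen_requirement(req):
    """Return a list of quality warnings for a single requirement record."""
    warnings = []
    words = {w.strip(".,").lower() for w in req["text"].split()}
    vague = words & AMBIGUOUS_TERMS
    if vague:
        warnings.append(f"Precision: ambiguous terms {sorted(vague)}")
    if not req.get("business_objective"):
        warnings.append("Traceability: no linked business objective")
    if not req.get("acceptance_criteria"):
        warnings.append("Verifiability: no acceptance criteria defined")
    return warnings

requirement = {
    "id": "REQ-042",
    "text": "The system should be fast and user-friendly.",
    "business_objective": None,
    "acceptance_criteria": [],
}
for warning in screen_requirement(requirement):
    print(f"{requirement['id']}: {warning}")
```

Such a screen only catches surface defects; judging cohesion and genuine traceability still requires the architect's end-to-end view described below.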

The rise of agile delivery techniques has cut the time between requirements definition and requirements testing. This means that faulty requirements can be identified faster and at a smaller scale than with traditional waterfall techniques. However, agile delivery methods are still not pervasively used – and very large programs of work in the government and financial sectors still rely heavily on waterfall techniques.

The role of the architect in requirement validation: Requirements elicitation and definition is commonly the domain of the business analyst. Architects tend to be engaged in projects in the earlier conceptual phases to make key decisions about platforms and technologies based on high level business needs. Then, in parallel to the detailed business requirements definition, architects focus on things such as:

  • defining system context and external integration points

  • identifying system components and interactions

  • understanding information structures and flows

  • analysing performance, security and capacity

The risk here is that while the architects are focused on all things architecture, they remain isolated and disconnected from detailed requirements definition and validation. Yet architects are the best-placed people to perform requirements validation: they are the experts who should hold the end-to-end system knowledge and enterprise context, coupled with a clear understanding of the business needs and desired benefits, to critically and authoritatively validate the quality of requirements.

Despite protestations from architects that requirements validation is unwanted QA of business analyst artefacts, or unnecessary detail, or that it is the role of the project manager – architectural validation of detailed requirements must be performed. And project managers must be held accountable for ensuring that any deficiencies identified in architectural review are acted upon.

If poor quality requirements are identified by architects, and not addressed by project teams, architects are obligated to escalate the issue for executive attention. Architectural intervention over poor quality requirements is perhaps one of the most important actions that can be taken to improve the chances of project success.

Next steps:

  1. Examine how the quality of requirements is assured on projects within the enterprise.

  2. Check whether architects have appropriate review and oversight of requirement quality, or whether this is left as a project manager responsibility.

  3. Make architects accountable for requirement validation as a mandated governance checkpoint.

  4. Ensure an appropriate escalation path exists for architectural intervention if necessary.

 

Conclusion: Leading IT organisations now recognise that selecting and integrating a mix of best-of-breed servers, storage and networks no longer adds value to their organisation. Instead they are purchasing Integrated Systems from a single vendor, which eliminates the cost and complexity of integrating these components, lowers the integration and support risks, and reduces the time to deliver a working solution.

To make this paradigm shift most organisations will need to change the kind of relationship they have with their infrastructure vendors, from purely transactional supplier to long-term strategic partner. For many IT and vendor staff, this will be a difficult and traumatic transition.

Conclusion: Enterprise architects must be systems thinkers first and foremost. Enterprise Architecture is a discipline rooted in IT, and its practitioners often have a deep IT background. However as an enterprise architect, an IT heritage can often be a burden as much as it is an advantage. Organisations that are looking for people with the “right stuff” to become an enterprise architect should cast their net wider than just the IT domain.

Conclusion: There is a perception that public sector organisations experience higher failure rates with IT Shared Services (ITSS) ventures than their private sector counterparts. While no definitive studies have confirmed this, it remains true that both sectors have a chequered history of success with ITSS. However, perceptions are skewed by the sometimes massive and very public ITSS failures that have occurred locally in the public sector. Curiously, many of these failures could have been averted by following some simple steps.

Conclusion: IPv6 Day in June attracted significant media attention and raised the profile of IPv6 again. As is typical, the media latched on to the “bad news” and ran headlines stating that the Internet is running out of addresses! While this is correct, most ANZ organisations will not experience any significant impact, and the burden of supporting IPv6 will largely fall to the telecommunications vendors, or other organisations that run large public networks.

IT executives need to check that their organisation has a strategy for dealing with IPv6, largely at their gateway systems, and ensure that this strategy does not get blown out of proportion.

Conclusion: Business architecture is poorly served by IT-centric enterprise architecture teams. While EA teams have the skills to establish detailed technology architecture, they lack the knowledge and understanding of the higher-level business activities that the technology is supporting. Meanwhile business experts who do have an innate understanding of the business landscape lack the skills and tools to create high-fidelity business architecture. To create a complete Enterprise Architecture, organisations should consider splitting responsibility between business and IT areas.

Observations: The terms business architect and business architecture are often used very loosely. In a strict sense business architecture is the subset of an enterprise architecture that deals with the structure and behaviour of organisational assets in support of a business operating model. These organisational assets are generally the non-IT components such as business processes, roles, rules, knowledge, events, locations, services and products.

Most enterprise architecture frameworks (such as TOGAF and Zachman) include the business architecture elements as part of the encompassing enterprise architecture. So in theory, business architects should be members of the enterprise architecture team, just as the information, application and technology architects are.

This is rarely the case.

In IBRS’ experience, virtually all enterprise architecture teams report to the CIO, normally through another senior IT executive, and exclude important business architecture elements such as business processes, functions, services and products. At the moment many organisations are restructuring IT around a “design, build, run” model, and EA is typically positioned in the design stream – alongside areas such as business analysis, business process modelling and usability.

The term “business architect” is a title adopted by people from various areas within an organisation, both inside and outside of IT. These are often senior business analysts, process modellers, strategists or transformation consultants who sit outside the EA group but use the term “business architect” to indicate a more strategic nature to their work.

The challenge with these self-anointed “business architects” is that while they generally possess immense subject matter expertise and extensive networks with key stakeholders, they lack the structured methodology or systemic approach to their work that would qualify it as “architectural”. They often work at the conceptual or business case level, and are generally unaware of the established discipline for managing business architecture found in enterprise architecture frameworks.

It is important to distinguish between these “business architects” and the function of “business architecture”. IBRS is not aware of any successful formalised “business architecture” teams who exist outside of IT and manage the discovery and definition of business architecture using accepted architecture frameworks and tools that are integrated with the related IT architecture. Business architecture can be found implemented in a number of ways:

  • Some organisations have business architecture functions that operate within EA teams but do so in a limited fashion, and lack the skills and knowledge to craft a complete and accurate picture of the way a business operates.

  • Some have business architecture functions that sit alongside EA teams within IT, but are loosely connected, such as business process modellers, rules analysts or service management functions. The information these teams create and manage is often not integrated with the EA view.

  • Some have business architecture teams that sit in the business but manage information in an unstructured manner in isolation of the IT EA and other groups.

Running business architecture out of the business: Enterprise Architecture has largely emerged from IT and is still often perceived as an IT-centric activity. But the true scope of Enterprise Architecture is much broader than just the IT elements of an organisation. Given that EA has been driven by IT, it is not surprising that most EA teams tend to be much stronger on the technology side of enterprise architecture. The business architecture elements are often ignored, or pushed out to separate teams, such as business process modelling or business rules. Rarely are all the business architecture elements managed with the same care and attention as the technology and application areas.

Ideally a business architecture team would sit in a business area, report to a business executive and manage business architecture elements using an established framework and information repository that is shared with (and protected from) their IT architecture colleagues. They would have a separate governance process that is accountable to a single board-level executive. They would act in close concert with IT architecture but be logically separated. In these situations an EA framework that clearly delineates what is business architecture, what is IT architecture, and how they relate is essential.

The diagram below shows how the enterprise architecture function could be split along business and technology lines.

In this situation the interface between business architecture and technology is tightly coupled around applications and information. The governance mechanisms of business architecture and EA would need to be carefully synchronised around these dependencies. A set of principles would need to be established and agreed between the EA and business architecture governance bodies to promote coordination and cooperation. An organisation with a low governance maturity would struggle to implement such a structure effectively.

The diagram below indicates a generic governance structure that could support the operation of a dual business / IT enterprise architecture strategy.

Key to this structure working is ensuring that the information managed by the business and IT areas is unified in a single repository with consistent architectural principles and modelling standards applied across the two areas. The responsible senior business executive must have an appreciation for a systemic approach to business architecture and recognise the importance of the activity as a management discipline.

Next Steps:

  1. Understand whether the Enterprise Architecture team is effectively supporting the business architecture view. To do this have business stakeholders validate selected models of business functions, services or processes. Decide whether they are an accurate and meaningful representation of the business operating model.

  2. If not, consider whether situating the responsibility for business architecture under a business executive would enhance the completeness and accuracy of the enterprise architecture. For this to be successful ensure the following pre-conditions are met:

    1. A mature governance culture exists within the enterprise

    2. A responsible senior business executive is willing to take on the function, understands the value, and appreciates the nature of a structured, systemic approach to architecture definition and management of business activities

    3. A unified set of information management tools is available to support the business and IT architecture teams in establishing and maintaining the enterprise architecture

    4. Most critically, a business unit exists that has a purview across the entire organisation and does not form part of a siloed operation. If this is not the case, then the best place for EA is in IT.

  3. Understand the transition plan for enabling business driven Enterprise Architecture

    1. Initially groom business architects inside the EA team

    2. Next establish a dotted reporting line from the EA team to the nominated business executive

    3. Finally transition business architecture as a separate function to the new arrangements

 

Conclusion: VMware’s vSphere Storage Appliance (VSA) is the beginning of the end of the modular storage market. While built for the low end of the market, the VSA will scale up over time and disrupt the modular storage market. The key benefits of the VSA in a VMware cluster are lower infrastructure complexity, lower capital costs, greater workload agility and a reduced need for specialist IT skills.

SMBs should consider the VSA at their next major infrastructure refresh. Enterprises should experiment with a standalone environment, such as dev/test or a new departmental application, and become familiar with this technology. Enterprises should then create an adoption strategy to replace modular storage in their VMware server clusters as the VSA matures and scales up.

Conclusion: Recent events have shown that IT shared services initiatives do not always live up to their promises. When benefits fail to materialise, emotional rather than logical thinking predominates. Naysayers engage in the fallacy of faulty generalisation, asserting that if one IT shared services venture is deemed to have failed, then the very notion of IT shared services is questionable.

Conclusion: Australian IT organisations should be setting the bar higher to extract maximum value from outsourcing arrangements. Furthermore, where providers have exceeded the expected outcomes, it is often only because those expectations were set so low, with organisations focused on pushing off low-hanging IT functions.

Clearly the blame for allowing these sub-optimal outcomes is shared by both vendor and customer. Organisations must ensure that they are evolving the way in which they manage their outsourcing vendors to take advantage of cloud and utility-based service delivery.

Conclusion: Outsourcing remains a core service delivery model for a significant number of Australian firms. As outsourcing evolves to encompass cloud based services alongside traditional infrastructure outsourcing and managed services relationships, options for the CIO have increased rapidly.

Conclusion: Despite its position as the second largest IT services provider in Australia, and the largest in New Zealand, Hewlett Packard (HP) does not have a consistent or mature end-to-end IT and business service delivery capability across its service lines in Australia and New Zealand (ANZ).

Future customer confidence in HP’s IT outsourcing business relies in part upon the completion of its announced investment in data centres in both countries. In the opinion of HP’s customers and prospects, its application, industry and Business Process Outsourcing (BPO) based services require a similar investment and focus to the IT outsourcing business.

Conclusion: Oracle’s decision to end all further development on Itanium will force HP Integrity customers to make a strategic decision between Oracle and Itanium. Provided the latest version of Oracle software is used, customers have until 2016 to implement this decision. Since Red Hat and Microsoft have also abandoned Itanium, IT organisations must evaluate the long-term viability of this architecture based on its ability to run the applications that matter to the business.

Since high-end UNIX systems typically have a 7-8 year lifespan, organisations must have a strategy before purchasing new systems or undertaking a major upgrade. This strategy will be driven by the degree of dependence on, and satisfaction with, Oracle’s business applications.

Conclusion: In order to be effective, Quality Assurance must be woven into all parts of the organisational fabric. Designing, implementing, and monitoring the use of an appropriate quality management framework is the role performed by a dedicated Quality Assurance Centre of Excellence in the organisation. This internal organisation ties together QA measures that apply to core business processes and the technical QA measures that apply to IT system development and operations. Unless the QA CoE provides useful tools and metrics back to other business units, quality assurance will not be perceived as an essential activity that increases customer satisfaction ratings.

Conclusion: IT departments continually struggle to replace antiquated business systems. As a result, business processes are supported by inefficient and ineffective technologies that diminish business performance. A common cause is that replacement systems fail to meet expectations and as a contingency, legacy systems must linger on to prop up the shortfall. This imposes the costly burden of maintaining duplicate systems and requiring complex and unwieldy integration. Successful replacement of legacy assets requires a clear strategy and dedicated support for a holistic decommissioning process.

Observations: The IT industry is obsessed with the new and the innovative. Better, faster, cheaper technology solutions are seen as a panacea to the ills of the age. Yet in the excitement to adopt new technologies and systems, the enthusiasm to completely remove the old is sometimes lost. This work is unglamorous and not regarded as a professionally valuable or motivating experience. As a result legacy systems remain like barnacles on the hull of IT.

The need for system decommissioning can arise from two main motivations. The cost of using a superseded technology may become greater than the cost of deploying and using a modern replacement. The technical implications of this scenario are explored in further detail in this research note. Alternatively a system can be made obsolete by a changing business model that no longer requires direct system support, as is often the case when outsourcing business functions. In either circumstance it is essential to have business owners driving the decommissioning agenda through a clear business case focused on realising direct economic benefits.

The easy part: Decommissioning in a technical sense is often perceived as the end-of-lifecycle activities undertaken to remove outdated or superseded hardware: sanitisation of sensitive information; audited removal and disposal of hardware; destruction and recycling of components. Sometimes the decommissioning of a system will not involve the retirement of any hardware at all – as might be the case with new applications that will run on existing hardware. In these circumstances end-stage decommissioning is largely concerned with things such as the retention and archival of historical data; the winding down of support and maintenance arrangements; and the reallocation of supporting resources.

From a system perspective this is the very final stage of the decommissioning process and a fairly straightforward affair. What is immensely challenging however is the process by which the system got to the point where these activities could take place.

The hard part: Getting to the point where you can decommission an IT system involves the implementation of the replacement system and the migration of all required information, workloads, processes and supporting functions from the old to the new. This is fraught with difficulty. Firstly, the new system must be specified and built to adequately support the existing business processes. This is often not the case. New systems may be missing critical requirements, have poor performance characteristics or impose unworkable practices on users. Existing systems may also have many hidden dependencies or unknown uses that only emerge during the decommissioning process and require significant extra effort to address. All of these issues create significant cultural momentum among users to retain the legacy system until the new system is “perfect”.

A common scenario is that a new system is built but, due to significant flaws, its use is limited to a subset of the anticipated functionality and the legacy systems it was meant to replace continue to operate. This situation can continue indefinitely and ultimately, due to the lack of resources to redress the issues with the new system, the combination of the old and new becomes the status quo, with the unfortunate attendant increase in operational and support costs. This scenario has played out many times as organisations have attempted to replace suites of bespoke solutions with packaged application suites such as SAP or Siebel.

Phased decommissioning: To offset the risk of introducing new systems, many organisations choose an approach that allows a phased introduction through an integration architecture that supports operating both old and new systems at the same time. While at a conceptual level this seems like a good idea, in practice it results in many complications. Technically, the complexity of the integration required is always drastically underestimated. Legacy systems are riddled with obfuscated business logic that is often nigh on impossible to reverse-engineer and can only be discovered through trial and error. Complex integration logic is also a prime source of major performance problems. Running a business process across two systems creates enormous monitoring and reporting headaches.

Direct decommissioning: The alternative of a “big-bang” cutover is challenged by the complexity and size of data migration required to implement such an approach. Many new systems are designed to enforce much higher data quality standards than legacy systems, creating a data conversion conundrum. Interpreting poorly structured legacy data can be dependent on incomprehensible business rules, making the translation of some data from old to new systems almost impossible.

Therefore it is critical to first ensure the new system is up to the job before decommissioning the old system. This may sound rather obvious, but while poor IT delivery can often be masked by well marketed scope reductions, creative delivery phasing and limited user deployment, the decommissioning status stands as a stark measure of progress. Decommissioning progress can be measured across three key performance indicators (a simple tracking sketch follows the list):

  1. the percentage of working, tested functionality in new systems that is required to replace all equivalent legacy functions

  2. the percentage of workloads and historical information that have been migrated from legacy system to new systems

  3. the percentage of functions that have been wound down and removed from operational service in legacy systems.
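
A minimal sketch of tracking these three indicators (the counts are illustrative placeholders, not data from any real programme) makes the point that decommissioning progress is only as good as its weakest measure.

```python
def percentage(done, total):
    """Simple percentage helper; returns 0 when there is nothing to count."""
    return 100.0 * done / total if total else 0.0

# Illustrative counts for a hypothetical legacy replacement programme.
kpis = {
    "replacement functionality built and tested": percentage(done=42, total=60),
    "workloads and history migrated": percentage(done=15, total=60),
    "legacy functions retired": percentage(done=5, total=60),
}

for name, value in kpis.items():
    print(f"{name}: {value:.0f}%")
print(f"Overall decommissioning progress (weakest link): {min(kpis.values()):.0f}%")
```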

Addressing decommissioning must be done explicitly in the business case for any IT investment. The question must be asked – are financial savings or other critical benefits directly linked to the decommissioning of legacy systems? If so, how dependent is the economic viability of the planned investment on the timely achievement of decommissioning objectives? If the business case is sensitive to planned savings from decommissioning, what is the risk mitigation in the case of decommissioning delays? In any case, strong governance, stakeholder management and communication are essential to successfully pursue a decommissioning strategy.

Many IT business cases promise substantial financial benefits through decommissioning, but the reality is that retiring legacy systems drags on far longer than anyone desires.

 

Conclusion: Running a robust, cost efficient data centre requires a scale of operations and capital expenditure that is beyond most ANZ organisations. Organisations that host equipment in their own facilities have a higher business risk. Business management is unlikely to be aware of these risks, and has not signed off on them, leaving the IT organisation exposed.

Business management should ask for a business impact assessment to expose these risks to an appropriate decision-making level. Management can either sign off on these risks or request a mitigation plan. For many organisations, moving to a commercial Tier-2/3 data centre reduces risk without substantially changing the total cost. SMEs should consider migrating to a cloud environment (IaaS and/or SaaS) and getting out of the business of owning and running their own IT infrastructure.

Successful IT architecture is largely about choosing the optimum systems and technologies that enable organisations to achieve their strategic objectives. The right way to choose between architecture options is through an open, timely, visible process that incorporates key stakeholder input, is based on credible evidence and is measured against alignment with organisational needs and priorities. Poor architecture decision making leads to confusion, waste and delay.

Conclusion: When probity and management accountability are rigorously applied in the IT procurement process, a message is sent to all stakeholders, including vendors, that fair and equitable buying decisions will be made.

Conversely, when probity is absent, or lip service only is paid to it, stakeholders may be wary of investing scarce resources in marketing their services and may decide to ‘no bid’ when a tender is issued. The corollary is that the client may not get visibility of the best solutions the market has to offer.

2010 has seen many high-profile IT failures in well-run Australian companies:

  • Westpac’s online banking system was down for about nine hours in August, due to a “software patch”.

  • Virgin Blue had a complete outage of its online reservation system in September, which lasted for about 21 hours. This was caused by data corruption on a solid state disk that appears to have then corrupted the recovery environment. Virgin said this created severe interruption to its business for 11 days and estimated the cost as between $15 million and $20 million.

  • NAB’s core banking system was impacted by the “upload of a corrupted file” in November. This prevented many of its customers from receiving payments or withdrawing funds. The impact of this failure was still being felt some weeks after the initial incident.

  • CBA had processing problems that impacted about five per cent of customers, whose accounts showed a zero balance at ATMs.

  • Vodafone customers experienced poor reception and slow download speeds for over a month after a “software upgrade caused instability” in its system.

In five months Australia has experienced five high-profile failures at five brand-name companies. So how is this possible? Each of these companies has a large, well-funded IT organisation.

Conclusion: As Windows 7 celebrates its first birthday many organisations are contemplating a desktop upgrade. Most desktops were designed more than seven years ago and there are many new technologies and approaches that need to be considered.

For most staff the desktop is a personal experience, making the upgrade a high-profile project. Treating this as just a technical refresh risks creating a technically successful solution that is considered an expensive failure by the business, or of marginal value. To avoid a career-limiting move, approach the desktop upgrade as a business project that has strong links to key business drivers, and structure the implementation to ensure it quickly delivers tangible business benefits.

A spate of poor Service Oriented Architecture (SOA) initiatives has left some thinking that SOA is yesterday’s silver bullet. However, an effective SOA remains an essential foundation for the evolution of enterprise systems. Organisations disillusioned by the promise of SOA should revisit their experiences to understand why business value was not successfully realised. With the right insight into the critical conditions for SOA success, those organisations can realign, and if necessary reactivate, SOA efforts as an integral part of their IT strategy.

Conclusion: Data centres that are less energy efficient will ultimately be more expensive to host in, because customers end up paying for a data centre's excessive power consumption. CIOs should insist on knowing the Power Usage Effectiveness (PUE) score of their data centre service provider, as this score will have a direct impact on pricing. Some data centres are very shy about their PUE, so any PUE claim should be independently verified.
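
A hedged, worked illustration of why PUE flows directly into price (the load and tariff figures are assumptions, not quotes): PUE is the ratio of total facility power to the power delivered to IT equipment, so every kilowatt of IT load is effectively billed as PUE kilowatts of consumption.

```python
# Illustrative figures only: a 100 kW IT load priced at an assumed $0.20 per kWh.
IT_LOAD_KW = 100
TARIFF_PER_KWH = 0.20
HOURS_PER_YEAR = 24 * 365

def annual_power_cost(pue):
    """Total facility power = IT load x PUE; the cost is passed through to the customer."""
    return IT_LOAD_KW * pue * TARIFF_PER_KWH * HOURS_PER_YEAR

for pue in (1.3, 1.8, 2.5):
    print(f"PUE {pue}: ~${annual_power_cost(pue):,.0f} per year")
```

With these assumptions, the gap between a facility running at a PUE of 1.3 and one at 2.5 is roughly $210,000 a year on the same IT load, which is why an unverified PUE claim deserves scepticism.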

The right starting point for Enterprise Architecture (EA) is a clear picture of your organisation’s strategic objectives and desired operational model. If this picture is well formed, an EA can effectively mould the necessary structure and behaviour of business and IT assets over time to maximise business performance. For organisations just beginning the process of developing an EA, it is better to start by concisely and simply documenting an EA vision, rather than attempting to create a detailed EA strategy with complex, formalised frameworks.

Conclusion: CIOs who do not know the level of alignment of their IT strategy and performance with business objectives are potentially flying blind. They could, for instance, be advising management to allocate scarce resources to the wrong problems, or hiring the wrong people, and not know it.

Because alignment is based on perception, CIOs may also be missing the signals from business managers that they are not doing well. Restoring alignment, based on measuring it and taking corrective action when out of alignment, must be a priority of today’s CIOs.

Conclusion: With the release of View 4.5 VMware has failed to move beyond the limitations of a centralised, virtualised desktop (aka VDI) to a robustly managed Dynamic Desktop that supports Full, Virtual and Published Desktops. VMware claims to have eliminated the capital cost barriers to VDI adoption and has introduced a management framework concept called the Modular Desktop that in the long run will enable VMware to expand out of its desktop niche.

VMware will continue to be challenged by Citrix which has much greater experience in the desktop market and has delivered a Dynamic Desktop for over 12 months. Microsoft also has the capability to deliver a Dynamic Desktop, but has yet to articulate it in a robust or compelling way.

ArchiMate is a vendor-neutral, pragmatic and simple visual language for Enterprise Architecture that can help define and communicate architectural solutions to diverse stakeholders. However, the lack of support for transition roadmaps and program management integration means its use should be limited to tactical situations until the planned harmonisation with TOGAF is complete.

Conclusion: The last 15 years was the era of the controller-based storage array. As organisations built ever larger storage networks, the storage array grew in both capacity and functionality. These devices are now extremely powerful, but for many organisations they are overly complex and the unit cost of storage is very high compared to low-end storage.

As the controller-based storage array reaches its plateau of maturity it is ripe for displacement by a disruptive innovation. While no clear product has yet emerged there are four interesting candidates that should be examined to see how storage technology will evolve over the next five years.

‘Superb’ may be a silly name for a car, however the Skoda Superb sits at the top of the Skoda range. It’s the aspirational model, competing with many luxury brands, albeit at a lower price-point.

Conclusion: Client hypervisors have been available from start-up vendors for over a year, but this technology has largely gone unnoticed. The release this month of Citrix XenClient Express will quickly change this and raise the client hypervisor into mainstream awareness.

The client hypervisor is a very interesting technology and much hype will be generated over it; however, its business value is limited. Nonetheless the client hypervisor will be quickly adopted by PC vendors looking for the “next big thing” and it will become common in new desktops and laptops over the next three years. IT organisations should look at the client hypervisor to understand how it can be used to lower desktop TCO or to create new business capabilities in the desktop.

Conclusion: With core Fibre Channel over Ethernet (FCoE) and Converged Enhanced Ethernet standards now ratified, and with major networking vendors having rolled out FCoE products, IT executives should prepare themselves for an onslaught of converged FC and IP networking product marketing.

While FCoE will be the dominant storage protocol in the long run, IT organisations must brush aside vendors’ future/function technobabble and understand the benefits of a converged network in the context of their environment. Only then can the organisation define an adoption strategy that guides how and when storage networking is migrated to FCoE.

Conclusion: Moving from today’s Layered Component model to an Integrated Systems model of IT infrastructure will bring many benefits such as lower operational costs and a more agile infrastructure. However there will be many challenges in undertaking this change, and at the top of the list is the IT infrastructure inertia created by people’s resistance to change and the scale of the investment in the existing technologies.

Rather than focus on the technology, IT executives need to work on the people issues (resistance to change, competency traps, fear of the unknown) and the capital investment issues that are typical in any major program of change.

Conclusion: CIOs must keep all levels of management aware of the impact of extending the organisation’s reach and range of services. Whilst there are obvious benefits from the extension, business managers must understand that it brings with it increased application and IT infrastructure complexity and extra support costs. It also makes the organisation’s network vulnerable to intrusion.

Astute CIOs know that, having alerted management to the impact of extending reach and range, they must also present their strategy for supporting it while minimising the risks – not least to keep their jobs. Without strategies such as those set out below, they put their jobs at risk.

Conclusion: Storage vendors promote storage deduplication as a technology that can increase storage efficiency and reduce storage capital costs. However, since some storage deduplication products have a high capital cost, IT organisations must first understand where it should be used and why, to ensure the investment is recouped. IT organisations must then decide whether storage deduplication is a tactical band-aid whose use should be limited to a specific case, a strategic platform that must be invested in and built out across the enterprise, or something to be avoided entirely.

Conclusion: In a continually evolving business world, organisations with immediate access to quality data can fast-track decision making and gain a competitive edge or be recognised as a leading agency. Critical in sustaining this edge will be the performance of the CIO (Chief Information Officer) in securing and supplying data on demand and ensuring its meaning is understood by business professionals and managers.

Conclusion: When designing a service-oriented architecture it is essential to provide a mechanism for connecting services from different sources. Enterprise Service Bus (ESB) technologies add value when the systems involved don’t make use of shared data formats and communication protocols.

The market now includes a number of mature open source ESB technologies. Selecting the most appropriate option involves looking beyond the technologies and understanding the factors that influence the quality of a service oriented architecture.
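
As a minimal illustration of where that value comes from (the service names and message formats are hypothetical), the sketch below shows the core ESB pattern: producers and consumers keep their native formats, and the bus applies a transformation in between so neither side needs to change.

```python
import json
import xml.etree.ElementTree as ET

# A legacy system emits XML; a newer CRM expects JSON. The "bus" owns the translation.
def legacy_order_feed():
    return "<order><id>A-1001</id><amount>250</amount></order>"

def xml_to_json_adapter(xml_payload):
    """Transformation step the bus applies between producer and consumer."""
    root = ET.fromstring(xml_payload)
    return json.dumps({"order_id": root.findtext("id"),
                       "amount": float(root.findtext("amount"))})

def crm_service(json_payload):
    order = json.loads(json_payload)
    print(f"CRM received order {order['order_id']} for ${order['amount']}")

# A real ESB adds routing, reliability and monitoring around this same basic idea.
crm_service(xml_to_json_adapter(legacy_order_feed()))
```

If every system already shared the same formats and protocols, this translation layer would add little; the evaluation criteria in the note above should weigh how much of this mediation the architecture genuinely needs.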

Conclusion: HP’s acquisition of 3COM, Oracle’s acquisition of Sun Microsystems and Cisco’s move into blade servers are all clear signs that IT infrastructure is at the beginning of another major structural change. These events herald a transition from today’s Layered Components model, where best-of-breed components are purchased from a number of specialist vendors and then integrated by the IT organisation, to an Integrated Systems model where complete systems are purchased from a single vendor, avoiding the need for the IT organisation to act as a Systems Integrator.

IT organisations should look at adopting the Integrated Systems model when the costs and risks of acting as a System Integrator outweigh the benefits of competition at the component level (commoditisation and innovation).

Desktop virtualisation is no longer the hottest topic in the media, however it still gets considerable interest from IT executives. As part of a series of roundtables that I am running on “The Evolution of the Desktop” I have just finished speaking to 28 IT executives on this topic. From these conversations it is clear there is still a strong interest in finding a better way to deliver the desktop that both reduces the TCO and increases agility. That is, simplifies remote access, enables business continuity and speeds up deploying new desktop applications.

The centralised virtual desktop, commonly known as VDI (which was VMware’s product name), was once considered a promising way to achieve these goals. However many IT organisations have discovered that simply moving the desktop into the data centre does not solve the real problem which is the management of the desktop image (the operating system, applications and data). Leading organisations are now recognising that it is necessary to radically change the way they build the desktop image so that the management costs and problems can be radically reduced.

Conclusion: Increasing your data centre efficiency is a journey with clearly defined steps. Organisations should focus on defining clear, measurable objectives, planning and monitoring, rather than on the technology that vendors promote to deliver data centre efficiency.

For the last few years IT has been slowly catching up with the messages of environmentalists such as David Suzuki, David Attenborough, and Tim Flannery (tree-huggers, all of them!). IT has come to the rude awakening that “oh wow, servers run on electricity! And you’re telling me that electricity is made with fossil fuels? And that means that my awesome clustered Exchange server is helping kill the Ozone layer, the whales, and future generations of Icelanders? Shocking! (Pun intended)”

Conclusion: The data centre is an essential IT resource with a finite capacity. Due to the very long lead times and very high capital costs for expanding that capacity, IT organisations must be sure they have sufficient headroom to accommodate near-term growth and a plan enabling long-term growth.

Organisations that run into their data centre’s capacity limits will have significant constraints placed on IT and on business growth. Based on recent incidents at ANZ organisations, this risk may be much greater than you think.

Conclusion: The largest cost for a data centre migration is typically the cost of new hardware deployed to mitigate the risk of hardware failure during the migration. Organisations should look seriously at using a physical to virtual (P2V) process as the basis of their migration strategy to lower hardware costs, lower power consumption, and avoid the risk of hardware failure during the migration. There is also the compelling benefit that the worst case scenario, for any failure mid-project, is a rollback to the status quo.

Conclusion: The role of the traditional service desk has been to act as the single point of contact for clients for operational incidents and to track their resolution. Since one of the objectives of ITIL v3 (IT Infrastructure Library, version 3) is to improve IT infrastructure service delivery, one way to achieve this is to expand the role of the service desk. In its expanded role, the service desk takes on active responsibility for delivery life cycle functions, including implementing continuous service improvements.

Conclusion: Migrating physical servers to virtual machines is a one-off project that requires deep specialised knowledge, and IT organisations should engage a specialist third party to develop the migration plans and to perform the physical to virtual migration. This leaves IT staff free to focus on the acceptance testing of the migrated applications and on learning how to manage the environment to drive the greatest benefit from the new virtualised infrastructure.

IT organisations that have not migrated the majority of their x86 workloads to virtualised servers should evaluate the costs, risks and benefits of this migration, then identify the triggers that can be used to drive this.

Conclusion: Increasing server power density means that the cost of power will become a critical driving force in the data centre market. Data centre operators are now talking about adding costs for power consumption to older metrics based on the number of racks or square metres. These new pricing formulas will favour organisations running virtualised environments. Consequently, many hosted organisations will perform physical to virtual migrations over the next 12 months to reduce both their power consumption and physical space costs.
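As a purely hypothetical illustration of how such blended rack-plus-power formulas change the economics (the rates, consolidation ratio and power draws below are assumptions, not any provider’s actual pricing), a quick sketch:

```python
# Hypothetical comparison of rack-plus-power hosting charges for a physical
# fleet versus a consolidated, virtualised fleet. All rates and workload
# figures are illustrative assumptions, not any provider's actual pricing.

RACK_RATE_PER_MONTH = 2500.0   # $ per rack per month (assumed)
POWER_RATE_PER_KWH = 0.25      # $ per kWh (assumed)
HOURS_PER_MONTH = 730

def monthly_hosting_cost(racks: int, avg_draw_kw: float) -> float:
    """Blended charge: space component plus metered power component."""
    space_cost = racks * RACK_RATE_PER_MONTH
    power_cost = avg_draw_kw * HOURS_PER_MONTH * POWER_RATE_PER_KWH
    return space_cost + power_cost

# 100 lightly loaded physical servers vs. the same workloads consolidated
# onto 10 virtualisation hosts (assumed consolidation ratio of 10:1).
physical = monthly_hosting_cost(racks=10, avg_draw_kw=100 * 0.4)
virtualised = monthly_hosting_cost(racks=2, avg_draw_kw=10 * 0.8)

print(f"Physical fleet:    ${physical:,.0f}/month")
print(f"Virtualised fleet: ${virtualised:,.0f}/month")
```

Under these assumed figures the virtualised fleet attracts roughly a fifth of the physical fleet’s monthly charge, which is why power-based pricing accelerates physical to virtual migrations.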

Conclusion: Departmental computing in most organisations today is pervasive, commonplace and almost impossible to control. Because it is used widely and for multiple purposes, line managers who fail to supervise its use are allowing an unsustainable situation to continue.

Attempts to bring departmental computing under control and minimise the risks, while a worthy objective, will fail unless senior management is committed to fixing the problem and forcing line managers to act. Failure will not only compound the risks, it will increase the hidden (or below the surface) costs of departmental computing.

Conclusion: Many organisations overcomplicate their desktop RFPs with technical jargon while underplaying some of the key operational and commercial considerations associated with their desktop procurement process. The end result can be a contract that, while providing a desktop that meets the organisation’s technical needs, falls down in commercial areas such as competitive pricing over the contract life.

Conclusion: As the number of specialist IT services providers (software, operations and applications) increases each year and organisations choose to engage multiple (technology platform) service providers, organisations must implement tighter systems integration processes. If processes remain unchanged, the number of operational problems will increase, and unless staff skills are updated it will take longer to resolve these problems.

Conclusion: In our November 2008 survey1 we found many organisations are using archiving to manage their rapidly growing unstructured data. On further in-depth research we found that these archiving projects are mostly IT driven, focused on silos of data, and are largely limited to automating storage tiering (HSM) to control storage costs. While this is a sensible starting point, IT organisations could extract more value from archiving by offering enterprise search and eDiscovery to the data owners.

Conclusion: Organisations that allocate insufficient effort to planning for their desktop RFPs run the risk of achieving a sub-optimal outcome from their RFP. Less than competitive pricing over the contract life and a mismatch in buyer and vendor expectations are just two examples of the negative outcomes that can result from inadequate planning.

Conclusion: Despite the challenging economic climate, the data centre is a hive of activity, with many organisations taking a strong interest in consolidating the data centre and running it as a shared service. Savvy managers will take the current economic slowdown as an opportunity to rationalise, consolidate and optimise the existing data centre infrastructure before the next growth cycle starts.

Conclusion: A decision to migrate an enterprise’s desktop operating environment from Microsoft Windows XP to Windows Vista in the near term, or to wait until Windows 7 is available, is both technically and politically complex. The final decision depends heavily upon many interrelated IT infrastructure factors as well as business issues, not the least of which is end-user animosity towards Vista. However, senior IT executives and Enterprise Architects should not dismiss Vista as an option, nor rush to Windows 7, without first carefully evaluating the risks and benefits of each.

Conclusion: SharePoint is rapidly becoming a victim of its own success. Rapid tactical deployments and uptake by individual departmental teams have led to pockets of isolated information, which are growing in size at an alarming rate. In addition, a lack of understanding and planning when developing SharePoint-based solutions is leading to unexpected licensing costs. Organisations must re-evaluate their SharePoint deployments and, if needed, step back and architect their SharePoint implementations if they are to avoid being bitten by their SharePoint projects in the future. Following are four SharePoint deployment scenarios that bite.

Conclusion: Faced with a direction to identify and report on areas where IT costs can be reduced or contained, CIOs must respond by developing a comprehensive cost management program that considers all service delivery options and regards no area as sacred. To maintain credibility with stakeholders and get their buy-in, the CIO must convince them that every expense line will be investigated and ways to reduce it examined, without compromising essential services.

Conclusion: Four years of Service Oriented Architecture hype and a middleware product diet rich in enterprise service buses are starting to take their toll. The drive towards service based application integration often goes hand in hand with unrealistic expectations and simplistic implementations. Instead of a reduction in complexity, the net effect is a shift of complexity from one implementation technology to another. The recipe for shedding spurious complexity involves reducing the (fat) content on enterprise service buses.

Conclusion: Many organisations do not distinguish between backup and archive and assume their backup data is also their archival data. This makes the backup environment overly complex and difficult to operate and creates a very poor archival platform.

Organisations that separate these processes find that backups shrink significantly, resulting in much smaller backup windows and much faster recovery times. This also enables the archival data to be optimised to meet desired business requirements. That is, cost, retrieval time, compliance, discovery and so on.
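To make the backup-window point concrete, the sketch below uses assumed figures (the total data volume, static fraction and throughput are illustrative, not benchmarks) to show how removing archival data from the backup set shrinks the window:

```python
# Illustrative arithmetic only: how moving static data out of the backup set
# and into an archive tier shrinks the backup window. Figures are assumptions.

def backup_window_hours(data_tb: float, throughput_tb_per_hour: float) -> float:
    """Time to complete a full backup at a given effective throughput."""
    return data_tb / throughput_tb_per_hour

TOTAL_DATA_TB = 60.0          # total unstructured data (assumed)
STATIC_FRACTION = 0.7         # share not modified in the last 90 days (assumed)
THROUGHPUT_TB_PER_HOUR = 2.0  # effective backup throughput (assumed)

combined = backup_window_hours(TOTAL_DATA_TB, THROUGHPUT_TB_PER_HOUR)
separated = backup_window_hours(TOTAL_DATA_TB * (1 - STATIC_FRACTION),
                                THROUGHPUT_TB_PER_HOUR)

print(f"Backup window, backup doubling as archive: {combined:.1f} h")
print(f"Backup window, archive separated out:      {separated:.1f} h")
```

The same arithmetic works in reverse for recovery times, since only the active data set needs to be restored from backup.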

Conclusion: Virtual Desktops were one of the hottest infrastructure topics of 2008. However, tight IT budgets due to the economic downturn, and mounting evidence that Virtual Desktops are more expensive than well managed full desktops, will dampen enthusiasm for this technology in 2009.

Based on recent discussions with a cross-section of large and small organisations, we confirm our long held view that Virtual Desktops are not a general purpose replacement for a Full Desktop and that reports of mass roll-outs of Virtual Desktops are pure vendor hype! As predicted, we did find some organisations using Virtual Desktops in a limited fashion for a specific niche.

Conclusion: While much has been written about the release of Microsoft’s hypervisor into a virtualisation market already dominated by VMware, there is a quiet battle being fought for third place between XEN and KVM.

With KVM stealing the open source thought leadership from XEN, and XEN being acquired by Citrix, which is better known for desktop products, the position of third place is now up for grabs. The net result is that XEN will remain a niche product in the virtualisation market.

Conclusion: Most major IT implementations, such as ERP roll-outs, do not fully realise their original objectives. One symptom is that planned functionality is not utilised by staff to the fullest extent. Another is a tendency for staff to fall back to their comfort zones, using manually-maintained records, spreadsheets and the like. The root cause is that insufficient attention is paid to dealing with the human aspects of change. The knock-on effects are largely financial. If additional resources need to be brought in to effect lasting change, this dilutes the strength of the original business case, not only in terms of outright cost but in the time taken to achieve desired outcomes. If left untreated, the full benefits may never be realised.

Conclusion: Reducing the environmental footprint of the Desktop has become an important topic for many organisations. Organisations that have undertaken a Green Desktop initiative report excellent returns from low risk operational and behavioural changes that avoid the massive capital projects associated with radical changes to the desktop deployment model such as Thin Desktops.

Conclusion: Many organisations have made a major commitment to ITIL to lift their IT service delivery1 capabilities. ITIL is valuable in providing a lingua franca for IT service delivery professionals and is an excellent frame of reference for process improvement. However, a single-minded focus on ITIL to improve service delivery is akin to taking vitamins as the only strategy for improving our health. Extending that analogy, establishing an effective IT service delivery strategy first requires a general medical examination. Then, using the results obtained, a holistic and targeted program can be developed aimed at improving overall health outcomes.

Conclusion: Reducing the environmental footprint of the Desktop has become an important topic for many organisations. Astute CIOs will implement simple measurement processes to test vendors’ claims and separate the ‘green washing hype’ from the truly effective changes.

Conclusion: Historically, the main barriers to mobility were the high cost and the limited capabilities of the mobile devices and the mobile data network. With network and device costs plummeting, 3G network bandwidth good enough, and the computing capacity of recent mobile devices rivalling laptops from a few years ago, these barriers have now been all but eliminated.

The new mobility barriers are the lack of a robust Identity and Access Management infrastructure to securely authenticate users and determine their access level and the rigid Standard Operating Environment (SOE) currently used to manage desktop complexity.

Conclusion: While Virtual Desktops are one of the hottest infrastructure topics of 2008, simply virtualising a typical desktop environment and migrating that to the data centre will prove to be a very costly mistake. Instead organisations should look beyond the Virtual Desktop hype and focus on implementing a Dynamic Desktop architecture that increases desktop agility and lowers the total cost of ownership. Once adopted the Dynamic Desktop architecture can be used with any type of desktop deployment method, i.e., Full Desktop, Virtual Desktop or Terminal Services, and becomes the foundation for reducing desktop cost and increasing desktop flexibility.

Conclusion: The release of Microsoft’s hypervisor into a market already dominated by VMware will trigger a tidal wave of marketing from Microsoft that is designed to move the virtualisation “goal-posts”, enabling Microsoft to score some desperately needed wins. These messages will be targeted at CEOs, CIOs and Departmental Managers who will then likely ask IT Architects and Infrastructure Managers why they are not using Microsoft’s virtualisation products.

To prepare for this onslaught IT professionals must understand both Microsoft’s strategy for shifting the goal-posts, and how to deal with it, and the strengths and weaknesses of Microsoft’s new hypervisor.

Conclusion: Dramatically increasing energy costs mean that organisations must explore and implement approaches that reduce or contain the energy demands of their data centres. While ostensibly driven by long term green concerns, the real short term drivers are economic.

Conclusion: In most organisations Windows based desktops are ubiquitous and the hardware and software has largely become a commodity. However, in spite of this the desktop Total Cost of Ownership (TCO) still varies wildly across organisations.

The major source of variation in TCO is the relative maturity of an organisation’s desktop management processes. CIOs seeking to lower their desktop TCO should first look closely at their desktop management processes before evaluating new desktop deployment models, i.e., Virtual Desktops, Thin Desktops.

Conclusion: Many non-finance matters have to be considered before entering a leasing arrangement for IT assets. IT and Finance managers must weigh up the merits of each situation and decide whether it is advantageous to buy the asset and maintain control of it, or lease it and free up the cash for business growth. Having a blanket policy to always buy or always lease makes little business sense.

Conclusion: Riding on the coat tails of Server Virtualisation1, Virtual Desktops have become one of the hottest infrastructure topics of 2008. Vendors promote Virtual Desktops as a desktop replacement that eliminates the common concerns of a traditional full desktop, i.e., high cost, complex management, slow provisioning, security concerns and inflexibility.

Unfortunately, discussions about Virtual Desktops are often clouded by misinformation and unrealistic expectations that obscure the issues and stifle investigation. Too often the stated benefits are not closely examined because the answers seem self-evident. Desktop managers who fail to carefully examine each of the stated benefits may find themselves swept away by the hype and end up with an even more expensive and complex desktop environment.

Since the turn of the decade IT infrastructure has undergone an incredible transformation driven by the rapid commoditisation of servers, storage and operating systems. In the last 10 years relatively expensive high-end proprietary servers have given way to cheap but powerful commodity servers. In the same period expensive and inflexible internal storage has given way to shared, networked storage, and the various vendors’ flavours of UNIX have fallen to the two mass market operating systems, Linux and Windows Server.

Conclusion: One of the more common mistakes that organisations make in implementing Service Oriented Architecture (SOA) is assuming that introducing the concept of services into the architecture and conforming to SOA-related technical industry standards amounts to a sufficient condition for the development of a maintainable software architecture. Getting software design right additionally requires a solid component architecture underneath the visible layer of business services.
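A minimal sketch of what this might look like in practice, using hypothetical names: the published business service is a thin facade, while persistence and pricing rules live in internal components that can be tested and evolved independently of the service contract:

```python
# Sketch only: a published business service delegating to internal components.
# All class and method names are illustrative assumptions, not a standard API.

from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    amount: float

class OrderRepository:
    """Internal component: persistence concerns only."""
    def __init__(self):
        self._store = {}

    def save(self, order: Order) -> None:
        self._store[order.order_id] = order

class DiscountPolicy:
    """Internal component: pricing rules only."""
    def apply(self, amount: float) -> float:
        return amount * 0.9 if amount > 1000 else amount

class OrderService:
    """The visible business service: orchestration only, no buried logic."""
    def __init__(self, repository: OrderRepository, discounts: DiscountPolicy):
        self._repository = repository
        self._discounts = discounts

    def place_order(self, order_id: str, amount: float) -> Order:
        order = Order(order_id, self._discounts.apply(amount))
        self._repository.save(order)
        return order

service = OrderService(OrderRepository(), DiscountPolicy())
print(service.place_order("A-100", 1500.0))
```

The point of the separation is that conforming to the service contract says nothing about the quality of what sits behind it; the component structure underneath is where maintainability is won or lost.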

Conclusion: IT organisations considering implementing Thin Desktops should first examine the three main architectures (i.e., Terminal Services, Virtual Desktops and Blade PCs), understanding the different costs, risks and benefits for each. These should be compared to the target desktop ‘use case’ (i.e., call centre, knowledge worker, engineer) to determine which architecture is the best fit.

Conclusion: There is ample evidence from industry studies1 that the IT systems environment is becoming more complex to manage, and this is unlikely to change. The reasons for the extra complexity range from the need to offer enhanced services to clients, to legislative compliance, to the need to manage an increasing number of interactions between people in today’s workplace.

Unless the impact is addressed, systems support costs will increase, the delivery of enhanced client services with a systems component will become slower, and cost competitiveness will be adversely affected.

Put simply, IT management’s challenge is to minimise the increasing cost of systems complexity, while ensuring the organisation’s information systems deliver quality solutions. Business managers for their part must minimise exception processing and include realistic systems life cycle support costs in their evaluation of enhanced client services.

Conclusion: Many consulting firms, software suppliers and industry associations promote their version of what constitutes best practice in a discipline such as IT, Finance or Human Capital Management.

What is often not mentioned to the client is that there is no world-wide repository of best practices for a discipline, and that the definitions put forward typically assume the client has no operational cost constraints and that the highest quality services must be delivered.

This is not to say that time spent understanding what constitutes best practice is wasted. Indeed it is most valuable as it helps management identify the practices and their attributes it must strive to implement to deliver quality services at an affordable price.

Without knowing what constitutes best practice for the organisation, management is in the dark when determining service level priorities. Furthermore, knowing the attributes of best practice for a discipline helps management do continuous self-assessments, identify the gap between actual and expected performance and develop action plans to bridge the gap.

Conclusion: As the realisation dawns that x86 server virtualisation is a key component of a modern infrastructure stack, and not just an operating system feature, the major software vendors have rushed into this billion dollar market to stake out their claim1. While this will result in significantly increased levels of vendor FUD (fear, uncertainty and doubt) over the next 18 months, it will also significantly increase competition that will further lower cost and drive greater innovation.

Savvy IT organisations recognise that server virtualisation is the most important data centre infrastructure trend since the shift towards Wintel servers in the late ’90s and will leverage the vendor’s “gold fever” to their financial benefit. This will require experienced technologists who can navigate the FUD and seasoned negotiators who can safely drive a bargain with these vendors.

Conclusion: There is never a good time to break the legacy cycle. A significant number of the core systems used in large corporations today have a history that extends over two or three decades. New applications, implemented in modern technologies, often still require additional functionality to be added to legacy back-end systems. But new is not necessarily better, and an educational deficit in the IT workforce is a major part of the problem.

Conclusion: Over the last 20 years data management has typically focused on structured data, and as a result most IT organisations now do a good job managing structured data and turning it into useful information that supports the business. However, many IT organisations have reached a tipping point where more than half of all their electronic data is unstructured, and the very high growth rates for unstructured data will ensure that this capacity balance rapidly shifts away from the traditional structured data “comfort zone”.

To cope with this rapid transition to unstructured data, IT organisations must learn to manage unstructured data as successfully as they currently manage structured data. To accomplish this, the IT organisation needs to work with the business to define a data management policy and then implement unstructured data management systems to enforce that policy. Since e-mail is usually the largest unstructured data repository, and often the de-facto records management system, we recommend starting with e-mail.
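As a sketch of what policy enforcement might look like for that first e-mail step (the mailbox structures and retention period below are assumptions for illustration, not a specific product’s API):

```python
# Hypothetical sketch of enforcing a simple e-mail retention policy: items older
# than the agreed retention period are moved from the live mailbox store to an
# archive tier. The data structures here are assumptions for illustration only.

from datetime import datetime, timedelta

RETENTION_DAYS = 365  # assumed business-agreed retention for live mailboxes

def apply_retention(messages, archive, now=None):
    """Move messages past the retention threshold into the archive store."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=RETENTION_DAYS)
    for message in messages:
        if message["received"] < cutoff:
            archive.append(message)   # stand-in for a real archive API call
            message["archived"] = True
    return archive

mailbox = [
    {"id": 1, "received": datetime(2007, 3, 1), "archived": False},
    {"id": 2, "received": datetime.now(), "archived": False},
]
archive_store = []
apply_retention(mailbox, archive_store)
print(f"{len(archive_store)} message(s) moved to archive")
```

The value is not in the mechanics, which any archiving product can supply, but in having the business agree the retention rule that the mechanics enforce.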

Conclusion: Since VMware’s announcement of the Virtual Desktop Infrastructure (VDI) initiative there has been a strong resurgence of interest in Thin Desktops. While there is a business case for a Thin Desktop, the benefits are often overhyped and it is not the panacea for desktop deployment that some vendors portray it to be.

While nearly every organisation uses Citrix Presentation Server or Microsoft Terminal Services, only a minority (6%) use these as a strategic technology to deliver an entire desktop, while the majority simply use them as a tactical solution to specific application delivery issues. In spite of VMware’s incredible success with Server Virtualisation, VDI will most likely follow in the footsteps of Citrix and Microsoft Terminal Services and be limited to a tactical solution instead of being a replacement for traditional desktop deployment.

Conclusion: Server virtualisation will continue to be one of the most important IT infrastructure trends for the next few years. However, the same cannot be said of storage virtualisation which is poorly defined, poorly understood and not widely used. Infrastructure managers must understand the realities of storage virtualisation, learn to separate vendor hype from facts, and discover where it can be applied to give real benefits.

Over the next 2 years network based storage virtualisation will remain a niche, while thin provisioning enjoys rapid adoption and becomes the storage virtualisation technique most talked about.
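To illustrate why thin provisioning is attractive, the sketch below uses assumed volume sizes to show logical allocation outrunning the physical capacity actually purchased:

```python
# Illustrative figures only: thin provisioning lets the allocated (logical)
# capacity exceed the physical capacity actually installed, because volumes
# consume disk only as data is written. All numbers below are assumptions.

volumes = {                 # logical size vs. space actually written (TB)
    "erp":   (10.0, 3.5),
    "mail":  (8.0, 2.0),
    "files": (20.0, 6.5),
}

allocated = sum(size for size, _ in volumes.values())
consumed = sum(used for _, used in volumes.values())
physical_capacity = 15.0    # TB actually installed (assumed)

print(f"Allocated to hosts : {allocated:.1f} TB")
print(f"Physically consumed: {consumed:.1f} TB")
print(f"Over-subscription  : {allocated / physical_capacity:.1f}x")
print(f"Headroom remaining : {physical_capacity - consumed:.1f} TB")
```

The benefit is deferred capital spend; the corresponding risk is that consumption quietly approaches the physical limit, so monitoring the headroom figure becomes an operational necessity.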