IT Operational Excellence

When IT departments are tuned to run their best, they achieve more, spend less and drive success back into the organisations they support.

IT operational excellence is an approach that helps to ensure IT departments run efficiently and deliver great service. Without an operational excellence philosophy, IT departments lack vision and strategy, are slow to adapt and are more likely to be bogged down by trivial issues.

Achieving IT operational excellence isn't about implementing one particular framework. It is a mindset geared towards continuous improvement and performance that incorporates multiple principles designed to align team goals around delivering value to the customer.

IBRS can help organisations achieve IT operational excellence by revealing the most effective ways to leverage resources and identify the most valuable activities and differentiators in a given IT team.

A very busy month with a significant amount of material. A lot of deals have been done and there are a large number of projects under consideration. Additionally there are some interesting initiatives to be found in the other news section.

Read more ...

Conclusion: Australian IT organisations should be setting the bar higher to extract maximum value from outsourcing arrangements. Where providers have exceeded expected outcomes, it is often only because those expectations were set so low, with organisations focused on pushing off low-hanging IT functions.

Clearly the blame in allowing the sub-optimal outcomes to occur is shared by both vendor and customer. Organisations must ensure that they are evolving the way in which they manage their outsourcing vendors to take advantage of cloud and utility based service delivery.

Read more ...

Conclusion: To date vendors such as Microsoft and Apple have been able to exploit operating systems as an effective mechanism for creating locked-in technology ecosystems, but the emergence of the HTML5 standard and Google Chrome sees the value of such ecosystems tending towards zero.

Providers of Cloud Computing services are united by the goal of minimising the relevance of in-house IT, from hardware right up to operating systems and higher-level infrastructure software. Enterprise application vendors such as SAP and Salesforce.com are pulling in the same direction. To avoid sunk IT costs and a dangerous level of technology lock-in, any further developments of in-house architectures and applications that ignore this trend should be re-examined.

Read more ...

Related Articles:

"The Art of Lock-In Part 2" IBRS, 2011-07-26

"The Art of Lock-In Part 3" IBRS, 2011-08-24

Conclusion: With most organisations now completely dependent on IT systems for their day-to-day operations and ongoing viability, ensuring the availability and recoverability of these systems is one of the IT organisation’s most important responsibilities. However, like many other forms of insurance, disaster recovery planning is not seen as urgent by IT or the business, and often fails to meet the requirements of the business.

IT executives need to look for the early warning signs that their disaster recovery plan is compromised, and if found, take action to defuse this ticking time-bomb that could blow up their career.

Read more ...

Conclusion: When reviewing options to reduce IT costs, ensure the application systems deployment strategy is included in the list of tasks, in case the current strategy is costing more than expected and the benefits are proving elusive. Unfortunately this review is often overlooked because the perceived ‘cost of switching’ to other solutions and the business risks are viewed as too high, and the task is seen as a distraction from day-to-day business operations. CIOs must disabuse management of these views.

Read more ...

A very exciting month! Not just lots of deals, but big ones too (mostly due to NBN Co). No real tenders this month, but lots of project announcements in the Queensland Government budget in the wake of flood recovery.

Read more ...

The CIO walks into the boardroom. He proudly tells the board that he’s hired “Global System Integration Leader” to be the prime SI for the organisation’s upgraded ERP system. The board fires him on the spot. When he asks for an explanation, the board tells him that it’s the third time he’s hired the “Global System Integration Leader” for a major system integration engagement, and the first two times failed to achieve objectives. He won’t get a third chance. As he makes his way to the lift well, he is heard to exclaim in a loud, high-pitched voice: “But you don’t understand, they get it right once in every three times – they’re due!”

Read more ...

Conclusion: Most branch office data is poorly protected by the organisation’s existing backup strategy. Recent improvements in network connectivity, and the commoditisation of advanced deduplication techniques, fundamentally change the landscape and make highly automated, reliable and cost-effective branch office backup affordable to most organisations.

Organisations with extensive branch office data that is not adequately protected should re-evaluate their branch office backup strategy.

Read more ...

Conclusion: Outsourcing remains a core service delivery model for a significant number of Australian firms. As outsourcing evolves to encompass cloud based services alongside traditional infrastructure outsourcing and managed services relationships, options for the CIO have increased rapidly.

Read more ...

Conclusion: In many organisations there is a major disconnect between user expectations relating to software quality attributes (reliability of applications, intuitive user interfaces, correctness of data, fast recovery from service disruption, and so on) and expectations relating to the costs of providing applications that meet those attributes. The desire to reduce IT costs easily leads to a situation where quality is compromised to a degree that is unacceptable to users. There are three possible solutions:

  1. Invest heavily in quality assurance measures,
  2. Focus on the most important software features at the expense of less important ones, or
  3. Tap into available tacit domain knowledge to simplify the organisation, its processes, and its systems.

Read more ...

Software: Ah, what a day. Do you know you’re the 53,184th person today asking me for an account balance? What is it with humans, can’t you even remember the transactions you’ve performed over the last month? Anyway, your balance is $13,587.52. Is there anything else that I can help you with?

Customer: Hmm, I would have expected a balance of at least $15,000. Are you sure it’s 13,500?

Software: 13,500? I said $13,587.52. Look, I’m keeping track of all the transactions I get, and I never make any mistakes in adding numbers.

Customer: This doesn’t make sense. You should have received a payment of more than $2,000 earlier this week.

Read more ...

Conclusion: Relationship Managers are most effective when they can act as trusted advisors to business managers in how to best use existing IT services while helping them enhance offerings to gain comparative or competitive advantages.

Read more ...

Conclusion: Despite its position as the second largest IT services provider in Australia, and the largest in New Zealand, Hewlett Packard (HP) does not have a consistent or mature end-to-end IT and business service delivery capability across its service lines in Australia and New Zealand (ANZ).

Future customer confidence in the IT Outsourcing business of HP is reliant in part upon the completion of announced investment in data centres in both countries. It is the opinion of HP’s customers and prospects that its application, industry and Business Process Outsourcing (BPO) based services require a similar investment and focus to the IT outsourcing business.

Read more ...

Conclusion: Oracle’s decision to end all further development for Itanium will force HP Integrity customers to make a strategic choice between Oracle and Itanium. Provided the latest version of Oracle software is used, customers have until 2016 to implement this decision. Since Red Hat and Microsoft have also abandoned Itanium, IT organisations must evaluate the long-term viability of this architecture based on its ability to run the applications that matter to the business.

Since high-end UNIX systems typically have a 7-8 year lifespan, organisations must have a strategy before purchasing new systems or undertaking a major upgrade. This strategy will be driven by the degree of dependence on, and satisfaction with, Oracle’s business applications.

Read more ...

This month’s outsourcing highlights are especially interesting: there has been a lot of activity in the buying and selling of outsourcing businesses and specialised business units, along with many announcements of new or expanded service offerings from vendors. You’ll see examples of this in the general news section, as well as an article arguing that BPO firms are less likely to experience growth if they have a singular focus rather than several service streams, such as IT services. Together these point to the need for diversification in outsourcing service offerings, alongside streamlined, expanded and tightly targeted offerings in specialist industries (such as health care, mining or cloud-specific services).

Read more ...

Activity was fairly ordinary this month, with deals thin on the ground and sub-standard. We heard a couple of significant deals were going to be announced on the 29th, but only one, between IBM and NBN, eventuated.

Read more ...

Conclusion: Analysis of Microsoft’s recently announced licensing model for education suggests that savings of up to 60% are possible for K-6 schools, with 30% savings for years 7-12. Furthermore, Microsoft’s new cloud-based offerings provide similar opportunities for licensing rationalisation. Educational organisations planning desktop migration must carefully assess these new licensing and deployment options in order to gain the most advantage from Microsoft’s new licensing models. The licensing costs involved also raise questions regarding the pedagogical value of take-home netbooks.

Read more ...

Conclusion: In order to be effective, Quality Assurance must be woven into all parts of the organisational fabric. Designing, implementing, and monitoring the use of an appropriate quality management framework is the role performed by a dedicated Quality Assurance Centre of Excellence in the organisation. This internal organisation ties together QA measures that apply to core business processes and the technical QA measures that apply to IT system development and operations. Unless the QA CoE provides useful tools and metrics back to other business units, quality assurance will not be perceived as an essential activity that increases customer satisfaction ratings.

Read more ...

Conclusion: In 2011 Chinese-based IT services providers will start to appear in the Australian IT marketplace. Clearly, their impact will be modest at first, although for certain organisations there is the potential to benefit from engagement.

While they use Indian-based firms as benchmarks, Chinese firms differ significantly from Indian-based services providers. Despite sharing the same offshoring model, Chinese firms are more engineering-focused and significantly less mature. Be aware that engaging Chinese providers brings specific benefits and challenges.

Read more ...

Conclusion: RFTs (Requests for Tender) increasingly contain NFRs (Non-Functional Requirements) describing the desired attributes of the systems solution or services being sought. Attributes sought vary from those directly related to products and services, such as scalability and high availability, to strategic management capabilities.

NFRs are needed to help differentiate tenderers due to the commoditisation of products and services. Astute tenderers know they have to submit a compelling value proposition complemented by initiatives to convince clients they can deliver what is required. Clients likewise need to define achievable NFRs, be discerning assessors of responses, and be able to hold tenderers accountable.

Read more ...

Conclusion: IT departments continually struggle to replace antiquated business systems. As a result, business processes are supported by inefficient and ineffective technologies that diminish business performance. A common cause is that replacement systems fail to meet expectations and as a contingency, legacy systems must linger on to prop up the shortfall. This imposes the costly burden of maintaining duplicate systems and requiring complex and unwieldy integration. Successful replacement of legacy assets requires a clear strategy and dedicated support for a holistic decommissioning process.

Observations: The IT industry is obsessed with the new and the innovative. Better, faster, cheaper technology solutions are seen as a panacea to the ills of the age. Yet in the excitement to adopt new technologies and systems, the enthusiasm to completely remove the old is sometimes lost. This work is unglamorous and not regarded as a professionally valuable or motivating experience. As a result legacy systems remain like barnacles on the hull of IT.

The need for system decommissioning can arise from two main motivations. The cost of using a superseded technology may become greater than the cost of deploying and using a modern replacement. The technical implications of this scenario are explored in further detail in this research note. Alternatively a system can be made obsolete by a changing business model that no longer requires direct system support, as is often the case when outsourcing business functions. In either circumstance it is essential to have business owners driving the decommissioning agenda through a clear business case focused on realising direct economic benefits.

The easy part: Decommissioning in a technical sense is often perceived as the end-of-lifecycle activities undertaken to remove outdated or superseded hardware: sanitisation of sensitive information; audited removal and disposal of hardware; and destruction and recycling of components. Sometimes the decommissioning of a system will not involve the retirement of any hardware at all – as might be the case with new applications that will run on existing hardware. In these circumstances end-stage decommissioning is largely concerned with the retention and archival of historical data, the winding down of support and maintenance arrangements, and the reallocation of supporting resources.

From a system perspective this is the very final stage of the decommissioning process and a fairly straightforward affair. What is immensely challenging however is the process by which the system got to the point where these activities could take place.

The hard part: Getting to the point where you can decommission an IT system involves the implementation of the replacement system and the migration of all required information, workloads, processes and supporting functions from the old to the new. This is fraught with difficulty. Firstly, the new system must be specified and built to adequately support the existing business processes. This is often not the case. New systems may be missing critical requirements, have poor performance characteristics or impose unworkable practices on users. Existing systems may also have many hidden dependencies or unknown uses that only emerge during the decommissioning process and require significant extra effort to address. All of these issues create significant cultural momentum among users to retain the legacy system until the new system is “perfect”.

A common scenario is that a new system is built, but due to significant flaws, its use is limited to a subset of the anticipated functionality and the legacy systems that it was meant to replace continue to operate. This situation can continue indefinitely and, ultimately due to the lack of resources to redress the issues with the new system, the combination of the old and new becomes the status quo, with the unfortunate attendant increase in operational and support costs. This scenario has played out many times as organisations have attempted to replace suites of bespoke solutions with packaged application suites such as SAP or Siebel.

Phased decommissioning: To offset the risk of introducing new systems, many organisations choose an approach that allows a phased introduction through an integration architecture that supports operating both old and new systems at the same time. While at a conceptual level this seems like a good idea, in practice it results in many complications. Technically, the complexity of the integration required is always drastically under-estimated. Legacy systems are riddled with obfuscated business logic that is often nigh on impossible to reverse-engineer and can only be discovered through trial and error. Complex integration logic is also a prime source of major performance problems. Running a business process across two systems creates enormous monitoring and reporting headaches.

Direct decommissioning: The alternative of a “big-bang” cutover is challenged by the complexity and size of data migration required to implement such an approach. Many new systems are designed to enforce much higher data quality standards than legacy systems, creating a data conversion conundrum. Interpreting poorly structured legacy data can be dependent on incomprehensible business rules, making the translation of some data from old to new systems almost impossible.

Therefore it is critical to firstly ensure your new system is up to the job before decommissioning the old system. This may sound rather obvious, but while poor IT delivery can often be masked by well marketed scope reductions, creative delivery phasing and limited user deployment – the decommissioning status stands as a stark measure of progress. Measuring the decommissioning process can be spread across three key performance indicators:

  1. the percentage of working, tested functionality in new systems that is required to replace all equivalent legacy functions

  2. the percentage of workloads and historical information that has been migrated from legacy systems to new systems

  3. the percentage of functions that have been wound down and removed from operational service in legacy systems.
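
These three indicators are simple ratios that can be tracked on a dashboard. The sketch below is illustrative only; the class, field names and figures are assumptions for demonstration, not part of the original research note.

```python
from dataclasses import dataclass

@dataclass
class DecommissioningStatus:
    """Illustrative snapshot of a legacy decommissioning programme."""
    functions_required: int       # legacy functions the new system must replace
    functions_replaced: int       # of those, working and tested in the new system
    workloads_total: int          # workloads/historical data sets to migrate
    workloads_migrated: int       # migrated so far
    legacy_functions_total: int   # functions still defined in the legacy system
    legacy_functions_retired: int # wound down and removed from service

    def kpis(self) -> dict:
        # Each KPI is a percentage of completed work against the total scope.
        def pct(done: int, total: int) -> float:
            return round(100.0 * done / total, 1) if total else 100.0
        return {
            "functionality_replaced_pct": pct(self.functions_replaced, self.functions_required),
            "workloads_migrated_pct": pct(self.workloads_migrated, self.workloads_total),
            "legacy_retired_pct": pct(self.legacy_functions_retired, self.legacy_functions_total),
        }

status = DecommissioningStatus(120, 90, 40, 28, 120, 30)
print(status.kpis())
# {'functionality_replaced_pct': 75.0, 'workloads_migrated_pct': 70.0, 'legacy_retired_pct': 25.0}
```

A gap such as the one above – 75% of functionality replaced but only 25% of legacy functions retired – is exactly the stark measure of progress that marketing-led scope reductions can otherwise hide.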

Addressing decommissioning must be done explicitly in the business case for any IT investment. The question must be asked – are financial savings or other critical benefits directly linked to the decommissioning of legacy systems? If so, how dependent is the economic viability of the planned investment on the timely achievement of decommissioning objectives? If the business case is sensitive to planned savings from decommissioning, what is the risk mitigation in the case of decommissioning delays? In any case, strong governance, stakeholder management and communication are essential to successfully pursue a decommissioning strategy.

Many IT business cases promise substantial financial benefits through decommissioning, but the reality is that retiring legacy systems drags on far longer than anyone desires.


Conclusion: Organisations that receive competitive and insightful responses to their RFTs for products and services know they do not come their way by accident, but due to sound planning and conscientious execution of the bid process.

Conversely, organisations that rush the bid process and give potential suppliers little warning of the RFT’s availability and insufficient time to respond are likely to find fewer than expected responses, or even an empty tender box, on the closing day.

Read more ...

Conclusion: Running a robust, cost efficient data centre requires a scale of operations and capital expenditure that is beyond most ANZ organisations. Organisations that host equipment in their own facilities have a higher business risk. Business management is unlikely to be aware of these risks, and has not signed off on them, leaving the IT organisation exposed.

Business management should ask for a business impact assessment to expose these risks at an appropriate decision-making level. Management can either sign off on these risks or request a mitigation plan. For many organisations, moving to a commercial Tier-2/3 data centre reduces risk without substantially changing the total cost. SMEs should consider migrating to a cloud environment (IaaS and/or SaaS) and get out of the business of owning and running their own IT infrastructure.

Read more ...


Successful IT architecture is largely about choosing the optimum systems and technologies that enable organisations to achieve their strategic objectives. The right way to choose between architecture options is through an open, timely, visible process that incorporates key stakeholder input, is based on credible evidence and is measured against alignment with organisational needs and priorities. Poor architecture decision making leads to confusion, waste and delay.

Read more ...

Conclusion: Microsoft Office 365 represents the biggest change in Microsoft since the departure of Bill Gates. While Microsoft’s evolution of its Business Productivity Online Suite to Office 365 is interesting from a technology perspective, the most important aspect of this announcement is Office 365’s licensing: Microsoft will finally offer its Office suite on a per-user basis. We now have an entirely new Microsoft licensing landscape to work with. The new licensing and deployment possibilities provided by Office 365 should be examined as part of new SOE (Standard Operating Environment) initiatives.

Read more ...

Conclusion: When probity and management accountability are rigorously applied in the IT procurement process, a message is sent to all stakeholders, including vendors, that fair and equitable buying decisions will be made.

Conversely, when probity is absent or lip service only is paid to it, stakeholders may be wary of investing scarce resources to market their services and potentially decide to ‘no bid’ when a tender is issued. The corollary is the client may not get visibility to the best solutions the market has to offer.

Read more ...

Not a lot of deals this month, but a couple of really big deals that are interesting. There was a lot more general news, and as we’re at the beginning of another year, industry trends and reviews of developments in 2010 were a focus.

Read more ...

Conclusion: For many organisations the question of thin vs. full is highly polarised and usually framed as a mutually exclusive choice where the “winner takes all”. Recent advances in desktop deployment methods enable this question to be constructively reframed as a benefit analysis focused on who, what and where. This approach ensures the appropriate device is used in each scenario, enhancing desktop agility and improving the user’s desktop experience. 

Read more ...

Conclusion: Over the past two decades, management of Student Information Systems (SIS) was generally the domain of each school’s Administration. However, recent investments from the Digital Education Revolution, coupled with increasing State and Federal demands for ‘accountability’ in education, have promoted the SIS to centre stage. Essential SIS functionality now goes well beyond basic student records, with functionality comparable to that found in an ERP (Enterprise Resource Planning) solution, along with similar complexity and extent of customisation.

Read more ...

Conclusion: In most organisations the Help Desk is the single point of contact for business and IT professionals regarding desktop support. When management skimps on the number of IT professionals needed and their training, users typically wait too long to get through to the Help Desk, or become frustrated and abandon the call, with adverse business consequences.

Conversely, when too many Help Desk staff are assigned, boredom quickly ensues. Ensuring the Help Desk has the right number of IT professionals with the right skills is a balancing act for management. Unless management has sound performance metrics to measure service effectiveness, achieving the balance is hard.

Read more ...

Conclusion: Due to the cyclical nature of outsourcing contracts, many large enterprises and government agencies in Australia have engaged in a renewal process for outsourcing contracts in the past 24 months. It is clear from the new contract terms that the balance of power in the relationship has shifted from vendor to organisation. A window currently exists for a deal structure that ensures you maximise business objectives and outcomes while your provider achieves measurable service levels and process delivery.

Read more ...

While there’s been a lot of news this month, it mostly happened in the first two weeks of December. The Christmas season’s impact has never been this obvious: repeated news over a couple of days is common, but this time we had news repeated throughout the whole four weeks! That, and some very strange deals and news items popping up, definitely made things interesting. On a practical level, there were lots of tenders and talk about data centre outsourcing this month.

Read more ...

2010 has seen many high-profile IT failures in well-run Australian companies:

  • Westpac’s online banking system was down for about nine hours in August, due to a “software patch”.

  • Virgin Blue had a complete outage of its online reservation system in September, which lasted for about 21 hours. This was caused by data corruption on a solid state disk that appears to have then corrupted the recovery environment. Virgin said this created severe interruption to its business for 11 days and estimated the cost as between $15 million and $20 million.

  • NAB’s core banking system was impacted by the “upload of a corrupted file” in November. This prevented many of its customers from receiving payments or withdrawing funds. The impact of this failure was still being felt some weeks after the initial incident.

  • CBA had processing problems that impacted about five per cent of customers, whose accounts showed a zero balance at ATMs.

  • Vodafone customers experienced poor reception and slow download speeds for over a month after a “software upgrade caused instability” in its system.

In five months Australia experienced five high-profile failures at five brand-name companies. How is this possible? Each of these companies has a large, well-funded IT organisation.

Read more ...

Conclusion: As Windows 7 celebrates its first birthday many organisations are contemplating a desktop upgrade. Most desktops were designed more than seven years ago and there are many new technologies and approaches that need to be considered.

For most staff the desktop is a personal experience, making the upgrade a high-profile project. Treating this as just a technical refresh risks creating a technically successful solution that is considered an expensive failure by the business, or of marginal value. To avoid a career-limiting move, approach the desktop upgrade as a business project that has strong links to key business drivers, and structure the implementation to ensure it quickly delivers tangible business benefits.

Read more ...

Conclusion: Engagement by Australian organisations with Indian-based service providers (IBSPs) has accelerated in recent years. Indian providers have invested significantly to increase the breadth and depth of engagement with their Australian clients.

Read more ...

Not too many outsourcing deals the past month, but lots of talk about implementations, collaborative projects, company buyouts etc.

Read more ...

A spate of poor Service Oriented Architecture (SOA) initiatives has left some thinking that SOA is yesterday’s silver bullet. However, an effective SOA remains an essential foundation for the evolution of enterprise systems. Any organisation disillusioned by the promise of SOA should revisit its experiences to understand why business value was not realised. With the right insight into the critical conditions for SOA success, it can realign, and if necessary reactivate, SOA efforts as an integral part of its IT strategy.

Read more ...

Conclusion: Organisations planning a migration from earlier versions of Office to Office 2007 or Office 2010 need to conduct an 'Office Readiness Assessment' prior to the migration – or risk significant business disruption. Rather than developing in-house assessment skills, a short-term engagement with consultants experienced in Office file scanning tools and migration technologies is likely to be the most cost-effective, timely and lowest-risk approach to safeguarding business continuity during Office migrations.

Read more ...

Conclusion: When assessing potential service providers, rate highly those whose solution clearly meets requirements and who have capable IT professionals ready to implement it. To reflect this rating, assign a higher evaluation weighting to providers meeting both tests, and a lower weighting to attractive pricing, previous experience and the availability of proprietary methodologies.

Read more ...