Software Asset Management

Conclusion: Portable Electronic Devices (PEDs) are flooding into enterprises. In addition to the technical challenges and costs PEDs place on IT departments, they may actually be hindering service quality and productivity. Management needs to step back from the promises that PEDs offer and take a long, hard, pragmatic look at how these devices are really being used.

Related Articles:

"PED Antics part 2: a collaborative perspective" IBRS, 2008-08-28 00:00:00

Conclusion: There is a clear trend towards specialisation amongst software vendors, not limited to vertical markets, but also in terms of a concentration on specific areas in the technology landscape. As a result, many software products are becoming more focused and robust, and the opportunities for implementing modular enterprise architectures are increasing.

This article is the second in a series of three on technologies and techniques that are leading to fundamental changes in the architectures used to construct software applications and software intensive devices.

Related Articles:

"The Industrialised Web Economy - Part 3: Automation and Model Driven Knowledge Engineering" IBRS, 2008-05-28 00:00:00

"The Industrialised Web Economy - Part 1: Cloud Computing" IBRS, 2008-03-31 00:00:00

Conclusion: Riding on the coat-tails of Server Virtualisation, Virtual Desktops have become one of the hottest infrastructure topics of 2008. Vendors promote Virtual Desktops as a desktop replacement that eliminates the common concerns of a traditional full desktop: high cost, complex management, slow provisioning, security risks and inflexibility.

Unfortunately, discussions about Virtual Desktops are often clouded by misinformation and unrealistic expectations that obscure the issues and stifle investigation. Too often the stated benefits are not closely examined because the answers seem self-evident. Desktop managers who fail to examine each of the stated benefits carefully may find themselves swept away by the hype and end up with an even more expensive and complex desktop environment.

Conclusion: There is ample evidence from industry studies that the IT systems environment is becoming more complex to manage, and this is unlikely to change. The reasons for the extra complexity vary, from the need to offer enhanced services to clients, to legislative compliance, to the need to manage an increasing number of interactions between people in today’s workplace.

Unless the impact is addressed, systems support costs will increase, delivery of enhanced client services with a systems component will slow, and cost competitiveness will be adversely affected.

Put simply, IT management’s challenge is to minimise the increasing cost of systems complexity, while ensuring the organisation’s information systems deliver quality solutions. Business managers for their part must minimise exception processing and include realistic systems life cycle support costs in their evaluation of enhanced client services.

Conclusion: There are two ways to implement SharePoint: as an enabler of departmental point solutions, or as a set of infrastructure components for collaborative knowledge management. Organisations looking to implement SharePoint for collaborative knowledge management must possess skills well beyond those needed for departmental solution implementations. It is highly improbable that any one person – or even a single development team – will possess all the skills required to implement SharePoint for collaborative knowledge management. Organisations should consider establishing a cross-departmental group dedicated to SharePoint deployment, integration, maintenance and training throughout the organisation.

Conclusion: Rather than developing their own systems, many Australasian organisations are adopting commercial off-the-shelf software (COTS) to implement or enhance their business applications. So strong are the perceived COTS benefits that US government agencies (including Defence agencies), in line with the Clinger-Cohen Act of 1996, are now mandating COTS to take advantage of the significant procurement, implementation, and maintenance cost savings it offers.

While a COTS approach can bring many benefits, it can also bring many problems. Organisations considering using COTS as a way of improving their IT support of business operations must consider carefully the costs, benefits and risks.

Conclusion: There is never a good time to break the legacy cycle. A significant number of the core systems used in large corporations today have a history that extends over two or three decades. New applications, implemented in modern technologies, often still require additional functionality to be added to legacy back-end systems. But new is not necessarily better, and an educational deficit in the IT workforce is a major part of the problem.

Conclusion: Business units and end users are calling for, if not demanding, that IT managers deploy Microsoft SharePoint. SharePoint is this year’s ‘must have’ product; however, few understand what SharePoint is, what it does well and what alternatives exist. SharePoint initiatives will backfire without significant effort to ensure that the organisation is properly educated, specific applications and business needs are identified, and realistic expectations are set.

Conclusion: In 2006 the adoption of x86 server virtualisation moved from “fast follower” to “mainstream”, with over half of IT organisations using or piloting it. In that year VMware established a clear technology and market-share lead, while today Microsoft is still 12 months away from releasing its first competitive (hypervisor-based) product.

As the market grows dramatically through 2007, due to technical and market leadership coupled with a lack of credible alternatives, VMware will cement a dominant position in the enterprise market that Microsoft will fail to overcome until at least 2012. In the same period, due to Microsoft’s product and channel strength in the SMB market and the current low take-up of server virtualisation by that segment, Microsoft will establish a strong base of Intel server virtualisation in SMBs.

Conclusion: Linux on the IBM mainframe (z/Linux) has been available since 2000 but is not widely adopted. As IBM increases the resources promoting z/Linux in Asia-Pacific, it is an idea that will be raised more frequently in this region. While recent advances in z/Linux (e.g., 64-bit implementation) make it a powerful and technically viable platform, with some organisations reporting significant benefits in specific circumstances, z/Linux will remain a niche solution rather than a common alternative to Lintel and Wintel.

The factors involved in deciding to migrate Linux from Intel to System z are extensive and complex, ensuring that the adoption of z/Linux in Asia-Pacific will remain slow and usage will stay very low through 2010.

Have Microsoft operating systems reached their best-used-by date? Ten years ago such a question would have seemed ridiculous. Today, however, there are several indications that Microsoft’s rule in the OS domain should no longer be considered one of the fundamental constants of IT.

Oracle has long been a major contributor and supporter of Linux, beginning in 1998 with the first release of the Oracle database on Linux and later with the release of Oracle applications and middleware. Oracle has made significant contributions to the Linux kernel over recent years, e.g., the Oracle clustered file system, and in the process developed considerable Linux expertise.

Conclusion: As virtualisation of distributed systems rapidly matures, IT organisations should evaluate their current virtualisation capability level and determine which level best supports the business’s needs. The right level of virtualisation capability will reduce complexity and increase infrastructure agility, enabling better alignment of IT infrastructure to application requirements and hence of IT infrastructure to business strategy.

Conclusion: The quality of the software architecture in a system is not easily measurable without the help of specialised tools and a meaningful baseline or benchmark. The short life expectancy of most software systems is often explained as being due to rapid technological change. For the most part this explanation is simply a convenient excuse for tagging existing systems as “legacy” and for obtaining permission to build new systems without having to understand the fundamental forces that lead to legacy software.

Conclusion: IT organisations are aware of the impending release of Windows Vista; however, in a recent survey less than 20% of those with more than 100 desktops had a formal strategy for dealing with it. The most common driver for using Vista on the desktop is the need to keep current, so as to ensure long-term support from both Microsoft and ISVs. None of the people interviewed anticipated any business benefit from upgrading, nor did they consider it an important or urgent project; instead they saw it as a necessary evil that must be dealt with.

Conclusion: The potentially negative impact of vendor lock-in is unavoidable, but it can be minimised by making intelligent choices with respect to the use of technology products when building application software. In the interest of keeping the cost of lock-in at bay, IT organisations should rate the maturity of the various technologies that are being employed, consider the results in the design of their enterprise architecture, and pay appropriate attention to the degree of modularisation within the architecture.
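The modularisation principle described above can be illustrated with a minimal sketch. This is a hypothetical example (the interface, class and function names are invented for illustration, not drawn from any vendor's product): application code depends only on an abstract interface, so vendor-specific details stay confined to one adapter that can be replaced if the vendor is dropped.

```python
from abc import ABC, abstractmethod

# The application depends only on this abstract interface,
# never on a vendor's concrete API.
class MessageQueue(ABC):
    @abstractmethod
    def publish(self, topic: str, payload: str) -> None: ...

    @abstractmethod
    def drain(self, topic: str) -> list:
        """Return and clear all messages queued under a topic."""

# Vendor-specific details live in a single adapter class; switching
# vendors means rewriting only this class, not the application.
# (An in-memory stand-in here, purely for illustration.)
class InMemoryQueue(MessageQueue):
    def __init__(self) -> None:
        self._topics = {}

    def publish(self, topic: str, payload: str) -> None:
        self._topics.setdefault(topic, []).append(payload)

    def drain(self, topic: str) -> list:
        return self._topics.pop(topic, [])

# Application logic is written against the interface only.
def process_orders(queue: MessageQueue) -> list:
    queue.publish("orders", "order-1")
    queue.publish("orders", "order-2")
    return queue.drain("orders")
```

The cost of lock-in is then bounded by the size of the adapter rather than the size of the application, which is the practical payoff of rating technology maturity and modularising accordingly.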

IT managers need to be aware that their software environments will change dramatically between now and 2010. The broad and rapid adoption of varying types and levels of software-as-a-service (SaaS), multiple "flavours" of service-oriented architecture (SOA), and open-source-based software should be expected to increase an organisation’s IT and business complexity, and its management costs.

Users are demanding, and gaining, more IT flexibility in order to attain greater business flexibility. It's not yet clear in many industries how and where users will require such flexibility, but buying behaviour is usually an indicator of emerging business strategy. However, it is clear that flexibility is the strategy du jour. In this environment, users are adapting their business and IT investment behaviours to enable flexibility, and to pay for it. The move to tactically strategic IT and business change is a direct response to the desire for flexibility, and to the inherently higher investment costs required to achieve it.

With the release of Microsoft Vista and Microsoft Office 2007 early next year, IT organisations should take the opportunity to review their desktop strategy. Early indications are that both products are significantly different from the current versions and, as with prior major releases, will involve significant time, effort and money to implement.

While Microsoft assures us there is significant new value in these new products, particularly from “integrated innovation”, none of the IT managers I’ve spoken to were able to translate this into business value. In a recent interview, Peter Quinn, former CIO of the State of Massachusetts, said that when they looked at how staff actually used their desktops, “most of the people don’t use all those advanced features [of MS Office] so it begs the question as to why I would spend all that money”. With the trend to web services (i.e., services delivered over the internet) and the availability of MS Office alternatives such as OpenOffice, he seriously questioned the value of remaining on the Microsoft upgrade treadmill.

Conclusion: Organisations that enable customers to transact business over the phone must continually re-evaluate the effectiveness of their business model and exploit emerging technologies to enhance the customer’s experience. Failure to do so will put them at a competitive disadvantage.

Conclusion: 2006 will be the year that server virtualisation technology becomes mainstream on x86-based servers. IT organisations are combining commodity x86-based servers with virtual machines and Storage Area Networks (SANs) to build agile, low-cost server infrastructure. They are reporting many benefits, including excellent TCO, rapid provisioning, increased resource utilisation, and simple, low-cost high availability and disaster recovery.

Of the three core technologies used to build this infrastructure, virtual machines are the newest and most rapidly evolving. In 2006, IT organisations must understand this technology, and the vendor landscape, to ensure they make the right strategic choice for the next 5 years.

Conclusion: With the maturing of server virtualisation on industry-standard (x86) and RISC/Unix servers, all IT organisations should evaluate its role in optimising IT infrastructure (see IBRS research note “Infrastructure Consolidation: Avoiding the vendor squeeze”, October 2005).

The recommended strategy is to start by using server virtualisation to consolidate non-production systems (i.e., dev/test/QA), progressing to consolidating smaller, non-mission-critical production applications, and finally creating a virtual server infrastructure that simplifies and enables load balancing, high availability and disaster recovery. A well-executed server virtualisation strategy will reduce complexity and increase agility, leading to better alignment of IT infrastructure with application requirements and business strategy.

Conclusion: SOA is an increasingly common TLA (three-letter acronym), and is often thought of as a new technology and equated with Web Services. This does an injustice to Service Oriented Architecture, a software design concept that emerged from the need to easily integrate web-based applications independent of their implementation technology. Hence the adoption of SOA is not about migrating to yet another technology stack, but rather about adopting new software design principles that make life easier in today's world of distributed and heterogeneous systems.
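The design principle behind SOA can be sketched in a few lines. This is a hypothetical illustration (the service, data and field names are invented): consumers and providers agree only on a technology-neutral message contract, here plain JSON, so the implementation behind the contract could be replaced by any technology without affecting consumers.

```python
import json

# Hypothetical service endpoint: the contract is defined purely by the
# JSON messages exchanged, not by any implementation technology.
def get_customer_service(request_json: str) -> str:
    request = json.loads(request_json)
    # Behind the contract, the implementation could be Java, .NET, a
    # mainframe transaction, etc.; consumers only ever see the messages.
    customers = {"C001": {"name": "Acme Pty Ltd", "status": "active"}}
    customer = customers.get(request["customer_id"])
    response = {"found": customer is not None, "customer": customer}
    return json.dumps(response)

# A consumer needs only the message contract, not the provider's stack.
reply = json.loads(get_customer_service(json.dumps({"customer_id": "C001"})))
```

The point of the sketch is that the coupling is to the message format alone, which is what makes services in heterogeneous environments composable.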

Conclusion: When it comes to design and implementation of an Enterprise Architecture, traditionally the key decisions regarding software systems have been around building vs. buying, and vendor selections based on criteria derived from business requirements.

In the last five years however, many Open Source infrastructure software offerings have matured to the point of being rated best-in-class solutions by experienced software professionals. This means that build vs. buy decisions need to be extended to build vs. buy vs. Open Source decisions, a reality that has yet to sink in for many organisations.

Interestingly, the key benefit of using Open Source components is not necessarily cost savings, but reducing vendor lock-in, and the risk the vendor may go out of business or discontinue support for a product line.

In recent years, microprocessor vendors have begun designing chips with more than one processing unit, or "core," on the chip in an effort to boost performance for certain types of applications. As far as the software running on the systems is concerned, dual-core chips appear to be two separate processors, raising the question of whether or not they should require two software licences.

Last month we introduced the concept of a vendor management program. We noted that most mid-size organisations do not consider the full life cycle of product selection; instead, they tend to focus on purchase price alone. IT acquisitions are usually made by the IT department in isolation, without the proper insight of the business requirements and with the primary focus being on "speeds and feeds," price and the ability of a vendor to deliver a solution quickly. This month we provide a framework for the process.

Related Articles:

"The Case for a Vendor Management Process Part 1: The Case" IBRS, 2004-04-28 00:00:00

Conclusion: For years many organisations have ignored best-practice advice in evaluation and selection. Inevitably, the choice is to move forward after a couple of vendor demonstrations, or to fast-track an abbreviated version of a hierarchical methodology. This, unfortunately, can introduce subjectivity, or at least the assertion of personal bias.


Conclusion: IBRS strongly believes that Australian mid-sized organisations must begin to actively manage their dealings with their business partners, suppliers and customers. At the same time, they must deal with staffing and budgets that are not keeping pace with the ever-growing requirements of their IT infrastructures. A top priority must be to find ways to decrease the total cost of ownership of IT infrastructures and to minimise staffing requirements. Being able to consistently select the right vendors and products will be essential to achieving this goal.

Many of our clients report that they are not satisfied with the relationships they have with the IT vendors and consultants they have selected. Poor post-purchase relationship management seems to be as much to blame as the actual selection and negotiation process.

Related Articles:

"The Case for a Vendor Management Process Part 2: The Process" IBRS, 2004-05-28 00:00:00

Software licence compliance is something that many will have to achieve during 2004. The risk of a licence audit by any of your software vendors has increased greatly during the last six months; many audits have been conducted, and a high proportion of those audited have found themselves in violation of their agreements. It’s time to consider your position and plan a course of action.

Conclusion: Linux has its place(s) in the SME organisation now, and clear evidence of reduced cost can be demonstrated. However, in more complex environments the costs of commodity hardware and operating systems are small compared to the costs of ISV software and support, and the use of Linux will be harder to warrant before 2005. Linux and other open source software offerings need to be evaluated rigorously before committing your organisation’s direction this way, as the vendor hype does not yet match reality for the SME.

While the intense competition in the PC market has benefited technology buyers in the form of lower capital costs, it has forced vendors into tighter product cycles and a frantic pace of incremental technological advancements with high perceived but little real value to users. Corporate technology buyers of both desktop and laptop systems should focus their efforts on achieving, within their PC fleet, a balance of meaningful technological aggressiveness and stability. They should also be increasingly vigilant in their assessment of the benefits of new technologies. Organisations that do not take steps to understand vendor product transition processes and assess the impact on support will quickly develop a more complex mix of installed PCs than necessary. Escalating costs (driven by the uncontrolled installation of poorly understood technologies) will also become a concern.

Conclusion: No solution comes shrink-wrapped and so perfectly adequate that it can be considered complete, and that is true of e-learning. If the implementation of e-learning in the workplace has stumbled, the two guidelines below will assist in getting better results:

  1. Ensure that the e-learning process is continuous – not constant – but persistent for all employees over time;

  2. Test, test, test: not just the e-learning software package, but also what users thought of it, as much as the content of the program.

If e-learning is viewed as a process, not just a one-off event, it will become part of the working schedule and also integral to the productivity of the organisation.