Architecture

Due to the availability of desktop software solutions for budgeting and forecasting, and the widespread use of Business Intelligence software to analyse and report results, it is not surprising that the need to improve data quality in reporting emerged as the most frequently cited technology concern in a recent survey of financial executives.

Conclusion: With climate change a hot social issue, organisations with a “Social Responsibility” strategic objective are looking at ways to reduce their environmental impact, and the IT organisation, like other areas of the business, is expected to find ways to reduce its carbon footprint.

The data centre is a prime target for a few “quick wins” because in most organisations it houses a significant proportion of IT resources, and it is the area over which IT has the greatest control. IT organisations should start with short-term initiatives that are self-funding (i.e., payback period < 12 months) and which can be accomplished with little or no capital investment. With three years of server power and cooling costs now comparable to server acquisition costs, Infrastructure Managers must look at optimising data centre energy efficiency.
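To make the self-funding test concrete, here is a minimal back-of-envelope sketch in Python. Every figure in it (server count, per-server costs, the 15% saving, the initiative cost) is an assumption for illustration, not a benchmark.

```python
# A rough, illustrative payback check for a data centre energy-efficiency
# initiative. All figures below are assumptions, not benchmarks.

server_count = 200
acquisition_cost = 5000        # assumed $ per server
power_cooling_per_year = 1600  # assumed $ per server per year

three_year_energy = server_count * power_cooling_per_year * 3
print(f"3-year power/cooling: ${three_year_energy:,}")        # $960,000
print(f"Acquisition outlay:   ${server_count * acquisition_cost:,}")  # $1,000,000

# Self-funding test: a hypothetical $30,000 airflow/cooling optimisation
# that trims energy costs by 15% must pay back within 12 months.
initiative_cost = 30000
annual_saving = server_count * power_cooling_per_year * 0.15
payback_months = initiative_cost / annual_saving * 12
print(f"Payback: {payback_months:.1f} months")  # 7.5 months, inside the window
```

Under these assumed numbers the three-year energy bill is indeed comparable to the acquisition outlay, and the hypothetical initiative clears the 12-month payback hurdle comfortably.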

In the last 18 months, many hardware vendors have jumped on the Green IT bandwagon to try to differentiate their offerings in a rapidly commoditising market. IT organisations must carefully evaluate vendors’ claims to separate the marketing hype from the reality.

Conclusion: The increased focus on the environmental impact of ICT activities makes it essential that organisations base their ICT purchasing decisions on the triple bottom line of economic viability, social responsibility, and environmental impact. Purchase agreements or tender requests for ICT goods and services should explicitly require vendors to demonstrate how and where their products meet the buyer's environmental requirements.

Conclusion: Since the advent of e-Commerce, executive management has become acutely aware that its business systems are transparent to clients and its data networks are exposed to hackers. This has forced many to rethink the role of strategic systems planning and integration.

To meet this need and present the best possible image to clients, many organisations have elevated the role of the Enterprise Architect (EA), often with an attractive remuneration package.

The EA’s charter, put simply, is to determine and implement the framework needed to:

  • Ensure all business systems are developed to meet the requirements of the Mission;

  • Make the delivery of business systems appear seamless to clients;

  • Reduce systems complexity and support costs by adopting common platforms to develop and deliver business systems;

  • Ensure the right software and hardware solutions are used and data is protected against unauthorised access.

Determining which framework best supports this charter, and implementing it with the associated changes, has both a political (organisational) and a technical dimension. This means executive management and the EA need to identify the best framework, influence stakeholders to adopt it, and provide direction to the staff who will use it to develop business systems.

Conclusion: Storage capacity growth rates have accelerated from a historical average of 30% to more than 50%, ensuring that data management and storage remain key IT infrastructure issues through 2012. However, as the total storage capacity for unstructured data rapidly overtakes that for structured data, simply adding more storage capacity in an ad-hoc fashion will no longer work. Instead, IT organisations must create new data storage strategies that can deal with the more rapid growth (60%-200%) of unstructured data.
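A short sketch of why the shift matters: compounding at the growth rates cited above quickly separates unstructured from structured capacity. The 10 TB starting point is an assumed figure, chosen purely for illustration.

```python
# Project storage capacity at the growth rates cited above.
# The 10 TB starting capacity is an assumption for illustration only.

def project(capacity_tb, annual_growth, years=5):
    """Return year-by-year capacity under compound annual growth."""
    return [round(capacity_tb * (1 + annual_growth) ** y, 1) for y in range(years + 1)]

print("Historical (30%):   ", project(10, 0.30))  # ~37 TB after 5 years
print("Current (50%):      ", project(10, 0.50))  # ~76 TB after 5 years
print("Unstructured (100%):", project(10, 1.00))  # 320 TB after 5 years
```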

Conclusion: Organisations are moving to minimise their IT staffing problems by contracting much of their planned IT recruitment to specialised HR outsourcing agencies. This trend, Recruitment (or Resource) Process Outsourcing (RPO), brings significant benefits, including having the right people available at the right time, reducing placement costs, and saving hiring managers time and stress. The RPO approach is even more potent when coupled with Panel Supply contracts. Its value is not limited to ICT recruitment and can be applied across as much of the organisation as will benefit.

Conclusion: One of the fundamental drivers of the Windows desktop Total Cost of Ownership (TCO) is the tightly coupled “application installation” model used by the Windows operating system. Application virtualisation can eliminate many of the problems associated with this model, significantly reducing the time and effort to install and maintain applications.

Since implementing application virtualisation requires significant changes to the desktop image, the ideal time to introduce this technology is when deploying a new image across the desktop fleet. With Windows Vista being a significant trigger for deploying a new desktop image in the next 24 months, IT organisations should consider adding application virtualisation to the migration project as a way to derive a stronger and quicker return from a Vista upgrade.

Conclusion: In 2006 the adoption of x86 server virtualisation moved from “fast follower” to “mainstream”, with over half of IT organisations using or piloting it. In that year VMware established a clear technology and market share lead, while today Microsoft is still 12 months away from releasing its first competitive (hypervisor-based) product.

As the market grows dramatically through 2007, technical and market leadership coupled with a lack of credible alternatives will see VMware cement a dominant position in the Enterprise market that Microsoft will fail to overcome until at least 2012. In the same period, due to Microsoft’s product and channel strength in the SMB market and the current low take-up of server virtualisation by that segment, Microsoft will establish a strong base of Intel server virtualisation in SMBs.

Conclusion: Backsourcing IT-related services is not a simple exercise. To ensure that there is minimal risk to the business while the IT functions are being brought back in-house, significant management attention will need to be devoted to this activity.

Conclusion: The volume of digital data created and stored by organisations continues to grow exponentially, typically anywhere from 30%-60% per annum. For most organisations this level of digital data growth is not new; what is different is that growth is now being driven by unstructured data.

Over the last 20 years organisations have made significant investments to deal with structured data, resulting in well-managed structured information that supports and drives the business. On the other hand, few organisations have invested similarly in unstructured data (e.g., email, faxes, and documents), and many are now finding that the growth in unstructured data is a significant business problem.

To cope with the data growth over the next 10 years, organisations must learn to deal as effectively with unstructured data as they do today with structured data.

Conclusion: With storage capacities typically growing at 35%+, most organisations are finding they must routinely add more capacity. To avoid creating silos of storage capacity that cannot be shared and optimised, or building a storage infrastructure that becomes increasingly complex and costly to manage, IT organisations must properly plan and execute storage acquisitions.

Conclusion: Convergence has been an idea on the horizon for so long that it barely stirs much interest. Even so, various market moves and technological changes have brought convergence front and centre again, and with it new opportunities for organisations.

Technological difficulties once precluded the delivery of services over converged media networks, but that challenge is no longer a barrier. A variety of handsets, and a choice of networks offering different levels of reliability and speed, mean it is now possible to deliver data and media services at economically affordable prices.

The outstanding issues remain the evolution of business models and the delivery of applications to markets. IT executives can and should be involved in promoting the growing potential of convergence to their organisations.

Conclusion: Trusted Computing is a family of open specifications whose stated goal is to make personal computers more secure by using dedicated hardware. Critics, including academics, security experts, and users of free and open source software, contend, however, that the overall effect (and perhaps intent) of Trusted Computing is to impose unreasonable restrictions on how people can use their computers.

Conclusion: Over the last 10 years the IT applications and infrastructure in many organisations have rapidly evolved, forcing IT departments to implement a variety of new technologies. In many cases this has resulted in technology silos that are complex, difficult to maintain and costly to extend. To support the business through the next 10 years, IT organisations must transform this complex legacy into an agile infrastructure that enables change.

A starting point for this journey is the development of an infrastructure architecture based on reusable, end-to-end infrastructure design patterns that leverage internal and external best practices, skills and technologies.

Conclusion: For an organisation to gain maximum benefit from IT infrastructure built with new technology, there needs to be a corresponding change in the approach to managing that infrastructure. Infrastructure management needs to move from a “build to order” model to a “factory” model, where infrastructure services are supplied as orders for them are received.

Conclusion: Linux on the IBM mainframe (z/Linux) has been available since 2000 but is not widely adopted. As IBM increases the resources promoting z/Linux in Asia-Pacific, the idea will be raised more frequently in this region. While recent advances in z/Linux (e.g., the 64-bit implementation) make it a powerful and technically viable platform, with some organisations reporting significant benefits in specific circumstances, z/Linux will remain a niche solution rather than a common alternative to Lintel and Wintel.

The factors involved in deciding to migrate Linux from Intel to System z are extensive and complex, ensuring that the adoption of z/Linux in Asia-Pacific will remain slow and usage will stay very low through 2010.

Have Microsoft operating systems reached their best-used-by date? Ten years ago such a question would have seemed ridiculous. Today, however, there are several indications that Microsoft's rule of the OS domain should no longer be considered one of the fundamental constants of IT.

Oracle has long been a major contributor and supporter of Linux, beginning in 1998 with the first release of the Oracle database on Linux and later with the release of Oracle applications and middleware. Oracle has made significant contributions to the Linux kernel over recent years, e.g., the Oracle clustered file system, and in the process developed considerable Linux expertise.

Conclusion: Service Oriented Architecture (SOA) is used to refer to a whole variety of approaches for achieving enterprise software integration and/or some degree of reuse. By now there is reasonable consensus in the industry around the essence of service orientation, yet no Web Services standard can ever prevent implementers from making glaring mistakes in their use of the SOA concept. The number of SOA implementations is growing, and some valuable lessons can be learned by looking under the hood and assessing the results against the original expectations.

I recently facilitated three panel discussions at Storage Network World. Using interactive response keypads I was able to run a series of polls of the 80+ delegates. The results are very interesting and cause for some industry reflection.

When asked what the “most important business issue” was, the top answer was “reliability and availability”, closely followed by “business continuity”, with “cost containment” a distant third. When asked their “biggest storage-related challenge”, the top response was “managing growth and meeting capacity”.

Conclusion: Many organisations are watching with interest the experience of others in adopting SOA, a collection of self-contained programming and data-access services typically used for repeatable business processes, such as credit authorisation for point-of-sale transactions. Questions are legitimately asked whether SOA as a concept is another fad or is here to stay.

Early adopters claim SOA is worth the investment and, if used in conjunction with Web Services to link to other applications including legacy systems, a productivity enabler. Nevertheless, these adopters point to the need for a disciplined approach by developers and its use only when justified on business grounds.
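To ground the idea, below is a minimal sketch of what a self-contained, repeatable service such as the credit authorisation example above might look like, using only the Python standard library. The endpoint name, request fields and approval rule are all hypothetical; the point is that consumers depend only on the HTTP/JSON contract, not on the implementation behind it.

```python
# A minimal sketch of an SOA-style self-contained service: credit
# authorisation exposed over HTTP/JSON via the standard library only.
# The /authorise endpoint, field names and rule are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class CreditAuthService(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/authorise":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length))
        # Stand-in business rule; a real service would call a risk engine.
        approved = request.get("amount", 0) <= request.get("credit_limit", 0)
        body = json.dumps({"approved": approved}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), CreditAuthService).serve_forever()
```

Because the contract is just “POST a JSON request, receive a JSON decision”, a legacy system, a web application or another service can all consume it identically, which is the reuse early adopters point to.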

Conclusion: As virtualisation of distributed systems rapidly matures, IT organisations should evaluate their current virtualisation capability level and determine which level best supports the business’s needs. The right level of virtualisation capability will reduce complexity and increase infrastructure agility, enabling better alignment of IT infrastructure to application requirements and hence to business strategy.

Conclusion: The quality of the software architecture in a system is not easily measurable without the help of specialised tools and a meaningful baseline or benchmark. The short life expectancy of most software systems is often explained as being due to rapid technological change. For the most part this explanation is simply a convenient excuse for tagging existing systems as “legacy” and for obtaining permission to build new systems without having to understand the fundamental forces that lead to legacy software.

Conclusion: A recent survey by Ideas International showed that two-thirds of IT organisations use virtual servers at least occasionally in their data centres, with one-third using them frequently. While server virtualisation is commonly associated with server consolidation and seen as a method of lowering hardware costs, early adopters are using virtual servers to create an agile, utility-like infrastructure that is better aligned to the business’s needs.

Leading IT organisations use virtualised infrastructure, based on virtual x86 servers (generally VMware) and networked storage, to decouple workloads and data from the underlying hardware. This enables new infrastructure capabilities that better support the business’s availability and capacity requirements. Unlike the earlier, over-hyped promises of utility infrastructure, infrastructure virtualisation is a simple, practical technique that is delivering measurable business and IT benefits now.

Conclusion: IT organisations are aware of the impending release of Windows Vista; however, in a recent survey less than 20% of those with more than 100 desktops had a formal strategy for dealing with it. The most common driver for putting Vista on the desktop is the need to keep current, so as to ensure long-term support from both Microsoft and ISVs. None of the people interviewed anticipated any business benefit from upgrading, nor did they consider it an important or urgent project; instead they saw it as a necessary evil that must be dealt with.

Conclusion: The last 3 months have seen two significant announcements from Sun: first the resignation of iconic CEO Scott McNealy in favour of Jonathan Schwartz, followed by an announced 10%-13% reduction in workforce. Unfortunately Sun’s malaise has no quick fix and, like notable IT giants before it (e.g., Digital Equipment, IBM), its problems are due to the commoditisation of its core value propositions, SPARC and Solaris.

As customers have increasingly adopted “good enough” Wintel/Lintel systems, Sun’s revenues have remained flat for the last 3 years and it has consistently posted losses since 2002. Sun has not yet established new value propositions that will return it to its former glory, and the most likely outcome is a continued slide into irrelevance.

Conclusion: The technology adoption cycle and its cousin, the hype cycle, are familiar concepts. While the theory is instructive, there are acknowledged gaps in its explanatory power, and consequently in practical application, although many organisations and vendors implicitly subscribe to the general thrust of the concept.

The steady stream of new technology, including innovations and upgrades, means potential buyers are evaluating products almost continuously, whether as consumers or for businesses. From both sides of the equation, can buyers and sellers strike a better bargain using the idea of the adoption cycle? Could buyers be better informed and more rational, or is there an element of emotion involved in buying technology? And how does a business adapt the technology adoption cycle to be successful?

IT managers need to be aware that their software environments will dramatically change between now and 2010. The expected broad and rapid adoption of varying types and levels of software-as-a-service (SaaS), multiple "flavours" of services-oriented architectures (SOA), and open source-based software should be expected to increase an organisation’s IT and business complexity, and management costs.

Though the costs of acquiring storage hardware will continue to decline during the next five years, any savings for users will be exceeded by the additional costs incurred in the ongoing management of increasingly large disk farms. Storage will require significant investment in tools, development of processes, and retraining and recruitment of specialist staff. New models for the procurement of storage capacity and storage management will become a viable alternative to in-house management of storage.

Conclusion: By the end of this decade, blade servers will have become the standard form factor within most data centres. Driven by convenience, manageability and price/performance, most IT organisations will choose blades over rack-optimised servers to build out a low-cost, highly flexible computing infrastructure. Over 90% of these systems will be based on industry standard servers (i.e., 32/64-bit x86 systems) running Windows or Linux.

As the existing IT infrastructure begins to reach the end of its economic life, IT organisations should re-examine their architectural standards and evaluate the benefits of blade-based computing. They should start by understanding the new trends in server infrastructure (see “Refreshing IT Infrastructure? First Break All the Rules!”, Feb-2006) and then comparing the value proposition of blades with that of traditional rack-optimised servers.

With the release of Microsoft Vista and Microsoft Office 2007 early next year, IT organisations should take the opportunity to review their desktop strategy. Early indications are that both products are significantly different from the current versions and, as with prior major releases, will involve significant time, effort and money to implement.

While Microsoft assures us there is significant new value in these new products, particularly from “integrated innovation”, none of the IT managers I’ve spoken to were able to translate this into business value. In a recent interview with Peter Quinn, former CIO of the State of Massachusetts, he said when they looked at how staff actually used their desktops “most of the people don’t use all those advanced features [of MS Office] so it begs the question as to why I would spend all that money”. With the trend to web services (i.e., services delivered over the internet) and the availability of MS Office alternatives such as OpenOffice, he seriously questioned the value of remaining on the Microsoft upgrade treadmill.

Conclusion: After 5 years of tight IT budgets, many IT infrastructure components are reaching the end of their economic life, and recent surveys suggest IT organisations intend to begin refreshing key systems this year. The path of least resistance is to replace these components with new items, staying with the “status quo”. However, this may not be the best strategy!

Due to significant changes in technologies over the last 7 years we recommend IT organisations challenge their existing infrastructure assumptions (formal or informal) and create new rules to guide construction for the next 7 years. The greatest obstacle is not changing technologies but overcoming people’s resistance to changes in their environment.

Conclusion: 2006 will be the year that server virtualisation technology becomes mainstream on x86 based servers. IT Organisations are combining commodity x86 based servers with virtual machines and Storage Area Networks (SANs) to build agile, low cost server infrastructure. They are reporting many benefits including excellent TCO, rapid provisioning, increased resource utilisation and simple, low cost high-availability and disaster recovery.

Of the three core technologies used to build this infrastructure, virtual machines are the newest and most rapidly evolving. In 2006, IT organisations must understand this technology, and the vendor landscape, to ensure they make the right strategic choice for the next 5 years.

Conclusion: Since the beginning of the dot-com boom of the late ‘90s, there has been considerable debate over which web server should be used. By 2004 the web server wars were over, with two clear victors emerging: IIS from Microsoft and Apache from the Apache Software Foundation. IT organisations (ITOs) should move beyond debating the technical merits of the various products and select an organisation-wide technology standard based on existing investment, skills or alignment with strategic platforms. As part of an ongoing strategy to reduce infrastructure complexity (see “Infrastructure Consolidation: Avoiding the vendor squeeze”), ITOs should create a pragmatic plan to migrate to the new standard.

Conclusion: With the maturing of server virtualisation on industry standard (x86) and RISC/Unix servers, all IT organisations should evaluate its role in optimising IT infrastructure (see the IBRS research note “Infrastructure Consolidation: Avoiding the vendor squeeze”, October 2005).

The recommended strategy is to start by using server virtualisation to enable the consolidation of non-production systems (i.e., dev/test/QA), progressing to consolidating smaller non-mission-critical production applications, and finally creating a virtual server infrastructure that simplifies and enables load balancing, high availability and disaster recovery. A well-executed server virtualisation strategy will reduce complexity and increase agility, leading to better alignment of IT infrastructure with application requirements and business strategy.

Conclusion: The US State of Massachusetts' policy that by 2007 all Executive Department documents must be stored in Open Document Format (ODF) or PDF is a significant milestone in the ongoing migration from proprietary systems to open standards. The statement is founded in the belief that open standards are the best option for ensuring that official public records are freely and openly available for their full lifecycle. Experience with other open standards (ASCII, TCP/IP, SQL, HTML) demonstrates their central role in interoperability, confirming this belief.

Microsoft will resist ODF in an attempt to maintain control over a critical standard in one of its most profitable product lines. However, like other open standards before it, and for similar reasons, ODF will become the common standard for office documents, though due to the ubiquity of Microsoft formats this may take 6-8 years.

Conclusion: The need for systems integration skills in organisations that keep their IT processing in-house and those that use external service providers will continue to grow and increasingly be a differentiator in services offered to clients. Managers who do little to enhance the skills of professionals engaged in systems integration activities will be doing themselves a disservice and run the risk of losing their highly marketable staff.

Just recently, there have been a number of announcements from the heavyweights of the software industry. These events have the potential to make a big impact on the industry and on user plans. Whenever a vendor acquires another company, reorganises or announces a new strategy, the effects are sure to be manifested in changed product roadmaps, reduced support or R&D for products, account management changes and many other aspects that could change user plans. By understanding the impact of announcements such as those discussed below, users can avoid costly mistakes when choosing products and services in a constantly evolving market.

Conclusion: SOA is an increasingly common TLA (three-letter acronym), and is often thought of as a new technology and equated with Web Services. This does an injustice to Service Oriented Architecture, a software design concept that emerged from the need to easily integrate web-based applications independent of their implementation technology. Hence the adoption of SOA is not about migrating to yet another technology stack, but rather about adopting new software design principles that make life easier in today's world of distributed and heterogeneous systems.

Conclusion: Infrastructure Consolidation has been a hot topic since the IT downturn in 2001/2. Unfortunately, this topic has been hijacked by IT vendors and used as justification for purchasing their latest high-end technology. To date most consolidation efforts have been technology projects with poorly defined goals that rarely go beyond implementing a specific technology. As a result most consolidation projects fail to deliver lasting benefits.

To ensure long term benefits, IT organisations (ITOs) must view infrastructure as an asset to be optimised for an appropriate mix of Total Cost of Ownership (TCO), agility and robustness as required by the business. The critical success factor is the recognition that complexity is the key driver of these characteristics and that a planning process (not technology) is necessary to reduce and control complexity.

Conclusion: While the introduction of Serial Attached SCSI (SAS) will have a significant impact on the storage environment through 2006/7, over the next 12 months clients should be wary of the hype vendors will use to promote it. By year-end 2005, technical staff should gain a basic understanding of the key features and benefits of SAS. Through 2006/7, IT organisations should begin using SAS, in conjunction with SATA, in DAS, SAN and NAS configurations when it provides a lower-cost storage alternative (i.e., than Fibre Channel) while still meeting application and data service level requirements.

Conclusion: When it comes to the design and implementation of an Enterprise Architecture, the key decisions regarding software systems have traditionally been build vs. buy, and vendor selections based on criteria derived from business requirements.

In the last five years, however, many Open Source infrastructure software offerings have matured to the point of being rated best-in-class solutions by experienced software professionals. This means that build vs. buy decisions need to be extended to build vs. buy vs. Open Source decisions, a reality that has yet to sink in for many organisations.

Interestingly, the key benefit of using Open Source components is not necessarily cost savings, but reducing vendor lock-in, and the risk the vendor may go out of business or discontinue support for a product line.

Despite the emergence of “on-demand” services, many organisations will continue to own and manage their server platforms. With cost-cutting directives still an issue for all our clients, understanding future events in the server markets and their impact on buying decisions is essential.

Conclusion: Putting forward arguments for IT business/process improvement can sometimes bring about eye-glazed responses from senior executives. ISO/IEC 15288:2002 ‘Systems Engineering – System life cycle processes’, hereafter ISO 15288, can be an ally, providing an authoritative source of reference when putting forward a case for change. It is a comprehensive standard that covers all system life cycle processes, from Stakeholder Requirements Definition through to System Disposal, as well as providing guidance on essential governance matters.

Conclusion: System erosion concerns the run-down state into which systems decline when improperly tended by management, users and IT staff. Whilst there are many characteristics that describe eroded systems, the common theme is that these systems fail to provide the value originally ascribed to them when the business case to develop them was prepared.

Conclusion: The December 2004 CIO Perspective, 'IT Issues in Company Acquisitions', highlighted a CIO's involvement in two due diligence processes, where he had to provide an opinion on the state of the business solutions included in the assets being offered for sale.

In the article, two examples were quoted of small but profitable organisations that were being offered for sale and had immature IT service delivery systems and governance processes. In both cases the organisations used IT to provide business support or delivery systems. The CIO was of the opinion that the systems and process immaturity did not adversely impact the business performance of either organisation.

New server assessment and acquisition practices are often ignored in preference to a "speeds and feeds" focussed, "hardware is cheap" mentality. This focus is supported by the constant bragging by server vendors about their latest and greatest TPC benchmark figures. Unfortunately, these benchmarks and the technologies that enable them have little impact on most workloads. A better purchase can be made by understanding the application characteristics and how server technologies will benefit performance.

Conclusion: Unless organisations develop and implement comprehensive Email Management Guidelines and insist on total compliance, the hidden cost of processing emails will continue to escalate.

Technologies to support the consolidation of rampant server growth continue to make marked advances. Server virtualisation is an approach that can make a consolidation exercise faster, easier and safer. Microsoft's announcement of Virtual Server 2005 broadens the market and will bring lower licensing costs than the existing alternatives. If VS05 is of interest, be aware of its limitations and costs.

IT organisations that build systems and software in piecemeal fashion, using so-called "best of breed" components from different vendors, should be aware that "best of breed" is a dying breed. The increased complexity and cost of these systems rarely demonstrate incremental benefits over "good enough" solutions sufficient to justify the time, effort and expense.

Conclusion: Effective ICT architectures allow organisations to become smart buyers of applications and infrastructure, and ensure that technologies work together in a cohesive and effective way. Attempting an ICT strategic planning project without an effective architecture carries three major risks.

  1. The planning team will struggle to turn business ideas into ICT initiatives.
  2. The planning team will need to make decisions about potential ICT investments without sufficient time to analyse how well these investments may or may not work with the existing applications portfolio.
  3. Technical implementations may differ from the initiatives specified in the plan through lack of architectural standards.

Conclusion: Demand for bandwidth has grown markedly in the last two years. The Australian Bureau of Statistics' figures for the March to September period of 2003 showed a 180% increase in bandwidth usage by business and government. As an index of demand this trend is significant, and it poses for managers the question of how to plan for bandwidth demand in the future.

Although most organisations differentiate backbone bandwidth, peer-to-peer bandwidth, LAN bandwidth, and voice-over-IP bandwidth from each other, demand on all networks should be assessed overall to forecast how an organisation should manage changes to its requirements.

New applications also put pressure on existing networks. The rapid deployment of applications naturally turns attention to the adequacy of current networks and platforms to deliver those applications. With increases in demand, what was once acceptable to a business becomes insufficient.

The challenge for managers is how to forecast, taking into account temporary surges in demand as well as longer-term trends. The two techniques below, and the planning sketch that follows them, will help managers plan for such requirements:

1. Review the business strategy and upcoming demands across your organisation, and assess whether current arrangements are suitable for the next two years. This review should include the influence of competition and market conditions.

2. Establish an efficiency benchmark of current communication services to be certain of what is delivered and at what cost. There are widespread claims about the additional effectiveness and efficiency gained from more bandwidth, but these claims are generally not qualified with productivity figures to support them. A clear understanding of the benchmarks will assist future investment decisions.
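As a worked illustration of technique 2 feeding technique 1, the sketch below projects demand from a benchmarked baseline using a trend growth rate plus surge headroom. All three inputs are assumptions, to be replaced with an organisation's own benchmark figures.

```python
# A hedged planning sketch: project bandwidth demand over two years from
# a measured baseline, a trend growth rate and a surge allowance.
# All inputs are illustrative assumptions, not recommendations.

baseline_mbps = 100   # current benchmarked peak utilisation (assumed)
annual_growth = 0.8   # trend growth rate, e.g., 80% p.a. (assumed)
surge_factor = 1.5    # headroom for temporary surges (assumed)

for year in (1, 2):
    trend = baseline_mbps * (1 + annual_growth) ** year
    print(f"Year {year}: trend {trend:.0f} Mbps, "
          f"provision {trend * surge_factor:.0f} Mbps with surge headroom")
```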

Conclusion: There is no time like the present to get Executive buy-in to invest in a Records and Document Management (RDM) framework and technical solution. Pending legislation in the USA (Sarbanes-Oxley and Bio-Terrorism) and CLERP 9 in Australia, a synopsis of which appears in Note 1 below, is likely to put RDM on the radar screens of many CIOs and IT managers.

Conclusion: Linux has its place(s) in the SME organisation now, and clear evidence of reduced cost can be demonstrated. However, in more complex environments, the costs of commodity hardware and operating systems are small compared to the costs of ISV software and support, and the use of Linux will be harder to warrant before 2005. Linux and other open source software offerings need to be evaluated rigorously before committing your organisation's direction this way, as the vendor hype does not yet match the reality for the SME.

Conclusion: The Enterprise Architect faces three major unrelated challenges today. They are to:

  • Keep the architecture or standards viable when the technology options are changing continuously

  • Sustain Executive commitment to the standards when the benefits are not immediately apparent

  • Stay informed of technology developments and advocate adoption in advance of a technology proving to be a ‘winner’

The ideal person for the role is someone who is intellectually curious, politically aware and able to sell their ideas. To succeed the architect needs to gain the trust of their Executive, have access to vendors and early adopters of emerging technology and an awareness of business imperatives.

While the intense competition in the PC market has benefited technology buyers in the form of lower capital costs, it has forced vendors into tighter product cycles and a frantic pace of incremental technological advancements with high perceived but little real value to users. Corporate technology buyers of both desktop and laptop systems should focus their efforts on achieving, within their PC fleet, a balance of meaningful technological aggressiveness and stability. They should also be increasingly vigilant in their assessment of the benefits of new technologies. Those organisations that do not take steps to understand vendor product transition processes and assess the impact on support will quickly develop a more complex mix of installed PCs than necessary. Escalating costs (driven by the uncontrolled installation of poorly understood technologies) will also become a concern.

Conclusion: No solution comes shrink-wrapped and perfectly adequate, such that it can be considered complete, and that is true of e-learning. If the implementation of e-learning in the workplace has stumbled, the two guidelines below will assist in getting better results:

  1. Ensure that the e-learning process is continuous – not constant – but persistent for all employees over time;

  2. Test, test, test: not just the e-learning software package, but also what the users thought of it, as much as the content of the program.

If e-learning is viewed as a process, not just a one-off event, it will become part of the working schedule and also integral to the productivity of the organisation.

Conclusion: If you know your organisation’s records and document management processes are out of control and do not propose a viable solution, you are putting your job, and the CEO’s, at risk.

Why Records and Document Management?

One of the hidden and unavoidable costs of running an organisation is that of manually filing, retrieving and disposing of records and documents. This cost often runs concurrently with the hidden risk of not being able to find key documents when required for evidentiary purposes or for completing an asset sale. How can the costs be avoided and the business risks minimised?

To answer the question, let’s look at what has been happening in many firms of all sizes in the last couple of years.

Conclusion: Server consolidation has become widespread as budget pressure is maintained and corporate mergers and reorganisations continue. Although the overwhelming majority of consolidation projects are viewed as successful (at least in terms of the reduction in the number of servers), when failures occur it is usually because of poor planning. Vendor-endorsed programs often appear very attractive, but unless the full implications of your particular environment are taken into account, the only goal that will be met is the vendor’s revenue target.

Conclusion:

As-a-Service machine learning (ML) is increasingly affordable, easily accessible and, with the introduction of self-learning capabilities that automatically build and test multiple models, able to be leveraged by non-specialists.

As more data moves into Cloud-based storage – either as part of migrating core systems to the Cloud or the use of Cloud data lakes/data warehouses – the use of ML as-a-Service (MLaaS) will grow sharply.

This paper summarises options from four leading Cloud MLaaS providers: IBM, Microsoft, Google and Amazon.
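As a rough illustration of the build-and-test loop these platforms automate, here is a local sketch using scikit-learn rather than any vendor's actual API; the candidate models and the dataset are arbitrary stand-ins for whatever a MLaaS service would try on your data.

```python
# A local, hedged sketch of what MLaaS "self-learning" features automate:
# build several candidate models, test each, keep the best. scikit-learn
# is used as a stand-in here; this is not any vendor's actual API.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset

candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(n_estimators=200),
    "gradient_boosting": GradientBoostingClassifier(),
}

# Cross-validate every candidate and report the strongest performer,
# mirroring the automated model search that MLaaS platforms expose.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"Best model: {best} ({scores[best]:.3f} mean CV accuracy)")
```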