Disaster Recovery (DR) planning is much more than just developing a DR plan. Building your organisation’s maturity to successfully recover from a disaster scenario is an exercise in continuous improvement. Recently, IBRS hosted a webinar addressing four IBRS advisory papers on the steps needed to plan for DR successfully and to build the maturity of the organisation’s DR planning processes. The end game: to improve the likelihood of mitigating an ICT disaster event and ensure business success. Disaster Recovery Must Work!

When your business faces a disaster, it is key to address the issue head on. You must first understand whose problem it is to solve and then create an effective disaster recovery (DR) plan. Both business and ICT need to be comfortable that the DR plan has been verified, to ensure a reasonable expectation that recovery will be successful. IBRS has created a four-part series to help organisations plan for and recover from disasters successfully. Download the 'Disaster Recovery Must Work' eBook and prepare your organisation.


Part four in this series of advisories looks at how to improve the disaster recovery (DR) planning maturity of your organisation. The focus of improving maturity in DR planning is to improve your probability of successfully meeting the needs of your business in the event of a disaster. This means ensuring your DR plan (DRP) and business continuity planning (BCP) are fully integrated, and that all elements of the organisation have a high degree of familiarity with DR processes.

Importantly, your organisation must understand that maturity is both a journey and a target. To maintain the target maturity, your organisation must put in place a number of strategies that will be continually repeated to ensure the target is both met and maintained.


Part three of this four-part series looks at how the disaster recovery (DR) plan can be verified. The DR plan is in effect a contingency plan to deal with the risk of a disaster. The DR test plan is a validation of the organisation's preparedness to address these risks.

Verifying the DR plan is therefore essential if the contingency is to be effective. Just having a plan in place is not enough to mitigate the risk. The plan must be tested and verified as part of business as usual (BAU), both to increase familiarity with the plan, its standard operating procedures (SOPs) and processes, and, most importantly, to improve the likelihood of success.


As business dependence on ICT for effective performance has grown, many organisations carry increased risk associated with the ability of ICT to provide service continuity. ICT downtime means the business is negatively impacted. Many organisations believe the DRP is a problem that is ICT's to solve. Whilst ICT will lead the planning and do much of the heavy lifting when a disaster occurs, it can only be successful with the assistance and collaboration of its business partners. It is the business that sets the priorities for restoration and accepts the risk.

Both business and ICT need to be comfortable that the disaster recovery (DR) plan has been verified to ensure a reasonable expectation that recovery will be successful.

The Latest

18 March 2021: Veeam released a report which suggests that 58% of backups fail. After validating these claims, and from the direct experiences of our advisors who have been CIOs or infrastructure managers in previous years, IBRS accepts there is merit in Veeam’s claim.

The real question is, what to do about it, other than buying into Veeam’s sales pitch that its backups give greater reliability?

Why it’s Important

Sophisticated ransomware attacks are on the rise. So much so that IBRS issued a special alert on the increasing risks in late March 2021. Such ransomware attacks specifically target backup repositories. This means creating disconnected, or highly-protected backups is more important than ever. The only guarantee for recovery from ransomware is a combination of well-structured backups, coupled with a well-rehearsed cyber incident response plan. 

However, protecting the backups is only useful if those backups can be recovered. IBRS estimates around 10-12% of backups fail to fully recover, which measures a slightly different, but more important, situation than the one touted by Veeam. Even so, this failure rate is still far too high given the heightened risk from financially-motivated ransomware attacks.

Who’s impacted

  • CIO
  • Risk Officers reporting to the board
  • Infrastructure leads

What’s Next?

IBRS has identified that better practice for backup must include regular, unannounced practice runs to recover critical systems from backups. These tests should simulate, as closely as possible, the events that could lead to a recovery situation: critical system failure, malicious insiders and ransomware. Just as organisations need to rehearse cyber incident responses, they also need to thoroughly test their recovery regime.
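Such a recovery drill can be partly automated. The sketch below is a minimal, hypothetical Python check (not an IBRS or Veeam tool): it compares files restored from backup against a manifest of known-good checksums, so an unannounced test run can report exactly which files failed to come back intact.

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(restore_dir: Path, manifest: dict) -> list:
    """Compare restored files against a manifest of expected digests.

    Returns a list of (filename, reason) failures; an empty list means
    the recovery drill passed for every file in the manifest.
    """
    failures = []
    for name, expected in manifest.items():
        target = restore_dir / name
        if not target.exists():
            failures.append((name, "missing after restore"))
        elif sha256(target) != expected:
            failures.append((name, "checksum mismatch"))
    return failures
```

In practice the manifest would be captured at backup time and stored separately from the backup repository, so a ransomware attack that reaches the backups cannot also rewrite the evidence used to verify them.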

Related IBRS Advisory

  1. Maintaining disaster recovery plans
  2. Ransomware: Don’t just defend, plan to recover
  3. Running IT-as-a-Service Part 59: Recovery from ransomware attacks
  4. Ransomware, to pay or not to pay?
  5. ICT disaster recovery plan challenges
  6. Testing your business continuity plan

The Latest

28 March 2021: AWS has a history of periodically lowering the costs of storage. But even with this typical behaviour, its recent announcement of an elastic storage option that shaves 47% off current service prices is impressive. Or is it?

The first thing to realise is that the touted savings are not apples for apples. AWS’s new storage offering is cheaper because it resides in a single-zone, rather than being replicated across multiple zones. In short, the storage has a higher risk of being unavailable, or even being lost by an outright failure. 

Why it’s Important

AWS has not hidden this difference. It makes it clear that the lower cost comes from less redundancy. Yet this architectural nuance may be overlooked when looking at ways to optimise Cloud costs.

One of the major benefits of moving to Platform-as-a-Service offerings is the increased resilience and availability of the architecture. Cloud vendors, including AWS, do suffer periodic failures within zones. Examples include the AWS Sydney outage in early 2020 and the Sydney outage in 2016 which impacted banking and e-commerce services.  

But it is important to note that even though some of Australia’s top companies were effectively taken offline by the 2016 outage, others sailed on as if little had happened. The difference is how these companies had leveraged the redundancies available within Cloud platforms. Those that saw little impact on operations when AWS Sydney went down had built redundancy into all aspects of their solutions.

Who’s impacted

  • Cloud architects
  • Cloud cost/contract specialists
  • Applications architects
  • Procurement leads

What’s Next?

The lesson from previous Australian AWS outages is that organisations need to carefully match storage redundancy to the risk of specific application downtime. This new announcement shows that significant savings (in this case 47%) are possible by accepting a greater risk profile. However, while this may be attractive from a pure cost optimisation/procurement perspective, it needs to be tempered with an analysis of the worst-case scenario, such as multiple banks being unable to process credit card payments in supermarkets for an extended period.
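One way to temper the procurement view with a worst-case analysis is a simple expected-cost comparison. The Python sketch below uses purely illustrative figures (none of these are AWS prices or real outage probabilities): it weighs the discounted single-zone fees against the probability-weighted cost of an outage.

```python
def expected_annual_cost(base_monthly_fee, discount, outage_prob, outage_cost):
    """Expected yearly cost of a storage tier: fees plus the
    probability-weighted outage impact. All inputs are assumptions."""
    fees = base_monthly_fee * (1 - discount) * 12
    return fees + outage_prob * outage_cost

# Illustrative only: a 47% cheaper single-zone tier assumed ten times
# more likely to suffer a damaging outage than the multi-zone tier.
multi_zone = expected_annual_cost(1000, 0.00, 0.001, 500_000)
single_zone = expected_annual_cost(1000, 0.47, 0.010, 500_000)
```

Under these assumed numbers the single-zone tier still comes out cheaper, but the gap narrows sharply; for an application whose outage cost is higher, the 47% headline saving can disappear entirely.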

Related IBRS Advisory

  1. VENDORiQ: AWS second data centre in Australia
  2. Post COVID-19: Four new BCP considerations
  3. Running IT-as-a-Service Part 55: IBRS Infrastructure Maturity Model

Conclusion: Australian organisations must have strong disaster recovery plans, be it for natural disasters or man-made disasters. The plans need to deal with the protection and recovery of facilities, IT systems and equipment. It is also critical that the plan deals with the human side of the impact of a disaster on the workforce. What planning needs to be done, what testing will be done, what will happen during a disaster and what needs to be done after a disaster?

This planning can be complex and confronting. Whilst testing the failover of IT systems can be relatively straightforward, testing the effectiveness of the workforce side of a plan will be difficult, and may even disturb employees who may prefer to think “surely it will never happen to us”.

Conclusion: Despite market hype around the role of data scientists and in-house developers for the successful exploitation of artificial intelligence (AI), organisations are increasingly looking to their vendor partners to provide ready-made solutions. Both business and technology leaders are expecting solutions to be based on the vendor’s ability to leverage their customer base across various industries to create AI features such as machine learning models.

Vendors are responding by increasingly incorporating these features into their offerings, along with a new breed of vendors that are producing pre-trained or baseline machine learning models for common use cases for specific industries.

However, organisations must be prepared to contribute to this AI product development or continuous improvement process, which in practical terms means giving major vendors access to data. Without access to good data, the result will be sub-optimal for both parties.

Conclusion: The enterprise application marketplace has seen some changes in the past two years, with new entries, consolidation and acquisitions, particularly in the mid-market of ERP finance systems. IBRS recently investigated a cross-section of ERP finance systems from the top tier to the smaller players, including, but not limited to: SAP, Oracle, Workday (Finance), TechnologyOne, Microsoft Dynamics 365 for Operations (Microsoft Dynamics AX), Sage X3, NetSuite, Microsoft Navision, Sage 300, Great Plains, MYOB, Xero and SaaSu.

This research paper includes a comparison of current functionality available across three popular mid to upper market ERP finance systems, namely Sage X3, Microsoft Dynamics 365 for Finance and Operations (previously AX) and Oracle’s revamped NetSuite. They have been reviewed given their strength in finance and operations functionality.

Conclusion: Although online digital platforms are in ready supply, organisations remain unable to avoid the receipt of critical information in the form of paper documents or scanned images. Whether from government, suppliers or clients, organisations are faced with written correspondence, typed material, completed forms or signed documents that must be consumed. For a variety of reasons, it may be unreasonable or impractical to expect this information to be sent in machine-readable form.

However, machine-readable content from incoming information, both past and future, is emerging as a prerequisite to exploit artificial intelligence and machine learning as part of digital transformation. Therefore, organisations need to re-examine their data ingestion strategies and move proactively to the use of optical character recognition on incoming paper- and scanned image-based information.
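As a first step in such a data ingestion strategy, incoming documents can be routed by type: machine-readable files go straight to ingestion, while paper scans and images are queued for optical character recognition (for instance, with an engine such as Tesseract). The routing rules and labels below are hypothetical, a minimal sketch rather than a production pipeline.

```python
from pathlib import Path

# Hypothetical routing rules: formats assumed already machine-readable,
# and formats assumed to need OCR before ingestion.
MACHINE_READABLE = {".txt", ".csv", ".xml", ".json", ".docx"}
NEEDS_OCR = {".pdf", ".png", ".jpg", ".jpeg", ".tif", ".tiff"}

def route_document(filename: str) -> str:
    """Decide the ingestion path for an incoming document."""
    suffix = Path(filename).suffix.lower()
    if suffix in MACHINE_READABLE:
        return "direct-ingest"
    if suffix in NEEDS_OCR:
        return "ocr-queue"
    return "manual-review"
```

The value of a rule like this is that it makes OCR a systematic part of ingestion, applied to every scanned document as it arrives, rather than an ad hoc step performed only when someone later needs the content.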

Conclusion: The release of Amazon’s Echo in 2014 heralded the first of a series of “ambient” technologies. These new devices are unobtrusive, multi-purpose and capable of responding to conversational input through integration with virtual digital assistants (VDAs) such as Amazon’s Alexa.

A key enabler of these platforms is the ability to implement “skills” or expand the platform’s capability to interpret and respond with appropriate conversational content beyond the basic function of the device itself.

The consistency of information required by organisations under omni-channel delivery models, combined with under-resourced editorial teams, means organisations must prepare for conversational channels by transforming existing content sooner rather than later.

Failure to do so will see history repeat itself through short-term replication of content to support new channels only to have that content and channel functionality merged back into increasingly sophisticated content management platforms at significant cost.

Conclusion: A customer relationship management (CRM) software tool is both a database for contact interaction and a productivity tool used to analyse customer data, win new business and track employee sales performance. Competition for a higher share of the CRM software market is fierce. The growing demand for CRM has driven major improvements in functionality, including mobile applications, enhanced reporting and analytics, and better integration tools.

Conclusion: The ERP finance system is one of an organisation’s critical IT applications and can either benefit or constrict operations. It is the backbone system that underpins how an organisation interacts with customers and suppliers, and manages day-to-day transactions and business operations. It is the CEO and CFO’s key business tool for making business decisions. If an organisation can streamline its backend processes and automate transactions to speed up customer interactions, from online bookings through to the swift capture of payments, it sends a positive message to customers. This builds customer retention, a good reputation and long-term revenue by potentially increasing the lifetime value of customers and their referrals.

Conclusion: Data overload and the ease of accessing various types of data has created a problem of what to use and where. This is manifested in the choices of analysis which tend to the facile, such as Return on Investment, which can be applied universally even when it is not strictly applicable. Furthermore, the relative priority of some types of measurement, and in which cases, is vague. It is not always feasible to strive for the absolute solution, such as the comprehensive view, and therefore a graded and qualified response is more pragmatic.


Conclusion: The business climate over 2017-2025 will present new conditions that are more challenging. Based on various forecasts, the eight-year period will see moderate growth and that will have a direct impact on business operations.

Conclusion: As the nature of work is becoming less routine and linear, the most effective collaboration solutions are supporting the ways that teams and individuals want to work.

At the same time, customer service techniques are changing to appeal to individuals in the ways that they like to be treated.

Developments in business work flow and customer service are emerging in four broad generations of deployment:

  • Business process, work flow and customer service have morphed from document and transaction-centricity to
  • augmentation by social networking and mobility applications, followed by
  • increasing support from a conversational (Chat) model aided by interactive robotic speech, and
  • in future, even more personalised and intimate experiences delivered by Artificial Intelligence (AI) and Virtual Digital Assistants (VDA).

Conclusion: Since the inception of Bitcoin, blockchain has come to be viewed as a potential technology improvement to many ordinary transaction and data storage functions. The financial sector has led the way, from investment banks to stock exchanges, but blockchain deployment has applications in other industries. Its clear advantages may yield efficiencies leading to reduced costs. Organisations should examine how and when they might adopt the technology.

Conclusion: Despite the prominence of Business Process Management (BPM) in most organisations, Enterprise Architects are routinely oblivious to the scope for using Communications-Enabled Business Process (CEBP) within their BPM.

The very large global Microsoft and Google developer communities have run with the most popular collaboration suites as a foundation for their CEBP apps.

The most common CEBP solutions are based on customised messaging, allowing alerts, alarms and notifications to be used to support business processes. Widespread use of customised ‘Presence’ has become particularly helpful in giving the status of people or resources to inform transactions. Human delay and business latency are being minimised by using notifications to handle routine processes as well as exceptions to business rules.
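A presence-driven notification rule of this kind can be sketched in a few lines. The function below is a hypothetical illustration, not tied to any vendor API: it assigns a task to the first available participant and queues it when nobody is free.

```python
def route_by_presence(task: str, presence: dict) -> str:
    """Assign a task to the first available participant, else queue it.

    presence maps person -> status ('available', 'busy', 'offline');
    the statuses and return labels here are illustrative assumptions.
    """
    for person, status in presence.items():
        if status == "available":
            return f"assigned:{person}"
    return "queued"
```

Real CEBP deployments would source the presence data from the collaboration suite itself, which is precisely why the large Microsoft and Google developer communities build on those platforms.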

Conclusion: Telstra’s new shared access WiFi service, Telstra Air, solves the problem of users’ limited access to WiFi away from their own home, office or WiFi hotspots by sharing a portion of other users’ WiFi capacity (2 Mbps on a landline).

It uses globally deployed Fon services which also have massive capital expenditure reduction benefits for fixed and mobile telecommunications carriers and global roaming benefits for Internet service providers and users.

Enterprises should evaluate this type of architecture and service for use in novel ways to brand, differentiate and transform their customer engagement. Shared WiFi access to the Internet is another example of recent trends in the ‘sharing’ economy, such as Airbnb, Uber, GoGet carshare and others that create practical value.

Conclusion: The first generation of the Internet of Things (IoT) is now reliably internetworking uniquely identifiable embedded computer devices.

However, the emerging Internet of Everything (IoE) will go beyond the IoT and its machine-to-machine (M2M) communications between devices, systems and services. The demands from popular consumer IT will lead to a broad adoption of IoE in enterprises although corporations will focus on the IoE for its business process improvement.

Use of common collaboration tools will become the most prevalent and valuable way to extend isolated low level IoT interactions into sophisticated orchestrated IoE apps that deliver valuable experiences and tangible benefits to both consumers and corporate users.

Conclusion: When implementing enterprise Cloud services, a disciplined and locally distributed approach to user acceptance testing in combination with real-time dashboards for test management and defect management can be used as the centrepiece of a highly scalable quality assurance framework. An effective quality assurance process can go a long way to minimise risks, and to ensure a timely and successful rollout.

Conclusion: The development of new digital services often entails not only changes to workflows but also changes to the business rules that must be enforced by software. Whilst vendors of business rule engine technology often market their products as powerful and highly generic tools, the best results are achieved when restricting the use of the different approaches to specific use cases.

Conclusion: While many IT organisations believe that using public IaaS (e.g. AWS, Microsoft Azure, Google) to host business applications is a cost-effective strategy, they still need to manage the hosted environment themselves or select an external service provider to manage it for them. To that end, it is critical to understand the current service management maturity level prior to choosing an in-house or outsourced solution. This note provides a self-assessment service management maturity model to create a solid foundation for selecting sourcing options. IBRS recommends that IT organisations at maturity level 3 or higher retain the service management function in-house, whereas IT organisations below maturity level 3 should outsource the service management function.
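The recommendation rule stated above can be expressed directly in code. The sketch below assumes a five-level maturity scale with level 3 as the in-house threshold, exactly as the note recommends; the function name and labels are illustrative.

```python
def sourcing_recommendation(maturity_level: int) -> str:
    """Sketch of the note's rule: level 3 or higher retains service
    management in-house; below level 3, outsource it."""
    if not 1 <= maturity_level <= 5:
        raise ValueError("maturity level must be between 1 and 5")
    return "in-house" if maturity_level >= 3 else "outsource"
```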

Conclusion: The proliferation of mobile devices and increasingly mobile staff in the enterprise is driving demand for file sharing and synchronisation services. In the absence of a usable offering from the organisation, users are turning to the ad-hoc use of consumer grade services. This is often referred to as ‘The Dropbox Problem’.

Failure to provide a workable enterprise alternative will increase an organisation’s risk of data loss or leakage.

Conclusion: 80% of traditional outsourcing contracts established in Australia during the last 25 years were renewed with the same service provider. However, with the emergence of public Cloud, IT organisations should examine the feasibility and cost-effectiveness of migrating to public Cloud prior to renewing the existing outsourcing contracts.

In the technology industry, Apple, Google, Amazon and others are seen as synonymous with innovation. These companies disrupted prevailing business processes and changed the way people consume music, buy products and even write documents. From design and software tools to e-commerce, what these corporations have done to business around the world is transformative. Innovation has been at the centre of their success, and with it has come development and growth.

Conclusion: Search was always the most important utility online. Now it is moving into a new phase with higher functionality and relevance. In the next phase search will unite facts with opinions and personal needs. The umbrella term for this evolution is semantic search. When this search functionality is inside the devices consumers use it may be highly influential.

Organisations will confront search in two ways. Firstly, through the lock-in that users may demonstrate for the devices with the search function they prefer, and secondly, through a better context in which information is presented and through saliency.

Conclusion: While organisations and personal customers anticipate the NBN reaching their premises soon, the fact is it will take some time. The roll-out timetable has been well known since the NBN design was outlined. The apparent delays in the roll-out are the result of implementation and resourcing issues, which NBN Co. has addressed. NBN Co. expects to be able to ‘catch up’ on the roll-out by the middle of 2013 and exceed its targets.

Organisations that want, or have a high demand for, the NBN should refer to the roll-out timetable and geographical detail. It may be a catalyst for planning, or allow them to develop strategies for services within their own organisational network that can be deployed in a timely manner.

Conclusion: Ticketing and other forms of transactions are essential elements to make other forms of non-cash and mobile financial transaction become habitual to customer behaviour. The familiarity of using the mobile device in such a way, with guaranteed security and convenience, is fundamental to user acceptance. It will help encourage all trust-based mobile interactions on a wider scale.

While smartcards have been seen as the transport ticketing solution, there are risks and costs. Ticketing solutions built on smartphone platforms are the obvious choice for transit authorities and other organisations that offer services to large groups of users and must manage their use of the service.

Conclusion: Productivity is going to be a real and growing concern for organisations. A widely held view is that productivity can be raised through social technologies because these technologies necessarily enhance levels of collaboration. If only it were that simple.

Social technologies can offer better means of performing some processes, but improved productivity is not an automatic or direct result of using them. Productivity is too complex a financial and business issue to be solved by a single IT deployment. Organisations ought to apply social technologies only after due diligence and a careful examination of their requirements.

Conclusion: Business Capability Modelling is a simple, structured approach that offers a strategic view of an enterprise. A Business Capability Model remains stable even as business processes change, and as your organisation is restructured. A Business Capability Model offers a higher return on investment than Business Process Modelling, and has several advantages as a tool to help bring the ICT organisation closer to becoming a partner with the business.

Conclusion: Does every organisation need a dedicated ECM system? Not necessarily. Given the breadth of the topic, it is common to use a combination of different systems to adequately address enterprise wide management of content. When embarking on an ECM initiative, it is important to set clear priorities, and to explicitly define the limits of scope, otherwise the solution that is developed may primarily be a costly distraction.

Conclusion: The forecast growth of data transmission over the Internet in the next decade means the role of content distribution networks will probably rise. As demands on bandwidth grow, efficient management of online data will be at the centre of many organisations’ online delivery strategy.

While it may seem that improved broadband and the arrival of the NBN (when that occurs) will solve the issues of speed, it will not because more users, richer media and more applications will fill the bandwidth. Consequently a content distribution network (CDN) strategy ought to be part of any organisation’s online planning.

Conclusion: Adding analytics is essential to any social media strategic initiative, whether it is well organised or just experimental. Without analytics, an organisation is blind to market interaction and therefore cannot understand how to modify its tactics. However, avoid simply trusting the data alone to provide the answers and set directions. Gaining the most benefit from such analytics tools will require skills in interpretation, analysis and judgement about when to implement actions and/or revisions.

Conclusion: Educating executives in the essentials of information management and related technology trends is an ongoing challenge. CEOs and board members are being bombarded with simplistic marketing messages from the big global IT solution vendors, as well as the messages from the most prominent local IT service providers. The same vendors usually target CIOs and senior IT managers with a bewildering set of new, “must-have” technologies every year. To avoid spending millions of IT dollars on dead ducks, vendor claims must be deconstructed into measurable aspects of product or service quality.

Conclusion: The seemingly growing deployment of enterprise social media may add another layer to organisational communications and collaborative suites; or it may replace them altogether. At this stage definite judgement is not possible, given the varying feedback on usage, value and overall benefits.

Ostensibly these tools are being introduced to improve collaboration and productivity. Yet the evidence is not conclusive on those criteria. Nevertheless, it is not necessary to rationalise such deployments on efficacy criteria alone.

Conclusion: Crafting a durable social media strategy is a challenge. How social media tools and behaviour will mature, and the lessons taken from the early phase, will define how it will be implemented later. To manage the social evolution, adequate guidelines can serve as a strategic path.

The two key elements in creating a social media strategy are: 1) a robust view of how users and user behaviour are evolving, and 2) practical and tactical techniques and tools to deploy and measure, in order to produce the information needed to grow competence.

Conclusion: Social media networks may appear to have built businesses that can only continue to grow, but they have a real challenge ahead. Demography is everything, and with social networks it is the crucible that will affect the organisations using social media for their communication and marketing objectives.

Organisations should take a 3-5 year view with Facebook and other social media to build strategies that can evolve with the channel over the period.

Conclusion: Business process management and enterprise collaboration tools are converging into a new form of enterprise capability termed Social BPM. This new approach harnesses the viral power of social networking into enabling real-time user-developed collaborative business processes within the enterprise. This convergence may deliver the transformational value promised, but never realised, by either technology in isolation. Organisations should watch this trend carefully and have a combined strategy for enterprise collaboration and business process management to be in a position to exploit the amplified value that social BPM promises.

Observations: Business process management and enterprise collaboration have long been two prominent themes for organisations seeking to improve efficiency and productivity through IT innovation. IBRS’s experience with Australian and New Zealand organisations has found the return on investments in these areas to be underwhelming.

Business process management has tended to focus on centralised business process modelling and attempts at process re-engineering. There are many challenges in doing this. The common model has been for a team of dedicated business process modellers to study the organisation from an almost anthropological perspective, capturing imperfect business processes from the field and seeking to create an optimised future state. However, the resulting models are often ineffectual in driving real change in the organisation, succumbing to the ivory tower syndrome of being disconnected from the “real world”.

At the same time enterprise collaboration and more recently enterprise social networks have been seen as a way of improving communication and interaction within the enterprise. Generally this has meant the implementation of browser-based content and document management systems, along with the ubiquitous (and often token) “blogs and wikis”. While centralised, well-structured searchable corporate knowledge is a tremendous asset, it tends to reflect static policy and operational documentation, not real-time system and stakeholder behaviour.

The over-trumpeted “Web 2.0” technologies do provide a degree of democratisation of content and freshness of information, but are most commonly seen at the periphery of core business activities. Social networking tools are emerging within the enterprise, but the enduring business value of “James just made a cheese toastie in the marketing kitchen” status updates is viewed with scepticism.

Social business process management. A new class of information management tools is emerging. These web-based tools allow business processes to be defined and implemented in a decentralised fashion using “Facebook-style” social networking tools. These tools leverage the creativity and intelligence of human participants in work processes to deliver productivity and efficiency benefits. The fundamental shift in perspective is to acknowledge that a business process is a fluid activity performed by a group of people acting co-operatively, rather than a rigid set of flowcharts imposed by an aloof systems bureaucracy.

With social BPM the participants in a business process are responsible for defining it. Once a business process is operational, the participants can use an array of social technologies including social networking, status updates and comments, RSS/twitter feeds, blogs/wikis to augment the process with contextualised support. Supporting IT systems feed important updates into the social stream to provide events that can trigger support from the underlying network of interested parties.

This is a novel concept and an example will help illustrate how it works in practice.

A fictitious manufacturing company has a range of people supporting the enquiry-to-sale business process. With a social BPM tool, a stream of relevant events underpinning the business process is made available to users. A range of people in the company may have subscribed to receive events from a particular customer. These events are notified across web, mobile and email channels. In the web channel these updates may appear in a similar fashion to a Facebook page.

  • An event is posted from the CRM system indicating a particular customer’s contract will expire in two weeks.

  • A comment is left by a salesperson saying he is planning to visit them next week and will organise a renewal.

  • Another comment from a legal person provides a link to the new contract template that must be used for future transactions.

  • An engineer posts a comment that they have an active support call open which needs to be finalised if the customer chooses not to renew the contract.

  • A salesman who has been assigned to a different account chooses not to be notified about this customer and removes them from his subscription feed.

  • After the contract is renewed the SAP system posts an automatic hyperlink to the invoice which was generated for this client.

  • A marketing expert notes that this process could be improved by completing a customer satisfaction report for each contract renewal, and modifies the defined business process to automatically post a link to the client satisfaction survey system after each renewal is completed.
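The event stream above can be sketched as a minimal publish/subscribe model. This is a toy illustration with hypothetical names, not any vendor's API:

```python
from collections import defaultdict

class EventStream:
    """Toy social BPM event stream: systems and people post events
    against a customer record; subscribers receive notifications."""

    def __init__(self):
        self.subscribers = defaultdict(set)   # customer -> subscriber names
        self.events = defaultdict(list)       # customer -> event log

    def subscribe(self, who, customer):
        self.subscribers[customer].add(who)

    def unsubscribe(self, who, customer):
        self.subscribers[customer].discard(who)

    def post(self, source, customer, message):
        """Record an event from a human or system source and return
        the (sorted) list of subscribers to notify."""
        self.events[customer].append((source, message))
        return sorted(self.subscribers[customer])

stream = EventStream()
stream.subscribe("sales", "Acme Pty Ltd")
stream.subscribe("legal", "Acme Pty Ltd")

# The CRM system posts an automated event; both subscribers are notified.
print(stream.post("CRM", "Acme Pty Ltd", "Contract expires in two weeks"))

# A salesperson reassigned to another account drops the feed.
stream.unsubscribe("sales", "Acme Pty Ltd")
notified = stream.post("SAP", "Acme Pty Ltd", "Invoice generated")
print(notified)  # ['legal']
```

The essential design point is that system sources (CRM, SAP) and human participants post into the same stream, so subscription, not org structure, determines who sees what.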

This example shows how a user-driven, real-time business process emerges that uses collaboration technologies across a variety of channels to improve productivity, increase quality and improve client service. Key to this process is the feeding of events from both human and system participants. It is the integration of important system events into the social media stream that distinguishes social BPM from the established enterprise collaboration or unified communications models.

While a number of CRM and ERP systems may offer social features, these are often limited to the confines of a particular software system and cannot span the enterprise. Broad-spectrum social BPM tools include Appian Tempo, Salesforce Chatter, IBM Blueworks Live, Oracle BPM Suite 11g and Pegasystems.

Social BPM embeds the responsibility and power for business process design and management into the hands of the people responsible for delivering in the real world. It also connects the raft of collaboration tools directly into the nervous system of business process execution. While this devolved model of organisational management may run counter to the traditional command-and-control mentality of some large organisations, it opens up a new model for democratic empowerment of business users.

Next Steps:

  1. Take stock of the value gained from enterprise collaboration and/or business process management investments within the organisation – are real benefits being realised from these investments?

  2. Conduct a trial of a social BPM tool within the organisation with a passionate and curious user base. With a web-based SaaS social BPM tool this can be achieved at little cost, risk or commitment. The user-driven philosophy also means this can be achieved with little corporate support.

  3. Evaluate the findings of social BPM usage against traditional methods. Decide whether it suits the culture of the organisation and is sustainable as a long-term business platform. If so reframe the business process management and enterprise collaboration strategy to embrace social BPM as a core strategic objective.

  4. Ensure appropriate governance, risk and policy controls are in place to guide social BPM as a platform for business execution.


Conclusion: Business intelligence has traditionally served as an after-the-fact reporting and analysis capability that drifts weeks or months behind current events. Modern enterprises demand timelier access to integrated information. This demand cannot be met by conventional business intelligence approaches and requires a variety of new techniques targeted at the immediacy of the information required.

Conclusion: The calculated process by which the Facebook message is intended to corral investors, marketers, users and others into the world of ‘social’ is breathtaking. The reality is more complex, less easily believable, and should make any organisation involved with social media ask questions.

Because Facebook (and social media generally) is still developing, it is necessary for organisations using the media to set their own metrics and build knowledge.

Conclusion: Optimising the efficiency and security of statutory board communication is a critical requirement for any organisation. The development of board portal solutions has enabled the basis of board communication to shift from paper to digital media. It is vital that the IT department helps to facilitate this shift. The key challenge for IT departments is to ensure a focus on solutions that are able to be implemented across multiple platforms, and not tied to the latest ‘must-have’ device.

Conclusion: Many enterprise applications remain in existence for 10 or 15 years, or even longer. The magnitude of their total lifetime costs usually marks enterprise applications as being in the top decile of all IT investments. Despite these factors, many of those involved in selecting candidate products choose the wrong products for the wrong reasons. A more structured approach is needed, in which the traditional focus on detailed functional requirements is de-emphasised and balanced against other factors essential to making a sound, long-term IT investment.

Reviewing one of the many new Christmas/New Year film releases, David Stratton, probably Australia’s best known film critic, remarked: “It’s surprising how many A-grade actors it takes to make a B-grade movie these days…”

Conclusion: Many organisations use flawed approaches for selecting enterprise applications. In short, they buy the wrong software for the wrong reasons. With many enterprise applications continuing in existence for 10 or even 20 years, this is a long time to live with a bad decision.

In dealing with the many issues around the cloud it will take a delicate balance of political skill backed by a strong communications strategy to negotiate and collaborate with business. Offering informed and contextual guidance in an open-minded discussion is a strong position to adopt. Technology managers should reflect on, and if necessary, modify how they are managing the cloud with their business colleagues. In some cases a formal approach may be required: presentations, roadmaps, evaluations and information packages delivered to a business audience. In many other cases, a revised approach may be informal, and involve a collaborative attitude to enable an organisation to make better choices.

Conclusion: Software products are marketed with long feature lists, and data export/import features in industry standard formats are commonly advertised – and perceived as the pinnacle of product maturity. Similarly application integration is often equated with the need for data exchange mechanisms between systems. Yet interoperability is a much wider topic, and data exchange only represents the most rudimentary form of interoperability. Failing to understand more advanced forms of interoperability leads to overly complex and brittle systems that are extremely costly to maintain and operate.

Conclusion: In recent months several Tier-1 Australian and New Zealand vendors have announced, and in some cases delivered, locally hosted Infrastructure as a Service (IaaS). These announcements will reduce Business and IT Executives’ perception of the risk of adopting IaaS, and result in greater interest in using cloud as a “lower cost alternative” to in-house infrastructure.

While the cloud is often assumed to be an inexhaustible supply of low cost virtual machines, that are available on a flexible pay-as-you-go model, organisations that have looked beyond the hype found it was not as cheap, or as flexible, as you might think.

Conclusion: As the post-GFC economic thaw continues, organisations are seeking to become more resourceful and adventurous. They are rediscovering their innovatory DNA whilst remaining focused on staff productivity and cost control.

The launch of Windows Phone 7 has surprised the worldwide IT industry in two ways: firstly, that Microsoft launched a quality product – one that had to face up to the expectations set by iPhone and Android. These are no longer mere phones, nor simple tools of information and communication. They are extensions of personal identity, designed objects that are adored.

Conclusion: Neither written languages nor formal programming languages are capable of representing organisational knowledge in a human-friendly format. Even though Semantic Web technologies attempt to offer assistance in this area, their scope of applicability is limited to the role of establishing crude links between elements of knowledge in the public domain. Making organisational knowledge tangible and easily accessible requires new techniques, and dedicated technologies.

It all really started with the hype and the launch around Apple’s iPad earlier this year. Until then, tablet devices were perceived as a fringe phenomenon, of little interest to the mainstream consumer or business user. I have had an eye on the tablet space since the first release of the Amazon Kindle in 2007, and always wondered when devices with a tablet form factor would finally take off. To some degree the introduction and promotion of netbooks in the last two years had confused the market, but the range of tablet devices that are now available is reassuring. Still, the dust is far from settled, and there is a whole pipeline of tablet devices that have yet to hit the shelves. So, apart from the geek-factor, what value can a business user get out of a tablet?

Conclusion: Being acquired by Oracle is a good thing for Sun technologies. However the long acquisition period, followed by weak marketing of the benefit and poor communication of the product roadmaps, has left many customers unsure about their strategic investments in Sun technologies.

Oracle has a clear plan for Sun, with detailed product roadmaps, but customers will have to dig deep to get this information.

Organisations dealing with larger volumes of information and increasingly complex information requirements need solutions which can be integrated and suit users’ needs. Google’s search product is quite well understood, even if only as a search interface affiliated with its Web search engine.

Conclusion: Large-scale Enterprise Data Warehouse implementations and operations often lead to multi-million dollar items in annual IT budgets. It is paramount that investments of this magnitude are put to good use, and are translated into tangible value for the organisation. Complexity of the underlying information structures can become a major issue, especially once complexity impacts the ability to formulate data warehouse queries in a timely manner. With a bit of foresight, or even retrospectively, it is possible to equip data warehouse designs with simple orientation and navigation aids that significantly reduce the time that users need to locate relevant information.

Conclusion: Last year Richard Soley, Ivar Jacobson, and Bertrand Meyer called for action to re-found software engineering on principles and practices that are backed by robust scientific theories. Achieving big gains in software quality and productivity by introducing off-the-shelf methodologies has proved to be elusive. The evidence suggests that looking for much smaller (and scientifically validated) building blocks that can be composed into an organisation-specific methodology is much more likely to deliver results than the quest for the ultimate methodology. Alignment between business and IT requires constant vigilance to stay on the narrow ridge that separates over-simplification of an organisation’s activities from spurious complexity in software implementations.

Conclusion: Oracle Exadata is an innovative approach to system design that makes Oracle a leading vendor in our Integrated Systems model and it is an example of how IT infrastructure will evolve over the next 3-7 years.

Oracle’s reinvention of storage as a cluster of commodity servers (x64), using commodity storage (SAS/SATA), and a volume storage operating system, is particularly noteworthy. This is a fundamental departure from the last 20 years of storage design, and heralds a major shakeup in the storage industry over the next five years.

Software development was still a very esoteric discipline in the days when Lisp was born. In the meantime the software industry went through a whole series of major paradigm shifts:

  • From structured programming (Pascal and related languages)

  • To relational databases (the SQL standard and implementations from IBM, Oracle and others)

  • To Computer Aided Software Engineering (a very large range of competing tools)

  • To object-oriented languages (such as Smalltalk, C++ and Java)

  • To components (such as the CORBA standard and Java Enterprise Edition)

  • To web based applications (HTML, XML, JavaScript, and other scripting languages)

Conclusion: Oracle’s vision is to become the leading IT Systems Vendor by creating a complete IT stack of hardware, middleware and applications. The objective is to reduce complexity, and to lower the total cost of ownership, through integration and optimisation across the entire stack.

Oracle will retain the Sun products that both complete this Systems Vendor vision and align with its long-term business and technology strategies. The remaining Sun products will either be parked, with the customer base transitioned to a related Oracle product, or sold to a third party.

Conclusion: Most attention has been focused on Chrome OS's technical qualities and possibly disruptive effects on the operating system status quo while the commercial objectives of the operating system are veiled. Chrome OS is another potential channel by which Google can harness network effects to develop revenue.

Observing how revenue will grow from Chrome OS will indicate its real market and technological potential. Although it seems far away now, in the next 18-24 months IT departments in organisations will probably have to deal with the swelling influence of Chrome OS from its early adopters.

Conclusion: Interviewing CXOs during consulting assignments over the past eighteen months has revealed significant dissatisfaction about their ERPs. Many contend their ERP investment has significantly eroded since originally implemented, and, given the need to maintain a reasonable degree of release currency, their ERPs are now providing negative returns on capital invested.

Conclusion: CIOs and IT operations managers must avoid the risk of succumbing to green fatigue. Greenwashing is rampant, with every IT vendor promoting its products as "green." Most IT publications have at least one Green IT focused section. At the same time organisations are continuing their focus on cost reduction, often with IT under the magnifying glass. In these circumstances, it is easy for Green IT to be given lip service only while everybody gets on with the "real work". This must not happen. The biggest green issue for IT is how to reduce the energy consumption of the data centre. Organisations should first focus on reducing the energy consumption in their data centre: not only does it bring a significant green benefit but it saves money.

Conclusion: Given the hype around the interactive aspects of Web 2.0 and the continuing popularity of Business Process X – with X being any element of the set {Management, Modelling, Analysis, Re-engineering, Integration} – the role of artefacts in enterprise collaboration and in value chains is easily neglected. If an organisation looks beyond the hype and invests in a comprehensive and accurate model of artefact production and consumption, the result is an understanding of business processes and value chains that is much more useful than the average business process model.

Conclusion: Software vendor Zoho is pinning its growth on the rapid adoption of cloud services with the aim of being the IT department for SMEs. This business strategy might seem overly optimistic as its potential success may even be partly dependent on Microsoft. According to Zoho, Microsoft’s move into delivering products online is an implicit endorsement of the delivery and use of software from smaller vendors.

Conclusion: Automated software and system testing will never be the testing silver bullet. One of its components though, the automated generation of test data, is one of the powerful weapons in the software testing arsenal and its deployment can provide a strategic advantage in the testing battle. The key is when and how to automate test data generation and which of its features are most effective when deployed. Two of its most useful benefits are reducing risks by protecting personal details and lowering costs by significantly reducing the numbers of tests required.
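As a sketch of the two benefits named above (protecting personal details and generating small, deterministic test sets), the following assumes a simple account-record structure; all names are hypothetical:

```python
import hashlib
import random

def mask_name(name: str) -> str:
    """Deterministically pseudonymise a name so tests stay repeatable
    without exposing real personal details."""
    digest = hashlib.sha256(name.encode()).hexdigest()[:8]
    return f"customer_{digest}"

def generate_test_records(n: int, seed: int = 42):
    """Generate n synthetic account records, placing explicit boundary
    values first and filling the rest with seeded random data."""
    rng = random.Random(seed)
    boundary_balances = [0, -1, 10**9]  # zero, negative, very large
    records = []
    for i in range(n):
        if i < len(boundary_balances):
            balance = boundary_balances[i]
        else:
            balance = rng.randint(0, 10_000)
        records.append({
            "id": i,
            "name": mask_name(f"real-name-{i}"),
            "balance": balance,
        })
    return records

records = generate_test_records(5)
print(records[0]["balance"])  # 0 -- first boundary case
print(all(r["name"].startswith("customer_") for r in records))  # True
```

Seeding the generator means a failing test can be reproduced exactly, which is what makes generated data usable in regression suites rather than only in exploratory testing.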

Conclusion: vCloud Express is a new entry level Infrastructure as a Service (IaaS) offering based on self-service portals, credit card payments and VMware’s enterprise class virtualisation products.

CIOs should look at vCloud Express as a low cost, low risk way to learn how to use public cloud infrastructure. Since vCloud Express may be seen by some groups (dev/test, business units) as a way to side-step the perceived bureaucracy of the IT Organisation, CIOs should develop a strategy to embrace this use as a way to retain control and ensure relevancy with dissatisfied customers.

Conclusion: User interface design, implementation, and validation can easily turn out to be the most expensive part of application development, sometimes consuming over 50% of the overall project budget. This does not have to be the case. If user interface and usability requirements are specified at the appropriate level of abstraction, the required design and implementation effort can be reduced by an order of magnitude, whilst consistency and usability of the resulting application is greatly improved.

Conclusion: Any successful software testing regime uses a judicious mix of manual and automated testing. Manual testing is best in those areas that need spontaneity and creativity. Automated testing lends itself to explicit and repetitive testing and to scenario, performance, load and stress testing. While not all tests can be automated, given good tools there is no reason why much testing and test data generation and test management cannot be automated.

Conclusion: Like a toy that comes with a ready meal, Google Apps is seen by universities as suitable for student users. By its cost per student and terms of service, Google Apps exemplifies how the principle of good enough (POGE) has been accepted to service student needs.

With ever-present financial pressures, institutions will consider Google Apps, and at its going cost it is a viable alternative, one which will develop and in all likelihood offer more features in the future.

Conclusion: Cloud computing is promoted as the next disruptive technology in the organisational use of IT. If this does happen, then no matter what else changes there are some verities which must not change, in particular meeting legal requirements. There are at least seven areas where a move to cloud computing should not be contemplated unless the legal requirements can be demonstrably satisfied.

Conclusion: Google Apps' products are developing rapidly. These developments range from the large and significant, to the small minor adjustments. Google has increased its pace of development, and enterprise users will want to gain a strategic view of how the Apps mature in the next two years.

Google Apps' driving force, Rajen Sheth, defines the corporation's main ambitions in two areas: to improve functionality, perhaps in ways that have not been considered by users, and to redefine enterprise messaging and collaboration. Whether they can achieve such ambitions is not foreseeable but they will offer many new tools and enhancements to reach that objective.

Some commentators have been sceptical about Google's intentions with the Chrome OS. Is it a mere distraction? Why has Google bothered? Is Chrome part of a broader plan? As a former CIO, I see Chrome as just one element in a complete armoury of products Google is developing, all aimed at the CIO heartland.

Conclusion: Google is working in a dynamic market exploring and challenging current approaches. While that evolving plan may confuse some observers, it may succeed, though perhaps not exactly in the way originally set out.

To help understand what Google is doing in the enterprise market, IBRS interviewed the founder and driving force of Google Apps.

Recently Wired magazine featured an interview with the CEO of Facebook where Mark Zuckerberg claims that Facebook does not regard other online networking platforms as competition, but that Google is the real competitor.

Conclusion: The Web, and social networks, as virtual places of conversation, challenge the role and effectiveness of an organisation’s communication management.

Traditional management and censorship in the unfettered communications world of the Web may only be effective to a limited degree. In this new communications landscape, organisations will have to train staff, and modify their traditional attitudes, to deal with the varied and complex online channels.

Conclusion: Software as a service seems suspiciously familiar, bringing up old memories of time-share mainframe computing systems in a different era, and more recent memories of application service provider based software offerings. Repackaging of old concepts in new terminology is a technique commonly used by software vendors. However, don’t dismiss software as a service due to a lack of technical innovation. The current attraction of SaaS is a result of changes in the economics of IT infrastructure.

Conclusion: Proprietary web services are raising concerns about strong lock-in. Those raising the alarm bells paint a simplistic picture based on the assumption that services such as Facebook are representative of the web service landscape. Upon closer examination it appears that the doomsday prophets have a vested interest in prolonging the use of localised IT infrastructure. In reality the concept of web services opens new possibilities to unbundle and mitigate lock-in, allowing internal IT to focus on the core business and to outsource the operation of non-core functionality.

Conclusion: Any potential user of Google Apps should understand how Google operates and distributes software products and services. Google’s economies of scale may offer a compelling basis to utilise its software.

Conclusion: There is still more hype in the media about cloud computing than uptake. Advocates promise dramatically improved ease of use, lowered costs driven by economies of scale, and much greater flexibility in sourcing and adapting to change. Nicholas Carr, in his latest book, predicts that cloud computing will put most IT departments out of business: "IT departments will have little left to do once the bulk of business computing shifts out of private data centres and into the cloud." Such arguments make it likely that organisations will increasingly place some or all of their IT-supported services in “the cloud”. This makes these organisations dependent on the reliability of the vendor’s cloud offerings. If an organisation moves all or part of its IT services to a cloud environment, it must first identify and understand the new risks it may be exposed to.

Conclusion: Building valuable software solutions increasingly means building solutions that run on the web, and that are not dependent on any particular operating system. Pervasive web connectivity leads to a new paradigm for building software architectures that is based around the availability of high quality web services and around the conscious use of Open Source software in selected areas to reduce vendor lock-in.

Conclusion: While the total cost of ownership model is helpful in an initial comparison of products and services, the familiar problem with TCO as an analytical methodology is evident. This problem is especially clear when dealing with Google Apps because its costs of production and distribution are atypical of the software industry.

The assessment of price should be done in relation to, or in the context of features and benefits. These may be itemised as utilitarian functions and therefore it is possible to assign costs to each feature. The differences in requirements for each organisation mean that to a large degree, TCO evaluation should be done in the context of an organisation’s own situation.
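One way to make that organisational context explicit is to weight each feature's cost by how heavily the organisation actually uses it, rather than comparing list prices. A minimal sketch, with invented product names and numbers:

```python
def weighted_tco(feature_costs, usage_weights):
    """Weight each feature's annual cost by an organisation-specific
    usage factor (0.0 = unused, 1.0 = fully used).

    feature_costs: {feature: annual cost per user}
    usage_weights: {feature: usage factor for this organisation}
    """
    return sum(cost * usage_weights.get(feature, 0.0)
               for feature, cost in feature_costs.items())

# Hypothetical suites: A is stronger on video, B is cheaper on mail.
suite_a = {"mail": 50.0, "docs": 30.0, "video": 40.0}
suite_b = {"mail": 40.0, "docs": 35.0, "video": 10.0}

# This organisation lives in mail, uses docs half the time,
# and barely touches video conferencing.
usage = {"mail": 1.0, "docs": 0.5, "video": 0.1}

print(weighted_tco(suite_a, usage))  # 69.0
print(weighted_tco(suite_b, usage))  # 58.5
```

On raw list price the suites look close; weighting by this organisation's own usage profile reverses the comparison, which is the point the text makes about evaluating TCO in context.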

Conclusion: As the economy enters recession both public and private organisations are trimming costs. There is emerging evidence that Google Apps Premier may have some appeal compared with other vendor products. Despite questions over Google’s capability and experience with channel partners, deeper investigation is worthwhile.

Organisations assessing Google Apps Premier must determine not only total cost of ownership, as Google does not have a model template to assist with that, but also whether the channel relationships will endure, as Google has almost no experience in running such programs.

Conclusion: One of the weakest process elements in the software development lifecycle of most organisations is the discipline of requirements engineering. Over-investing in requirements specification amounts to speculation on behalf of the customer, and under-investing in requirements specification leads to speculation by the software development team. The optimal balance involves selecting an appropriate set of artefact types, and minimising the effort for maintaining these artefacts.

Conclusion: One commonly used approach to model management in Unified Modeling Language (UML) tools centres on package-based modularisation and versioning of models – but this leads to a complex and unbounded web of inter-module dependencies. Another approach uses a scalable multi-user repository and versioning at the level of individual atomic model elements. The latter technique, although it largely eliminates practical contention and consistency issues between users, still does not encourage good modularisation, and gives no indication as to the state of completeness of a model. Fortunately, there is a set of best practices that can be applied to ensure modularity is treated as a first-class concern, such that model versioning is adequately addressed with standard version control software and minimal additional tooling.

Conclusion: The likelihood of funding to build new websites is not high in 2009. Therefore, developing ingenious ways to improve existing website properties is necessary. Evaluating and testing the website is a wise strategy in order to refresh content and enhance contact with site users.

A testing strategy should set out the business case, including the logic by which it will be conducted and the return on investment that may be expected. This focus on process will help to ensure that the testing program can achieve results and that other stakeholders within the organisation understand the objectives and purpose of such a testing program.

Conclusion: It is now increasingly recognised that small (domain-specific) modelling languages hold the key to improving productivity and quality in software design, development, configuration, interoperability, and operation. Little custom-built languages can be introduced and exploited without necessitating any changes in architectural frameworks or run-time technologies – a characteristic that is painfully lacking in the vast majority of software products and tools. One of the first steps in getting started with domain-specific modelling is the selection of an appropriate DIY tool kit to build software power tools based on little languages. Currently there are three mature tool kits in the market that are worth considering, and the number of contenders is increasing.

Conclusion: The choice of technology for a website involves a selection process with several factors. The process must consider adequacy of the technology, future business needs, and organisational resources, both current and future. Clarity in the choice of products will reduce risk and offer better resource allocation.

The best way to decide the preferred technology option is to use a decision template which assists in the selection process, providing a rational, transparent background to choices. This method can work for an organisation into the future regardless of personnel.

Conclusion: The importance of web site usability has higher recognition now than it did a few years ago, but there are still several gaps in achieving an effective usability evaluation process. In order to improve site usability for end users, combining technology with survey research will help considerably.

There have tended to be two paths to examining website usability. The first is the use of Web analytics data, and other technology tools generally, to improve a site’s functionality. The second path employs consultants’ expertise in conjunction with research focus groups to address the usability and functionality of web properties. The integration of these two methods, on a case by case basis, would be more effective.

Conclusion: Exploring better content management solutions to remain competitive and to raise the value of online investments is a wise policy to adopt now. With much slower economic prospects ahead, gaining greater efficiency or reaching users in better ways is going to be necessary.

For commercial websites the criteria to implement content management should be underpinned by usage – that is, click rates, content access and so on. The web sites that create dynamic – and personalised – online environments are more likely to outperform stale Web sites. Having a better content management system process may also use resources more efficiently and help align an organisation’s objectives to the new business conditions.

Conclusion:The balance of information power is skewed in favour of knowledge intensive organisations, to the detriment of information-poor organisations and individuals. Reliable, high quality information distilled from Software as a Service users is evolving into a powerful currency that can be translated into financial profit via the sale of ad space and other techniques.

Conclusion: To get the most from their IT vendors, buying organisations must understand the underlying importance of each of their vendors to the organisation, and their potential to work with the organisation to help achieve business goals. A structured approach to building a vendor portfolio will allow key vendors to be identified and for the process of building strategic, partnership type relationships to be initiated.

Conclusion: Web analytic tools are so pervasive and widely used it hardly seems necessary to consider their capabilities and implementation. Yet businesses and other organisations may under-use their Web analytics software, in which case they are not obtaining the value they expected.

The evidence from both measured and anecdotal sources is that organisations that achieve the greatest gains through Web analytics have used a process to select the right tool for their needs, then integrated it well, and trained their staff to use the system to segment visitors, understand their engagement, and quantify the effectiveness of the website.

Last month’s issue of the Communications of the Association for Computing Machinery (ACM) contained a timely article on the role of formal methods in the design and construction of software systems. The article drives home the point that much of software development today still amounts to "radical design" when viewed from the perspective of established engineering disciplines and that, to date, there are only a limited number of areas for which established "normalised software designs" exist. But this picture is slowly starting to change, as model-driven approaches offer economically attractive ways of packaging deep domain knowledge as reusable "normalised designs".

Conclusion: In the current credit and liquidity market investors demand more transparency, and accurate and timely product and market information, yet most legacy banking systems are not up to the job. There is a strong business case for replacing legacy banking systems to restore organisational agility, and to improve the quality of service offered to customers.

Conclusion: A new age for business applications is unfolding. Arguably, in 2008 applications are at a tipping point akin to that experienced in the early to mid-1990s, which was marked by the emergence of mature ERP technology and subsequent explosive sales growth. CIOs are urged to put applications firmly on their radar and begin acting upon their application portfolios as well as the methodologies and governance approaches that underpin them.

Conclusion: The usefulness of Web based applications is not limited to the provision of Web-enabled front-ends to traditional business software. The Web also allows the design of applications that are capable of putting powerful human intelligence at our fingertips. Tapping into that intelligence to solve truly hard problems possibly constitutes the next disruptive innovation. Intelligence has never been cheaper!

Conclusion: The recent strong media attention on Green IT, coupled with aggressive vendor marketing, has left the impression that many IT organisations have made significant progress in reducing their environmental impact. In recent conversations with our clients it seems this media and vendor attention has raised concerns with some organisations that they have fallen behind their peers in this area.

To help clarify the status of Green IT in ANZ, we recently undertook a survey that indicates most organisations are still in the earliest stages of reducing the environmental impact of IT. While there is great interest in Green IT, and the majority of organisations have a mandate from the executive to reduce environmental impact, there is a strong disconnect with the IT organisation's ability to effect change due to a lack of budget and formal programs of work.

Conclusion: Manually re-implementing application functionality in a new technology every few years is highly uneconomical. Model driven automation offers the potential to eliminate software design degradation and to minimise the cost of technology churn. Yet the model driven approach only works if conceptual models of questionable quality are discarded, and if deep knowledge about the business is used to develop elegant, compact, and tailored specification languages for domain knowledge.

This article is the final in a series of three on technologies and techniques that are leading to fundamental changes in the architectures used to construct software applications and software intensive devices.

Conclusion: Over the next 7 years the typical commodity IT infrastructure will be ‘reinvented’ from today’s network of independent servers and storage into a unified computing resource that looks and behaves remarkably like the old mainframe. This new infrastructure will blend the best attributes from each architecture to create a highly agile, robust and cost-effective environment that is based on commodity components.

While the key technologies are available today, the inertia of the existing environment and the cultural barriers in IT and the business mean this journey will take most organisations 5-7 years to complete. IT organisations can hasten the journey by breaking down the siloed, hardware-centric cultures that exist in their organisations. To succeed, the commodity IT infrastructure must be reinterpreted as a unified, shared resource, where a server is a mere component, rather than as a loose network of servers owned and managed by individuals or groups.