Conclusion: 

The need to have a disaster recovery (DR) plan that is understood, agreed upon, and jointly owned by all elements of the organisation is essential in preparing for a disaster event. An effective DR plan will focus on managing the risk associated with completing a successful restoration and recovery in a time, and to a level of effectiveness, acceptable to the business.

To be effective at mitigating the risks associated with restoring and resuming services after a disaster event, the DR plan must also clearly identify how it is to be verified, thereby reducing the risk of an unsuccessful disaster recovery.

The key focus of the DR plan must always be restoring the delivery of business functions. The technical delivery may be from on-premises ICT services, outsourced providers, or the Cloud. Regardless of how technology is delivered to the business, the impact of an ICT disaster event needs a verified plan!

Read more ...

Conclusion:

Due to the scarcity of skilled ICT professionals and managers, organisations will inevitably seek extra capacity from staff augmentation providers to address the shortfall. Staff conducting due diligence to find and qualify the best provider must be business savvy, unafraid to ask difficult questions and, when dealing with providers, able to separate the wheat from the chaff. Identifying providers that have the capacity and ability to deliver the desired outcomes, and that are a good fit, is not an easy task.

If the staff find that no provider can deliver what is required, stakeholders must either:

  • Wait for internal staff to become available, or
  • Hire and train staff, which can be an expensive, time-consuming exercise that may increase business risks.

Read more ...

Conclusion:

This month, discussions regarding an increased demand for disaster recovery, business continuity and work management solutions have been prominent. While the pandemic has triggered fundamental IT changes in an effort to resolve gaps and vulnerabilities, the accelerated rate of digital transformation and migration efforts has resulted in shortfalls when planning and establishing new work environments. Vendors have found it difficult to maintain business processes when unforeseen or extreme events occur. Management solutions that cannot cater to all scenarios, combined with a lack of clarity regarding customer responsibilities when responding to operational failures, have created difficulties for service providers. Vendors therefore need to provide customers with more detailed and clearer disaster recovery and business continuity plans, as well as specialised management tools and associated resources to implement solutions and responses. It is also critical for vendors to communicate with customers to facilitate the recovery of processes and ensure all business systems can be utilised in new and dispersed working environments.

Read more ...

Conclusion:

Traditional development practices have been supplanted by the DevOps movement over the past decade. The next evolution is the movement towards DevSecOps where security is integrated across the development lifecycle.

DevSecOps is not just a matter of buying the latest tooling and running the developers through some training. It requires commitment, not just from the technology group as a whole but from the business leaders themselves.

It is as transformative a project for an organisation as a move from on-premises to Cloud. Poorly managed or even unplanned DevSecOps can have a negative impact on the development capabilities within an organisation.

Read more ...

Conclusion: 

Project management in organisations is commonplace. Reviews are often undertaken at the end of a project to gain insights for future projects. Project reviews completed during the life of a project need to be inclusive of appropriate stakeholder groups, with assessment targeted at the appropriate focus areas. Active and inclusive review and assurance activities need to be well understood and supported within the organisation so that they are not viewed as an exam that needs to be prepared for and passed. Applying reviews and assurance as a process checkpoint only is ineffective and will not ensure quality project delivery.

Read more ...

The Latest

26 May 2021: Google has introduced Datastream, which the vendor defines as a “change data capture and replication service”. In short, the service allows changes in one data source to be replicated to other data sources in near real time. The service currently connects with Oracle and MySQL databases and a slew of Google Cloud services, including BigQuery, Cloud SQL, Cloud Storage, and Spanner.

Uses for such a service include:

  • Updating a data lake or similar repository with data being added to a production database
  • Keeping disparate databases of different types in sync
  • Consolidating global organisation information back to a central repository

Datastream is based on a Cloud functions (serverless) architecture. This is significant, as it allows for scale-independent integration.
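Conceptually, a change data capture (CDC) service such as Datastream emits an ordered stream of insert, update and delete events that are replayed against each target. The sketch below illustrates only that replay logic; the event shape and function names are assumptions for illustration, not Datastream's actual API:

```python
# Illustrative sketch of change data capture (CDC) replication:
# ordered change events from a source are applied to a target store
# keyed by primary key. The event shape is an assumption, not
# Datastream's actual API.

def apply_change_event(target: dict, event: dict) -> None:
    """Apply a single CDC event (insert/update/delete) to the target."""
    op, key = event["op"], event["key"]
    if op in ("insert", "update"):
        target[key] = event["row"]   # upsert the new row image
    elif op == "delete":
        target.pop(key, None)        # remove the row if present
    else:
        raise ValueError(f"unknown operation: {op}")

def replicate(events: list, target: dict) -> dict:
    """Replay an ordered event stream against the target."""
    for event in events:
        apply_change_event(target, event)
    return target

events = [
    {"op": "insert", "key": 1, "row": {"name": "Ada"}},
    {"op": "update", "key": 1, "row": {"name": "Ada L."}},
    {"op": "insert", "key": 2, "row": {"name": "Grace"}},
    {"op": "delete", "key": 2},
]
print(replicate(events, {}))  # only key 1, with its latest row image, remains
```

In a serverless deployment, each event (or micro-batch of events) would trigger a stateless function, which is what makes the pattern scale-independent.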

Why it’s Important

Ingesting data at scale into Cloud-based data lakes is a challenge and can be costly. Even simple ingestion, where data requires little in the way of transformation, can be costly when run through a full ETL service. By leveraging serverless functions, Datastream has the potential to significantly lower the cost and improve the performance of bringing large volumes of rapidly changing data into a data lake (or an SQL database being used as a pseudo data lake).

Using serverless to improve the performance and economics of large-scale data ingestion is not a new approach. In 2017, IBRS interviewed the architect of a major global streaming service about how they moved from an integration platform to AWS Kinesis data pipelines and hand-coded serverless functions, to achieve more or less what Google Datastream is now providing.

As organisations migrate to Cloud analytics, the need to rapidly replicate large data sets will grow. Serverless architecture will emerge as an important pattern.

Who’s impacted

  • Analytics architecture leads
  • Integration teams
  • Enterprise architecture teams

What’s Next?

Become familiar with the potential to use serverless/Cloud functions as ‘glue’ within your organisation’s Cloud architecture.

Look for opportunities to leverage serverless when designing your organisation’s next analytics platform.

Related IBRS Advisory

  1. Serverless Programming: Should your software development teams be exploring it?
  2. VENDORiQ: Google introduces Database Migration Service

The Latest

26 May 2021: Talend, a big data, analytics and integration vendor, has received ISO 27001:2013 and 27701:2019 certifications. According to Talend, it is the only big data/integration vendor with this level of certification.

Why it’s Important

IBRS has observed that even the most security-focused organisations often overlook their big data integration and ETL (extract, transform, load) processes when assessing business risk. For example, when Microsoft launched its protected Azure services in Canberra, many of the Azure analytics capabilities, such as its machine learning services, were excluded from the platform.

The data being ingested into data lakes, be they on-premises or in the Cloud, will include private information on clients, staff or citizens, and possibly sensitive financial data. More significantly, taken as an aggregate, this information contains patterns and insights that cyber criminals and state actors may leverage for further attacks. Data that is valuable to an organisation when analysed at scale is just as valuable to criminals.

Who’s impacted

  • Business analytics architecture specialists
  • CISO 
  • Security teams

What’s Next?

Start by reviewing the sensitivity of information moving to the data analytics platform. Such information should be reviewed against the organisation's existing data governance and data classification framework.

Next, review the process of how sensitive information is ingested, manipulated, stored and accessed within the organisation’s analytics platform. Be sure to pay attention to ETL processes: both the technologies and processes involved. 

Finally, review the third-party (vendor) supply chain for all platforms and services involved in data analytics.

Related IBRS Advisory

  1. How does your organisation manage cyber supply chain risk?
  2. IBRSiQ: Risk assessment services and the dark web
  3. VENDORiQ: SolarWinds Incident

The Latest

10 May 2021: ServiceNow is acquiring Lightstep, a specialist vendor for monitoring digital workflows. While ServiceNow already has capabilities for monitoring its low-code applications and workflows, Lightstep will provide deep analytics and performance metrics. 

Why it’s Important

The rise of low-code will necessitate the use of application monitoring tools.  

From a technical perspective, being able to monitor the performance of applications that may themselves be composed of dozens of integrations and span multiple SaaS environments is an important precursor to meeting user expectations. In low-code environments, gone are the days of simply monitoring server and network performance. Vendors such as ThousandEyes and Lightstep have emerged to provide a more comprehensive (and simplified) view of the complex application infrastructure that is emerging. Buying Lightstep is a smart move for ServiceNow as it increasingly moves into enabling low-code departmental and public-facing applications.

Another reason for monitoring low-code is to report tangible business benefits back to the business. While digitising a process can clearly save money, being able to quantify the savings with evidence after a solution has been deployed helps build the case for an expansion of low-code and (in the case of high-value products, such as ServiceNow) justify any increased licensing.

However, an often overlooked benefit of observability is application lifecycle management. Observability allows organisations to identify and consolidate duplicate processes across an organisation. It also allows organisations to identify digital processes that are not being utilised, determine why, and gain clues as to what to do about them.

Who’s impacted

  • Development team leads
  • Business analysts

What’s Next?

Expect low-code vendors to continue investing in workflow monitoring/observability tools, as well as low-code integration capabilities. 

When selecting a low-code application development platform, consider the degree to which being able to monitor workflows and processes will be useful. If using ServiceNow, will the existing capabilities be sufficient, or will investments in products such as Lightstep be needed? If using products such as Nintex, will leveraging their business process modelling tools provide the desired observability?

Related IBRS Advisory

  1. VENDORiQ: ServiceNow to Acquire Vendor Intellibot
  2. VENDORiQ: Creatio - More Low-Code Investments
  3. Aussie vendor radar: Nintex joins the mainstream business process automation vendor landscape

The Latest

19 May 2021: Google has launched Vertex AI, a platform that strives to accelerate the development of machine learning models (aka, algorithms). According to Google and IBRS discussions with early adopters, the platform does indeed dramatically reduce the amount of manual coding needed to develop (aka, train) machine learning models. 

Why it’s Important

The use of machine learning (ML) will have a dramatic impact on decision making support systems and automation over the next decade. For the majority of organisations, ML capabilities will be acquired as part of regular upgrades of enterprise SaaS solutions. Software leaders such as Microsoft, Salesforce, Adobe and even smaller ERP vendors such as Zoho and TechnologyOne, are all embedding ML powered services into their products today, and this will only accelerate.

However, developing proprietary ML models to meet specific needs may well prove critically important for a few organisations. Recent examples include customising direct customer outreach with language tailored to reduce overdue payments, and creating decision support solutions to reduce the occurrence of heatstroke.

IBRS has written extensively on ML development operations (MLOps). However, the future of this discipline will likely be AI-powered recommendation engines that aid data teams in the development of ML models. In a recent example, IBRS monitored a data scientist as they first developed an ML model to predict customer behaviour using traditional techniques, and then used a publicly available tool that leveraged ML itself to build, test and recommend the same model. Excluding data preparation, the hand-coded approach took 3 days to complete, while the assisted approach took several hours. More importantly, the assisted approach tested more models than the data scientist could test manually, and delivered a model that was 3% more accurate than the hand-coded solution.
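The 'assisted' approach described above amounts to an automated search over candidate model configurations, each scored against held-out validation data, with the best configuration recommended back to the data scientist. A minimal sketch of that idea follows; the candidate grid and scoring function are purely illustrative stand-ins for real model training:

```python
# Toy sketch of assisted model selection: enumerate candidate
# hyperparameter configurations, score each one, and keep the best.
# The candidates and the scorer are illustrative assumptions,
# not any vendor's actual tooling.
from itertools import product

def validation_accuracy(depth: int, learning_rate: float) -> float:
    """Stand-in for training a model and measuring validation accuracy.
    A real system would fit the model and score it on held-out data."""
    return round(0.80 + 0.01 * depth - abs(learning_rate - 0.1), 3)

def assisted_search(depths, rates):
    """Test every configuration and return (best_score, best_config)."""
    scored = [
        (validation_accuracy(d, lr), {"depth": d, "learning_rate": lr})
        for d, lr in product(depths, rates)
    ]
    return max(scored, key=lambda pair: pair[0])

best_score, best_config = assisted_search([2, 4, 6], [0.05, 0.1, 0.2])
print(best_score, best_config)
```

The automated search wins on breadth: it exhaustively tests every combination, where a human would only try a handful, which is why the assisted approach in the example above could evaluate more candidate models in hours than the data scientist could in days.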

It should be noted that leveraging ‘low-code’ AI does not negate the need for data scientists or the pressing need to improve data literacy within most organisations. However, it has the potential to dramatically reduce the cost of developing and testing ML models, which lowers the financial risk for organisations experimenting with AI.

Who’s impacted

  • CIO
  • COO
  • CFO
  • Marketing leads
  • Development team leads

What’s Next?

Prepare for low-code AI to become increasingly common and the hype surrounding it to grow significantly in the coming two years. However, the excitement for low-code ML should be tempered with the realisation that many of the use cases for ML will be embedded ‘out of the box’ in ERP, CRM, HCM, workforce management, and asset management SaaS solutions in the near future. Organisations should balance the ‘build it’ versus ‘wait for it’ decision when it comes to ML-powered services.

Related IBRS Advisory

  1. Six Critical Success Factors for Machine Learning Projects
  2. Options for Machine Learning-as-a-Service: The Big Four AIs Battle it Out
  3. How can AI reimagine your business processes?
  4. Low-Code Platform Feature Checklist
  5. VENDORiQ: BMC Adds AI to IT Operations
  6. Artificial intelligence Part 3: Preparing IT organisations for artificial intelligence deployment

IBRSiQ is a database of Client inquiries and is designed to get you talking to our advisors about these topics in the context of your organisation in order to provide tailored advice for your needs.

Read more ...


The Latest

11 May 2021: Jamf is a market leader in Apple iOS device management, with a strong presence in education. It has announced its intention to acquire the zero-trust end-point security vendor Wandera. 

Why it’s Important

Vendors in the device management market have two options for continued growth: add new services and grow horizontally within their market (as VMware has done), or specialise in increasingly niche areas. Jamf has remained firmly entrenched in providing Apple device management, so it is a niche (though important) player in device management. Its acquisition of Wandera, hot on the heels of its purchase of Mondada, will broaden its base and help cement its position against the broader players.

Who’s impacted

  • End user computing/digital workspace teams
  • Security teams

What’s Next?

Globally, the move to working from home saw an uplift in Apple products being connected to enterprise (work) environments. Citing IDC, Jamf reports the penetration of macOS was around 17% in 2019, increasing to 23% during 2020. In addition, 49% of smartphones connecting to work environments globally run iOS, though this is slightly lower in Australia, where Android gained a small amount of market share in a tight market last year.

The challenge with supporting a mixed device ecosystem (Windows, Android, macOS, iOS, Chrome) is now more than just securing the end-point; it extends to the entire information ecosystem. VPNs in particular proved difficult to scale and adapt to a myriad of end points. The need to patch reliably and manage software also becomes significantly more difficult due to differing rates of change, patch cycles and tools needed.

Jamf’s acquisition of Wandera will not eliminate these challenges completely, but will at least simplify the Apple slice of the situation. 

Related IBRS Advisory

  1. Requirements Check-List for Mobile Device Management Solutions
  2. Embracing security evolution with zero trust networking

The Latest

Mid May 2021: Mulesoft detailed its new connectors for SAP during an analyst briefing. The SAP connector is most interesting, since it aims to speed up the development of lightweight, agile, customer-facing online self-service capabilities, while building on the weighty (not exactly agile) capabilities of SAP.

Mulesoft has out-of-the-box integrations (called connectors) for existing data sources including AWS, Google Cloud (GCP), Azure, Snowflake, Salesforce, Splunk, Stripe, Oracle, ServiceNow, Zendesk, Workday, Jira, Trello, SAP, and Microsoft Dynamics. Mulesoft has identified 900 common enterprise applications, though only 28% of these have pre-existing integrations. Mulesoft states that on average 35 different apps are needed for a single customer-facing enterprise digital solution. It is therefore investing heavily in developing additional connectors for enterprise solutions, with at least 50 planned for release in 2021.

Why it’s Important

In late 2019 and early 2020, IBRS conducted a series of 37 detailed interviews, which found that organisations with SaaS ERP platforms supported by low-code workflows and integration delivered at least 3 times (and up to 10 times!) as many customer-facing services annually as those with on-premises solutions and traditionally managed API integrations. A recent series of 67 interviews confirms these findings.

During COVID-19, the big winners of the ‘prepackaged integration’ model (specifically, the model outlined in the 'Trends for 2021-2026: No New Normal and Preparing For the Fourth Wave of ICT'), were business-to-consumer organisations that quickly pivoted from a myriad of shopfront locations to digital stores in a matter of weeks. As Mulesoft has figured out, this is not just an issue of having the ability to integrate, but of having a consolidated core of ERP capabilities providing core data and processes, surrounded by a fabric of low-code application, workflow and integration services.

Who’s impacted

  • COO
  • CIO
  • Head of sales 
  • DevOps leads
  • Enterprise architects

What’s Next?

Organisations should consider how their current environment - including legacy ERP - can evolve to support the fourth wave of enterprise architecture. This will impact upgrade decisions for ERP and other enterprise applications, as well as the selection of low-code application development and integration tools.

Related IBRS Advisory

  1. Trends for 2021-2026: No New Normal and Preparing For the Fourth-wave of ICT
  2. Accelerating Remote Services Deployment

The Latest

May 2021: Talend, a vendor of data and analytics tools, released its Data Health Survey Report, which claims 36% of executives skip data when making decisions and instead go “with their gut”. At the same time, the report claims that 64% of executives “work with data every day”. On the surface, these two figures seem at odds. However, the report goes on to claim that 78% of executives “have challenges in making data-driven decisions”, largely due to data quality issues. The most interesting finding from the report is that “those who produce and those who analyse data live in alternative data realities”.

Why it’s Important

At its core, this report highlights the issue of data literacy. The report was compiled from 529 responses from companies with over USD10 million in sales. A quarter of respondents were from the Asia Pacific region. However, IBRS cautions against drawing Australia-specific inferences, given that different markets have differing levels of data literacy maturity. No details were given for industry, which is also likely to impact data literacy maturity. In fairness, any more detailed analysis by country or industry would not be feasible, given the sample size.

The above concerns aside, the report does highlight the importance of data literacy: investments in big data tools are useless unless executives are knowledgeable and well versed in the key concepts of applying analytical thinking to business decisions. IBRS notes that without data literacy, the most common use of new self-service visualisation tools such as Power BI, Looker, Domo, Tableau, Qlik, Zoho and others is to ‘prove’ executives' gut feelings. In short, too often visualisation tools are used to reinforce ‘current ways of thinking’ rather than to seek areas for improvement.

The report’s statement that “those who produce and those who analyse data live in alternative data realities” frequently underpins IBRS inquiries into why business intelligence and analysis programs fail to produce the expected business benefits.

Who’s impacted

  • Business intelligence/analytics teams
  • Senior line-of-business executives
  • Human resources/training teams

What’s Next?

ICT teams responsible for providing business intelligence and analytics services need to cease focusing solely on the tools and technologies and ‘getting data curated’, and spend time exploring which business decisions would most benefit from the application of analytical thinking. However, ICT teams cannot do this alone. They need to be involved in uplifting data literacy among line-of-business executives and work closely with them to identify the decisions that not only can be addressed with data, but that would make the biggest difference to organisational outcomes. This does not mean that all aspects of a data scientist's role need to be explained to business executives. Rather, training executives in the principles of using data to inquire into issues or disprove current ways of doing things is more important.

Related IBRS Advisory

  1. Staff need data literacy – Here’s how to help them get it
  2. When Does Power BI Deliver Power to the People?
  3. The critical link between data literacy and customer experience

We all hear that data is growing at exponential rates, and so too are the demands and complexity of data management practices. But does this mean you need to attain the highest levels of data management maturity and buy into the most sophisticated tool?

Read more ...

Contract management can be more than just record keeping. When done well, it can enable organisations to explore the best ways to optimise their investments when conditions change.

This capability proved essential for the Australian government when COVID-19 hit, with investments in all manner of services and infrastructure being needed almost overnight.

IBRS interviews ZEN Enterprise, an Australian niche contract management solution vendor, and the contract manager from a large Australian agency to tease out the benefits and challenges of advanced contract management in an age of rapid change.

IBRS interviews Dr Kevin McIsaac, a data scientist who frequently works with board-level executives to identify and prototype powerful data-driven decision support solutions.

Dr McIsaac discusses why so many 'big data' efforts fail, the role ICT plays (or rather, should not play) and the business-first data mindset.

IBRSiQ is a database of Client inquiries and is designed to get you talking to our advisors about these topics in the context of your organisation in order to provide tailored advice for your needs.

Read more ...

The government’s new tax incentives making it easier to depreciate software will help big businesses invest in their own software development but will do “bugger all” for Australian software companies and small and medium businesses, and may even create perverse incentives for large companies to invest in the wrong type of software, industry experts say.

IBRS advisor Joseph Sweeney, who works with numerous large organisations on their technology strategies, said the policy was a positive step in recognising the need to increase development of a local digital services economy, but would do little to raise productivity in the small- and medium-sized business market, which accounts for half of Australia’s workforce. Dr Sweeney is midway through conducting a study into national productivity gains from Cloud services, and said the early data showed that introducing Software-as-a-Service solutions to small and mid-sized organisations was the quickest way to get tangible productivity gains.
 
“By only allowing for offset in assets like CapEx in IT infrastructure and software, this policy has the potential to skew the market back towards on-premises solutions. It will certainly make the ‘total cost of operation’ calculations for moving to the Cloud less attractive,” Dr Sweeney said.
 

Conclusion

Whilst many enterprises have successfully implemented a bring your own device (BYOD) mobile policy, many have put this in the too-hard basket fearing a human resources (HR) backlash.

Revisiting the workplace mobile policy can reduce operating costs associated with device loss, breakages, and unwarranted device allocation. IT service delivery operating costs have been increasing annually as more sophisticated and expensive handsets hit the market. Meanwhile, mobile applications are creating increased security concerns which add to asset management and monitoring costs.

Now is the time to take stock and transform the organisation’s mobility space by creating a shared responsibility with staff. Mobile phone allowances are fast becoming the norm with a multitude of different models now being adopted. Choose the one that delivers cost savings across the board as there are both direct and indirect costs associated with each option.

Read more ...

Conclusion

The deployment of machine learning (ML) solutions across a broad range of industries is rising rapidly. While most organisations will benefit from the adoption of ML solutions, ML’s capabilities come at a cost and many projects risk failure. Deployment of ML solutions needs to be carefully planned to ensure success, to minimise cost and time, but also to deliver tangible results and assist decision-making.

Read more ...

Conclusion

With the growing dependence on ICT for business to perform effectively, many organisations have increased risk associated with the ability of ICT to provide service continuity. ICT downtime means the business is negatively impacted. Many organisations believe the disaster recovery plan (DRP) is a problem that is ICT's to solve. Whilst ICT will lead the planning and do a lot of the heavy lifting when a disaster occurs, it can only be successful with the assistance and collaboration of its business partners. It is the business that sets the priorities for restoration and accepts the risk.

Both business and ICT need to be comfortable that the disaster recovery (DR) plan has been verified to ensure a reasonable expectation that recovery will be successful.

Read more ...

Conclusion

When there is stakeholder agreement that the enterprise resource planning (ERP) solution has failed to meet business needs, organisations must act decisively to turn failure into success. Management must also be proactive and act when the implementation cost has been fully amortised and the solution is deemed past its use-by date, or when vendors providing SaaS ERP solutions have not met their contractual and service delivery obligations. In all situations, it is important to be proactive and tell executive management what is being done about it.

Read more ...

Conclusion

This month, discussions regarding project investment have been prominent, in particular increases attributed to the changing threat environment and the constant emergence of new technologies. The resultant digital initiatives help create new opportunities or mitigate issues that can have a cascading and negative impact throughout operations. A continuous cycle of project investment is beneficial to improve business processes, resolve operational difficulties, and accelerate digital transformation. By delivering more efficient and innovative operations, companies can address new and shifting technology goals and expectations.

Read more ...

Conclusion

The growing maturity of data handling and analytics is driving interest in data catalogues. Over the past two years, most of the major vendors in the data analytics field have either introduced or are rapidly evolving their products to include data cataloguing.

Data catalogues help data users identify and manage their data for processing and analytics. Leading data cataloguing tools leverage machine learning (ML) and other search techniques to expose and link data sets in a manner that improves access and consumability.

However, a data catalogue is only beneficial when the organisation already has a sufficient level of maturity in how it manages data and analytics. Data literacy (the skills and core concepts that support data analytics) must also be established in the organisation’s user base to leverage full benefits from the proposed data catalogue.

Organisations considering data catalogues must have a clear picture of how to use this new architecture, and be realistic in how ready they are to leverage the technology. Furthermore, different organisations have unique and dynamic data attributes, so there is no one-type-fits-all data catalogue in the marketplace.

Read more ...

Conclusion

Low-code solutions lower the entry barrier for application development by enabling non-developers (a.k.a. citizen developers) and developers alike to create applications visually. Low-code platforms allow citizen developers to use WYSIWYG tools to create functional prototypes of applications that digitise specific – often narrowly defined – business processes. This can be highly disruptive without clear policies (see ‘Non-techies Are Taking Over Your Developers’ Jobs – Dealing with the Fallout’). In addition, to avoid the Microsoft Access problem of creating fragmented applications and processes, the ICT group needs to be involved in the selection of a low-code platform that provides not only eforms and workflow capabilities, but also governance features to avoid the chaos that can ensue from unfettered development.

Low-code platforms can be viewed as offering a spectrum of capabilities, as detailed in ‘How to Succeed with Eforms Part 1: Understand the Need'. To provide a smooth transition along the spectrum of development capabilities, organisations may either:

  • introduce a second developer-focused low-code platform, since many citizen-developer-focused solutions have insufficient capabilities for developers.
  • adopt a single, low-code platform that provides both the simplicity needed for citizen developers and the power needed for developers.

Read more ...

The Latest

29 April 2021: Cloud-based analytics platform vendor Snowflake has received ‘PROTECTED’ status under IRAP (Australian Information Security Registered Assessors Program).  

Why it’s Important

As IBRS has previously reported, Cloud-based analytics has reached a point in cost of operation and sophistication where it should be considered the de facto choice for future investments in reporting and analytics. However, IBRS does call out that there are sensitive data sets that need to be governed and secured to a higher standard. Often, such data sets are the reason why organisations decide to keep their analytics on-premises, even if the cost analysis does not stack up against IaaS or SaaS solutions.

The irony here is that IT professionals now accept that, even without PROTECTED status, Cloud infrastructure provides a higher security benchmark than most organisations’ on-premises environments.

However, security must not be overlooked in the analytics space. Data lakes and data warehouses are incredibly valuable targets, especially as they can hold private information that is then contextualised with other data sets.

By demonstrating IRAP certification, Snowflake effectively opens the door to working with Australian Government agencies. But it also signals that hyper-scale Cloud-based analytics platforms can not only offer a bigger bang for your buck, but also greatly improve an organisation's security stance.

Who’s impacted

  • CDO
  • Data architecture teams
  • Business intelligence/analytics teams
  • CISO
  • Public sector tech strategists

What’s Next?

Review the security certifications and stance of any Cloud-based analytics tools in use, including those embedded in core business systems and those that have crept into the organisation via shadow IT (we are looking at you, Microsoft Power BI!). Match these against compliance requirements for the datasets being used and determine if remediation is required.

When planning for an upgraded analytics platform, put security certification front and centre, but also recognise that like any Cloud storage, the most likely security breach will occur from poor configuration or excess permissions.

Related IBRS Advisory

  1. Key lessons from the executive roundtable on data, analytics and business value
  2. VENDORiQ: AWS Accelerates Cloud Analytics with Custom Hardware
  3. IBRSiQ: AIS and Power BI Initiatives
  4. VENDORiQ: Snowflakes New Services Flip The Analytics Model

The Latest

7 May 2021: Analytics vendor Qlik has released its mobile client Qlik Sense Mobile for SaaS. During the announcement, Qlik outlined how the new client enables both online and offline analytics and alerting. The goal is to bring data-driven decision-making to an ‘anywhere, anytime, any device’ model. 

Why it’s Important

While IBRS accepts that mobile decision support solutions will be of huge value to organisations, this needs to be tempered with an understanding that not all decisions should be made in all contexts. There is a very real danger that in the hype surrounding analytics, people will start making decisions in less than ideal contexts. Putting decision support algorithms (i.e. agents), KPI dashboards and simple modelling tools on mobile devices will likely be the next wave of analytics. In short, mobile big data/AI driven solutions that support specific, narrow mobile work tasks will be a very big deal in the near future.

However, creating and diving into data – that is, data exploration – is, or should be, a process rooted in deep, careful, considered scientific thinking. That is a cognitive task not well suited to a mobile device experience. This is not just due to the form factor, but also the working context. Such deep thinking requires focus that a mobile work context does not provide.

As organisations embrace self-service analytics and more staff are engaged in creating and consuming visualisations and reports, data literacy maturity will become an increasingly important consideration. However, data literacy is not just a set of skills to learn: it requires a change in culture and demands staff become familiar with rigorous models of thinking. It also requires honest reflection, both on the organisation’s activities and on one’s own.

While mobile analytics will be a growing area of interest, it will fail without a well-structured program to grow data literacy within the organisation and without granting staff the time and appropriate work spaces to reflect, explore and challenge their assumptions using data.

Who’s impacted

  • CDO
  • HR directors
  • Business intelligence groups

What’s Next?

Organisations should honestly assess staff data literacy maturity at a departmental and whole-of-organisation level. Armed with this information, a program to grow data literacy maturity can be developed. The deployment of data analytics tools, and indeed data sets, should coincide with the evolution of data literacy within the organisation.

Related IBRS Advisory

  1. Staff need data literacy – Here’s how to help them get it
  2. When Does Power BI Deliver Power to the People?
  3. The critical link between data literacy and customer experience

The Latest

28 April 2021:  AWS has introduced AQUA (Advanced Query Accelerator) for Amazon Redshift, a distributed and hardware-accelerated cache that, according to AWS, “delivers up to ten times better query performance than other enterprise Cloud data warehouses”.

Why it’s Important

AWS is not the only vendor that offers distributed analytics computing. Architectures from Domo and Snowflake both make use of elastic, distributed computing resources (often referred to as nodes) to enable analytics over massive data sets. These architectures not only speed up the analytics of data, but also provide massively parallel ingestion of data. 

By introducing AQUA, AWS has added a layer of specialised, massively parallel and scalable cache over its Redshift analytics platform. This new layer comes at a cost, but initial calculations suggest it is a fraction of the cost of deploying and maintaining traditional big data analytics architecture, such as specialised BI hyperconverged appliances and databases.

Given the rapid growth in self-service data analytics (a.k.a. citizen analytics), organisations will face increasing demands to provide analytics services for increasing amounts of both highly curated data and ‘other’ data with varied levels of quality. In addition, organisations need to plan for the rise in unstructured data.

As with email, we have reached a tipping point in the demands of performance, complexity and cost where Cloud-delivered analytics outstrips on-premises approaches in most scenarios. The question now becomes one of Cloud architecture, data governance and, most important of all, how to mature data literacy across your organisation.

Who’s impacted

  • Business intelligence / analytics team leads
  • Enterprise architects
  • Cloud architects

What’s Next?

Organisations should reflect honestly on the way they are currently supporting business intelligence capabilities, and develop scenarios for Cloud-based analytics services. 

This should include a re-evaluation of how adherence to compliance and regulations can be met with Cloud services, how data could be democratised, and the potential impact on the organisation. BAU cost should be considered, not just for the as-is state, but also for potential future states. While savings are likely, they should not be the overriding factor: new capabilities and enabling self-service analytics are just as important.

Organisations should also evaluate data literacy maturity among staff and, if needed (which is likely), put in place a program to improve staff’s use of data.

Related IBRS Advisory

  1. IBRSiQ: AIS and Power BI Initiatives
  2. Workforce transformation: The four operating models of business intelligence
  3. Staff need data literacy – Here’s how to help them get it
  4. The critical link between data literacy and customer experience
  5. VENDORiQ: Fujitsu Buys into Australian Big Data with Versor Acquisition

The Latest

29 April 2021: Microsoft briefed analysts on its expansion of Azure data centres throughout Asia. By the end of 2021, Microsoft will have multiple availability zones in every market where it has a data centre.

The expansion is driven in part by a need for additional Cloud capacity to meet greenfield growth. Each new availability zone is, in effect, an additional data centre of Cloud services capability.

However, the true focus is on providing existing Azure clients with expanded options for deploying services over multiple zones within a country.  

Microsoft expects to see strong growth in organisations re-architecting solutions that had been deployed to the Cloud through a simple ‘lift and shift’ approach to take advantage of the resilience granted by multiple zones. Of course, there is a corresponding uplift in revenue for Microsoft as more clients take up multiple availability zones.

Why it’s Important

While there is an argument that moving workloads to Cloud services, such as Azure, has the potential to improve service levels and availability, the reality is that Cloud data centres do fail. Both AWS and Microsoft Azure have seen outages in their Sydney, Australia data centres. History shows that organisations that had adopted a multiple availability zone architecture tended to suffer minimal, if any, operational impact when a Cloud data centre went down.

It is clear that a multiple availability zone approach is essential for any mission-critical application in the Cloud. However, such applications are often geographically bound by compliance or legislative requirements. By adding availability zones within countries throughout the region, Microsoft is removing a barrier to migrating critical applications to the Cloud, as well as driving more revenue from existing clients.
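
The resilience argument can be shown with simple availability arithmetic. The sketch below assumes 99.9 per cent per-zone availability (an illustrative figure, not a quoted Azure SLA) and assumes zone failures are independent:

```python
# Illustrative availability arithmetic for single- vs multi-zone deployments.
# The 99.9% per-zone figure is an assumption for illustration, not a quoted SLA.

def combined_availability(zone_availability: float, zones: int) -> float:
    """Probability that at least one of `zones` independent zones is up."""
    return 1 - (1 - zone_availability) ** zones

minutes_per_year = 365 * 24 * 60

for zones in (1, 2):
    availability = combined_availability(0.999, zones)
    downtime = (1 - availability) * minutes_per_year
    print(f"{zones} zone(s): {downtime:.2f} minutes of expected downtime per year")
```

With these assumed figures, a second zone cuts expected downtime from roughly 526 minutes a year to under a minute, which is why multi-zone deployment is treated as a foundational design decision rather than an optimisation.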

Who’s impacted

  • Cloud architecture teams
  • Cloud cost / procurement teams

What’s Next?

Multiple availability zone architecture should be considered on the basis of future business resilience in the Cloud. It is not the same thing as ‘a hot disaster recovery site’ and should be viewed as a foundational design consideration for Cloud migrations.

Related IBRS Advisory

  1. VENDORiQ: Amazon Lowers Storage Costs… But at What Cost?
  2. Vendor Lock-in Using Cloud: Golden Handcuffs or Ball and Chain?
  3. Running IT-as-a-Service Part 49: The case for hybrid Cloud migration

IBRSiQ is a database of Client inquiries and is designed to get you talking to our advisors about these topics in the context of your organisation in order to provide tailored advice for your needs.

Read more ...


The Latest

09 April 2021: During its advisor business update, Fujitsu discussed its rationale for acquiring Versor, an Australian data and analytics specialist. Versor provides managed services for data management, reporting and analytics, as well as consulting services, including data science, to help organisations deploy big data solutions.

Why it’s Important

Versor has 70 data and analytics specialists with strong multi-Cloud knowledge. Fujitsu’s interest in acquiring Versor is primarily about tapping Versor’s consulting expertise in Edge Computing, Azure, AWS and Databricks. In addition, Versor’s staff have direct industry experience with some key Australian accounts, including public sector, utilities and retail, which are all target sectors for Fujitsu. Finally, Versor has expanded into Asia and is seeing strong growth.

So from a Fujitsu perspective, the acquisition is a quick way to bolster its credentials in digital transformation and to open doors to new clients. 

This acquisition clearly demonstrates Fujitsu’s strategy to grow in the ANZ market by increasing investment in consulting and special industry verticals.  

Who’s impacted

  • CIO
  • Development team leads
  • Business analysts

What’s Next?

Given its experienced staff, Versor is expected to lead many of Fujitsu’s digital transformation engagements with prospects and clients. Fujitsu’s well-established ‘innovation design engagements’ are used to explore opportunities with clients and leverage concepts of user-centred design. Adding specialist big data skills to this mix makes for an attractive combination of pre-sales consulting.

Related IBRS Advisory

  1. The new CDO agenda
  2. Workforce transformation: The four operating models of business intelligence
  3. VENDORiQ: Defence Department Targets Fujitsu for Overhaul

The Latest

16 April 2021: BMC has released a new edition of its Helix Platform, which leverages machine learning algorithms to support AI-driven IT operations (AIOps) and AI-driven service management (AISM) capabilities. The introduction of these algorithmic features enables IT service and operations teams to predict and resolve issues more effectively.

Why it’s Important

The use of algorithms to both categorise and predict events in IT operations is a growing trend. Such AI capabilities will be increasingly embedded in existing IT operations suites. As vendors enter a new ‘AI-powered’ competitive phase, these new AI capabilities will be included as part of regular upgrades and maintenance, rather than as add-on components.

Getting value from the new AI capabilities requires planning for very human responses.

For example, the predictive capabilities of algorithms, especially when using multi-organisational data, can provide ops teams with alerts well in advance of problems becoming apparent. But unless ops teams are resourced and given budget to respond to such ‘predictive maintenance’ issues, these predictive capabilities will be relegated to little more than an alarm clock with a snooze button.
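
The kind of trend-based prediction involved can be sketched in a few lines. This is a toy linear extrapolation over a disk-usage series, not BMC’s actual algorithms, and the thresholds and figures are hypothetical:

```python
from typing import List, Optional

def days_until_full(usage_pct: List[float], threshold: float = 90.0) -> Optional[float]:
    """Fit a straight line to a daily disk-usage series (least squares) and
    estimate how many days remain before it crosses `threshold`.
    Returns None if usage is flat or shrinking."""
    n = len(usage_pct)
    mean_x = (n - 1) / 2            # mean of day indices 0..n-1
    mean_y = sum(usage_pct) / n
    denom = sum((x - mean_x) ** 2 for x in range(n))
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(range(n), usage_pct)) / denom
    if slope <= 0:
        return None
    intercept = mean_y - slope * mean_x
    # Day on which the fitted line reaches the threshold, relative to today.
    return (threshold - intercept) / slope - (n - 1)

# A disk climbing ~2% a day has roughly 5 days of headroom before 90%.
print(days_until_full([70, 72, 74, 76, 78, 80]))
```

The organisational point above still holds: an alert five days out is only useful if someone is resourced to act on it.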

Likewise, the ability to correctly leverage, and continually train, resolution support algorithms will demand both training of, and input from, the support team. The algorithms are only as good as the information and the contexts they can draw on. Support team members play an intimate role in ensuring the right information – and, most importantly, the right contexts – are selected for training the algorithms. This is especially pertinent as virtual agents (chatbots) are introduced for self-help capabilities.

Who’s impacted

  • CIO
  • IT operations staff
  • Support desk

What’s Next?

Begin to track the new AI capabilities available in IT operations support platforms, not just for the platforms used by your organisation, but in the competitive landscape. While there is no critical priority to adopt AI-powered IT operations or service management capabilities (just yet), it is important to understand what is coming and what may already be available as part of your current licensing agreements.

Assemble a working group to explore how AI capabilities could positively impact IT operations and service management, and the changes in process and roles that would be required to leverage them.

In short, start planning for AI-powered operations and a service management future.

Related IBRS Advisory

  1. Running IT-as-a-Service Part 55: IBRS Infrastructure Maturity Model
  2. Sustaining efficiency gains demands architecture risks mitigation Part 2
  3. Artificial intelligence Part 3: Preparing IT organisations for artificial intelligence deployment
  4. IBRSiQ: Approach to identifying an ITSM SaaS Provider

Conclusion

Even well-articulated and documented cyber incident response plans can go astray when a cyber incident actually happens. Experience shows the best plans can fail spectacularly. In this special report, IBRS interviews two Australian experts from startups in the field of cyber incident response, and uncovers the better practices for keeping your incident response plans real.

Read more ...

The Latest

18 March 2021: Veeam released a report which suggests that 58% of backups fail. After validating these claims, and from the direct experiences of our advisors who have been CIOs or infrastructure managers in previous years, IBRS accepts there is merit in Veeam’s claim.

The real question is, what to do about it, other than buying into Veeam’s sales pitch that its backups give greater reliability?

Why it’s Important

Sophisticated ransomware attacks are on the rise. So much so that IBRS issued a special alert on the increasing risks in late March 2021. Such ransomware attacks specifically target backup repositories. This means creating disconnected, or highly-protected backups is more important than ever. The only guarantee for recovery from ransomware is a combination of well-structured backups, coupled with a well-rehearsed cyber incident response plan. 

However, protecting the backups is only useful if those backups can be recovered. IBRS estimates around 10-12% of backups fail to fully recover, which measures a slightly different, but more important, situation than the one touted by Veeam. Even so, this failure rate is still far too high, given the heightened risk from financially-motivated ransomware attacks.

Who’s impacted

  • CIO
  • Risk Officers reporting to the board
  • CISO
  • Infrastructure leads

What’s Next?

IBRS has identified that better practice for backup must include regular, unannounced practice runs to recover critical systems from backups. These tests should simulate, as closely as possible, the events that could lead to a recovery situation: critical system failure, malicious insiders and ransomware. Just as organisations need to rehearse cyber incident responses, they also need to thoroughly test their recovery regime.

Related IBRS Advisory

  1. Maintaining disaster recovery plans
  2. Ransomware: Don’t just defend, plan to recover
  3. Running IT-as-a-Service Part 59: Recovery from ransomware attacks
  4. Ransomware, to pay or not to pay?
  5. ICT disaster recovery plan challenges
  6. Testing your business continuity plan

The Latest

28 March 2021: MaxContact, vendor of a Cloud-based call-centre solution, announced it is supporting integration of Teams clients. Similar vendors of call centre solutions have announced or are planning similar integration with Teams and/or Zoom. In effect, the most common video communications clients are becoming alternatives to voice calls, complete with all the management and metrics required by call centres. 

Why it’s Important

The pandemic has forced working from home, which has in turn positioned video calling as a common way to communicate. There is an expectation that video calling, be it on mobile devices, desktop computers or built into televisions, will become increasingly normalised in the coming decade. Clearly call centres will need to cater for clients who wish to place calls into the call centre using video calls.

But there is a difference between voice calls and video calls that few people are considering (beyond the obvious media). That is, the timing of video calls is generally negotiated via another medium: instant messaging, calendaring, or meeting invites. In contrast, the timing of voice calls is far less mediated, especially when engaging with call centres for service, support or sales activities.

For reactive support and services, video calls between a call centre and a client will most likely be a negotiated engagement, instigated via an email or a web-based chat agent. Cold-calling and outbound video calls are unlikely to be effective.

The above has significant implications for client service and support processes and call centre operations.

Who’s impacted

  • CIO
  • Development team leads
  • Business analysts

What’s Next?

The adoption of video calls by the masses is here to stay. Video calling is not a fad, but it will take time to mature. 

Having video support and services available as part of the call centre mix is likely to be an advantage, but only if its use makes sense in the context of the tasks and clients involved.  

Organisations should begin brainstorming the potential uses of video calls for service. However, adding video calling to the call centre is less of a priority than consolidating a multi-channel strategy and, over time, an omnichannel strategy.

Related IBRS Advisory

  1. Better Practice Special Report: Microsoft Teams Governance
  2. Evolve your multichannels before you try to omnichannel
  3. VENDORiQ: CommsChoice becomes Australia's first vendor of Contact Centre for Microsoft Teams Direct Routing

The Latest

28 March 2021: AWS has a history of periodically lowering the costs of storage. But even with this typical behaviour, its recent announcement of an elastic storage option that shaves 47% off current service prices is impressive. Or is it?

The first thing to realise is that the touted savings are not an apples-for-apples comparison. AWS’s new storage offering is cheaper because it resides in a single zone, rather than being replicated across multiple zones. In short, the storage has a higher risk of being unavailable, or even being lost through outright failure.

Why it’s Important

AWS has not hidden this difference. It makes it clear that the lower cost comes from less redundancy. Yet this architectural nuance may be overlooked when looking at ways to optimise Cloud costs.

One of the major benefits of moving to Platform-as-a-Service offerings is the increased resilience and availability of the architecture. Cloud vendors, including AWS, do suffer periodic failures within zones. Examples include the AWS Sydney outage in early 2020 and the Sydney outage in 2016 which impacted banking and e-commerce services.  

But it is important to note that even though some of Australia’s top companies were effectively taken offline by the 2016 outage, others sailed on as if little had happened. The difference lies in how these companies had leveraged the redundancies available within Cloud platforms. Those that saw little impact to operations when the AWS Sydney zone went down had built redundancy into all aspects of their solutions.

Who’s impacted

  • Cloud architects
  • Cloud cost/contract specialists
  • Applications architects
  • Procurement leads

What’s Next?

The lesson from previous Australian AWS outages is that organisations need to carefully match storage and architecture choices to the downtime risk each application can tolerate. This new announcement shows that significant savings (in this case 47%) are possible by accepting a greater risk profile. However, while this may be attractive from a pure cost optimisation/procurement perspective, it needs to be tempered with an analysis of the worst-case scenario, such as multiple banks being unable to process credit card payments in supermarkets for an extended period.
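
One way to temper the procurement view is a simple expected-cost comparison. All figures below are hypothetical placeholders – substitute your own storage spend, outage probability and business impact:

```python
def expected_annual_cost(storage_cost: float, outage_probability: float,
                         outage_impact: float) -> float:
    """Annual storage spend plus the probability-weighted cost of an outage."""
    return storage_cost + outage_probability * outage_impact

# Hypothetical: $100k/yr multi-zone storage vs a 47% cheaper single-zone tier,
# with a higher assumed chance of a damaging outage in the single-zone case.
multi_zone = expected_annual_cost(100_000, 0.0001, 10_000_000)
single_zone = expected_annual_cost(53_000, 0.01, 10_000_000)

print(f"Multi-zone expected annual cost:  ${multi_zone:,.0f}")
print(f"Single-zone expected annual cost: ${single_zone:,.0f}")
```

Under these assumed numbers the ‘cheaper’ tier costs more in expectation; with a small enough business impact the conclusion flips, which is exactly the per-application analysis suggested above.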

Related IBRS Advisory

  1. VENDORiQ: AWS second data centre in Australia
  2. Post COVID-19: Four new BCP considerations
  3. Running IT-as-a-Service Part 55: IBRS Infrastructure Maturity Model

Conclusion

At 21.7 per cent, staff attrition within the Australian Information Technology (IT) sector is unsustainably high. Staff recognition can be defined as the action or process of recognising employees for work completed, through words and gratitude1. Over the past five years, organisations globally have increased their focus on, and investment in, employee reward and recognition.

However, despite this increased focus, research shows that recognition is not occurring as often as it should, as only 61 per cent of employees feel appreciated in the workplace1. Research also shows that even when recognition is provided, it is not executed well or enacted correctly a third of the time.

Organisational development and human resource studies demonstrate that reward and recognition programs commonly fail to resonate with employees if they are not authentic and sincere2, are only provided in a single context, or are based on award criteria that are overly complex or unattainable3.

This paper covers how leaders and organisations can recognise and subsequently avoid these three common pitfalls, to maximise the investment in employee reward and recognition programs.

Read more ...

Conclusion

Traditionally, vendor lock-in was associated with deliberate vendor-driven outcomes, where software and hardware forced the client to align their business processes to those offered by a specific software or ICT platform. Vendor lock-in often limited the flexibility of organisations to meet business needs as well as increasing costs. As a result, information and communication technology (ICT) was often seen as a limiting factor for business success when agility was needed. Historically, vendor lock-in was therefore seen as a negative. Poor timing, bad decisions and clumsy procurement practices may still see organisations fall into unwanted vendor lock-in situations. But is vendor lock-in always a negative?

Read more ...

Subscribe

Want to get the latest papers from all our advisors? Subscribe, and we'll send you the information you need.
