The Latest

28 April 2021: AWS has introduced AQUA (Advanced Query Accelerator) for Amazon Redshift, a distributed and hardware-accelerated cache that, according to AWS, “delivers up to ten times better query performance than other enterprise Cloud data warehouses”.

Why it’s Important

AWS is not the only vendor that offers distributed analytics computing. Architectures from Domo and Snowflake both make use of elastic, distributed computing resources (often referred to as nodes) to enable analytics over massive data sets. These architectures not only speed up the analytics of data, but also provide massively parallel ingestion of data. 

By introducing AQUA, AWS has added a layer of specialised, massively parallel and scalable cache over its Redshift analytics platform. This new layer comes at a cost, but initial calculations suggest it is a fraction of the cost of deploying and maintaining traditional big data analytics architecture, such as specialised BI hyperconverged appliances and databases.

Given the rapid growth in self-service data analytics (aka citizen analytics), organisations will face increasing demands to provide analytics services over growing volumes of both highly curated data and ‘other’ data of varied quality. In addition, organisations need to plan for the rise in unstructured data.

As with email, we have reached a tipping point in the demands of performance, complexity and cost where Cloud-delivered analytics outstrips on-premises solutions in most scenarios. The question now becomes one of Cloud architecture, data governance and, most important of all, how to mature data literacy across your organisation.

Who’s impacted

  • Business intelligence / analytics team leads
  • Enterprise architects
  • Cloud architects

What’s Next?

Organisations should reflect honestly on the way they are currently supporting business intelligence capabilities, and develop scenarios for Cloud-based analytics services. 

This should include a re-evaluation of how compliance and regulatory obligations can be met with Cloud services, how data could be democratised, and the potential impact on the organisation. BAU cost should be considered, not just for the as-is state, but also for potential future states. While savings are likely, they should not be the overriding factor: new capabilities and enabling self-service analytics are just as important.

Organisations should also evaluate data literacy maturity among staff and, if needed (which is likely), put in place a program to improve staff’s use of data.

Related IBRS Advisory

  1. IBRSiQ: AIS and Power BI Initiatives
  2. Workforce transformation: The four operating models of business intelligence
  3. Staff need data literacy – Here’s how to help them get it
  4. The critical link between data literacy and customer experience
  5. VENDORiQ: Fujitsu Buys into Australian Big Data with Versor Acquisition

IBRSiQ is a database of Client inquiries, designed to get you talking to our advisors about these topics in the context of your organisation, so we can provide advice tailored to your needs.

The Latest

29 April 2021: Microsoft briefed analysts on its expansion of Azure data centres throughout Asia. By the end of 2021, Microsoft will have multiple availability zones in every market where it has a data centre.

The expansion is driven in part by a need for additional Cloud capacity to meet greenfield growth. Each new availability zone is, in effect, an additional data centre of Cloud services capability.

However, the true focus is on providing existing Azure clients with expanded options for deploying services over multiple zones within a country.  

Microsoft expects to see strong growth in organisations re-architecting solutions that had been deployed to the Cloud through a simple ‘lift and shift’ approach to take advantage of the resilience granted by multiple zones. Of course, there is a corresponding uplift in revenue for Microsoft as more clients take up multiple availability zones.

Why it’s Important

While there is an argument that moving workloads to Cloud services, such as Azure, has the potential to improve service levels and availability, the reality is that Cloud data centres do fail. Both AWS and Microsoft Azure have seen outages in their Sydney, Australia data centres. History shows that organisations that had adopted a multiple availability zone architecture tended to suffer minimal, if any, operational impact when a Cloud data centre went down.

It is clear that a multiple availability zone approach is essential for any mission critical application in the Cloud. However, such applications are often geographically bound by compliance or legislative requirements. By adding additional availability zones within countries throughout the region, Microsoft is removing a barrier for migrating critical applications to the Cloud, as well as driving more revenue from existing clients.

Who’s impacted

  • Cloud architecture teams
  • Cloud cost / procurement teams

What’s Next?

Multiple availability zone architecture should be considered on the basis of future business resilience in the Cloud. It is not the same thing as ‘a hot disaster recovery site’ and should be viewed as a foundational design consideration for Cloud migrations.

Related IBRS Advisory

  1. VENDORiQ: Amazon Lowers Storage Costs… But at What Cost?
  2. Vendor Lock-in Using Cloud: Golden Handcuffs or Ball and Chain?
  3. Running IT-as-a-Service Part 49: The case for hybrid Cloud migration

The Latest

09 April 2021: During its advisor business update, Fujitsu discussed its rationale for acquiring Versor, an Australian data and analytics specialist. Versor provides managed services for data management, reporting and analytics, as well as consulting services, including data science, to help organisations deploy big data solutions.

Why it’s Important

Versor has 70 data and analytics specialists with strong multi-Cloud knowledge. Fujitsu’s interest in acquiring Versor lies primarily in tapping Versor’s consulting expertise in Edge Computing, Azure, AWS and Databricks. In addition, Versor’s staff have direct industry experience with some key Australian accounts, including public sector, utilities and retail, which are all target sectors for Fujitsu. Finally, Versor has expanded into Asia and is seeing strong growth.

From Fujitsu’s perspective, the acquisition is a quick way to bolster its credentials in digital transformation and to open doors to new clients.

This acquisition clearly demonstrates Fujitsu’s strategy to grow in the ANZ market by increasing investment in consulting and special industry verticals.  

Who’s impacted

  • CIO
  • Development team leads
  • Business analysts

What’s Next?

Given its experienced staff, Versor is expected to lead many of Fujitsu’s digital transformation engagements with prospects and clients. Fujitsu’s well-established ‘innovation design engagements’ are used to explore opportunities with clients and leverage concepts of user-centred design. Adding specialist big data skills to this mix makes for an attractive pre-sales consulting combination.

Related IBRS Advisory

  1. The new CDO agenda
  2. Workforce transformation: The four operating models of business intelligence
  3. VENDORiQ: Defence Department Targets Fujitsu for Overhaul

Conclusion

The decision to integrate machine learning (ML) into systems and operations is not one that is made lightly. Aside from the costs of acquiring the technology tools, there are added considerations such as staff training and the expertise required to improve ML operations (MLOps) capabilities.

An understanding of the ML cycle before deployment is key. Once requirements and vision are defined, the appropriate tools are acquired. ML specialists then perform analysis, feature engineering, model design, training, testing and deployment. This is also known as the dev loop. At the implementation stage, the ML model is deployed and the application is subsequently refined and enhanced. The next stage is monitoring and improvement, where the organisation refines the model and evaluates the ROI of its data science efforts. In this stage, monitoring for data drift triggers retraining of the model.
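The cycle described above can be sketched in a few lines: a dev loop (feature engineering, then training) followed by a monitor-and-improve loop where detected data drift triggers retraining. This is a minimal, illustrative sketch only; the function names, the toy “model”, and the drift threshold are all assumptions for demonstration, not a real MLOps API.

```python
# Toy sketch of the ML cycle: dev loop, deployment, then drift-triggered
# retraining. All names and logic here are illustrative assumptions.

def engineer_features(raw_rows):
    # Toy feature engineering: scale each value into [0, 1].
    peak = max(raw_rows)
    return [value / peak for value in raw_rows]

def train_model(features):
    # Toy "model": remember the mean of the training features.
    return {"mean": sum(features) / len(features)}

def drift_detected(model, new_features, threshold=0.2):
    # Toy drift check: has the incoming feature mean shifted past the threshold?
    new_mean = sum(new_features) / len(new_features)
    return abs(new_mean - model["mean"]) > threshold

def ml_cycle(training_rows, production_batches):
    # Dev loop: engineer features, train, then "deploy" the model.
    model = train_model(engineer_features(training_rows))
    retrains = 0
    # Monitor-and-improve loop: retrain whenever drift is observed.
    for batch in production_batches:
        features = engineer_features(batch)
        if drift_detected(model, features):
            model = train_model(features)
            retrains += 1
    return retrains

# A stable batch followed by a drifted batch: one retrain is triggered.
print(ml_cycle([10, 20, 30], [[10, 22, 28], [90, 95, 100]]))  # prints 1
```

The point of the sketch is the shape of the loop, not the model: in practice each stage (feature engineering, training, drift monitoring) is a substantial pipeline in its own right, which is precisely why MLOps tooling and skills are needed.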

Conclusion: Machine learning operations (MLOps) adapts principles, practices and measures from developer operations (DevOps), but significantly transforms some aspects to address the different skill sets and quality control challenges and deployment nuances of machine learning (ML) and data engineering.

Implementing MLOps has several benefits, from easing collaboration among project team members to reducing bias in the resulting artificial intelligence (AI) models.

Conclusion: Two key supporting artefacts in the creation of pragmatic incident response plans are the incident response action flow chart and the severity assessment table. Take time to develop, verify and test these artefacts, and they will be greatly appreciated in aiding an orderly and efficient invocation of the DRP/BCP and restoration activities.