Latest Advisory

VENDORiQ: Google’s Wiz Acquisition – What It Means for Multi-Cloud Enterprises
Google’s Wiz acquisition scales multi-cloud security via AI integration, yet introduces vendor lock-in and concentration risks that require rigorous governance.

Measure your Information Security Culture to Supercharge Organisational Cyber Resilience – Webinar and Presentation Kit
Enhancing cyber resilience hinges on evolving cyber security awareness programmes to actively measure and influence human behaviour and organisational culture, fostering shared responsibility beyond the IT department through visible metrics and incentivisation.

AI as the Engine of the Innovation Economy: Part 1 – Strategy
Businesses are shifting from a ‘knowledge’ to an ‘innovation’ economy, with AI driving new ideas, customer engagement, and operational efficiency.

Strategic Considerations for AI Video Tool Adoption – Lessons from the Dawn of Desktop Publishing
AI video tools, like early desktop publishing, offer huge potential, but smart adoption needs a clear strategy, skilled people, and pilot programmes to ensure real business value.

Why Your AI PoC Won’t Get into Live Production… and What to Do About It
Most artificial intelligence proof-of-concepts fail in production due to underestimated costs, dynamic data issues, governance, and integration challenges. Tackle these early for success.

The ASD’s Foundations for Modern Defensible Architecture – A Strategic Lever
The Australian Signals Directorate’s (ASD) Foundations for Modern Defensible Architecture[1] is a strategic framework that bridges tactical security controls with comprehensive guidance, giving Australian organisations the architectural principles needed to build inherently resilient systems in an era where cyber breaches are inevitable.

AI Explainability: Available Techniques
Explainable AI offers diverse techniques like LIME, SHAP, and counterfactuals, crucial for building trust, meeting compliance, and empowering staff to collaborate effectively with AI systems.