VENDORiQ: VMware and Nvidia Collaborate to Produce Software for On-Premise Generative AI Development

VMware and Nvidia are partnering on software tools, slated for release in 2025, that will give businesses an alternative to public Cloud-based generative AI solutions.

The Latest

VMware is collaborating with Nvidia to create software tools geared towards businesses interested in developing generative AI within their own data centres instead of relying on public services. The new tools are specifically designed to manage Nvidia’s chipsets, which are tailored for computationally intensive AI tasks, including training. The two vendors state the tools are slated for release in 2025 and will offer an alternative to public Cloud-based solutions such as those from Microsoft, OpenAI, AWS and Google.

Why It’s Important

Developing generative AI within an enterprise’s own data centre is crucial for enterprises handling information that requires high levels of privacy and security. For example, healthcare organisations that leverage generative AI for personalised treatment plans draw on both sensitive patient data and large amounts of less private, but highly curated, medical information and research. Given the highly confidential nature of patient data, it must remain secure, comply with relevant national legislation, and stay under the enterprise’s control.

Running the AI within a private data centre (or within a private, secured Cloud infrastructure) provides an added layer of security for organisations seeking to develop custom generative AI solutions.

Who’s Impacted?

  • AI developers
  • Architecture groups

What’s Next?

  • Before implementing generative AI in a private data centre, conduct a thorough risk and compliance assessment. This should cover aspects such as data sensitivity, regulatory requirements (e.g., the Privacy Act 1988 for Australian healthcare data, the GDPR for European clients), and internal compliance protocols. This will help identify the appropriate security measures and establish whether the in-house approach is indeed the most suitable option for the particular application.
  • Note that training AI models is computationally intensive, and new hardware to support AI training and inference will likely be required. Additionally, staff capable of managing and optimising this hardware for AI applications should be either trained or hired. This ensures that the technology is utilised to its fullest potential while also maintaining security and compliance norms.
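To gauge the scale of hardware investment involved, the sketch below estimates the GPU memory needed to fully fine-tune a model of a given size. The formula (weights plus gradients plus Adam optimiser states) is a common back-of-envelope heuristic, not guidance from VMware or Nvidia; the function name and defaults are illustrative assumptions.

```python
def training_memory_gb(num_params_billion: float,
                       bytes_per_param: int = 2) -> float:
    """Back-of-envelope GPU memory estimate for full fine-tuning.

    Counts model weights, gradients, and Adam optimiser states (two
    moments, typically kept in 32-bit), ignoring activations and
    framework overhead, so real requirements are higher still.
    """
    params = num_params_billion * 1e9
    weights = params * bytes_per_param    # model weights (e.g. fp16)
    gradients = params * bytes_per_param  # one gradient per weight
    optimiser = params * 2 * 4            # two fp32 Adam moments
    return (weights + gradients + optimiser) / 1e9


# A 7-billion-parameter model in 16-bit precision needs roughly
# 7e9 * (2 + 2 + 8) bytes ≈ 84 GB — more than a single 80 GB GPU.
print(f"{training_memory_gb(7):.0f} GB")
```

Even this optimistic estimate shows why multi-GPU servers, and staff who can operate them, are usually a prerequisite for in-house generative AI development.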

Related IBRS Advisory

1. The Top Six Risks of Generative AI

2. Five Things To Consider When Evaluating AI… And Five Dangerous AI Misconceptions

3. VENDORiQ: Meta AI Will Release Open-Source AI Model
