VENDORiQ: Google AI in Creative Workflows: An Evolution Beyond Standalone Tools?

Google’s new AI tool, nano-banana, is a major step towards integrating generative AI directly into creative workflows, automating repetitive tasks and redefining productivity.

The Latest

Google recently announced the preview release of a new version of its Gemini image generation model, Gemini 2.5 Flash Image, referred to as ‘nano-banana’. The model, offered through the Gemini API, Google AI Studio and Google Workspace, focuses on native image generation and editing capabilities. Key functionalities include maintaining character consistency across multiple generations, enabling precise, prompt-based image edits such as inpainting and outpainting, composing and merging image elements, and leveraging multimodal reasoning to interpret visual instructions.
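
As a purely illustrative sketch, a prompt-based edit of this kind might be issued through the Gemini API’s Python SDK roughly as follows. The model identifier, file names and prompt are assumptions for demonstration, not a confirmed recipe, and should be checked against Google’s current documentation.

    # Minimal sketch: prompt-based image edit via the google-genai Python SDK.
    # Model ID, file names and prompt are illustrative assumptions.
    from google import genai
    from google.genai import types

    client = genai.Client(api_key="YOUR_API_KEY")  # placeholder credential

    with open("product_shot.png", "rb") as f:      # hypothetical source image
        image_bytes = f.read()

    response = client.models.generate_content(
        model="gemini-2.5-flash-image-preview",    # assumed preview model ID for 'nano-banana'
        contents=[
            types.Part.from_bytes(data=image_bytes, mime_type="image/png"),
            "Remove the background clutter and extend the scene to a 16:9 canvas.",
        ],
    )

    # Responses can interleave text and image parts; save any returned image data.
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:
            with open("edited_shot.png", "wb") as out:
                out.write(part.inline_data.data)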

The model is available via SDKs for Python, JavaScript, and Go, alongside direct REST API calls. Google states that batching image requests can reduce API costs by up to 50 per cent and increase throughput.
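
To illustrate how the SDKs expose these capabilities, the sketch below uses the Python SDK’s multi-turn chat interface to reuse conversational context when generating a recurring character. Treating multi-turn context as the mechanism behind character consistency is an assumption here, and the model ID and prompts are illustrative only.

    # Minimal sketch: reusing conversational context for a recurring character
    # via the google-genai Python SDK's chat interface. Model ID and prompts
    # are illustrative assumptions.
    from google import genai

    client = genai.Client(api_key="YOUR_API_KEY")  # placeholder credential
    chat = client.chats.create(model="gemini-2.5-flash-image-preview")

    def save_images(response, prefix):
        # Write any inline image parts returned by the model to disk.
        for i, part in enumerate(response.candidates[0].content.parts):
            if part.inline_data is not None:
                with open(f"{prefix}_{i}.png", "wb") as out:
                    out.write(part.inline_data.data)

    first = chat.send_message(
        "Create a mascot: a cheerful robot barista wearing a green apron."
    )
    save_images(first, "mascot_base")

    # Later turns inherit the conversation history, so the same character can be
    # re-posed or re-staged without re-describing it from scratch.
    second = chat.send_message(
        "Show the same robot barista handing a coffee to a customer."
    )
    save_images(second, "mascot_scene")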

Why it Matters

The introduction of nano-banana underscores an industry trend: generative artificial intelligence (AI) services are increasingly moving from experimental, standalone applications to integrated components within professional creative workflows. This evolution reflects a growing understanding that AI’s primary utility in design and video creation is not to supersede human creativity but to automate derivative tasks.

For instance, the model’s ability to maintain character consistency across image sequences, or to perform precise, prompt-driven edits like object removal or addition, addresses common bottlenecks in creative production. These are tasks that traditionally consume significant time and effort, but are often mechanical once the core creative brief is established. By offloading such functions to AI, design and video professionals can allocate more time to conceptualisation, strategic development, and the nuanced refinement of creative output.

While platforms like Canva have explicitly integrated Google’s AI image generation capabilities into their toolsets, enhancing accessibility for a broad user base, the landscape for other major creative suites, such as Adobe Creative Cloud, is more nuanced. 

Adobe has largely concentrated on developing and integrating its proprietary Firefly generative AI models directly into its core applications. However, the availability of Gemini 2.5 Flash Image via an API opens the door to integration by third-party developers, or by enterprises building custom tools that interface with existing creative software. This API-first approach signals a future where creative professionals might access diverse AI models, whether native to their primary tools or accessible through a growing ecosystem of interconnected services.

This integration trajectory indicates a shift in what constitutes ‘productivity’ within the commercial creative sector. Efficiency gains from AI-driven automation may lead to higher output volumes and faster iteration cycles. Consequently, the commercial definition of creativity will need to adapt: skill sets will be refined to emphasise creative judgement alongside proficiency in AI-assisted content creation and effective curation, in service of a specific artistic, commercial and communicative vision.

This does not diminish human creativity but rather reorients it towards higher-level problem solving and ideation, potentially prompting a re-evaluation of labour specialisations within creative teams.

Who’s Impacted?

  • Creative Directors/Heads of Design/Marketing Leads: Need to understand how AI tools can enhance team efficiency, redefine creative project scopes, and influence the overall quality and speed of content production.
  • Application Development Teams/AI Teams: Directly involved in leveraging APIs and SDKs to build or integrate AI-powered features into custom applications or existing platforms.
  • Project Leads/Product Managers: Must identify specific workflows where AI integration can provide tangible benefits, manage the transition, and articulate new requirements for creative output.
  • Digital Transformation Leads: Overseeing the strategic application of AI across business units to drive innovation and operational efficiency.

Next Steps

  • Pilot Integration Initiatives: Identify specific creative workflows with repetitive visual tasks and initiate small-scale pilot projects to evaluate the quantifiable benefits of integrating AI image generation tools.
  • Invest in Skill Development: Provide training for creative and technical staff on prompt engineering, ethical AI usage, and the strategic application of generative AI tools within commercial contexts.
  • Evaluate API Roadmaps: Monitor the development and integration roadmaps of leading creative software vendors and AI model providers to understand future interoperability and native feature enhancements.
  • Establish Governance Frameworks: Develop internal guidelines for the responsible and compliant use of AI-generated content, addressing intellectual property, quality control, and potential biases.
