Why It’s Important
Generative AI models, including text-to-video tools, are trained on massive datasets of images and videos. The sources of these materials remain unclear, which raises concerns about whether creators consented to the use of their work. OpenAI is working with policymakers, educators, and artists to gather feedback. IBRS believes this still introduces a risk of copyright infringement, which could lead to legal challenges and damage an enterprise’s reputation, given the undisclosed sources of imagery and video used to train Sora. In addition, the level of complexity and user control over generated videos may be limited.
Vendors are expected to compete to bring their tools to the public first and capture a larger share of the enterprise market. Last year, Meta released a preview of its video creation model, Emu Video, which can produce video content from simple text or still-image inputs. Google also announced its AI video generation model, Lumiere, which it claims will offer comparable results.
Who’s Impacted
- CEOs
- AI developers
- IT teams
What’s Next?
- Before investing in generative AI tools, enterprise leaders should conduct thorough due diligence on the training data and the models themselves. This includes inquiring about the datasets’ sources, copyright permissions, and measures taken to mitigate bias.
- Establish clear internal policies regarding the responsible use of generative AI tools. Ensure that employees understand the ethical considerations and potential risks associated with these technologies.
- Implement ongoing monitoring systems to identify potential biases or harmful outputs from generative AI tools. Stay abreast of legal developments surrounding copyright and fair use laws as they apply to AI.
Related IBRS Advisory
1. A Pragmatic Approach to Generative AI
2. Generative AI for Enterprise Use – An Overview of the State-of-the-Art