OpenAI has unveiled a voice-cloning tool, dubbed "Voice Engine," designed to replicate a person's voice from a 15-second audio sample. However, the company intends to exercise stringent control over its deployment until safeguards are in place to prevent misuse and deception.
The announcement, detailed in an OpenAI blog post, comes amid growing concerns about the potential misuse of AI-powered applications, particularly around critical events such as elections. Voice-cloning tools alarm disinformation researchers because they are cheap, widely accessible, and difficult to detect.
In light of these concerns, OpenAI is collaborating with various stakeholders across government, media, entertainment, education, and civil society to incorporate feedback and ensure responsible deployment of the technology.
The company's cautious approach follows incidents such as a political consultant admitting to orchestrating a robocall that impersonated US President Joe Biden during the New Hampshire primary. Such events underscore the urgency of addressing the threat of AI-generated deepfake disinformation.
Partners involved in testing Voice Engine have agreed to adhere to strict guidelines, including obtaining explicit consent from individuals whose voices are cloned and transparently disclosing when AI-generated voices are used. OpenAI has also implemented safety measures such as watermarking to trace the origin of generated audio and proactive monitoring of its usage.
While Voice Engine holds promise for applications ranging from entertainment to accessibility, OpenAI's commitment to responsible deployment highlights the ethical considerations at stake in developing and releasing AI technologies.
More: https://techxplore.com/news/2024-03-openai-unveils-voice-cloning-tool.html
