Indian filmmaker Ram Gopal Varma abandons human musicians for AI-generated music
Indian filmmaker Ram Gopal Varma is ditching human musicians for artificial intelligence, saying he’ll use only AI-generated tunes in future projects, a move that underscores AI’s growing reach in creative industries. The filmmaker and screenwriter, known for popular Bollywood movies including Company, Rangeela, Sarkar, and Satya, has launched a venture, called RGV Den Music, that […]
Category Archive: Artificial Intelligence
QuickVid uses AI to generate short-form videos, complete with voiceovers
Generative AI is coming for videos. A new website, QuickVid, combines several generative AI systems into a single tool for automatically creating short-form YouTube, Instagram, TikTok and Snapchat videos.
Given as little as a single word, QuickVid chooses a background video from a library, writes a script and keywords, overlays images generated by DALL-E 2 and adds a synthetic voiceover and background music from YouTube’s royalty-free music library. QuickVid’s creator, Daniel Habib, says that he’s building the service to help creators meet the “ever-growing” demand from their fans.
“By providing creators with tools to quickly and easily produce quality content, QuickVid helps creators increase their content output, reducing the risk of burnout,” Habib told TechCrunch in an email interview. “Our goal is to empower your favorite creator to keep up with the demands of their audience by leveraging advancements in AI.”
But depending on how they’re used, tools like QuickVid threaten to flood already-crowded channels with spammy and duplicative content. They also face potential backlash from creators who opt not to use the tools, whether because of cost ($10 per month) or on principle, yet might have to compete with a raft of new AI-generated videos.
Going after video
QuickVid, which Habib, a self-taught developer who previously worked at Meta on Facebook Live and video infrastructure, built in a matter of weeks, launched on December 27. It’s relatively bare bones at present — Habib says that more personalization options will arrive in January — but QuickVid can cobble together the components that make up a typical informational YouTube Short or TikTok video, including captions and even avatars.
It’s easy to use. First, a user enters a prompt describing the subject matter of the video they want to create. QuickVid uses the prompt to generate a script, leveraging the generative text powers of GPT-3. From keywords either extracted from the script automatically or entered manually, QuickVid selects a background video from the royalty-free stock media library Pexels and generates overlay images using DALL-E 2. It then outputs a voiceover via Google Cloud’s text-to-speech API — Habib says that users will soon be able to clone their voice — before combining all these elements into a video.
Image Credits: QuickVid
See this video made with the prompt “Cats”:
https://techcrunch.com/wp-content/uploads/2022/12/img_5pg7k95x9ig2tofh7mkrr_cfr.mp4
Or this one:
https://techcrunch.com/wp-content/uploads/2022/12/img_61ighv4x55slq9582dbx_cfr.mp4
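QuickVid’s actual code isn’t public, but the flow described above — prompt to script, script to keywords, keywords to stock footage, plus a voiceover — can be sketched roughly as follows. The function names and placeholder outputs here are invented for illustration; a real version would call GPT-3, Pexels, DALL-E 2 and a text-to-speech API at the marked points.

```python
# Illustrative sketch of a QuickVid-style assembly pipeline.
# All helpers are hypothetical stand-ins for the real API calls.

def generate_script(prompt):
    # Would call a text model such as GPT-3.
    return f"A short informational video about {prompt}."

def extract_keywords(script):
    # Naive stand-in for automatic keyword extraction.
    stopwords = {"a", "the", "short", "informational", "video", "about"}
    return [w.strip(".").lower() for w in script.split()
            if w.lower() not in stopwords]

def pick_background(keywords):
    # Would query a royalty-free stock library such as Pexels.
    return f"https://example.com/stock/{keywords[0]}.mp4"

def make_voiceover(script):
    # Would call a text-to-speech API such as Google Cloud's.
    return b"placeholder-audio-bytes"

def assemble_video(prompt):
    # Chain the steps and return the components to be muxed together.
    script = generate_script(prompt)
    keywords = extract_keywords(script)
    return {
        "script": script,
        "background": pick_background(keywords),
        "voiceover": make_voiceover(script),
    }
```

The point of the sketch is the orchestration, not any single model: each stage consumes the previous stage’s output, which is why a single word is enough input to produce a complete video.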
QuickVid certainly isn’t pushing the boundaries of what’s possible with generative AI. Both Meta and Google have showcased AI systems that can generate completely original clips given a text prompt. But QuickVid amalgamates existing AI to exploit the repetitive, templated format of B-roll-heavy short-form videos, getting around the problem of having to generate the footage itself.
“Successful creators have an extremely high-quality bar and aren’t interested in putting out content that they don’t feel is in their own voice,” Habib said. “This is the use case we’re focused on.”
That supposedly being the case, QuickVid’s videos are, in terms of quality, a mixed bag. The background videos tend to be random or only tangentially related to the topic, which isn’t surprising given that QuickVid is currently limited to the Pexels catalog. The DALL-E 2-generated images, meanwhile, exhibit the limitations of today’s text-to-image tech, like garbled text and distorted proportions.
In response to my feedback, Habib said that QuickVid is “being tested and tinkered with daily.”
Copyright issues
According to Habib, QuickVid users retain the right to use the content they create commercially and have permission to monetize it on platforms like YouTube. But the copyright status of AI-generated content is … nebulous, at least presently. The U.S. Copyright Office recently moved to revoke copyright protection for an AI-generated comic, for example, saying copyrightable works require human authorship.
When asked how the decision might affect QuickVid, Habib said he believes it pertains only to the “patentability” of AI-generated products and not the rights of creators to use and monetize their content. Creators, he pointed out, rarely submit patents for videos and usually lean into the creator economy, letting other creators repurpose their clips to increase their own reach.
“Creators care about putting out high-quality content in their voice that will help grow their channel,” Habib said.
Another legal challenge on the horizon might affect QuickVid’s DALL-E 2 integration — and, by extension, the site’s ability to generate image overlays. Microsoft, GitHub and OpenAI are being sued in a class action lawsuit that accuses them of violating copyright law by allowing Copilot, a code-generating system, to regurgitate sections of licensed code without providing credit. (Copilot was co-developed by OpenAI and GitHub, which Microsoft owns.) The case has implications for generative art AI like DALL-E 2, which has similarly been found to copy and paste images from the datasets on which it was trained.
Habib isn’t concerned, arguing that the generative AI genie’s out of the bottle. “If another lawsuit showed up and OpenAI disappeared tomorrow, there are several alternatives that could power QuickVid,” he said, referring to the open source DALL-E 2-like system Stable Diffusion. QuickVid is already testing Stable Diffusion for generating avatar pics.
Moderation and spam
Aside from the legal dilemmas, QuickVid might soon have a moderation problem on its hands. Generative AI has well-known toxicity and factual accuracy problems, despite the filters and techniques OpenAI has implemented to mitigate them. GPT-3 spouts misinformation, particularly about recent events, which are beyond the boundaries of its knowledge base. And ChatGPT, a fine-tuned offspring of GPT-3, has been shown to use sexist and racist language.
That’s worrisome, particularly for people who’d use QuickVid to create informational videos. In a quick test, I had my partner — who’s far more creative than me, particularly in this area — enter a few offensive prompts to see what QuickVid would generate. To QuickVid’s credit, obviously problematic prompts like “Jewish new world order” and “9/11 conspiracy theory” didn’t yield toxic scripts. But for “Critical race theory indoctrinating students,” QuickVid generated a video implying that critical race theory could be used to brainwash schoolchildren.
See:
https://techcrunch.com/wp-content/uploads/2022/12/img_e4wba39us0vqtc8051491_cfr.mp4
Habib says that he’s relying on OpenAI’s filters to do most of the moderation work and asserts that it’s incumbent on users to manually review every video created by QuickVid to ensure “everything is within the boundaries of the law.”
“As a general rule, I believe people should be able to express themselves and create whatever content they want,” Habib said.
That apparently includes spammy content. Habib makes the case that the video platforms’ algorithms, not QuickVid, are best positioned to determine the quality of a video, and that people who produce low-quality content “are only damaging their own reputations.” The reputational damage will naturally disincentivize people from creating mass spam campaigns with QuickVid, he says.
“If people don’t want to watch your video, then you won’t receive distribution on platforms like YouTube,” he added. “Producing low-quality content will also make people look at your channel in a negative light.”
But it’s instructive to look at ad agencies like Fractl, which in 2019 used an AI system called Grover to generate an entire site of marketing materials — reputation be damned. In an interview with The Verge, Fractl partner Kristin Tynski said that she foresaw generative AI enabling “a massive tsunami of computer-generated content across every niche imaginable.”
In any case, video-sharing platforms like TikTok and YouTube haven’t had to contend with moderating AI-generated content on a massive scale. Deepfakes — synthetic videos that replace an existing person with someone else’s likeness — began to populate platforms like YouTube several years ago, driven by tools that made deepfaked footage easier to produce. But unlike even the most convincing deepfakes today, the types of videos QuickVid creates aren’t obviously AI-generated in any way.
Google Search’s policy on AI-generated text might be a preview of what’s to come in the video domain. Google doesn’t treat synthetic text differently from human-written text where it concerns search rankings but takes actions on content that’s “intended to manipulate search rankings and not help users.” That includes content stitched together or combined from different web pages that “[doesn’t] add sufficient value” as well as content generated through purely automated processes, both of which might apply to QuickVid.
In other words, AI-generated videos might not be banned from platforms outright should they take off in a major way but rather simply become the cost of doing business. That isn’t likely to allay the fears of experts who believe that platforms like TikTok are becoming a new home for misleading videos, but — as Habib said during the interview — “there is no stopping the generative AI revolution.”
QuickVid uses AI to generate short-form videos, complete with voiceovers by Kyle Wiggers originally published on TechCrunch
Twelve Labs lands $12M for AI that understands the context of videos
To Jae Lee, a data scientist by training, it never made sense that video — which has become an enormous part of our lives, what with the rise of platforms like TikTok, Vimeo and YouTube — was difficult to search across due to the technical barriers posed by context understanding. Searching the titles, descriptions and tags of videos was always easy enough, requiring no more than a basic algorithm. But searching within videos for specific moments and scenes was long beyond the capabilities of tech, particularly if those moments and scenes weren’t labeled in an obvious way.
To solve this problem, Lee, alongside friends from the tech industry, built a cloud service for video search and understanding. It became Twelve Labs, which went on to raise $17 million in venture capital — $12 million of which came from a seed extension round that closed today. Radical Ventures led the extension with participation from Index Ventures, WndrCo, Spring Ventures, Weights & Biases CEO Lukas Biewald and others, Lee told TechCrunch in an email.
“The vision of Twelve Labs is to help developers build programs that can see, listen, and understand the world as we do by giving them the most powerful video understanding infrastructure,” Lee said.
A demo of the Twelve Labs platform’s capabilities. Image Credits: Twelve Labs
Twelve Labs, which is currently in closed beta, uses AI to attempt to extract “rich information” from videos such as movement and actions, objects and people, sound, text on screen, and speech to identify the relationships between them. The platform converts these various elements into mathematical representations called “vectors” and forms “temporal connections” between frames, enabling applications like video scene search.
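The vector approach described above can be illustrated with a toy example: embed each scene as a vector, embed the query the same way, and rank scenes by cosine similarity. The 3-D vectors and scene labels below are invented for illustration — real systems like Twelve Labs’ use learned, high-dimensional embeddings — but the search mechanics are the same.

```python
import math

# Toy vector index mapping scene descriptions to (made-up) embeddings.
scene_index = {
    "dog catches frisbee": [0.9, 0.1, 0.0],
    "chef chops onions":   [0.1, 0.8, 0.2],
    "sunset over ocean":   [0.0, 0.2, 0.9],
}

def cosine_similarity(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(query_vector, index):
    # Return the scene whose embedding is most similar to the query's.
    return max(index, key=lambda scene: cosine_similarity(query_vector, index[scene]))
```

Because similarity is computed in the shared embedding space rather than over titles or tags, a query can match a scene even when no label mentions it — which is the capability the article says older metadata-based search lacked.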
“As a part of achieving the company’s vision to help developers create intelligent video applications, the Twelve Labs team is building ‘foundation models’ for multimodal video understanding,” Lee said. “Developers will be able to access these models through a suite of APIs, performing not only semantic search but also other tasks such as long-form video ‘chapterization,’ summary generation and video question and answering.”
Google takes a similar approach to video understanding with its MUM AI system, which the company uses to power video recommendations across Google Search and YouTube by picking out subjects in videos (e.g., “acrylic painting materials”) based on the audio, text and visual content. But while the tech might be comparable, Twelve Labs is one of the first vendors to market with it; Google has opted to keep MUM internal, declining to make it available through a public-facing API.
That being said, Google, as well as Microsoft and Amazon, offer services (i.e., Google Cloud Video AI, Azure Video Indexer and AWS Rekognition) that recognize objects, places and actions in videos and extract rich metadata at the frame level. There’s also Reminiz, a French computer vision startup that claims to be able to index any type of video and add tags to both recorded and live-streamed content. But Lee asserts that Twelve Labs is sufficiently differentiated — in part because its platform allows customers to fine-tune the AI to specific categories of video content.
Mockup of API for fine-tuning the model to work better with salad-related content. Image Credits: Twelve Labs
“What we’ve found is that narrow AI products built to detect specific problems show high accuracy in their ideal scenarios in a controlled setting, but don’t scale so well to messy real-world data,” Lee said. “They act more as a rule-based system, and therefore lack the ability to generalize when variances occur. We also see this as a limitation rooted in lack of context understanding. Understanding of context is what gives humans the unique ability to make generalizations across seemingly different situations in the real world, and this is where Twelve Labs stands alone.”
Beyond search, Lee says Twelve Labs’ technology can drive things like ad insertion and content moderation, intelligently figuring out, for example, which videos showing knives are violent versus instructional. It can also be used for media analytics and real-time feedback, he says, and to automatically generate highlight reels from videos.
A little over a year after its founding (March 2021), Twelve Labs has paying customers — Lee wouldn’t reveal how many exactly — and a multiyear contract with Oracle to train AI models using Oracle’s cloud infrastructure. Looking ahead, the startup plans to invest in building out its tech and expanding its team. (Lee declined to reveal the current size of Twelve Labs’ workforce, but LinkedIn data shows it’s roughly 18 people.)
“For most companies, despite the huge value that can be attained through large models, it really does not make sense for them to train, operate and maintain these models themselves. By leveraging a Twelve Labs platform, any organization can leverage powerful video understanding capabilities with just a few intuitive API calls,” Lee said. “The future direction of AI innovation is heading straight towards multimodal video understanding, and Twelve Labs is well positioned to push the boundaries even further in 2023.”
Twelve Labs lands $12M for AI that understands the context of videos by Kyle Wiggers originally published on TechCrunch
Regie secures $10M to generate marketing copy using AI
Regie.ai, a startup using OpenAI’s GPT-3 text-generating system to create sales and marketing content for brands, today announced that it raised $10 million in Series A funding led by Scale Venture Partners with participation from Foundation Capital, South Park Commons, Day One Ventures and prominent angel investors. The fresh investment comes as VCs see a growing opportunity in AI-powered, copy-generating adtech companies, whose tech promises to save time while potentially increasing personalization.
Regie was founded in 2020 by Matt Millen and Srinath Sridhar. Previously a software engineer at Google and Meta, Sridhar is a data scientist by trade, having developed enterprise-scale AI systems that detect duplicate images and rank search results. Millen was formerly a VP at T-Mobile, leading the national sales teams (e.g., strategic accounts and public sector).
With Regie, Sridhar says, he and Millen aimed to create a way for companies to communicate with their customers via channels like email, social media, text, podcasts, online advertising and more. Because companies have so many platforms and mediums at their disposal, he notes, it can be a challenge for content marketers to continuously produce compelling content that reaches their customers.
“The way content is getting generated has fundamentally changed,” Sridhar told TechCrunch in an email interview. “Marketers and copywriters working in the enterprise … increasingly [need] to produce and manage content and content workflows at scale.”
Regie uses GPT-3 to power its service — the same GPT-3 that can generate poetry, prose and academic papers. But it’s a “flavor” of GPT-3 fine-tuned on a training data set of roughly 20,000 sales sequences (the series of steps to convert prospects into paying customers) and nearly 100 million sales emails. Also in the mix are custom language systems built by Regie to reflect brands and their messaging, designed to be integrated with existing sales platforms like Outreach, HubSpot and Salesloft.
Image Credits: Regie
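For a sense of what fine-tuning on sales sequences involves in practice: GPT-3-era fine-tuning consumed JSONL files of prompt/completion pairs. The record below is a hypothetical example — the sales copy, separator convention and field contents are invented for illustration, not taken from Regie’s actual training data.

```python
import json

# Hypothetical fine-tuning examples in the prompt/completion JSONL shape
# used for GPT-3-era fine-tuning. Content is invented for illustration.
examples = [
    {
        "prompt": "Write a cold outreach email for a cybersecurity product.\n\n###\n\n",
        "completion": " Hi {first_name}, I noticed your team recently expanded...",
    },
    {
        "prompt": "Write a follow-up email after a product demo.\n\n###\n\n",
        "completion": " Thanks for taking the time yesterday, {first_name}...",
    },
]

# One JSON object per line, ready to write out as a .jsonl training file.
lines = [json.dumps(example) for example in examples]
```

Scaled up to tens of thousands of sequences, a corpus in this shape is what teaches a general-purpose model the structure and tone of sales outreach.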
Lest the systems spew problematic language, Regie says that every system goes through “human curation” and vetting before being released. The startup also claims to train the systems on “inclusive” language and test them for biases, like bias against certain demographic groups.
Customers can use Regie to generate original, optimized-for-search-engines content or create custom sales sequences. The platform also offers blog- and social-media-post-authoring tools for personalizing messages, as well as a Chrome extension that analyzes the “quality” of emails that customers send — and optionally rewrites the text.
“Generative AI is completely disrupting the way content is created today. The biggest competitors of Regie would be the large content authoring and management platforms that will be completely redesigned AI first going forward,” Sridhar said confidently. “For example, Adobe’s suite of products including Acrobat, Illustrator, Photoshop, now Figma as well as Adobe Experience Cloud will start to get outdated as Regie continues to build on an intelligent content creation and management platform for the enterprise.”
More immediately, Regie competes with vendors like Jasper, Phrasee, Copysmith and Copy.ai — all of which tap AI to generate bespoke marketing copy. But Sridhar argues that Regie is a more vertical platform that caters to go-to-market teams in the enterprise while combining text, images and workflows into a single pane of glass.
“Generative AI is such a paradigm shift that not only productivity and top-line of companies will go up as a result, but the bottom line will also go down simultaneously. There are very few products that can improve both sides of that financial equation,” Sridhar continued. “So if a company wants to reduce costs because they want to assimilate sales tools, or reduce outsourced writing while simultaneously increasing revenue, Regie can do that. If you are an outsourced marketing agency looking to retain more customers and efficiently generate content at scale, Regie can definitely do that for agencies as well.”
The company currently has more than 70 software-as-a-service customers on annual contracts, including AT&T, Sophos, Okta and Crunchbase. Sridhar didn’t reveal revenue but said that he expects the 25-person company to grow “meaningfully” this year.
“This is a revolutionary new field. And as always, adoption will require educating the users,” Sridhar said. “It is clear to us as practitioners that the world has changed. But it will take time for others to get their hands dirty and convince themselves that this is happening — and that it is a very positive development. So we have to be patient in educating the industry. We also have to show that content quality isn’t compromised and that it can perform better and be maintained more consistently with the strategic application of AI.”
To date, Regie has raised $14.8 million.
Regie secures $10M to generate marketing copy using AI by Kyle Wiggers originally published on TechCrunch
OpenAI begins allowing users to edit faces with DALL-E 2
After initially disabling the capability, OpenAI today announced that customers with access to DALL-E 2 can upload people’s faces to edit them using the AI-powered image-generating system. Previously, OpenAI only allowed users to work with and share photorealistic faces and banned the uploading of any photo that might depict a real person, including photos of prominent celebrities and public figures.
OpenAI claims that improvements to its safety system made the face-editing feature possible by “minimizing the potential of harm” from deepfakes as well as attempts to create sexual, political and violent content. In an email to customers, the company wrote:
Many of you have told us that you miss using DALL-E to dream up outfits and hairstyles on yourselves and edit the backgrounds of family photos. A reconstructive surgeon told us that he’d been using DALL-E to help his patients visualize results. And filmmakers have told us that they want to be able to edit images of scenes with people to help speed up their creative processes … [We] built new detection and response techniques to stop misuse.
The change in policy isn’t opening the floodgates necessarily. OpenAI’s terms of service will continue to prohibit uploading pictures of people without their consent or images that users don’t have the rights to — although it’s not clear how consistent the company’s historically been about enforcing those policies.
In any case, it’ll be a true test of OpenAI’s filtering technology, which some customers in the past have complained about being overzealous and somewhat inaccurate. Deepfakes come in many flavors, from fake vacation photos to presidents of war-torn countries. Accounting for every emerging form of abuse will be a never-ending battle, in some cases with very high stakes.
No doubt, OpenAI — which has the backing of Microsoft and notable VC firms including Khosla Ventures — is eager to avoid the controversy associated with Stability AI’s Stable Diffusion, an image-generating system that’s available in an open source format without any restrictions. As TechCrunch recently wrote about, it didn’t take long before Stable Diffusion — which can also edit face images — was being used by some to create pornographic, nonconsensual deepfakes of celebrities like Emma Watson.
So far, OpenAI has positioned itself as a brand-friendly, buttoned-up alternative to the no-holds-barred Stability AI. And with the constraints around the new face editing feature for DALL-E 2, the company is maintaining the status quo.
DALL-E 2 remains in invite-only beta. In late August, OpenAI announced that over a million people are using the service.
OpenAI begins allowing users to edit faces with DALL-E 2 by Kyle Wiggers originally published on TechCrunch