To Jae Lee, a data scientist by training, it never made sense that video, which has become an enormous part of our lives with the rise of platforms like TikTok, Vimeo and YouTube, was so difficult to search, given how hard it is for machines to understand a video’s context. Searching the titles, descriptions and tags of videos was always easy enough, requiring no more than a basic algorithm. But searching within videos for specific moments and scenes was long beyond the capabilities of the technology, particularly if those moments and scenes weren’t labeled in an obvious way.
To solve this problem, Lee, alongside friends from the tech industry, built a cloud service for video search and understanding. It became Twelve Labs, which went on to raise $17 million in venture capital — $12 million of which came from a seed extension round that closed today. Radical Ventures led the extension with participation from Index Ventures, WndrCo, Spring Ventures, Weights & Biases CEO Lukas Biewald and others, Lee told TechCrunch in an email.
“The vision of Twelve Labs is to help developers build programs that can see, listen, and understand the world as we do by giving them the most powerful video understanding infrastructure,” Lee said.
A demo of the Twelve Labs platform’s capabilities. Image Credits: Twelve Labs
Twelve Labs, which is currently in closed beta, uses AI to attempt to extract “rich information” from videos, such as movement and actions, objects and people, sound, text on screen and speech, and to identify the relationships between them. The platform converts these various elements into mathematical representations called “vectors” and forms “temporal connections” between frames, enabling applications like video scene search.
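To make the vector idea concrete, here is a minimal, illustrative sketch of embedding-based scene search. The `embed_text` and `embed_video_segment` functions below are hypothetical stand-ins for a multimodal encoder, not Twelve Labs’ actual models or API; the point is simply that queries and video segments land in a shared vector space, where cosine similarity can rank segments against a natural-language query.

```python
# Illustrative sketch of embedding-based video scene search.
# embed_text / embed_video_segment are hypothetical stand-ins for a
# multimodal encoder; Twelve Labs' real models and API will differ.
import numpy as np

def embed_text(query):
    """Hypothetical text encoder returning a unit-length vector."""
    rng = np.random.default_rng(abs(hash(query)) % (2**32))
    v = rng.standard_normal(512)
    return v / np.linalg.norm(v)

def embed_video_segment(frames):
    """Hypothetical encoder for one video segment (visuals, audio, on-screen text)."""
    rng = np.random.default_rng(len(frames))
    v = rng.standard_normal(512)
    return v / np.linalg.norm(v)

def search_scenes(query, segments):
    """Rank segments by cosine similarity between query and segment embeddings."""
    q = embed_text(query)
    scored = [(name, float(q @ embed_video_segment(frames)))
              for name, frames in segments.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Example: find the segment that best matches a natural-language query.
segments = {"00:00-00:10": [0] * 240, "00:10-00:20": [0] * 300}
print(search_scenes("a person chopping vegetables", segments))
```

In production systems, an approximate nearest-neighbor index typically replaces the brute-force loop, and the same embeddings can feed downstream tasks like chapterization and summarization.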
“As a part of achieving the company’s vision to help developers create intelligent video applications, the Twelve Labs team is building ‘foundation models’ for multimodal video understanding,” Lee said. “Developers will be able to access these models through a suite of APIs, performing not only semantic search but also other tasks such as long-form video ‘chapterization,’ summary generation and video question and answering.”
Google takes a similar approach to video understanding with its MUM AI system, which the company uses to power video recommendations across Google Search and YouTube by picking out subjects in videos (e.g., “acrylic painting materials”) based on the audio, text and visual content. But while the tech might be comparable, Twelve Labs is one of the first vendors to market with it; Google has opted to keep MUM internal, declining to make it available through a public-facing API.
That being said, Google, as well as Microsoft and Amazon, offer services (i.e., Google Cloud Video AI, Azure Video Indexer and AWS Rekognition) that recognize objects, places and actions in videos and extract rich metadata at the frame level. There’s also Reminiz, a French computer vision startup that claims to be able to index any type of video and add tags to both recorded and live-streamed content. But Lee asserts that Twelve Labs is sufficiently differentiated — in part because its platform allows customers to fine-tune the AI to specific categories of video content.
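For a sense of what the incumbent services Lee is competing with look like in practice, here is a minimal sketch of label detection with Google Cloud Video Intelligence, following the pattern in Google’s client-library documentation. The bucket path is a placeholder, and the fields printed depend on the features requested.

```python
# Minimal sketch: label detection with Google Cloud Video Intelligence.
# The gs:// path is a placeholder; requires google-cloud-videointelligence
# and Google Cloud credentials to be configured.
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()
operation = client.annotate_video(
    request={
        "features": [videointelligence.Feature.LABEL_DETECTION],
        "input_uri": "gs://YOUR_BUCKET/your-video.mp4",
    }
)
result = operation.result(timeout=300)

# Print each label detected across the video's segments, with confidences.
for label in result.annotation_results[0].segment_label_annotations:
    confidences = [round(seg.confidence, 2) for seg in label.segments]
    print(label.entity.description, confidences)
```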
Mockup of API for fine-tuning the model to work better with salad-related content. Image Credits: Twelve Labs
“What we’ve found is that narrow AI products built to detect specific problems show high accuracy in their ideal scenarios in a controlled setting, but don’t scale so well to messy real-world data,” Lee said. “They act more as a rule-based system, and therefore lack the ability to generalize when variances occur. We also see this as a limitation rooted in lack of context understanding. Understanding of context is what gives humans the unique ability to make generalizations across seemingly different situations in the real world, and this is where Twelve Labs stands alone.”
Beyond search, Lee says Twelve Labs’ technology can drive things like ad insertion and content moderation, intelligently figuring out, for example, which videos showing knives are violent versus instructional. It can also be used for media analytics and real-time feedback, he says, and to automatically generate highlight reels from videos.
A little over a year after its founding (March 2021), Twelve Labs has paying customers — Lee wouldn’t reveal how many exactly — and a multiyear contract with Oracle to train AI models using Oracle’s cloud infrastructure. Looking ahead, the startup plans to invest in building out its tech and expanding its team. (Lee declined to reveal the current size of Twelve Labs’ workforce, but LinkedIn data shows it’s roughly 18 people.)
“For most companies, despite the huge value that can be attained through large models, it really does not make sense for them to train, operate and maintain these models themselves. By leveraging a Twelve Labs platform, any organization can leverage powerful video understanding capabilities with just a few intuitive API calls,” Lee said. “The future direction of AI innovation is heading straight towards multimodal video understanding, and Twelve Labs is well positioned to push the boundaries even further in 2023.”
Twelve Labs lands $12M for AI that understands the context of videos by Kyle Wiggers originally published on TechCrunch
Deep Render believes AI holds the key to more efficient video compression
Chris Besenbruch, CEO of Deep Render, sees many problems with the way video compression standards are developed today. He thinks they aren’t advancing quickly enough, bemoans the fact that they’re plagued with legal uncertainty and decries their reliance on specialized hardware for acceleration.
“The codec development process is broken,” Besenbruch said in an interview with TechCrunch ahead of Disrupt, where Deep Render is participating in the Disrupt Battlefield 200. “In the compression industry, there is a significant challenge of finding a new way forward and searching for new innovations.”
Seeking a better way, Besenbruch co-founded Deep Render with Arsalan Zafar, whom he met at Imperial College London. At the time, Besenbruch was studying computer science and machine learning. He and Zafar collaborated on a research project involving distributing terabytes of video across a network, during which they say they experienced the shortcomings of compression technology firsthand.
The last time TechCrunch covered Deep Render, the startup had just closed a £1.6 million seed round ($1.81 million) led by Pentech Ventures with participation from Speedinvest. In the roughly two years since then, Deep Render has raised an additional several million dollars from existing investors, bringing its total raised to $5.7 million.
“We thought to ourselves, if the internet pipes are difficult to extend, the only thing we can do is make the data that flows through the pipes smaller,” Besenbruch said. “Hence, we decided to fuse machine learning and AI and compression technology to develop a fundamentally new way of compressing data, getting significantly better image and video compression ratios.”
Deep Render isn’t the first to apply AI to video compression. Alphabet’s DeepMind adapted a machine learning algorithm originally developed to play board games to the problem of compressing YouTube videos, leading to a 4% reduction in the amount of data the video-sharing service needs to stream to users. Elsewhere, there’s startup WaveOne, which claims its machine learning-based video codec outperforms all existing standards across popular quality metrics.
But Deep Render’s solution is platform-agnostic. To create it, Besenbruch says that the company compiled a dataset of over 10 million video sequences on which they trained algorithms to learn to compress video data efficiently. Deep Render used a combination of on-premise and cloud hardware for the training, with the former comprising over a hundred GPUs.
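Deep Render hasn’t published its architecture, but learned codecs in the research literature typically pair an autoencoder with a rate-distortion objective: an encoder squeezes frames into a compact latent, a decoder reconstructs them, and training balances reconstruction quality against how many bits the latent would cost. The sketch below shows that general recipe in PyTorch; the tiny network, the bitrate proxy and the `lmbda` weight are illustrative assumptions, not Deep Render’s actual method.

```python
# Generic rate-distortion training step for a learned codec (illustrative only;
# this is the textbook recipe, not Deep Render's model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCodec(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 32, 5, stride=2, padding=2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 64, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 5, stride=2, padding=2, output_padding=1),
        )

    def forward(self, x):
        latent = self.encoder(x)
        # Additive uniform noise approximates quantization during training.
        latent = latent + torch.empty_like(latent).uniform_(-0.5, 0.5)
        return self.decoder(latent), latent

model = TinyCodec()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
frames = torch.rand(4, 3, 64, 64)       # stand-in batch of video frames
lmbda = 0.01                            # rate/distortion trade-off weight

recon, latent = model(frames)
distortion = F.mse_loss(recon, frames)  # reconstruction error
rate_proxy = latent.abs().mean()        # crude proxy for the latent's bitrate
loss = distortion + lmbda * rate_proxy  # rate-distortion objective

opt.zero_grad()
loss.backward()
opt.step()
```

Real learned codecs typically replace the crude bitrate proxy with a learned entropy model and add temporal prediction between frames, which is where much of the engineering effort goes.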
Deep Render claims the resulting compression standard is 5x better than HEVC, a widely used codec, and can run in real time on mobile devices with a dedicated AI accelerator chip (e.g., the Apple Neural Engine in modern iPhones). Besenbruch says the company is in talks with three large tech firms — all with market caps over $300 billion — about paid pilots, though he declined to share names.
Eddie Anderson, a founding partner at Pentech and board member at Deep Render, shared via email: “Deep Render’s machine learning approach to codecs completely disrupts an established market. Not only is it a software route to market, but their [compression] performance is significantly better than the current state of the art. As bandwidth demands continue to increase, their solution has the potential to drive vastly improved commercial performance for current media owners and distributors.”
Deep Render currently employs 20 people. By the end of 2023, Besenbruch expects that number will more than triple to 62.
Deep Render believes AI holds the key to more efficient video compression by Kyle Wiggers originally published on TechCrunch
AI is taking over the iconic voice of Darth Vader, with the blessing of James Earl Jones
From the cringe-inducing Jar Jar Binks to unconvincing virtual Leia and Luke, Disney’s history with CG characters is, shall we say, mixed. But that’s not stopping them from replacing one of the most recognizable voices in cinema history, Darth Vader, with an AI-powered voice replica based on James Earl Jones.
The retirement of Jones, now 91, from the role is, of course, well-earned. But if Disney continues to have its way (and there is no force in the world that can stop it), Vader is far from done. It would be unthinkable to recast the character, but if Jones is done, what can they do?
The solution is Respeecher, a Ukrainian company that trains text-to-speech machine learning models with the (licensed and released) recordings of actors who, for whatever reason, will no longer play a part.
Vanity Fair just ran a great story on how the company managed to put together the Vader replacement voice for Disney’s “Obi-Wan Kenobi” while the country was being invaded by Russia. Interesting enough on its own, but others noted that it also serves as confirmation that the iconic voice of Vader would, from now on, officially be rendered by AI.
This is far from the first case where a well-known actor has had their voice synthesized or altered in this way. Another notable recent example is “Top Gun: Maverick,” in which the voice of Val Kilmer (reprising his role as Iceman) was synthesized due to the actor’s medical condition.
That sounded good, but a handful of whispered lines aren’t quite the same as a 1:1 replacement for a voice even children have known (and feared) for decades. Can a small company working at the cutting edge of machine learning tech pull it off?
You can judge for yourself — here’s one compilation of clips — and to me it seems pretty solid. The main criticism of that show wasn’t Vader’s voice, that’s for sure. If you weren’t expecting anything, you would probably just assume it was Jones speaking the lines, not another actor’s voice being modified to fit the bill.
The giveaway is that it doesn’t actually sound like Jones does now — it sounds like he did in the ’70s and ’80s when the original trilogy came out. That’s what anyone seeing Obi-Wan and Vader fight will expect, probably, but it’s a bit strange to think about.
It opens up a whole new can of worms. Sure, an actor may license their voice work for a character, but what about when that character ages? What about a totally different character they voice, but that there is some similarity to? What recourse do they have if their voice synthesis files leak and people are using it willy-nilly?
It’s an interesting new field to work in, but it’s hardly without pitfalls and ethical conundra. Disney has already broken the seal on many transformative technologies in filmmaking and television, and borne the deserved criticism when what it put out did not meet audiences’ expectations.
But they can take the hits and roll with them — maybe even take a page from George Lucas’s book and try to rewrite history, improving the rendering of Grand Moff Tarkin in a bid to make us forget how waxy he looked originally. As long as the technology is used to advance and complement the creativity of writers, directors and everyone else who makes movies magic, and not to save a buck or escape tricky rights situations, I can get behind it.
AI is taking over the iconic voice of Darth Vader, with the blessing of James Earl Jones by Devin Coldewey originally published on TechCrunch
OpenAI begins allowing users to edit faces with DALL-E 2
After initially disabling the capability, OpenAI today announced that customers with access to DALL-E 2 can upload people’s faces to edit them using the AI-powered image-generating system. Previously, OpenAI only allowed users to work with and share photorealistic faces generated by DALL-E 2, and banned the uploading of any photo that might depict a real person, including photos of prominent celebrities and public figures.
OpenAI claims that improvements to its safety system made the face-editing feature possible by “minimizing the potential of harm” from deepfakes as well as attempts to create sexual, political and violent content. In an email to customers, the company wrote:
Many of you have told us that you miss using DALL-E to dream up outfits and hairstyles on yourselves and edit the backgrounds of family photos. A reconstructive surgeon told us that he’d been using DALL-E to help his patients visualize results. And filmmakers have told us that they want to be able to edit images of scenes with people to help speed up their creative processes … [We] built new detection and response techniques to stop misuse.
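For a sense of what an inpainting-style edit looks like programmatically, here is a minimal sketch using the image-edit call in the legacy `openai` Python package (0.x). The file names, prompt and API key are placeholders, and the web-based DALL-E editor described in this article works the same way conceptually: a mask marks the region the model is allowed to repaint.

```python
# Minimal sketch of a masked (inpainting-style) edit with the legacy openai 0.x package.
# photo.png and mask.png are placeholders; the transparent area of the mask marks
# the region DALL-E is allowed to repaint. Both must be square PNGs.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Image.create_edit(
    image=open("photo.png", "rb"),
    mask=open("mask.png", "rb"),
    prompt="the same person, with a different hairstyle, in a park",
    n=1,
    size="1024x1024",
)
print(response["data"][0]["url"])  # URL of the edited image
```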
The change in policy isn’t opening the floodgates necessarily. OpenAI’s terms of service will continue to prohibit uploading pictures of people without their consent or images that users don’t have the rights to — although it’s not clear how consistent the company’s historically been about enforcing those policies.
In any case, it’ll be a true test of OpenAI’s filtering technology, which some customers in the past have complained about being overzealous and somewhat inaccurate. Deepfakes come in many flavors, from fake vacation photos to presidents of war-torn countries. Accounting for every emerging form of abuse will be a never-ending battle, in some cases with very high stakes.
No doubt, OpenAI — which has the backing of Microsoft and notable VC firms including Khosla Ventures — is eager to avoid the controversy associated with Stability AI’s Stable Diffusion, an image-generating system that’s available in an open source format without any restrictions. As TechCrunch recently reported, it didn’t take long before Stable Diffusion — which can also edit face images — was being used by some to create pornographic, nonconsensual deepfakes of celebrities like Emma Watson.
So far, OpenAI has positioned itself as a brand-friendly, buttoned-up alternative to the no-holds-barred Stability AI. And with the constraints around the new face editing feature for DALL-E 2, the company is maintaining the status quo.
DALL-E 2 remains in invite-only beta. In late August, OpenAI announced that over a million people are using the service.
OpenAI begins allowing users to edit faces with DALL-E 2 by Kyle Wiggers originally published on TechCrunch
AI is getting better at generating porn. We might not be prepared for the consequences.
A red-headed woman stands on the moon, her face obscured. Her naked body looks like it belongs on a poster you’d find on a hormonal teenager’s bedroom wall — that is, until you reach her torso, where three arms sprout from her shoulders.
AI-powered systems like Stable Diffusion, which translate text prompts into pictures, have been used by brands and artists to create concept images, award-winning (albeit controversial) prints and full-blown marketing campaigns.
But some users, intent on exploring the systems’ murkier side, have been testing them for a different sort of use case: porn.
AI porn is about as unsettling and imperfect as you’d expect (that red-head on the moon was likely not generated by someone with an extra arm fetish). But as the tech continues to improve, it will evoke challenging questions for AI ethicists and sex workers alike.
Pornography created using the latest image-generating systems first arrived on the scene via the discussion boards 4chan and Reddit earlier this month, after a member of 4chan leaked the open source Stable Diffusion system ahead of its official release. Then, last week, what appears to be one of the first websites dedicated to high-fidelity AI porn generation launched.
Called Porn Pen, the website allows users to customize the appearance of nude AI-generated models (all of which are women) using toggleable tags like “babe,” “lingerie model,” “chubby,” ethnicities (e.g., “Russian” and “Latina”) and backdrops (e.g., “bedroom,” “shower” and wildcards like “moon”). Buttons capture models from the front, back or side, and change the appearance of the generated photo (e.g., “film photo,” “mirror selfie”). There must be a bug in the mirror selfies, though, because in the feed of user-generated images, some mirrors don’t actually reflect a person. But of course, these models are not people at all. Porn Pen functions like “This Person Does Not Exist,” only it’s NSFW.
On Y Combinator’s Hacker News forum, a user purporting to be the creator describes Porn Pen as an “experiment” using cutting-edge text-to-image models. “I explicitly removed the ability to specify custom text to avoid harmful imagery from being generated,” they wrote. “New tags will be added once the prompt-engineering algorithm is fine-tuned further.” The creator did not respond to TechCrunch’s request for comment.
But Porn Pen raises a host of ethical questions, like biases in image-generating systems and the sources of the data from which they arose. Beyond the technical implications, one wonders whether new tech to create customized porn — assuming it catches on — could hurt adult content creators who make a living doing the same.
“I think it’s somewhat inevitable that this would come to exist when [OpenAI’s] DALL-E did,” Os Keyes, a PhD candidate at Seattle University, told TechCrunch via email. “But it’s still depressing how both the options and defaults replicate a very heteronormative and male gaze.”
Ashley, a sex worker and peer organizer who works on cases involving content moderation, thinks that the content generated by Porn Pen isn’t a threat to sex workers in its current state.
“There is endless media out there,” said Ashley, who did not want their last name published for fear of being harassed for their job. “But people differentiate themselves not by just making the best media, but also by being an accessible, interesting person. It’s going to be a long time before AI can replace that.”
On existing monetizable porn sites like OnlyFans and ManyVids, adult creators must verify their age and identity so that the company knows they are consenting adults. AI-generated porn models can’t do this, of course, because they aren’t real.
Ashley worries, though, that if porn sites crack down on AI porn, it might lead to harsher restrictions for sex workers, who are already facing increased regulation from legislation like SESTA/FOSTA. Congress introduced the Safe Sex Workers Study Act in 2019 to examine the effects of this legislation, which makes online sex work more difficult. This study found that “community organizations [had] reported increased homelessness of sex workers” after losing the “economic stability provided by access to online platforms.”
“SESTA was sold as fighting child sex trafficking, but it created a new criminal law about prostitution that had nothing about age,” Ashley said.
Currently, few laws around the world pertain to deepfaked porn. In the U.S., only Virginia and California have regulations restricting certain uses of faked and deepfaked pornographic media.
Systems such as Stable Diffusion “learn” to generate images from text by example. Fed billions of pictures labeled with annotations that indicate their content — for example, a picture of a dog labeled “Dachshund, wide-angle lens” — the systems learn that specific words and phrases refer to specific art styles, aesthetics, locations and so on.
This works relatively well in practice. A prompt like “a bird painting in the style of Van Gogh” will predictably yield a Van Gogh-esque image depicting a bird. But it gets trickier when the prompts are vaguer, refer to stereotypes or deal with subject matter with which the systems aren’t familiar.
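That Van Gogh example is easy to reproduce with the open source release discussed here. Below is a minimal sketch using Hugging Face’s `diffusers` library; the model ID and CUDA device reflect a typical local setup (an assumption), and downloading the weights may require accepting the model’s license on Hugging Face.

```python
# Minimal text-to-image sketch with Stable Diffusion via Hugging Face diffusers.
# Model ID and CUDA device are assumptions about a typical local setup.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("a bird painting in the style of Van Gogh").images[0]
image.save("bird_van_gogh.png")
```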
For example, Porn Pen sometimes generates images without a person at all — presumably a failure of the system to understand the prompt. Other times, as alluded to earlier, it shows physically improbable models, typically with extra limbs, nipples in unusual places and contorted flesh.
“By definition [these systems are] going to represent those whose bodies are accepted and valued in mainstream society,” Keyes said, noting that Porn Pen only has categories for cisnormative people. “It’s not surprising to me that you’d end up with a disproportionately high number of women, for example.”
While Stable Diffusion, one of the systems likely underpinning Porn Pen, has relatively few “NSFW” images in its training dataset, early experiments from Redditors and 4chan users show that it’s quite competent at generating pornographic deepfakes of celebrities (Porn Pen — perhaps not coincidentally — has a “celebrity” option). And because it’s open source, there’d be nothing to prevent Porn Pen’s creator from fine-tuning the system on additional nude images.
“It’s definitely not great to generate [porn] of an existing person,” Ashley said. “It can be used to harass them.”
Deepfake porn is often created to threaten and harass people. These images are almost always developed without the subject’s consent out of malicious intent. In 2019, the research company Sensity AI found that 96% of deepfake videos online were non-consensual porn.
Mike Cook, an AI researcher who’s a part of the Knives and Paintbrushes collective, says that there’s a possibility the dataset includes people who’ve not consented to their image being used for training in this way, including sex workers.
“Many of [the people in the nudes in the training data] may derive their income from producing pornography or pornography-adjacent content,” Cook said. “Just like fine artists, musicians or journalists, the works these people have produced are being used to create systems that also undercut their ability to earn a living in the future.”
In theory, a porn actor could use copyright protections, defamation and potentially even human rights laws to fight the creator of a deepfaked image. But as a piece in MIT Technology Review notes, gathering evidence in support of the legal argument can prove to be a massive challenge.
When more primitive AI tools popularized deepfaked porn several years ago, a Wired investigation found that nonconsensual deepfake videos were racking up millions of views on mainstream porn sites like Pornhub. Other deepfaked works found a home on sites akin to Porn Pen — according to Sensity data, the top four deepfake porn websites received more than 134 million views in 2018.
“AI image synthesis is now a widespread and accessible technology, and I don’t think anyone is really prepared for the implications of this ubiquity,” Cook continued. “In my opinion, we have rushed very, very far into the unknown in the last few years with little regard for the impact of this technology.”
To Cook’s point, one of the most popular sites for AI-generated porn expanded late last year through partner agreements, referrals and an API, allowing the service — which hosts hundreds of nonconsensual deepfakes — to survive bans on its payments infrastructure. And in 2020, researchers discovered a Telegram bot that generated abusive deepfake images of more than 100,000 women, including underage girls.
“I think we’ll see a lot more people testing the limits of both the technology and society’s boundaries in the coming decade,” Cook said. “We must accept some responsibility for this and work to educate people about the ramifications of what they are doing.”