Category archive: Robotics & AI


Camera maker Canon leans into software at CES

Depending on whether you spend most of your time in hospitals, offices or in the great outdoors, when you hear “Canon,” your mind will likely go to medical scanning equipment, high-end printers or cameras. At CES this year, the 85-year-old company is leaning in a new direction, with an interesting focus on software applications.
At the show, the imaging giant showed off a direction it has hinted at before, this time relying far less on its own hardware and more on the software the company has developed, in part as a response to the COVID-19 pandemic casting a shadow over people’s ability to connect. To the chorus of “meaningful communication” and “powerful collaboration,” the company appears to be plotting a new course for what’s next.


“Canon is creating ground-breaking solutions that help people connect in more ways than we ever could have imagined, redefining how they work and live at a time when many of them are embracing a hybrid lifestyle,” said Kazuto Ogawa, president and CEO, Canon U.S.A., Inc., in a press briefing at CES 2023. “Canon’s ultimate role is to bring people closer together by revealing endless opportunities for creators. Under our theme of ‘Limitless Is More,’ we will show CES 2023 attendees what we are creating as a company focused on innovation and a world without limits.”
Among other things, Canon showed off a somewhat gimmicky immersive experience tied in with M. Night Shyamalan’s upcoming thriller movie, “Knock at the Cabin.” The very Shyamalanesque movie trailer will give you a taster of the vibe. At the heart of things, however, Canon is tapping into a base desire in humanity: to feel connected to one another. The company is desperate to show off how its solutions can “remove the limits humanity faces to create more meaningful communication,” through four technologies on display at the trade show this year.
Canon U.S.A. CEO Kazuto Ogawa on stage at CES 2023 along with M. Night Shyamalan. Image Credits: Haje Kamps/TechCrunch
3D calling: Kokomo
The flagship solution Canon is showing off is Kokomo, which the company describes as a first-of-its-kind immersive VR software package. It is designed to combine VR with an immersive calling experience. The solution is pretty elegant: Using a VR headset and a smartphone, the Kokomo software enables users to see and hear one another in real time with their live appearance and expression in a photo-real environment.
The Kokomo solution brings 3D video calling to a home near you. Image Credits: Canon
In effect, the software package scans your face to learn what you look like, then turns you into a photo-realistic avatar. The person you are in a call with can see you — sans VR headset — showing your physical appearance and facial expressions. The effect is to experience a 3D video call. At the show, Canon is demoing the tech by letting visitors step into a one-on-one conversation with the “Knock at the Cabin” characters.
We spoke with the team behind Kokomo to figure out how the project came about, why Canon is dipping its toe in standalone software, what the future of this technology is, and how it is going to make money.


Realtime 3D video: Free Viewpoint
Aimed at the sports market, Free Viewpoint is a solution that combines more than 100 high-end cameras with a cloud-based solution that makes it possible to move a virtual camera to any location. The software takes all the video feeds, creating a point-cloud-based 3D model that enables a virtual camera operator to create a number of angles that would otherwise have been impossible: Drone-like replay footage, swooping into the action, for example, or detailed in-the-thick-of-things-type footage, enabling viewers to see plays from the virtual perspective of one of the players.
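Canon hasn’t published how Free Viewpoint works internally, but the “virtual camera” idea it relies on is standard computer graphics: once the scene exists as a 3D point cloud, any viewpoint can be rendered by projecting the points through a pinhole camera placed wherever the operator likes. A minimal, axis-aligned sketch (all names and numbers here are invented for illustration; a real system also applies camera rotation, lens models and surface reconstruction):

```python
# Toy pinhole projection: render a 3D point cloud from a freely placed
# virtual camera. The camera looks down the +z axis for simplicity.

def project(point, cam_pos, focal=1.0):
    # Translate the point into camera space.
    x, y, z = (p - c for p, c in zip(point, cam_pos))
    if z <= 0:
        return None  # behind the virtual camera: not visible
    # Perspective divide: farther points land closer to the image center.
    return (focal * x / z, focal * y / z)

cloud = [(0.0, 0.0, 5.0), (1.0, 1.0, 5.0), (0.0, 0.0, -1.0)]
virtual_camera = (0.0, 0.0, 0.0)

image = [project(p, virtual_camera) for p in cloud]
# Moving virtual_camera and re-projecting is what lets an operator "fly"
# through the scene, drone-style, without a physical camera being there.
```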
In the U.S., the system has already been installed at two NBA arenas (including at the home of the Cavaliers and the Nets). The video can be broadcast live or compiled into replay clips. Canon also points out that the system enables “virtual advertising and other opportunities for monetization,” so I suppose we have that to look forward to as well.


Returning to the “Knock at the Cabin” theme, at CES, Canon showed off a virtual action scene captured with the Free Viewpoint video system, captured at Canon’s Volumetric Video Studio in Kawasaki, Japan. The effect of watching an action scene “through the eyes” of various characters was a wonderfully immersive experience.
Augmented reality tech: MREAL
Canon also showed off some earlier-stage tech that isn’t quite ready for prime time yet, including MREAL. This is tech that helps integrate simulation-like immersive worlds, merging the real and the virtual. Use cases might include pre-visualization for movies, training scenarios and interactive mixed-reality entertainment. The company tells TechCrunch that the technology is in the market research phase.
The company is trying to figure out what to develop further and how to market the product. In other words: Who would use this, what would they use it for and what would they be willing to pay for it?


Remote presence: AMLOS
Activate My Line of Sight (AMLOS) is what Canon is calling its solution for hybrid meeting environments, where some participants are in person, while others are off-site. If you’ve ever been in a meeting in that configuration, you’ll often find that attending remotely is a deeply frustrating experience, as the in-person meeting participants are engaging with each other while the remote attendees are off on a screen somewhere.
Canon hopes that AMLOS can help solve that; it’s a software-and-camera set of products aiming to improve the level of engagement. It adds panning, tilting and zooming capabilities to remote camera systems, giving remote users the ability to customize their viewing and participation experience. So far, the solution is not quite intuitive enough to overcome the barrier of not being in the room, but it’s certainly better than being a disembodied wall of heads on a screen.

Camera maker Canon leans into software at CES by Haje Jan Kamps originally published on TechCrunch

QuickVid uses AI to generate short-form videos, complete with voiceovers

Generative AI is coming for videos. A new website, QuickVid, combines several generative AI systems into a single tool for automatically creating short-form YouTube, Instagram, TikTok and Snapchat videos.
Given as little as a single word, QuickVid chooses a background video from a library, writes a script and keywords, overlays images generated by DALL-E 2 and adds a synthetic voiceover and background music from YouTube’s royalty-free music library. QuickVid’s creator, Daniel Habib, says that he’s building the service to help creators meet the “ever-growing” demand from their fans.
“By providing creators with tools to quickly and easily produce quality content, QuickVid helps creators increase their content output, reducing the risk of burnout,” Habib told TechCrunch in an email interview. “Our goal is to empower your favorite creator to keep up with the demands of their audience by leveraging advancements in AI.”
But depending on how they’re used, tools like QuickVid threaten to flood already-crowded channels with spammy and duplicative content. They also face potential backlash from creators who opt not to use the tools, whether because of cost ($10 per month) or on principle, yet might have to compete with a raft of new AI-generated videos.
Going after video
QuickVid launched on December 27; Habib, a self-taught developer who previously worked at Meta on Facebook Live and video infrastructure, built it in a matter of weeks. It’s relatively bare bones at present — Habib says that more personalization options will arrive in January — but QuickVid can cobble together the components that make up a typical informational YouTube Short or TikTok video, including captions and even avatars.
It’s easy to use. First, a user enters a prompt describing the subject matter of the video they want to create. QuickVid uses the prompt to generate a script, leveraging the generative text powers of GPT-3. From keywords either extracted from the script automatically or entered manually, QuickVid selects a background video from the royalty-free stock media library Pexels and generates overlay images using DALL-E 2. It then outputs a voiceover via Google Cloud’s text-to-speech API — Habib says that users will soon be able to clone their voice — before combining all these elements into a video.
Image Credits: QuickVid
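Habib hasn’t published QuickVid’s code, but the pipeline described above — prompt, then GPT-3 script, then keywords, then stock background, overlays and voiceover — can be sketched with stubbed stages. Every function name and URL scheme below is invented for illustration; the real service calls the GPT-3, Pexels, DALL-E 2 and Google Cloud text-to-speech APIs:

```python
import re

# Stubbed QuickVid-style pipeline: each function stands in for an
# external API call so the end-to-end control flow is visible.

def generate_script(prompt: str) -> str:
    # Stand-in for a GPT-3 completion call.
    return f"Here are three surprising facts about {prompt}."

def extract_keywords(script: str, manual=None) -> list:
    # QuickVid takes keywords entered manually or extracted from the
    # script automatically; this stub just keeps longer words.
    if manual:
        return manual
    words = re.findall(r"[a-zA-Z]+", script.lower())
    return sorted({w for w in words if len(w) > 5})

def assemble_video(prompt: str) -> dict:
    script = generate_script(prompt)
    keywords = extract_keywords(script)
    return {
        "script": script,
        "keywords": keywords,
        # Background clip chosen from a stock library by keyword.
        "background": f"pexels://search?q={keywords[0]}" if keywords else None,
        # Synthetic narration of the script (text-to-speech stage).
        "voiceover": "tts://speak",
    }

video = assemble_video("cats")
print(video["keywords"])  # prints ['surprising']
```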
QuickVid certainly isn’t pushing the boundaries of what’s possible with generative AI. Both Meta and Google have showcased AI systems that can generate completely original clips given a text prompt. But QuickVid amalgamates existing AI to exploit the repetitive, templated format of B-roll-heavy short-form videos, getting around the problem of having to generate the footage itself.
“Successful creators have an extremely high-quality bar and aren’t interested in putting out content that they don’t feel is in their own voice,” Habib said. “This is the use case we’re focused on.”
That supposedly being the case, in terms of quality, QuickVid’s videos are generally a mixed bag. The background videos tend to be a bit random or only tangentially related to the topic, which isn’t surprising given that QuickVid is currently limited to the Pexels catalog. The DALL-E 2-generated images, meanwhile, exhibit the limitations of today’s text-to-image tech, like garbled text and off proportions.
In response to my feedback, Habib said that QuickVid is “being tested and tinkered with daily.”
Copyright issues
According to Habib, QuickVid users retain the right to use the content they create commercially and have permission to monetize it on platforms like YouTube. But the copyright status of AI-generated content is … nebulous, at least presently. The U.S. Copyright Office recently moved to revoke copyright protection for an AI-generated comic, for example, saying copyrightable works require human authorship.
When asked how the decision might affect QuickVid, Habib said he believes that it only pertains to the “patentability” of AI-generated products and not the rights of creators to use and monetize their content. Creators, he pointed out, aren’t often submitting patents for videos and usually lean into the creator economy, letting other creators repurpose their clips to increase their own reach.
“Creators care about putting out high-quality content in their voice that will help grow their channel,” Habib said.
Another legal challenge on the horizon might affect QuickVid’s DALL-E 2 integration — and, by extension, the site’s ability to generate image overlays. Microsoft, GitHub and OpenAI are being sued in a class action lawsuit that accuses them of violating copyright law by allowing Copilot, a code-generating system, to regurgitate sections of licensed code without providing credit. (Copilot was co-developed by OpenAI and GitHub, which Microsoft owns.) The case has implications for generative art AI like DALL-E 2, which similarly has been found to copy and paste images from the dataset on which it was trained.
Habib isn’t concerned, arguing that the generative AI genie’s out of the bottle. “If another lawsuit showed up and OpenAI disappeared tomorrow, there are several alternatives that could power QuickVid,” he said, referring to the open source DALL-E 2-like system Stable Diffusion. QuickVid is already testing Stable Diffusion for generating avatar pics.
Moderation and spam
Aside from the legal dilemmas, QuickVid might soon have a moderation problem on its hands. Generative AI has well-known toxicity and factual accuracy problems, despite the filters and techniques OpenAI has implemented to mitigate them. GPT-3 spouts misinformation, particularly about recent events, which are beyond the boundaries of its knowledge base. And ChatGPT, a fine-tuned offspring of GPT-3, has been shown to use sexist and racist language.
That’s worrisome, particularly for people who’d use QuickVid to create informational videos. In a quick test, I had my partner — who’s far more creative than me, particularly in this area — enter a few offensive prompts to see what QuickVid would generate. To QuickVid’s credit, obviously problematic prompts like “Jewish new world order” and “9/11 conspiracy theory” didn’t yield toxic scripts. But for “Critical race theory indoctrinating students,” QuickVid generated a video implying that critical race theory could be used to brainwash schoolchildren.

Habib says that he’s relying on OpenAI’s filters to do most of the moderation work and asserts that it’s incumbent on users to manually review every video created by QuickVid to ensure “everything is within the boundaries of the law.”
“As a general rule, I believe people should be able to express themselves and create whatever content they want,” Habib said.
That apparently includes spammy content. Habib makes the case that the video platforms’ algorithms, not QuickVid, are best positioned to determine the quality of a video, and that people who produce low-quality content “are only damaging their own reputations.” The reputational damage will naturally disincentivize people from creating mass spam campaigns with QuickVid, he says.
“If people don’t want to watch your video, then you won’t receive distribution on platforms like YouTube,” he added. “Producing low-quality content will also make people look at your channel in a negative light.”
But it’s instructive to look at ad agencies like Fractl, which in 2019 used an AI system called Grover to generate an entire site of marketing materials — reputation be damned. In an interview with The Verge, Fractl partner Kristin Tynski said that she foresaw generative AI enabling “a massive tsunami of computer-generated content across every niche imaginable.”
In any case, video-sharing platforms like TikTok and YouTube haven’t had to contend with moderating AI-generated content on a massive scale. Deepfakes — synthetic videos that replace an existing person with someone else’s likeness — began to populate platforms like YouTube several years ago, driven by tools that made deepfaked footage easier to produce. But unlike even the most convincing deepfakes today, the types of videos QuickVid creates aren’t obviously AI-generated in any way.
Google Search’s policy on AI-generated text might be a preview of what’s to come in the video domain. Google doesn’t treat synthetic text differently from human-written text where it concerns search rankings but takes actions on content that’s “intended to manipulate search rankings and not help users.” That includes content stitched together or combined from different web pages that “[doesn’t] add sufficient value” as well as content generated through purely automated processes, both of which might apply to QuickVid.
In other words, AI-generated videos might not be banned from platforms outright should they take off in a major way but rather simply become the cost of doing business. That isn’t likely to allay the fears of experts who believe that platforms like TikTok are becoming a new home for misleading videos, but — as Habib said during the interview — “there is no stopping the generative AI revolution.”
QuickVid uses AI to generate short-form videos, complete with voiceovers by Kyle Wiggers originally published on TechCrunch

Twelve Labs lands $12M for AI that understands the context of videos

To Jae Lee, a data scientist by training, it never made sense that video — which has become an enormous part of our lives, what with the rise of platforms like TikTok, Vimeo and YouTube — was difficult to search across due to the technical barriers posed by context understanding. Searching the titles, descriptions and tags of videos was always easy enough, requiring no more than a basic algorithm. But searching within videos for specific moments and scenes was long beyond the capabilities of tech, particularly if those moments and scenes weren’t labeled in an obvious way.
To solve this problem, Lee, alongside friends from the tech industry, built a cloud service for video search and understanding. It became Twelve Labs, which went on to raise $17 million in venture capital — $12 million of which came from a seed extension round that closed today. Radical Ventures led the extension with participation from Index Ventures, WndrCo, Spring Ventures, Weights & Biases CEO Lukas Biewald and others, Lee told TechCrunch in an email.
“The vision of Twelve Labs is to help developers build programs that can see, listen, and understand the world as we do by giving them the most powerful video understanding infrastructure,” Lee said.
A demo of the Twelve Labs platform’s capabilities. Image Credits: Twelve Labs
Twelve Labs, which is currently in closed beta, uses AI to attempt to extract “rich information” from videos such as movement and actions, objects and people, sound, text on screen, and speech to identify the relationships between them. The platform converts these various elements into mathematical representations called “vectors” and forms “temporal connections” between frames, enabling applications like video scene search.
“As a part of achieving the company’s vision to help developers create intelligent video applications, the Twelve Labs team is building ‘foundation models’ for multimodal video understanding,” Lee said. “Developers will be able to access these models through a suite of APIs, performing not only semantic search but also other tasks such as long-form video ‘chapterization,’ summary generation and video question and answering.”
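Twelve Labs hasn’t published its model internals, but the scene search Lee describes reduces to nearest-neighbor lookup over per-segment embedding vectors. A toy sketch with hand-made three-dimensional vectors (a real system would get embeddings from a multimodal model and use an approximate-nearest-neighbor index rather than a full scan):

```python
import math

# Toy scene search: each indexed video segment carries an embedding
# vector fusing visual, audio and on-screen-text cues; a query vector
# retrieves the closest segments by cosine similarity.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

segments = {
    "00:00-00:10": [0.9, 0.1, 0.0],  # e.g. "person speaking to camera"
    "00:10-00:20": [0.1, 0.8, 0.3],  # e.g. "dog running on a beach"
    "00:20-00:30": [0.0, 0.2, 0.9],  # e.g. "title card with text"
}

def search(query_vec, k=1):
    # Rank segments by similarity to the query embedding.
    ranked = sorted(segments, key=lambda s: cosine(query_vec, segments[s]),
                    reverse=True)
    return ranked[:k]

# A query embedding landing near the "dog running" region of the space:
print(search([0.2, 0.9, 0.2]))  # prints ['00:10-00:20']
```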
Google takes a similar approach to video understanding with its MUM AI system, which the company uses to power video recommendations across Google Search and YouTube by picking out subjects in videos (e.g., “acrylic painting materials”) based on the audio, text and visual content. But while the tech might be comparable, Twelve Labs is one of the first vendors to market with it; Google has opted to keep MUM internal, declining to make it available through a public-facing API.
That being said, Google, as well as Microsoft and Amazon, offer services (i.e., Google Cloud Video AI, Azure Video Indexer and AWS Rekognition) that recognize objects, places and actions in videos and extract rich metadata at the frame level. There’s also Reminiz, a French computer vision startup that claims to be able to index any type of video and add tags to both recorded and live-streamed content. But Lee asserts that Twelve Labs is sufficiently differentiated — in part because its platform allows customers to fine-tune the AI to specific categories of video content.
Mockup of API for fine-tuning the model to work better with salad-related content. Image Credits: Twelve Labs
“What we’ve found is that narrow AI products built to detect specific problems show high accuracy in their ideal scenarios in a controlled setting, but don’t scale so well to messy real-world data,” Lee said. “They act more as a rule-based system, and therefore lack the ability to generalize when variances occur. We also see this as a limitation rooted in lack of context understanding. Understanding of context is what gives humans the unique ability to make generalizations across seemingly different situations in the real world, and this is where Twelve Labs stands alone.”
Beyond search, Lee says Twelve Labs’ technology can drive things like ad insertion and content moderation, intelligently figuring out, for example, which videos showing knives are violent versus instructional. It can also be used for media analytics and real-time feedback, he says, and to automatically generate highlight reels from videos.
A little over a year after its founding (March 2021), Twelve Labs has paying customers — Lee wouldn’t reveal how many exactly — and a multiyear contract with Oracle to train AI models using Oracle’s cloud infrastructure. Looking ahead, the startup plans to invest in building out its tech and expanding its team. (Lee declined to reveal the current size of Twelve Labs’ workforce, but LinkedIn data shows it’s roughly 18 people.)
“For most companies, despite the huge value that can be attained through large models, it really does not make sense for them to train, operate and maintain these models themselves. By leveraging a Twelve Labs platform, any organization can leverage powerful video understanding capabilities with just a few intuitive API calls,” Lee said. “The future direction of AI innovation is heading straight towards multimodal video understanding, and Twelve Labs is well positioned to push the boundaries even further in 2023.”
Twelve Labs lands $12M for AI that understands the context of videos by Kyle Wiggers originally published on TechCrunch

Meet Unstable Diffusion, the group trying to monetize AI porn generators

When Stable Diffusion, the text-to-image AI developed by startup Stability AI, was open sourced earlier this year, it didn’t take long for the internet to wield it for porn-creating purposes. Communities across Reddit and 4chan tapped the AI system to generate realistic and anime-style images of nude characters, mostly women, as well as non-consensual fake nude imagery of celebrities.
But while Reddit quickly shut down many of the subreddits dedicated to AI porn, and communities like NewGrounds, which allows some forms of adult art, banned AI-generated artwork altogether, new forums emerged to fill the gap.
By far the largest is Unstable Diffusion, whose operators are building a business around AI systems tailored to generate high-quality porn. The server’s Patreon — started to keep the server running as well as fund general development — is currently raking in over $2,500 a month from several hundred donors.
“In just two months, our team expanded to over 13 people as well as many consultants and volunteer community moderators,” Arman Chaudhry, one of the members of the Unstable Diffusion admin team, told TechCrunch in a conversation via Discord. “We see the opportunity to make innovations in usability, user experience and expressive power to create tools that professional artists and businesses can benefit from.”
Unsurprisingly, some AI ethicists are as worried as Chaudhry is optimistic. While the use of AI to create porn isn’t new — TechCrunch covered an AI-porn-generating app just a few months ago — Unstable Diffusion’s models are capable of generating higher-fidelity examples than most. The generated porn could have negative consequences particularly for marginalized groups, the ethicists say, including the artists and adult actors who make a living creating porn to fulfill customers’ fantasies.
A censored image from Unstable Diffusion’s Discord server. Image Credits: Unstable Diffusion
“The risks include placing even more unreasonable expectations on women’s bodies and sexual behavior, violating women’s privacy and copyrights by feeding sexual content they created to train the algorithm without consent and putting women in the porn industry out of a job,” Ravit Dotan, VP of responsible AI at Mission Control, told TechCrunch. “One aspect that I’m particularly worried about is the disparate impact AI-generated porn has on women. For example, a previous AI-based app that can ‘undress’ people works only on women.”
Humble beginnings
Unstable Diffusion got its start in August — around the same time that the Stable Diffusion model was released. Initially a subreddit, it eventually migrated to Discord, where it now has roughly 50,000 members.
“Basically, we’re here to provide support for people interested in making NSFW,” one of the Discord server admins, who goes by the name AshleyEvelyn, wrote in an announcement post from August. “Because the only community currently working on this is 4chan, we hope to provide a more reasonable community which can actually work with the wider AI community.”
Early on, Unstable Diffusion served as a place simply for sharing AI-generated porn — and methods to bypass the content filters of various image-generating apps. Soon, though, several of the server’s admins began exploring ways to build their own AI systems for porn generation on top of existing open source tools.
Stable Diffusion lent itself to their efforts. The model wasn’t built to generate porn per se, but Stability AI doesn’t explicitly prohibit developers from customizing Stable Diffusion to create porn so long as the porn doesn’t violate laws or clearly harm others. Even then, the company has adopted a laissez-faire approach to governance, placing the onus on the AI community to use Stable Diffusion responsibly.
Stability AI didn’t respond to a request for comment.
The Unstable Diffusion admins released a Discord bot to start. Powered by the vanilla Stable Diffusion, it let users generate porn by typing text prompts. But the results weren’t perfect: the nude figures the bot generated often had misplaced limbs and distorted genitalia.
Image Credits: Unstable Diffusion
The reason why was that the out-of-the-box Stable Diffusion hadn’t been exposed to enough examples of porn to “know” how to produce the desired results. Stable Diffusion, like all text-to-image AI systems, was trained on a dataset of billions of captioned images to learn the associations between written concepts and images, like how the word “bird” can refer not only to bluebirds but parakeets and bald eagles in addition to more abstract notions. While many of the images come from copyrighted sources, like Flickr and ArtStation, companies such as Stability AI argue their systems are covered by fair use — a precedent that’s soon to be tested in court.
Only a small percentage of Stable Diffusion’s dataset — about 2.9% — contains NSFW material, giving the model little to go on when it comes to explicit content. So the Unstable Diffusion admins recruited volunteers — mostly members of the Discord server — to create porn datasets for fine-tuning Stable Diffusion, the way you would give it more pictures of couches and chairs if you wanted to make a furniture generation AI.
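The fine-tuning step boils down to assembling a captioned, on-domain dataset and continuing training on it. A sketch of just the dataset-assembly half, using the furniture analogy from above (the records, field names and threshold are invented for illustration; LAION-style datasets attach comparable per-image metadata):

```python
# Assemble a domain-specific fine-tuning set from a larger captioned
# corpus. Real pipelines score images with a classifier; here the score
# is stored on each record directly.

dataset = [
    {"caption": "a velvet armchair in a studio", "domain_score": 0.95},
    {"caption": "city skyline at dusk", "domain_score": 0.05},
    {"caption": "mid-century wooden chair", "domain_score": 0.88},
]

def build_finetune_set(records, threshold=0.5):
    # Keep only captioned, on-domain examples; during fine-tuning the
    # model then sees far denser coverage of the target domain than the
    # ~2.9% the base training data offered.
    return [r for r in records if r["domain_score"] >= threshold and r["caption"]]

subset = build_finetune_set(dataset)
print(len(subset))  # prints 2: the two furniture images
```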
Much of the work is ongoing, but Chaudhry tells me that some of it has already come to fruition, including a technique to “repair” distorted faces and arms in AI-generated nudes. “We are recording and addressing challenges that all AI systems run into, namely collecting a diverse dataset that is high in image quality, captioned richly with text, covering the gamut of preferences of our users,” he added.
The custom models power the aforementioned Discord bot and Unstable Diffusion’s work-in-progress, not-yet-public web app, which the admins say will eventually allow people to follow AI-generated porn from specific users.
Growing community
Today, the Unstable Diffusion server hosts AI-generated porn in a range of different art styles, sexual preferences and kinks. There’s a “men-only” channel, a softcore and “safe for work” stream, channels for hentai and furry artwork, a BDSM and “kinky things” subgroup — and even a channel reserved expressly for “nonhuman” nudes. Users in these channels can invoke the bot to generate art that fits the theme, which they can then submit to a “starboard” if they’re especially pleased with the results.
Unstable Diffusion claims to have generated over 4,375,000 images to date. On a semiregular basis, the group hosts competitions that challenge members to recreate images using the bot, the results of which are used in turn to improve Unstable Diffusion’s models.
Image Credits: Unstable Diffusion
As it grows, Unstable Diffusion aspires to be an “ethical” community for AI-generated porn — i.e. one that prohibits content like child pornography, deepfakes and excessive gore. Users of the Discord server must abide by the terms of service and submit to moderation of the images that they generate; Chaudhry claims the server employs a filter to block images containing people in its “named persons” database and has a full-time moderation team.
“We strictly allow only fictional and law-abiding generations, for both SFW and NSFW on our Discord server,” he said. “For professional tools and business applications, we will revisit and work with partners on the moderation and filtration rules that best align with their needs and commitments.”
But one imagines Unstable Diffusion’s systems will become tougher to monitor as they’re made more widely available. Chaudhry didn’t lay out plans for moderating content from the web app or Unstable Diffusion’s forthcoming subscription-based Discord bot, which third-party Discord server owners will be able to deploy within their own communities.
“We need to … think about how safety controls might be subverted when you have an API-mediated version of the system that carries controls preventing misuse,” Abhishek Gupta, the founder and principal researcher at the Montreal AI Ethics Institute, told TechCrunch via email. “Servers like Unstable Diffusion become hotbeds for accumulating a lot of problematic content in a single place, showing both the capabilities of AI systems to generate this type of content and connecting malicious users with each other to further their ‘skills’ in the generation of such content … At the same time, they also exacerbate the burden placed on content moderation teams, who have to face trauma as they review and remove offensive content.”
A separate but related issue pertains to the artists whose artwork was used to train Unstable Diffusion’s models. As evidenced recently by the artist community’s reaction to DeviantArt’s AI image generator, DreamUp, which was trained on art uploaded to DeviantArt without creators’ knowledge, many artists take issue with AI systems that mimic their styles without giving proper credit or compensation.
Character designers like Hollie Mengert and Greg Rutkowski, whose classical painting styles and fantasy landscapes have become some of the most commonly used prompts in Stable Diffusion, have decried what they see as poor AI imitations that are nevertheless tied to their names. They’ve also expressed concerns that AI-generated art imitating their styles will crowd out their original works, harming their income as people start using AI-generated images for commercial purposes. (Unstable Diffusion grants users full ownership of — and permission to sell — the images they generate.)
Gupta raises another possibility: artists who’d never want their work associated with porn might become collateral damage as users realize certain artists’ names yield better results in Unstable Diffusion prompts — e.g., “nude women in the style of [artist name]”.
Image Credits: Unstable Diffusion
Chaudhry says that Unstable Diffusion is looking at ways to make its models “be more equitable toward the artistic community” and “give back [to] and empower artists.” But he didn’t outline specific steps, like licensing artwork or allowing artists to preclude their work from training datasets.
Artist impact
Of course, there’s a fertile market for adult artists who draw, paint and photograph suggestive works for a living. But if anyone can generate exactly the images they want to see with an AI, what will happen to human artists?
It’s not an imminent threat, necessarily. As adult art communities grapple with the implications of text-to-image generators, simply finding a platform to publish AI-generated porn beyond the Unstable Diffusion Discord might prove to be a challenge. The furry art community FurAffinity decided to ban AI-generated art altogether, as did Newgrounds, which hosts mature art behind a content filter.
When reached for comment, one of the larger adult content hosts, OnlyFans, left open the possibility that AI art might be allowed on its platform in some form. While it has a strict policy against deepfakes, OnlyFans says that it permits content — including AI-generated content, presumably — as long as the person featured in the content is a verified OnlyFans creator.
Of course, the hosting question might be moot if the quality isn’t up to snuff.
“AI generated art to me, right now, is not very good,” said Milo Wissig, a trans painter who has experimented with how AIs depict erotic art of non-binary and trans people. “For the most part, it seems like it works best as a tool for an artist to work off of… but a lot of people can’t tell the difference and want something fast and cheap.”
For artists working in kink, it’s especially easy to see where AI falls flat. In the case of bondage, in which tying ropes and knots is a form of art (and safety mechanism) in itself, it’s hard for the AI to replicate something so intricate.
“For kinks, it would be difficult to get an AI to make a specific kind of image that people would want,” Wissig told TechCrunch. “I’m sure it’s very difficult to get the AI to make the ropes make any sense at all.”
The source material behind these AIs can also amplify biases that already exist in traditional erotica – in other words, straight sex between white people is the norm.
“You get images that are pulled from mainstream porn,” said Wissig. “You get the whitest, most hetero stuff that the machine can think up, unless you specify not to do that.”
Image Credits: Milo Wissig
These racial biases have been extensively documented across applications of machine learning, from facial recognition to photo editing.
When it comes to porn, the consequences may not be as stark – yet there is still a special horror to watching as an AI twists and augments ordinary people until they become racialized, gendered caricatures. Even AI models like DALL-E 2, which went viral when its mini version was released to the public, have been criticized for disproportionately generating art in European styles.
Last year, Wissig tried using VQGAN to generate images of “sexy queer trans people,” he wrote in an Instagram post. “I had to phrase my terms carefully just to get faces on some of them,” he added.
In the Unstable Diffusion Discord, there is little evidence that the AI can adequately represent genderqueer and transgender people. In a channel called “genderqueer-only,” nearly all of the generated images depict traditionally feminine women with penises.
Branching out
Unstable Diffusion isn’t strictly focusing on in-house projects. Technically a part of Equilibrium AI, a company founded by Chaudhry, the group is funding other efforts to create porn-generating AI systems including Waifu Diffusion, a model fine-tuned on anime images.
Chaudhry sees Unstable Diffusion evolving into an organization to support broader AI-powered content generation, sponsoring dev groups and providing tools and resources to help teams build their own systems. He claims that Equilibrium AI secured a spot in a startup accelerator program from an unnamed “large cloud compute provider” that comes with a “five-figure” grant in cloud hardware and compute, which Unstable Diffusion will use to expand its model training infrastructure.
In addition to the grant, Unstable Diffusion will launch a Kickstarter campaign and seek venture funding, Chaudhry says. “We plan to create our own models and fine-tune and combine them for specialized use cases which we shall spin off into new brands and products,” he added.
The group has its work cut out for it. Of all the challenges Unstable Diffusion faces, moderation is perhaps the most immediate — and consequential. Recent history is filled with examples of spectacular failures at adult content moderation. In 2020, MindGeek, Pornhub’s parent company, lost the support of major payment processors after the site was found to be circulating child porn and sex-trafficking videos.
Will Unstable Diffusion suffer the same fate? It’s not yet clear. But with at least one senator calling on companies to implement stricter content filtering in their AI systems, the group doesn’t appear to be on the steadiest ground.
Meet Unstable Diffusion, the group trying to monetize AI porn generators by Kyle Wiggers originally published on TechCrunch

Deep Render believes AI holds the key to more efficient video compression

Chri Besenbruch, CEO of Deep Render, sees many problems with the way video compression standards are developed today. He thinks they aren’t advancing quickly enough, bemoans the fact that they’re plagued with legal uncertainty and decries their reliance on specialized hardware for acceleration.
“The codec development process is broken,” Besenbruch said in an interview with TechCrunch ahead of Disrupt, where Deep Render is participating in the Disrupt Battlefield 200. “In the compression industry, there is a significant challenge of finding a new way forward and searching for new innovations.”
Seeking a better way, Besenbruch co-founded Deep Render with Arsalan Zafar, whom he met at Imperial College London. At the time, Besenbruch was studying computer science and machine learning. He and Zafar collaborated on a research project involving distributing terabytes of video across a network, during which they say they experienced the shortcomings of compression technology firsthand.
The last time TechCrunch covered Deep Render, the startup had just closed a £1.6 million seed round ($1.81 million) led by Pentech Ventures with participation from Speedinvest. In the roughly two years since then, Deep Render has raised an additional several million dollars from existing investors, bringing its total raised to $5.7 million.
“We thought to ourselves, if the internet pipes are difficult to extend, the only thing we can do is make the data that flows through the pipes smaller,” Besenbruch said. “Hence, we decided to fuse machine learning and AI with compression technology to develop a fundamentally new way of compressing data, getting significantly better image and video compression ratios.”
Deep Render isn’t the first to apply AI to video compression. Alphabet’s DeepMind adapted a machine learning algorithm originally developed to play board games to the problem of compressing YouTube videos, leading to a 4% reduction in the amount of data the video-sharing service needs to stream to users. Elsewhere, there’s startup WaveOne, which claims its machine learning-based video codec outperforms all existing standards across popular quality metrics.
But Deep Render’s solution is platform-agnostic. To create it, Besenbruch says that the company compiled a dataset of over 10 million video sequences on which they trained algorithms to learn to compress video data efficiently. Deep Render used a combination of on-premise and cloud hardware for the training, with the former comprising over a hundred GPUs.
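Learned codecs like the one described above are typically trained to minimize a rate-distortion objective, L = R + λD, trading off the bits spent (rate) against reconstruction error (distortion). The toy sketch below illustrates that trade-off with simple uniform quantization standing in for a learned encoder; it is an illustrative assumption, not Deep Render’s actual method, and the function names are hypothetical.

```python
import numpy as np

def estimate_rate(symbols):
    """Estimate bits per symbol via the empirical entropy of the
    quantized values (a stand-in for a learned entropy model)."""
    _, counts = np.unique(symbols, return_counts=True)
    probs = counts / counts.sum()
    return float(-(probs * np.log2(probs)).sum())

def rate_distortion_loss(frame, step, lam=0.01):
    """Toy 'codec': quantize pixel values with a given step size, then
    score the result with the objective L = R + lambda * D that
    learned codecs minimize during training."""
    symbols = np.round(frame / step).astype(int)
    reconstruction = symbols * step
    distortion = float(np.mean((frame - reconstruction) ** 2))  # D: MSE
    rate = estimate_rate(symbols)                               # R: bits/pixel
    return rate + lam * distortion, rate, distortion

rng = np.random.default_rng(0)
frame = rng.random((64, 64)) * 255  # stand-in for one video frame

# Coarser quantization lowers the rate but raises the distortion.
for step in (4, 16, 64):
    loss, rate, dist = rate_distortion_loss(frame, step)
    print(f"step={step:3d}  rate={rate:.2f} bits/px  mse={dist:.1f}")
```

In a real learned codec, the hand-picked quantizer here is replaced by a neural encoder/decoder pair, and training searches over millions of video sequences for parameters that push this same loss as low as possible.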
Deep Render claims the resulting compression standard is 5x better than HEVC, a widely used codec, and can run in real time on mobile devices with a dedicated AI accelerator chip (e.g., the Apple Neural Engine in modern iPhones). Besenbruch says the company is in talks with three large tech firms — all with market caps over $300 billion — about paid pilots, though he declined to share names.
Eddie Anderson, a founding partner at Pentech and board member at Deep Render, shared via email: “Deep Render’s machine learning approach to codecs completely disrupts an established market. Not only is it a software route to market, but their [compression] performance is significantly better than the current state of the art. As bandwidth demands continue to increase, their solution has the potential to drive vastly improved commercial performance for current media owners and distributors.”
Deep Render currently employs 20 people. By the end of 2023, Besenbruch expects that number will more than triple to 62.
Deep Render believes AI holds the key to more efficient video compression by Kyle Wiggers originally published on TechCrunch