Category Archives: Robotics & AI

Automatically added via WPeMatico

Regie secures $10M to generate marketing copy using AI

Regie.ai, a startup using OpenAI’s GPT-3 text-generating system to create sales and marketing content for brands, today announced that it raised $10 million in Series A funding led by Scale Venture Partners with participation from Foundation Capital, South Park Commons, Day One Ventures and prominent angel investors. The fresh investment comes as VCs see a growing opportunity in AI-powered, copy-generating adtech companies, whose tech promises to save time while potentially increasing personalization.
Regie was founded in 2020 by Matt Millen and Srinath Sridhar. Previously a software engineer at Google and Meta, Sridhar is a data scientist by trade, having developed enterprise-scale AI systems that detect duplicate images and rank search results. Millen was formerly a VP at T-Mobile, leading the national sales teams (e.g., strategic accounts and public sector).
With Regie, Sridhar says he and Millen aimed to create a way for companies to communicate with their customers via channels like email, social media, text, podcasts, online advertising and more. Because companies have so many platforms and mediums at their disposal to speak with customers, he notes, it can be a challenge for content marketers to continuously produce compelling content that reaches those customers.
“The way content is getting generated has fundamentally changed,” Sridhar told TechCrunch in an email interview. “Marketers and copywriters working in the enterprise … increasingly [need] to produce and manage content and content workflows at scale.”
Regie uses GPT-3 to power its service — the same GPT-3 that can generate poetry, prose and academic papers. But it’s a “flavor” of GPT-3 fine-tuned on a training data set of roughly 20,000 sales sequences (the series of steps to convert prospects into paying customers) and nearly 100 million sales emails. Also in the mix are custom language systems built by Regie to reflect brands and their messaging, designed to be integrated with existing sales platforms like Outreach, HubSpot and Salesloft.
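Regie hasn’t published its training pipeline, but at the time, fine-tuning GPT-3 generally meant uploading a JSONL file of prompt/completion pairs to OpenAI’s legacy fine-tuning endpoint. Here’s a minimal sketch of preparing such data — the sample prompts and emails are invented for illustration, not Regie’s actual data:

```python
import json

# Hypothetical prompt/completion pairs; a real fine-tune would use
# thousands of examples drawn from actual sales sequences and emails.
examples = [
    {
        "prompt": "Write a cold outreach email to a VP of Sales about an AI copywriting tool.\n\n###\n\n",
        "completion": " Hi Jordan, teams like yours spend hours drafting outreach emails...",
    },
    {
        "prompt": "Write a follow-up email after a product demo.\n\n###\n\n",
        "completion": " Hi Sam, thanks for taking the time to see the demo yesterday...",
    },
]

def to_jsonl(records):
    """Serialize records in the one-JSON-object-per-line (JSONL)
    format OpenAI's legacy fine-tuning endpoint expected."""
    return "\n".join(json.dumps(r) for r in records)

jsonl_blob = to_jsonl(examples)
print(jsonl_blob.count("\n") + 1)  # → 2 training examples
```

The resulting file would then be uploaded when creating the fine-tune job; the `###` separator is a common convention for marking where the prompt ends and the completion begins.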
Image Credits: Regie
Lest the systems spew problematic language, Regie says that every system goes through “human curation” and vetting before being released. The startup also claims to train the systems on “inclusive” language and test them for biases, like bias against certain demographic groups.
Customers can use Regie to generate original, optimized-for-search-engines content or create custom sales sequences. The platform also offers blog- and social-media-post-authoring tools for personalizing messages, as well as a Chrome extension that analyzes the “quality” of emails that customers send — and optionally rewrites the text.
“Generative AI is completely disrupting the way content is created today. The biggest competitors of Regie would be the large content authoring and management platforms that will be completely redesigned AI first going forward,” Sridhar said confidently. “For example, Adobe’s suite of products including Acrobat, Illustrator, Photoshop, now Figma as well as Adobe Experience Cloud will start to get outdated as Regie continues to build on an intelligent content creation and management platform for the enterprise.”
More immediately, Regie competes with vendors like Jasper, Phrasee, Copysmith and Copy.ai — all of which tap AI to generate bespoke marketing copy. But Sridhar argues that Regie is a more vertical platform that caters to go-to-market teams in the enterprise while combining text, images and workflows into a single pane of glass.
“Generative AI is such a paradigm shift that not only productivity and top-line of companies will go up as a result, but the bottom line will also go down simultaneously. There are very few products that can improve both sides of that financial equation,” Sridhar continued. “So if a company wants to reduce costs because they want to assimilate sales tools, or reduce outsourced writing while simultaneously increasing revenue, Regie can do that. If you are an outsourced marketing agency looking to retain more customers and efficiently generate content at scale, Regie can definitely do that for agencies as well.”
The company currently has more than 70 software-as-a-service customers on annual contracts, including AT&T, Sophos, Okta and Crunchbase. Sridhar didn’t reveal revenue but said that he expects the 25-person company to grow “meaningfully” this year.
“This is a revolutionary new field. And as always, adoption will require educating the users,” Sridhar said. “It is clear to us as practitioners that the world has changed. But it will take time for others to get their hands dirty and convince themselves that this is happening — and that it is a very positive development. So we have to be patient in educating the industry. We also have to show that content quality isn’t compromised and that it can perform better and be maintained more consistently with the strategic application of AI.”
To date, Regie has raised $14.8 million.
Regie secures $10M to generate marketing copy using AI by Kyle Wiggers originally published on TechCrunch

AI is taking over the iconic voice of Darth Vader, with the blessing of James Earl Jones

From the cringe-inducing Jar Jar Binks to unconvincing virtual Leia and Luke, Disney’s history with CG characters is, shall we say, mixed. But that’s not stopping them from replacing one of the most recognizable voices in cinema history, Darth Vader, with an AI-powered voice replica based on James Earl Jones.
The retirement of Jones, now 91, from the role is, of course, well-earned. But if Disney continues to have its way (and there is no force in the world that can stop it), Vader is far from done. It would be unthinkable to recast the character, but if Jones is done, what can they do?
The solution is Respeecher, a Ukrainian company that trains text-to-speech machine learning models with the (licensed and released) recordings of actors who, for whatever reason, will no longer play a part.
Vanity Fair just ran a great story on how the company managed to put together the Vader replacement voice for Disney’s “Obi-Wan Kenobi” — while the country was being invaded by Russia. Interesting enough on its own, but others noted that it also serves as confirmation that, from now on, the iconic voice of Vader will officially be rendered by AI.
This is far from the first case where a well-known actor has had their voice synthesized or altered in this way. Another notable recent example is “Top Gun: Maverick,” in which the voice of Val Kilmer (reprising his role as Iceman) was synthesized due to the actor’s medical condition.
That sounded good, but a handful of whispered lines aren’t quite the same as a 1:1 replacement for a voice even children have known (and feared) for decades. Can a small company working at the cutting edge of machine learning tech pull it off?
You can judge for yourself — here’s one compilation of clips — and to me it seems pretty solid. The main criticism of that show wasn’t Vader’s voice, that’s for sure. If you weren’t expecting anything, you would probably just assume it was Jones speaking the lines, not another actor’s voice being modified to fit the bill.
The giveaway is that it doesn’t actually sound like Jones does now — it sounds like he did in the ’70s and ’80s when the original trilogy came out. That’s what anyone seeing Obi-Wan and Vader fight will expect, probably, but it’s a bit strange to think about.
It opens up a whole new can of worms. Sure, an actor may license their voice work for a character, but what about when that character ages? What about a totally different character they voice that bears some similarity to the first? What recourse do they have if their voice synthesis files leak and people are using them willy-nilly?


It’s an interesting new field to work in, but it’s hardly without pitfalls and ethical conundrums. Disney has already broken the seal on many transformative technologies in filmmaking and television, and borne the deserved criticism when what it put out did not meet audiences’ expectations.
But they can take the hits and roll with them — maybe even take a page from George Lucas’s book and try to rewrite history, improving the rendering of Grand Moff Tarkin in a bid to make us forget how waxy he looked originally. As long as the technology is used to advance and complement the creativity of writers, directors and everyone else who makes movies magic, and not to save a buck or escape tricky rights situations, I can get behind it.
AI is taking over the iconic voice of Darth Vader, with the blessing of James Earl Jones by Devin Coldewey originally published on TechCrunch

OpenAI begins allowing users to edit faces with DALL-E 2

After initially disabling the capability, OpenAI today announced that customers with access to DALL-E 2 can upload people’s faces to edit them using the AI-powered image-generating system. Previously, OpenAI only allowed users to work with and share photorealistic faces and banned the uploading of any photo that might depict a real person, including photos of prominent celebrities and public figures.
OpenAI claims that improvements to its safety system made the face-editing feature possible by “minimizing the potential of harm” from deepfakes as well as attempts to create sexual, political and violent content. In an email to customers, the company wrote:
Many of you have told us that you miss using DALL-E to dream up outfits and hairstyles on yourselves and edit the backgrounds of family photos. A reconstructive surgeon told us that he’d been using DALL-E to help his patients visualize results. And filmmakers have told us that they want to be able to edit images of scenes with people to help speed up their creative processes … [We] built new detection and response techniques to stop misuse.
The change in policy isn’t opening the floodgates necessarily. OpenAI’s terms of service will continue to prohibit uploading pictures of people without their consent or images that users don’t have the rights to — although it’s not clear how consistent the company’s historically been about enforcing those policies.
In any case, it’ll be a true test of OpenAI’s filtering technology, which some customers in the past have complained about being overzealous and somewhat inaccurate. Deepfakes come in many flavors, from fake vacation photos to presidents of war-torn countries. Accounting for every emerging form of abuse will be a never-ending battle, in some cases with very high stakes.
No doubt, OpenAI — which has the backing of Microsoft and notable VC firms including Khosla Ventures — is eager to avoid the controversy associated with Stability AI’s Stable Diffusion, an image-generating system that’s available in an open source format without any restrictions. As TechCrunch recently reported, it didn’t take long before Stable Diffusion — which can also edit face images — was being used by some to create pornographic, nonconsensual deepfakes of celebrities like Emma Watson.
So far, OpenAI has positioned itself as a brand-friendly, buttoned-up alternative to the no-holds-barred Stability AI. And with the constraints around the new face editing feature for DALL-E 2, the company is maintaining the status quo.
DALL-E 2 remains in invite-only beta. In late August, OpenAI announced that over a million people are using the service.
OpenAI begins allowing users to edit faces with DALL-E 2 by Kyle Wiggers originally published on TechCrunch

AI is getting better at generating porn. We might not be prepared for the consequences.

A red-headed woman stands on the moon, her face obscured. Her naked body looks like it belongs on a poster you’d find on a hormonal teenager’s bedroom wall — that is, until you reach her torso, where three arms sprout from her shoulders.
AI-powered systems like Stable Diffusion, which translate text prompts into pictures, have been used by brands and artists to create concept images, award-winning (albeit controversial) prints and full-blown marketing campaigns.
But some users, intent on exploring the systems’ murkier side, have been testing them for a different sort of use case: porn.
AI porn is about as unsettling and imperfect as you’d expect (that redhead on the moon was likely not generated by someone with an extra-arm fetish). But as the tech continues to improve, it will raise challenging questions for AI ethicists and sex workers alike.
Pornography created using the latest image-generating systems first arrived on the scene via the discussion boards 4chan and Reddit earlier this month, after a member of 4chan leaked the open source Stable Diffusion system ahead of its official release. Then, last week, what appears to be one of the first websites dedicated to high-fidelity AI porn generation launched.
Called Porn Pen, the website allows users to customize the appearance of nude AI-generated models — all of which are women — using toggleable tags like “babe,” “lingerie model,” “chubby,” ethnicities (e.g. “Russian” and “Latina”) and backdrops (e.g. “bedroom,” “shower” and wildcards like “moon”). Buttons capture models from the front, back or side, and change the appearance of the generated photo (e.g. “film photo,” “mirror selfie”). There must be a bug with the mirror selfies, though, because in the feed of user-generated images, some mirrors don’t actually reflect a person — but of course, these models are not people at all. Porn Pen functions like “This Person Does Not Exist,” only it’s NSFW.
On Y Combinator’s Hacker News forum, a user purporting to be the creator describes Porn Pen as an “experiment” using cutting-edge text-to-image models. “I explicitly removed the ability to specify custom text to avoid harmful imagery from being generated,” they wrote. “New tags will be added once the prompt-engineering algorithm is fine-tuned further.” The creator did not respond to TechCrunch’s request for comment.
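The creator’s approach — replacing free-text prompts with a fixed set of toggleable tags — can be sketched as a whitelist-based prompt builder. This is a hypothetical illustration of the idea, not Porn Pen’s actual code; the tag vocabulary below just reuses the innocuous backdrop and style tags mentioned above:

```python
# Sketch of tag whitelisting: users pick from fixed tags instead of
# typing free text, so arbitrary (potentially harmful) prompts can't
# be constructed. All names here are hypothetical.

ALLOWED_TAGS = {
    "backdrop": {"bedroom", "shower", "moon"},
    "style": {"film photo", "mirror selfie"},
}

def build_prompt(selections):
    """Assemble a generation prompt only from whitelisted tag values;
    raise on anything outside the fixed vocabulary."""
    parts = []
    for category, value in selections.items():
        if value not in ALLOWED_TAGS.get(category, set()):
            raise ValueError(f"disallowed tag: {category}={value!r}")
        parts.append(value)
    return ", ".join(parts)

print(build_prompt({"backdrop": "moon", "style": "film photo"}))  # → moon, film photo
```

The trade-off is coarse control: the service can only generate what its curated vocabulary allows, which is presumably why the creator mentions adding new tags over time.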
But Porn Pen raises a host of ethical questions, like biases in image-generating systems and the sources of the data from which they arose. Beyond the technical implications, one wonders whether new tech to create customized porn — assuming it catches on — could hurt adult content creators who make a living doing the same.
“I think it’s somewhat inevitable that this would come to exist when [OpenAI’s] DALL-E did,” Os Keyes, a PhD candidate at Seattle University, told TechCrunch via email. “But it’s still depressing how both the options and defaults replicate a very heteronormative and male gaze.”
Ashley, a sex worker and peer organizer who works on cases involving content moderation, thinks that the content generated by Porn Pen isn’t a threat to sex workers in its current state.
“There is endless media out there,” said Ashley, who did not want her last name to be published for fear of being harassed for her job. “But people differentiate themselves not by just making the best media, but also by being an accessible, interesting person. It’s going to be a long time before AI can replace that.”
On existing monetizable porn sites like OnlyFans and ManyVids, adult creators must verify their age and identity so that the company knows they are consenting adults. AI-generated porn models can’t do this, of course, because they aren’t real.
Ashley worries, though, that if porn sites crack down on AI porn, it might lead to harsher restrictions for sex workers, who are already facing increased regulation from legislation like SESTA/FOSTA. Congress introduced the Safe Sex Workers Study Act in 2019 to examine the effects of this legislation, which makes online sex work more difficult. This study found that “community organizations [had] reported increased homelessness of sex workers” after losing the “economic stability provided by access to online platforms.”
“SESTA was sold as fighting child sex trafficking, but it created a new criminal law about prostitution that had nothing about age,” Ashley said.
Currently, few laws around the world pertain to deepfaked porn. In the U.S., only Virginia and California have regulations restricting certain uses of faked and deepfaked pornographic media.
Systems such as Stable Diffusion “learn” to generate images from text by example. Fed billions of pictures labeled with annotations that indicate their content — for example, a picture of a dog labeled “Dachshund, wide-angle lens” — the systems learn that specific words and phrases refer to specific art styles, aesthetics, locations and so on.
This works relatively well in practice. A prompt like “a bird painting in the style of Van Gogh” will predictably yield a Van Gogh-esque image depicting a bird. But it gets trickier when the prompts are vaguer, refer to stereotypes or deal with subject matter with which the systems aren’t familiar.
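In drastically simplified form, the word-to-style association described above can be illustrated by tallying which caption tokens co-occur with which style labels. Real systems learn continuous embeddings over billions of images rather than counts, and this toy corpus is invented:

```python
from collections import Counter, defaultdict

# Toy caption corpus: (caption, style) pairs standing in for the
# billions of labeled images real systems train on.
corpus = [
    ("a bird painting, swirling brushstrokes", "van gogh"),
    ("a field painting, swirling brushstrokes", "van gogh"),
    ("a bird photo, wide-angle lens", "photography"),
]

# Tally how often each caption token appears alongside each style —
# a crude stand-in for the statistical associations a generative
# model encodes in its learned weights.
assoc = defaultdict(Counter)
for caption, style in corpus:
    for token in caption.replace(",", "").split():
        assoc[token][style] += 1

# "swirling" only ever co-occurs with Van Gogh-labeled images, so the
# toy model links that token to the style.
print(assoc["swirling"].most_common(1)[0][0])  # → van gogh
```

A token the corpus has never seen, or one spread thinly across many labels, yields no reliable association — which is the counting analogue of the failure modes described next.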
For example, Porn Pen sometimes generates images without a person at all — presumably a failure of the system to understand the prompt. Other times, as alluded to earlier, it shows physically improbable models, typically with extra limbs, nipples in unusual places and contorted flesh.
“By definition [these systems are] going to represent those whose bodies are accepted and valued in mainstream society,” Keyes said, noting that Porn Pen only has categories for cisnormative people. “It’s not surprising to me that you’d end up with a disproportionately high number of women, for example.”
While Stable Diffusion, one of the systems likely underpinning Porn Pen, has relatively few “NSFW” images in its training dataset, early experiments from Redditors and 4chan users show that it’s quite competent at generating pornographic deepfakes of celebrities (Porn Pen — perhaps not coincidentally — has a “celebrity” option). And because it’s open source, there’d be nothing to prevent Porn Pen’s creator from fine-tuning the system on additional nude images.
“It’s definitely not great to generate [porn] of an existing person,” Ashley said. “It can be used to harass them.”
Deepfake porn is often created to threaten and harass people. These images are almost always developed without the subject’s consent out of malicious intent. In 2019, the research company Sensity AI found that 96% of deepfake videos online were non-consensual porn.
Mike Cook, an AI researcher who’s a part of the Knives and Paintbrushes collective, says that there’s a possibility the dataset includes people who’ve not consented to their image being used for training in this way, including sex workers.
“Many of [the people in the nudes in the training data] may derive their income from producing pornography or pornography-adjacent content,” Cook said. “Just like fine artists, musicians or journalists, the works these people have produced are being used to create systems that also undercut their ability to earn a living in the future.”
In theory, a porn actor could use copyright protections, defamation and potentially even human rights laws to fight the creator of a deepfaked image. But as a piece in MIT Technology Review notes, gathering evidence in support of the legal argument can prove to be a massive challenge.
When more primitive AI tools popularized deepfaked porn several years ago, a Wired investigation found that nonconsensual deepfake videos were racking up millions of views on mainstream porn sites like Pornhub. Other deepfaked works found a home on sites akin to Porn Pen — according to Sensity data, the top four deepfake porn websites received more than 134 million views in 2018.
“AI image synthesis is now a widespread and accessible technology, and I don’t think anyone is really prepared for the implications of this ubiquity,” Cook continued. “In my opinion, we have rushed very, very far into the unknown in the last few years with little regard for the impact of this technology.”
To Cook’s point, one of the most popular sites for AI-generated porn expanded late last year through partner agreements, referrals and an API, allowing the service — which hosts hundreds of nonconsensual deepfakes — to survive bans on its payments infrastructure. And in 2020, researchers discovered a Telegram bot that generated abusive deepfake images of more than 100,000 women, including underage girls.
“I think we’ll see a lot more people testing the limits of both the technology and society’s boundaries in the coming decade,” Cook said. “We must accept some responsibility for this and work to educate people about the ramifications of what they are doing.”

Daily Crunch: Snap lays off one-fifth of its workforce after missing revenue and growth targets

To get a roundup of TechCrunch’s biggest and most important stories delivered to your inbox every day at 3 p.m. PDT, subscribe here.
Midweek? More like mid-weak! Okay, terrible pun, but we’re a little low energy in this heat wave today, so it kinda made sense.
Oh! And good news, btw, we’re offering 15% off Disrupt tickets (excluding online or expo tickets) for you, our trusty Daily Crunch readers. Use promo code “DC” to claim your discount!
See you tomorrow!  — Christine and Haje
The TechCrunch Top 3
Slumdog $5-illonnaire: Landa is the latest startup to attract venture capital, in this case $33 million, to democratize real estate ownership, Mary Ann writes. Its approach enables people to invest in the real estate sector, which is known for providing generational wealth, but in a less expensive, more fractional way, and in some cases, for as little as $5 initially.
Snap, crackle and . . . fizzle: Despite the myriad of news and new revenue streams we’ve reported about Snap right here in this newsletter, Evan Spiegel said the words no tech employee wants to hear right now: “restructuring our business.” Amanda reports that this unfortunately means cutting 20% of staff.
Obstacles abroad: Amazon faces some tough competition in India, and Manish reports that has presented some challenges in the e-commerce giant’s ability to gain a more prominent foothold in the country.
Startups and VC
This week, Haje went deep with a founder who’s building digital license plates. He mused that building an easy-to-copy hardware product in an incredibly tightly regulated, winner-takes-all industry would be an utter nightmare — but when it works, it works, and it’s fascinating to see Reviver build a company, one license plate at a time.
Populus, the San Francisco–based transportation data startup, got its start as shared scooter mania took hold and cities tried to make sense of how infrastructure was being used by fleets of tiny vehicles. Now, Populus co-founder and CEO Regina Clewlow is repositioning the company to take advantage of another hot opportunity: curbs and congestion, Rebecca writes. It’s a really good read from the TechCrunch transportation desk with an undertone of “the power of great pivots.”
Raisin’ money, raisin’ hell:
Looking beyond the matrix: Ron reports on CodeSee’s latest product, which helps organizations visualize their code base.
Turning coaching into a team sport: Natasha M reports that the founder of Human Q disagrees with some of the biggest and most valuable competitors out there. Instead of one-to-one coaching, Human Q wants to make group coaching an impactful alternative.
Stretching the chains: Supply chain firm NFI inks a $10 million deal to deploy Boston Dynamics’ Stretch robots, reports Brian.
Fintech, that’s like fly-fishing, right?: Christine reports that Solid raised a $63 million Series B round of funding to continue providing its fintech-as-a-service offering for companies wanting to launch and scale their own fintech products. 
Like twitch3: Rita reports that Stacked raised $13 million to be the Twitch for web3 gamers.
 Crafting a XaaS customer success strategy that drives growth
Image Credits: THEPALMER / Getty Images
Giving users better service than they expected could literally save a software startup. In one study, companies that spent 10% of yearly revenue on customer success attained peak net recurring revenue.
“Companies mostly deploy two or more customer success archetypes,” according to TC+ contributors Rachel Parrinello and John Stamos. “They usually vary by customer segment, business versus technical focus and sales motion focus: adopt, renew, upsell and cross-sell.”
If you’re interested in optimizing revenue through CS, read the rest for a full overview of job design methodology, because “companies should not design their customer success roles in a vacuum.”


(TechCrunch+ is our membership program, which helps founders and startup teams get ahead. You can sign up here.)
Big Tech Inc.
Social media and privacy don’t often go hand in hand, especially when it comes to what children can already see on the internet. Twitter got caught up in this when it reportedly tried to monetize adult content in an effort to compete with OnlyFans. It later scrapped the program when it was found that its system couldn’t “detect child sexual abuse material and non-consensual nudity at scale,” Amanda writes. Meanwhile, California lawmakers wasted no time moving ahead to put in place statewide online privacy protections for children where there are none at the federal level, Taylor reports.
Stepping on the gas, er, EV pedal: Toyota is accelerating its investment in U.S. electric vehicles, and will park some $3.8 billion into that initiative, up from an initial $1.3 billion, Jaclyn writes.
Cashing in on NFTs: Event organizers working with Ticketmaster can now issue NFTs tied to tickets on Flow, Ivan reports.
It’s almost fall and that means another Apple event: Brian has the skinny on all the things you should know about Apple’s iPhone 14 event on September 7.
New satellite on the block: Royal Caribbean is going “all-in on satellite service,” and will outfit its fleet of ships with Starlink internet, Devin writes.
