Category Archive: Enterprise

Regie secures $10M to generate marketing copy using AI

Regie.ai, a startup using OpenAI’s GPT-3 text-generating system to create sales and marketing content for brands, today announced that it raised $10 million in Series A funding led by Scale Venture Partners with participation from Foundation Capital, South Park Commons, Day One Ventures and prominent angel investors. The fresh investment comes as VCs see a growing opportunity in AI-powered, copy-generating adtech companies, whose tech promises to save time while potentially increasing personalization.
Regie was founded in 2020 by Matt Millen and Srinath Sridhar. Previously a software engineer at Google and Meta, Sridhar is a data scientist by trade, having developed enterprise-scale AI systems that detect duplicate images and rank search results. Millen was formerly a VP at T-Mobile, leading the national sales teams (e.g., strategic accounts and public sector).
With Regie, Sridhar says he and Millen aimed to create a way for companies to communicate with their customers via channels like email, social media, text, podcasts, online advertising and more. Because companies have so many platforms and mediums at their disposal to speak with customers, he notes, it can be a challenge for content marketers to continuously produce compelling content that reaches those customers.
“The way content is getting generated has fundamentally changed,” Sridhar told TechCrunch in an email interview. “Marketers and copywriters working in the enterprise … increasingly [need] to produce and manage content and content workflows at scale.”
Regie uses GPT-3 to power its service — the same GPT-3 that can generate poetry, prose and academic papers. But it’s a “flavor” of GPT-3 fine-tuned on a training data set of roughly 20,000 sales sequences (the series of steps to convert prospects into paying customers) and nearly 100 million sales emails. Also in the mix are custom language systems built by Regie to reflect brands and their messaging, designed to be integrated with existing sales platforms like Outreach, HubSpot and Salesloft.
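For a sense of the mechanics, here is a minimal sketch of how an application might call a fine-tuned GPT-3 completion model through OpenAI's Python library (the 0.x API of the time). The model ID, prompt and environment variable below are hypothetical placeholders, not Regie's actual setup.

```python
# Minimal sketch of calling a fine-tuned GPT-3 completion model with the
# openai 0.x Python library. The fine-tuned model ID and prompt are
# hypothetical placeholders, not Regie's production system.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="davinci:ft-acme-sales-2022-10-01",  # hypothetical fine-tuned model ID
    prompt=(
        "Write a short, friendly cold-outreach email introducing Acme's "
        "analytics product to a VP of Marketing at a mid-size retailer.\n\nEmail:"
    ),
    max_tokens=200,
    temperature=0.7,
)
print(response["choices"][0]["text"].strip())
```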
Image Credits: Regie
Lest the systems spew problematic language, Regie says that every system goes through “human curation” and vetting before being released. The startup also claims to train the systems on “inclusive” language and test them for biases, like bias against certain demographic groups.
Customers can use Regie to generate original, optimized-for-search-engines content or create custom sales sequences. The platform also offers blog- and social-media-post-authoring tools for personalizing messages, as well as a Chrome extension that analyzes the “quality” of emails that customers send — and optionally rewrites the text.
“Generative AI is completely disrupting the way content is created today. The biggest competitors of Regie would be the large content authoring and management platforms that will be completely redesigned AI first going forward,” Sridhar said confidently. “For example, Adobe’s suite of products including Acrobat, Illustrator, Photoshop, now Figma as well as Adobe Experience Cloud will start to get outdated as Regie continues to build on an intelligent content creation and management platform for the enterprise.”
More immediately, Regie competes with vendors like Jasper, Phrasee, Copysmith and Copy.ai — all of which tap AI to generate bespoke marketing copy. But Sridhar argues that Regie is a more vertical platform that caters to go-to-market teams in the enterprise while combining text, images and workflows into a single pane of glass.
“Generative AI is such a paradigm shift that not only productivity and top-line of companies will go up as a result, but the bottom line will also go down simultaneously. There are very few products that can improve both sides of that financial equation,” Sridhar continued. “So if a company wants to reduce costs because they want to assimilate sales tools, or reduce outsourced writing while simultaneously increasing revenue, Regie can do that. If you are an outsourced marketing agency looking to retain more customers and efficiently generate content at scale, Regie can definitely do that for agencies as well.”
The company currently has more than 70 software-as-a-service customers on annual contracts, including AT&T, Sophos, Okta and Crunchbase. Sridhar didn’t reveal revenue but said that he expects the 25-person company to grow “meaningfully” this year.
“This is a revolutionary new field. And as always, adoption will require educating the users,” Sridhar said. “It is clear to us as practitioners that the world has changed. But it will take time for others to get their hands dirty and convince themselves that this is happening — and that it is a very positive development. So we have to be patient in educating the industry. We also have to show that content quality isn’t compromised and that it can perform better and be maintained more consistently with the strategic application of AI.”
To date, Regie has raised $14.8 million.
Regie secures $10M to generate marketing copy using AI by Kyle Wiggers originally published on TechCrunch

OpenAI begins allowing users to edit faces with DALL-E 2

After initially disabling the capability, OpenAI today announced that customers with access to DALL-E 2 can upload people’s faces to edit them using the AI-powered image-generating system. Previously, OpenAI only allowed users to work with and share photorealistic faces and banned the uploading of any photo that might depict a real person, including photos of prominent celebrities and public figures.
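For context on what such an edit involves, below is a minimal sketch of an inpainting-style edit using OpenAI's image-edit endpoint via the openai 0.x Python library. The file names and prompt are placeholders; the DALL-E 2 web editor offers the same kind of masked edit interactively.

```python
# Minimal sketch of an inpainting-style edit with OpenAI's image-edit endpoint
# (openai 0.x Python library). The PNG files and prompt are placeholders; the
# transparent region of the mask marks the area DALL-E 2 should regenerate.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

result = openai.Image.create_edit(
    image=open("family_photo.png", "rb"),    # original square RGBA PNG
    mask=open("background_mask.png", "rb"),  # transparent where the edit goes
    prompt="The same family portrait, but set on a sunny beach at golden hour",
    n=1,
    size="1024x1024",
)
print(result["data"][0]["url"])
```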
OpenAI claims that improvements to its safety system made the face-editing feature possible by “minimizing the potential of harm” from deepfakes as well as attempts to create sexual, political and violent content. In an email to customers, the company wrote:
Many of you have told us that you miss using DALL-E to dream up outfits and hairstyles on yourselves and edit the backgrounds of family photos. A reconstructive surgeon told us that he’d been using DALL-E to help his patients visualize results. And filmmakers have told us that they want to be able to edit images of scenes with people to help speed up their creative processes … [We] built new detection and response techniques to stop misuse.
The change in policy isn’t opening the floodgates necessarily. OpenAI’s terms of service will continue to prohibit uploading pictures of people without their consent or images that users don’t have the rights to — although it’s not clear how consistent the company’s historically been about enforcing those policies.
In any case, it’ll be a true test of OpenAI’s filtering technology, which some customers have complained in the past is overzealous and somewhat inaccurate. Deepfakes come in many flavors, from fake vacation photos to fabricated images of the presidents of war-torn countries. Accounting for every emerging form of abuse will be a never-ending battle, in some cases with very high stakes.
No doubt, OpenAI — which has the backing of Microsoft and notable VC firms including Khosla Ventures — is eager to avoid the controversy associated with Stability AI’s Stable Diffusion, an image-generating system that’s available in an open source format without any restrictions. As TechCrunch recently wrote about, it didn’t take long before Stable Diffusion — which can also edit face images — was being used by some to create pornographic, nonconsensual deepfakes of celebrities like Emma Watson.
So far, OpenAI has positioned itself as a brand-friendly, buttoned-up alternative to the no-holds-barred Stability AI. And with the constraints around the new face editing feature for DALL-E 2, the company is maintaining the status quo.
DALL-E 2 remains in invite-only beta. In late August, OpenAI announced that over a million people are using the service.
OpenAI begins allowing users to edit faces with DALL-E 2 by Kyle Wiggers originally published on TechCrunch

Pliops lands $100M for chips that accelerate analytics in data centers

Analyzing data generated within the enterprise — for example, sales and purchasing data — can lead to insights that improve operations. But some organizations are struggling to process, store and use their vast amounts of data efficiently. According to an IDC survey commissioned by Seagate, organizations collect only 56% of the data available throughout their lines of business, and out of that 56%, they only use 57%, meaning roughly a third of the data available to them is actually put to work.
Part of the problem is that data-intensive workloads require substantial resources, and adding the necessary compute and storage infrastructure is often expensive. For companies moving to the cloud specifically, IDG reports that they plan to devote an average of $78 million toward infrastructure this year. Thirty-six percent cited controlling costs as their top challenge.
That’s why Uri Beitler launched Pliops, a startup developing what he calls “data processors” for enterprise and cloud data centers. Pliops’ processors are engineered to boost the performance of databases and other apps that run on flash memory, saving money in the long run, he claims.
“It became clear that today’s data needs are incompatible with yesterday’s data center architecture. Massive data growth has collided with legacy compute and storage shortcomings, creating slowdowns in computing, storage bottlenecks and diminishing networking efficiency,” Beitler told TechCrunch in an email interview. “While CPU performance is increasing, it’s not keeping up, especially where accelerated performance is critical. Adding more infrastructure often proves to be cost prohibitive and hard to manage. As a result, organizations are looking for solutions that free CPUs from computationally intensive storage tasks.”
Pliops isn’t the first to market with a processor for data analytics. Nvidia sells the BlueField-3 data processing unit (DPU). Marvell has its Octeon technology. Oracle’s SPARC M7 chip has a data analytics accelerator coprocessor with a specialized set of instructions for data transformation. And in the realm of startups, Blueshift Memory and Speedata are creating hardware that they say can perform analytics tasks significantly faster than standard processors.
Image Credits: Pliops
But Pliops claims to be further along than most, with deployments and pilots with customers (albeit unnamed) including fintechs, “medium-sized” communication service providers, data center operators and government labs. The startup’s early traction won over investors, it would seem, who poured $100 million into its Series D round that closed today.
Koch Disruptive Technologies led the tranche, with participation from SK Hynix and Walden International’s Lip-Bu Tan, bringing Pliops’ total capital raised to date to more than $200 million. Beitler says that it’ll be put toward building out the company’s hardware and software roadmap, bolstering Pliops’ footprint with partners and expanding its international headcount.
“Many of our customers saw tremendous growth during the COVID-19 pandemic, thanks in part to their ability to react quickly to the new work environment and conditions of uncertainty. Pliops certainly did. While some customers were affected by supply chain issues, we were not,” Beitler said. “We do not see any slowdown in data growth — or the need to leverage it. Pliops was strong before this latest funding round and even stronger now.”
Accelerating data processing
Beitler, the former director of advanced memory solutions at Samsung’s Israel Research Center, co-founded Pliops in 2017 alongside Moshe Twitto and Aryeh Mergi. Twitto was a research scientist at Samsung developing signal processing technologies for flash memory, while Mergi co-launched a number of startups — including two that were acquired by EMC and SanDisk — prior to joining Pliops.
Pliops’ processor delivers drive fail protection for solid-state drives (SSDs) as well as in-line compression, a technology that shrinks the size of data by finding identical data sequences and then saving only the first sequence. Beitler claims the company’s technology can reduce the space data occupies on a drive while expanding effective capacity, mapping “variable-sized” compressed objects within storage to reduce wasted space.
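As a rough illustration of the general idea behind that kind of deduplicating compression (a toy sketch, not Pliops' hardware implementation): split incoming data into chunks, store each unique chunk once, and record repeats as references to the copy already stored.

```python
# Toy illustration of dedup-style compression: identical data sequences are
# stored once and later occurrences become references. A sketch of the general
# concept only, not Pliops' in-line, hardware-accelerated design.
import hashlib

CHUNK = 4096  # illustrative fixed chunk size in bytes

def dedup_compress(data: bytes):
    store = {}    # chunk digest -> chunk bytes (each unique sequence kept once)
    recipe = []   # ordered digests needed to rebuild the original data
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        recipe.append(digest)
    return store, recipe

def dedup_decompress(store, recipe):
    return b"".join(store[d] for d in recipe)

data = b"A" * 16384 + b"B" * 4096  # highly repetitive input
store, recipe = dedup_compress(data)
assert dedup_decompress(store, recipe) == data
unique_bytes = sum(len(c) for c in store.values())
print(f"{len(data)} bytes of input stored as {unique_bytes} unique bytes")
```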
A core component of Pliops’ processor is its hardware-accelerated key-value storage engine. In key-value databases — databases where data is stored in a “key-value” format and optimized for reading and writing — key-value engines manage all the persistent data directly. Beitler makes the case that CPUs are typically over-utilized when running these engines, resulting in apps not taking full advantage of SSDs’ capabilities.
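To make "key-value engine" concrete, here is a deliberately tiny, append-only sketch of the software pattern such engines follow. It is illustrative only: Pliops' engine is a hardware-accelerated design, and real engines add compaction, compression and crash recovery, which is where much of the CPU cost lives.

```python
# Toy append-only key-value engine: values are appended to a log file and an
# in-memory index maps each key to its location. Purely illustrative of the
# pattern; not Pliops' design.
from __future__ import annotations
import os


class TinyKV:
    def __init__(self, path: str):
        self.path = path
        self.index: dict[bytes, tuple[int, int]] = {}  # key -> (offset, length)
        open(path, "ab").close()  # make sure the log file exists

    def put(self, key: bytes, value: bytes) -> None:
        with open(self.path, "ab") as log:
            offset = log.tell()
            log.write(value)
        self.index[key] = (offset, len(value))

    def get(self, key: bytes) -> bytes | None:
        loc = self.index.get(key)
        if loc is None:
            return None
        offset, length = loc
        with open(self.path, "rb") as log:
            log.seek(offset)
            return log.read(length)


kv = TinyKV(os.path.join(os.getcwd(), "tinykv.log"))
kv.put(b"user:42", b'{"plan": "enterprise"}')
print(kv.get(b"user:42"))
```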
“Organizations are looking for solutions that free CPUs from computationally-intensive storage tasks. Our hardware helps create a modern data center architecture by leveraging a new generation of hardware-accelerated data processing and storage management technology — one that delivers orders of magnitude improvement in performance, reliability and scalability,” Beitler said. “In short, Pliops enables getting more out of existing infrastructure investments.”
Pliops’ processor became commercially available last July. The development team’s current focus is accelerating the ingest of data for machine learning use cases, Beitler says — use cases that have grown among Pliops’ current and potential customers.
Image Credits: Pliops
The road ahead
Certainly, Pliops has its work cut out for it. Nvidia is a formidable competitor in the data processing accelerator space, having spent years developing its BlueField lineup. And AMD acquired DPU vendor Pensando for $1.9 billion, signaling its wider ambitions.
A move that could pay dividends for Pliops is joining the Open Programmable Infrastructure Project (OPI), a relatively new venture under the Linux Foundation that aims to create standards around data accelerator hardware. While Pliops isn’t a member yet — current members include Intel, Nvidia, Marvell, F5, Red Hat, Dell and Keysight Technologies — it stands to reason that becoming one could expose its technology to a larger customer base.
Beitler demurred when asked about OPI, but pointed out that the market for data acceleration is still nascent and growing.
“We continue to see both infrastructure and application teams being overwhelmed with underperforming storage and overwhelmed applications that aren’t meeting companies’ data demands,” Beitler said. “The overall feedback is that our processor is a game-changing product and without it companies are required to make years of investments in software and hardware engineering to solve the same problem.”

Amazon launches AWS Private 5G so companies can build their own 4G mobile networks

Amazon’s cash-cow cloud division AWS has launched a new service designed to help companies deploy their own private 5G networks — eventually, at least.
AWS first announced AWS Private 5G in early preview late last year, but it’s now officially available to AWS customers starting in its U.S. East (Ohio), U.S. East (N. Virginia), and U.S. West (Oregon) regions, with plans to roll it out internationally “in the near future.”
But — and this is a big “but” — despite its name, AWS Private 5G currently only supports 4G LTE.
“It supports 4G LTE today, and will support 5G in the future, both of which give you a consistent, predictable level of throughput with ultra low latency,” AWS chief evangelist Jeff Barr wrote in a blog post.
With AWS Private 5G, companies order the hardware (a radio unit) and a bunch of special SIM cards directly from AWS, and AWS then provides all the necessary software and APIs (application programming interfaces) to enable businesses to set up their own private mobile network on-site. This incorporates the AWS Management Console, through which users specify where they want to build their network and their required capacity, with AWS automating the network setup and deployment once the customer has activated their small-cell radio units.
Crucially, the AWS-managed network infrastructure plays nicely with other AWS services, including its Identity and Access Management (IAM) offering, which enables IT to control who and what devices can access the private network. AWS Private 5G also channels into Amazon’s CloudWatch observability service, which provides insights into the network’s health, among other useful data points.
In terms of costs, AWS charges $10 per hour for each radio unit it installs, with each radio supporting speeds of 150 Mbps across up to 100 SIMs (i.e. individual devices). On top of that, AWS will bill for all data that transfers outwards to the internet, charged at Amazon’s usual EC2 (Elastic Compute Cloud) rates.
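As a back-of-the-envelope sketch of what that pricing works out to (data-transfer charges vary with usage and region, so they are left out; the monthly-hours figure is the usual AWS billing convention):

```python
# Rough cost estimate for an AWS Private 5G radio unit at the quoted $10/hour.
# Data-transfer charges (billed at standard EC2 rates) are omitted because they
# depend on usage; 730 hours/month is the usual AWS billing convention.
HOURLY_RATE = 10.00     # USD per radio unit per hour
HOURS_PER_MONTH = 730
SIMS_PER_RADIO = 100    # devices supported per radio unit

monthly_per_radio = HOURLY_RATE * HOURS_PER_MONTH         # $7,300
monthly_per_device = monthly_per_radio / SIMS_PER_RADIO   # $73 at full utilization

print(f"Per radio unit: ${monthly_per_radio:,.2f}/month")
print(f"Per device, fully utilized: ${monthly_per_device:,.2f}/month")
```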
So in effect, Amazon is promising industries — such as smart factories or other locations (remote or otherwise) with high-bandwidth requirements — instant, localized 5G, while shoehorning them onto its sticky cloud infrastructure where the usual fees apply.
Public vs private
It’s clear that 5G has the potential to transform many industries, and will be the bedrock of everything from robotics and self-driving cars to virtual reality and beyond. But public 5G networks, which are what most consumers with 5G-enabled devices currently rely on, have limited coverage, and their bandwidth may be shared by millions of users. On top of that, companies have little control over the network, even if their premises are within range of it. That is why private 5G networks are an appealing proposition, particularly for enterprises with mission-critical applications that demand low-latency data transfers round the clock.
AWS Private 5G uses Citizens Broadband Radio Service (CBRS), a shared 3.5 GHz wireless spectrum that the Federal Communications Commission (FCC) authorized in early 2020 for use in commercial environments, as it had previously been reserved for the Department of Defense (DoD). This update essentially opened CBRS to myriad use cases, including businesses looking to build new 5G services or extend existing 4G/LTE services.
At the same time, the FCC announced key Spectrum Access System (SAS) administrators who would be authorized to manage wireless communications in the CBRS band, a process effectively designed to protect “high priority” users (e.g. the DoD) from interference. Any device connecting to the CBRS spectrum needs authorization from a SAS administrator, which today includes Google, Sony, CommScope, Federated Wireless, Key Bridge Wireless and Amdocs.
And this is a key component of the new AWS Private 5G service — it’s fully integrated into the SAS administration process, with AWS managing everything on behalf of the customer, including taking on responsibility for interference issues among other troubleshooting tidbits relating to spectrum access.
So Amazon’s new private 5G offering is perhaps something of a misnomer as it stands today, insofar as it currently only supports 4G LTE. But the OnGo Alliance (then called the CBRS Alliance) completed its 5G specs for CBRS more than two years ago now, and the intervening months have been about setting the foundation to enable fully commercial 5G services — just yesterday, Samsung Electronics America announced a partnership with Kajeet to deploy a new private 5G network on CBRS.
But while “AWS Private 5G” is a nod to what it’s built to support in the future, the current branding may cause some consternation among interested parties seeking local 5G deployments today.

Google delays move away from cookies in Chrome to 2024

Google is again delaying plans to phase out Chrome’s use of third-party cookies — the files websites use to remember preferences and track online activity. In a blog post, Anthony Chavez, Google’s VP of Privacy Sandbox, said that the company is now targeting the “second half of 2024” as the timeframe for adopting an alternative technology.
It’ll be a long time coming. Last June, Google said it would deprecate cookies in the second half of 2023. Before then, in January 2020, the company pledged to make the switch by 2022.
“We’ve worked closely to refine our design proposals based on input from developers, publishers, marketers, and regulators via forums,” Chavez wrote. “The most consistent feedback we’ve received is the need for more time to evaluate and test the new … technologies before deprecating third-party cookies in Chrome.”
Google’s efforts to move away from cookies date back to 2019, when the company announced a long-term roadmap to adopt ostensibly more private ways of tracking web users. The linchpin is Privacy Sandbox, which aims to create web standards that power advertising without the use of so-called “tracking” cookies. Tracking cookies, used to personalize ads, can capture a person’s web history and remain active for years without their knowledge.
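To make concrete what a third-party tracking cookie looks like on the wire, here is a small illustrative sketch (the domain and identifier are made up): an ad domain embedded across many sites sets a long-lived identifier that browsers then send back on every cross-site request to that domain; that is the behavior Chrome plans to phase out.

```python
# Illustrative sketch of the Set-Cookie header behind a third-party "tracking"
# cookie: a long-lived identifier set by an ad domain (made up here) that the
# browser returns on every cross-site request to that domain.
from datetime import datetime, timedelta, timezone

def tracking_cookie_header(user_id: str) -> str:
    # Expires years in the future so the identifier persists across sessions.
    expires = (datetime.now(timezone.utc) + timedelta(days=730)).strftime(
        "%a, %d %b %Y %H:%M:%S GMT"
    )
    # "SameSite=None; Secure" is what allows the cookie to be sent in
    # third-party (cross-site) contexts.
    return (
        f"Set-Cookie: uid={user_id}; Domain=.ads.example; Path=/; "
        f"Expires={expires}; Secure; HttpOnly; SameSite=None"
    )

print(tracking_cookie_header("a1b2c3d4"))
```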
Privacy Sandbox proposes using an in-browser algorithm, Federated Learning of Cohorts (FLoC), to analyze a user’s activity and generate a “privacy-preserving” ID that can be used by advertisers for targeting. Google claims that FLoC is more anonymous than cookies, but the Electronic Frontier Foundation has described it as “the opposite of privacy-preserving technology” and akin to a “behavioral credit score.”
Privacy Sandbox has also prompted regulators to investigate whether Google’s adtech aims are anticompetitive. In January 2021, the Competition and Markets Authority (CMA) in the U.K. announced plans to focus on Privacy Sandbox’s potential impacts on both publishers and users. And in March, 15 attorneys general of U.S. states and Puerto Rico amended an antitrust complaint filed the previous December saying that the changes in the Privacy Sandbox would require advertisers to use Google as a middleman in order to advertise.
Google earlier this year reached an agreement with the CMA on how it develops and releases Privacy Sandbox in Chrome, which will include working with the CMA to “resolve concerns” and consulting and updating the CMA and the U.K.’s Information Commissioner’s Office on an ongoing basis.
In the meantime, Chavez says that Google will expand a trial of its Privacy Sandbox technologies to “millions” of Chrome users beginning in August. It’ll then gradually increase the trial population throughout the year into 2023, offering an opt-out option to users who don’t wish to participate.
Google now expects Privacy Sandbox APIs to be launched and generally available in Chrome by the third quarter of 2023.
“Improving people’s privacy, while giving businesses the tools they need to succeed online, is vital to the future of the open web,” Chavez wrote. “As the web community tests these APIs, we’ll continue to listen and respond to feedback.”