Tag archive: Google Search

Google celebrates Star Wars Day with a fun Grogu Easter egg

Google is celebrating Star Wars Day with a special Easter egg featuring Grogu, also known as Baby Yoda. When you type “Grogu” or “Baby Yoda” into the Google Search bar, the character will appear in the bottom right corner of your screen. Once you click on him, he will use the Force to knock out the top section of the results.
If you click on him again, he will continue to dismantle the search results.
Image Credits: Screenshot/TechCrunch
Grogu stars alongside Pedro Pascal in the Disney+ original series “The Mandalorian,” which just finished its third season. The character has been a fan favorite ever since the series premiered on Disney+ in November 2019.
Baby Yoda can knock out Search results on both the mobile and desktop versions of Google.

Google celebrates Star Wars Day with a fun Grogu Easter egg by Aisha Malik originally published on TechCrunch

Google winds down feature that put playable podcasts directly in search results

Google confirmed it’s putting an end to a feature that allowed users to access playable podcasts directly from the Google Search results in favor of offering podcast recommendations. Officially launched in 2019, the feature surfaced podcasts when they matched a user’s query, including in those cases where a user specifically included the word “podcast” in their search terms. But a few weeks ago, some creators began noticing the podcast carousels had disappeared from Google Search results — and now the company is explaining why that’s the case.
The disappearance was first spotted by Podnews.net, which noted in January that searches for podcasts no longer returned any play buttons or links to Google Podcasts itself. When they tested the feature by searching for “history podcasts,” they were provided only with a list of shows alongside links to podcast reviews, Apple Podcasts pages and other places to stream.
At the time, Google simply told the site the feature was working “as intended.”
But a new announcement in Google Podcasts Manager indicates the feature is officially being shut down as of February 13.
“Google Search will stop showing podcast carousels by February 13. As a result, clicks and impressions in How people find your show will drop to zero after that date,” the message states. Podcasters are also being instructed to download any historical data they want to keep in advance of this final closure.
Of course, as many podcasters already discovered, their metrics had already declined as the feature was being wound down.
To be fair, playable podcasts in Search wasn’t an especially well-executed product, as it didn’t let users do much more than click to play an episode. On YouTube’s Podcasts vertical, by comparison, podcast creators can create an index to the various parts of an episode, letting users jump directly to the section they want to hear. Plus, users can watch a video of the podcast, if the creator chooses to film one.
YouTube has also proven to be more popular than Google Podcasts and other competitors. In a 2022 market survey of podcast listeners, for example, YouTube came out ahead of Spotify, Apple Podcasts and Google Podcasts as users’ preferred podcast platform. Though many podcast market analysis reports don’t consider YouTube when comparing the popularity of various podcast apps, one recent report by Buzzsprout at least suggests that using a web browser as a listening app has a very small market share of just 3.5%. And that share has barely increased over the years, despite Google’s indexing of shows.
Reached for comment, Google explained its decision to wind down playable podcasts in Search will allow it to focus on a new addition instead.
“Our existing podcast features will gradually be replaced with a new, single feature, What to Podcast,” a spokesperson told us. They noted the feature is currently live on mobile for English users in the U.S. “This feature provides detailed information about podcasts, links to listen to shows on different platforms, and links to podcasters’ own websites, where available,” the spokesperson added.
According to the help documentation, these recommendations will be personalized to the user if they’re signed into their Google account and will factor in things like the user’s past searches and browsing history, saved podcasts and other podcast preferences. The personalized results can be turned off, however, if the user wants more generic suggestions, Google says.
Google winds down feature that put playable podcasts directly in search results by Sarah Perez originally published on TechCrunch

QuickVid uses AI to generate short-form videos, complete with voiceovers

Generative AI is coming for videos. A new website, QuickVid, combines several generative AI systems into a single tool for automatically creating short-form YouTube, Instagram, TikTok and Snapchat videos.
Given as little as a single word, QuickVid chooses a background video from a library, writes a script and keywords, overlays images generated by DALL-E 2 and adds a synthetic voiceover and background music from YouTube’s royalty-free music library. QuickVid’s creator, Daniel Habib, says that he’s building the service to help creators meet the “ever-growing” demand from their fans.
“By providing creators with tools to quickly and easily produce quality content, QuickVid helps creators increase their content output, reducing the risk of burnout,” Habib told TechCrunch in an email interview. “Our goal is to empower your favorite creator to keep up with the demands of their audience by leveraging advancements in AI.”
But depending on how they’re used, tools like QuickVid threaten to flood already-crowded channels with spammy and duplicative content. They also face potential backlash from creators who opt not to use the tools, whether because of cost ($10 per month) or on principle, yet might have to compete with a raft of new AI-generated videos.
Going after video
QuickVid launched on December 27; Habib, a self-taught developer who previously worked at Meta on Facebook Live and video infrastructure, built it in a matter of weeks. It’s relatively bare bones at present — Habib says that more personalization options will arrive in January — but QuickVid can cobble together the components that make up a typical informational YouTube Short or TikTok video, including captions and even avatars.
It’s easy to use. First, a user enters a prompt describing the subject matter of the video they want to create. QuickVid uses the prompt to generate a script, leveraging the generative text powers of GPT-3. From keywords either extracted from the script automatically or entered manually, QuickVid selects a background video from the royalty-free stock media library Pexels and generates overlay images using DALL-E 2. It then outputs a voiceover via Google Cloud’s text-to-speech API — Habib says that users will soon be able to clone their voice — before combining all these elements into a video.
Image Credits: QuickVid
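To illustrate the flow described above, here is a minimal sketch of how such a pipeline could be wired together, assuming the 2022-era OpenAI Python library, the Pexels video-search endpoint, Google Cloud Text-to-Speech and moviepy; the helper names, prompt wording and key handling are illustrative assumptions, not QuickVid’s actual code.

```python
# A minimal sketch of a QuickVid-style pipeline (illustrative only, not QuickVid's code).
# Assumes the 2022-era OpenAI Python library, the Pexels REST API,
# google-cloud-texttospeech and moviepy.
import openai
import requests
from google.cloud import texttospeech
from moviepy.editor import AudioFileClip, VideoFileClip

openai.api_key = "YOUR_OPENAI_KEY"   # placeholder keys, supplied by the user
PEXELS_KEY = "YOUR_PEXELS_KEY"


def write_script(topic: str) -> str:
    """Generate a short narration script from a one-word prompt with GPT-3."""
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Write a 60-second narration script about: {topic}",
        max_tokens=300,
    )
    return resp.choices[0].text.strip()


def pick_background(keyword: str) -> str:
    """Return the URL of a royalty-free stock clip from Pexels matching a keyword."""
    r = requests.get(
        "https://api.pexels.com/videos/search",
        headers={"Authorization": PEXELS_KEY},
        params={"query": keyword, "per_page": 1},
    )
    return r.json()["videos"][0]["video_files"][0]["link"]


def overlay_image(prompt: str) -> str:
    """Generate a DALL-E 2 overlay image and return its URL."""
    resp = openai.Image.create(prompt=prompt, n=1, size="512x512")
    return resp["data"][0]["url"]


def synthesize_voiceover(script: str, out_path: str = "voiceover.mp3") -> str:
    """Render the script to speech with Google Cloud Text-to-Speech."""
    client = texttospeech.TextToSpeechClient()
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=script),
        voice=texttospeech.VoiceSelectionParams(language_code="en-US"),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3
        ),
    )
    with open(out_path, "wb") as f:
        f.write(response.audio_content)
    return out_path


def assemble(background_path: str, voiceover_path: str, out_path: str = "short.mp4") -> None:
    """Trim the downloaded stock clip to the narration length and mux them together."""
    audio = AudioFileClip(voiceover_path)
    video = VideoFileClip(background_path).subclip(0, audio.duration)
    video.set_audio(audio).write_videofile(out_path)
```

A real implementation would also download the Pexels clip, composite the DALL-E 2 overlays and captions onto it, and mix in background music, but the overall shape of the pipeline is the same.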
See this video made with the prompt “Cats”:

https://techcrunch.com/wp-content/uploads/2022/12/img_5pg7k95x9ig2tofh7mkrr_cfr.mp4
Or this one:
https://techcrunch.com/wp-content/uploads/2022/12/img_61ighv4x55slq9582dbx_cfr.mp4
QuickVid certainly isn’t pushing the boundaries of what’s possible with generative AI. Both Meta and Google have showcased AI systems that can generate completely original clips given a text prompt. But QuickVid amalgamates existing AI to exploit the repetitive, templated format of B-roll-heavy short-form videos, getting around the problem of having to generate the footage itself.
“Successful creators have an extremely high-quality bar and aren’t interested in putting out content that they don’t feel is in their own voice,” Habib said. “This is the use case we’re focused on.”
That supposedly being the case, QuickVid’s videos are, in terms of quality, a mixed bag. The background videos tend to be a bit random or only tangentially related to the topic, which isn’t surprising given that QuickVid is currently limited to the Pexels catalog. The DALL-E 2-generated images, meanwhile, exhibit the limitations of today’s text-to-image tech, like garbled text and off proportions.
In response to my feedback, Habib said that QuickVid is “being tested and tinkered with daily.”
Copyright issues
According to Habib, QuickVid users retain the right to use the content they create commercially and have permission to monetize it on platforms like YouTube. But the copyright status of AI-generated content is … nebulous, at least presently. The U.S. Copyright Office recently moved to revoke copyright protection for an AI-generated comic, for example, saying copyrightable works require human authorship.
When asked how that decision might affect QuickVid, Habib said he believes it pertains only to the “patentability” of AI-generated products and not to the rights of creators to use and monetize their content. Creators, he pointed out, aren’t often submitting patents for videos and usually lean into the creator economy, letting other creators repurpose their clips to increase their own reach.
“Creators care about putting out high-quality content in their voice that will help grow their channel,” Habib said.
Another legal challenge on the horizon might affect QuickVid’s DALL-E 2 integration — and, by extension, the site’s ability to generate image overlays. Microsoft, GitHub and OpenAI are being sued in a class-action lawsuit that accuses them of violating copyright law by allowing Copilot, a code-generating system, to regurgitate sections of licensed code without providing credit. (Copilot was co-developed by OpenAI and GitHub, which Microsoft owns.) The case has implications for generative art AI like DALL-E 2, which has similarly been found to copy and paste images from the datasets on which it was trained.
Habib isn’t concerned, arguing that the generative AI genie’s out of the bottle. “If another lawsuit showed up and OpenAI disappeared tomorrow, there are several alternatives that could power QuickVid,” he said, referring to the open source DALL-E 2-like system Stable Diffusion. QuickVid is already testing Stable Diffusion for generating avatar pics.
Moderation and spam
Aside from the legal dilemmas, QuickVid might soon have a moderation problem on its hands. Generative AI has well-known toxicity and factual accuracy problems, despite the filters and techniques OpenAI has implemented to curb them. GPT-3 spouts misinformation, particularly about recent events, which are beyond the boundaries of its knowledge base. And ChatGPT, a fine-tuned offspring of GPT-3, has been shown to use sexist and racist language.
That’s worrisome, particularly for people who’d use QuickVid to create informational videos. In a quick test, I had my partner — who’s far more creative than me, particularly in this area — enter a few offensive prompts to see what QuickVid would generate. To QuickVid’s credit, obviously problematic prompts like “Jewish new world order” and “9/11 conspiracy theory” didn’t yield toxic scripts. But for “Critical race theory indoctrinating students,” QuickVid generated a video implying that critical race theory could be used to brainwash schoolchildren.
See:

https://techcrunch.com/wp-content/uploads/2022/12/img_e4wba39us0vqtc8051491_cfr.mp4
Habib says that he’s relying on OpenAI’s filters to do most of the moderation work and asserts that it’s incumbent on users to manually review every video created by QuickVid to ensure “everything is within the boundaries of the law.”
“As a general rule, I believe people should be able to express themselves and create whatever content they want,” Habib said.
That apparently includes spammy content. Habib makes the case that the video platforms’ algorithms, not QuickVid, are best positioned to determine the quality of a video, and that people who produce low-quality content “are only damaging their own reputations.” The reputational damage will naturally disincentivize people from creating mass spam campaigns with QuickVid, he says.
“If people don’t want to watch your video, then you won’t receive distribution on platforms like YouTube,” he added. “Producing low-quality content will also make people look at your channel in a negative light.”
But it’s instructive to look at ad agencies like Fractl, which in 2019 used an AI system called Grover to generate an entire site of marketing materials — reputation be damned. In an interview with The Verge, Fractl partner Kristin Tynski said that she foresaw generative AI enabling “a massive tsunami of computer-generated content across every niche imaginable.”
In any case, video-sharing platforms like TikTok and YouTube haven’t had to contend with moderating AI-generated content on a massive scale. Deepfakes — synthetic videos that replace an existing person with someone else’s likeness — began to populate platforms like YouTube several years ago, driven by tools that made deepfaked footage easier to produce. But unlike even the most convincing deepfakes today, the types of videos QuickVid creates aren’t obviously AI-generated in any way.
Google Search’s policy on AI-generated text might be a preview of what’s to come in the video domain. Google doesn’t treat synthetic text differently from human-written text where search rankings are concerned, but it takes action against content that’s “intended to manipulate search rankings and not help users.” That includes content stitched together or combined from different web pages that “[doesn’t] add sufficient value” as well as content generated through purely automated processes, both of which might apply to QuickVid.
In other words, AI-generated videos might not be banned from platforms outright should they take off in a major way but rather simply become the cost of doing business. That isn’t likely to allay the fears of experts who believe that platforms like TikTok are becoming a new home for misleading videos, but — as Habib said during the interview — “there is no stopping the generative AI revolution.”
QuickVid uses AI to generate short-form videos, complete with voiceovers by Kyle Wiggers originally published on TechCrunch

Twelve Labs lands $12M for AI that understands the context of videos

To Jae Lee, a data scientist by training, it never made sense that video — which has become an enormous part of our lives, what with the rise of platforms like TikTok, Vimeo and YouTube — was so difficult to search across, given the technical barriers posed by understanding context. Searching the titles, descriptions and tags of videos was always easy enough, requiring no more than a basic algorithm. But searching within videos for specific moments and scenes was long beyond the capabilities of tech, particularly if those moments and scenes weren’t labeled in an obvious way.
To solve this problem, Lee, alongside friends from the tech industry, built a cloud service for video search and understanding. It became Twelve Labs, which went on to raise $17 million in venture capital — $12 million of which came from a seed extension round that closed today. Radical Ventures led the extension with participation from Index Ventures, WndrCo, Spring Ventures, Weights & Biases CEO Lukas Biewald and others, Lee told TechCrunch in an email.
“The vision of Twelve Labs is to help developers build programs that can see, listen, and understand the world as we do by giving them the most powerful video understanding infrastructure,” Lee said.
A demo of the Twelve Labs platform’s capabilities. Image Credits: Twelve Labs
Twelve Labs, which is currently in closed beta, uses AI to try to extract “rich information” from videos (movement and actions, objects and people, sound, text on screen, and speech) and to identify the relationships between them. The platform converts these various elements into mathematical representations called “vectors” and forms “temporal connections” between frames, enabling applications like video scene search.
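As a rough illustration of what those “vectors” buy you, the retrieval step of scene search boils down to nearest-neighbor ranking over segment embeddings. The sketch below uses random placeholder vectors in place of a multimodal model’s output and is not Twelve Labs’ actual implementation.

```python
# Illustrative only: ranks pre-embedded video segments against a text query
# by cosine similarity. The random vectors stand in for the output of a
# multimodal embedding model; this is not Twelve Labs' implementation.
import numpy as np


def search_scenes(query_vec: np.ndarray, segment_vecs: np.ndarray, top_k: int = 5):
    """Return (segment_index, score) pairs for the segments closest to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    segs = segment_vecs / np.linalg.norm(segment_vecs, axis=1, keepdims=True)
    scores = segs @ q                      # cosine similarity, one score per segment
    top = np.argsort(-scores)[:top_k]
    return [(int(i), float(scores[i])) for i in top]


# Example: 1,000 video segments embedded in a 512-dimensional space.
segment_vectors = np.random.randn(1000, 512)   # placeholder multimodal embeddings
query_vector = np.random.randn(512)            # placeholder text-query embedding
print(search_scenes(query_vector, segment_vectors))
```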
“As a part of achieving the company’s vision to help developers create intelligent video applications, the Twelve Labs team is building ‘foundation models’ for multimodal video understanding,” Lee said. “Developers will be able to access these models through a suite of APIs, performing not only semantic search but also other tasks such as long-form video ‘chapterization,’ summary generation and video question and answering.”
Google takes a similar approach to video understanding with its MUM AI system, which the company uses to power video recommendations across Google Search and YouTube by picking out subjects in videos (e.g., “acrylic painting materials”) based on the audio, text and visual content. But while the tech might be comparable, Twelve Labs is one of the first vendors to market with it; Google has opted to keep MUM internal, declining to make it available through a public-facing API.
That being said, Google, Microsoft and Amazon offer services (Google Cloud Video AI, Azure Video Indexer and AWS Rekognition, respectively) that recognize objects, places and actions in videos and extract rich metadata at the frame level. There’s also Reminiz, a French computer vision startup that claims to be able to index any type of video and add tags to both recorded and live-streamed content. But Lee asserts that Twelve Labs is sufficiently differentiated — in part because its platform allows customers to fine-tune the AI to specific categories of video content.
Mockup of API for fine-tuning the model to work better with salad-related content. Image Credits: Twelve Labs
“What we’ve found is that narrow AI products built to detect specific problems show high accuracy in their ideal scenarios in a controlled setting, but don’t scale so well to messy real-world data,” Lee said. “They act more as a rule-based system, and therefore lack the ability to generalize when variances occur. We also see this as a limitation rooted in lack of context understanding. Understanding of context is what gives humans the unique ability to make generalizations across seemingly different situations in the real world, and this is where Twelve Labs stands alone.”
Beyond search, Lee says Twelve Labs’ technology can drive things like ad insertion and content moderation, intelligently figuring out, for example, which videos showing knives are violent versus instructional. It can also be used for media analytics and real-time feedback, he says, and to automatically generate highlight reels from videos.
A little over a year after its founding (March 2021), Twelve Labs has paying customers — Lee wouldn’t reveal how many exactly — and a multiyear contract with Oracle to train AI models using Oracle’s cloud infrastructure. Looking ahead, the startup plans to invest in building out its tech and expanding its team. (Lee declined to reveal the current size of Twelve Labs’ workforce, but LinkedIn data shows it’s roughly 18 people.)
“For most companies, despite the huge value that can be attained through large models, it really does not make sense for them to train, operate and maintain these models themselves. By leveraging a Twelve Labs platform, any organization can leverage powerful video understanding capabilities with just a few intuitive API calls,” Lee said. “The future direction of AI innovation is heading straight towards multimodal video understanding, and Twelve Labs is well positioned to push the boundaries even further in 2023.”
Twelve Labs lands $12M for AI that understands the context of videos by Kyle Wiggers originally published on TechCrunch

Google refreshes its mobile search experience

Google today announced a subtle but welcome refresh of its mobile search experience. The idea here is to provide easier-to-read search results and a more modern look with a simpler, edge-to-edge design.
From what we’ve seen so far, this is not a radically different look. The rounded, slightly shaded boxes around individual search results have been replaced with straight lines, for example, while elsewhere Google has added more roundness: you’ll find changes to the circles around the search bar and some tweaks to the Google logo. “We believe it feels more approachable, friendly and human,” a Google spokesperson told me. There’s a bit more whitespace in places, too, as well as new splashes of color that are meant to help separate and emphasize certain parts of the page.

Image Credits: Google

“Rethinking the visual design for something like Search is really complex,” Google designer Aileen Cheng said in today’s announcement. “That’s especially true given how much Google Search has evolved. We’re not just organizing the web’s information, but all the world’s information. We started with organizing web pages, but now there’s so much diversity in the types of content and information we have to help make sense of.”

Image Credits: Google

Google is also extending its use of the Google Sans font, which you are probably already quite familiar with thanks to its use in Gmail and Android. “Bringing consistency to when and how we use fonts in Search was important, too, which also helps people parse information more efficiently,” Cheng writes.
In many ways, today’s refresh is a continuation of the work Google did with its mobile search refresh in 2019. Then, too, the emphasis was on making it easier for users to scan down the page, with the addition of site icons and other new visual elements. The work of making search results pages more readable is clearly never done.
For the most part, though, comparing the new and old design, the changes are small. This isn’t some major redesign — we’re talking about minor tweaks that the designers surely obsessed over but that the users may not even really notice. Now if Google had made it significantly easier to distinguish ads from the content you are actually looking for, that would’ve been something.

Image Credits: Google
