Google today announced a new autofill experience for Chrome on mobile that will use biometric authentication for credit card transactions, as well as an updated built-in password manager that will make signing in to a site a bit more straightforward.
Chrome already uses the W3C WebAuthn standard for biometric authentication on Windows and Mac. With this update, this feature is now also coming to Android.
If you’ve ever bought something through the browser on your Android phone, you know that Chrome always asks you to enter the CVC code from your credit card to ensure that it’s really you — even if you have the credit card number stored on your phone. That was always a bit of a hassle, especially when your credit card wasn’t within reach.
Now, you can use your phone’s biometric authentication to buy those new sneakers with just your fingerprint — no CVC needed. Or you can opt out, too, as you’re not required to enroll in this new system.
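For illustration, here is a minimal sketch of what a WebAuthn assertion request looks like from a website's point of view; Chrome's internal card-confirmation flow builds on the same standard, though its exact plumbing isn't public. The challenge generation and server-side verification are elided, and `verifyWithBiometrics` is a hypothetical wrapper name.

```typescript
// Minimal WebAuthn sketch: ask the platform authenticator (e.g. the Android
// fingerprint prompt) to verify the user. A real flow would fetch the
// challenge from, and send the signed result back to, a server.
async function verifyWithBiometrics(challenge: Uint8Array): Promise<Credential | null> {
  return navigator.credentials.get({
    publicKey: {
      challenge,                    // server-generated, single-use nonce
      timeout: 60_000,              // give up after 60 seconds
      userVerification: "required", // require fingerprint/PIN, not mere device presence
    },
  });
}
```

Requiring user verification, rather than mere device presence, is what lets a fingerprint stand in for the CVC check.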
As for the password manager, the update here is the new touch-to-fill feature that shows you your saved accounts for a given site through a standard Android dialog. That’s something you’re probably used to from your desktop-based password manager already, but it’s definitely a major new built-in convenience feature for Chrome — and the more people opt to use password managers, the safer the web will be. This new feature is coming to Chrome on Android in the next few weeks, but Google says this “is only the start.”
Where is voice tech going?
Mark Persaud
Contributor
Mark Persaud is a digital product manager and practice lead at Moonshot by Pactera, a digital innovation company that leads global clients through the next era of digital products with a heavy emphasis on artificial intelligence, data and continuous software delivery.
2020 has been anything but normal. For businesses and brands. For innovation. For people.
The trajectories of business growth strategies, travel plans and lives have been drastically altered by the COVID-19 pandemic, a global economic downturn with supply chain and market issues, and the Black Lives Matter movement’s fight for equality, all on top of what already complicated lives and businesses.
One of the biggest stories in emerging technology is the growth of different types of voice assistants:
Niche assistants such as Aider that provide back-office support.
Branded in-house assistants such as those offered by BBC and Snapchat.
White-label solutions such as Houndify that provide lots of capabilities and configurable tool sets.
With so many assistants proliferating globally, voice will become a commodity like a website or an app. And that’s not a bad thing — at least in the name of progress. It will soon (read: over the next couple years) become table stakes for a business to have voice as an interaction channel for a lovable experience that users expect. Consider that feeling you get when you realize a business doesn’t have a website: It makes you question its validity and reputation for quality. Voice isn’t quite there yet, but it’s moving in that direction.
Voice assistant adoption and usage are still on the rise
Adoption of any new technology is key. Distribution is often the inhibitor, but that has not been the case with voice. Apple, Google and Baidu have reported hundreds of millions of devices using voice, and Amazon has 200 million users. Amazon has a slightly harder job since it isn’t in the smartphone market, which gives Apple and Google greater distribution for their voice assistants.
But are people actually using the devices? Google said recently there are 500 million monthly active users of Google Assistant. Not far behind is Apple, with 375 million active users. Large numbers of people are using voice assistants, not just owning them. That’s a sign of technology gaining momentum: the technology is at a price point, and within digital and personal ecosystems, that makes it right for user adoption. The pandemic has only amplified usage, as Edison reported between March and April, a peak time for sheltering in place across the U.S.
Facetune maker Lightricks brings its popular selfie retouching features to video
Lightricks, the startup behind a suite of photo and video editing apps — including, most notably, selfie editor Facetune 2 — is taking its retouching capabilities to video. Today, the company is launching Facetune Video, a selfie video editing app that allows users to retouch and edit their selfie and portrait videos using a set of A.I.-powered tools.
While there are other selfie video editors on the market, most today are focused on edits involving filters and presets, virtually adding makeup, or using AR or stickers to decorate your video in some way. Facetune Video, meanwhile, is focused on creating a photorealistic video by offering a set of features similar to those found in Lightricks’ flagship app, Facetune.
That means users are able to retouch their face with tools for skin smoothing, teeth whitening, and face reshaping, plus eye color, makeup, conceal, glow, and matte features. Users can also make general video edits, like adjusting the brightness, contrast, and color, as other video editing apps allow. And these edits can be applied in real time, so users see how they look as the video plays, instead of after the fact.
In addition, users can apply an effect to just one frame, and Facetune Video’s post-processing technology and neural networks will apply it to the same area of every frame throughout the entire video, making it easier to quickly retouch a problem area without having to go frame-by-frame.
“In Facetune Video, the 3D face model plays a significant role; users edit only one video frame, but it’s on us, behind-the-scenes, to automatically project the location of their edits to 2D face mesh coordinates derived from the 3D face model, and then apply them consistently on all other frames in the video,” explains Lightricks co-founder and CEO Zeev Farbman. “A Lightricks app needs to be not only powerful, but fun to use, so it’s critical to us that this all happens quickly and seamlessly,” he says.
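In code terms, the approach Farbman describes amounts to anchoring an edit to face-mesh coordinates rather than to pixels, then re-projecting it onto each frame. Here is a minimal conceptual sketch of that idea; all of the types and the `applyEffectAt` renderer call are hypothetical, not Lightricks’ actual pipeline.

```typescript
// Conceptual sketch: an edit is stored against a face-mesh coordinate, and
// each frame knows how to map mesh coordinates to its own pixel positions
// (derived, per the quote, from a per-frame fit of the 3D face model).
type MeshPoint = { u: number; v: number };
type Edit = { meshAnchor: MeshPoint; effect: string; strength: number };

interface Frame {
  index: number;
  // Maps a mesh coordinate to this frame's pixel position.
  meshToPixel(p: MeshPoint): { x: number; y: number };
}

function propagateEdit(frames: Frame[], edit: Edit): void {
  for (const frame of frames) {
    // Because the edit is anchored to the mesh, not to pixels,
    // it follows the face as it moves between frames.
    const { x, y } = frame.meshToPixel(edit.meshAnchor);
    applyEffectAt(frame, x, y, edit.effect, edit.strength); // hypothetical renderer call
  }
}

declare function applyEffectAt(
  frame: Frame, x: number, y: number, effect: string, strength: number
): void;
```

Anchoring to the mesh is what keeps an edit glued to, say, a cheek as the head turns; a pixel-anchored edit would drift immediately.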
Users can also save their favorite editing functions as “presets,” allowing them to quickly apply their preferred settings to any video automatically.
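A preset, in this sense, is just a named bundle of tool settings that can be replayed on a new video. A tiny sketch, with hypothetical names and a made-up settings shape:

```typescript
// Hypothetical preset shape: tool names mapped to strengths (0..1).
type Preset = { name: string; settings: Record<string, number> };

// Replay a saved preset onto a fresh edit session.
function applyPreset(edits: Map<string, number>, preset: Preset): void {
  for (const [tool, strength] of Object.entries(preset.settings)) {
    edits.set(tool, strength); // e.g. "smooth" -> 0.4, "whiten" -> 0.6
  }
}
```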
In a future version of the app, the company plans to introduce a “heal” function which, like Facetune, will allow users to easily remove blemishes.
The technology that makes these selfie video edits work involves Lightricks’ deep neural networks, which use facial feature detection and geometry analysis to power the app’s retouching capabilities. These processes work in real time without having to transmit data to the cloud first, and there’s no lag or delay while files are rendering.
In addition, Facetune Video uses facial feature detection along with 3D face modeling A.I. to ensure that every part of the user’s face is captured for editing and retouching, the company says.
“What we’re also doing is taking advantage of lightweight neural networks. Before the user has even begun to retouch their selfie video, A.I.-powered algorithms are already working so that the user experience is quick and interactive,” says Farbman.
The app also does automated segmentation of more complex parts of the face like the interior of the eye, hair, or the lips, which helps it achieve a more accurate end result.
“It’s finding a balance between accuracy in the strength of the face modeling we use, and speed,” Farbman adds.
One challenge here was jittering, where an applied effect shakes as the video plays. The company didn’t want its videos to have this problem, which makes the end result look gimmicky, so it worked to eliminate shake-like artifacts and other face-tracking issues so videos would look more polished and professional.
The app builds off the company’s existing success and brand recognition with Facetune. With the new app, for example, the retouch algorithms mimic the original Facetune 2 experience, so users familiar with Facetune 2 will be able to quickly get the hang of the retouch tools.
The launch of the new app expands Lightricks further in the direction of video, which has become a more popular way of expressing yourself across social media, thanks to the growing use of apps like TikTok and features like Instagram Stories, for example.
Until now, however, Lightricks’ flagship video product was Videoleap, which focused on more traditional video editing rather than selfie videos where face retouching could be used.
Facetune has become so widely used, its name has become a verb — as in, “she facetunes her photos.” But it has also been criticized at times for its unrealistic results. (Of course, that’s more on the app’s users sliding the smoothing bar all the way to the end.)
Across its suite of apps, which includes the original Facetune app (Facetune Classic), Facetune 2, Seen (for Stories), Photofox, Videoleap, Enlight Quickshot, Pixaloop, Boosted, and others, including a newly launched artistic editor, Quickart, the company has generated over 350 million downloads.
Its apps also now reach nearly 200 million users worldwide. And through its subscription model, Lightricks is now seeing what Farbman describes as revenues that are “increasing exponentially year-over-year,” but that are being continually reinvested into new products.
Like its other apps, Facetune Video will monetize by way of subscriptions. The app is free to use but will offer a VIP subscription for more features, at a price point of $8 per month, $36 per year, or a one-time purchase of $70.
Facetune 2 subscribers will get a discount on annual subscriptions, as well. The company will also sell the app in its Social Media Kit bundle on the App Store, which includes Facetune Video, Facetune 2, Seen and soon, an undisclosed fourth app. However, the company isn’t yet offering a single subscription that provides access to all bundled apps.
Nebraska and Iowa win advanced wireless testbed grants for rural broadband
Everyone wants more bandwidth from the skies, but it takes a lot of testing to turn laboratory research projects into real-world performant infrastructure. A number of new technologies, sometimes placed under the banner of “5G” and sometimes not, are embarking on that transition and being deployed in real-world scenarios.
Those research trials are crucial for productizing these technologies, and ultimately, delivering consumers better wireless broadband options.
We’ve talked a bit about one of those testbeds called COSMOS up in northern Manhattan near Columbia University, which is pioneering 5G technologies within a dense urban environment. The same National Science Foundation-funded research group that financed that project, the Platforms for Advanced Wireless Research program (PAWR), has now selected two finalists for its fourth location, which has a specific focus on rural infrastructure.
Research teams in Ames, Iowa, affiliated with Iowa State University, and in Lincoln, Nebraska, affiliated with the University of Nebraska-Lincoln, each won $300,000 grants to accelerate their planning for the testbeds. Those teams will use the grants to optimize their proposals, with one expected to receive the final full grant next year.
The goal for this latest testbed is to find next-generation wireless technology stacks that can deliver cheaper and better bandwidth to rural America, areas of the country that are well-served by neither traditional cable and fiber networks nor current wireless cell tower coverage.
Whoever wins will join the existing three wireless testbeds in New York City, Salt Lake City and the Research Triangle in North Carolina.
PAWR itself is a joint public-private initiative with $100 million in funding to accelerate America’s frontier wireless innovation. It’s co-led by US Ignite, an NSF-run initiative to bring smart city ideas to fruition, and Northeastern University.
Pandora launches interactive voice ads into beta testing
Pandora is launching interactive voice ads into wider public testing, the company announced this morning. The music streaming service first introduced the new advertising format, where users verbally respond to advertiser prompts, back in December with help from a small set of early adopters, including Doritos, Ashley HomeStores, Unilever, Wendy’s, Turner Broadcasting, Comcast and Nestlé.
The ads begin by explaining to listeners what they are and how they work. They then play a short and simple message followed by a question that listeners can respond to. For example, a Wendy’s ad asked listeners if they were hungry, and if they said “yes,” the ad continued with a recommendation of what to eat. An Ashley HomeStores ad engaged listeners by offering tips on a better night’s sleep.
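The flow is simple enough to sketch as a tiny state machine: explain the format, play the message, ask the question, then branch on the spoken reply. The `AdScript` shape and the `listenForReply` helper below are hypothetical, not Pandora’s actual ad API.

```typescript
// Hedged sketch of the interactive-ad flow described above.
interface AdScript {
  explainer: string; // tells listeners what the format is and how it works
  message: string;   // short brand message
  question: string;  // prompt the listener can answer aloud
  onYes: string;     // follow-up played on an affirmative reply
}

async function runVoiceAd(ad: AdScript, play: (clip: string) => Promise<void>) {
  await play(ad.explainer);
  await play(ad.message);
  await play(ad.question);
  const reply = await listenForReply(); // hypothetical: captures the spoken response
  if (reply.trim().toLowerCase() === "yes") {
    await play(ad.onYes); // e.g. Wendy's recommending what to eat
  }
}

declare function listenForReply(): Promise<string>;
```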
The format is meant in particular to aid advertisers in connecting with users who are not looking at their phone: for example, when people are listening to Pandora while driving, cooking, cleaning the house or doing some other hands-free activity.
Since their debut, Pandora’s own data indicates the ads have been fairly well received, at least in terms of the voice format: 47% of users said they either liked or loved the concept of responding with their voice, and 30% felt neutral. Given that users don’t typically like ads at all, those stats paint a picture of an overall more positive reception. In addition, 72% of users said they found the ad format easy to engage with.
However, Pandora cautioned advertisers that more testing is needed to understand which ads get users to respond and which do not. Based on early alpha testing, ads with higher engagement seemed to be those that were entertaining, humorous or used a recognizable brand voice, it says.
As the new ad format enters beta testing, the company is expanding access to more advertisers. Acura, Anheuser-Busch, AT&T, Doritos, KFC, Lane Bryant, Purex Laundry Detergent, Purple, Unilever, T-Mobile, The Home Depot, Volvo and Xfinity, among others, have signed up to test the interactive ads.
This broader test aims to determine what the benchmarks should be for voice ads, whether the ads need tweaking to optimize for better engagement, and whether ads are better for driving conversions at the upper funnel or if consumers are ready to take action based on the ads’ content.
Related to the rollout of interactive voice ads, Pandora is also upgrading its “Voice Mode” feature, launched last year and made available to all users last July. The feature will now offer listeners on-demand access to specific tracks and albums in exchange for watching a brand video via Pandora’s existing Video Plus ad format, the same as for text-based searches.