Category archive: Artificial Intelligence

PayTalk promises to handle all sorts of payments with voice, but the app has a long way to go

Neji Tawo, the founder of boutique software development company Wiscount Corporation, says he was inspired by his dad to become an engineer. When Tawo was a kid, his dad tasked him with coming up with a formula to calculate the gas in the fuel tanks at his family’s station. Tawo then created an app for gas stations to help prevent gas siphoning.
The seed of the idea for Tawo’s latest venture came from a different source: a TV ad for a charity. Frustrated by his experience filling out donation forms, Tawo sought an alternative, faster way to complete such transactions. He settled on voice.
Tawo’s PayTalk, which is one of the first products in Amazon’s Black Founders Build with Alexa Program, uses conversational AI to carry out transactions via smart devices. Using the PayTalk app, users can do things like find a ride, order a meal, pay bills, purchase tickets and even apply for a loan, Tawo says.
“We see the opportunity in a generation that’s already using voice services for day-to-day tasks like checking the weather, playing music, calling friends and more,” Tawo said. “At PayTalk, we feel voice services should function like a person — being capable of doing several things from hailing you a ride to taking your delivery order to paying your phone bills.”

PayTalk is powered by out-of-the-box voice recognition models on the frontend and various API connectors behind the scenes, Tawo explains. In addition to Alexa, the app integrates with Siri and Google Assistant, letting users add voice shortcuts like “Hey Siri, make a reservation on PayTalk.”
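For readers curious how a phrase like that gets wired up, Apple exposes this kind of integration through Siri Shortcuts. Here is a minimal sketch of how an iOS app can donate such a shortcut; the activity identifier, function name and phrasing are illustrative assumptions, not PayTalk’s actual code.

```swift
import UIKit
import Intents

// Donate a "make a reservation" activity so iOS can suggest it and the user
// can attach a phrase like "Hey Siri, make a reservation on PayTalk" to it.
func donateReservationShortcut(from viewController: UIViewController) {
    // Hypothetical activity type; a real app registers its own identifier.
    let activity = NSUserActivity(activityType: "com.example.paytalk.makeReservation")
    activity.title = "Make a reservation on PayTalk"
    activity.isEligibleForSearch = true
    activity.isEligibleForPrediction = true
    activity.suggestedInvocationPhrase = "Make a reservation on PayTalk"

    // Attaching the activity to the visible view controller and marking it
    // current is what "donates" it to the system.
    viewController.userActivity = activity
    activity.becomeCurrent()
}
```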
“Myself and my team have bootstrapped this all along the way, as many VCs we approached early on were skeptical about voice being the device form factor of the future. The industry is in its nascent stages and many still view it with skepticism,” Tawo said. “With the COVID-19 pandemic and subsequent shift to doing more remotely across different types of transactions (i.e. ordering food from home, shopping online, etc.), we … saw that there was increased interest in the use of voice services. This in turn boosted demand for our product and we believe that we are positioned to continue to expand our offerings and make voice services more useful as a result.”
Tawo’s pitch for PayTalk reminded me a lot of Viv, the startup launched by Siri co-creator Adam Cheyer (later acquired by Samsung) that proposed voice as the connective tissue between disparate apps and services. It’s a promising idea — tantalizing, even. But where PayTalk is concerned, the execution isn’t quite there yet.
The PayTalk app is only available for iOS and Android at the moment, and in my experience with it, it’s a little rough around the edges. A chatbot-like flow allows you to type commands — a nice fallback for situations where voice doesn’t make sense (or isn’t appropriate) — but doesn’t transition to activities particularly gracefully. When I used it to look for a cab by typing the suggested “book a ride” command, PayTalk asked for a pickup and dropoff location before throwing me into an Apple Maps screen without any of the information I’d just entered.
The reservation and booking functionality seems broken as well. PayTalk walked me through the steps of finding a restaurant, asking which time I’d like to reserve, the size of my party and so on. But the app let me “confirm” a table for 2 a.m. at SS106 Aperitivo Bar — an Italian restaurant in Alberta — on a day the restaurant closes at 10 p.m.
Image Credits: PayTalk
Other “categories” of commands in PayTalk are very limited in what they can accomplish — or simply nonfunctional. I can only order groceries from two services in my area (Downtown Brooklyn) at present — MNO African Market and Simi African Foods Market. Requesting a loan prompts an email with a link to Glance Capital, a personal loan provider for gig workers, that throws a 404 error when clicked. A command to book “luxury services” like a yacht or “sea plane” (yes, really) fails to reach anything resembling a confirmation screen, while the “pay for parking” command confusingly asks for a zone number.
To fund purchases through PayTalk (e.g. parking), there’s an in-app wallet. I couldn’t figure out a way to transfer money to it, though. The app purports to accept payment cards, but tapping on the “Use Card” button triggers a loading animation that quickly times out.
I could go on. But suffice it to say that PayTalk is in the very earliest stages of development. I began to think the app had been released prematurely, but PayTalk’s official Twitter account has been advertising it for at least the past few months.
Perhaps PayTalk will eventually grow into the shoes of the pitch Tawo gave me, so to speak — Wiscount is kicking off a four-month tenure at the Black Founders Build with Alexa Program. In the meantime, it must be pointed out that Alexa, Google Assistant and Siri are already capable of handling much of what PayTalk promises to one day accomplish.

“With the potential $100,000 investment [from the Black Founders Build with Alexa Program], we will seek to raise a seed round to expand our product offerings to include features that would allow customers to seamlessly carry out e-commerce and financial transactions on voice service-powered devices,” Tawo said. “PayTalk is mainly a business-to-consumer platform. However, as we continue to innovate and integrate voice-activated options … we see the potential to support enterprise use cases by replacing and automating the lengthy form filling processes that are common for many industries like healthcare.”
Hopefully, the app’s basic capabilities get attention before anything else.

How Niantic evolved Pokémon GO for the year no one could go anywhere

Pokémon GO was created to encourage players to explore the world while coordinating impromptu large group gatherings — activities we’ve all been encouraged to avoid since the pandemic began.
And yet, analysts estimate that 2020 was Pokémon GO’s highest-earning year yet.

By twisting some knobs and tweaking variables, Pokémon GO became much easier to play without leaving the house.

Niantic’s approach to 2020 was full of carefully considered changes, and I’ve highlighted many of their key decisions below.
Consider this something of an addendum to the Niantic EC-1 I wrote last year, where I outlined things like the company’s beginnings as a side project within Google, how Pokémon GO began as an April Fools’ joke and the company’s aim to build the platform that powers the AR headsets of the future.
Hit the brakes
On a press call outlining an update Niantic shipped in November, the company put it in no uncertain terms: the roadmap they’d followed over the last ten or so months was not the one they started the year with. Their original roadmap included a handful of new features that have yet to see the light of day. They declined to say what those features were, of course (presumably because they still hope to launch them once the world is less broken) — but they just didn’t make sense to release right now.
Instead, as any potential end date for the pandemic slipped further over the horizon, the team refocused in Q1 2020 on figuring out ways to adapt what already worked and adjust existing gameplay to let players do more while going out less.
Turning the dials
As its name indicates, GO was never meant to be played while sitting at home. John Hanke’s initial vision for Niantic was focused on finding ways to get people outside and playing together; from its very first prototype, Niantic had players running around a city to take over its virtual equivalent block by block. They’d spent nearly a decade building up a database of real-world locations that would act as in-game points meant to encourage exploration and wandering. Years of development effort went into turning Pokémon GO into more and more of a social game, requiring teamwork and sometimes even flash mob-like meetups for its biggest challenges.
Now it all needed to work from the player’s couch.
The earliest changes were those that were easiest for Niantic to make on the fly, but they had dramatic impacts on the way the game actually works.
Some of the changes:

Doubling the player’s “radius” for interacting with in-game gyms, the landmarks that players can temporarily take over for their in-game team, earning occupants a bit of in-game currency based on how long they maintain control. This change let more gym battles happen from the couch.
Increasing spawn points, dramatically upping the number of Pokémon you could find at home.
Increasing “incense” effectiveness, which allowed players to use a premium item to encourage even more Pokémon to pop up at home. Niantic phased this change out in October, then quietly reintroduced it in late November. Incense would also last twice as long, making it cheaper for players to use.
Allowing steps taken indoors (read: on treadmills) to count toward in-game distance challenges.
Players would no longer need to walk long distances to earn entry into the online player-versus-player battle system.
Your “buddy” Pokémon (a specially designated Pokémon that you can level up Tamagotchi-style for bonus perks) would now bring you more gifts of items you’d need to play. Pre-pandemic, getting these items meant wandering to the nearby “Pokéstop” landmarks.

By twisting some knobs and tweaking variables, Pokémon GO became much easier to play without leaving the house — but, importantly, these changes avoided anything that might break the game while being just as easy to reverse once it became safe to do so.
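To make the “knobs and dials” idea concrete, here is a minimal sketch of what server-tunable gameplay parameters of this kind might look like. The names and numbers are invented for illustration; Niantic hasn’t published its actual configuration.

```swift
import Foundation

// Hypothetical server-tunable gameplay "dials" of the kind described above.
// Field names and values are illustrative assumptions, not Niantic's config.
struct GameplayTunables: Decodable {
    var gymInteractionRadiusMeters: Double    // doubled so gyms are reachable from indoors
    var spawnPointMultiplier: Double          // more Pokémon appear near home
    var incenseSpawnIntervalSeconds: Double   // incense attracts Pokémon more often
    var incenseDurationMinutes: Double        // incense lasts twice as long
    var walkDistanceForBattleEntryKm: Double  // 0 removes the walking requirement
}

// Because these values live server-side, reverting them once it's safe to go
// out again is a config push rather than an app update.
let pandemicTunables = GameplayTunables(
    gymInteractionRadiusMeters: 80,     // up from a hypothetical 40
    spawnPointMultiplier: 2.0,
    incenseSpawnIntervalSeconds: 60,    // down from a hypothetical 300
    incenseDurationMinutes: 60,         // up from a hypothetical 30
    walkDistanceForBattleEntryKm: 0
)
```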
GO Fest goes virtual

Like this, just … online. Image Credits: Greg Kumparak

Thrown by Niantic every year since 2017, GO Fest is meant to be an ultra-concentrated version of the Pokémon GO experience. Thousands of players cram into one park, coming together to tackle challenges and capture previously unreleased Pokémon.


iPhones can now tell blind users where and how far away people are

Apple has packed an interesting new accessibility feature into the latest beta of iOS: a system that detects the presence of and distance to people in the view of the iPhone’s camera, so blind users can social distance effectively, among many other things.
The feature emerged from Apple’s ARKit, for which the company developed “people occlusion,” which detects people’s shapes and lets virtual items pass in front of and behind them. The accessibility team realized that this, combined with the accurate distance measurements provided by the lidar units on the iPhone 12 Pro and Pro Max, could be an extremely useful tool for anyone with a visual impairment.
Of course, during the pandemic, one immediately thinks of keeping six feet away from other people. But knowing where others are and how far away they are is a basic visual task that we use all the time to plan where we walk, which line we get in at the store, whether to cross the street and so on.

The new feature, which will be part of the Magnifier app, uses the lidar and wide-angle camera of the Pro and Pro Max, giving feedback to the user in a variety of ways.

The lidar in the iPhone 12 Pro shows up in this infrared video. Each dot reports back the precise distance of what it reflects off of.

First, it tells the user whether there are people in view at all. If someone is there, it will then say how far away the closest person is in feet or meters, updating regularly as they approach or move further away. The sound is positioned in stereo to match the direction of the person in the camera’s view.
Second, it allows the user to set tones corresponding to certain distances. For example, if they set the distance at six feet, they’ll hear one tone if a person is more than six feet away, another if they’re inside that range. After all, not everyone wants a constant feed of exact distances if all they care about is staying two paces away.
The third feature, perhaps extra useful for folks who have both visual and hearing impairments, is a haptic pulse that goes faster as a person gets closer.
Last is a visual feature for people who need a little help discerning the world around them, an arrow that points to the detected person on the screen. Blindness is a spectrum, after all, and any number of vision problems could make a person want a bit of help in that regard.
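Putting the pieces above together: ARKit’s person segmentation provides a mask of which camera pixels belong to a person, and a per-pixel depth estimate (refined by the lidar on the Pro models) gives the distance for those pixels. Here is a rough sketch, under my own assumptions, of how public ARKit APIs could be combined this way; it is not Apple’s implementation, and the feedback hooks (speech, tones, haptics) are left as comments.

```swift
import ARKit

// A rough sketch of estimating the distance to the nearest person by combining
// ARKit's person segmentation mask with its per-pixel depth estimate.
// Not Apple's implementation.
final class PeopleDistanceEstimator: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) else { return }
        let config = ARWorldTrackingConfiguration()
        config.frameSemantics = .personSegmentationWithDepth
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let mask = frame.segmentationBuffer,            // which pixels are "person"
              let depth = frame.estimatedDepthData else { return }  // depth in meters for those pixels
        if let meters = nearestPersonDistance(mask: mask, depth: depth) {
            // Hook up feedback here: speak the distance, pick a tone based on a
            // threshold (e.g. six feet is about 1.83 m), or drive a haptic pulse rate.
            print(String(format: "Nearest person: %.1f m", meters))
        }
    }

    private func nearestPersonDistance(mask: CVPixelBuffer, depth: CVPixelBuffer) -> Float? {
        CVPixelBufferLockBaseAddress(mask, .readOnly)
        CVPixelBufferLockBaseAddress(depth, .readOnly)
        defer {
            CVPixelBufferUnlockBaseAddress(mask, .readOnly)
            CVPixelBufferUnlockBaseAddress(depth, .readOnly)
        }
        guard let maskBase = CVPixelBufferGetBaseAddress(mask),
              let depthBase = CVPixelBufferGetBaseAddress(depth) else { return nil }
        let width = CVPixelBufferGetWidth(mask)
        let height = CVPixelBufferGetHeight(mask)
        let maskRowBytes = CVPixelBufferGetBytesPerRow(mask)
        let depthRowBytes = CVPixelBufferGetBytesPerRow(depth)

        // Take the minimum valid depth over pixels classified as "person".
        var nearest = Float.greatestFiniteMagnitude
        for y in 0..<height {
            let maskRow = maskBase.advanced(by: y * maskRowBytes).assumingMemoryBound(to: UInt8.self)
            let depthRow = depthBase.advanced(by: y * depthRowBytes).assumingMemoryBound(to: Float32.self)
            for x in 0..<width where maskRow[x] == ARFrame.SegmentationClass.person.rawValue {
                let d = depthRow[x]
                if d > 0, d < nearest { nearest = d }
            }
        }
        return nearest < .greatestFiniteMagnitude ? nearest : nil
    }
}
```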

The system requires a decent image on the wide-angle camera, so it won’t work in pitch darkness. And while the restriction of the feature to the high end of the iPhone line reduces the reach somewhat, the constantly increasing utility of such a device as a sort of vision prosthetic likely makes the investment in the hardware more palatable to people who need it.
Here’s how it works so far:

Here’s how people detection works in iOS 14.2 beta – the voiceover support is a tiny bit buggy but still super cool https://t.co/vCyX2wYfx3 pic.twitter.com/e8V4zMeC5C
— Matthew Panzarino (@panzer) October 31, 2020

This is far from the first tool of its kind — many phones and dedicated devices have features for finding objects and people, but it’s not often that such a capability comes baked in as a standard feature.
People detection should be available on the iPhone 12 Pro and Pro Max running the iOS 14.2 release candidate that was just made available today. Details will presumably appear soon on Apple’s dedicated iPhone accessibility site.


Accessibility’s nextgen breakthroughs will be literally in your head

Jim Fruchterman
Contributor

Jim Fruchterman is the founder of Tech Matters and Benetech, nonprofit developers of technology for social good.


Predicting the future of technology for people with visual impairments is easier than you might think. In 2003, I wrote an article entitled “In the Palm of Your Hand” for the Journal of Visual Impairment & Blindness from the American Foundation for the Blind. The arrival of the iPhone was still four years away, but I was able to confidently predict that the center of assistive technology would shift from the desktop PC to the smartphone.
“A cell phone costing less than $100,” I wrote, “will be able to see for the person who can’t see, read for the person who can’t read, speak for the person who can’t speak, remember for the person who can’t remember, and guide the person who is lost.” Looking at the tech trends at the time, that transition was as inevitable as it might have seemed far-fetched.
We are at a similar point now, which is why I am excited to play a part in Sight Tech Global, a virtual event Dec. 2-3 that is convening the top technologists to discuss how AI and related technologies will usher in a new era of remarkable advances for accessibility and assistive tech, in particular for people who are blind or visually impaired.
To get to the future, let me turn to the past. I was walking around the German city of Speyer in the 1990s with pioneering blind assistive tech entrepreneur Joachim Frank. Joachim took me on a flight of fancy about what he really wanted from assistive technology, as opposed to what was then possible. He quickly highlighted three stories of how advanced tech could help him as he was walking down the street with me. 

As I walk down the street and walk by a supermarket, I do not want it to read all of the signs in the window. However, if one of the signs notes that Kasseler Rippchen (smoked pork chops, his favorite) are on sale, and the price is particularly good, I would like that whispered in my ear.
And then, as a young woman approaches me walking in the opposite direction, I’d like to know if she’s wearing a wedding ring.
Finally, I would like to know that someone has been following me for the last two blocks, that he is a known mugger, and that if I quicken my walking speed, go fifty meters ahead, turn right, and go another seventy meters, I will arrive at a police substation! 

Joachim blew my mind. In one short walk, he outlined a far bolder vision of what tech could do for him, without getting bogged down in the details. He wanted help with saving money, meeting new friends and keeping himself safe. He wanted abilities that not only equaled what people with normal vision had, but exceeded them. Above all, he wanted tools that knew him and his desires and needs.
We are nearing the point where we can build Joachim’s dreams. It won’t matter whether the assistant whispers in your ear or uses a direct neural implant to communicate. We will probably see both. But the nexus of tech will move inside your head and become a powerful instrument for equality of access. A new tech stack with perception as a service. Countermeasures to outsmart algorithmic discrimination. Tech personalization. Affordability.
That experience will be built on an ever more application-rich and readily available technology stack in the cloud. As all of that gets cheaper and cheaper to access, product designers can create and experiment faster than ever. At first, it will be expensive, but not for long, as adoption – probably by far more than simply disabled people – drives down the price. I started my career in tech for the blind by introducing a reading machine that was a big deal because it halved the price of that technology to $5,000. Today, even better OCR is a free app on any smartphone.
We could dive into more details of how we build Joachim’s dreams and meet the needs of millions of other individuals with vision disabilities. But it will be far more interesting to explore with the world’s top experts at Sight Tech Global on Dec. 2-3 how those tech tools will become enabled In Your Head!
Registration is free and open to all. 


Daily Crunch: India bans PUBG and other Chinese apps

India continues to crack down on Chinese apps, Microsoft launches a deepfake detector and Google offers a personalized news podcast. This is your Daily Crunch for September 2, 2020.
The big story: India bans PUBG and other Chinese apps
The Indian government continues its purge of apps created by or linked to Chinese companies. It already banned 59 Chinese apps back in June, including TikTok.
India’s IT Ministry justified the decision as “a targeted move to ensure safety, security, and sovereignty of Indian cyberspace.” The apps banned today include search engine Baidu, business collaboration suite WeChat Work, cloud storage service Tencent Weiyun and the game Rise of Kingdoms. But PUBG is the most popular, with more than 40 million monthly active users.

The tech giants
Microsoft launches a deepfake detector tool ahead of US election — The Video Authenticator tool will provide a confidence score that a given piece of media has been artificially manipulated.
Google’s personalized audio news feature, Your News Update, comes to Google Podcasts — That means you’ll be able to get a personalized podcast of the latest headlines.
Twitch launches Watch Parties to all creators worldwide — Twitch is doubling down on becoming more than just a place for live-streamed gaming videos.
Startups, funding and venture capital
Indonesian insurtech startup PasarPolis gets $54 million Series B from investors including LeapFrog and SBI — The startup’s goal is to reach people who have never purchased insurance before with products like inexpensive “micro-policies” that cover broken device screens.
XRobotics is keeping the dream of pizza robots alive — XRobotics’ offering resembles an industrial 3D printer, in terms of size and form factor.
India’s online learning platform Unacademy raises $150 million at $1.45 billion valuation — India has a new startup unicorn.
Advice and analysis from Extra Crunch
The IPO parade continues as Wish files, Bumble targets an eventual debut — Alex Wilhelm looks at the latest IPO news, including Bumble planning to go public at a $6 to $8 billion valuation.
3 ways COVID-19 has affected the property investment market — COVID-19 has stirred up the long-settled dust on real estate investing.
Deep Science: Dog detectors, Mars mappers and AI-scrambling sweaters — Devin Coldewey kicks off a new feature in which he gets you all caught up on the most recent research papers and scientific discoveries.
(Reminder: Extra Crunch is our subscription membership program, which aims to democratize information about startups. You can sign up here.)
Everything else
‘The Mandalorian’ launches its second season on Oct. 30 — The show finished shooting its second season right before the pandemic shut down production everywhere.
GM, Ford wrap up ventilator production and shift back to auto business — Both automakers said they’d completed their contracts with the Department of Health and Human Services.
The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.
