Category archive: Artificial Intelligence


How Niantic evolved Pokémon GO for the year no one could go anywhere

Pokémon GO was created to encourage players to explore the world while coordinating impromptu large group gatherings — activities we’ve all been encouraged to avoid since the pandemic began.
And yet, analysts estimate that 2020 was Pokémon GO’s highest-earning year yet.


Niantic’s approach to 2020 was full of carefully considered changes, and I’ve highlighted many of their key decisions below.
Consider this something of an addendum to the Niantic EC-1 I wrote last year, where I outlined things like the company’s beginnings as a side project within Google, how Pokémon Go began as an April Fools’ joke and the company’s aim to build the platform that powers the AR headsets of the future.
Hit the brakes
On a press call outlining an update Niantic shipped in November, the company put it in no uncertain terms: the roadmap they'd followed over the previous ten or so months was not the one they started the year with. Their original roadmap included a handful of new features that have yet to see the light of day. They declined to say what those features were, of course (presumably because they still hope to launch them once the world is less broken), but those features just didn't make sense to release right now.
Instead, as any potential end date for the pandemic slipped further over the horizon, the team refocused in Q1 2020 on adapting what already worked and adjusting existing gameplay to let players do more while going out less.
Turning the dials
As its name indicates, GO was never meant to be played while sitting at home. John Hanke’s initial vision for Niantic was focused around finding ways to get people outside and playing together; from its very first prototype, Niantic had players running around a city to take over its virtual equivalent block by block. They’d spent nearly a decade building up a database of real-world locations that would act as in-game points meant to encourage exploration and wandering. Years of development effort went into turning Pokémon GO into more and more of a social game, requiring teamwork and sometimes even flash mob-like meetups for its biggest challenges.
Now it all needed to work from the player’s couch.
The earliest changes were the ones easiest for Niantic to make on the fly, but they had a dramatic impact on the way the game actually works.
Some of the changes:

Doubling the player's "radius" for interacting with in-game gyms, the landmarks that players can temporarily take over for their in-game team, earning occupants a bit of in-game currency based on how long they maintain control. This change let more gym battles happen from the couch.
Increasing spawn points, generally upping the number of Pokémon you could find at home dramatically.
Increasing “incense” effectiveness, which allowed players to use a premium item to encourage even more Pokémon to pop up at home. Niantic phased this change out in October, then quietly reintroduced it in late November. Incense would also last twice as long, making it cheaper for players to use.
Allowing steps taken indoors (read: on treadmills) to count toward in-game distance challenges.
Players would no longer need to walk long distances to earn entry into the online player-versus-player battle system.
Your “buddy” Pokémon (a specially designated Pokémon that you can level up Tamagotchi-style for bonus perks) would now bring you more gifts of items you’d need to play. Pre-pandemic, getting these items meant wandering to the nearby “Pokéstop” landmarks.

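The changes above amount to server-side parameter tweaks rather than new features, which is what made them fast to ship and easy to reverse. A minimal sketch of how such remotely tunable "dials" might be structured (all names and values here are hypothetical illustrations, not Niantic's actual configuration):

```python
# Hypothetical server-side game parameters, adjustable without a client update.
# Names and values are illustrative only, not Niantic's real configuration.
DEFAULT_CONFIG = {
    "gym_interaction_radius_m": 40,
    "incense_duration_min": 30,
    "incense_spawn_interval_s": 300,
    "buddy_gift_chance": 0.1,
}

PANDEMIC_OVERRIDES = {
    "gym_interaction_radius_m": 80,   # doubled: more gym battles from the couch
    "incense_duration_min": 60,       # incense lasts twice as long
    "incense_spawn_interval_s": 60,   # more Pokémon appear at home
    "buddy_gift_chance": 0.3,         # buddy brings more item gifts
}

def active_config(overrides_enabled: bool) -> dict:
    """Merge overrides onto defaults; reverting is just flipping the flag."""
    cfg = dict(DEFAULT_CONFIG)
    if overrides_enabled:
        cfg.update(PANDEMIC_OVERRIDES)
    return cfg
```

The design point is that each dial is a data change, not a code change: turning the overrides off restores pre-pandemic behavior in one step.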
By twisting some knobs and tweaking variables, Pokémon GO became much easier to play without leaving the house — but, importantly, these changes avoided anything that might break the game while being just as easy to reverse once it became safe to do so.
GO Fest goes virtual

Like this, just … online. Image Credits: Greg Kumparak

Thrown by Niantic every year since 2017, GO Fest is meant to be an ultra-concentrated version of the Pokémon GO experience. Thousands of players cram into one park, coming together to tackle challenges and capture previously unreleased Pokémon.


iPhones can now tell blind users where and how far away people are

Apple has packed an interesting new accessibility feature into the latest beta of iOS: a system that detects the presence of and distance to people in the view of the iPhone’s camera, so blind users can social distance effectively, among many other things.
The feature emerged from Apple’s ARKit, for which the company developed “people occlusion,” which detects people’s shapes and lets virtual items pass in front of and behind them. The accessibility team realized that this, combined with the accurate distance measurements provided by the lidar units on the iPhone 12 Pro and Pro Max, could be an extremely useful tool for anyone with a visual impairment.
Of course, during the pandemic, one immediately thinks of keeping six feet away from other people. But knowing where others are and how far away is a basic visual task that we use all the time to plan where we walk, which line we get in at the store, whether to cross the street and so on.

The new feature, which will be part of the Magnifier app, uses the lidar and wide-angle camera of the Pro and Pro Max, giving feedback to the user in a variety of ways.

The lidar in the iPhone 12 Pro shows up in this infrared video. Each dot reports back the precise distance of what it reflects off of.

First, it tells the user whether there are any people in view at all. If someone is there, it will then say how far away the closest person is in feet or meters, updating regularly as they approach or move farther away. The sound is positioned in stereo to match the person's direction in the camera's view.
Second, it allows the user to set tones corresponding to certain distances. For example, if they set the distance at six feet, they’ll hear one tone if a person is more than six feet away, another if they’re inside that range. After all, not everyone wants a constant feed of exact distances if all they care about is staying two paces away.
The third feature, perhaps extra useful for folks who have both visual and hearing impairments, is a haptic pulse that goes faster as a person gets closer.
Last is a visual feature for people who need a little help discerning the world around them, an arrow that points to the detected person on the screen. Blindness is a spectrum, after all, and any number of vision problems could make a person want a bit of help in that regard.
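The four feedback modes can be thought of as independent views on a single distance reading. A rough Python sketch of that logic, where the function name, default threshold and haptic mapping are my own illustration rather than Apple's implementation:

```python
from typing import Optional

def people_feedback(distance_m: Optional[float],
                    threshold_m: float = 1.8) -> dict:
    """Derive the four feedback channels from one distance measurement.

    distance_m is None when no person is detected in the camera view.
    """
    if distance_m is None:
        return {"speech": "No people detected", "tone": None,
                "haptic_hz": 0.0, "arrow": False}
    return {
        # 1. Spoken distance to the closest person, updated as they move.
        "speech": f"Person {distance_m:.1f} meters away",
        # 2. One tone outside the chosen threshold, another inside it.
        "tone": "inside" if distance_m <= threshold_m else "outside",
        # 3. Haptic pulse rate increases as the person gets closer.
        "haptic_hz": min(10.0, 2.0 / max(distance_m, 0.2)),
        # 4. On-screen arrow for users with partial vision.
        "arrow": True,
    }
```

All four channels derive from the same lidar distance measurement, which is why the feature can offer them simultaneously rather than as alternatives.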


The system requires a decent image on the wide-angle camera, so it won’t work in pitch darkness. And while the restriction of the feature to the high end of the iPhone line reduces the reach somewhat, the constantly increasing utility of such a device as a sort of vision prosthetic likely makes the investment in the hardware more palatable to people who need it.
Here’s how it works so far:

Here’s how people detection works in iOS 14.2 beta – the voiceover support is a tiny bit buggy but still super cool https://t.co/vCyX2wYfx3 pic.twitter.com/e8V4zMeC5C
— Matthew Panzarino (@panzer) October 31, 2020

This is far from the first tool of its kind; many phones and dedicated devices have features for finding objects and people, but it's not often that such a capability comes baked in as a standard feature.
People detection should be available on the iPhone 12 Pro and Pro Max running the iOS 14.2 release candidate that was just made available today. Details will presumably appear soon on Apple's dedicated iPhone accessibility site.



Accessibility’s nextgen breakthroughs will be literally in your head

Jim Fruchterman
Contributor


Jim Fruchterman is the founder of Tech Matters and Benetech, nonprofit developers of technology for social good.


Predicting the future of technology for people with visual impairments is easier than you might think. In 2003, I wrote an article entitled "In the Palm of Your Hand" for the Journal of Visual Impairment & Blindness from the American Foundation for the Blind. The arrival of the iPhone was still four years away, but I was able to confidently predict the center of assistive technology shifting from the desktop PC to the smartphone.
“A cell phone costing less than $100,” I wrote, “will be able to see for the person who can’t see, read for the person who can’t read, speak for the person who can’t speak, remember for the person who can’t remember, and guide the person who is lost.” Looking at the tech trends at the time, that transition was as inevitable as it might have seemed far-fetched.
We are at a similar point now, which is why I am excited to play a part in Sight Tech Global, a virtual event Dec. 2-3 that is convening the top technologists to discuss how AI and related technologies will usher in a new era of remarkable advances for accessibility and assistive tech, in particular for people who are blind or visually impaired.
To get to the future, let me turn to the past. I was walking around the German city of Speyer in the 1990s with pioneering blind assistive tech entrepreneur Joachim Frank. Joachim took me on a flight of fancy about what he really wanted from assistive technology, as opposed to what was then possible. He quickly highlighted three stories of how advanced tech could help him as he was walking down the street with me. 

As I walk down the street, and walk by a supermarket, I do not want it to read all of the signs in the window. However, if one of the signs notes that Kasseler Rippchen (smoked pork chops, his favorite) are on sale, and the price is particularly good, I would like that whispered in my ear.
And then, as a young woman approaches me walking in the opposite direction, I’d like to know if she’s wearing a wedding ring.
Finally, I would like to know that someone has been following me for the last two blocks, that he is a known mugger, and that if I quicken my walking speed, go fifty meters ahead, turn right, and go another seventy meters, I will arrive at a police substation! 

Joachim blew my mind. In one short walk, he outlined a far bolder vision of what tech could do for him, without bogging down in the details. He wanted help with saving money, meeting new friends and keeping himself safe. He wanted abilities which not only equaled what people with normal vision had, but exceeded them. Above all, he wanted tools which knew him and his desires and needs. 
We are nearing the point where we can build Joachim's dreams. It won't matter if the assistant whispers in your ear, or uses a direct neural implant to communicate. We will probably see both. But the nexus of tech will move inside your head, and become a powerful instrument for equality of access. A new tech stack with perception as a service. Counter-measures to outsmart algorithmic discrimination. Tech personalization. Affordability.
That experience will be built on an ever more application-rich and readily available technology stack in the cloud. As all of that gets cheaper to access, product designers can create and experiment faster than ever. At first, it will be expensive, but not for long, as adoption (probably by far more than simply disabled people) drives down the price. I started my career in tech for the blind by introducing a reading machine that was a big deal because it halved the price of that technology to $5,000. Today even better OCR is a free app on any smartphone.
We could dive into more details of how we build Joachim's dreams and meet the needs of millions of other individuals with vision disabilities. But it will be far more interesting to explore with the world's top experts at Sight Tech Global on Dec. 2-3 how those tech tools will become enabled In Your Head!
Registration is free and open to all. 


Daily Crunch: India bans PUBG and other Chinese apps

India continues to crack down on Chinese apps, Microsoft launches a deepfake detector and Google offers a personalized news podcast. This is your Daily Crunch for September 2, 2020.
The big story: India bans PUBG and other Chinese apps
The Indian government continues its purge of apps created by or linked to Chinese companies. It already banned 59 Chinese apps back in June, including TikTok.
India’s IT Ministry justified the decision as “a targeted move to ensure safety, security, and sovereignty of Indian cyberspace.” The apps banned today include search engine Baidu, business collaboration suite WeChat Work, cloud storage service Tencent Weiyun and the game Rise of Kingdoms. But PUBG is the most popular, with more than 40 million monthly active users.

The tech giants
Microsoft launches a deepfake detector tool ahead of US election — The Video Authenticator tool will provide a confidence score that a given piece of media has been artificially manipulated.
Google’s personalized audio news feature, Your News Update, comes to Google Podcasts — That means you’ll be able to get a personalized podcast of the latest headlines.
Twitch launches Watch Parties to all creators worldwide — Twitch is doubling down on becoming more than just a place for live-streamed gaming videos.
Startups, funding and venture capital
Indonesian insurtech startup PasarPolis gets $54 million Series B from investors including LeapFrog and SBI — The startup’s goal is to reach people who have never purchased insurance before with products like inexpensive “micro-policies” that cover broken device screens.
XRobotics is keeping the dream of pizza robots alive — XRobotics’ offering resembles an industrial 3D printer, in terms of size and form factor.
India’s online learning platform Unacademy raises $150 million at $1.45 billion valuation — India has a new startup unicorn.
Advice and analysis from Extra Crunch
The IPO parade continues as Wish files, Bumble targets an eventual debut — Alex Wilhelm looks at the latest IPO news, including Bumble planning to go public at a $6 to $8 billion valuation.
3 ways COVID-19 has affected the property investment market — COVID-19 has stirred up the long-settled dust on real estate investing.
Deep Science: Dog detectors, Mars mappers and AI-scrambling sweaters — Devin Coldewey kicks off a new feature in which he gets you all caught up on the most recent research papers and scientific discoveries.
(Reminder: Extra Crunch is our subscription membership program, which aims to democratize information about startups. You can sign up here.)
Everything else
‘The Mandalorian’ launches its second season on Oct. 30 — The show finished shooting its second season right before the pandemic shut down production everywhere.
GM, Ford wrap up ventilator production and shift back to auto business — Both automakers said they’d completed their contracts with the Department of Health and Human Services.
The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.


Microsoft’s Seeing AI founder Saqib Shaikh is speaking at Sight Tech Global

When Microsoft CEO Satya Nadella introduced Saqib Shaikh on stage at BUILD in 2016, he was obviously moved by the engineer’s “passion and empathy,” which Nadella said, “is going to change the world.”
That assessment was on the mark because Shaikh went on to co-found the mobile app Seeing AI, which is a showcase for the power of AI applied to the needs of people who are blind or visually impaired. Using the camera on a phone, the Seeing AI app can describe a physical scene, identify people and their demeanor, read documents (including handwritten ones), read currency values and identify colors. The latest version uses haptic technology to help the user discover the position of objects and people in an image. The app has been used 20 million times since launch nearly three years ago, and today it works in eight languages.
It’s exciting to announce that Shaikh will be speaking at Sight Tech Global, a virtual, global event that addresses how rapid advances in technology, many of them AI-related, will influence the development of accessibility and assistive technology for people who are blind or visually impaired. The show, which is a project for the Vista Center for the Blind and Visually Impaired Silicon Valley, launched recently on TechCrunch. The virtual event is Dec. 2-3 and free to the public. Pre-register here. 
Shaikh lost his vision at the age of 7, and attended a school for blind students, where he was intrigued by computers that could “talk” to students. He went on to study computer science at the U.K.’s University of Sussex. “One of the things I had always dreamt of since university,” he says, “was something that could tell you at any moment who and what’s going on around you.”  That dream turned into his destiny.
After he joined Microsoft in 2006, Shaikh participated in Microsoft’s annual, week-long hackathons in 2014 and 2015 to develop the idea of applying AI in ways that could help people who are blind or visually impaired. Not long after, Seeing AI became an official project and Shaikh’s full-time job at Microsoft. The company’s Cognitive Services APIs have been critical to his work, and he now leads a team of engineers who are leveraging emerging technology to empower people who are blind.
“When it comes to AI,” says Shaikh, “I consider disabled people to be really good early adopters. We can point to history where blind people have been using talking books for decades and so on, all the way through to OCR text-to-speech, which is early AI. Today, this idea that a computer can look at an image and turn it into a sentence has many use-cases but probably the most compelling is to describe that image to a blind person. For blind people this is incredibly empowering.” Below is a video Microsoft released in 2016 about Shaikh and the Seeing AI project.

The Seeing AI project is an early example of a tool that taps various AI technologies in ways that produce an almost “intelligent” experience. Seeing AI doesn’t just read the text, for example, it also tells the user how to move the phone so the document is in the viewfinder. It doesn’t just tell you there are people in front of you, it tells you something about them, including who they are (if you have named them in the past) and their general appearance.
At Sight Tech Global, Shaikh will speak about the future of Seeing AI and his views on how accessibility will unfold in a world more richly enabled by cloud compute, low latency networks and ever more sophisticated AI algorithms and data sets. 
To pre-register for a free pass, please visit Sight Tech Global.
Please follow the event on Twitter @Globalsight.
Sponsors are welcome, and there are opportunities available ranging from branding support to content integration. Please email sponsor@sighttechglobal.com for more information.
