Category archive: machine learning


Meme-based dating app Schmooze allows matches to share memes with each other

Sharing a meme with someone you just met can be tricky, especially if you’re trying to flirt with them. Are they into wholesome puppy memes? Or maybe they like dark Homelander memes? Whatever the case may be, Schmooze, the meme-based dating app, is designed to match users with like-minded people who want to make each other laugh.
To use the app, a person swipes right to like, left to dislike, and up to love memes from a selection curated via artificial intelligence. Match suggestions will pop up on the screen based on meme choices, and the person can select either “Snooze” or “Schmooze.” The algorithm takes into consideration what types of humor both you and the potential match enjoy or reject.
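Schmooze hasn’t published how its matching actually works, but the signal described here (per-category likes, loves and dislikes) suggests a profile-similarity approach. A minimal illustrative sketch in Python, with hypothetical category names and weights:

```python
from math import sqrt

# Hypothetical humor categories; Schmooze's real taxonomy and algorithm are not public.
CATEGORIES = ["wholesome", "dark", "puns", "relatable", "gaming"]

def humor_profile(swipes):
    """Build a per-category score from swipe events.

    swipes: list of (category, action), where action is
    'like' (+1), 'love' (+2) or 'dislike' (-1).
    """
    weights = {"like": 1, "love": 2, "dislike": -1}
    profile = {c: 0.0 for c in CATEGORIES}
    for category, action in swipes:
        profile[category] += weights[action]
    return profile

def similarity(a, b):
    """Cosine similarity between two humor profiles (0 when either is empty)."""
    dot = sum(a[c] * b[c] for c in CATEGORIES)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

alice = humor_profile([("wholesome", "love"), ("puns", "like"), ("dark", "dislike")])
bob = humor_profile([("wholesome", "like"), ("puns", "love"), ("gaming", "dislike")])
print(round(similarity(alice, bob), 2))  # 0.67
```

Candidates with the highest similarity scores would then surface as match suggestions; a real system would also fold in rejection signals and distance filters.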
Today, Schmooze launched a new feature called “Schmooze Flirts” for users to share memes with their matches. “Schmooze Flirts” handpicks memes from the app’s algorithm that a match would likely laugh at and encourages a user to send a meme to their match to spark a conversation. The user has 48 hours to message the other person or the chat will automatically disappear. The app also provides icebreakers to help you out before a match gets removed.
Users can now also set a distance limit for matches.
Additional features are coming to Schmooze next week, including meme pick-up lines and “Schmooze Ayooo,” a place on the app for people to post funny pictures from their own camera roll. Currently, the app doesn’t allow memes from users to enter its algorithm due to quality and appropriateness control concerns. Over the next two months, Schmooze plans to create a user-content platform and let users upload their own memes.
Schmooze is available on the App Store and Google Play Store; however, “Schmooze Flirts” and the upcoming features are only available for iOS users. There will be a delay of two to three weeks until the features are available on Android devices.
When setting up a profile, the user must answer what their “#MemeVibe” is by selecting options such as dark humor, puns, wholesome, relatable, NSFW, gaming and anime. They then add profile photos, a bio, their four favorite music artists, memes they like and their TV show binge list.
Note that while there are systems in place to catch fake profiles, extensive background checks aren’t done to eliminate people with a history of crime or violence. The target audience for Schmooze is Gen Z (17+), so users should always prioritize safety, especially when it comes to dating online.


Vidya Madhavan and Abhinav Anurag founded the startup and beta-tested the Schmooze app at Stanford University. In March 2021, Schmooze came out of beta. The dating app has grown to over 50,000 users with more than 15 million memes swiped and 750,000+ matches made.
A few months ago, the company raised $3.2 million in seed funding, led by Inventus Capital and Silicon Valley Quad, with participation from Lightspeed and Graph Ventures.

Datch secures $10M to bring voice assistants to factory floors

Datch, a company that develops AI-powered voice assistants for industrial customers, today announced that it raised $10 million in a Series A round led by Blackhorn Ventures. The proceeds will be used to expand operations, CEO Mark Fosdike said, as well as develop new software support, tools and capabilities.
Datch started when Fosdike, who has a background in aerospace engineering, met two former Siemens engineers — Aric Thorn and Ben Purcell. They came to the collective realization that voice products built for business customers have to overcome business-specific challenges, like understanding jargon, acronyms and syntax unique to particular customers.
“The way we extract information from systems changes every year, but the way we input information — especially in the industrial world — hasn’t changed since the invention of the keyboard and database,” Fosdike said. “The industrial world had been left in the dark for years, and we knew that developing a technology with voice-visual AI would help light the way for these factories.”
The voice assistants that Datch builds leverage AI to collect and structure data from users in a factory or in the field, parsing commands like “Report an issue for the Line 1 Spot Welder. I estimate it will take half a day to fix.” They run on a smartphone and link to existing systems to write and read records, including records from enterprise resource and asset management platforms.
Datch’s assistants provide a timeline of events and can capture data without an internet connection; they auto-sync once back online. Using them, workers can fill out company forms, create and update work orders, assign tasks and search through company records all via voice.
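The offline-capture and auto-sync behavior described above follows a common pattern: records are timestamped at capture time, queued locally, then flushed in order once connectivity returns. A minimal sketch of that pattern, with hypothetical record fields (Datch’s actual schema and implementation aren’t public):

```python
import json
import time
from collections import deque

class OfflineRecordQueue:
    """Queue structured voice records locally and flush when a connection returns.

    A minimal sketch of a capture-offline/auto-sync pattern, not Datch's
    actual implementation; the record fields below are hypothetical.
    """

    def __init__(self, send):
        self._pending = deque()
        self._send = send  # callable that uploads one record; raises OSError on failure

    def capture(self, record_type, fields):
        # Timestamp at capture time so the event timeline stays accurate
        # even when the record is uploaded hours later.
        self._pending.append({
            "type": record_type,
            "fields": fields,
            "captured_at": time.time(),
        })

    def sync(self):
        """Try to upload everything in order; stop at the first failure and keep the rest."""
        sent = 0
        while self._pending:
            record = self._pending[0]
            try:
                self._send(json.dumps(record))
            except OSError:
                break  # still offline; retry on the next sync
            self._pending.popleft()
            sent += 1
        return sent

uploaded = []
queue = OfflineRecordQueue(send=uploaded.append)
queue.capture("work_order", {"asset": "Line 1 Spot Welder", "estimate_hours": 4})
queue.capture("issue", {"asset": "Line 2 Conveyor", "note": "belt slipping"})
print(queue.sync())  # both records flush once the (stubbed) connection works
```

Flushing in capture order and only discarding a record after a successful upload is what keeps the server-side timeline consistent with what happened on the factory floor.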
Fosdike didn’t go into detail about how Datch treats the voice data, save that it encrypts data both in-transit and at rest and performs daily backups.
“We have to employ a lot of tight, automated feedback loops to train the voice and [language] data, and so everyone’s interaction with Datch is slightly different, depending on the company and team they work within,” Fosdike explained. “Customers are exploring different use cases such as using the [language] data in predictive maintenance, automated classification of cause codes, and using the voice data to predict worker fatigue before it becomes a critical safety risk.”
That last bit about predicting worker fatigue is a little suspect. The idea that conditions like tiredness can be detected in a person’s voice isn’t a new one, but some researchers believe it’s unlikely AI can flag them with 100% accuracy. After all, people express tiredness in different ways, depending not only on the workplace environment but on their sex and cultural, ethnic and demographic backgrounds.
The tiredness-detecting scenario aside, Fosdike asserts that Datch’s technology is helping industrial clients get ahead of turbulence in the economy by “vastly improving” the efficiency of their operations. Frontline staff typically have to work with reporting tools that aren’t intuitive, he notes, and in many cases, voice makes for a less cumbersome, faster alternative form of input.
“We help frontline workers with productivity and solve the pain point of time wasted on their reports by decreasing the process time,” Fosdike said. “Industrial companies are fast realizing that to keep up with demand or position themselves to withstand a global pandemic, they need to find a way to scale with more than just peoplepower. Our AI offers these companies an efficient solution in a fraction of the time and with less overhead needed.”
Datch competes with Rain, Aiqudo and Onvego, all of which are developing voice technologies for industrial customers. Deloitte’s Maxwell, Genba and Athena are rivals in Fosdike’s eyes as well. But business remains steady — Datch counts ConEd, Singapore Airlines, ABB Robotics and the New York Power Authority among its clients.
“We raised this latest round earlier than expected due to the influx of demand from the market. The timing is right to capitalize on both the post-COVID boom in digital transformation as well as corporate investments driven by the infrastructure bill,” Fosdike said, referring to the $1 trillion package U.S. lawmakers passed last November. “Currently we have a team of 20, and plan to use the funds to grow to 55 to 60 people, scaling to roughly 40 by the end of the year.”
To date, Datch has raised $15 million in venture capital.

PayTalk promises to handle all sorts of payments with voice, but the app has a long way to go

Neji Tawo, the founder of boutique software development company Wiscount Corporation, says he was inspired by his dad to become an engineer. When Tawo was a kid, his dad tasked him with coming up with a formula to calculate the amount of gas in the fuel tanks at his family’s station. Tawo then created an app for gas stations to help prevent gas siphoning.
The seed of the idea for Tawo’s latest venture came from a different source: a TV ad for a charity. Frustrated by his experience filling out donation forms, Tawo sought an alternative, faster way to complete such transactions. He settled on voice.
Tawo’s PayTalk, which is one of the first products in Amazon’s Black Founders Build with Alexa Program, uses conversational AI to carry out transactions via smart devices. Using the PayTalk app, users can do things like find a ride, order a meal, pay bills, purchase tickets and even apply for a loan, Tawo says.
“We see the opportunity in a generation that’s already using voice services for day-to-day tasks like checking the weather, playing music, calling friends and more,” Tawo said. “At PayTalk, we feel voice services should function like a person — being capable of doing several things from hailing you a ride to taking your delivery order to paying your phone bills.”

PayTalk is powered by out-of-the-box voice recognition models on the frontend and various API connectors behind the scenes, Tawo explains. In addition to Alexa, the app integrates with Siri and Google Assistant, letting users add voice shortcuts like “Hey Siri, make a reservation on PayTalk.”
“Myself and my team have bootstrapped this all along the way, as many VCs we approached early on were skeptical about voice being the device form factor of the future. The industry is in its nascent stages and many still view it with skepticism,” Tawo said. “With the COVID-19 pandemic and subsequent shift to doing more remotely across different types of transactions (i.e. ordering food from home, shopping online, etc.), we … saw that there was increased interest in the use of voice services. This in turn boosted demand for our product and we believe that we are positioned to continue to expand our offerings and make voice services more useful as a result.”
Tawo’s pitch for PayTalk reminded me a lot of Viv, the startup launched by Siri co-creator Adam Cheyer (later acquired by Samsung) that proposed voice as the connective tissue between disparate apps and services. It’s a promising idea — tantalizing, even. But where PayTalk is concerned, the execution isn’t quite there yet.
The PayTalk app is only available for iOS and Android at the moment, and in my experience with it, it’s a little rough around the edges. A chatbot-like flow allows you to type commands — a nice fallback for situations where voice doesn’t make sense (or isn’t appropriate) — but doesn’t transition to activities particularly gracefully. When I used it to look for a cab by typing the suggested “book a ride” command, PayTalk asked for a pickup and dropoff location before throwing me into an Apple Maps screen without any of the information I’d just entered.
The reservation and booking functionality seems broken as well. PayTalk walked me through the steps of finding a restaurant, asking which time I’d like to reserve, the size of my party and so on. But the app let me “confirm” a table for 2 a.m. at SS106 Aperitivo Bar — an Italian restaurant in Alberta — on a day the restaurant closes at 10 p.m.
Other “categories” of commands in PayTalk are very limited in what they can accomplish — or simply nonfunctional. I can only order groceries from two services in my area (Downtown Brooklyn) at present — MNO African Market and Simi African Foods Market. Requesting a loan prompts an email with a link to Glance Capital, a personal loan provider for gig workers, that throws a 404 error when clicked. A command to book “luxury services” like a yacht or “sea plane” (yes, really) fails to reach anything resembling a confirmation screen, while the “pay for parking” command confusingly asks for a zone number.
To fund purchases through PayTalk (e.g. parking), there’s an in-app wallet. I couldn’t figure out a way to transfer money to it, though. The app purports to accept payment cards, but tapping on the “Use Card” button triggers a loading animation that quickly times out.
I could go on. But suffice it to say that PayTalk is in the very earliest stages of development. I began to think the app had been released prematurely, but PayTalk’s official Twitter account has been advertising it for at least the past few months.
Perhaps PayTalk will eventually grow into the shoes of the pitch Tawo gave me, so to speak — Wiscount is kicking off a four-month tenure at the Black Founders Build with Alexa Program. In the meantime, it must be pointed out that Alexa, Google Assistant and Siri are already capable of handling much of what PayTalk promises to one day accomplish.


“With the potential $100,000 investment [from the Black Founders Build with Alexa Program], we will seek to raise a seed round to expand our product offerings to include features that would allow customers to seamlessly carry out e-commerce and financial transactions on voice service-powered devices,” Tawo said. “PayTalk is mainly a business-to-consumer platform. However, as we continue to innovate and integrate voice-activated options … we see the potential to support enterprise use cases by replacing and automating the lengthy form filling processes that are common for many industries like healthcare.”
Hopefully, the app’s basic capabilities get attention before anything else.

Scandit raises $80M as COVID-19 drives demand for contactless deliveries

Enterprise barcode scanner company Scandit has closed an $80 million Series C round, led by Silicon Valley VC firm G2VP. Atomico, GV, Kreos, NGP Capital, Salesforce Ventures and Swisscom Ventures also participated in the round — which brings its total raised to date to $123 million.
The Zurich-based firm offers a platform that combines computer vision and machine learning with barcode scanning, text recognition (OCR), object recognition and augmented reality, and is designed for any camera-equipped smart device — from smartphones to drones, wearables (e.g. AR glasses for warehouse workers) and even robots.
Use-cases include mobile apps or websites for mobile shopping; self checkout; inventory management; proof of delivery; asset tracking and maintenance — including in healthcare where its tech can be used to power the scanning of patient IDs, samples, medication and supplies.
It bills its software as “unmatched” in terms of speed and accuracy, as well as the ability to scan in bad light; at any angle; and with damaged labels. Target industries include retail, healthcare, industrial/manufacturing, travel, transport & logistics and more.
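Scandit’s recognition stack is proprietary, but one reason barcode scans can survive bad light and damaged labels at all is the redundancy built into the symbologies themselves. EAN-13, for example, ends in a check digit that lets a scanner reject misreads; the standard calculation looks like this:

```python
def ean13_check_digit(digits12):
    """Compute the EAN-13 check digit for a 12-digit payload string."""
    if len(digits12) != 12 or not digits12.isdigit():
        raise ValueError("expected exactly 12 digits")
    # Odd positions (1st, 3rd, ...) weigh 1; even positions weigh 3.
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits12))
    return (10 - total % 10) % 10

def ean13_is_valid(code):
    """True if a full 13-digit EAN code has a consistent check digit."""
    return len(code) == 13 and code.isdigit() \
        and ean13_check_digit(code[:12]) == int(code[-1])

print(ean13_check_digit("400638133393"))  # 1
print(ean13_is_valid("4006381333931"))    # True
```

A decoder that recovers twelve digits from a partially damaged label can use the thirteenth to confirm the read rather than report a wrong code — a single-digit misread always changes the checksum.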
The latest funding injection follows a $30M Series B round back in 2018. Since then, Scandit says it has tripled recurring revenues, more than doubled its number of blue-chip enterprise customers and doubled the size of its global team.
Global customers for its tech include the likes of 7-Eleven, Alaska Airlines, Carrefour, DPD, FedEx, Instacart, Johns Hopkins Hospital, La Poste, Levi Strauss & Co, Mount Sinai Hospital and Toyota — with the company touting “tens of billions of scans” per year on 100+ million active devices at this stage of its business.
It says the new funding will go toward growth in new markets, including APAC and Latin America, as well as building out its footprint and operations in North America and Europe. Also on the slate: funding more R&D to devise new ways for enterprises to transform their core business processes using computer vision and AR.
The need for social distancing during the coronavirus pandemic has also accelerated demand for mobile computer vision on personal smart devices, according to Scandit, which says customers are looking for ways to enable more contactless interactions.
Another demand spike it’s seeing is coming from the pandemic-related boom in ‘Click & Collect’ retail and “millions” of extra home deliveries — something its tech is well positioned to cater to because its scanning apps support BYOD (bring your own device), rather than requiring proprietary hardware.
“COVID-19 has shone a spotlight on the need for rapid digital transformation in these uncertain times, and the need to blend the physical and digital plays a crucial role,” said CEO Samuel Mueller in a statement. “Our new funding makes it possible for us to help even more enterprises to quickly adapt to the new demand for ‘contactless business’, and be better positioned to succeed, whatever the new normal is.”
Also commenting on the funding in a supporting statement, Ben Kortlang, general partner at G2VP, added: “Scandit’s platform puts an enterprise-grade scanning solution in the pocket of every employee and customer without requiring legacy hardware. This bridge between the physical and digital worlds will be increasingly critical as the world accelerates its shift to online purchasing and delivery, distributed supply chains and cashierless retail.”


Google launches the first developer preview of Android 11

With the days of dessert-themed releases officially behind it, Google today announced the first developer preview of Android 11, which is now available as system images for Google’s own Pixel devices, starting with the Pixel 2.
As of now, there is no way to install the updates over the air. That’s usually something the company makes available at a later stage. These first releases aren’t meant for regular users anyway. Instead, they are a way for developers to test their applications and get a head start on making use of the latest features in the operating system.
“With Android 11 we’re keeping our focus on helping users take advantage of the latest innovations, while continuing to keep privacy and security a top priority,” writes Google VP of Engineering Dave Burke. “We’ve added multiple new features to help users manage access to sensitive data and files, and we’ve hardened critical areas of the platform to keep the OS resilient and secure. For developers, Android 11 has a ton of new capabilities for your apps, like enhancements for foldables and 5G, call-screening APIs, new media and camera capabilities, machine learning, and more.”
Unlike some of Google’s previous early previews, this first version of Android 11 brings quite a few new features to the table. As Burke noted, there are some obligatory 5G features, like a new bandwidth estimate API, as well as a new API that checks whether a connection is unmetered so apps can, for example, play higher-resolution video.
With Android 11, Google is also expanding its Project Mainline lineup of updatable modules from 10 to 22. With this, Google is able to update critical parts of the operating system without having to rely on the device manufacturers to release a full OS update. Users simply install these updates through the Google Play infrastructure.
Users will be happy to see that Android 11 will feature native support for waterfall screens that cover a device’s edges, using a new API that helps developers manage interactions near those edges.
Also new are some features that developers can use to handle conversational experiences, including a dedicated conversation section in the notification shade, as well as a new chat bubbles API and the ability to insert images into replies you want to send from the notifications pane.
Unsurprisingly, Google is adding a number of new privacy and security features to Android 11, too. These include one-time permissions for sensitive types of data, as well as updates to how the OS handles data on external storage, which it first previewed last year.
As for security, Google is expanding its support for biometrics and adding different levels of granularity (strong, weak and device credential), in addition to the usual hardening of the platform you would expect from a new release.
There are plenty of other smaller updates as well, including some that are specifically meant to make running machine learning applications easier, but Google specifically highlights the fact that Android 11 will also bring a couple of new features to the OS that will help IT manage corporate devices with enhanced work profiles.
This first developer preview of Android 11 is launching about a month earlier than previous releases, so Google is giving itself a bit more time to get the OS ready for a wider launch. Currently, the release schedule calls for monthly developer preview releases until April, followed by three betas and a final release in Q3 2020.
