
33% of US TikTok users say they regularly get their news on the app, up from 22% in 2020

Earlier this summer, a Google exec admitted that TikTok was eating into its core Search business, particularly among younger users. But that’s not all TikTok is now being used for, a new Pew Research Center study indicates. According to the findings from a report that examined Americans’ use of social media for news consumption, 33% of TikTok users now say they regularly get their news on the social video app, up from just 22% in 2020.
Meanwhile, nearly every other social media site saw declines across that same metric — including, in particular, Facebook, where now only 44% of its users report regularly getting their news there, down from 54% just two years ago.
Image Credits: Pew Research
This data suggests TikTok has grown from being just an entertainment platform for lip syncs, dances, and comedy to one that many of its users turn to in order to learn about what’s happening in their world.
That may raise concerns, given TikTok’s connections to China — a topic it was recently pressed to clarify in a Senate hearing focused on national security. The hearing followed a BuzzFeed News report revealing that China-based ByteDance employees had repeatedly accessed the private data of TikTok’s U.S. users.
If TikTok were to become one of the primary ways younger people in the U.S. learned about news and current events, then the app could potentially provide a channel for a foreign power to influence those users’ beliefs with subtle tweaks to its algorithm.


For the time being, however, TikTok is not a primary source of news consumption across social media — that honor still resides with Facebook.
Pew found that 31% of U.S. adults report regularly getting their news from Facebook, which is higher than the 25% who get their news from YouTube, the 14% who get it from Twitter, or the 13% who get it from Instagram.
TikTok was in fifth place by this ranking, as only 10% of U.S. adults said they regularly get their news on the video app. (Of course, when TikTok’s sizable user base of those under the age of 18 grows up, these metrics could quickly change.)
LinkedIn (4%), Snapchat (4%), Nextdoor (4%), WhatsApp (3%) and Twitch (1%) were much smaller sources of news among Americans, the study also found.
Image Credits: Pew Research
In addition, Pew somewhat backed up Google’s assertion that it was losing traction to TikTok and other social media apps, as it noted that the percentage of U.S. adults who got their news via web search had dropped from 23% in 2020 to 18% in 2022.
But it didn’t necessarily point to TikTok or any other social platform as gaining, as the percentage of adults using social media of any sort for news consumption dropped from 23% to 17% between 2020 and 2022, as did other forms of news consumption like news websites and apps.
Image Credits: Pew Research
It’s not clear that any single platform is benefiting from these declines, as Pew didn’t uncover a shift from digital news sources to others, such as TV, print or radio — all those saw declines in news consumption as well.
Image Credits: Pew Research
Still, digital devices continue to outpace TV, Pew said, as the latter has seen its usage drop as a source for news consumption from 40% in 2020 to 31% in 2022.
Plus, when asked about preferences, more Americans (53%) said they would rather get their news digitally than on TV (33%), radio (7%), or print (5%) — an answer that’s stayed consistent since 2020.


33% of US TikTok users say they regularly get their news on the app, up from 22% in 2020 by Sarah Perez originally published on TechCrunch

OpenAI begins allowing users to edit faces with DALL-E 2

After initially disabling the capability, OpenAI today announced that customers with access to DALL-E 2 can upload people’s faces to edit them using the AI-powered image-generating system. Previously, OpenAI only allowed users to work with and share photorealistic faces that the system itself generated, and banned the uploading of any photo that might depict a real person, including photos of prominent celebrities and public figures.
OpenAI claims that improvements to its safety system made the face-editing feature possible by “minimizing the potential of harm” from deepfakes as well as attempts to create sexual, political and violent content. In an email to customers, the company wrote:
Many of you have told us that you miss using DALL-E to dream up outfits and hairstyles on yourselves and edit the backgrounds of family photos. A reconstructive surgeon told us that he’d been using DALL-E to help his patients visualize results. And filmmakers have told us that they want to be able to edit images of scenes with people to help speed up their creative processes … [We] built new detection and response techniques to stop misuse.
The change in policy isn’t opening the floodgates necessarily. OpenAI’s terms of service will continue to prohibit uploading pictures of people without their consent or images that users don’t have the rights to — although it’s not clear how consistent the company’s historically been about enforcing those policies.
In any case, it’ll be a true test of OpenAI’s filtering technology, which some customers have complained is overzealous and somewhat inaccurate. Deepfakes come in many flavors, from fake vacation photos to presidents of war-torn countries. Accounting for every emerging form of abuse will be a never-ending battle, in some cases with very high stakes.
No doubt, OpenAI — which has the backing of Microsoft and notable VC firms including Khosla Ventures — is eager to avoid the controversy associated with Stability AI’s Stable Diffusion, an image-generating system that’s available in open source form without any restrictions. As TechCrunch recently reported, it didn’t take long before Stable Diffusion — which can also edit face images — was being used by some to create pornographic, nonconsensual deepfakes of celebrities like Emma Watson.
So far, OpenAI has positioned itself as a brand-friendly, buttoned-up alternative to the no-holds-barred Stability AI. And with the constraints around the new face editing feature for DALL-E 2, the company is maintaining the status quo.
DALL-E 2 remains in invite-only beta. In late August, OpenAI announced that over a million people are using the service.
OpenAI begins allowing users to edit faces with DALL-E 2 by Kyle Wiggers originally published on TechCrunch

Movie and TV app ReelTime helps you track your viewing, check ratings and more

It’s not easy to keep up with all the content being released to dozens of streaming services. However, TV tracker apps make our binge-watching habits a little more manageable. ReelTime is an app for iOS device users who want to track TV shows and movies that they’ve streamed, content they’re currently watching and titles they want to watch.
With its latest update, ReelTime 1.6, the redesigned app now includes ratings from Rotten Tomatoes, IMDb and The Movie Database (TMDB), as well as an updated home screen and lock screen widgets so users can see upcoming movies and TV shows, changes to their library and watch progress without opening the app. Similar TV tracking apps like JustWatch and Reelgood also include IMDb ratings — but not Rotten Tomatoes or TMDB ratings.
Powered by streaming guide JustWatch and the Trakt API, ReelTime keeps track of movies and shows and notifies users of newly added episodes, release date changes, new posters and more, all in one interface. Users can also customize which notifications they want to receive.
Image Credits: ReelTime
ReelTime was created by Maxwell Handelman, who launched the app on the App Store a year ago.
“ReelTime is not a streaming service, but I’m aiming for it to be the absolute best in terms of entertainment reference and tracking. Since the very beginning, I’ve been listening to my users and adding the features that they want … I’ve got a lot planned for the future of the app,” Handelman told TechCrunch.
Handelman is working on several new features for the app. A discover feature will help users find more titles; as of now, ReelTime only lets users search for content they’re already aware of or browse popular titles based on data from TMDB and JustWatch. A planned community feature will give users the ability to leave comments under movies and TV shows, and another feature in the works will let users share their ratings and comments.
The TV tracker app is only available to download on iPhones and iPads. It’s free and doesn’t require any in-app purchases or subscriptions. ReelTime claims that it doesn’t track users or collect any of their data.


Movie and TV app ReelTime helps you track your viewing, check ratings and more by Lauren Forristal originally published on TechCrunch

AI is getting better at generating porn. We might not be prepared for the consequences.

A red-headed woman stands on the moon, her face obscured. Her naked body looks like it belongs on a poster you’d find on a hormonal teenager’s bedroom wall — that is, until you reach her torso, where three arms sprout from her shoulders.
AI-powered systems like Stable Diffusion, which translate text prompts into pictures, have been used by brands and artists to create concept images, award-winning (albeit controversial) prints and full-blown marketing campaigns.
But some users, intent on exploring the systems’ murkier side, have been testing them for a different sort of use case: porn.
AI porn is about as unsettling and imperfect as you’d expect (that redhead on the moon was likely not generated by someone with an extra-arm fetish). But as the tech continues to improve, it will raise challenging questions for AI ethicists and sex workers alike.
Pornography created using the latest image-generating systems first arrived on the scene via the discussion boards 4chan and Reddit earlier this month, after a member of 4chan leaked the open source Stable Diffusion system ahead of its official release. Then, last week, what appears to be one of the first websites dedicated to high-fidelity AI porn generation launched.
Called Porn Pen, the website allows users to customize the appearance of nude AI-generated models — all of which are women — using toggleable tags like “babe,” “lingerie model,” “chubby,” ethnicities (e.g. “Russian” and “Latina”) and backdrops (e.g. “bedroom,” “shower” and wildcards like “moon”). Buttons capture models from the front, back or side, and change the appearance of the generated photo (e.g. “film photo,” “mirror selfie”). There must be a bug in the mirror selfies, though, because in the feed of user-generated images, some mirrors don’t actually reflect a person — but of course, these models are not people at all. Porn Pen functions like “This Person Does Not Exist,” only it’s NSFW.
On Y Combinator’s Hacker News forum, a user purporting to be the creator describes Porn Pen as an “experiment” using cutting-edge text-to-image models. “I explicitly removed the ability to specify custom text to avoid harmful imagery from being generated,” they wrote. “New tags will be added once the prompt-engineering algorithm is fine-tuned further.” The creator did not respond to TechCrunch’s request for comment.
But Porn Pen raises a host of ethical questions, like biases in image-generating systems and the sources of the data from which they arose. Beyond the technical implications, one wonders whether new tech to create customized porn — assuming it catches on — could hurt adult content creators who make a living doing the same.
“I think it’s somewhat inevitable that this would come to exist when [OpenAI’s] DALL-E did,” Os Keyes, a PhD candidate at Seattle University, told TechCrunch via email. “But it’s still depressing how both the options and defaults replicate a very heteronormative and male gaze.”
Ashley, a sex worker and peer organizer who works on cases involving content moderation, thinks that the content generated by Porn Pen isn’t a threat to sex workers in its current state.
“There is endless media out there,” said Ashley, who did not want her last name to be published for fear of being harassed for her job. “But people differentiate themselves not by just making the best media, but also by being an accessible, interesting person. It’s going to be a long time before AI can replace that.”
On existing monetizable porn sites like OnlyFans and ManyVids, adult creators must verify their age and identity so that the company knows they are consenting adults. AI-generated porn models can’t do this, of course, because they aren’t real.
Ashley worries, though, that if porn sites crack down on AI porn, it might lead to harsher restrictions for sex workers, who are already facing increased regulation from legislation like SESTA/FOSTA. Congress introduced the Safe Sex Workers Study Act in 2019 to examine the effects of this legislation, which makes online sex work more difficult. This study found that “community organizations [had] reported increased homelessness of sex workers” after losing the “economic stability provided by access to online platforms.”
“SESTA was sold as fighting child sex trafficking, but it created a new criminal law about prostitution that had nothing about age,” Ashley said.
Currently, few laws around the world pertain to deepfaked porn. In the U.S., only Virginia and California have regulations restricting certain uses of faked and deepfaked pornographic media.
Systems such as Stable Diffusion “learn” to generate images from text by example. Fed billions of pictures labeled with annotations that indicate their content — for example, a picture of a dog labeled “Dachshund, wide-angle lens” — the systems learn that specific words and phrases refer to specific art styles, aesthetics, locations and so on.
This works relatively well in practice. A prompt like “a bird painting in the style of Van Gogh” will predictably yield a Van Gogh-esque image depicting a bird. But it gets trickier when the prompts are vaguer, refer to stereotypes or deal with subject matter with which the systems aren’t familiar.
For example, Porn Pen sometimes generates images without a person at all — presumably a failure of the system to understand the prompt. Other times, as alluded to earlier, it shows physically improbable models, typically with extra limbs, nipples in unusual places and contorted flesh.
“By definition [these systems are] going to represent those whose bodies are accepted and valued in mainstream society,” Keyes said, noting that Porn Pen only has categories for cisnormative people. “It’s not surprising to me that you’d end up with a disproportionately high number of women, for example.”
While Stable Diffusion, one of the systems likely underpinning Porn Pen, has relatively few “NSFW” images in its training dataset, early experiments from Redditors and 4chan users show that it’s quite competent at generating pornographic deepfakes of celebrities (Porn Pen — perhaps not coincidentally — has a “celebrity” option). And because it’s open source, there’d be nothing to prevent Porn Pen’s creator from fine-tuning the system on additional nude images.
“It’s definitely not great to generate [porn] of an existing person,” Ashley said. “It can be used to harass them.”
Deepfake porn is often created to threaten and harass people. These images are almost always made without the subject’s consent and with malicious intent. In 2019, the research company Sensity AI found that 96% of deepfake videos online were non-consensual porn.
Mike Cook, an AI researcher who’s a part of the Knives and Paintbrushes collective, says that there’s a possibility the dataset includes people who’ve not consented to their image being used for training in this way, including sex workers.
“Many of [the people in the nudes in the training data] may derive their income from producing pornography or pornography-adjacent content,” Cook said. “Just like fine artists, musicians or journalists, the works these people have produced are being used to create systems that also undercut their ability to earn a living in the future.”
In theory, a porn actor could use copyright protections, defamation and potentially even human rights laws to fight the creator of a deepfaked image. But as a piece in MIT Technology Review notes, gathering evidence in support of the legal argument can prove to be a massive challenge.
When more primitive AI tools popularized deepfaked porn several years ago, a Wired investigation found that nonconsensual deepfake videos were racking up millions of views on mainstream porn sites like Pornhub. Other deepfaked works found a home on sites akin to Porn Pen — according to Sensity data, the top four deepfake porn websites received more than 134 million views in 2018.
“AI image synthesis is now a widespread and accessible technology, and I don’t think anyone is really prepared for the implications of this ubiquity,” Cook continued. “In my opinion, we have rushed very, very far into the unknown in the last few years with little regard for the impact of this technology.”
To Cook’s point, one of the most popular sites for AI-generated porn expanded late last year through partner agreements, referrals and an API, allowing the service — which hosts hundreds of nonconsensual deepfakes — to survive bans on its payments infrastructure. And in 2020, researchers discovered a Telegram bot that generated abusive deepfake images of more than 100,000 women, including underage girls.
“I think we’ll see a lot more people testing the limits of both the technology and society’s boundaries in the coming decade,” Cook said. “We must accept some responsibility for this and work to educate people about the ramifications of what they are doing.”

Daily Crunch: Embedded finance fintech Pezesha raises $11M pre-Series A equity-debt round

To get a roundup of TechCrunch’s biggest and most important stories delivered to your inbox every day at 3 p.m. PDT, subscribe here.
Hey, hey, hey! Good to have you back with us again. Today, we’re mostly amazed at how quiet Twitter gets during Burning Man, and excited that we’re doing a Labor Day sale for TechCrunch Plus, if you’ve been wanting to read our subscription site but you’ve been holding off for whatever reason. — Christine and Haje
The TechCrunch Top 3
Embed that finance: Pezesha, a Kenya-based fintech startup, is flush with $11 million in new capital as it seeks to bridge the gap between access to financial products and what is a “$330 billion financing deficit for the small enterprises that make up 90% of Africa’s businesses,” Annie reports.
We’re all connected: If you haven’t yet seen yourself in one of your Twitter connections’ Circles, you may soon. The social media giant is launching its “Close Friends”-style Circle feature globally, Ivan reports. Add a bunch of people to your Circle and get tweeting.
No delivery for you: Delivery platform Gopuff has only been in Europe since November 2021, but Natasha L writes that it has decided to discontinue its service in Spain, perhaps to focus on the U.K. market, where revenue is increasing 30% month over month.
Startups and VC
Initialized Capital was VC Garry Tan’s answer to a need first highlighted by Y Combinator. As a partner at the accelerator from 2010 to 2015, Tan spent time working with companies to better understand what they needed from investors after they graduated. This week, he announced he’s back at the helm at YC, and Natasha M interviewed him about what’s next for Y Combinator.
The company behind last summer’s hot social app, Poparazzi, appears to be readying a round two following its $15 million Series A announced in June. A new listing in the App Store under the developer’s account, TTYL, is teasing a pre-release app called Made with Friends, Sarah reports.
When the news hits your eye, like a big pizza pie, that’s a-more-news:
Notification bubbles: Devin reports that, at long last, there’s an underwater messaging app.
Money for laundering: Flush with fresh funds, U.K. “eco laundry” startup Oxwash raised $12 million to spin up its growth plans, Natasha L reports.
Faster when further afield: The U.K.’s £5 billion Project Gigabit gives out its first contract to connect rural areas to high-speed broadband, Paul reports.
PriceOye gets the Thiel seal of approval: Islamabad-based startup PriceOye offers a range of electronics products, including smartphones, TVs and home appliances. It just closed a round of funding from investors, including Peter Thiel, reports Jagmeet.
Dodging the SPAC bullet: Alex and Anna wrote a really interesting piece on TC+ (use “DC” for a 15% discount if you’re not a subscriber yet) about SPACs, how they are falling apart, and how Europe may have dodged a bullet on that front.
How to communicate with your crypto community when things aren’t going well
Image Credits: Peter Dazeley / Getty Images
Because it’s a nascent industry that’s largely unregulated, crypto companies are not generally skilled at crisis communications. (We’re being generous here.)
When a bank or financial services company experiences a massive security failure or a volatility shock, federal laws dictate how it must communicate with its customers. Crypto startups, however, must rely on their own best judgment.
“There’s little benefit in declaring that the sky is falling and begging your community for investment, but an overly rosy outlook won’t fool anyone either,” says Tahem Verma, co-founder and CEO of Mesha.


(TechCrunch+ is our membership program, which helps founders and startup teams get ahead. You can sign up here.)
Big Tech Inc.
Last chance to get your game on in the Facebook Gaming app. The social media giant said it is shutting down its stand-alone app at the end of October, Aisha reports. Don’t worry, you can still find your games in Gaming on actual Facebook. Launching the separate app two years ago proved more difficult than Facebook bargained for, so it decided to join ’em instead of beating ’em.
Data duh!: Millions of faces and vehicle license plates were leaked online in China, Zack writes.
Ghosts can drive?: A Tesla Model 3 owner filed a class action lawsuit against the electric vehicle maker alleging the car keeps “phantom braking,” Jaclyn reports. 
New security regime: Broadband and mobile carriers in the United Kingdom could face fines of up to $117,000 per day or 10% of their sales if they don’t abide by some new cybersecurity rules, Ingrid writes.
More Elon: Taylor has the 411 on Elon Musk’s new strategy for getting out of the Twitter deal — hint, it involves the company’s whistleblower. Meanwhile, Paul goes over the new subpoena related to the ongoing battle.
