Category archive: Media & Entertainment


Movie and TV app ReelTime helps you track your viewing, check ratings and more

It’s not easy to keep up with all the content being released to dozens of streaming services. However, TV tracker apps make our binge-watching habits a little more manageable. ReelTime is an app for iOS device users who want to track TV shows and movies that they’ve streamed, content they’re currently watching and titles they want to watch.
With its latest update, ReelTime 1.6, the redesigned app now includes ratings from Rotten Tomatoes, IMDb and The Movie Database (TMDB), as well as an updated home screen and lock screen widgets so users can see upcoming movies and TV shows, changes to their library and watch progress without opening the app. Similar TV tracking apps like JustWatch and Reelgood also include IMDb ratings — but not Rotten Tomatoes or TMDB ratings.
Powered by streaming guide JustWatch and the use of the Trakt API to keep track of movies and shows, ReelTime notifies users of newly added episodes, release date changes, new posters and more, all on one interface. Users can also customize which notifications they want to receive.
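For the curious, here is a rough sketch of what a Trakt API (v2) request looks like. The exact endpoints ReelTime calls are not public; the endpoint below is Trakt's public trending-shows list, and the client ID is a placeholder a real app would obtain by registering at trakt.tv.

```python
# Sketch: how a tracking app might query the Trakt API (v2).
# We only build the request here; actually sending it requires
# network access and a valid client ID.
import urllib.request

TRAKT_CLIENT_ID = "your-client-id"  # placeholder, not a real credential

def build_trending_request(limit: int = 10) -> urllib.request.Request:
    """Build a GET request for Trakt's public trending-shows endpoint."""
    url = f"https://api.trakt.tv/shows/trending?limit={limit}"
    req = urllib.request.Request(url)
    # Trakt v2 requires these headers on every call.
    req.add_header("Content-Type", "application/json")
    req.add_header("trakt-api-version", "2")
    req.add_header("trakt-api-key", TRAKT_CLIENT_ID)
    return req

req = build_trending_request(limit=5)
print(req.full_url)
```

The response is JSON, which an app would merge with streaming-availability data from a guide like JustWatch.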
Image Credits: ReelTime
ReelTime was created by Maxwell Handelman, who launched the app on the App Store a year ago.
“ReelTime is not a streaming service, but I’m aiming for it to be the absolute best in terms of entertainment reference and tracking. Since the very beginning, I’ve been listening to my users and adding the features that they want … I’ve got a lot planned for the future of the app,” Handelman told TechCrunch.
Handelman is working on adding new features to the app.
In the future, users will get a discover feature that allows them to find more titles. As of now, ReelTime only lets users search for content they’re already aware of or browse popular titles based on data from TMDB and JustWatch.
The community feature will give users the ability to leave comments under movies and TV shows.
Another feature in the works will let users share their ratings and comments.
The TV tracker app is only available to download on iPhones and iPads. It’s free and doesn’t require any in-app purchases or subscriptions. ReelTime claims that it doesn’t track users or collect any of their data.

Whip Media Group, parent to TV show tracking app TV Time, raises $50M

Movie and TV app ReelTime helps you track your viewing, check ratings and more by Lauren Forristal originally published on TechCrunch

The week an Apple event and YC Demo Day collided

Happy Saturday, friends. Welcome back to Week in Review, the newsletter where we very quickly sum up the most read TechCrunch stories from the past week. Want it in your inbox every Saturday AM? Get it here.
This week saw two big events running in parallel: an Apple hardware announcement and Y Combinator’s Demo Day. Either one of those on their own would generally lead our traffic for the week — having them smash into each other on the same day was … interesting. And maybe a little exhausting.
most read
The Apple stuff: Apple’s event, as their events tend to do, mostly dominated the tech news cycle this week. Rather than turn this entire newsletter into one big list of Apple things, I’ll just say: new iPhones, new AirPods, and a beefy new Apple Watch. Want more words than that? Here’s our roundup of the news.
Y Combinator moonshots: Startups are hard. But every YC batch has at least a handful of companies that seem a little extra hard — the moonshots, if you will. From faux fish to teams that want to reinvent flying, the Demo Day team rounded up some of the wildest pitches.
Musk/Twitter drama continues: Elon Musk is still aiming to undo his multibillion-dollar offer for Twitter, and Twitter still wants to hold him to it. This week a Delaware judge made two decisions in the ordeal: The trial will not be delayed by a month as Musk’s legal team had requested, but Musk will be allowed to “amend his counterclaim with details” disclosed by Twitter security whistleblower Peiter “Mudge” Zatko earlier this month.
LG wants you to buy NFTs on your TV: NFT sales have reportedly tanked over the last few months. Will the ability to buy/sell/trade NFTs on LG smart TVs be the thing that turns that around? No, no, it will not.
Kim Kardashian’s new gig: “America’s favorite reality star is leveling up her repertoire,” writes Anita, with another job title: private equity investor. Kardashian is teaming up with Jay Sammons, formerly the head of Consumer/Media/Retail at the Carlyle Group, to launch a new private equity firm called SKKY Partners.
Jeep’s EVs: Another legendary auto brand is diving deep into electric vehicles — this time it’s Jeep, which this week revealed plans to roll out three different EVs (the Recon, Wagoneer S, and Avenger) by 2025. The company, notes Jaclyn, expects “EVs to compose half of its sales in North America — and all of its sales in Europe — by 2030.”
Patreon layoffs: Patreon, a company that helps creators build out paid membership offerings, laid off employees this week. The layoffs purportedly leave Patreon without much of a security team, which seems … not ideal?
Image Credits: Bryce Durbin
audio roundup
What’s up in TC podcast land this week? “Selling Sunset” star Christine Quinn stopped by Found to tell ’em about her new startup, the Chain Reaction crypto crew talked about the latest drama at Binance, and Burnsy took a virtual trip to Minnesota to put the spotlight on the Minneapolis startup scene for TechCrunch Live.
techcrunch+
Want 15% off an annual TechCrunch+ subscription? Use promo code “WIR” when signing up. Just want to know what TC+ readers were reading most this week? Here’s the breakdown:
YC Demo Day favs: Nearly 230 pitches later, which Y Combinator S22 companies stood out to the Demo Day team? Here are their favorite pitches from Day 1 and Day 2.
The most important slides in your pitch deck: Reporter/former VC/resident pitch deck expert Haje shares his insights on which of the perhaps-too-many slides in your deck are most crucial.
The freemium bar is shifting: Across products from Slack to Google Meet to Heroku, many companies are shifting up their free tiers to offer less. Why now? Anita explores the trend.
The week an Apple event and YC Demo Day collided by Greg Kumparak originally published on TechCrunch

Daily Crunch: PSG, Battery Ventures invest $100M in open source password manager Bitwarden 

To get a roundup of TechCrunch’s biggest and most important stories delivered to your inbox every day at 3 p.m. PDT, subscribe here.
Hey, hey, hey! It’s going to be a busy week for the TC crew this week. We’re excited about the Apple event, and Y Combinator has its demo day. Alex welcomes you to YC and Apple week on the Equity podcast, and your trusty Daily Crunch team is poised at our laptops to share the cream of the news-crop with you!
Stay tuned, it’s going to be a wild one!  — Christine and Haje
The TechCrunch Top 3
2Dh1?..Spth!Lmng: Bitwarden’s ability to generate hard-to-guess passwords has made it attractive to investors who just pumped $100 million of new funding into the company, which aims to rid the world of people using the same passwords across their personal and business lives, Paul writes.
‘Wild West’ of climate tech: Mike has a story about Ceezer closing on €4.2 million to figure out a better way for businesses to carbon-offset.
DAO makes us proud: Gaming guild Metaverse Magna is now valued at $30 million after raising $3.2 million in a recent round. Tage writes the company plans to build “Africa’s largest gaming DAO.”
Startups and VC
The EU — those guys who ensured we ended up with cookie banners on every damn website you’ve ever visited — are back at it with a new initiative that could have some major-league unintended consequences on open source software, Kyle reports. The EU’s AI Act could have a chilling effect — “if a company were to deploy an open source AI system that led to some disastrous outcome (…) it could sue the open source developers.”
Our brains are melting in the heat, so here are some truly god-awful puns to match our current mental age:
What’s a pirate’s favorite growth metric? ARR: Userpilot, a product-led growth platform for SaaS companies, raises $4.6 million, Annie reports.
What do you call suburban justice? Lawn and order: Dominic-Madori reports that JusticeText raises $2.2 million to increase transparency in criminal evidence-gathering.
What kind of bees are made of plastic? Frisbees: Hardware startup Mantle is 3D-printing manufacturing tooling, which could drastically reduce the amount of time to make new plastic parts, Haje reports.
My credit card company is proud of my circus skills. They keep telling me I have outstanding balance: Brex’s CRO is leaving to join Founders Fund, and in her fintech newsletter this weekend, Mary Ann talks with him to figure out what drove that decision.
I walked into an EV dealership, and asked them how much they charge: Exciting news for cars-with-built-in-solar-panels fans; EV carmaker Lightyear raised $85 million and starts to gear up for production, Paul reports.
10 onboarding improvements that cut our customer churn by nearly 3x
Image Credits: Hill Street Studios (opens in a new window) / Getty Images
Managers who run businesses that rely on recurring revenue are often distracted by the never-ending sprint to maintain favorable KPIs. But one metric may rule them all: customer churn.
If new users can’t quickly figure out how to use (or benefit from) your products, it won’t matter how many new customers you onboard each month. But to reduce churn, marketing and product teams need onboarding goals, says Sam DeBrule, co-founder and head of marketing of Heyday.
In a TC+ guest post, he explains the tactics he and his co-founder used to insert themselves into the customer journey, and how the changes helped them reduce turnover by almost 3x.
“If you’re working on onboarding and saw something you liked here, feel free to steal it.”


(TechCrunch+ is our membership program, which helps founders and startup teams get ahead. You can sign up here.)
Big Tech Inc.
Manish was behind two of our big stories over the weekend, including crypto exchange Binance announcing it would stop supporting USDC, USDP and TUSD and begin converting the three rival stablecoins into its own stablecoin, BUSD, on September 29. He also writes about India’s information technology junior minister sending a summons to Wikipedia after edits were made to the page of cricketer Arshdeep Singh, “suggesting that some people from Pakistan were behind the act and were attempting to disrupt peace in the South Asian market.”
More to the story: Zack is back with some new developments on Samsung’s data breach notice last month.
Data dilemma: Instagram was handed “a fat fine” by the European Union after it was determined Meta’s social media platform was not properly handling children’s data, Natasha L writes.
Don’t click on that: The Los Angeles School District, the second-largest in the U.S., warned its community of disruptions while the district manages an ongoing ransomware attack, Carly reports.
Trading places: European trading platform Bitpanda added commodities to its list of items that can be traded. Romain writes the move comes as natural gas prices soar across the continent due to the ongoing conflict between Russia and Ukraine.
Writing the playbook on video games: Rita talks to Tencent’s Steve Martin about the Chinese social networking and gaming company’s ambitions around intellectual property and autonomy.
Daily Crunch: PSG, Battery Ventures invest $100M in open source password manager Bitwarden  by Christine Hall originally published on TechCrunch

AI is getting better at generating porn. We might not be prepared for the consequences.

A red-headed woman stands on the moon, her face obscured. Her naked body looks like it belongs on a poster you’d find on a hormonal teenager’s bedroom wall — that is, until you reach her torso, where three arms sprout from her shoulders.
AI-powered systems like Stable Diffusion, which translate text prompts into pictures, have been used by brands and artists to create concept images, award-winning (albeit controversial) prints and full-blown marketing campaigns.
But some users, intent on exploring the systems’ murkier side, have been testing them for a different sort of use case: porn.
AI porn is about as unsettling and imperfect as you’d expect (that red-head on the moon was likely not generated by someone with an extra arm fetish). But as the tech continues to improve, it will evoke challenging questions for AI ethicists and sex workers alike.
Pornography created using the latest image-generating systems first arrived on the scene via the discussion boards 4chan and Reddit earlier this month, after a member of 4chan leaked the open source Stable Diffusion system ahead of its official release. Then, last week, what appears to be one of the first websites dedicated to high-fidelity AI porn generation launched.
Called Porn Pen, the website allows users to customize the appearance of nude AI-generated models — all of which are women — using toggleable tags like “babe,” “lingerie model,” “chubby,” ethnicities (e.g. “Russian” and “Latina”) and backdrops (e.g. “bedroom,” “shower” and wildcards like “moon”). Buttons capture models from the front, back or side, and change the appearance of the generated photo (e.g. “film photo,” “mirror selfie”). There must be a bug with the mirror selfies, though, because in the feed of user-generated images, some mirrors don’t actually reflect a person — but of course, these models are not people at all. Porn Pen functions like “This Person Does Not Exist,” only it’s NSFW.
On Y Combinator’s Hacker News forum, a user purporting to be the creator describes Porn Pen as an “experiment” using cutting-edge text-to-image models. “I explicitly removed the ability to specify custom text to avoid harmful imagery from being generated,” they wrote. “New tags will be added once the prompt-engineering algorithm is fine-tuned further.” The creator did not respond to TechCrunch’s request for comment.
But Porn Pen raises a host of ethical questions, like biases in image-generating systems and the sources of the data from which they arose. Beyond the technical implications, one wonders whether new tech to create customized porn — assuming it catches on — could hurt adult content creators who make a living doing the same.
“I think it’s somewhat inevitable that this would come to exist when [OpenAI’s] DALL-E did,” Os Keyes, a PhD candidate at Seattle University, told TechCrunch via email. “But it’s still depressing how both the options and defaults replicate a very heteronormative and male gaze.”
Ashley, a sex worker and peer organizer who works on cases involving content moderation, thinks that the content generated by Porn Pen isn’t a threat to sex workers in its current state.
“There is endless media out there,” said Ashley, who did not want her last name to be published for fear of being harassed for her job. “But people differentiate themselves not by just making the best media, but also by being an accessible, interesting person. It’s going to be a long time before AI can replace that.”
On existing monetizable porn sites like OnlyFans and ManyVids, adult creators must verify their age and identity so that the company knows they are consenting adults. AI-generated porn models can’t do this, of course, because they aren’t real.
Ashley worries, though, that if porn sites crack down on AI porn, it might lead to harsher restrictions for sex workers, who are already facing increased regulation from legislation like SESTA/FOSTA. Congress introduced the Safe Sex Workers Study Act in 2019 to examine the effects of this legislation, which makes online sex work more difficult. This study found that “community organizations [had] reported increased homelessness of sex workers” after losing the “economic stability provided by access to online platforms.”
“SESTA was sold as fighting child sex trafficking, but it created a new criminal law about prostitution that had nothing about age,” Ashley said.
Currently, few laws around the world pertain to deepfaked porn. In the U.S., only Virginia and California have regulations restricting certain uses of faked and deepfaked pornographic media.
Systems such as Stable Diffusion “learn” to generate images from text by example. Fed billions of pictures labeled with annotations that indicate their content — for example, a picture of a dog labeled “Dachshund, wide-angle lens” — the systems learn that specific words and phrases refer to specific art styles, aesthetics, locations and so on.
This works relatively well in practice. A prompt like “a bird painting in the style of Van Gogh” will predictably yield a Van Gogh-esque image depicting a bird. But it gets trickier when the prompts are vaguer, refer to stereotypes or deal with subject matter with which the systems aren’t familiar.
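The caption-conditioned learning described above can be caricatured in a few lines of Python. This is emphatically not a diffusion model — just a toy that averages "visual" features over the caption words they co-occur with, to show how word-to-concept associations emerge from labeled examples:

```python
# Toy illustration only: real systems like Stable Diffusion train a
# diffusion model on billions of captioned images. Here each "image"
# is a made-up 3-number descriptor (furriness, wing-ness, paint-texture)
# paired with a caption.
from collections import defaultdict

dataset = [
    ([0.9, 0.0, 0.1], "dachshund wide-angle lens"),
    ([0.8, 0.1, 0.0], "dachshund portrait"),
    ([0.1, 0.9, 0.2], "bird on a branch"),
    ([0.2, 0.8, 0.9], "bird painting van gogh"),
]

# "Training": average the visual features seen alongside each caption word.
word_sums = defaultdict(lambda: [0.0, 0.0, 0.0])
word_counts = defaultdict(int)
for features, caption in dataset:
    for word in caption.split():
        word_counts[word] += 1
        for i, f in enumerate(features):
            word_sums[word][i] += f

def concept(word):
    """Average visual descriptor associated with a caption word."""
    n = word_counts[word]
    return [round(v / n, 2) for v in word_sums[word]]

print(concept("dachshund"))  # dominated by the "furriness" component
print(concept("bird"))       # dominated by the "wing-ness" component
```

A real model learns vastly richer associations, but the failure mode is the same in spirit: words the system has rarely seen, or seen only in stereotyped contexts, produce stereotyped or incoherent output.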
For example, Porn Pen sometimes generates images without a person at all — presumably a failure of the system to understand the prompt. Other times, as alluded to earlier, it shows physically improbable models, typically with extra limbs, nipples in unusual places and contorted flesh.
“By definition [these systems are] going to represent those whose bodies are accepted and valued in mainstream society,” Keyes said, noting that Porn Pen only has categories for cisnormative people. “It’s not surprising to me that you’d end up with a disproportionately high number of women, for example.”
While Stable Diffusion, one of the systems likely underpinning Porn Pen, has relatively few “NSFW” images in its training dataset, early experiments from Redditors and 4chan users show that it’s quite competent at generating pornographic deepfakes of celebrities (Porn Pen — perhaps not coincidentally — has a “celebrity” option). And because it’s open source, there’d be nothing to prevent Porn Pen’s creator from fine-tuning the system on additional nude images.
“It’s definitely not great to generate [porn] of an existing person,” Ashley said. “It can be used to harass them.”
Deepfake porn is often created to threaten and harass people. These images are almost always developed without the subject’s consent out of malicious intent. In 2019, the research company Sensity AI found that 96% of deepfake videos online were non-consensual porn.
Mike Cook, an AI researcher who’s a part of the Knives and Paintbrushes collective, says that there’s a possibility the dataset includes people who’ve not consented to their image being used for training in this way, including sex workers.
“Many of [the people in the nudes in the training data] may derive their income from producing pornography or pornography-adjacent content,” Cook said. “Just like fine artists, musicians or journalists, the works these people have produced are being used to create systems that also undercut their ability to earn a living in the future.”
In theory, a porn actor could use copyright protections, defamation and potentially even human rights laws to fight the creator of a deepfaked image. But as a piece in MIT Technology Review notes, gathering evidence in support of the legal argument can prove to be a massive challenge.
When more primitive AI tools popularized deepfaked porn several years ago, a Wired investigation found that nonconsensual deepfake videos were racking up millions of views on mainstream porn sites like Pornhub. Other deepfaked works found a home on sites akin to Porn Pen — according to Sensity data, the top four deepfake porn websites received more than 134 million views in 2018.
“AI image synthesis is now a widespread and accessible technology, and I don’t think anyone is really prepared for the implications of this ubiquity,” Cook continued. “In my opinion, we have rushed very, very far into the unknown in the last few years with little regard for the impact of this technology.”
To Cook’s point, one of the most popular sites for AI-generated porn expanded late last year through partner agreements, referrals and an API, allowing the service — which hosts hundreds of nonconsensual deepfakes — to survive bans on its payments infrastructure. And in 2020, researchers discovered a Telegram bot that generated abusive deepfake images of more than 100,000 women, including underage girls.
“I think we’ll see a lot more people testing the limits of both the technology and society’s boundaries in the coming decade,” Cook said. “We must accept some responsibility for this and work to educate people about the ramifications of what they are doing.”

Google, YouTube outline plans for the US midterm elections

Google and its video sharing app YouTube outlined plans for handling the 2022 U.S. midterm elections this week, highlighting tools at their disposal to limit the spread of political misinformation.
When users search for election content on either Google or YouTube, recommendation systems are in place to highlight journalism or video content from authoritative national and local news sources such as The Wall Street Journal, Univision, PBS NewsHour and local ABC, CBS and NBC affiliates.
In today’s blog post, YouTube noted that it has removed “a number of videos” about the U.S. midterms that violate its policies, including videos that make false claims about the 2020 election. YouTube’s rules also prohibit inaccurate videos on how to vote, videos inciting violence and any other content that it determines interferes with the democratic process. The platform adds that it has issued strikes to YouTube channels that violate policies related to the midterms and has temporarily suspended some channels from posting new videos.
Image Credits: Google
Google Search will now make it easier for users to look up election coverage by local and regional news from different states. The company is also rolling out a tool on Google Search that it has used before, which directs voters to accurate information about voter registration and how to vote. Google will be working with The Associated Press again this year to offer users authoritative election results in search.
YouTube will also direct voters to an information panel on voting and a link to Google’s “how to vote” and “how to register to vote” features. Other election-related features YouTube announced today include reminders on voter registration and election resources, information panels beneath videos, recommended authoritative videos within its “watch next” panels and an educational media literacy campaign with tips about misinformation tactics.
On Election Day, YouTube will share a link to Google’s election results tracker, highlight livestreams of election night and include election results below videos. The platform will also launch a tool in the coming weeks that gives people searching for federal candidates a panel that highlights essential information, such as which office they’re running for and what their political party is.
Image Credits: YouTube
With two months left until Election Day, Google’s announcement marks the latest attempt by a tech giant to prepare for the pivotal moment in U.S. history. Meta, TikTok and Twitter have also recently addressed how they will approach the 2022 U.S. midterm elections.
YouTube faced scrutiny over how it handled the 2020 presidential election, waiting until December 2020 to announce a policy that would apply to misinformation swirling around the previous month’s election.
Before the policy was initiated, the platform didn’t remove videos with misleading election-related claims, allowing speculation and false information to flourish. That included a video from One America News Network (OAN) posted on the day after the 2020 election falsely claiming that Trump had won the election. The video was viewed more than 340,000 times, but YouTube didn’t immediately remove it, stating the video didn’t violate its rules.

YouTube declares war on US election misinformation… a month late

In a new study, researchers from New York University found that YouTube’s recommendation system had a part in spreading misinformation about the 2020 presidential election. From October 29 to December 8, 2020, the researchers analyzed the YouTube usage of 361 people to determine if YouTube’s recommendation system steered users toward false claims regarding the election in the immediate aftermath of the election. The researchers concluded that participants who were very skeptical about the election’s legitimacy were recommended significantly more election fraud-related claims than participants who were not skeptical of the election results.
YouTube pushed back against the study in a conversation with TechCrunch, arguing that its small sample size undermined its potential conclusions. “While we welcome more research, this report doesn’t accurately represent how our systems work,” YouTube spokesperson Ivy Choi told TechCrunch. “We’ve found that the most viewed and recommended videos and channels related to elections are from authoritative sources, like news channels.”
The researchers acknowledged that the number of fraud-related videos in the study was low overall and that the data doesn’t consider what channels the participants were subscribed to. Nonetheless, YouTube is clearly a key vector of potential political misinformation — and one to watch as the U.S. heads into its midterm elections this fall.

Facebook will disable new political ads a week before US midterm elections
