Kim Cattrall’s Famous Scat Song is Back in 4K

March 28, 2024

Many of us have quietly dropped our quarantine hobbies, but internet creator Teigan Reamsbottom is still waking up at 4 a.m. to cut together camp-classic pop culture video clips. He also works to improve their video quality, using a dedicated gaming PC loaded with RAM and a software suite whose machine learning algorithms are tuned to preserve the clips’ original character.

In a world where finished movies vanish as tax write-offs and physical media is scarce, it’s encouraging to see works of pop culture being unearthed, even if it means no celebrity from the ’80s or ’90s is safe.


Exhibit A: an interview with Kim Cattrall, from the “Sex and the City” heyday, in which she scats while her then-husband Mark plays upright bass. The clip has long been an internet fixture (it was once the subject of an exhibition at the Lower East Side gallery THNK1994), and it’s a great example of the delightfully unhinged, sometimes cringeworthy pre-TikTok camp that Reamsbottom restores. In a certain corner of the internet, its restoration was cause for celebration. Check out the restored version of the blurry clip we’ve been obsessed with for years below:

But upscaling video (increasing a clip’s resolution so it holds up on our higher-resolution screens) is a process fraught with challenges. Capturing and transforming the data is labor-intensive, and every method leaves behind artifacts that shape how the new images look. That danger grows as machine learning models remove human labor and judgment from the equation.
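To see why classical upscaling can’t add real detail, here is a minimal sketch (pure Python, no imaging library — the function name is illustrative, not from any tool mentioned in the article) of nearest-neighbor upscaling, the simplest method: it only repeats existing pixels, so hard edges stay blocky, and anything smarter must either blend pixels (soft and blurry) or predict new ones (sharp but potentially wrong).

```python
def upscale_nearest(image, factor):
    """Upscale a 2D grid of pixel values by an integer factor,
    using nearest-neighbor: every output pixel is a copy of an
    input pixel, so no new detail is ever created."""
    out = []
    for row in image:
        # Repeat each pixel horizontally...
        wide = [px for px in row for _ in range(factor)]
        # ...then repeat each widened row vertically.
        out.extend([list(wide) for _ in range(factor)])
    return out

# A tiny 2x2 "frame" with a hard edge between dark (0) and bright (9).
frame = [[0, 9],
         [0, 9]]

for row in upscale_nearest(frame, 2):
    print(row)
# The edge between 0 and 9 stays just as abrupt, only bigger.
```

Machine learning upscalers exist precisely to fill in the detail this kind of resampling cannot invent — which is also where they introduce artifacts of their own.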

Even professional 4K transfers can look airbrushed or plasticky, a needlessly flattened version of something that looked right in its original format. (If you’ve made it this far into this article, this is your PSA to make sure motion smoothing is turned off on your TV.) As Chris Person noted about the pitfalls of AI video upscaling for Aftermath, “Why transfer the tape correctly when you can make a computer make a bad guess?”

This is essentially what AI video software does. It predicts where a person’s face begins and ends, how their hands move, how light and moisture interact with their skin. It does this very badly, but it does it so many times that it (hopefully) eventually lands close enough to mostly right. For Reamsbottom, it’s the best option for clips where no source video is available, and for the kind of short, campy moments that might never attract the attention of professional restoration houses.
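As a loose analogy for “badly, but so many times it gets mostly right” (this is not Reamsbottom’s actual pipeline or any specific model, just an illustration of the statistical idea), a predictor whose individual guesses are wildly off can still converge on the truth when enough guesses are combined:

```python
import random

def noisy_guess(truth, spread, rng):
    """One 'bad' estimate: the true pixel value plus large random error."""
    return truth + rng.uniform(-spread, spread)

def averaged_estimate(truth, spread, n, seed=0):
    """Average n bad guesses; more guesses -> closer to the truth."""
    rng = random.Random(seed)
    return sum(noisy_guess(truth, spread, rng) for _ in range(n)) / n

truth = 128.0  # the "real" pixel value we are trying to recover
print(abs(averaged_estimate(truth, 50.0, 5) - truth))     # often far off
print(abs(averaged_estimate(truth, 50.0, 5000) - truth))  # much closer
```

Real upscaling models are far more structured than averaging random noise, but the same principle — many individually unreliable predictions aggregated into a plausible whole — is why the output is only ever “mostly” right.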

He told IndieWire it’s a constant process of trial and error: balancing the different AI models available, and adjusting sharpness and shadows to work around the places where upscaling programs are most likely to fail. Like teeth.

“As the detail on the face increases, the AI model will also super-enhance the detail in the teeth, and you can see the dark lines between individual teeth,” Reamsbottom said. “So it can look a little scary. Sometimes it looks like someone suddenly has very dark teeth, because it draws the outline of each tooth so clearly.”

Reamsbottom has to play with the details and, as he puts it, is always trying to avoid turning the subject of a video into a Pixar character. He also has to account for the inherent challenges of the visuals themselves. “I did one of Phyllis Diller recently, which was really hard to upscale because her outfit was sequined,” Reamsbottom said. “Her face might look beautiful, but suddenly the sequins don’t look right. You really have to play around with it a lot.”

“Playing around” doesn’t begin to capture the time this requires. Even with a dedicated upscaling computer, a single pass on a 30-minute video can take more than a day, Reamsbottom said. And machine learning can’t fix heavy pixelation.

“To turn something into something truly beautiful, you need to start with something of at least medium quality,” he said. “Even then it might be questionable, but some of the super low-quality, highly pixelated stuff I try to work on? It’s tough. Faces are hard, teeth are hard, or a nose might just disappear.”

While new Samsung phones (and similar models coming to the iPhone) have generative “AI” editing features that seem to work much faster, representing reality convincingly with deep learning models is still the province of people who work with video professionally, and of those, like Reamsbottom, who can devote serious time and money to the effort.

The time demands may be high, but the tools aren’t expensive: Reamsbottom uses a $300 software package from Topaz Labs. And when it’s done right, the results can be incredibly rewarding.

Reamsbottom has been working on an archive of Connie Francis tapes donated to him by the family of a fan who recorded hours of footage of the pop singer. “There’s performance footage and personal footage of Connie, and when you see it upscaled you almost get emotional, because it’s like you’re experiencing it for the first time,” Reamsbottom said. “The end result is mesmerizing and extremely clear. It’s like you’re there again.”
