
Hey everyone, as usual, every once in a while I'll look at tech other than audiophile stuff. In late 2024, I wrote about the nVidia GeForce RTX 4070 Super GPU and mentioned the upcoming RTX 5070 Ti along with some expectations I had for this interesting card compared to the previous generation. Well, now in 2025, the new generation is here and, as you can see above, I've got one to test out.
The box above is the Gigabyte GeForce RTX 5070 Ti Windforce SFF 16GB model which is probably the lowest-priced "MSRP" version of this class of cards; I got one of these at a local dealer. This is similar to the Gigabyte GeForce RTX 5070 Ti Windforce SFF OC (Canada) I see available online at places like Amazon but you'd save some money with a non-OC model if you can find it. (As usual, check out other brands like MSI, ASUS, Zotac, PNY for comparisons and relative deals.)
The only difference between the OC and non-OC Gigabyte Windforce models that I can tell is that this card is not pre-overclocked from the factory, and I believe software like Afterburner caps the power limit at 100% instead of 110%. Since I really have no desire to overclock GPUs (if anything, I do the "Power Limited Overclock" thing by pushing up GPU/memory clocks while lowering the power limit), I'm willing to stick with stock speeds and tweak if I have the inclination.
Here are the pieces in this package unboxed:
It's a "SFF" card which is supposed to be "
Small Form Factor" (defined as max 151mm height, 304mm length, 50mm wide). It's full length with 3 fans and will take up 2.5 slots. Notice the inclusion of mounting hardware if you're going to put this in an upright tower case as it lies sideways, needing reinforcement (as
shown when I examined the GeForce RTX 4090).
For this Gigabyte Windforce card, they advertise the benefits of their "Hawk Fan", graphene nano lubricant, "server-grade thermal gel", copper plate and copper composite heat pipes, etc. Each company will have their variations on this theme.
As usual with recent nVidia RTX GPUs, we have 1xHDMI 2.1b and 3xDisplayPort 2.1b A/V outputs:
And on the top we can see the recessed 12VHPWR connector and, right beside it, the small DIP switch to select the "PERFORMANCE/SILENT" fan BIOS option. The fans will spin down and stop when the GPU is under minimal load. There's some fan-stop noise which I personally did not think was an issue, although I've seen posts from others concerned about this (see here). I find coil whine more of an annoyance; I did not notice any issue with this card.
While the power plug is slightly recessed, it's the traditional design that sticks straight up. I liked the angled plug I saw in the nVidia Founders Edition cards like this.
I'll just keep the dip switch in the "PERFORMANCE" default position for these tests and in use.
As with my recent RTX 4070 Super article, I'll put it in the same Silverstone HTPC case used in my living room, bought before 2010!
Fan noise is obvious when playing for a while, but let's say it's not any worse than the RTX 4070 Super and maybe about the same as my old XBOX 360 when pushed; all to say it's quite typical noise-wise. One of these days, it would be nice to see high-powered graphics cards run very quietly without complex contraptions like liquid cooling.
Here are the technical stats as per GPU-Z:
nVidia drivers 576.28, dated April 27, 2025, were used for testing.
This is the "Blackwell" generation of GPU processors, still 5nm process as with the previous RTX 40XX generation. A key new feature is the faster Samsung
GDDR7 memory - twice the speed of GDDR6 while more energy-efficient. For the RTX 5070 Ti, we have a 256-bit bus with GPU base clock at 2295MHz, boosting to 2452MHz. Notice with this non-OC version, as expected, the base and boost clock numbers are no different between the "Default Clock" and "GPU Clock" rows.
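As a quick sanity check on those memory specs, here's a minimal sketch of the bandwidth arithmetic in Python; note that the 28Gbps effective per-pin GDDR7 data rate is my assumption from published RTX 5070 Ti specifications, not something shown in the GPU-Z screenshot:

# Minimal sketch of peak memory bandwidth arithmetic (assumption: 28Gbps
# effective per-pin GDDR7 rate for the RTX 5070 Ti; not shown in GPU-Z above).

def memory_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak theoretical bandwidth in GB/s = (bus width in bytes) x per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps_per_pin

print(f"~{memory_bandwidth_gb_s(256, 28.0):.0f} GB/s")   # ~896 GB/s on a 256-bit bus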
The GB203 chip has 45.6B transistors - a pretty amazing number (the top-end RTX 4090 has 76.3B, and the RTX 5090 is at 92.2B)!
One complaint from gamers is the desire for more VRAM, feeling that nVidia has been stingy in not moving more quickly beyond 16GB to something like 24GB. I'm sure this will increase, all in good time. Presumably in the near future, we might be seeing content with neural texture compression (great background info and details in the NTC paper from 2023) which could help significantly increase texture detail while reducing VRAM use. This would also reduce the amount of SSD/HD storage for those huge hundred-plus gigabyte game installs!
Benchmarks:
Let's look at some graphics benchmarks for comparison using 3DMark:
An average of 68.7fps in "Steel Nomad" (4K) is an impressive 52% increase over the ASUS RTX 4070 Super and about 30% below the Gigabyte RTX 4090. Good to see that my score is a little above average based on the numbers submitted to 3DMark, even with a non-OC card.
Let's have a look at the ray-traced "Speed Way" (1440P default) benchmark:
75.9fps puts the RTX 5070 Ti again about 52% over the RTX 4070 Super and 29.5% below the RTX 4090. This time the result is a little below the average of online submissions (which include overclockers). Interesting that my result sits right between the 2 peaks on the graph; I wonder what that's about!?
And for older DirectX 11 games, here's "Fire Strike Ultra" (4K):
Still an impressive 46% increase for the RTX 5070 Ti over the RTX 4070 Super. And around 20% below the RTX 4090.
Nice to see the relatively consistent 45-50% uplift in speed of the RTX 5070 Ti over the RTX 4070 Super for both rasterization ("Steel Nomad", "Fire Strike Ultra") and ray-traced ("Speed Way") benchmarks. With the same CPU/memory/motherboard, the increase in performance is greater than what I had anticipated based on shader-core counts and clock speeds alone, suggesting an additional 10-15% generational improvement, likely thanks to the GDDR7 VRAM and architectural optimizations.
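For those who want to double-check the percentages quoted in these benchmark comparisons, here's a minimal sketch of the arithmetic; the ~45fps RTX 4070 Super figure used in the example is simply back-calculated from the ~52% uplift, so treat it as approximate:

# Sketch of the percentage comparisons used throughout this article.

def percent_faster(fps_a: float, fps_b: float) -> float:
    """How much faster A is than B, in percent."""
    return (fps_a / fps_b - 1.0) * 100.0

# Steel Nomad 4K: RTX 5070 Ti at 68.7fps vs an approximate ~45.2fps baseline
# (back-calculated from the ~52% uplift over the RTX 4070 Super).
print(f"{percent_faster(68.7, 45.2):.0f}% faster")   # ~52%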
So speedwise, the RTX 5070 Ti most closely resembles the RTX 4080 Super from the previous generation.
For possible future reference, here's my 3DMark "Time Spy Extreme" (4K, DirectX 12) result, another older benchmark released back in 2017:
For fun, let's also run the GameTechBench "Refuge" ray-traced benchmark in 4K (v.1.17):
My RTX 5070 Ti setup here scores 1674 points. It's a relatively new benchmark so I'm not sure how stable the code is yet; maybe one to keep an eye on going forward with other hardware.
DLSS 4:
No doubt, GPU buyers these days are well aware of the battle between nVidia RTX and AMD Radeon cards. Yeah, I know that one could buy the 16GB Radeon RX 9070 XT and RX 9070 cards for less money (prices can fluctuate a lot!). There are many comparison reviews out there already, including some advocating for AMD.
For me, here's the bottom line between the nVidia RTX 5070 Ti and the AMD Radeon RX 9070 XT: on average, the RTX 5070 Ti is a little faster, by about 5% in raster speed and 15-20% in ray-tracing, than the AMD RX 9070 XT. On the street, the AMD is realistically priced maybe 15-20% less than the nVidia, which is fair.
However, other considerations like driver quality and use in AI (I'll use this card for local LLM testing) easily push me towards the market-leading nVidia GPUs. Furthermore, we're inevitably going to be increasingly tied to AI-enabled features like frame generation for improving frame rates. For now, and into the foreseeable future, nVidia's DLSS remains the premium product.
With nVidia's "Deep Learning Super Sampling" (DLSS) 4 and multi-frame generation, we can now achieve numbers like these with Cyberpunk 2077:
152fps for Cyberpunk 2077 in 4K, 4x multi-frame generation, with all graphics features including ray and path-tracing turned on.
Yes folks, this has an impressive, clearly noticeable, positive effect on gameplay and visuals. While my experience with DLSS4 has been positive, there are artifacts at times in fast-moving scenes with frame generation turned on. You'll see relatively mild, fleeting anomalies around static HUD elements when moving, for example, worse at the higher 3x or 4x multi-frame generation settings.
I believe nVidia is correct in pursuing these enhancements using AI and not just being satisfied with following a linear path of generationally higher and higher rasterization performance or raw ray-tracing speed. We'll talk more about this below.
AMD and their FSR (FidelityFX Super Resolution) is also an important step forward. I especially applaud their open-source model, but they do have a ways to go before fully challenging DLSS image quality.
Other than image quality, the other gaming issue with multi-frame generation is higher input latency (the time between moving your controller and seeing the on-screen effect), which nVidia Reflex 2 ("Frame Warp", asynchronous reprojection) is supposed to deal with. So far, it looks like they're doing a good job for those who play competitive first-person shooters. Regardless, I think if one is using multi-frame generation, the underlying frame rate being rendered (as opposed to interpolated by the AI algorithm) should probably be at least 30fps, preferably 60fps, both to improve the quality of the generated frames and to avoid the game feeling too "floaty" in play.
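To make that "at least 30fps, preferably 60fps" suggestion concrete, here's a rough sketch of the arithmetic, assuming 4x multi-frame generation means one rendered frame plus three AI-generated frames per four displayed; real input latency also depends on Reflex, the render queue and the display, so the frame-time figure is only a lower bound:

# Hedged sketch: with multi-frame generation, the displayed frame rate is roughly
# the underlying rendered rate times the MFG factor, but responsiveness still
# tracks the rendered rate rather than the displayed rate.

def rendered_fps(displayed_fps: float, mfg_factor: int) -> float:
    """Approximate underlying rendered frame rate (1 rendered + N-1 generated frames)."""
    return displayed_fps / mfg_factor

displayed = 152.0                     # e.g. the Cyberpunk 2077 4K result above with 4x MFG
base = rendered_fps(displayed, 4)
print(f"~{base:.0f}fps rendered, ~{1000.0 / base:.0f}ms per rendered frame")
# ~38fps base and ~26ms per rendered frame - above the 30fps floor suggested here.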
A little Power-Limited Overclocking:
As a non-factory-overclocked card, let's use MSI Afterburner to push the GPU and memory clock speeds a bit and see how we do:
Increasing the max GPU core clock by +250MHz and memory by +1000MHz worked well; I'm sure I could push higher if needed. The score increased from 6869 at stock to 6979 (an extra ~1fps average, still a small 1.6% gain) even while dropping the power limit to 95%. A little extra speed with less power draw is always a good thing!
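Here's a minimal sketch of that trade-off arithmetic; the ~300W rated board power used for the efficiency estimate is my assumption for illustration only, and the card only actually draws that much when running into its power limit:

# Sketch of the power-limited overclock numbers above.
# Assumption: ~300W rated board power for the RTX 5070 Ti, for illustration only.

RATED_POWER_W = 300.0

def pct_gain(new: float, old: float) -> float:
    return (new / old - 1.0) * 100.0

stock_score, oc_score = 6869, 6979      # Steel Nomad scores quoted above
stock_limit, oc_limit = 1.00, 0.95      # Afterburner power limit settings

print(f"Score gain: {pct_gain(oc_score, stock_score):.1f}%")    # ~1.6%
print(f"Score-per-watt gain: "
      f"{pct_gain(oc_score / (RATED_POWER_W * oc_limit), stock_score / (RATED_POWER_W * stock_limit)):.1f}%")
# Roughly +7% score-per-watt, if the card really runs at its power limit in both cases.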
Of course, this would not be good if unstable. To verify, let's use the Stress Test mode for 20 runs of the "Steel Nomad" benchmark:
Great! A <1% drop in frame rate over 20 repeated runs across about 20 minutes of the "Steel Nomad" 4K benchmark indicates that there's no deterioration in performance as the card heats up (no thermal throttling). It's especially important to test for stability given the use of that relatively small 520W power supply!
Just in case, since ray-tracing will have different stresses, here's the "Speed Way" 1440P benchmark stress test result:
Again, this looks excellent with <1% variance in the average framerate between runs across about 20 minutes.
The temperature sensor plateaus around 70°C across the stress test, which is quite normal as a target for many GPUs. My CPU runs hotter.
Finally, as a test of overclockability, let's push the card harder with +350MHz core and +2000MHz memory at 100% power limit (this is typically more aggressive than factory OC cards, which usually push the core less than +200MHz and leave memory untouched):
Nice, it's scoring well past the average 3DMark scores for 4K "Steel Nomad" submitted by others with similar hardware. With overclock, we're at 72fps average (almost 5% faster compared to stock, no DLSS of course) and stable with no glitches noticed in stress testing.
In the stress test, it can dip by ~1fps between runs, but that's practically insignificant at almost 99% stability looking at the best and worst loops (bottom right). Notice that the memory overclock is maxed out in MSI Afterburner, suggesting that the Samsung GDDR7 VRAM can probably be pushed even faster. Likewise, I might even be able to push the core beyond +350MHz.
One last bonus overclocked benchmark - ray-traced "Speed Way" in full 4K (2160P, default is only 1440P):
In 2025, I have no intention of playing games on a 65" 4K/120fps screen in 1440P! So if I'm going to benchmark ray-traced performance, it's going to have to be in 4K as well, like "Steel Nomad". The default score is 48.4fps and this jumps to 51.5fps overclocked (+350 Core, +2000 Mem), a gain of >6%.
On my stock RTX 4090 system, I'm getting 61.58fps with "Speed Way" in 4K, which is about 27% faster than the stock RTX 5070 Ti, and +20% above the overclock.
In all likelihood, I won't be routinely overclocking, but certainly I can't complain about the extra latitude!
Summary:
While recent-generation GPU prices are high, especially when new (these nVidia RTX 50XX "Blackwell" generation cards just came out in late January, with the RTX 5070 Ti on Feb 20, 2025), they do represent an advancement. Needless to say, as used in a gaming machine, this is generationally beyond current game consoles; the idea of a PS6 capable of 4K/120fps using "advanced upscaling" (rumored for 2027), or an equivalent XBOX, describes what modern RTX cards already do.
Specifically regarding this Gigabyte GeForce RTX 5070 Ti Windforce, it seems to be well-built and runs as anticipated. Performance-wise, it's 10-15% better than I had initially estimated with some latitude for overclocking (another +5% with my card here). This level of performance in the previous generation would have been equivalent to the RTX 4080 Super or about 45-50% over the RTX 4070 Super discussed late last year.
Indiana Jones and the Great Circle with DLSS4, 4X multi-frame generation, in the remake of the iconic Golden Idol scene at the start. Insane VRAM requirements even at just "high" quality! With frame-gen hitting ~120fps, it looks pretty smooth although it has some distortions, mainly around static HUD elements. It plays quite well and this is probably the only way to get high framerates in 4K (DLSS "Quality" upscaling) with ray/path-tracing.
With the RTX 50XX generation having improved architectural features (eg. INT4/FP4 support, improved INT32 bandwidth, shader reordering, RT core enhancements, etc.) plus more modern GDDR7 VRAM, DLSS4 multi-frame generation is just the first of the "neural rendering" benefits. It's still early days for DLSS4, and other features are yet to be unveiled in games currently in the pipeline.
For those willing to spend even more money on a GPU 😏, consider the RTX 5080 with its 10,752 shader cores. This bad boy will get you another 20% performance over the RTX 5070 Ti, but it's of course more expensive and unfortunately still has 16GB VRAM.
Again, check out the AMD Radeon RX 9070 XT and RX 9070 cards, which perform well and comparably to the RTX 5070 Ti in most games; the XT might even run a little faster at times in non-ray-traced titles. The price might be OK, and if I didn't have the need to do some AI-related LLM testing, I could probably have easily grabbed one of those. I've seen a few articles recommending the Radeon cards that seemed a bit overly dramatic. 😂
One last thing: notice that in the GPU-Z image above, the PhysX feature is not checked, whereas it was with the RTX 4070 Super? nVidia was called out back in February for quietly dumping 32-bit PhysX, so if you're playing a lot of older PhysX games like the Batman: Arkham series (among others), the new RTX 50XX generation might not be for you. I trust not many of us still spend a lot of time on these relatively few pre-2015 games? In April, the PhysX SDK went open-source, so there might be some way to get this working again in the future if citizen developers get at the library with 32-to-64-bit wrappers.
Like with everything (including audiophile gear of course!), price-to-performance value is important so shop around for reasonable deals if you're lusting for a new GPU.
Game on!
--------------------
Embracing the intelligent, and imagine, relinquishing "lossless" orthodoxy.
IMAGINE! That's an important concept, isn't it? We literally live in our heads/minds, in the imagined. We build models of our world that filter the way we experience life and understand the stimuli we're fed from our limited sensory systems. Within our imaginations, the experience is in no way an "exact" copy of the real world.
When talking about audio, we're typically not dealing with huge amounts of digital data by today's standards. While the human ears are remarkable organs, technically, CD-standard 16-bit/44.1kHz will do a phenomenal job, its limits perhaps surpassed only if one is truly a "Golden Ear" (almost ideal hearing abilities, young, no previous major ear infections...) listening in a very, very quiet environment, with fantastic room acoustics, using great gear, at loud reference levels, listening to very high dynamic range music of course (the various domains of sound quality). I would guess that 99.9% of the time, this is not the case.
Indeed, for audio recordings, especially of 2-channel stereo material, we have plenty of storage these days to own huge collections of "lossless" bit-perfect compressed FLAC, or ALAC, or even uncompressed WAV (just don't tell me lossless uncompressed sounds better 😉).
As such, we can desire "lossless" audio because technically this is "perfect". What does it actually mean to be "lossless"? Well, it's just a term we apply to digital data compression. Analogue is always "lossy": quality deteriorates with generational copies and even each time we play the media. "Lossless" just means that the container, the coder/decoder used to store the digital data, maintains bit-perfect exactness when the data is retrieved. With audio, it does not actually imply anything about the perceived resolution of the data, nor is it a reflection of the "goodness" of the music, whether a person will enjoy the contents of that data.
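For the curious, "bit-perfect" is easy to check for yourself: decode the lossless file back to PCM and compare hashes against the original. Here's a minimal Python sketch using only the standard wave and hashlib modules, assuming you've already decoded your FLAC back to a WAV with your tool of choice (the file names are just placeholders):

# Hedged sketch: verify a lossless round trip by hashing the raw PCM frames.
# Assumes "original.wav" was encoded to FLAC and decoded back to "roundtrip.wav"
# with an external tool; only the audio frames are compared, not the metadata.
import hashlib
import wave

def pcm_md5(path: str) -> str:
    """MD5 hash of the decoded PCM frames of a WAV file (ignores header/tags)."""
    with wave.open(path, "rb") as w:
        return hashlib.md5(w.readframes(w.getnframes())).hexdigest()

if pcm_md5("original.wav") == pcm_md5("roundtrip.wav"):
    print("Bit-perfect - identical samples retrieved.")
else:
    print("Mismatch - something altered the samples along the way.")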
We've known for a long time now that lossy audio compression can sound indistinguishable from lossless, even going back to high-bitrate MP3 (and some might even prefer the very subtle effect from lossy compression). Lossy compression is powerful because it applies "intelligence", using psychoacoustic science in the way the data is encoded and then reconstructed into a very close approximation of the original waveform, with accuracy often beyond the ability of human listeners to discern. It's not some willy-nilly throwing away of bits, as implied by people like Neil Young back in the day when he was trying to sell lossless hi-res 192kHz files as if these sounded much different just because of higher sample rates, or data sizes larger than CD or lossy files!
It's not a simple binary choice between "lossless = good", and "lossy = bad" as I've often seen in discussions with indignant audiophiles insisting that good audio must be lossless. There's a qualitative gradient depending on the bitrate that is chosen for that specific lossy encoding format.
Conservatively, I have not seen data to suggest that with modern encoders, there's any lack of transparency between lossless CD-quality 16/44.1 and AAC 192kbps. An uncompressed stereo CD-quality 16/44.1 stream is approximately 1.4Mbps, lossless FLAC can bring this down to 50% or ~700kbps. With lossy 192kbps AAC, data rate is only 14% of the original uncompressed "CD" audio. Even if you insist on 320kbps to ensure quality, that's still only 23% of the original uncompressed data; and less than 50% of the FLAC lossless compression. Sure, we can argue that if we're going to use 320kbps, we might as well keep it as lossless FLAC given that storage is cheap.
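Here's that arithmetic as a quick sketch if you want to play with other bitrates (the 50% FLAC figure is of course just a typical average, as noted above):

# Sketch of the CD vs FLAC vs lossy data-rate comparison above.

BITS, RATE, CHANNELS = 16, 44_100, 2
uncompressed_kbps = BITS * RATE * CHANNELS / 1000    # ~1411 kbps for 16/44.1 stereo
flac_kbps = uncompressed_kbps * 0.50                 # typical ~50% lossless compression

for aac_kbps in (192, 320):
    print(f"{aac_kbps}kbps lossy = {aac_kbps / uncompressed_kbps:.0%} of uncompressed CD, "
          f"{aac_kbps / flac_kbps:.0%} of typical FLAC")
# 192kbps -> ~14% of uncompressed, ~27% of FLAC
# 320kbps -> ~23% of uncompressed, ~45% of FLAC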
As media gets more complex though, such as multichannel, we'll need to multiply the amount of data needed. For 7.1 (8 channels) of 24/48, it's still "just" over 9.2Mbps. With something like MLP/Dolby-TrueHD lossless compression applied, similar to FLAC, we can save 50% or so depending on the material, which would bring it down to 5Mbps. These days, we can add all kinds of object metadata (like Atmos) and this could increase the data rate to 6-18Mbps. We're now getting into the territory of large files by today's standards. A complex 15Mbps 4-minute 7.1 TrueHD/Atmos song would be about 450MB. That's almost 1/2 a gigabyte for just 4 minutes, which could be fine on a music BluRay disc (typically 25-50GB), but would this be reasonable if we had a large library with thousands of songs, and would this amount of data transfer seem reasonable for a 1-time stream across the Internet? I would argue probably not as a consumer. (Obviously, for studio work or for archiving, lossless is the way to go.)
And so it is that streamed lossy Dolby EAC3-JOC Atmos at 768kbps, such as from Apple Music or Tidal or Amazon Music, is completely reasonable and, for the vast majority of situations, completely transparent (and why I believe Stereophile was ridiculously dramatic). For my own encoding, the rule-of-thumb I follow, based on personal preference along with some foundational evidence from documents such as EBU-TECH 3324 (2007), is that for modern lossy codecs like AAC, Vorbis, MPEG-H 3D Audio, Opus, EAC3 or AC-4 using a good encoder, 96kbps/channel would already be transparent. While this is a broad generalization since some codecs like Opus might be better than others at lower bitrates (see here), for modern lossy encoders, anything around 192kbps stereo or 512kbps for 5.1 (96 x 6 = 576kbps, but since the .1 channel is typically low-passed below 120Hz, it barely takes any space) would be indistinguishable from lossless.
[If you look around, like this, the typical recommendation for modern lossy codec transparency is 64kbps/channel. As an audiophile, my 96kbps/channel recommendation incorporates a "perfectionistic premium"! I'll also make sure that the codec cut-off is at least up to 20kHz to ensure full audible bandwidth encoded.]
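For reference, here's the simple rule-of-thumb arithmetic as a sketch; the low-passed ".1" channel is treated as negligible, per the note above:

# Sketch of the 96kbps/channel rule-of-thumb discussed above.

PER_CHANNEL_KBPS = 96    # my "perfectionistic premium"; ~64kbps/channel is the common recommendation

def naive_target_kbps(channels: int) -> int:
    """Total bitrate if every channel (including the LFE) got the full allocation."""
    return channels * PER_CHANNEL_KBPS

print("Stereo:", naive_target_kbps(2), "kbps")       # 192kbps
# 5.1 = 6 channels -> 576kbps naively, but the low-passed ".1" barely takes any
# space in practice, so ~512kbps is the figure used in the article.
print("5.1:", naive_target_kbps(6), "kbps naive, ~512kbps in practice")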
As for MP3, I'd be a little more generous with the bits and say 128kbps/channel would be more than adequate. So I don't think I'd fault LAME-encoded 256kbps MP3 files. While we can think of MP3 as the "lowest common denominator" lossy codec, excellent for compatibility, it is 34 years old now in 2025. We should at least move on to AAC (28 years old), with the base AAC-LC patents already expired. It has been said that AAC's ability to use smaller block sizes can improve transient handling, as one concrete example of improvement.
Complexity and the need for data increase for high-quality video, which is a different beast with both horizontal and vertical dimensions - doubling the detail in each dimension results in 4x the number of pixels. As far as I know, other than maybe low-resolution animated GIF 🙂, there is no lossless video in any meaningful consumer format (for archival/production use, FFV1 provides ~50% compression or so). It's just not practical for consumers to have lossless video content at all. Consider for a moment how much data is needed for 1 frame of 1080P (1920x1080) 8-bit RGB image - about 6.2MB! That's just 1 frame. 1 second of 24fps = 149MB. 1 minute of 24fps = 8.96GB. 1 hour of 24fps, 1080P, 8-bit RGB = 537GB. A 2-hour movie would need slightly more than a 1TB SSD! And that's just 1080P - quadruple that for 4K, and we're looking at a 4.3TB drive for a 2-hour movie! Then don't forget to more than double that if you want 60fps, and increase again if you want 10-bit HDR color. Quadruple those numbers again for 8K!
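Since those raw-video numbers are easy to get wrong, here's the arithmetic as a small sketch you can tweak for other resolutions, frame rates and bit depths:

# Sketch of the uncompressed video storage arithmetic above (8-bit RGB, no chroma subsampling).

def frame_mb(width: int, height: int, bytes_per_pixel: int = 3) -> float:
    return width * height * bytes_per_pixel / 1_000_000

def movie_tb(width: int, height: int, fps: int, hours: float, bytes_per_pixel: int = 3) -> float:
    return frame_mb(width, height, bytes_per_pixel) * fps * 3600 * hours / 1_000_000

print(f"1080P frame:      ~{frame_mb(1920, 1080):.1f}MB")            # ~6.2MB
print(f"2h 1080P/24fps:   ~{movie_tb(1920, 1080, 24, 2):.2f}TB")      # ~1.07TB
print(f"2h 4K/24fps:      ~{movie_tb(3840, 2160, 24, 2):.2f}TB")      # ~4.3TB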
Is it any wonder then that movies are always lossy encoded to something like MP4 (H.264), AV1, VP9 (YouTube) or HEVC (H.265, HDR BluRay) by the time they reach the consumer? Let's also not forget the use of chroma subsampling to compress color further.
While in the audiophile world we might still want to argue about lossy vs. lossless codecs, that's just because we have the "benefit" of a hearing mechanism capable of processing only so much information per unit time, so we have the luxury of downloading lossless files. Beyond audio, lossy compression is necessary, with demonstrably excellent quality. There's no need to be prejudiced against lossy compression; in fact, I would argue that for most of us, lossless files are actually a waste of storage space even if intellectually, as audiophiles, we "think" we need all these FLAC, ALAC, WV, etc. files. (Including myself - we're all neurotic, just don't get too neurotic!)
How does this discussion about lossy compression then possibly pivot to a blog post about a GPU?! 🤔 Basically, it's the idea that we can use "smart" algorithms to create high-quality experiences with minimal perceptible loss and less data.
Gaming is about creating complex imagined worlds with space and time dimensions that normally require a lot of data and computation, right? Just as the perfectionistic, lossless-insisting audiophile might need to accept that algorithms with "intelligence" can reconstruct lossy data in ways that are transparent to human perception, so too are we seeing an evolution, through the application of AI, in the traditional way we think about creating high-quality experiences.
Compared to brute-force 1:1 exact rendering of each frame ("lossless"), when a system like DLSS4 creates all those "fake" frames ("lossy"), it does so in a "smart" way that presents to the user an "imagined" smoother-framerate experience. It's not just data reduction as with audio or video, but computation reduction, "imagining" the scene through machine-learning algorithms trained on what things are supposed to look like.
This idea of using less data to generate what is perceived can be applied in all sorts of other ways - for example, in the tools that nVidia is making available with its RTX Toolkit and "neural rendering" of materials, textures, faces, geometry, lighting, etc. Content is learned, not 1:1 memorized, to create that complex detail. It's quite impressive that even currently, with 4x multi-frame generation, only 20% of pixels shown on-screen are actually coming directly from classic rendered computation (in reality even less when we consider that there is typically upscaling applied in DLSS).
Early demo of RTX NTC - texture memory reduced by 88% compared to traditional BCn/S3TC while maintaining excellent quality, at the expense of a slight hit to framerate (-18%). Let's see how this feature translates in an actual game!
[For completeness, since I had not seen this published, the Cooperative Vector Processing feature, part of the RTX 40XX and 50XX generations for accessing Tensor core processing, appears to be very important for RTX NTC performance:
With CoopVec turned off, the frame rate drops by 70% in this demo, compared to only a 13% drop on my RTX 4090 with the feature activated! I wonder whether the AMD Radeons have this hardware accelerated. More information here from the recent GDC 2025.]
Even as the imagery looks increasingly complex and realistic, perhaps one day surpassing the human ability to discern a real picture from the "dream" of a machine, it is still being reconstructed through algorithmic approximations, akin to our minds creating all kinds of vivid dreams - and nightmares. (Beyond visual imagery, AI-generated music of course will be part of the multimodal expansion as discussed last year.)
At some point, technologies like multi-frame generation in today's GPUs will appear just as normal and transparent to gamers as lossy compression of digital audio is for Spotify users, the majority of whom probably don't know the difference nor care whether the content was lossless or reconstructed through a psychoacoustic model. In fact, in many circumstances, a non-bit-perfect, DSP-manipulated signal could already sound better anyway (think room correction, or crosstalk cancellation).
I quite like where nVidia is going with these technologies; it's a "smarter" direction that will in time expand the possibilities of interactive media. Lots of exciting stuff ahead which I think will leave even tech-savvy hobbyists amazed in the months and years to come!
Computers using intelligence based on internal models to create a sense of reality using sparse data? Nothing all that surprising. After all, isn't that in part what it means to be human? To have emergent experiences based on limited information, inevitably colored by imaginary mental projections, learned and encoded within our 80-100 billion neurons.
Hi amigo. I didn’t read the full article—I’ve no interest in video games. I only read a small part about lossless and lossy audio. There is an obvious audible difference between lossy and lossless, but it’s small, and I can’t say that lossy audio sounds worse.
I first heard the album Shadows & Dust by Kataklysm on Spotify, and when I heard the CD, I wondered what was wrong with it. The drums sounded different. On the lossy audio, the drums are more vain and clicky, and they stand out a bit more—that was the most obvious difference to me. On the CD, the drums have more depth, but they're not as punchy.
Apologies for the rubbish description—I hope you can understand what I’m saying.
I think anything beyond 16-bit and 48kHz is overkill. My amp is limited to around 16-bit SNR (I can’t quite remember—it may even be lower). I also read a little about video quality. For me, I really enjoy surround sound and would love it to become the default mix for music albums. I think 1080p HDR video and lossless audio is all that's needed, and I still think 4K is a scam. I want to be able to justify buying height speakers for Dolby Atmos—I like the idea, but it just costs too much and there aren’t enough films available.
I personally don’t understand why CDs are still being produced. I think CDs are an outdated waste of resources. It won’t be long before potentially billions of CDs become unplayable and end up dumped in landfills—and the same goes for records and blu ray.
I think it’s best to move on to other digital audio containers that use fewer resources and can be recycled, with a standard of 16-bit 48kHz for consumer playback. I believe the metals in hard drives can be recycled—though I’m not sure. If anyone knows better, let me know.
From my experience, even lossless uploads to YouTube end up with elevated treble after the lossy compression to Opus. It makes tracks that are originally borderline sibilant become unlistenable.
Hey Dan and Jonathan,
Yeah, we have to be careful to compare apples-to-apples of course to determine whether it's totally transparent or not.
Personally, I use dBPowerAmp for conversion to something like AAC or Opus and then have tried ABX listening tests vs. the original lossless. I have never been able to tell the difference between any of the modern codecs at 192kbps (stereo 2-channel) and lossless FLAC.
The problem with Spotify and YouTube is that we're never sure if:
1. It's the exact mastering.
2. Whether DSP has been applied (for example volume normalization, possible subtle EQ, etc.)
3. Whether there have been further bitrate changes. Lossy encoding does bring with it "generational losses" if, for example, the streamer applies further lossy compression or variable bitrate in the streaming process.
These days, I'm totally OK to admit that I can't hear a difference, so for most music I am no longer too neurotic about losslessness. For my "core collection" of albums that I love - Dylan, Davis, The Stones, '80s pop, Joni Mitchell, Enigma, classical, prog rock like Pink Floyd, etc. - I already have them as lossless in the library and this will not change. For newer music and multichannel, unless they're truly pristine recordings, I'm OK with lossy, especially for live albums or loudly compressed masterings.
I agree that we don't have to go crazy with "hi-res audio", 24-bits and 96+kHz; this is more than likely unnecessary although I really appreciate when studios release hi-res accompanied by remasters that improve more essential characteristics like higher dynamic range!
I still collect albums in lossless as I apply DRC for listening at home. I also apply headphone EQ and crossfeed when I prepare albums for listening outdoors with my portable player. When you do all this processing, it's better to have the original audio files in a lossless format, not a lossy one. Storage cards for portable players are cheap, so why should I save the resulting audio files as AAC? I see no reason for doing so. I save them at the original sample rate and dither them to 24 bits, even if the original bit depth was 16.
I agree, fgk,
If we're going to be applying further processing on just stereo content, which isn't particularly storage-hungry, there's definitely a rationale to keep it as lossless.
I mainly keep lossy audio for 5.1 (5.1 AAC512) and other multichannel if I want to be mindful of space.
"when I heard the CD, I wondered what was wrong with it" - Did you make sure that the album from Spotify and on the CD were the same version of the album? There is a chance that you listened to two different versions (different mastering) of the same album.
Hi FKG, I hadn't considered different masters. I've had a look, and my CD's dynamic range measured DR6. I don't know how to confirm whether it's the same master on Spotify, but my guess is that it is. Unfortunately, I'm not interested enough to do a blind test between my CD and lossy audio converted from the same CD to find out properly.
Just rip an audio track from the CD and download the same track from Spotify, then bit-compare them in Foobar.
DeleteNvidia is definitely the way to go if you want to experiment with local AI. Not only is it more capable, but the ecosystem is much better. This translates to both increased capability as well as a much easier time.
That said, I'm an AMD boi through and through. The 5070 is never going to be *great* at AI because AI needs lots of RAM, and 16GB ain't much. NVIDIA has always been relatively stingy with their RAM provisioning, and this is likely to continue because they can use it to price-discriminate and make tons of profit off of business customers. But more to the point, I'll leave local AI to people who know what they're doing. Cloud AI makes a lot more sense for most people.
Where AMD really shines is Linux. Open drivers vs. proprietary drivers make a huge difference. I'm not a FOSS hardliner by any stretch, but Linux is a Tower of Babel of different implementations for things, driven by developer enthusiasm. Nvidia is neither interested nor likely very capable of supporting all the various combinations, but the closed-source drivers make it very hard to crowdsource that tough work.
Then there's the meta. AMD has long championed open vs. closed versions of various technologies: FreeSync vs. G-Sync, FSR vs. DLSS, etc. I don't think they do this out of principle, but simply because they need all the help they can get to ensure that displays and games support anything more than Nvidia's proprietary technologies. But regardless of motive, open standards are good, and Intel (and even old Nvidia!) GPUs benefit from them.
Now that AMD has for once achieved performance parity, it benefits users as a whole to support them. Unless of course there's a specific reason not to, like an interest in LLMs where the homebrew AI scene heavily favors NVIDIA.
Hey there Neil,
Yup, totally agree, many good reasons to go for AMD, especially for gaming applications and the support of open-source standards.
Indeed 16GB is not much for AI LLM implementations, but for the purposes of some of what I'm using it for, it allows me to trial performance on "lower-end" hardware.
Something I am looking forward to is the upcoming nVidia DGX boxes with 128GB unified RAM, supposedly coming in Q3-2025. Let's see... that kind of horsepower could go a long way towards advancing local AI work!