I only bought the A380 by Intel because I couldn't quite afford the A380 by Airbus.
The Airbus takes more than two slots, as well. And if it crashes, rebooting won't help.
😂😂😂
Intel 770 integrated graphics is a transcode beast. Thanks for this comprehensive test.
Yes, unless you want to use a Xeon on X99 for a budget server; those chips have no iGPU, so adding a GPU is necessary.
I have an ASRock Challenger Intel Arc A380 in my Plex server. The Plex server is installed as an Ubuntu Server container on Proxmox via the tteck Helper-Scripts. Hardware transcoding runs fine on the latest stable Plex server release, and transcoding with HDR tone mapping and subtitle burn-in works too.
I moved my Plex server to an HP Mini with the i5-6500T, and the transcode performance on the iGPU is excellent for something that uses very little power, even with a lot of people connecting to it.
Yeah, the 12900K is probably close to a best case for an Intel iGPU, but older models are still very impressive at much lower prices and can be found in very low-power systems.
Your videos are awesome, dude, thanks.
This is an extremely informative video. Working on my DIY Plex/NAS Server as we speak. 😊
Great video. Thanks for all the hard work!
I'm surprised dedicated GPUs didn't do better. I'm using an N100 mini PC with Jellyfin, QuickSync enabled, tone mapping enabled. I just tried 4K HDR to 10 Mbps SDR 1080p transcoding and it showed 86 FPS, which roughly matches what you got out of the iGPU on your 12900K (3 concurrent streams). With so many variables, these things are hard to test, I imagine.
I really wanted to test Jellyfin and more, but kept running into variables and different things I hadn't thought of. Overall, the Intel iGPUs seem to be the best go-to transcoding hardware, with dGPUs making sense if an iGPU isn't available.
Any plans on making efficient and cheap Proxmox setup(s) on "normal" PC hardware in the future? Would be interesting to hear your take on components as well, man; you're by far the best source for stuff like this.
I'm curious what 'normal' hardware is to you. I think we all have different ideas of what a normal system is.
But trying Proxmox on standard hardware is something I'm adding to my future video list.
@ElectronicsWizardry I think he meant no server or rack-mounted hardware. More like old hardware, a PC or laptop, and what we can do/accomplish with Proxmox. Personally, I tried Proxmox to run several OSes, but the way drives work was unintuitive for me and I don't have the time to learn all the commands and configurations. I dropped that project and opted for FreeNAS, which I also dropped when it started giving me issues. Now I've finally migrated to ZorinOS because I can no longer stay on W11; right now I only use Windows for gaming, nothing else.
Every 1GB of VRAM roughly translates to 3.5 concurrent transcodes in my experience with a GTX 1660 on Ubuntu 20.04 with Plex. I stick to an older driver and Plex version, since things have been wonky lately. You can find many discussions on the Plex forum about their recent issues; a bunch of staff were fired and they're currently working on migrating to a newer ffmpeg version. Jellyfin is also in limbo since they're working on migrating to EF Core instead of having random SQLite calls littered throughout the codebase, so hopefully both projects will get things sorted out sooner rather than later.
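Taking that rule of thumb at face value (it's the commenter's anecdotal ratio, not an official Plex figure), a quick back-of-the-envelope sketch in Python:

```python
def estimate_max_transcodes(vram_gb: float, transcodes_per_gb: float = 3.5) -> int:
    """Rough estimate of how many concurrent hardware transcodes fit in VRAM.

    transcodes_per_gb is the anecdotal ratio from the comment above
    (roughly 285 MB per stream); real usage varies with resolution,
    codec, and whether tone mapping is enabled.
    """
    return int(vram_gb * transcodes_per_gb)

print(estimate_max_transcodes(6))   # GTX 1660 (6 GB) -> ~21 streams by this rule
print(estimate_max_transcodes(4))   # a 4 GB card     -> ~14 streams
```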
Important questions: which one did that cat prefer? Was the sniff test done in 720p or 1080p?
Just did a build with some older Xeons, a Coral TPU, and an A380 for encoding/decoding in Frigate, and I must say the A380 likely has the best encoding quality and rock-solid H.264/H.265 (HEVC) performance, especially at 4K. And (while not yet supported in Frigate) it's one of the few discrete GPUs below the many-hundred-dollar range that supports AV1 with stellar performance. I mean, who will put a 4070 Ti into a rackmount server just for encoding...
It's crazy how good Arc GPUs are at encoding; the fact that a $100 GPU can come close to the dual-NVENC cards is incredible.
I have an Arc A770 (I wanted the VRAM; I know encoding performance is the same) that I use in a dual-PC setup where I record 1440p 120 fps clips from my gaming PC, and I also have a Jellyfin server running that uses it for transcodes. Encoding speed isn't as impressive as the dual-NVENC cards, but it's certainly impressive.
From what I've noticed, dual-NVENC cards have some sort of load balancing where both NVENC encoders sit at the same usage, but with Arc cards and their dual media engines, only one is used at a time by a given application. For example, if I'm recording 1440p 120 fps footage, one engine sits at around 70% and the other at 0%. If I then start something up in Jellyfin, it uses the media engine at 0% usage most of the time. If Arc cards had the same sort of load balancing, I'm sure they would be more capable than the dual-NVENC cards. With a Deep Link compatible CPU, though, they may actually be better than the dual-NVENC cards.
Did you install the appropriate OpenCL ICD for tone mapping? Also, the iGPU being the best makes sense, since it has zero-copy capabilities. It just needs to map the buffer in order to use it; it doesn't need to map, enqueue a write, transfer, wait for the transfer to sync, run the tone mapping, wait for sync again, and read it back, with all the steps it took to write the data to the device (a rough sketch of the two paths is below).
Another consideration is the quality of the transcode. Not something super significant, but I've always found the High Quality preset on NVENC to be significantly better than anything Quick Sync can put out.
Thanks for the video! Really was just here for a preview of the A400 ❤
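For anyone curious what that zero-copy difference looks like in practice, here is a rough, hypothetical pyopencl sketch of the two buffer strategies described above: an explicit upload/readback path (what a discrete GPU has to do over PCIe) versus a USE_HOST_PTR mapping that lets an iGPU sharing system memory skip the copies. This is not Plex's or ffmpeg's actual code path, just an illustration.

```python
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

frame = np.zeros(3840 * 2160 * 3, dtype=np.uint16)  # stand-in for a decoded 4K frame

# Discrete-GPU style: explicit copies across PCIe before and after the kernel.
dev_buf = cl.Buffer(ctx, mf.READ_WRITE, size=frame.nbytes)
cl.enqueue_copy(queue, dev_buf, frame)      # host -> device upload
# ... tone-mapping kernel would run against dev_buf here ...
cl.enqueue_copy(queue, frame, dev_buf)      # device -> host readback
queue.finish()

# iGPU zero-copy style: the device works on host memory directly, so the
# "transfer" is just a map/unmap instead of two full copies.
shared_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.USE_HOST_PTR, hostbuf=frame)
# ... tone-mapping kernel would run against shared_buf here ...
mapped, _evt = cl.enqueue_map_buffer(
    queue, shared_buf, cl.map_flags.READ, 0, frame.shape, frame.dtype)
# On an iGPU, `mapped` aliases the original allocation; no PCIe transfer happens.
```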
I run a Quadro P2000 in a Proxmox Ubuntu VM, passed through, and transcoding works great. There was previously a kernel bug that would occasionally cause transcodes to fail on Nvidia GPUs, but I haven't had that happen recently. It performs well, but is somewhat limited by the 5GB of onboard memory on that card.
Thanks for the test. Now a Radxa X4 is the best alternative for me regarding transcoding; it's even cheaper than those dedicated GPUs, and the power consumption might even be better at around 5 watts.
From what I understand, decoding capability is based on the generation of the CPU or GPU. For example, Intel CPUs of one architecture will all have the same Quick Sync decoding, Nvidia GPUs of the same generation will have the same NVENC block, and the same goes for AMD VCE.
I paired an RX 470 8GB (it was cheap off eBay) first with a 3rd gen and then a 4th gen Xeon CPU. The 3rd gen picture was very dark but deep in color richness; still a decent picture. The 4th gen CPU was the exact opposite: bright, amazing color. I found this very surprising. To be honest, I didn't think there would be any difference, but to my surprise things really did improve a lot going from the 3rd to 4th gen Xeons.
This is great, thank you! I would also love to see this test on TrueNAS Scale running Plex in Docker with GPU passthrough enabled 😁
Thanks for the idea. It would be interesting to see how passthrough and Docker affect the performance of the GPUs.
@ElectronicsWizardry Agreed. My TrueNAS box is a 2700X with 64GB RAM, 4x 240GB SSDs in RAIDZ1 for Docker, and 8x 6TB HDDs in RAIDZ2 for media. I have gotten up to 12 streams. I need to test like you did to see the limit.
Got a Sparkle Intel Arc A310 ECO on TrueNAS ElectricEel-24.10-BETA.1 - works flawlessly with Plex on the new ElectricEel Docker setup.
Great video. As costly as gaming GPUs have gotten now, seeing these cards benchmarked for non-gaming (Plex) use is wonderful.
It's actually really nice to see how well the 12th+ generation iGPUs perform for Plex transcoding. It's nice to think that, for once, you don't have to go out and get an add-on card (if you even have a PCIe slot) to do the thing. It's rare in modern computing just to have random excellent extra functionality just sitting there.
(And I'm not just saying that because this video makes me feel a lot better about my i7-12700T and i5-1235U* Proxmox nodes.)
* Well, the 12700T has the UHD 770 you tested. The 1235U has … something with fewer execution units and a confusing name. (Xe … something something.) But still. I feel even better about having jumped into this when 12th gen made the most sense to build with. I wasn't even planning a Plex server at the time.
Yeah, it's impressive how good the Intel iGPUs are for this use case.
One idea this gave me was to compare the 12900K to an N100. They should be the same generation of iGPU, so I want to see if the encoding performance is affected by the slower memory, slower CPU, and fewer GPU cores.
I think the mobile chips often have a better iGPU than the desktop chips, as there's a market for a somewhat faster iGPU without needing the extra board space of a dGPU in a laptop. Intel also seems to give the mobile chips the newest generation first, which then trickles down to the desktop, possibly because the yields are better on the smaller dies.
I picked up a used Nvidia Tesla P4 from eBay in 2023 for $100 and it works great for transcoding my files for Plex. As a bonus, I can also use it for running my own AI with Ollama.
I should give Ollama a try on some different GPUs to see how they differ in this use case.
I use an N100 to run Jellyfin. I'd like to see other iGPU and dedicated graphics performance in Jellyfin if you're going to follow up.
I'd be interested in seeing this done with a low end AMD card. I kinda feel like for most home users any of these is sufficient so it comes down to which is the least hassle. Which should be the iGPU in my older Intel systems, but I haven't gotten them to work with Jellyfin and it wasn't a big enough issue for me to bother chasing down. lol
I should give Plex + Jellyfin a go on my AMD cards. I know they typically have worse support and quality, but would be interesting to see how they hold up.
VERY detailed video thank you so much
My Windows machine will likely rock an old 1700 budget build, with Ryzen 7000 for my next Linux build, cash being the main issue.
The only reason to go dGPU is AV1 encode; no iGPU bar Ryzen 8000 and newer (and Ryzen 7000 mobile) has it. All AM5 and LGA 1700 chips have really good 8K decoders.
I was looking for a comparison of these, kudos on the video. I would love to be able to see the actual FPS for each of the scenarios clearly (I see you have them displayed in an Excel file on the monitor, could you please share this Excel?), not just the power consumption. If I may add, some timestamps in the description would be highly appreciated in the future! I myself have a TrueNAS server and am doing decoding with the iGPU from the Intel 12400 I've thrown inside, but was wondering whether the A400/A380/A310 would be an upgrade, or if I'm better off sticking with the iGPU for Plex transcoding on TrueNAS. What do you think?
I added the Excel file I had up as a shared Google Doc in the description. I also added timestamps to the video; I made them when editing but just seem to have forgotten to add them to the video description.
From the data I saw, it's probably best to stick with the iGPU performance-wise. I'd probably only switch if you're having issues with the iGPU.
I never got the ASRock Intel Arc A380 working with Plex on Unraid 7. We need some good tutorials for transcoding with Plex on Unraid.
As far as I know, the A400 has only 1 decoder, compared to the 2 you have on the 3050 or A1000 (and any other Ampere GPU, really).
Now I wish I had one of those GPUs to try. The A400 is a pretty cut-down card feature-wise.
@ElectronicsWizardry I have an A2000 12GB, but I can't reproduce your test exactly. What I can tell you is that the drivers do work on Linux using Jellyfin, and I get more FPS than with my i7-8700 iGPU: 50% extra with tone mapping, around double without it, with a bit of variability.
Similar results on Windows.
Now, for me the i7 iGPU is more than enough (I rarely have more than 1 client needing a transcode at the same time, and I get 80-ish FPS in most cases, so there's a lot of headroom), and I use the A2000 for a different use case, but I tested the performance just out of curiosity. I don't have a power usage figure to give you though; I have too much stuff running to isolate the single effect as you did.
@12gark Thanks for sharing. I think your better GPU and older iGPU explain the difference. I'd still look toward a newer Intel iGPU as my first pick, but adding a GPU to an older system can often be an easier and cheaper upgrade than a new platform.
@ElectronicsWizardry Yeah, I absolutely agree, and even as old as my 8700 is, I can stream two 4K HDR files down to 1080p SDR with tone mapping with zero issues and very low power consumption. Newer ones can only do better.
The A2000 was never intended for transcoding in my case; I just played with it a bit and shared my results, especially considering your thoughts on RAM usage (the A2000 is small, but 12GB is far more than 4) and the different decoder.
I bought it to render my CAD projects off of my awfully loud laptop, as far from my ears as possible.
@ElectronicsWizardry P.S. Another system that would greatly benefit from an Intel or Nvidia GPU to increase decoding capacity is an AMD machine. A friend of mine has a Zen 2 APU in his homelab; he added an A310 and improved transcoding by a factor of 5. Wild difference, honestly.
Actually, even a 7900 XTX is outperformed by the cheapest Alchemist card in decoding and encoding, even without considering AV1.
For those who want to know: you can mod a single-slot cooler onto an A380.
Would love to see the Quadro P400 against these two. I got mine for $20 and it idles at around 14W.
Yeah, those cards have gotten cheaper since I last looked. If I see one around that $20 point, I'll pick one up and give it a try.
OMG, my little i5-8350 laptop with the Intel iGPU is the best xD and up to 35W under load at the wall :)
Gollum ! Gollum !
Since you got these 2 SFF GPUs, would you be interested in performance testing them, alongside with others if possible, with ollama / LLMs (aka AI at home)?
Yeah, that would be an interesting use case. It will be interesting to see what models run OK with the low VRAM of these GPUs.
Could you compare these to a P620?
I don't have one to compare, but they're pretty cheap on eBay so I might get one. The RTX A400 is a bit newer with slightly better quality, but it's probably not noticeable.
@ElectronicsWizardry I just bought an HP Z2 G4 that came with a P620, so I was just curious about the comparison. Odds are I'll end up getting an A2000 at some point when they go down in price.
You should have tested the most popular card, the Quadro P2000.
Yeah, the P2000 would have been an interesting comparison, especially since it's about the same price as what I got my RTX A400 for. I'd love to have every GPU for comparison, but I can only justify buying so many of them.
Fingers crossed the P1000 is best (because that's what I have).
Nope, went with the RTX A400. Lower end, but newer pro card.
WTF, are you a mind reader?! I've been wondering if it's time to get an Arc for my Plex server.
Try Firefox; I have issues with Chromium browsers not using GStreamer on Linux, while Firefox bakes the codecs in. However, Fedora ships the wrong Mesa by default, with only the free transcoding enabled and the non-free parts blocked. I run an N100 and an i3-12100 with the same UHD 730; encoding 1-4 streams is OK, above 4 it gets shitty, but the UHD 770 has more power, making them good for Plex servers. Also look at the iGPU and power draw, as some Intel chips are heavy on power: an i3 sits idle around 35W out of the box without power settings tuned, while the N100 runs 7-25W. My N100 is going to become my home server soon.
I'm baffled.
Why is the iGPU better than the dGPU? It makes no sense.
VRAM. Each transcode uses 250-500MB of memory lol
A400 costs 2x more than A380 in Czechia. 😅
I'm very opposed to transcoding 4K. I have a main Plex server with 1080p content that I share, and a Plex Docker instance with only 4K media for direct streaming that only my girlfriend and I have access to. While we are at the point where we can transcode 4K, I would argue we aren't at a point where we should, especially considering how hit-and-miss tone mapping is.
I'm curious, do you keep 2 copies of your media files, in 1080p and 4K, and only let you and your GF watch the 4K version? I think a method like this can make sense if transcoding limits get in the way.
oh no
The best solution is HandBrake / ffmpeg and preparing your media ahead of time for your use case. :)
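As a hypothetical example of that kind of preparation, here is a small Python wrapper around ffmpeg that tone maps a 4K HDR file down to 1080p SDR once, offline, so clients can direct-play it later. Filenames, CRF, and preset are made up, and the zscale filter requires an ffmpeg build with libzimg.

```python
import subprocess

# Commonly cited zscale/tonemap filter chain for HDR10 -> SDR (BT.709),
# followed by a downscale to 1080p.
TONEMAP = (
    "zscale=t=linear:npl=100,format=gbrpf32le,"
    "zscale=p=bt709,tonemap=tonemap=hable:desat=0,"
    "zscale=t=bt709:m=bt709:r=tv,format=yuv420p,"
    "scale=-2:1080"
)

cmd = [
    "ffmpeg", "-i", "movie_4k_hdr.mkv",
    "-map", "0:v:0", "-map", "0:a", "-map", "0:s?",   # main video, all audio, subs if present
    "-vf", TONEMAP,                                   # tone map + downscale
    "-c:v", "libx264", "-crf", "20", "-preset", "slow",
    "-c:a", "copy", "-c:s", "copy",                   # keep audio/subtitle streams as-is
    "movie_1080p_sdr.mkv",
]
subprocess.run(cmd, check=True)
```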