Benchmarks for various NVIDIA graphics cards?
-
- Posts: 724
- Joined: Sun Sep 14, 2014 8:15 am
- Location: UK
- Contact:
Benchmarks for various NVIDIA graphics cards?
I need to replace a dead GPU in a Windows desktop PC. There are a huge variety of used GPUs available and pricing tends to reflect the gaming/mining performance, but I only care about the performance in Magic. Many of my scenes have their refinement limited by my wish to preserve a 60Hz real-time framerate.
Are there any GPU specs (eg CUDA count) or benchmark suites/sites that give a Magic-relevant comparison of different GPUs?
Re: Benchmarks for various NVIDIA graphics cards?
Sorry to hear your GPU died, especially in the current market. I don't know of any Magic-specific benchmarks, but that shouldn't stop us from making one and having folks here help build a GPU ranking. Resolume took the approach of counting how many layers of video (noisy and not) a system could play, and gathered a matrix of machine specs (CPU, GPU, SSD, RAM) reporting the maximum number of layers at 30 fps. Not too many surprises there, though: the better the system, the better the performance.
We (should) all know that one can easily kill our frame rates with over-complicated or poorly constructed scenes, or just by pushing the limits - I know I do - and sometimes it doesn't matter, if you're only rendering to disk or your projector is only 30 fps.
We would need a benchmark that tested 3D, shaders, video, layers, scene caching and a few other things. It might be a challenge to distil all that into a single metric that would help choose a GPU.
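One way to tackle that single-metric problem is to run each test scene, record its FPS, and combine the numbers with a geometric mean so no single very fast test dominates the ranking. This is purely a sketch of the idea, not anything Magic provides; the test names and numbers are hypothetical.

```python
from math import prod

def geometric_mean_score(fps_results):
    """Combine per-test FPS numbers into one score using the
    geometric mean, so no single very fast test dominates."""
    values = list(fps_results.values())
    return prod(values) ** (1.0 / len(values))

# Hypothetical FPS results for one GPU across three test scenes:
gpu_a = {"3d": 120.0, "shaders": 60.0, "video": 240.0}
print(round(geometric_mean_score(gpu_a), 1))  # prints 120.0
```

A geometric mean also has the nice property that a GPU twice as fast on every test scores exactly twice as high, regardless of how the individual tests are weighted.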
Re: Benchmarks for various NVIDIA graphics cards?
Fortunately the GPU was a lowly GTX 750 4GB card in an old spare PC - not a great loss. I wasn't expecting a Magic-specific benchmark, but was wondering if CUDA or similar benchmarks would correlate well with Magic performance.
As most of my work is real-time and audio- or video-responsive, I like to keep my frame rate at 60Hz to minimise latency. I drop spatial resolution rather than frame rate if a scene is still too heavy despite optimisations.
Thinking further about a dedicated Magic benchmark project, different scenes could measure different aspects of performance. This would give several metrics per GPU and users could choose based on the scenes/metrics most relevant to their work. By "scene caching" do you mean the time taken to load a scene? It's an important metric, but I can't figure out how it could be quantified.
Re: Benchmarks for various NVIDIA graphics cards?
I guess Eric knows best which modules would be most representative (processing- and memory-wise) for a proper GPU test, so maybe he could create a project (perhaps from the sample projects that are already there), and as the measurement we could use the time it takes to render the project at a specific resolution to disk.
Of course this would also measure disk I/O, but in real life you have that anyway (and nowadays 'normal' disks are already quite fast at writing one large file).
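For a rough sense of whether disk I/O would actually bottleneck a render-to-disk benchmark, here is the back-of-envelope bandwidth needed to write 1080p60 completely uncompressed (real renders are encoded, so the actual write rate is far lower):

```python
# Back-of-envelope disk bandwidth for a render-to-disk benchmark if
# frames were written completely uncompressed (real renders are
# encoded, so actual write rates are far lower).
width, height = 1920, 1080
bytes_per_pixel = 4                     # RGBA, 8 bits per channel
fps = 60
bytes_per_second = width * height * bytes_per_pixel * fps
print(f"{bytes_per_second / 1e6:.0f} MB/s")  # prints 498 MB/s
```

Even that uncompressed worst case is within reach of an ordinary SATA SSD, which supports the point that disk speed is unlikely to dominate the measurement.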
Re: Benchmarks for various NVIDIA graphics cards?
TKS wrote: ... and as measurement we could use the time it needs to render the project in specific resolution to disk...
Disk speed aside, rendering to disk uses additional CPU (and perhaps GPU) resources, so it is not a measure of just the GPU's real-time performance. There would need to be some means of correcting for the encoding time, and I think this would need changes within Magic.
I think real-time FPS measurement(s), as commonly used in gaming benchmarks, would give a much more meaningful and repeatable result. That said, an additional measure of encoding performance could be useful, but it would have to be qualified with details of the full system used (CPU, RAM etc.) plus perhaps the codec version.
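Gaming-style benchmarks typically report an average FPS plus a "1% low" figure computed from per-frame render times, which is far more repeatable than reading an on-screen FPS counter. A minimal sketch of that reduction (nothing Magic exposes today, just illustrating the idea):

```python
def summarize_frame_times(frame_times_s):
    """Summarise per-frame render times (seconds) the way gaming
    benchmarks do: average FPS plus a '1% low' FPS computed from
    the slowest 1% of frames."""
    avg_fps = len(frame_times_s) / sum(frame_times_s)
    slowest = sorted(frame_times_s, reverse=True)
    worst_1pct = slowest[: max(1, len(slowest) // 100)]
    low_fps = len(worst_1pct) / sum(worst_1pct)
    return avg_fps, low_fps

# Synthetic data: 98 frames at 60 fps plus two stutters at 30 fps.
times = [1 / 60] * 98 + [1 / 30] * 2
avg, low = summarize_frame_times(times)
print(round(avg, 1), round(low, 1))
```

Note that the average is computed from total time, not by averaging instantaneous FPS values, which would over-weight the fast frames.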
Re: Benchmarks for various NVIDIA graphics cards?
Terry Payman wrote: I think real-time FPS measurement(s), as commonly used in gaming benchmarks, would give a much more meaningful and repeatable result.
I completely agree, but the more accurate (and therefore more time-intensive to implement) it has to be, the less likely it is to show up at all.

(And, BTW, I have tested lots of different GPUs, and for non-trivial projects the GPU has by far the biggest influence on rendering time. It also scales quite well with the number of GPUs you put in your computer.)
Re: Benchmarks for various NVIDIA graphics cards?
Terry Payman wrote: By "scene caching" do you mean the time taken to load a scene? It's an important metric, but I can't figure out how it could be quantified.
I meant 'Keep Scene in Memory', which has an impact on scene loading times - possibly affected by the GPU, PCIe version or memory speed - but you're right, perhaps not all that useful for the average scene. GPU memory is a factor here, though.
Re: Benchmarks for various NVIDIA graphics cards?
A good place to start would be:
1) Choosing a few specific shaders that come with Magic, preferably generative (not effect) shaders so they can be run by themselves.
2) Setting Magic to 1920x1080 resolution so that this part is standardized.
3) Disabling vertical sync and closing all other apps to get a true read of the frame rate.
4) Listing results as: what GPU you have, what CPU you have, and what your FPS result was with each shader.
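To pool everyone's step-4 results, a simple shared CSV would do; the column names here are just a suggested convention, and the sample row is hypothetical.

```python
import csv
import io

# Hypothetical row format for pooling step-4 results:
# one row per (GPU, CPU, shader) with the measured FPS.
FIELDS = ["gpu", "cpu", "shader", "fps"]

def rows_to_csv(rows):
    """Serialize result dicts to CSV text for a shared spreadsheet."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

results = [
    {"gpu": "Example GPU", "cpu": "Example CPU",
     "shader": "GLSL tube5", "fps": 59.4},
]
print(rows_to_csv(results))
```

Keeping one row per shader (rather than one wide row per machine) makes it easy to add new test shaders later without reshuffling everyone's columns.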
Re: Benchmarks for various NVIDIA graphics cards?
Thanks Eric
Very helpful suggestions, which inspired an evening of experimentation; project attached and results linked. I hope that stepping through the playlist scenes in turn, together with the scene naming, module annotations and the notes below, will be sufficient explanation. I'm too tired to add more as it's way past my bedtime.
With one exception (GLSL grid 4), all the shaders I tried ran so fast that the FPS could not be measured consistently - the reading jumped around. I decided to try the Iterator module to effectively multiply the instances of each shader under test. This worked perfectly on the effect shaders but not on the generative shaders, so I only have good measurements for effects. For my own projects this is fine: I usually have only a few generative shaders in a project (each in its own scene) and feed these few sources into many scenes, each of which has many effect shaders.
I'm wondering whether generative shaders render only once, with that output then used repeatedly as the source for the iterated effect (this would be my preference). Unfortunately this benefit is not available to "generative" shaders that need the AudioToImage module as an input, as they are treated as effects and are iterated unnecessarily, at great cost to frame rate.
Out-of-place feature request: it would be awesome if an arbitrary chain of modules could be treated as generative, perhaps by feeding its output via a "Stop Iteration" module into the iterated effects. A complementary "Enable Iteration" module might also be interesting, as it would facilitate accurate speed measurements of generative shaders.
My attached prototype benchmark project uses the Iterator module to measure the effect of 100 passes through each effect shader. This gives a low and stable reading. Here's a link to my first set of results https://docs.google.com/spreadsheets/d/ ... sp=sharing
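Given the 100-pass Iterator approach, the measured FPS can be converted into an approximate per-pass cost for each effect. This is a sketch under the simplifying assumption that the iterated passes dominate the frame; any fixed per-frame overhead is ignored unless supplied.

```python
def per_pass_ms(measured_fps, iterations=100, overhead_ms=0.0):
    """Approximate cost in milliseconds of one effect pass, derived
    from the FPS measured with the effect repeated `iterations`
    times via the Iterator module.  Assumes the iterated passes
    dominate the frame; any fixed per-frame overhead can be given
    in overhead_ms, otherwise it is treated as negligible."""
    frame_ms = 1000.0 / measured_fps
    return (frame_ms - overhead_ms) / iterations

# A scene reading 50 FPS with 100 iterated passes:
print(per_pass_ms(50.0))  # prints 0.2
```

Expressing results as per-pass milliseconds rather than FPS also makes costs additive, so you can estimate a whole effect chain by summing its passes.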
A means is needed whereby individual contributors can add or update their own results while all other entries remain read-only.

- Attachments
-
- Magic Benchmarks 0.1.magic
- A prototype benchmarking framework - all suggestions and corrections welcomed!
- (5.2 KiB) Downloaded 1902 times
Re: Benchmarks for various NVIDIA graphics cards?
Thanks Terry. What was your Magic throttling set to?
Recommend adding shader tube5.txt - would like to hear how M1 Max and 30 series cope with this.
Re: Benchmarks for various NVIDIA graphics cards?
Ah yes, forgot to say that throttling should be set to 0.
Re: Benchmarks for various NVIDIA graphics cards?
Thanks Sadler and Eric for spotting the throttling issue, which I'd entirely overlooked.
Setting it to 0 gave a huge uplift in the FPS for the faster shaders.
I've re-done my spreadsheet results (link repeated below for convenience). I have one more GTX 1070 PC to add.
Additional contributions welcomed! I'd be most interested to see a few RTX 2080 results, as well as RTX 30xx.
https://docs.google.com/spreadsheets/d/ ... sp=sharing
I've attached a new version (0.11) of the benchmark project, having added tube5 (thanks Sadler).
These benchmarks aside, M1 and RTX 30xx are interesting with their potential for faster loading from system memory. M1 with its unified memory, and RTX 30xx with Resizable BAR (if NVIDIA can be persuaded to allow Magic to use it).

- Attachments
-
- Magic Benchmarks 0.11.magic
- Slightly expanded benchmarks. Thanks Sadler!
- (5.61 KiB) Downloaded 1900 times
Re: Benchmarks for various NVIDIA graphics cards?
I guess someone has to start to get this rolling:

Code: Select all
Benchmark-Results for:
Intel Core i7-4790K 4000.0 MHz
32 GB DDR3 RAM
NVIDIA GeForce RTX 3070 Ti 8 GB GDDR6X
GLSL default: 2578.3
GLSL grid4: 114.2
GLSL tube5: 59.4
ISF Solid Color: 2533.9
Image: 2824.9
Text: 3430.1
Waveform: 3341.4
Generator: 3490.3
Harness 1: 951.0
Harness 2: 711.3
Iterator: 1355.2
HueSaturation: 166.7
MultiPass Gaussian Blur: 42.5
Replicate: 90.0
RGBtoHSVtoRGB: 87.4
RGBA Swap: 150.2
3D Rotate: 147.9
Contrast: 175.2
Invert: 174.9
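Results posted as "name: value" lines like the above could be scraped straight into the spreadsheet. A small parsing sketch that skips the system-description header lines automatically:

```python
def parse_results(text):
    """Parse 'name: value' benchmark lines into a dict of floats,
    silently skipping lines without a numeric value after a colon."""
    results = {}
    for line in text.splitlines():
        name, sep, value = line.rpartition(":")
        if not sep:
            continue  # no colon at all, e.g. a CPU/RAM header line
        try:
            results[name.strip()] = float(value)
        except ValueError:
            continue  # colon but no number, e.g. 'Benchmark-Results for:'
    return results

sample = "GLSL grid4: 114.2\nGLSL tube5: 59.4"
print(parse_results(sample))
```

Using `rpartition` rather than `split(":")` keeps test names that themselves contain colons intact.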
Re: Benchmarks for various NVIDIA graphics cards?
This should paste into a column more easily for you...
Code: Select all
System X
2DV
Clevo P775TM1
Windows 10 Home
v2.31
0.11
i7-9700K @3.6GHz, 16GB DDR3 1600MHz
RTX 2080 8GB Mobile
FPS
2392...1670 (CPU throttling?)
80
32
2500
2730
2730
2720
2650
1080
820
1580
105
36
22
55
104
106
107
97
Re: Benchmarks for various NVIDIA graphics cards?
Many thanks TKS and Sadler
I've entered your results into the spreadsheet in GPU generation order (thanks Sadler for the easily pasted column).
https://docs.google.com/spreadsheets/d/ ... sp=sharing
It seems that only the "heavy" benchmarks (say FPS below 300) are meaningful, scaling progressively with increasing GPU power.
Very interesting to see the significant speed improvements: GTX 1070 > RTX 2080 > RTX 3070 Ti.
It would be good to have some further results for different model GPUs within the same generation.

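That "heavy benchmarks only" rule of thumb is easy to automate when comparing two result columns: keep only the tests where the baseline stays below roughly 300 FPS and report speedup ratios. The numbers below are excerpts from the GTX 970 and RTX 3070 Ti results in this thread.

```python
def relative_speedups(baseline, other, heavy_fps=300.0):
    """Compare two GPUs' results (dicts of test name -> FPS), using
    only the 'heavy' tests where the baseline stays below heavy_fps,
    since very high readings are noisy and overhead-bound."""
    common = [t for t in baseline if t in other and baseline[t] < heavy_fps]
    return {t: round(other[t] / baseline[t], 2) for t in common}

# Excerpts from the GTX 970 and RTX 3070 Ti columns in this thread:
gtx970 = {"GLSL grid4": 27.0, "GLSL tube5": 11.0, "Image": 3010.0}
rtx3070ti = {"GLSL grid4": 114.2, "GLSL tube5": 59.4, "Image": 2824.9}
print(relative_speedups(gtx970, rtx3070ti))
```

The "Image" entry is dropped automatically because both cards exceed the heavy-test threshold there, matching the observation that only the sub-300 FPS tests scale with GPU power.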
Re: Benchmarks for various NVIDIA graphics cards?
It only recently occurred to me that I also have a desktop PC with a GTX 970 in it. The numbers are surprisingly disappointing, but not unusable given a little care. Here is its benchmark...
AMD Ryzen 7 2700X Eight-Core Processor, 3700 Mhz
Windows 10 Home
GTX970 3.5GB
880
27
11
2700
3010
3020
3020
3020
1000
810
1520
43
9
6
22
40
47
47
45
Re: Benchmarks for various NVIDIA graphics cards?
Thanks Sadler
This is a very helpful addition to the table as the GTX 970 cards have a useful amount of graphics memory and are much cheaper than later generations.

Re: Benchmarks for various NVIDIA graphics cards?
Is there a way to set the "Magic Window" to be the same size as the fixed resolution? At the moment, depending on how big that window is, there can be a big difference in FPS (over 1000 FPS in some cases), which will skew the results.
Re: Benchmarks for various NVIDIA graphics cards?
PoohBear wrote: Is there a way to set the "Magic Window" to be the same size as the fixed resolution? As at the moment, depending on how big that window is, there can be a big difference in FPS (over 1000 FPS in some cases), which will skew the results.
There's no way to independently adjust the resolution of the Magic window, but I'm sure Eric has optimised its rendering to minimise the effect on frame rate.
Very many thanks for your observation - I hadn't noticed this effect before. A quick check suggests that on certain scenes the Magic window size does have a small but significant effect at practical frame rates (30-60 fps). Other scenes showed no detectable effect at all, even at similar frame rates. I'll need to spend some time investigating.
Re: Benchmarks for various NVIDIA graphics cards?
Yes, it is true that the Magic Window size does matter a little bit. To standardize this across different computers, the desktop resolution would have to be set the same on all of them, i.e., 1920x1080.
Re: Benchmarks for various NVIDIA graphics cards?
Eric wrote: Yes, it is true that the Magic Window size does matter a little bit. To standardize this across different computers, the desktop resolution would have to be set the same on all of them, i.e., 1920x1080.
Thanks Eric

Additionally, I'll specify the Magic window at 50% of screen height for ease of judgement, and I'll attempt to quantify the effect of different desktop/display resolutions.