
[WIP] DDSopt & Texture Overhauls


z929669

Question


Recommended Posts


When trying the most recent versions of DDSopt, a system error message pops up:

"The program can't start because MSVCP110.dll is missing from your computer. Try reinstalling the program to fix this problem"

 

This happens with the pre4 and pre5 versions, but DDSopt v0.8.0 (preview III) works fine, and the error message doesn't appear.

 

I use a shared computer, so I am unsure whether something was deleted that caused this error.

Searching online, most recommendations were to use a registry cleaner, but I am having a hard time finding a good, free one to use.

 

Any advice would be really helpful and appreciated; I'm kinda stuck and unsure what step to take to fix this problem.
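
For what it's worth, MSVCP110.dll is the C++ runtime library that ships with the Microsoft Visual C++ 2012 Redistributable, so one quick way to see whether the file is actually absent is to check the standard Windows system directories. A minimal Python sketch (the paths assume a default Windows install):

import os

# MSVCP110.dll ships with the Visual C++ 2012 Redistributable.
# On 64-bit Windows, 64-bit DLLs live in System32 and 32-bit DLLs in SysWOW64.
for directory in (r"C:\Windows\System32", r"C:\Windows\SysWOW64"):
    dll = os.path.join(directory, "MSVCP110.dll")
    print(dll, "->", "found" if os.path.exists(dll) else "missing")

If both locations report "missing", reinstalling the redistributable (rather than running a registry cleaner) is the usual remedy.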



It is important to note that if anyone is running Xfire or SLI and using GPU-Z to assess, you will need to divide all memory figures by 2, as GPU-Z reports the additive memory allocation, even though GPU memory is functionally acting in parallel. The 2K packs after DDSopt use about 1,800/2 = 900 MB of VRAM (that includes optimized HRDLC underneath), so this works fine until a bunch more is added. With DDSopt, it may be possible to operate within the constraints of VRAM on 1 GB cards even with full STEP. I have yet to test, though, so stay tuned.

 

Also note that one can operate without stutter up to about 1.5x combined VRAM capacity, according to my testing. I will address my theory behind this in the guide as well. So this means that you ultimately want to keep your eye on that combined VRAM figure and keep it under 1,500 MB max for 1 GB cards.
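
To make the quoted arithmetic concrete, here is a minimal Python sketch; the divide-by-2 rule and the ~1.5x stutter ceiling come from the post above, while the function names and the 1,000 MB card size are just illustrative:

def effective_vram_mb(gpuz_reported_mb, gpu_count=2):
    # GPU-Z sums the per-GPU allocations under SLI/CrossFire, but each
    # GPU holds a mirrored copy, so divide by the number of GPUs.
    return gpuz_reported_mb / gpu_count

def stutter_headroom_mb(usage_mb, card_vram_mb, threshold=1.5):
    # Remaining budget before the ~1.5x combined-VRAM stutter point.
    return card_vram_mb * threshold - usage_mb

usage = effective_vram_mb(1800)          # 2K packs + optimized HRDLC
print(usage)                             # 900.0 MB actually in use
print(stutter_headroom_mb(usage, 1000))  # 600.0 MB of headroom on a 1 GB card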

I want to say that SLI and CrossFire cards have to keep copies of one another's VRAM, making their actual pool of VRAM resources closer to 1 GB, even if they have 2 GB of memory available (due to redundancy). I am not 100% sure on this... however, I am confident that I have read something before on the inefficiencies of SLI and CrossFire when it comes to memory management.

 

Some weak confirmation on this detail:

https://hardforum.com/showthread.php?t=1331011

 

I am a member of the OCN forums with some 5k posts... I will do a search in their news section (I know I have read at least a couple of articles on it) and post back if I can find something more concrete on the topic.




I want to say that SLI and CrossFire cards have to keep copies of one another's VRAM, making their actual pool of VRAM resources closer to 1 GB, even if they have 2 GB of memory available (due to redundancy).

That's exactly what z929669 said.




I have a theory, alluded to often on this and a few other threads, but I have failed to add it to the guide or to make it more visible on this thread. What you refer to is addressed by my suppositions as stated in this post (I will update the OP as well).


That theory is more or less correct, as I understand VRAM. VRAM operates very similarly to system memory, really. In Windows or Linux, the OS has a "swap" that plays the same role for main memory that main memory plays for VRAM. If you can keep the swapped portion small enough, it is usually OK to have VRAM full, with a portion of the rest in main memory.

 

I think, however, that the exact way VRAM works differs from system programs. Most of the video memory stored IS pertinent to whatever is going on in the game at the time. In a system program, all of the most frequently accessed data, the system stack, and the system heap are in main memory, and what tends to get pushed off to the hard disk is the content of files that happen to be loaded in memory at the time. VRAM has a stack and a heap as well, but the contents of VRAM are cycled through more often and more frequently than main memory, because video cards often need to deal with very large files nearly randomly. So 1.5x VRAM in memory would be very close to the threshold for acceptable performance; I think 1.1x or so would probably already be approaching the territory where frame rates are noticeably affected. I don't have any data to back this claim up, but I could direct you to some very good articles on graphics and graphics processing if you are curious and want to know more.
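
A toy model of that spill-over idea, just to put the ratios being discussed side by side; the 1.1x and 1.5x cut-offs are the rough estimates from this thread, not measured constants:

def overflow_ratio(working_set_mb, vram_mb):
    # Anything above 1.0 means part of the working set has spilled
    # from on-card VRAM into system RAM.
    return working_set_mb / vram_mb

for working_set in (900, 1100, 1500, 1800):
    ratio = overflow_ratio(working_set, 1000)
    if ratio <= 1.0:
        verdict = "fits entirely in VRAM"
    elif ratio <= 1.1:
        verdict = "slight spill; likely still smooth"
    elif ratio <= 1.5:
        verdict = "spilling; stutter increasingly likely"
    else:
        verdict = "well past the suggested ceiling"
    print("%4d MB on a 1000 MB card -> %.2fx (%s)" % (working_set, ratio, verdict))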




My theoretical assessment is based on a lot of real-world testing and info submitted by others. The point is that the actual perceived threshold is larger than the one imposed by the GPU's on-card VRAM capacity. Whether it is closer to 1.1x or 1.5x remains to be empirically determined, preferably on a number of different systems, both single- and multi-GPU.

 

If you reference the performance data (note the table footnotes to the column headers), you will see evidence of the actual memory-management behavior between dedicated and dynamic VRAM. System RAM interaction is the black box, but the data in the tables suggest that 1.1x is pretty safe, and I am stipulating that as one approaches 1.5x, stuttering will become more and more noticeable. I assume that we would see stuttering something less than half of the time at 1.5x ... how much less depends upon the sophistication of VRAM/RAM memory management, which will depend on a lot of things.

 

Theories are interesting, but only real test data provides the answers ;)



I appreciate your data link. Very interesting. A question, though: did you optimize those textures at all? Very few of them ever exceeded 1,000 MB, and if they were not optimized, a very light run of any one of the texture-optimizer mods could bring them under that. I have used this mod on maximum-strength runs and found it both very effective and not at all detrimental to quality.

 

https://skyrim.nexusmods.com/mods/12801 (I am sure you are aware of this tool but w.e.)

 

On the subject of testing the effects of VRAM overflow on performance, I think that is problematic. For one, any time you are close to exceeding maximum VRAM, you risk the "VRAM burst crash", so it can never really be recommended. Another problem is that performance is going to vary dramatically depending on what kind of scene you are surveying. A place that uses intense shaders, geometry, or particle effects is going to have greater performance costs but not necessarily high VRAM usage. The only way to isolate the variable would be to create an artificial scene, which in itself would make the benchmark unrepresentative of real gameplay. Another problem is that software that monitors VRAM usage doesn't give you any information about system memory. If a program is reporting 95% VRAM usage, that doesn't mean there isn't 100 MB or more of cache in main memory. It would be very difficult to determine how much of the other textures are in main memory and therefore to get a baseline for how that ratio affects performance.

 

I think, in general, it is best to keep VRAM usage within 90-95% of your total VRAM capacity, as that is the only way to provide any kind of guarantee of stability and performance.
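
As a quick worked example of that 90-95% window (the percentages are the ones suggested above; the helper itself is illustrative):

def safe_vram_budget_mb(vram_mb, low=0.90, high=0.95):
    # The conservative window suggested above: 90-95% of on-card VRAM.
    return vram_mb * low, vram_mb * high

print(safe_vram_budget_mb(1024))  # (921.6, 972.8) MB for a 1 GB card
print(safe_vram_budget_mb(512))   # (460.8, 486.4) MB for a 512 MB card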




Optimizer Textures is much less powerful and adaptable than DDSopt (read through this thread some more). Also, you don't want to constrain textures to less than 1K (if I understand your question); that results in stripping out the highest mip levels and definitely affects quality (again, there is much info to find on this thread, so I won't go into it here).
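
To put rough numbers on the mip-level point: each top mip level that gets stripped removes the finest detail and cuts the texture's memory by roughly 4x. A minimal sketch, assuming DXT5 block compression at about 1 byte per pixel (DXT1 would be roughly half that); this is my own illustration, not DDSopt's actual accounting:

def mip_chain_mb(width, height, bytes_per_pixel=1.0):
    # Sum the size of the full mip chain down to 1x1.
    total = 0.0
    while True:
        total += width * height * bytes_per_pixel
        if width == 1 and height == 1:
            break
        width, height = max(width // 2, 1), max(height // 2, 1)
    return total / (1024 * 1024)

print(mip_chain_mb(2048, 2048))  # ~5.33 MB with a full mip chain
print(mip_chain_mb(1024, 1024))  # ~1.33 MB: top mip gone, ~4x smaller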


Skyrim Performance Monitor provides data on VRAM and RAM; however, I am as yet uncertain whether its RAM figure means system RAM or dynamic VRAM.

 

Again, real data is far more dependable than speculation, and my testing has revealed that it would be overly conservative to constrain dedicated VRAM to anything less than 100% of available on-card VRAM ... at least on Sandy Bridge systems using ATI HD 6xxx and up. Take a look at the performance data again and note that none of those values causes any stuttering on my 1 GB VRAM system. The data and methodology speak for themselves, I think :yes:



Lol. I am in a thread about a texture optimizer, recommending another texture optimizer! Running the program now... I haven't gotten to test it in-game yet, but it is compressing textures further than the tool I was using. Pretty cool.

 

On the performance question, I just want to point out that the whole topic is purely academic and speculative. Real-world results for your system are going to differ from another person's and will be very dependent on the architecture of the GPUs, the sophistication of the drivers, and other factors. My biggest concern with overloading VRAM is the VRAM bursting problem. I don't know of a way to prevent it, and if there is a scene where the textures required to render it exceed what the video card has in memory, then a VRAM burst is possible. I don't think, using any popular texture packs for Skyrim, that a VRAM burst is very likely on modern GPUs with 1 GB of memory, especially using texture-optimizer programs like DDSopt or Optimizer Textures. For people using 512 MB cards, though, I can't say it's a good idea to overload your VRAM, mostly because of the VRAM burst issue.

