z929669 Posted April 25, 2012
Discussion thread: DDSopt Guide by STEP (Wiki Link)
GET DDSopt: Github Pre-release versions | Official Nexus versions (select pre-release update 4)
alt3rn1ty Posted October 28, 2012
The download includes an HTML link to the required MSVC redistributable: https://www.microsoft.com/en-us/download/details.aspx?id=30679. Choose the appropriate one for your operating system (32-bit or 64-bit).
frihyland Posted October 28, 2012
Quote: "When trying the most recent versions of DDSopt, a system error message pops up: 'The program can't start because MSVCP110.dll is missing from your computer. Try reinstalling the program to fix this problem.' This happens on the pre4 and pre5 versions, but DDSopt v0.8.0 (preview III) works fine and this error message doesn't appear. I use a shared computer, so I'm unsure if something was deleted to cause this error. Searching online, most recommendations were to use a registry cleaner, but I'm having a hard time finding a good/free one to use. Any advice would be really helpful and appreciated; I'm kinda stuck and unsure what step to take to fix this problem."
Download and install the redistributable linked above.
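A quick way to confirm whether the Visual C++ 2012 runtime that DDSopt pre4/pre5 needs is actually installed is to look for msvcp110.dll in the Windows system directories. Below is a minimal sketch assuming a standard Windows install layout; the script is illustrative only and not part of DDSopt.

```python
# Minimal sketch: check whether the MSVC 2012 runtime DLL (msvcp110.dll)
# that DDSopt pre4/pre5 needs is present. Assumes a standard Windows layout:
# System32 holds the 64-bit DLLs on x64 Windows, SysWOW64 holds the 32-bit ones.
import os

def find_msvcp110():
    windir = os.environ.get("SystemRoot", r"C:\Windows")
    candidates = [
        os.path.join(windir, "System32", "msvcp110.dll"),
        os.path.join(windir, "SysWOW64", "msvcp110.dll"),
    ]
    return [p for p in candidates if os.path.isfile(p)]

if __name__ == "__main__":
    found = find_msvcp110()
    if found:
        print("MSVC 2012 runtime found:", *found, sep="\n  ")
    else:
        print("msvcp110.dll not found; install the VC++ 2012 redistributable.")
```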
gwoody79 Posted October 29, 2012
Thanks for your help guys, that worked. Saved me from probably breaking my computer by doing stuff in the registry or reformatting. And thank you, Ethatron, for this great utility.
JudgmentJay Posted November 2, 2012
Anyone have any more experience with pre5b? Would it be advisable to use the recommended pre3 settings referenced at the start of this thread and leave the new options at default?
Besidilo Posted November 2, 2012
Quote: "Anyone have any more experience with pre5b? Would it be advisable to use the recommended pre3 settings referenced at the start of this thread and leave the new options at default?"
I was wondering about this as well.
z929669 (Author) Posted November 2, 2012
RE: Pre 5b settings. I have not had an opportunity to test yet, but I would use the settings as I described on this thread, extrapolating as closely as possible given the added options. Leave any option not present in Pre 3 at its default.
Syrophir Posted November 11, 2012
Thanks for the great tool and information.
mothergoose729 Posted November 12, 2012
Quote (z929669): "It is important to note that if anyone is running Xfire or SLI and using GPU-Z to assess, you will need to divide all memory figures by 2, as GPU-Z reports the additive memory allocation, even though GPU memory is functionally acting in parallel. The 2k packs, after DDSopt, use about 1,800/2 = 900 MB of VRAM (that includes optimized HRDLC underneath), so this works fine until a bunch more is added. With DDSopt, it may be possible to operate within the constraints of VRAM on 1 GB cards even with full STEP. I have yet to test though, so stay tuned. Also note that one can operate without stutter up to about 1.5x combined VRAM capacity according to my testing. I will address my theory behind this fact in the guide as well. So, this means that you ultimately want to keep your eye on that combined VRAM figure and keep it under 1,500 MB max for 1 GB cards."
I want to say that SLI and CrossFire cards have to keep copies of one another's VRAM, making their actual pool of VRAM resources closer to 1 GB even if they have 2 GB of memory available (due to redundancy). I am not 100% sure on this; however, I am confident I have read something before on the inefficiencies of SLI and CrossFire when it comes to memory management. Some weak confirmation on this detail: https://hardforum.com/showthread.php?t=1331011 I am a member of the OCN forums with some 5k posts; I will do a search in their news section (I know I have read at least a couple of articles on it) and post back if I can find something more concrete on the topic.
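The arithmetic in the quoted post is easy to get backwards when reading GPU-Z on a multi-GPU setup, so here is a minimal sketch of the correction being described. The function name and the example figures (1,800 MB reported, 2 GPUs, a 1 GB card) are illustrative assumptions, not output from GPU-Z itself.

```python
# Minimal sketch of the GPU-Z correction described above: on SLI/CrossFire,
# GPU-Z reports the additive allocation across GPUs, but the textures are
# mirrored on each card, so the effective per-card usage is reported / n_gpus.
def effective_vram_mb(reported_mb, n_gpus=2):
    return reported_mb / n_gpus

reported = 1800            # MB shown by GPU-Z for the 2k packs after DDSopt
per_card = effective_vram_mb(reported, n_gpus=2)
print(f"Effective per-card VRAM use: {per_card:.0f} MB")    # -> 900 MB

# Rule of thumb from the post: stutter-free up to roughly 1.5x on-card capacity
# (the post rounds this to "under 1,500 MB" for a 1 GB card).
card_capacity = 1024       # MB on a 1 GB card
print(f"Suggested ceiling: {1.5 * card_capacity:.0f} MB")   # -> ~1536 MB
```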
JudgmentJay Posted November 12, 2012
[Quoting the z929669 post above on dividing GPU-Z figures by 2, and mothergoose729's reply that SLI/CrossFire cards have to keep copies of one another's VRAM]
That's exactly what z929669 said.
z929669 (Author) Posted November 12, 2012
[Quoting mothergoose729's reply above on SLI/CrossFire VRAM redundancy]
I have a theory, alluded to often on this and a few other threads, but I have failed to add it to the guide or to make it more visible on this thread. What you refer to is addressed by my suppositions as stated in this post (I will update the OP as well).
mothergoose729 Posted November 12, 2012
That theory is more or less correct as I understand VRAM. VRAM operates very similarly to system memory. In Windows or Linux you have a "swap" that plays the same role for system memory that system memory plays for VRAM. If you can keep the swapped portion small enough, it is usually OK to have VRAM full with a portion of the rest in main memory. I think, however, that the exact way VRAM works differs from system programs. Most of the video memory stored IS pertinent to whatever is going on in the game at the time. In a system program, all of the most frequently accessed data, the stack, and the heap are in main memory, and what tends to get pushed off to the hard disk is the content of files loaded in memory at the time. VRAM has a stack and a heap as well, but the contents of VRAM are cycled through more often than main memory, because video cards often need to deal with very large files nearly randomly. So 1.5x VRAM in memory would be very close to the threshold for acceptable performance; I think around 1.1x VRAM would probably be approaching the territory where frame rates are noticeably affected. I don't have any data to back this claim up, but I could direct you to some very good articles on graphics and graphics processing if you are curious and want to know more.
z929669 (Author) Posted November 12, 2012
[Quoting mothergoose729's post above on VRAM behaving like system memory]
My theoretical assessment is based on a lot of real-world testing and info submitted by others. The point is that the perceived threshold imposed by the GPU's VRAM capacity is actually larger than the on-card capacity itself. Whether it is closer to 1.1x or 1.5x remains to be empirically determined, preferably on a number of different systems, both single- and multi-GPU. If you reference the performance data (note the table footnotes to the column headers), you will see evidence of the actual memory-management behavior between dedicated and dynamic VRAM. System RAM interaction is the black box, but the data in the tables suggest that 1.1x is pretty safe, and I am stipulating that as one approaches 1.5x, stuttering will become more and more noticeable. I assume that we would see stuttering somewhat less than half of the time at 1.5x; how much less depends upon the sophistication of VRAM/RAM memory management, which will depend on a lot of things. Theories are interesting, but only real test data provides the answers ;)
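To make the 1.1x/1.5x discussion concrete, here is a small sketch that turns a card's on-board VRAM into the "safe" and "stutter-likely" combined-VRAM figures being debated. The multipliers are the hedged estimates from the posts above, not measured constants, and the card sizes are just examples.

```python
# Estimated combined-VRAM thresholds from the discussion above.
# 1.1x is treated as "pretty safe"; approaching 1.5x is where stutter is
# expected to become noticeable. Both multipliers are speculative estimates
# from this thread, not measured constants.
SAFE_FACTOR = 1.1
STUTTER_FACTOR = 1.5

def vram_thresholds(card_mb):
    return {
        "on-card": card_mb,
        "safe (~1.1x)": round(card_mb * SAFE_FACTOR),
        "stutter likely (~1.5x)": round(card_mb * STUTTER_FACTOR),
    }

for card in (512, 1024, 2048):   # example card sizes in MB
    print(f"{card} MB card -> {vram_thresholds(card)}")
```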
mothergoose729 Posted November 12, 2012
I appreciate your data link. Very interesting. Question though: did you optimize those textures at all? Very few of them ever exceeded 1k MB; if they were not optimized, a very light run of any one of the texture optimizer mods could bring it under 1k. I have used this mod on maximum-strength runs and found it both very effective and not detrimental at all to quality: https://skyrim.nexusmods.com/mods/12801 (I am sure you are aware of this tool, but w.e.)
On the subject of testing the effects of VRAM overflow on performance, I think that is problematic. For one, any time you are close to exceeding maximum VRAM you risk the "VRAM burst crash", so it can never really be recommended. Another problem is that performance is going to vary dramatically depending on what kind of scene you are surveying. A place that uses intense shaders, geometry, or particle effects is going to have greater performance costs but not necessarily high VRAM usage. The only way to isolate the variable would be to create an artificial scene, which in itself would make the benchmark unrepresentative of real gameplay. Another problem is that software monitoring VRAM usage doesn't give you any information about system memory. If a program is reporting 95% VRAM usage, that doesn't mean there isn't 100 MB or more of cache in main memory. It would be very difficult to determine how much of the other textures are in main memory and therefore get a baseline for how that ratio affects performance. I think, in general, it is best to keep VRAM usage within 90-95% of your total VRAM capacity, as that is the only way to provide any kind of guarantee of stability and performance.
z929669 (Author) Posted November 13, 2012
[Quoting the question above about optimizing the textures and the Optimizer Textures mod]
Optimizer Textures is much less powerful and adaptable than DDSopt (read through this thread some more). Also, you don't want to constrain textures to less than 1k (if I understand your question); that results in stripping out the highest mip levels and definitely affects quality (again, there is much info to find on this thread, so I won't go into it here).
[Quoting the concerns above about testing VRAM overflow]
Skyrim Performance Monitor provides data on VRAM and RAM; however, I am as yet uncertain whether RAM means system RAM or dynamic VRAM. Again, real data is far more dependable than speculation, and my testing has revealed that it would be overly conservative to constrain dedicated VRAM to anything less than 100% of available on-card VRAM, at least on Sandy Bridge systems using ATI HD6xxx and up. Take a look at the performance data again and note that none of these values cause any stuttering on my 1 GB VRAM system. The data and methodology speak for themselves, I think.
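As a rough illustration of why stripping the top mip level (forcing a 2048x2048 texture down to 1024x1024) trades so much detail for memory, here is a back-of-the-envelope calculator for DXT-compressed texture sizes with full mip chains. The block sizes are the standard ones for DXT1/DXT5; treating "optimization" as simply dropping the top level is a simplification of what DDSopt actually does and is shown only for the arithmetic.

```python
# Rough sketch: VRAM footprint of a square DXT-compressed texture with a full
# mip chain. DXT1 uses 8 bytes per 4x4 block, DXT5 uses 16. Dropping the top
# mip level cuts the footprint to about a quarter, which is why downsizing
# saves so much VRAM and why it visibly costs detail.
def dxt_size_bytes(side, bytes_per_block=16, with_mips=True):
    total, s = 0, side
    while s >= 1:
        blocks = max(1, s // 4) ** 2
        total += blocks * bytes_per_block
        if not with_mips:
            break
        s //= 2
    return total

full = dxt_size_bytes(2048)      # 2k texture, DXT5, full mip chain
dropped = dxt_size_bytes(1024)   # same texture with the top mip level stripped
print(f"2048x2048 DXT5 + mips: {full / 2**20:.2f} MB")
print(f"1024x1024 DXT5 + mips: {dropped / 2**20:.2f} MB "
      f"({dropped / full:.0%} of the original)")
```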
mothergoose729 Posted November 13, 2012
Lol, I am in a thread about a texture optimizer recommending another texture optimizer! Running the program now; I haven't gotten to test it in game, but it is compressing textures further than the tool I was using. Pretty cool.
On the performance question, I just want to point out that the whole topic is purely academic and speculative. Real-world results on your system are going to differ from another person's, and are going to be very dependent on GPU architecture, driver sophistication, and other factors. My biggest concern with overloading VRAM is the VRAM bursting problem. I don't know of a way to prevent it, and if there is a scene where the textures required to render it exceed what the video card has in memory, then a VRAM burst is possible. I don't think, using any popular texture pack for Skyrim, that a VRAM burst is very likely on modern GPUs with 1 GB of memory, especially using texture optimizer programs like DDSopt or Optimizer Textures. For people using 512 MB cards, though, I can't say it's a good idea to overload your VRAM, mostly because of the VRAM burst issue.