
Skyrim Shadows


johnnyboy88

Question

I posted this in another forum a little while ago. I thought some people may find it useful here.

I wasn't sure if it should go here or under the Guides and Resources subforum.

 

This info is taken from around the web:

 

Keep your iShadowMapResolutionSecondary at 1024 to avoid some performance loss. These are the far shadows, so the drop in quality isn't really noticeable. Try to keep the primary at 4096 if possible, though.

 

The real issue here is that Skyrim isn't using a lot of cascades. There are two cascades, the primary and the secondary. The entire shadow map is stretched over the distance set by fShadowDistance. Note that when you switch to Medium this gets set to 2500, and at Ultra it's set to 8000. Whatever the distance is, the shadow map is getting stretched: the higher the distance, the more stretching, and the more pixelation. In an ideal world you'd have more cascades to minimize the stretching, but that isn't the case in Skyrim (yet, anyway).

 

Lowering the distance remedies that problem, at the cost of not seeing shadows beyond the cutoff distance. Thankfully, shadows "fade" in instead of popping, so it's not completely jarring if you use a setting of 2500.
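
To get a feel for how much the map gets stretched, here is a back-of-the-envelope sketch. It's my own simplification rather than anything pulled from the engine (texels_per_unit is a made-up helper), but it shows why 8000 looks so much blockier than 2500 at the same resolution:

# Back-of-the-envelope sketch: one shadow map of 'resolution' texels has to
# cover 'shadow_distance' game units, so texel density falls as distance grows.
def texels_per_unit(resolution, shadow_distance):
    return resolution / shadow_distance

for dist in (2500, 8000):
    print(f"fShadowDistance={dist}: {texels_per_unit(4096, dist):.2f} texels per game unit")

# fShadowDistance=2500: 1.64 texels per game unit
# fShadowDistance=8000: 0.51 texels per game unit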

 

iShadowSplitCount=2 // How many times shadows are split, i.e. how many shadows one object can cast.

 

fShadowLODStartFade=400 // LOD shadow fade time, i.e. whether shadows fade out slowly or quickly the further away you get from them.

 

iShadowMode=3 // Not really sure. I always thought this was the shader model used, i.e. 1.0, 2.0, 3.0. Though I've seen some "tweaks" set this to 4, I can't really tell what it does.

 

iShadowMaskQuarter=4 // This controls the interior lights' shadow mask; spotlights, hemis, and shadow omnis all have shadow masks. Adjusts shadow crispness. Lower values make shadows less detailed. Performance impact can be major.

 

iShadowFilter=3 // Filters the shadows, i.e. a bit like anisotropic filtering for shadows. Adjusts shadow filtering or smoothness. Affects all shadows. Low=1, Medium=2, High=3, Ultra=4

 

fShadowBiasScale=0.2500 // Affects how exterior shadows look and where they sit depending on the angle the PC is viewing them from. Determines the degree to which a surface is shadowed, based on its angle to the light source. Higher values reduce shadowing; lower values increase it.

 

iBlurDeferredShadowMask=2 // Valid values range from 0 - 7. Lower values will sharpen shadows (not the resolution), making vegetation more "vibrant." Lowering it gives a subtle increase in performance, but also makes shadows more pixelated and prone to striping. Higher values will make shadows softer and more blurred. Consider a value of two if using ENB.

 

fInteriorShadowDistance=3000.0000 // Distance the interior shadow map is stretched over. Increasing the value has no noticeable effect beyond reducing the quality of indoor shadows, and decreasing it causes unsightly fade-in.

 

fShadowDistance=3000.0000 // Distance the outdoor shadow map is stretched over. Determines the distance at which shadows appear outside, as referred to earlier. Use 8000 to avoid shadow pop-in. Lowering the number increases shadow detail by a significant degree; experiment with this setting until you find a balance between quality and view distance that you are happy with.

 

iShadowMapResolutionSecondary=2048 // Increases the detail level of shadows. The max setting, 8192, massively improves shadow quality, but in our testing we discovered a few objects and areas that absolutely crippled performance on even our beefiest machines when using 8192, but not 4096. As such we recommend sticking with this lower setting. Other possible values are 2048 and 1024.

 

iShadowMapResolutionPrimary=4096 // As above.

 

iShadowMapResolution=4096 // Directly controls shadow resolution. Values larger than 4096 are possible, such as 8192, at a substantial fps hit.

 

The performance impact of Primary and Secondary is linked directly to the ShadowMapResolution settings. The Ultra defaults are 2048 for Primary and 1024 for Secondary, so with the values above the performance ‘cost’ of Secondary shadowing will quadruple, and the cost of Primary shadowing will double. If you’re losing too many frames per second, lower the Secondary value first, and then the Primary if problems persist. iBlurDeferredShadowMask will have, at most, a few frames per second of impact when going from the Ultra default, 3, to 0.

 

Finally, keep in mind that a four-channel uncompressed 2048x2048 texture takes up about 17 MB of texture memory. For 4096x4096, this is about 67 MB. For many GPUs out there, that is more than half of the available texture memory. It's simply not feasible to go bigger than 2048x2048.
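
Those figures are just width x height x 4 bytes per texel. Here's a quick sketch of the arithmetic in case you want to check other sizes (decimal megabytes; actual shadow map formats may differ, so treat these as ballpark numbers):

# Ballpark memory footprint of an uncompressed four-channel (4 bytes per texel)
# shadow map at various resolutions.
def shadow_map_mb(resolution, bytes_per_texel=4):
    return resolution * resolution * bytes_per_texel / 1e6  # decimal MB

for res in (1024, 2048, 4096, 8192):
    print(f"{res}x{res}: ~{shadow_map_mb(res):.0f} MB")

# 1024x1024: ~4 MB
# 2048x2048: ~17 MB
# 4096x4096: ~67 MB
# 8192x8192: ~268 MB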

 

Also somebody mentioned shadows are done by the CPU. That is simply not the case.

 

 

 

The following are quotes from around the web because they say it rather well:  

    "Arguing that the CPU is rendering shadows is like arguing that the Sun revolves around the Earth. Skyrim uses shadow mapping. Anyone that knows anything about 3D graphics programming will tell you the same thing : If Skyrim was using the CPU to handle shadow mapping, it would be a slideshow. As in, you would measure your speed in frames per hour rather than frames per second."

 

    "Shadow mapping works by creating a special image called a depth buffer from the perspective of every light source. Anything the light source cannot see but the camera can see (the camera is what you see on your screen) is considered shadowed. Every light source has it's own depth buffer (or shadow map, if you will). The shadow map size doesn't refer to the number of samples per second, it refers to the resolution of the texture holding the depth buffer. Higher shadow map sizes mean the precision is higher as far as determining which pixels are shadowed."

 

    "In order to do shadow mapping on the CPU, the CPU would have to render a depth buffer for EVERY light source on the scene. It's simply not possible to do that in real time on a CPU."

 

    "Let's assume the CPU is doing this, and is enchanted with pixie dust that enables it to execute at a stable rate of one instruction per clock cycle with no memory bandwidth or latency issues despite all the other stuff going on, and the shadow-calculating program is an absolute piece of genius that requires an average of 12 instructions per shadow sample, including everything needed to set up the shadowmap render, geometry processing and so on. Some computers have been observed running the game at 60 FPS at least occasionally, even with 4096 by 4096 shadow maps. This means the CPU should be capable of producing 4096x4096x60 shadow samples per second. The CPU core must be running at least at... 4096*4096*60*12 / 10^9 = 12.08 GHz."

How the CPU affects the GPU:

    Let's say the CPU takes 33.33ms per frame to complete its instructions (this is about 30 frames per second). The GPU takes 16.6ms to render the scene with the information given to it by the CPU (which is about 60 frames per second). Now, because the CPU takes LONGER to finish its instructions per frame, your FRAMES PER SECOND are going to be BOTTLENECKED by the CPU. You will get 30 frames per second, with the GPU sitting at 50% utilization because it is idle for 16.6ms of each frame. A GPU bottleneck would be the opposite: the CPU completing its instructions in 16ms and the GPU taking 30ms to render the image.

 

    Now say you lower the draw distance for the shadows or other objects and you get better fps. This ISN'T because the shadows are drawn on the CPU; it's because issuing fewer draw calls allows the CPU to get everything done in, say, 21ms. That is about 47 fps. Your GPU is now rendering less and is able to render the frame in 10ms, which again is ~50% of how long it takes the CPU to complete its instructions. Having lots of draw calls is HORRIBLY CPU intensive in older versions of Direct3D because of the software overhead of the API. Consoles, on the other hand, can handle many more draw calls (~2500-3000) before it starts bogging down the CPU. From my understanding, if the renderer were coded properly using D3D10 or D3D11, it would allow the GPU to do more without having the CPU tell it what to do (that, or they optimise the renderer). Or I could be wrong and the problem could be in how the engine handles NPC scripting, which a new renderer won't solve, but only Beth knows what is going on with the engine. Either way, it still doesn't mean the shadows render on the CPU, which, as stated earlier, would take hundreds of milliseconds per frame.

    I hope this sheds some light on how the CPU and GPU function. I am no expert, but this is my basic understanding of the subject.
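
The frame-timing reasoning above boils down to a tiny model you can play with. It's a deliberate simplification (it ignores how CPU and GPU work can overlap in practice, and frame_stats is just a made-up helper), but it reproduces the numbers in the example:

# Tiny model of the CPU/GPU bottleneck described above. It assumes the GPU
# only starts once the CPU has finished its work for the frame.
def frame_stats(cpu_ms, gpu_ms):
    frame_ms = max(cpu_ms, gpu_ms)   # the slower side sets the pace
    fps = 1000.0 / frame_ms
    gpu_busy = gpu_ms / frame_ms     # fraction of the frame the GPU is working
    return fps, gpu_busy

print(frame_stats(33.33, 16.6))  # CPU-bound: ~30 fps, GPU roughly 50% busy
print(frame_stats(21.0, 10.0))   # fewer draw calls: ~47 fps, GPU again roughly 50% busy
print(frame_stats(16.0, 30.0))   # GPU-bound: ~33 fps, GPU 100% busy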


2 answers to this question


Well, the part about the CPU is certainly new information (for me anyway)... I know I have been one of the people saying it for months, since I saw others saying it when I was still learning. And from what I could dig up at the time it still made sense, since the game was made for consoles and DX9 and hence did not have any of the fancy new GPU features to draw on. Also, increasing shadow parameters never noticeably altered how my GPU was performing... however, the CPU load was larger. Guess I was fooled by the cores, or there is some semi-shared relationship due to how old the engine is compared to newer engines.

 

I know that in Blender I can get both the CPU and the GPU to render shadows (and the entire scene) if I want. Also, if I pre-run (I think it's called cooking or something similar?) the shadow maps (is that the right way of saying it?), then the CPU can do it quite fast... this is what I figured was going on, or at least something similar.

 

I will leave the thread here for now for more people to type in questions. Eventually I might gather up all your long posts into one and put them in the guide section, since you really have done quite a good job in the tech department! Once more, hats off and thanks! :)


Comparing Blender and other 3D packages like Maya or Max to a game engine is like comparing apples and oranges. They are two completely different animals. A game engine is more a form of playback of pre-created/pre-rendered objects and files; the assets have already been rendered out of the 3D package.

 

Most 3D packages are CPU-based because GPUs have only recently become powerful enough to be of any use for high-quality renders. Not to mention GPU rendering requires VRAM while CPU rendering uses system RAM, and even with a GTX Titan you only get 6 GB of VRAM. They have only recently started implementing more code to offload instructions to the GPU. They are CPU-based because there are a lot more complex calculations involved in rendering an HQ image, and a lot more memory required. Even a "simple" scene of just a box with a Lambert shader can take a second or two. That's one or two seconds for a single image, whereas a game engine displays 30-60 images in a second. The more complex the image, the longer it takes.

 

This demonstrates the difference between CPU and GPU rendering. If the CPU were rendering the shadows, there is no way you'd be able to get any kind of framerate that would be playable. And 3D apps are starting to use the GPU because it is faster at a lot of rendering functions, but that only reinforces the point that the CPU isn't capable of it.

 

Game engines are able to display at these speeds because most of their assets are pre-rendered and they sacrifice quality for speed. A 3D package needs to be able to produce 4K-plus images capable of being displayed in IMAX 3D HD.

 

Here's a GPU rendering engine, and it still takes 23 seconds to render a single full frame:

https://furryball.aaa-studio.cz/aboutFurryBall/whyGpu.html

So again, you can't really compare a 3D app to a game engine's realtime "rendering". They work completely differently. A game engine sacrifices a lot and is very limited in order to do what it does.

 

TLDR:

Basically, you can see the difference between CPU and GPU rendering by firing up your favorite 3D app and building a basic scene like the ones you see in Skyrim: a house, trees, flora, fauna, your 2K texture packs, a myriad of misc details like bowls and food, sky, multiple light sources. It would take a long time to render one frame, wouldn't it? And that's what would happen if the shadows were rendered on your CPU.

 

And if you pre-cook your scene like you mentioned (similar to having pre-created objects for a game engine), it does render faster, but nowhere near playable speeds.

 

Here's a long discussion of why games render in realtime and 3D apps don't:

https://forums.cgsociety.org/showthread.php?t=838177
