Hey everyone, happy new year!
I've just checked in some updates to how fixed normals work - you can now specify them in either camera space or model space, and adjusting lighting for back faces now works correctly (there were a couple of bugs with it).
However, correct back face lighting now requires mesh normals (it's the only way the shader can know which direction a vertex is facing).
The shader now reminds you what it wants in terms of tangents or normals though.
If you grab the latest version you might need to re-tick 'use fixed normals' on your materials as the defines have changed, sorry!
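If you have a lot of materials to update, you could re-apply the define in bulk with a small editor script. This is only a rough sketch - the shader name filter and keyword string below are assumptions, so check the shader source for the exact define it now uses:

// Editor-only helper that re-enables the fixed normals define on sprite materials.
// NOTE: "Sprite" and "_FIXED_NORMALS_VIEWSPACE" are placeholder names - replace them
// with the actual shader name and keyword from your copy of the shaders.
using UnityEngine;
using UnityEditor;

public static class ReapplyFixedNormals
{
    [MenuItem("Tools/Reapply Fixed Normals")]
    static void Reapply()
    {
        foreach (string guid in AssetDatabase.FindAssets("t:Material"))
        {
            string path = AssetDatabase.GUIDToAssetPath(guid);
            Material mat = AssetDatabase.LoadAssetAtPath<Material>(path);
            if (mat != null && mat.shader != null && mat.shader.name.Contains("Sprite"))
            {
                mat.EnableKeyword("_FIXED_NORMALS_VIEWSPACE");
                EditorUtility.SetDirty(mat);
            }
        }
        AssetDatabase.SaveAssets();
    }
}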
The majority of the time with Spine animations you don't need to adjust the tangents for correct back face lighting - it's only needed when you've rotated your object to face away from the camera, in which case the tangents need to be flipped in the shader.
Instead of rotating sprites, I recommend using Spine's skeleton.FlipX/Y. That way your Spine animation won't need normals, and things like the Z offset for submeshes will stay correct.
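For example, something like this (a minimal sketch assuming the Spine-Unity runtime's SkeletonAnimation component - the exact property names can vary between runtime versions):

using UnityEngine;
using Spine.Unity;

public class FaceDirection : MonoBehaviour
{
    SkeletonAnimation skeletonAnimation;

    void Awake()
    {
        skeletonAnimation = GetComponent<SkeletonAnimation>();
    }

    // Flip the skeleton instead of rotating the transform 180 degrees, so the mesh
    // keeps facing the camera and no mesh normals/tangents are needed for lighting.
    public void SetFacing(bool faceLeft)
    {
        skeletonAnimation.Skeleton.FlipX = faceLeft;
    }
}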
If you render Unity Sprites or Meshes then you might need to turn on the 'Adjust Backface Tangents' option if they face away from the camera.
03 Jan 2017, 12:39
@[deleted] yeah using a custom Depth+Normals buffer is pretty advanced to be fair.
What you're doing there is correct - that should render the depth+normals for the scene including soft alpha'd sprites.
However, you want to render into a RenderTexture which you can then pass through to your post effects (i.e. give the camera a target texture).
You will also need to edit the post effect shaders you're using so they sample this newly created RenderTexture instead of the default camera Depth+Normals texture.
Admittedly this is all pretty advanced stuff and you'll need to be able to edit your post effect shaders to get it to work, but it's what I do for my game so it definitely works.
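Roughly, the camera side of that could look like this (just a sketch under my assumptions - the replacement shader and the class/property names are placeholders for whatever you're actually using):

using UnityEngine;

[RequireComponent(typeof(Camera))]
public class SpriteDepthNormalsCamera : MonoBehaviour
{
    public Shader depthNormalsReplacementShader; // the shader that outputs depth+normals for your soft alpha'd sprites

    Camera depthCamera;
    public RenderTexture DepthNormalsTexture { get; private set; }

    void Start()
    {
        // A second camera that mirrors the main one but renders into a RenderTexture.
        var go = new GameObject("Sprite DepthNormals Camera");
        go.transform.SetParent(transform, false);
        depthCamera = go.AddComponent<Camera>();
        depthCamera.CopyFrom(GetComponent<Camera>());
        depthCamera.enabled = false; // rendered manually in OnPreRender

        DepthNormalsTexture = new RenderTexture(Screen.width, Screen.height, 24, RenderTextureFormat.ARGB32);
        depthCamera.targetTexture = DepthNormalsTexture;
    }

    void OnPreRender()
    {
        // Render depth+normals for the whole scene (including soft alpha'd sprites)
        // into the RenderTexture so the post effects can read it.
        depthCamera.RenderWithShader(depthNormalsReplacementShader, "");
    }
}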
I recommend reading this (plus the first two parts), which explains the CameraDepthNormals texture and how it gets used, with a simple example:
http://willychyr.com/2013/11/unity-shaders-depth-and-normal-textures-part-3/
In this example he's telling his camera to render a depth+normals texture for him with the following line of code:
camera.depthTextureMode = DepthTextureMode.DepthNormals;
Then his post effect shader is using that texture with:
_CameraDepthNormalsTexture
which, inside a Unity shader, automatically grabs the last camera's generated DepthNormals texture.
In your case you don't want to generate that texture or use _CameraDepthNormalsTexture, as you've rendered your own special one. So instead of sampling _CameraDepthNormalsTexture in the shader, pass it the texture you rendered into with camera.RenderWithShader(), and you should see the normals and depth for your Sprites.
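The C# side of that swap can be as simple as binding your RenderTexture under a new name for the shader to sample (a sketch - "_CustomDepthNormalsTexture" is a made-up property name; it just has to match whatever you declare in your edited post effect shader in place of _CameraDepthNormalsTexture):

using UnityEngine;

public static class CustomDepthNormalsBinding
{
    // Expose the custom depth+normals RenderTexture to shaders globally, so the
    // edited post effect can sample it instead of _CameraDepthNormalsTexture.
    public static void Bind(RenderTexture customDepthNormalsRT)
    {
        Shader.SetGlobalTexture("_CustomDepthNormalsTexture", customDepthNormalsRT);
    }
}

Then in the post effect shader, declare a sampler for _CustomDepthNormalsTexture and decode it the same way the tutorial decodes _CameraDepthNormalsTexture.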
This can then be adapted for things like Depth of Field or SSAO.