In this first tutorial, I wanted to talk about something that seems to have captured the interest of a lot of people: recreating worldspace position data in a texture using a shader, and using it to reconstruct parts of the world with procedural geometry.

I’ve been messing with this technique for well over a year now. The initial experiment used an enormous array of scripted CPU raycasts to move the visualizing geometry, which had pretty awful performance and could only update a limited number of points per frame. Later, I wrote a version where all of the world position data was obtained through an image effect shader on a camera, written to a render texture, and then used in another shader/material to displace static geometry with a vertex displacement shader. This allows thousands (or even millions) of points to be updated per frame with excellent performance, since all of the work runs in parallel per-pixel on the GPU, using data that already exists for the usual rendering process.
A more recent experiment of mine that I released (titled NL) uses this technique (along with a couple of other techniques I’d like to talk about in a later post) to construct a hazy, shifting, dreamlike visual reconstruction of a concrete world.

I’m going to try to explain this in a way that people with little shader experience can get something out of.
RENDERING AN IMAGE EFFECT
To get started, we’ll want a script to attach the shader to a camera. I’ll call it “VisualizeShader.cs” and refer to it as such, but anything will do fine.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

[ExecuteInEditMode]
public class VisualizeShader : MonoBehaviour {
    // the image effect shader will be tied to a material
    public Material material;

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        Graphics.Blit(source, destination, material);
    }
}
This script takes a material (with a shader attached) and uses it for post-processing over the camera’s view. Before we can really see it do anything, we’ll need to create an Image Effect shader and a material that uses it.

I’ve titled mine “DepthToWorld.shader”.

The base template for an Image Effect shader will give you a simple shader that just inverts the colors of your camera’s rendertexture. You can create a new material (I’ve titled mine “ImageEffect”) and drag the “DepthToWorld” shader onto it. Now, you can add the “VisualizeShader” script to your camera, and drag the ImageEffect material to the public material variable. In game view, the colors should now be inverted.
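For reference, the interesting part of the generated template is its fragment shader, which samples whatever the camera rendered from _MainTex and flips each color channel. Roughly (quoted from memory of Unity’s Image Effect template, so yours may differ slightly):
sampler2D _MainTex;

fixed4 frag (v2f i) : SV_Target
{
    // sample what the camera rendered at this pixel
    fixed4 col = tex2D(_MainTex, i.uv);
    // the template just inverts the colors
    col.rgb = 1 - col.rgb;
    return col;
}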

Now that we have a shader being rendered across the camera’s entire view, we can start doing something more complex with it than just inverting the pixels.
RENDERING DEPTH DATA IN THE SHADER
Add this Start function to “VisualizeShader.cs”:
void Start () {
    GetComponent<Camera>().depthTextureMode = DepthTextureMode.Depth;
}
This line has the camera record the depth buffer to a uniform 2D texture that can be sampled by the shader we have. We can test it by declaring it in the shader and sampling directly from it within the fragment shader of “DepthToWorld.shader”.
uniform sampler2D _CameraDepthTexture;

float4 frag (v2f i) : SV_Target
{
    float4 col = tex2D(_CameraDepthTexture, i.uv);
    return col;
}
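Depending on your graphics API, the raw depth values are nonlinear (and possibly reversed), so this view may look almost entirely white or black. If you just want a more readable picture at this stage, Unity’s built-in Linear01Depth helper from UnityCG.cginc remaps the raw sample to a 0-1 range. A quick, purely optional sketch (the final effect doesn’t need it):
float4 frag (v2f i) : SV_Target
{
    float rawDepth = tex2D(_CameraDepthTexture, i.uv).r;
    // Linear01Depth comes from UnityCG.cginc: 0 at the camera, 1 at the far plane
    float linearDepth = Linear01Depth(rawDepth);
    return float4(linearDepth, linearDepth, linearDepth, 1.0);
}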

Now that we know how to access the depth buffer, it’s time to convert those depth values back into worldspace positions by transforming them with the inverse of the camera’s ViewProjection matrix.
STORING WORLD POSITIONS TO A TEXTURE USING DEPTH AND INVERSE CAMERA MATRICES
At this point, we can make use of some math in the forum thread I linked before. Here it is as written by XRA in the original thread.
// main problem encountered is camera.projectionMatrix = ??????? worked but further from camera became more inaccurate
// had to use GL.GetGPUProjectionMatrix( ) seems to stay pretty exact now

// in script somewhere:
Matrix4x4 viewMat = camera.worldToCameraMatrix;
Matrix4x4 projMat = GL.GetGPUProjectionMatrix( camera.projectionMatrix, false );
Matrix4x4 viewProjMat = (projMat * viewMat);
Shader.SetGlobalMatrix("_ViewProjInv", viewProjMat.inverse);

// in fragment shader:
uniform float4x4 _ViewProjInv;
float4 GetWorldPositionFromDepth( float2 uv_depth )
{
    float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv_depth);
    float4 H = float4(uv_depth.x*2.0-1.0, (uv_depth.y)*2.0-1.0, depth, 1.0);
    float4 D = mul(_ViewProjInv,H);
    return D/D.w;
}
To break it down, there are two parts to this procedure:
- A script-side function that calculates the camera’s inverse ViewProjection matrix each frame and writes it to a globally accessible shader matrix
- A shader-side function that samples the depth buffer, transforms it by that matrix, and outputs floating-point color values that correspond directly to fixed positions in the world
In the VisualizeShader script, I’ve added some private variables as well as a new function that’s called in OnRenderImage before rendering. This will calculate the inverse ViewProjection matrix on each frame and make it accessible from within the shader.
...
private Matrix4x4 viewMat;
private Matrix4x4 projMat;
private Matrix4x4 viewProjMat;
private Camera c;
...
void Start(){
    ...
    c = GetComponent<Camera>();
    ...
}
void VPI()
{
    viewMat = c.worldToCameraMatrix;
    projMat = GL.GetGPUProjectionMatrix(c.projectionMatrix, false);
    viewProjMat = (projMat * viewMat);
    Shader.SetGlobalMatrix("_ViewProjInv", viewProjMat.inverse);
}
void OnRenderImage(...)
{
    VPI();
    ...
}
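For clarity, here’s roughly what the whole “VisualizeShader.cs” looks like at this point, assembled from the snippets above (a sketch, so your exact layout may differ):
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

[ExecuteInEditMode]
public class VisualizeShader : MonoBehaviour {
    // the image effect shader is tied to a material
    public Material material;

    private Matrix4x4 viewMat;
    private Matrix4x4 projMat;
    private Matrix4x4 viewProjMat;
    private Camera c;

    void Start () {
        c = GetComponent<Camera>();
        // have the camera write its depth buffer to _CameraDepthTexture
        c.depthTextureMode = DepthTextureMode.Depth;
    }

    // calculate the inverse ViewProjection matrix and expose it to every shader
    void VPI()
    {
        viewMat = c.worldToCameraMatrix;
        projMat = GL.GetGPUProjectionMatrix(c.projectionMatrix, false);
        viewProjMat = (projMat * viewMat);
        Shader.SetGlobalMatrix("_ViewProjInv", viewProjMat.inverse);
    }

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        VPI();
        Graphics.Blit(source, destination, material);
    }
}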
Now in the shader file, we’ll add this function and update the fragment shader.
uniform float4x4 _ViewProjInv;
uniform sampler2D _CameraDepthTexture;

float4 GetWorldPositionFromDepth( float2 uv_depth )
{
    float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv_depth);
    #if defined(SHADER_API_OPENGL)
    depth = depth*2.0-1.0;
    #endif
    float4 H = float4(uv_depth.x*2.0-1.0, (uv_depth.y)*2.0-1.0, depth, 1.0);
    float4 D = mul(_ViewProjInv, H);
    return D/D.w;
}
...
float4 frag (v2f i) : SV_Target
{
    float4 col = GetWorldPositionFromDepth(i.uv);
    return col;
}
...
In our game view, we can now see everything being rendered with worldspace positions as colors.

The colors are linear floating-point values, so they can be positive or negative. Their exact values correspond to positions in the world, with each of the three axes occupying a separate color channel.

(Note that if you move the camera, objects maintain the color values that correspond to their worldspace positions.)
Next comes the exciting part!
RECONSTRUCTING THE WORLDSPACE POSITIONS WITH A MESH AND VERTEX DISPLACEMENT
We can now set a Render Texture target on the camera so that this data can be used in another shader to reconstruct these points. For the data to keep its precision beyond the 0.0-1.0 color range, we’ll need an ARGBFloat render texture. Create a new render texture (I’ll call mine WSRT) and choose the ARGBFloat format.

Set your camera’s Target Texture to the WSRT render texture. The worldspace position data will now be recorded into WSRT, ready to be read by another shader that reconstructs geometry from it.
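(If you’d prefer to do this from script instead of the inspector, something along these lines works; the 256*256 size matches the point mesh we’ll use below, and “wsrt” is just a name I picked:)
// somewhere on startup, e.g. in the VisualizeShader Start function
RenderTexture wsrt = new RenderTexture(256, 256, 24, RenderTextureFormat.ARGBFloat);
wsrt.Create();
c.targetTexture = wsrt;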
For demonstration purposes, I’ll link a base geometry asset consisting of a 2D array of point geometry (256*256 points). I’d usually generate this geometry at runtime via script, but that’s a lesson for another time (there’s a rough sketch of the idea below).
The initial vertex positions of this mesh range from -128 to 128 in both the X and Y axes, which keeps the mesh centered even without deformation and makes it useful in a variety of situations. I’ll be using these positions to sample from the texture rather than using UV data, though it wouldn’t be too complicated to set up per-point UV coordinates (or rearrange the vertex positions) so that they range from 0 to 1.
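If you’d rather skip the linked asset, here’s a minimal sketch of how a point mesh like this could be built at runtime instead. The class name is my own, and it assumes a Unity version with 32-bit mesh indices, since 256*256 points exceeds the default 16-bit limit:
using UnityEngine;

[RequireComponent(typeof(MeshFilter))]
public class PointFieldBuilder : MonoBehaviour {
    void Start () {
        const int size = 256;
        Vector3[] vertices = new Vector3[size * size];
        int[] indices = new int[size * size];

        for (int y = 0; y < size; y++)
        {
            for (int x = 0; x < size; x++)
            {
                int i = y * size + x;
                // X and Y run from -128 to 127, keeping the mesh roughly centered like the linked asset
                vertices[i] = new Vector3(x - size / 2, y - size / 2, 0f);
                indices[i] = i;
            }
        }

        Mesh mesh = new Mesh();
        mesh.indexFormat = UnityEngine.Rendering.IndexFormat.UInt32;
        mesh.vertices = vertices;
        mesh.SetIndices(indices, MeshTopology.Points, 0);
        // oversized bounds so the mesh isn't frustum-culled once its vertices get displaced
        mesh.bounds = new Bounds(Vector3.zero, Vector3.one * 10000f);
        GetComponent<MeshFilter>().mesh = mesh;
    }
}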
Create a new GameObject with a mesh renderer and mesh filter. Add mesh.asset to the filter. The point field mesh should start rendering in the scene view.

It’ll render magenta because no material is assigned yet. We need to create the shader and material for this mesh.
Create a new Unlit shader file and label it “TextureToDisplacement”.
Most of the magic is going to be occurring within the vertex shader program.
v2f vert (appdata v)
{
    v2f o;
    // we'll use the following float2 to sample from the texture with values ranging from 0.0 to 1.0.
    // this remapping is only really necessary for this particular test mesh.
    float2 sPos = (v.vertex.xy+128.0)/256.0;
    // next, displace the vertex positions based on the sampled data from the texture.
    // because this is in the vertex shader, we have to use tex2Dlod instead of tex2D.
    v.vertex.xyz = tex2Dlod(_MainTex, float4(sPos.xy, 0, 0)).xyz;
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.uv = sPos;
    UNITY_TRANSFER_FOG(o,o.vertex);
    return o;
}
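The fragment shader for this mesh can stay essentially as the Unlit template generates it; since o.uv is set to sPos, each point just gets tinted with the worldspace color it sampled. For completeness, the template’s default frag looks roughly like this:
fixed4 frag (v2f i) : SV_Target
{
    // sample the texture; here that's the same worldspace texture the vertex was displaced by
    fixed4 col = tex2D(_MainTex, i.uv);
    UNITY_APPLY_FOG(i.fogCoord, col);
    return col;
}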
Apply the shader to a new material, with Texture set to WSRT. Apply this material to the mesh and it should now conform to positions in worldspace!

Now if you move the camera around, you’ll see the mesh reconstructing anything that exists in front of it. You can also play around with culling masks so that certain geometry exists only for the worldspace-rendering camera and not the main camera, which gives you things that can only be seen through the reconstructing mesh (a rough sketch of this follows below).
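A rough sketch of that culling mask idea (here “ReconstructOnly” is a hypothetical layer you’d add yourself in Tags & Layers, and worldspaceCamera/mainCamera stand in for references to the two cameras):
// let only the worldspace-rendering camera see geometry on that layer
worldspaceCamera.cullingMask = LayerMask.GetMask("ReconstructOnly");
// and hide that layer from the main camera
mainCamera.cullingMask &= ~LayerMask.GetMask("ReconstructOnly");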

I’ve included the project files (along with a more fleshed-out example scene) in case you found anything hard to follow. Please feel free to leave comments and questions about anything I may have left unclear, as well as suggestions for future tutorials. This was my first ever attempt at writing a tutorial, so any criticism is very much appreciated!
