
Rendering a Sphere on a Quad

Making the Sphere Impostor Feel More Competent

Ben Golus

GPU raytracing is all the rage these days, so let's talk about it! Today we're going to raytrace a single sphere.

Using a fragment shader.

Yeah, I know. Not super fancy. You can do a search on Shadertoy and get hundreds of examples. There are even several great tutorials out there already for doing sphere impostors, which is what this is. So why would I write another article on it? It's not even the right kind of GPU raytracing!

Well, because the ray tracing part isn't really the part I'm going to focus on. This article is more about how to inject opaque ray traced or ray marched objects into a rasterized scene in Unity. But it also goes into some additional tricks for dealing with rendering a sphere impostor that aren't always immediately obvious or covered by the other tutorials I've seen. By the end of this we'll have a sphere impostor on a tight quad that supports multiple lights, shadow casting, shadow receiving, and orthographic cameras for the built-in forward renderer that almost perfectly mimics a high poly mesh. With no extra C# script.

My First Sphere Impostor

As mentioned in the intro, this is a well trodden area. The accurate and efficient math for drawing a sphere is already known. So I'm just going to steal the applicable function from Inigo Quilez to make a basic raytraced sphere shader that we can slap on a cube mesh.

Inigo's examples are all written in GLSL. So we have to modify that code slightly to work with HLSL. Luckily for this function that really just means a find and replace of vec with float.

float sphIntersect( float3 ro, float3 rd, float4 sph )
{
    float3 oc = ro - sph.xyz;
    float b = dot( oc, rd );
    float c = dot( oc, oc ) - sph.w*sph.w;
    float h = b*b - c;
    if( h < 0.0 ) return -1.0;
    h = sqrt( h );
    return -b - h;
}

That function takes 3 arguments: the ro (ray origin), rd (normalized ray direction), and sph (sphere position xyz and radius w). It returns the length of the ray from the origin to the sphere surface, or -1.0 in the case of a miss. Nice and straightforward. So all we need is those three values and we've got a nice sphere.

The ray origin is perhaps the easiest point to get. For a Unity shader, that's going to be the camera position. Conveniently, it's passed to every shader in the global shader uniform _WorldSpaceCameraPos. For an orthographic camera it's a little more complex, but luckily we don't need to worry about that.

*ominous foreshadowing*

For the sphere position we can use the world space position of the object we're applying the shader to. That can be easily extracted from the object's transform matrix with unity_ObjectToWorld._m03_m13_m23. The radius we can set as some arbitrary value. Let's go with 0.5 for no particular reason.

Lastly is the ray direction. This is just the direction from the camera to the world position of our surrogate mesh. That's easy enough to get by calculating it in the vertex shader and passing along the vector to the fragment.

// vertex shader
float3 worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
// direction from the camera to the mesh surface
float3 rayDir = worldPos - _WorldSpaceCameraPos.xyz;

Note: it's important not to normalize this in the vertex shader. You'll need to do that in the fragment shader instead, as otherwise the interpolated values won't work out. What we're interpolating is effectively the camera-relative surface position, not the ray direction itself.

But after all that we've got the three values we need to raytrace a sphere.

Now I said the above function returns the ray length. So to get the actual world space position of the sphere's surface, you multiply the normalized ray by the ray length and add the ray origin. You can even get the world normal by subtracting the sphere's position from the surface position and normalizing. And we pass the ray length to the clip() function to hide anything outside the sphere, since the intersection function returns -1.0 in the case of a miss.
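As a sketch, the whole fragment shader so far comes down to just a few lines. The names here are assumptions matching the earlier snippets, with i.rayDir being the interpolated vector from the vertex shader:

// fragment shader sketch
// normalize the interpolated ray direction per pixel
float3 rayDir = normalize(i.rayDir);
float3 rayOrigin = _WorldSpaceCameraPos.xyz;
// sphere position from the transform, radius of 0.5
float4 sphere = float4(unity_ObjectToWorld._m03_m13_m23, 0.5);
// returns the ray length, or -1.0 on a miss
float rayHit = sphIntersect(rayOrigin, rayDir, sphere);
// clip() discards the pixel when the value is negative
clip(rayHit);
// reconstruct the world space surface position and normal
float3 worldPos = rayOrigin + rayDir * rayHit;
float3 worldNormal = normalize(worldPos - sphere.xyz);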

Depth Finder

The last little bit for an effective sphere impostor is the z depth. If we want our sphere to intersect with the world properly, we need to output the sphere's depth from the fragment shader. Otherwise we're stuck using the depth of the mesh we're using to render. This is actually way easier than it sounds. Since we're already calculating the world position in the fragment shader, we can apply the same view and projection matrices that we use in the vertex shader to get the z depth. Unity even includes a handy UnityWorldToClipPos() function to make it even easier. Then it's a matter of having an output argument that uses SV_Depth with the clip space position's z divided by its w.
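A minimal sketch of that depth output, assuming the fragment function declares an out float outDepth : SV_Depth argument and worldPos is the raytraced surface position from above:

// reproject the raytraced surface position to clip space
float4 clipPos = UnityWorldToClipPos(worldPos);
// z divided by w is the depth value the rasterizer would have written
outDepth = clipPos.z / clipPos.w;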

Put that all together with some basic lighting and you get something like this:

It looks like a sphere, but it's actually a cube.

A very round cube. Make all the boy cubes go *whaaah!*.

Texturing a Sphere

Well, that's not too exciting. We should put a texture on it. For that we need UVs, and luckily those are pretty easy for a sphere.

Equirectangular UVs

Let's slap an equirectangular texture on this thing. For that we just need to feed the normal direction into an atan2() and an acos() and we get something like this:

float2 uv = float2(
    // atan2 returns a value between -pi and pi,
    // so we divide by pi * 2 to get -0.5 to 0.5
    atan2(normal.z, normal.x) / (UNITY_PI * 2.0),
    // acos returns 0.0 at the top, pi at the bottom,
    // so we flip the y to align with Unity's OpenGL style
    // texture UVs so 0.0 is at the bottom
    acos(-normal.y) / UNITY_PI
);
fixed4 col = tex2D(_MainTex, uv);

Earth the final frontier.

And look at that we've got a perfectly … wait. What's this!?

Is that the Greenwich Mean Line?

That's a UV seam! How do we have a UV seam? Well, that comes down to how GPUs calculate mip level for mip maps.

Unseamly

GPUs calculate the mip level using what are known as screen space partial derivatives. Roughly speaking, this is the amount a value changes from one pixel to the one next to it, either horizontally or vertically. GPUs can calculate this value for each set of 2x2 pixels, so the mip level is determined by how much the UVs change within each of these 2x2 "pixel quads". And when we're calculating the UVs here, the atan2() suddenly jumps from roughly 0.5 to roughly -0.5 between two pixels. That makes the GPU think the entire texture is being displayed between those two pixels. And thus it uses the absolute smallest mip map it has in response.
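For a rough intuition of why, this is approximately the calculation the hardware does. A sketch only, real GPUs vary in the details; it reuses the uv from the snippet above, and _MainTex_TexelSize.zw is Unity's convention for the texture's dimensions:

// approximate mip selection from screen space UV derivatives
float2 texelUV = uv * _MainTex_TexelSize.zw;
float2 dx = ddx(texelUV);
float2 dy = ddy(texelUV);
// log2 of the largest texel footprint between adjacent pixels
float mipLevel = 0.5 * log2(max(dot(dx, dx), dot(dy, dy)));

With the atan2() seam, uv.x jumps by nearly a full texture width between adjacent pixels, so that footprint (and thus the mip level) lands at the very bottom of the mip chain.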

So how do we work around this? Why by disabling the mip maps of course!

No no no! We absolutely do not do that. But that's the usual solution you'll find to most mip map related issues. (As you may have seen me complain about elsewhere.) Instead a solution was nicely presented by Marco Tarini.

The idea is to use two UV sets with the seams in different places. And for our specific case, the longitudinal UVs calculated by the atan2() are already a -0.5 to 0.5 range, so all we need is a frac() to get them into a 0.0 to 1.0 range. Then use those same partial derivatives to pick the UV set with the least change. The magical function fwidth() gives us how much the value is changing in any screen space direction.

// -0.5 to 0.5 range
float phi = atan2(worldNormal.z, worldNormal.x) / (UNITY_PI * 2.0);
// 0.0 to 1.0 range
float phi_frac = frac(phi);
float2 uv = float2(
    // uses a small bias to prefer the first 'UV set'
    fwidth(phi) < fwidth(phi_frac) - 0.001 ? phi : phi_frac,
    acos(-worldNormal.y) / UNITY_PI
);

And now we have no more seam!

I promise it's not hiding on the other side.

* edit: It's come to my attention that this technique may only work properly when using Direct3D, integrated Intel GPUs, or (some?) Android OpenGLES devices. The fwidth() function when using OpenGL on desktop may run using higher accuracy derivatives than those used by the GPU to determine mip levels, meaning the seam will still be visible. Metal is guaranteed to always run at a higher accuracy. Vulkan can be forced to run at the lower accuracy by using coarse derivative functions, but as of writing this Unity doesn't seem to properly transpile coarse or fine derivatives. I wrote a follow up with some alternate solutions here:

Alternatively, you could just use a cube map instead. Unity can convert an imported equirectangular texture into a cube map for you. But that means you lose out on anisotropic filtering. The UVW for a cube map texture sample is essentially just the sphere's normal. You do need to flip at least the x or z axis though, because cube maps are assumed to be viewed from the "inside" of a sphere, and here we want the texture mapped to the outside.
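If you go that route, the sample itself is about as simple as it gets. A sketch, assuming a samplerCUBE property named _CubeMap (a hypothetical name):

// cube maps are authored as seen from the inside, so flip an
// axis to map the texture onto the outside of the sphere
fixed4 col = texCUBE(_CubeMap, float3(-worldNormal.x, worldNormal.yz));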

Crunchy Edges (aka Derivatives Strike Again)

At this point, if we compare the raytraced sphere shader we have with an actual high poly mesh sphere using the same equirectangular UVs, we may notice something else odd. It looks like there's an outline around the raytraced sphere that the mesh does not have. A really aliased outline.

Crunchy "outline" on the impostor.

The cause is our pesky derivatives again. There's one more UV seam we missed! On a mesh, derivatives are calculated per pixel quad, per triangle. In fact, if a triangle only touches a single pixel of one of those 2x2 pixel quads, the GPU still runs the fragment shader for all 4 pixels! The advantage of this is it can accurately calculate plausible derivatives, which prevents this problem on a real mesh. But we don't have good UVs outside of the sphere; the intersection function just returns a constant -1.0 on a miss, so the UVs there are bogus. We can see this clearly if we comment out the clip() and outDepth lines in the shader.

The Hidden UV Seam

What we want is for the UVs to be something close to the value at the visible edge of the sphere, or maybe just past the edge. That's surprisingly complicated to calculate. But we can get something reasonably close by finding the closest point on a ray to the sphere center. At the exact sphere edge, this is 100% accurate, but it starts to curve towards the camera slightly as you get further away from the sphere. But this is cheap and good enough to get rid of the problem and is nearly indistinguishable from a fully correct fix.

Even better, we can apply this fix by replacing the ray length with a single dot() when the sphere intersection function returns a -1.0. A super power of the dot product of two vectors is, if one of them is normalized, the output is the magnitude of the other vector along the direction of the normalized vector. This is great for getting how far away something is in a certain direction, like how far the camera is from the sphere's pivot along the view ray.

// same sphere intersection function
float rayHit = sphIntersect(rayOrigin, rayDir, float4(0,0,0,0.5));
// clip if -1.0 to hide sphere on miss
clip(rayHit);
// dot product gets the ray length at the position closest to the sphere
rayHit = rayHit < 0.0 ? dot(rayDir, spherePos - rayOrigin) : rayHit;

No longer seamful.

Object Scale & Rotation

So that's all going well, but what if we want to make a bigger sphere or rotate it? We can move the mesh position around and the sphere tracks with it, but everything else is ignored.

We could change the sphere radius manually, but then you'd have to manually keep the mesh you're using in sync. So it'd be easier to extract the scale from the object transform itself. And we could apply an arbitrary rotation matrix, but again it'd be way easier if we could just use the object transform.

Or we could do something even easier and do the raytracing in object space! This comes with a few other benefits we'll get into. But before that we want to add a few lines to our shader code. First we want to use the unity_WorldToObject matrix to transform the ray origin and ray direction into object space in the vertex shader. In the fragment shader, we no longer need to get the world space object position from the transform since the sphere can now just be at the object's origin.

// vertex shader
float3 worldSpaceRayDir = worldPos - _WorldSpaceCameraPos.xyz;
// only want to rotate and scale the dir vector, so w = 0
o.rayDir = mul(unity_WorldToObject, float4(worldSpaceRayDir, 0.0)).xyz;
// need to apply the full transform to the origin point, so w = 1
o.rayOrigin = mul(unity_WorldToObject, float4(_WorldSpaceCameraPos.xyz, 1.0)).xyz;

// fragment shader
float3 spherePos = float3(0,0,0);

With this change alone, you can now rotate and scale the game object and the sphere scales and rotates as you would expect. It even supports non-uniform scaling! Just remember that all of those "world space" positions in the shader are now in object space. So we need to transform the normal and sphere surface position to world space. Just be sure to use the object space normal for the UVs.

// now gets an object space surface position instead of world space
float3 objectSpacePos = rayDir * rayHit + rayOrigin;
// still need to normalize this in object space for the UVs
float3 objectSpaceNormal = normalize(objectSpacePos);
float3 worldNormal = UnityObjectToWorldNormal(objectSpaceNormal);
float3 worldPos = mul(unity_ObjectToWorld, float4(objectSpacePos, 1.0)).xyz;

Big, little, and terrible sandwich Earths.

Other advantages are better overall precision, as using world space for everything can cause some precision issues when getting far away from the origin. Those are at least partially avoided when using object space. It also means we can remove the usage of spherePos in several places since it's all zeros, simplifying the code a bit.

Using a Quad

So far we've been using a cube mesh for all of this. There are some minor benefits to using a cube for some use cases, but I promised a quad in the title of this article. And really, there's no good reason to use an entire cube for a sphere. There's a lot of wasted space around the sides where we're paying the cost of rendering the sphere where we know it's not going to be. Especially with the default Unity cube, which has 24 vertices! Why waste time calculating the extra 20 vertices?

Billboard Shader

There are several examples of billboard shaders out there. The basic idea for all of them is you ignore the rotation (and scale!) of the object's transform and instead align the mesh to face the camera in some way.

View Facing Billboard

Probably the most common version of this is a view facing billboard. This is done by transforming the pivot position into view space and adding the vertex offsets to the view space position. This is relatively cheap to do. Just remember to update the ray direction to match.

// get the object's world space pivot from the transform matrix
float3 worldSpacePivot = unity_ObjectToWorld._m03_m13_m23;
// transform into view space
float3 viewSpacePivot = mul(UNITY_MATRIX_V, float4(worldSpacePivot, 1.0)).xyz;
// object space vertex position + view pivot = billboarded quad
float3 viewSpacePos = v.vertex.xyz + viewSpacePivot;
// calculate the object space ray dir from the view space position
o.rayDir = mul(unity_WorldToObject,
    mul(UNITY_MATRIX_I_V, float4(viewSpacePos, 0.0))
    ).xyz;
// apply projection matrix to get clip space position
o.pos = mul(UNITY_MATRIX_P, float4(viewSpacePos, 1.0));

However, if we just add the above code to our shader, there's something not quite right with the sphere. It's getting clipped on the edges, especially when the sphere is to the side or close to the camera.

Thinking too far outside the box.

This is because the quad is a flat plane, and the sphere is not. A sphere has some depth. Due to perspective the volume of the sphere will cover more of the screen than the quad does!

Artist's Recreation of the Crime Scene

A solution to this you might use is to scale the billboard up by some arbitrary amount. But this doesn't fully solve the problem, as you have to scale the quad up quite a bit. Especially if you can get close to the sphere or have a very wide FOV. And this partially defeats the purpose of using a quad over a cube to begin with. Indeed, even with relatively small scale increases, more pixels end up rendering empty space than with the cube.

Camera Facing Billboard

Luckily, we can do a lot better. A partial fix is to use a camera facing billboard instead of a view facing billboard and pull the quad towards the camera slightly. The difference between view facing and camera facing billboards is a view facing billboard is aligned with the direction the view is facing. A camera facing billboard is facing the camera's position. The difference can be subtle, and the code is a bit more complex.

Instead of doing things in view space, we instead need to construct a rotation matrix that rotates a quad towards the camera. This sounds scarier than it is. You just need to get the vector that points from the object position to the camera, the forward vector, and use cross products to get the up and right vectors. Put those three vectors together and you have yourself a rotation matrix.

float3 worldSpacePivot = unity_ObjectToWorld._m03_m13_m23;
// offset between pivot and camera
float3 worldSpacePivotToCamera = _WorldSpaceCameraPos.xyz - worldSpacePivot;
// camera up vector
// used as a somewhat arbitrary starting up orientation
float3 up = UNITY_MATRIX_I_V._m01_m11_m21;
// forward vector is the normalized offset
// this is the direction from the pivot to the camera
float3 forward = normalize(worldSpacePivotToCamera);
// cross product gets a vector perpendicular to the input vectors
float3 right = normalize(cross(forward, up));
// another cross product ensures the up is perpendicular to both
up = cross(right, forward);
// construct the rotation matrix
float3x3 rotMat = float3x3(right, up, forward);
// the above rotation matrix is transposed, meaning the components are
// in the wrong order, but we can work with that by swapping the
// order of the matrix and vector in the mul()
float3 worldPos = mul(v.vertex.xyz, rotMat) + worldSpacePivot;
// ray direction
float3 worldRayDir = worldPos - _WorldSpaceCameraPos.xyz;
o.rayDir = mul(unity_WorldToObject, float4(worldRayDir, 0.0)).xyz;
// clip space position output
o.pos = UnityWorldToClipPos(worldPos);

This is better, but still not good. The sphere is still clipping the edges of the quad. Actually, all four edges now. At least it's centered. Well, we forgot to move the quad toward the camera! Technically we could also scale the quad by an arbitrary amount too, but let's come back to that point.

float3 worldPos = mul(float3(v.vertex.xy, 0.3), rotMat) + worldSpacePivot;

We're ignoring the z of the quad and adding a small (arbitrary) offset to pull it towards the camera. The advantage of this option vs an arbitrary scaling is it should stay more closely confined to the bounds of the sphere when further away, and scale when closer just due to the perspective change, just like the sphere itself. It only starts to cover significantly more screen space than needed when really close. I chose 0.3 in the above example because it was a good balance of not covering too much of the screen when close by, while still covering all of the viewable sphere until you're really, really close.

You know, you could probably figure the exact value to use to pull or scale the quad for a given distance from the sphere with a bit of math …

Perfect Perspective Billboard Scaling

Wait! We can figure out the value using a bit of math! We can get the exact size the quad needs to be at all camera distances from the sphere. Just needs some basic high school math!

We can calculate the angle between the camera to pivot vector and camera to visible edge of the sphere. In fact it's always a right triangle with the 90 degree corner at the sphere's surface! Remember your old friend SOHCAHTOA? We know the distance from the camera to the pivot, that's the hypotenuse. And we know the radius of the sphere. From that we can calculate the base of the right angle triangle formed from projecting that angle to the quad's plane. With that we can scale the quad instead of modifying the v.vertex.z.
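Written out, with $r$ as the sphere radius and $d$ as the camera to pivot distance, the code below is computing:

\[
\sin\theta = \frac{r}{d}, \qquad \tan\theta = \frac{\sin\theta}{\sqrt{1 - \sin^2\theta}}, \qquad \text{quadScale} = 2\,d\tan\theta
\]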

// get the sine of the right triangle, with the hypotenuse being the
// sphere pivot distance and the opposite being the sphere radius
// (viewOffset here is the worldSpacePivotToCamera offset from above)
float sinAngle = 0.5 / length(viewOffset);
// convert to cosine
float cosAngle = sqrt(1.0 - sinAngle * sinAngle);
// convert to tangent
float tanAngle = sinAngle / cosAngle;
// those previous two lines are the equivalent of this, but faster
// tanAngle = tan(asin(sinAngle));
// get the opposite face of the right triangle with the 90 degree
// angle at the sphere pivot, multiplied by 2 to get the quad size
float quadScale = tanAngle * length(viewOffset) * 2.0;
// scale the quad by the calculated size
float3 worldPos = mul(float3(v.vertex.xy, 0.0) * quadScale, rotMat) + worldSpacePivot;

Accounting for Object Scale

At the beginning of this we converted everything over to using object space so we could trivially support rotation and scale. We still support rotation, since the quad's orientation doesn't actually matter. But the quad doesn't scale with the object's transform like the cube did. The easiest fix for this is to extract the scale from the axes of the transform matrix and multiply the radius we're using by the max scale.

// get the object scale
float3 scale = float3(
    length(unity_ObjectToWorld._m00_m10_m20),
    length(unity_ObjectToWorld._m01_m11_m21),
    length(unity_ObjectToWorld._m02_m12_m22)
);
float maxScale = max(abs(scale.x), max(abs(scale.y), abs(scale.z)));
// multiply the sphere radius by the max scale
float maxRadius = maxScale * 0.5;
// update our sine calculation using the new radius
float sinAngle = maxRadius / length(viewOffset);
// do the rest of the scaling code

Now you can uniformly scale the game object and the sphere will still remain perfectly bound by the quad.

Ellipsoid Bounds?

It should also be possible to calculate the exact bounds of an ellipsoid, or non-uniformly scaled sphere. Unfortunately that's starting to get a bit more difficult. So I'm not going to put the effort into solving that problem now. I'll leave this as "an exercise for the reader." (Aka, I have no idea how to do it.)

Frustum Culling

One additional problem with using a quad is Unity's frustum culling. It has no idea that the quad is being rotated in the shader, so if the game object is rotated so it's being viewed edge on, it may get frustum culled while the sphere should still be visible. The fix for this would be to use a custom quad mesh that's had its bounds manually modified from C# code to be a box. Alternatively, you can use a quad mesh with one vertex pushed forward and one back by 0.5 on the z axis. We're already flattening the mesh in the shader by replacing v.vertex.z with 0.0, so those offsets won't change how it renders.
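As a reminder, that flattening is already sitting in the scaling code from earlier; the mesh's z never reaches the screen:

// v.vertex.z is ignored entirely, so verts pushed along z to
// fatten the mesh's bounds have no effect on what's rendered
float3 worldPos = mul(float3(v.vertex.xy, 0.0) * quadScale, rotMat) + worldSpacePivot;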

Shadow Casting

So now we have a nicely rendered sphere on a quad that is lit, textured, and can be moved, scaled, and rotated around. So let's make it cast shadows! For that we'll need to add a shadow caster pass to our shader. Luckily, the same vertex shader can be reused for both passes, since all it does is create a quad and pass along the ray origin and direction. And those of course will be exactly the same for the shadows as they are for the camera, right? Then the fragment shader really just needs to output the depth, so you can delete all that pesky UV and lighting code.

Oh.

The ray origin and direction need to be coming from the light, not the camera. And the value we're using for the ray origin is always the current camera position, not the light. The good news is that's not hard to fix. We can replace any usage of _WorldSpaceCameraPos with UNITY_MATRIX_I_V._m03_m13_m23 which gets the current view's world position from the inverse view matrix. Now as long as the shadows are rendered with perspective projections it should all work!
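As a sketch, that swap looks like this anywhere a ray origin is needed:

// world space position of the current view, camera or shadow casting
// light, taken from the last column of the inverse view matrix
float3 worldSpaceViewPos = UNITY_MATRIX_I_V._m03_m13_m23;
// replaces the old _WorldSpaceCameraPos based ray direction
float3 worldSpaceRayDir = worldPos - worldSpaceViewPos;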

Oh. Oh, no.

Directional shadows use an orthographic projection.

Orthographic Pain

The nice thing with perspective projection and ray tracing is the ray origin is where the camera is. That's really easy to get, even for arbitrary views, as shown above. For orthographic projections the ray direction is the forward view vector. That's easy enough to get from the inverse view matrix again.

// forward in view space is -z, so we want the negative vector
float3 worldSpaceViewForward = -UNITY_MATRIX_I_V._m02_m12_m22;

But how do we get the orthographic ray origin? If you try and search online you'll probably come across a bunch of examples that use a c# script to get the inverse projection matrix. Or abuse the current unity_OrthoParams which has information about the orthographic projection's width and height. You can then use the clip space position to reconstruct the near view plane position the ray is originating from. The problem with these approaches is they're all getting the camera's orthographic settings, not the current light's. So instead we have to calculate the inverse matrix in the shader!

float4x4 inverse(float4x4 m) {
    float n11 = m[0][0], n12 = m[1][0], n13 = m[2][0], n14 = m[3][0];
    float n21 = m[0][1], n22 = m[1][1], n23 = m[2][1], n24 = m[3][1];
    float n31 = m[0][2], n32 = m[1][2], n33 = m[2][2], n34 = m[3][2];
    float n41 = m[0][3], n42 = m[1][3], n43 = m[2][3], n44 = m[3][3];
    float t11 = n23 * n34 * n42 - n24 * n33 * n42 + n24 * n32 * n43 - n22 * n34 * n43 - n23 * n32 * n44 + n22 * n33 * n44;
    // ... hold up, how many more lines are there of this?!

Ok, let's not do that. Those are just the first few lines of a >30 line function of increasing length and complexity. There's got to be a better way.

The Nearly View Plane

As it turns out, you don't need any of that. We don't actually need the ray origin to be at the near plane. The ray origin really just needs to be the mesh's position pulled back along the forward view vector. Just far enough to make sure it's not starting inside the volume of the sphere. At least assuming the camera itself isn't already inside the sphere. And a "near plane" at the camera's position instead of the actual near plane totally fits that bill.

We already know the world position of the vertex in the vertex shader. So we can transform that world position into view space, zero out the viewSpacePos.z, and transform it back into world space. That results in a usable ray origin for an orthographic projection!

// transform world space vertex position into view space
float4 viewSpacePos = mul(UNITY_MATRIX_V, float4(worldPos, 1.0));
// flatten the view space position to be on the camera plane
viewSpacePos.z = 0.0;
// transform back into world space
float4 worldRayOrigin = mul(UNITY_MATRIX_I_V, viewSpacePos);
// orthographic ray dir
float3 worldRayDir = worldSpaceViewForward;
// and to object space
o.rayDir = mul(unity_WorldToObject, float4(worldRayDir, 0.0)).xyz;
o.rayOrigin = mul(unity_WorldToObject, worldRayOrigin).xyz;

And really we don't even need to do all that. Remember that super power of the dot() I mentioned above? We just need the camera to vertex position vector and the normalized forward view vector. We already have the camera to vertex position vector, that's the original perspective world space ray direction. And we know the forward view vector by extracting it from the matrix mentioned above. Conveniently this vector comes already normalized! So we can remove two of the matrix multiplies in the above code and do this instead:

float3 worldSpaceViewPos = UNITY_MATRIX_I_V._m03_m13_m23;
float3 worldSpaceViewForward = -UNITY_MATRIX_I_V._m02_m12_m22;
// originally the perspective ray dir
float3 worldCameraToPos = worldPos - worldSpaceViewPos;
// orthographic ray dir, the view forward scaled by the
// distance of the vertex along that forward vector
float3 worldRayDir = worldSpaceViewForward * dot(worldCameraToPos, worldSpaceViewForward);
// orthographic ray origin, the vertex pulled back onto the camera plane
float3 worldRayOrigin = worldPos - worldRayDir;
o.rayDir = mul(unity_WorldToObject, float4(worldRayDir, 0.0)).xyz;
o.rayOrigin = mul(unity_WorldToObject, float4(worldRayOrigin, 1.0)).xyz;

* There is one minor caveat. This does not work for oblique projections (aka a sheared orthographic projection). For that you really do need the inverse projection matrix. Sheared perspective projections are fine though!

Light Facing Billboard

Remember how we're doing camera facing billboards? And that fancy math to scale the quad to account for the perspective? We don't need any of that for an orthographic projection. We just need to do view facing billboarding and scale the quad by only the object transform's max scale. However, maybe let's not delete all of that code quite yet. We can use the existing rotation matrix construction as is, just changing the forward vector to be the negative worldSpaceViewForward vector instead of the worldSpacePivotToCamera vector.

A Point of Perspective

In fact, now might be a good time to talk about how spot lights and point lights use perspective projections. If we want to support directional lights, spot lights, and point light shadows, we're going to need to support both perspective and orthographic projections in the same shader. Unity also uses this pass to render the camera depth texture. This means we need to detect if the current projection matrix is orthographic or not and choose between the two paths.

Well, we can find out what kind of projection matrix we're using by checking a specific component of it. The very last component of a projection matrix will be 0.0 if it's a perspective projection matrix, and will be 1.0 if it's an orthographic projection matrix.

bool isOrtho = UNITY_MATRIX_P._m33 == 1.0;

// billboard code
float3 forward = isOrtho ? -worldSpaceViewForward : normalize(worldSpacePivotToCamera);
// do the rest of the billboard code

// quad scaling code
float quadScale = maxScale;
if (!isOrtho)
{
    // do that perfect scaling code
}

// ray direction and origin code
float3 worldRayOrigin = worldSpaceViewPos;
float3 worldRayDir = worldPos - worldRayOrigin;
if (isOrtho)
{
    worldRayDir = worldSpaceViewForward * dot(worldRayDir, worldSpaceViewForward);
    worldRayOrigin = worldPos - worldRayDir;
}
o.rayDir = mul(unity_WorldToObject, float4(worldRayDir, 0.0)).xyz;
o.rayOrigin = mul(unity_WorldToObject, float4(worldRayOrigin, 1.0)).xyz;
// don't worry, I'll show the whole vertex shader later

And now we have a vertex function that can correctly handle both orthographic and perspective projection! And nothing needs to change in the fragment shader to account for this. Oh, and we really can use the same function for both the shadow caster and forward lit pass. And now you can use an orthographic camera as well!

Shadow Bias

Now, if you've been following along, you'll have a shadow caster pass outputting depth. But we're not calling any of the usual functions a shadow caster pass uses for applying bias offsets. At the moment this isn't obvious since we're not self shadowing yet, but it'll be a problem if we don't fix it.

We're not going to use the built-in TRANSFER_SHADOW_CASTER_NORMALOFFSET(o) macro in the vertex shader for this, since we need to do the bias in the fragment shader. Luckily, there's another benefit to doing the raytracing in object space. The first function that the shadow caster vertex shader macro calls assumes the position being passed to it is in object space! I mean, that makes sense, since it assumes it's working on the starting object space vertex position. But this means we can call the biasing functions the shadow caster macros use directly with the position we've raytraced and they'll just work!

Yeah, really still just a quad.
            Tags { "LightMode" = "ShadowCaster" }            ZWrite On ZTest LEqual            CGPROGRAM
#pragma vertex vert
#pragma fragment frag_shadow
#pragma multi_compile_shadowcaster // yes, I know the vertex function is missing fixed4 frag_shadow (v2f i,
out float outDepth : SV_Depth
) : SV_Target
{
// ray origin
float3 rayOrigin = i.rayOrigin;
// normalize ray vector
float3 rayDir = normalize(i.rayDir);
// ray sphere intersection
float rayHit = sphIntersect(rayOrigin, rayDir, float4(0,0,0,0.5));
// above function returns -1 if there's no intersection
clip(rayHit);
// calculate object space position
float3 objectSpacePos = rayDir * rayHit + rayOrigin;
// output modified depth
// yes, we pass in objectSpacePos as both arguments
// second one is for the object space normal, which in this case
// is the normalized position, but the function transforms it
// into world space and normalizes it so we don't have to
float4 clipPos = UnityClipSpaceShadowCasterPos(objectSpacePos, objectSpacePos);
clipPos = UnityApplyLinearShadowBias(clipPos);
outDepth = clipPos.z / clipPos.w;
return 0;
}
ENDCG

That's it. And this works for every shadow caster variant.* Directional light shadows, spot light shadows, point light shadows, and the camera depth texture! You know, should we ever want to support multiple lights…

* I didn't add support for GLES 2.0 point light shadows. That requires outputting the distance from the light as the shadow caster pass's color value instead of just a hard coded 0. It's not too hard to add, but it makes the shader a bit messier with a few #if blocks and some special case data we'd need to calculate. So I didn't include it.

* edit: I forgot one thing for handling depth on OpenGL platforms. Clip space z for OpenGL is a -w to +w range, so you need to do one extra step to convert that into the 0.0 to 1.0 range needed for the fragment shader output depth.

#if !defined(UNITY_REVERSED_Z) // basically only OpenGL
outDepth = outDepth * 0.5 + 0.5;
#endif
