New way of handling camera relative rendering

When Ogre 1.x users are far from origin, things begin to shake. Animations flicker. Eyes pop out of their sockets. Playstation(R) 1 artifacts all over the place.

This is because of precision problems with floating point when using large numbers.
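To make the problem concrete, here's a minimal C++ sketch (not Ogre code) showing how 32-bit floats simply swallow small offsets at large magnitudes:

```cpp
#include <cassert>

// At a magnitude of ~100000, the spacing between adjacent 32-bit floats is
// about 0.0078 units, so a sub-millimetre offset is rounded away entirely.
inline bool addIsLost( float base, float delta )
{
    volatile float sum = base + delta; // volatile blocks extended-precision tricks
    return sum == base;
}
```

Near the origin the same offset survives just fine, which is exactly why artifacts only appear far away.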

To help mitigate those artifacts, SceneManager had a setting called “setCameraRelativeRendering”.

The idea is that the camera's position is subtracted from both the World & View matrices before sending them to the GPU. This works very well… but it's implemented like a hack.
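In essence (a hedged sketch, not Ogre's actual implementation; the `Mat4`/`Vec3` types here are hypothetical stand-ins), only the translation part of the matrix needs touching:

```cpp
#include <cassert>

// Hypothetical minimal affine matrix: only the translation row matters here.
struct Mat4 { float m41, m42, m43; };
struct Vec3 { float x, y, z; };

// Subtract the camera position from the world matrix's translation so the GPU
// only ever sees coordinates close to zero (camera-relative rendering).
inline Mat4 makeCameraRelative( Mat4 world, const Vec3 &camPos )
{
    world.m41 -= camPos.x;
    world.m42 -= camPos.y;
    world.m43 -= camPos.z;
    return world;
}
```

The View matrix gets the same treatment, so the two large translations cancel out before any float rounding can hurt.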

Problem #1: Inconsistent shadow mapping

The first problem with 1.x's camera relative rendering is that the algorithm works per camera. So if the user's Camera is at (1000, 0, 1000) and the shadow camera is at (1000, 1000, 1000), the object's world & view matrices sent during the shadow pass differ from the ones sent during the normal pass (and since we need the texture matrix to know where to sample in the shadow map, the two passes must agree).

This mismatch could produce precision issues leading to shadow mapping artifacts. I haven't found enough evidence to support that the precision difference is big enough to actually cause a real problem.

So, let’s move on to the next problem.

Problem #2: Jittering animation / flickering

Even with Camera Relative Rendering enabled, animations begin to shake badly when far away from origin. This was the reason I suggested sending animation matrices in local space and working everything in view space (slide 135).

However, after much thinking, view space does bring its own share of problems, and there’s also a lot of legacy code to deal with that assumes coordinates are in world space. It wasn’t a great idea after all.

After much research, I found the real culprit of the problem: in OptimisedUtil::getImplementation()::concatenateAffineMatrices we concatenate the object's world matrix against all the bone matrices; only later is the camera's position stripped from the result before it is sent to the GPU. Ergo the bone matrices suffer the effects of being far away from origin.

Stripping the camera's position should happen before the concatenation with the bone matrices, and then we would have to avoid stripping it again later. This could be fixed in a bizarre, hacky way; which leads me to…
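The order-of-operations issue can be demonstrated with a sketch reduced to the translation component only (plain floats, identity rotations assumed; this is an illustration, not Ogre's OptimisedUtil code):

```cpp
#include <cassert>
#include <cmath>

// 1.x order: concatenate world and bone first (precision is lost at the large
// magnitude), then strip the camera's position from the result.
inline float concatenateThenStrip( float worldX, float boneX, float camX )
{
    volatile float worldBone = worldX + boneX; // ULP at 1e6 is 0.0625: boneX rounds
    return worldBone - camX;
}

// Proposed order: strip the camera first, then concatenate near the origin.
inline float stripThenConcatenate( float worldX, float boneX, float camX )
{
    volatile float relWorld = worldX - camX;   // exactly zero in this scenario
    return relWorld + boneX;
}
```

With the world and camera both at x = 1,000,000 and a bone offset of 0.123, the 1.x order returns 0.125 (the bone offset rounded to the nearest representable step at 1e6) while the strip-first order preserves 0.123 exactly.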

Problem #3: Hacks all over the place

Everywhere we see the following snippet over and over:

if( cameraRelative )
{
    mat.m41 -= cameraPos.x;
    mat.m42 -= cameraPos.y;
    mat.m43 -= cameraPos.z;
}

First, we keep getting bug reports that camera relative rendering is broken because someone forgot to add that snippet to their code.

Second, it adds unnecessary dependencies (on SceneManager to know whether we're using camera relative rendering, and on Camera to retrieve the actual position).

Third, I’ve seen some bizarre code like the following:

Matrix4 *dst; //Points to write-combining memory (usually coming from RenderSystem API)
*dst = object[i]->getWorldMatrix();
dst->m41 -= cameraPos.x; //Reading back from write-combining memory!!!
dst->m42 -= cameraPos.y;
dst->m43 -= cameraPos.z;

Often this code is more obscure than this, so it's hard to tell whether dst actually points to write-combining memory or whether it's safe to read back.

And the fact that we just copy-pasted the snippet everywhere makes everything harder & slower.
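For reference, a hedged sketch of the pattern that avoids the read-back (again with hypothetical `Mat4`/`Vec3` stand-ins): do all arithmetic on a stack-local copy in cached memory, then write to the mapped buffer exactly once.

```cpp
#include <cassert>
#include <cstring>

struct Mat4 { float m41, m42, m43; };
struct Vec3 { float x, y, z; };

// Build the camera-relative matrix in a local (cached) temporary, then copy it
// into the write-combining destination in one sequential write. 'dst' is never
// read back, so the write-combining buffer behaves as intended.
inline void writeRelativeMatrix( Mat4 *dst, const Mat4 &world, const Vec3 &camPos )
{
    Mat4 tmp = world; // read from ordinary cached memory, not from 'dst'
    tmp.m41 -= camPos.x;
    tmp.m42 -= camPos.y;
    tmp.m43 -= camPos.z;
    std::memcpy( dst, &tmp, sizeof( Mat4 ) );
}
```

The behaviour is identical; the difference is purely in where the intermediate reads happen.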

Shifting the origin instead!

Some members of the community will probably hate me for doing this change, while others will probably love it.

Engines like Havok deal with precision problems by shifting the whole world (See SlidingWorldDemo). Apparently Bullet can do it too.

The idea is to shift all objects' positions by a fixed amount. By how much is up to the user (typically the main camera's position), although physics engines tend to round the number since their broadphase is quantized.
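As an illustration of the rounding, a quantized engine might snap the desired origin down to a grid step before shifting (the 1024-unit step here is purely illustrative, not an Ogre, Havok, or Bullet constant):

```cpp
#include <cassert>
#include <cmath>

// Snap the desired new origin (e.g. the main camera's position) down to the
// nearest multiple of a grid step, so the shift amount stays broadphase-friendly.
inline float snapToGrid( float value, float step = 1024.0f )
{
    return std::floor( value / step ) * step;
}
```

Power-of-two steps have the extra benefit that the subtraction itself stays exact for values already on the grid.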

This solution solves problems #1, 2 & 3. In Ogre 2.x there are two options when calling SceneManager::setRelativeOrigin:

1. Temporary shift method

Ogre 2.x will set the position of the Root scene node to the new relative origin. Nobody uses the root scene node anyway, so it’s a perfect place. If all objects are relative to the Root node, changing the Root node will naturally move everything.

If you are one of the very rare users who happen to use the root node, you have two choices:

  1. Manually perform the shifting yourself.
  2. Create your “root node” as a child of the real root node.

This technique has no performance impact (since we already transform against the root node, and that code is both SIMD accelerated and threaded).
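The underlying math is just the parent-child transform we already compute every frame; a hedged sketch (identity rotations assumed, hypothetical `Vec3` type) of why setting the root to the negated origin shifts everything:

```cpp
#include <cassert>

struct Vec3
{
    float x, y, z;
    Vec3 operator+( const Vec3 &o ) const { return { x + o.x, y + o.y, z + o.z }; }
    Vec3 operator-() const { return { -x, -y, -z }; }
};

// Minimal stand-in for a root/child node pair: the derived (world) position is
// simply the parent's position plus the child's local position.
inline Vec3 derivedPosition( const Vec3 &rootPos, const Vec3 &localPos )
{
    return rootPos + localPos;
}
```

Setting the root's position to `-origin` therefore yields small derived positions for everything near the chosen origin, with no extra per-node work.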


Pros:

  1. No performance impact.
  2. Animations no longer jitter.
  3. Easy to debug, turn on & off.
  4. No need to hack around everywhere in the code.


Cons:

  1. The camera's & objects' local position values are still very big. If the camera or the objects move by small increments, their movement will jitter (though their rotation won't).
  2. Occupies the Root Node.
  3. Every Node's _getDerivedPosition will return the shifted value while getPosition won't.

2. Permanent shift method

It's similar to the temporary shift. However, the result usually can't be undone (because applying "+offset" and then applying "-offset" back may not return the original values, due to rounding).
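The non-invertibility is easy to demonstrate with plain floats: the intermediate sum is rounded at the large magnitude, so subtracting the offset back doesn't restore the original value.

```cpp
#include <cassert>

// Apply +offset then -offset. The intermediate value is rounded at |offset|'s
// magnitude, so the round trip is lossy unless the values cooperate.
inline float roundTrip( float value, float offset )
{
    volatile float shifted = value + offset;
    return shifted - offset;
}
```

Values that sit on the float grid at the shifted magnitude (e.g. halves combined with power-of-two offsets) do survive, which is one reason rounded shift amounts are attractive.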

Ogre will recursively walk the Root scene node and all of its children to apply the offset directly on them.

The algorithm has no option but to stop walking down into children when a SceneNode has attached objects: since attached objects don't have a position of their own, applying the offset to both the parent and its children would cause the children to end up further away than intended. The only exception is Cameras, which do have a position; we apply the offset to them and the algorithm can continue walking down.

For example, if the Root Scene Node has the Ogre Head mesh attached (so it's at the origin), the permanent shift won't propagate to any children of the Root Scene Node, causing the exact same effects as the Temporary Shift.
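One plausible reading of the walk described above can be sketched like this (heavily hedged: the `Node` struct and the exact traversal policy are my assumptions, not Ogre's actual code; one axis and identity rotations suffice to illustrate):

```cpp
#include <cassert>
#include <vector>

struct Node
{
    float x = 0.0f;                  // local position along one axis
    bool hasNonCameraObjects = false;
    std::vector<Node *> children;
};

// Push the offset down the hierarchy. A node absorbs the offset into its local
// position when it has attached (non-camera) objects or is a leaf; descendants
// of an absorbing node are left untouched, since they inherit the shift
// through their parent's derived transform.
inline void applyPermanentShift( Node *node, float offset )
{
    if( node->hasNonCameraObjects || node->children.empty() )
    {
        node->x += offset;
        return;
    }
    for( Node *child : node->children )
        applyPermanentShift( child, offset );
}
```

This matches the behaviour described above: a node with attachments stops the walk, and anything below it sees the shift only through its parent.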

This technique has no performance impact, though traversing the nodes can add a small overhead during the call to setRelativeOrigin.


Pros:

  1. Animations no longer jitter.
  2. When done correctly, everything is shifted and there shouldn't be jitter of any kind (not even when moving the camera or objects by small increments). It can also match the physics engine's shift amount.
  3. No need to hack around everywhere in the code.


Cons:

  1. Some scene hierarchy setups don't play well with it and can introduce subtle bugs when detaching nodes from one parent and attaching them to another (due to mixing the spaces in which they're represented). This can only happen if non-leaf scene nodes have objects attached (except cameras).
  2. When done incorrectly, many Nodes' getPosition will return the shifted value but some may not (see the previous point), while every Node's _getDerivedPosition will return the shifted value.

Permanent shifts are a bit easier to screw up; it's safer, friendlier & easier to use temporary shifts + double precision (or just single precision; there's already a big win from using temporary shifts alone), much like the old camera relative feature.

But when permanent shifts suit your use case, the precision win can be so large that you shouldn't need double precision at all.

A gotcha

The only example of this gotcha so far is Camera::lookAt (and SceneNode::lookAt): lookAt accepts a position in absolute space that the camera is supposed to look towards.

However, what do you think the following snippet should do?

camera->lookAt( Vector3::ZERO );

Your answer is most likely that the camera should look at the origin. So if we're using temporary shifts, it should look at the relative origin.
The solution is to apply SceneManager::getRelativeOrigin to the input value so that it is properly shifted. We conclude the input should be in relative-origin space.

But now I ask you another question: what do you think the following snippet should do?

camera->lookAt( someObject->_getDerivedPosition() );

The answer is obviously that the camera should look at the object. But _getDerivedPosition has already been shifted! If we shift it again, it's going to end up looking at the wrong place.
The solution is not to apply SceneManager::getRelativeOrigin inside lookAt. We conclude the input should be in absolute world space, which contradicts our previous statement.

There is no perfect solution. Either accept inputs in relative-origin space or in world space, but not both. I ended up deciding to accept world space inputs, based on the premise that in real world scenarios it is more likely the user will call lookAt( someObject->_getDerivedPosition() ) or some other automated function rather than supply manual inputs.

This means that, in order to work correctly, the user should use the following snippets:

camera->lookAt( Vector3::ZERO + sceneManager->getRelativeOrigin() );
camera->lookAt( someObject->_getDerivedPosition() );

There may be more cases where this mix-up of spaces shows up, but I have a hard time thinking of more. Normally this problem appears when combining manual input with calculated ones, since we humans don't like to think about "spaces".


4 thoughts on "New way of handling camera relative rendering"

  • Lunkhound

    So great to clean up all those hacks!
    With the permanent shift method, if the shifts are multiples of a nice round power-of-2, you can get quite far from the origin without any precision issues. Of course that is assuming that the camera and other objects don’t stray far from the shifted origin.

    • Matias Post author

Indeed. You can shift as many times as you want, so if the camera and other objects stray too far from the shifted origin, you can shift again (which is useful for most use cases).

      • Lunkhound

If you snap your shifted origin onto a uniform grid of 1024 meter steps, you can shift it out to 2^33 meters (8.5 billion meters) from the origin without loss. If that's not a big enough world, you can stack them to dramatically increase the range (for each level of stacking you increase the range by 2^23, the number of bits in the mantissa): with 2 shifted origin offsets you can shift out to 2^56 meters (~180 light-years), and with 3 it's 2^79 meters (1.5 billion light years)… all with single precision floats!

The biggest drawback to this method (permanent shifting) I can see is that it is somewhat intrusive. Anyplace in the code (AI, for instance) that stores positions from one frame to the next will need to be shifting-aware.

  • al2950

Another great post! Although I still have a few queries, I think it's a case of trying it and working it out myself.

Also it's worth mentioning that PhysX 3.3 introduces a shift-origin function, "PxScene::shiftOrigin(const PxVec3& shift)".