Ogre 2.1 FAQ        

This page contains a list of frequently asked questions (FAQ) around Ogre 2.1, pertaining to its general state, the supported rendering components as well as more specific building / compiling / coding questions. This list will be extended as new central questions arise.

For a general comparison of current Ogre versions, see the "What version to choose" page.


Table of contents

Is Ogre 2.1 stable?

Yes, it is stable. It rarely crashes, rarely leaks (no more, no less than other stable software).
However, since it is still in development, there is the occasional build-breaking change, which usually takes 10 to 20 minutes of your dev time to adapt your code to. There aren't huge/major changes anymore.

We've found that the most common problem when working with Ogre 2.1 is the lack of up-to-date Wiki examples and plugins (e.g. CEGUI, etc.) rather than the stability of the library itself. Some community users have ported existing plugins to 2.1, but we cannot comment on their exact quality (which does not mean they're bad!).

If you're not convinced that Ogre 2.1 is stable and can be used for high-quality work, here are three projects:

Is there a manual?

Yes! Visit the latest Manual in the Ogre Cave Online. An older version in ODT form is located under Docs/2.0/Ogre 2.0 Porting Manual DRAFT.odt.

You can alternatively generate the online manual if you've downloaded the repo and have doxygen installed. Type 'make OgreDoc' to generate it.

Is there sample code / Are there examples?

Yes! You only need to enable OGRE_BUILD_SAMPLES2 in CMake (note the 2 at the end, not OGRE_BUILD_SAMPLES, which is broken at the moment and will probably be removed).
If the samples aren't building, you may be missing the SDL2 dependency. Some samples also use RapidJSON. If you've cloned ogre-next-deps you should have both of them.
Note: You must clone the dependency repository. Downloading a zipped version will not work, as it won't download SDL2 which is linked as a subrepo module.
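For example (a rough sketch; the GitHub repository URL and the build directory layout are assumptions about your setup), cloning the dependencies with their submodules and enabling the samples could look like this:

git clone --recurse-submodules https://github.com/OGRECave/ogre-next-deps
cmake -D OGRE_BUILD_SAMPLES2=1 path/to/ogre/source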

The samples are under %OgreRoot%/Samples/2.0 (the label says "2.0", but they are for "2.1" and onwards).

I'm getting compiler errors with the samples. Something about the "SampleBrowser".

The SampleBrowser is part of the 1.x samples. These are not the 2.1 samples. See Is there sample code / Are there examples?

Is Ogre 2.1 too different from 1.x? What changes should I expect?

Most of these changes have been covered in the manual. However here's a quick summary:

  1. Old stuff has been put under the "v1" namespace. If you get compiler errors, you may just need to prepend "v1::", i.e. Entity *myEntity --> v1::Entity *myEntity;
  2. Items replace Entities; they are faster and easier to set up. However Items don't support everything yet (e.g. Entity has pose animations, useful for facial animation; Items do not... yet). Entity is still useful for porting. See the example after this list.
  3. There is a new material system: the Hlms (High Level Material System).
  4. Old materials are not recommended unless it's for rendering a few Entities at most (since they're slow and clunky to support), or unless it's for post-processing (the place where they are most useful).
  5. Textures remain largely unmodified for the time being.
  6. The HlmsTextureManager handles textures for our new Hlms (High Level Material System), but don't let it fool you: Behind the curtains it's just cleverly managing the TextureManager for faster rendering performance. You could bypass it if you need to.
  7. Rendering is now done through Compositors. It's not an optional component for post-processing any more, but rather an integral part that tells Ogre how you want to render the scene.
  8. The default ParticleFX still works.
  9. Math (Vector3, Matrix4, Quaternion) has largely stayed the same.
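As a rough illustration of points 1 and 2 (a minimal sketch; "MyMesh.mesh" is a placeholder and the half-precision flags passed to the importer are just example values):

//v1 path: the old Entity still works, but now lives in the v1 namespace.
Ogre::v1::Entity *entity = sceneManager->createEntity( "MyMesh.mesh" );

//v2 path: import the v1 mesh into a v2 mesh, then create an Item from it.
Ogre::v1::MeshPtr v1Mesh = Ogre::v1::MeshManager::getSingleton().load(
        "MyMesh.mesh", Ogre::ResourceGroupManager::AUTODETECT_RESOURCE_GROUP_NAME );
Ogre::MeshPtr v2Mesh = Ogre::MeshManager::getSingleton().createByImportingV1(
        "MyMesh.mesh Imported", Ogre::ResourceGroupManager::AUTODETECT_RESOURCE_GROUP_NAME,
        v1Mesh.get(), true, true, true ); //halfPos, halfTexCoords, qTangents
Ogre::Item *item = sceneManager->createItem( v2Mesh, Ogre::SCENE_DYNAMIC );
sceneManager->getRootSceneNode( Ogre::SCENE_DYNAMIC )->
        createChildSceneNode( Ogre::SCENE_DYNAMIC )->attachObject( item );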

How do I set up my own Ogre application?

Tutorial01 through Tutorial06 explain how to set up a robust render loop suited to games, as shown in Fix your Timestep, including how to handle multi-threading.
The code under "Samples/2.0/Common" is supposed to get you bootstrapped.
For example, you can see that Dergo uses GraphicsSystem.cpp.
There is a CMake script (you'll need all the files) that will link to Ogre's build from source, copy the necessary DLLs, and generate the plugins.cfg file. Eventually we'll bundle these scripts with our source once they mature enough.
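If you'd rather not start from the Common framework, here is roughly what a bare-bones manual setup looks like in 2.1 (a minimal sketch with no error handling; the config file names and colours are placeholders, and newer Ogre versions renamed some of these classes, e.g. RenderWindow became Ogre::Window):

#include <OgreRoot.h>
#include <OgreRenderWindow.h>
#include <OgreSceneManager.h>
#include <OgreCamera.h>
#include <OgreWindowEventUtilities.h>
#include <Compositor/OgreCompositorManager2.h>

int main()
{
    Ogre::Root *root = OGRE_NEW Ogre::Root( "plugins.cfg", "ogre.cfg", "Ogre.log" );
    if( !root->restoreConfig() && !root->showConfigDialog() )
        return -1;

    Ogre::RenderWindow *window = root->initialise( true, "My Ogre App" );

    //One SceneManager with 2 worker threads (see the threading question below).
    Ogre::SceneManager *sceneManager = root->createSceneManager(
            Ogre::ST_GENERIC, 2, Ogre::INSTANCING_CULLING_SINGLETHREAD );
    Ogre::Camera *camera = sceneManager->createCamera( "Main Camera" );

    //Register the Hlms implementations here (see the Hlms question further below),
    //then tell Ogre how to render via a basic Compositor workspace.
    Ogre::CompositorManager2 *compositorManager = root->getCompositorManager2();
    compositorManager->createBasicWorkspaceDef( "MyWorkspace",
                                                Ogre::ColourValue( 0.2f, 0.4f, 0.6f ) );
    compositorManager->addWorkspace( sceneManager, window, camera, "MyWorkspace", true );

    while( !window->isClosed() )
    {
        Ogre::WindowEventUtilities::messagePump();
        root->renderOneFrame();
    }

    OGRE_DELETE root;
    return 0;
}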

I'm confused about threading support in Ogre 2.1

There is the "old threading" code and the "new" threading code.

The old threading code can be enabled from CMake and requires a 3rd-party dependency to work (Boost, POCO, TBB, etc.). It's meant to support background loading, although in my opinion (this is Matias <dark_sylinc> writing) it does a very poor job and thus I do not recommend it. However it has been left in our code because some users did have moderate success with it, and since it was not getting in our way, it stayed.

The "new" threading code is always enabled and uses system synchronization primitives directly; you don't need to toggle anything in CMake. This code scales much better and is used to update the scene graph, AABB calculations, frustum culling, LOD selection, light culling, and v2 skeleton animations in parallel.
You tell Ogre how many worker threads to create via Root::createSceneManager(), which gives you a lot of control over how many threads Ogre occupies (note: you must create at least one worker thread). The worker threads are created per SceneManager, meaning that if you create 2 SceneManagers with 4 threads each, Ogre will create 8 worker threads. Note that at the moment it is very likely that while the first 4 threads work, the other 4 will be sleeping, because we still update SceneManagers serially.
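For example (a minimal sketch; in 2.1 the call also takes an InstancingThreadedCullingMethod parameter, which later versions removed):

//4 worker threads for this SceneManager; a second SceneManager would get its own set.
Ogre::SceneManager *sceneManager = root->createSceneManager(
        Ogre::ST_GENERIC, 4, Ogre::INSTANCING_CULLING_THREADED, "MySceneManager" );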

Do I need Boost?

No.
Unless you want the "old" threading support, you don't need it at all. See question above.

I see several "2.1" branches. Which one should I choose?

There used to be two stable branches, 2.1 and 2.1-pso, which have since been merged. The only stable 2.1 branch now is the one labelled "v2.1".
Any other branch labelled with prefix 2.1 (e.g. v2-1-hybrid-rendering) is unstable and should not be used.

What happened to the SceneManagers (e.g. OctreeSceneManager, BSP, etc)?

All of the previous SceneManagers got deprecated starting with 2.0.
The DefaultSceneManager (which only provides frustum culling) is much, much faster than any of the previous SceneManagers and can handle much larger object counts and distances, so there shouldn't be much of a problem.

Long term, there is a plan to provide a simple SceneManager that would subdivide the scene into a grid for very, very large scenes (e.g. > 8x8 km), but it's not currently a priority. The reason is that if you've got such a big scene, you'll probably still want to do some management yourself (to page in and out the details that aren't needed in your simulation).

But what about CHC, Portal scene managers? Don't they improve performance a lot?
Problems have shifted in the last decade. What the people at DICE (Battlefield) proved is that, when it comes to scene management, optimized brute force beats smart and complex tree-based algorithms by several factors, because tree structures can't be optimized as easily as brute force can.
The problems shifted towards being cache friendly and multi-core friendly, rather than trying to be too smart with complex algorithms.

Such algorithms have a tendency to be useful and advantageous in very particular scenarios, which makes them hard to maintain and unsuitable for generic rendering engines such as Ogre.

Is there Android Support?

Starting with Ogre 2.3, there is Android support for Vulkan-capable devices. Android 7.0+ is supported; however, Android 8.0+ is strongly recommended due to buggy drivers bundled with older versions.

For non-Vulkan-capable phones, see What about GLES support?

What about GLES support?

The plan is to eventually fix the now-broken GLES2 renderer, supporting both GLES2 and GLES3. However, the idea for GLES2 is to support it for compatibility only: it's a very ill-designed API that isn't suited for high performance, but it is sadly present in millions of Android devices. So the focus will be on compatibility and stability rather than performance (it should still be faster than it was in Ogre 1.x anyway). There may be other limitations we can't predict yet.
Plans for GLES3 on Android have been abandoned, as drivers have proven to be very unreliable and buggy: they simply fail to run anything but the most simple shader code. For anything performance sensitive on Android, use the Vulkan RenderSystem.
The main incentive to support GLES3 is WebGL via Emscripten.

What about WebGL support?

Once GLES2 is ready, WebGL should be a piece of cake because it's 99% similar to GLES2.

What about D3D9 support?

There is no plan to support D3D9 going forward. It might be possible for someone to revive it by reusing whatever workarounds we write for GLES2 to run with Ogre 2.1. But most of the team isn't thrilled about D3D9 anymore; only Assaf still cares and maintains parts of it.
That said, as long as GLES2 isn't ready we will keep the D3D9 code, although no guarantees are given regarding its usability / stability or whether it can be compiled at all.

What about D3D11 level 9.x support?

It's not a priority. Again, it may get easier to support it using the same paths we'll write for GLES2. But D3D11 level 9.x is a special snowflake that is very hard to deal with: it imposes weird restrictions the hardware didn't have, and it doesn't map well to either GLES2 or D3D9. Things would've been much easier if level 9.2 == Shader Model 2.x and level 9.3 == Shader Model 3.0, but unfortunately they mixed things and got the worst of all worlds.
The hope really is that by the time we reach this point, level 9.x hardware support will have become irrelevant.

What about iOS support?

For older iOS devices, support is tied to the GLES RenderSystem.
For newer iOS devices, use the Metal RenderSystem.

What about OS X support?

macOS / OS X users can now use Metal thanks to the efforts of users berserkerviking and johughes. Support should be considered beta; if you encounter any issues, please report them to us in the +2.0 forum.

For older Macs that do not support Metal, Ogre now supports GL3+ in compatibility mode thanks to the efforts and contributions of users Hotshot5000 and DimA. Note that performance may be reduced compared to the Metal version (or the Windows & Linux versions), and the feature set may be reduced because Apple hasn't updated its OpenGL drivers since version 4.1. For example, Compute Shaders aren't supported, thus any advanced sample (e.g. Screen Space Reflections) that depends on them won't work. However these features often require a powerful modern GPU, which means that if you need them to work well, the system probably already supports Metal.

What about Vulkan/D3D12 support?


Vulkan support was added in Ogre 2.3.

Vulkan has greater priority than D3D12 because there is little D3D12 can do that D3D11 can't, because Vulkan covers the gap between D3D11 & 12 (except on Xbox), and because Vulkan is the only way to target high performance graphics on Android (a void no version of GLES is filling). Still, it's not a huge deal, because Vulkan-capable Android devices are very rare.

I've created two (or more) RenderWindows and I'm having severe graphical glitches or I get many GL_INVALID_OPERATION errors

If you're using OpenGL, you need to reuse the OpenGL context for all of the RenderWindows you create. Otherwise bad things happen. See this post: http://www.ogre3d.org/forums/viewtopic.php?f=2&t=84711&p=522308#p522313

How do I enable Double precision? I'm getting compiler errors.

We're not yet officially maintaining double precision, but it works (mostly?). Before you continue, it's very likely the problem you want to solve doesn't need double precision at all, and you just need to learn how to use Camera-relative rendering / Relative Origin via SceneManager::setRelativeOrigin. A matter of precision by Tom Forsyth and Don't store that in a float by Bruce Dawson are very good reads as well.

But if you insist, or you really do need double precision floats, here's how:

  1. Enable OGRE_CONFIG_DOUBLE
  2. Disable OGRE_SIMD_NEON
  3. Disable OGRE_SIMD_SSE2
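For example, assuming an out-of-source build (the source path is a placeholder), the CMake invocation could set the three options listed above like this:

cmake -D OGRE_CONFIG_DOUBLE=1 -D OGRE_SIMD_NEON=0 -D OGRE_SIMD_SSE2=0 path/to/ogre/source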

NEON support in Android is optional. But Ogre is either compiled with or without NEON. How can I switch dynamically at runtime?

Ogre cannot switch between these implementations at runtime. This is on purpose. Supporting runtime switching would require either conditionals everywhere, or function pointers (or something similar, like virtual functions). The overhead from this would completely negate the benefits of using SIMD in the first place. Runtime switching is only useful when the amount of SIMD work is very large, and the number of times function pointers would be called is low (for example video codecs).

On desktop, the recommended approach to tackle this problem is to ship two builds, plus a third, small launcher build that detects SSE2 support and then launches the correct exe. You can do the same on Android.

On Android you don't build a process; you build a library and a Java app. The Java app loads the library and then executes an entry point defined in it. This process is not automatic: your Java code must first load the library containing your NDK code. Somewhere in your Java code there must be a snippet similar to this one:

System.loadLibrary("hello-jni");

Before you load your main NDK library, you would select, in Java, which build to load:

if( supportsNeon )
    System.loadLibrary("hello-jni-neon");
else
    System.loadLibrary("hello-jni");

Of course this adds some hassle to your build system, since you now need to build your code twice (including Ogre), and your binary size (excluding assets) will double; but this is basically the same hassle desktop applications face. To speed up iteration times, only build one of them for your device, and build both versions when you need to deploy.

I get errors while compiling RTSS / Run Time Shader System.

The RTSS is deprecated in 2.1 and will probably be removed unless Assaf picks up the maintenance. The Hlms (High Level Material System) replaces the RTSS and is part of OgreMain. It's much faster, more stable, and easier to use.

Is it essential to have at least one HLMS C++ implementation in your project (e.g. OgreHlmsPbs, OgreHlmsPbsMobile, OgreHlmsUnlit, OgreHlmsUnlitMobile) if you're going to render something that is visible on the screen (like a Cube)?

To get PBS materials working you need to:

  • Link or include in your project the C++ source code of OgreHlmsPbs.
  • Have the template files under the folders "Samples/Media/Hlms/Common" and "Samples/Media/Hlms/Pbs" bundled with your project for PBS. When you instantiate the HlmsPbs class you have to explicitly tell it the location of these files (see the samples). Important: Don't put the Common and Pbs template files in the same folder.


To get Unlit materials working you need to:

  • Link or include in your project the C++ source code of OgreHlmsUnlit to get Unlit materials.
  • Have the template files under the folders "Samples/Media/Hlms/Common" and "Samples/Media/Hlms/Unlit" bundled with your project for Unlit. When you instantiate the HlmsUnlit class you have to explicitly tell it the location of these files (see the samples). Important: Don't put the Common and Unlit template files in the same folder.


You could write your own Hlms implementations, but we provide our own, which work out of the box.
At the time of writing, the PbsMobile and UnlitMobile projects are not 100% working and were intended for GLES2 only.
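As a rough sketch of what registering the provided implementations looks like (the folder layout and the GLSL subfolder are assumptions for an OpenGL build; the samples' GraphicsSystem::registerHlms() shows the full version that also picks the HLSL/Metal folders based on the active RenderSystem, and the classes live in the HlmsPbs/HlmsUnlit components, i.e. OgreHlmsPbs.h and OgreHlmsUnlit.h):

//"rootHlmsFolder" is a placeholder for wherever you bundled Samples/Media/Hlms.
Ogre::ArchiveManager &archiveManager = Ogre::ArchiveManager::getSingleton();

Ogre::Archive *archiveCommon = archiveManager.load( rootHlmsFolder + "Hlms/Common/GLSL",
                                                    "FileSystem", true );
Ogre::ArchiveVec library;
library.push_back( archiveCommon );

Ogre::Archive *archivePbs = archiveManager.load( rootHlmsFolder + "Hlms/Pbs/GLSL",
                                                 "FileSystem", true );
Ogre::HlmsPbs *hlmsPbs = OGRE_NEW Ogre::HlmsPbs( archivePbs, &library );
Ogre::Root::getSingleton().getHlmsManager()->registerHlms( hlmsPbs );

Ogre::Archive *archiveUnlit = archiveManager.load( rootHlmsFolder + "Hlms/Unlit/GLSL",
                                                   "FileSystem", true );
Ogre::HlmsUnlit *hlmsUnlit = OGRE_NEW Ogre::HlmsUnlit( archiveUnlit, &library );
Ogre::Root::getSingleton().getHlmsManager()->registerHlms( hlmsUnlit );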

I've added a Point/Spot light but it won't show up.

  1. First, make sure it's using a PBS material. Unlit materials are obviously not lit.
  2. Second, when a point or spot light isn't casting shadows during a frame, PBS won't use it by default, as it assumes the light will be handled by a more advanced technique. You need to enable Forward3D for these lights to work. See the Forward3D sample and the sketch below.
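A minimal sketch (the numeric values are the ones commonly used in the Forward3D sample and should be tuned for your scene):

//Arguments: enable, width, height, numSlices, lightsPerCell, minDistance, maxDistance
sceneManager->setForward3D( true, 4, 4, 5, 96, 3.0f, 200.0f );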

I'm creating custom geometry but it shows black/white with PBS.

  • Make sure the material is valid.
  • Make sure your mesh has normals and that they are correct. Without normals, using PBS makes no sense as there can't be lighting.

How do I generate a Mesh programmatically?

  • If it's a v1 object, same as before.
  • If it's a v2 object see the DynamicGeometry and CustomRenderable samples.
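For the v2 path, the gist of the DynamicGeometry sample boils down to something like this (a condensed sketch; the vertex layout, counts and bounds are placeholders, and the error handling from the sample is omitted):

//Vertex format: position + normal (PBS needs normals, see the previous question).
Ogre::VertexElement2Vec vertexElements;
vertexElements.push_back( Ogre::VertexElement2( Ogre::VET_FLOAT3, Ogre::VES_POSITION ) );
vertexElements.push_back( Ogre::VertexElement2( Ogre::VET_FLOAT3, Ogre::VES_NORMAL ) );

//The VaoManager wants SIMD-aligned memory; with keepAsShadow = true the buffer
//takes ownership of the pointer and frees it for us.
const size_t numVertices = 3u;
float *vertexData = reinterpret_cast<float*>( OGRE_MALLOC_SIMD(
        sizeof(float) * 6u * numVertices, Ogre::MEMCATEGORY_GEOMETRY ) );
//... fill vertexData with interleaved position/normal floats here ...

Ogre::VaoManager *vaoManager = root->getRenderSystem()->getVaoManager();
Ogre::VertexBufferPacked *vertexBuffer = vaoManager->createVertexBuffer(
        vertexElements, numVertices, Ogre::BT_IMMUTABLE, vertexData, true );

Ogre::VertexBufferPackedVec vertexBuffers;
vertexBuffers.push_back( vertexBuffer );
Ogre::VertexArrayObject *vao = vaoManager->createVertexArrayObject(
        vertexBuffers, 0, Ogre::OT_TRIANGLE_LIST ); //No index buffer in this sketch.

Ogre::MeshPtr mesh = Ogre::MeshManager::getSingleton().createManual(
        "MyProceduralMesh", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME );
Ogre::SubMesh *subMesh = mesh->createSubMesh();
subMesh->mVao[Ogre::VpNormal].push_back( vao );
subMesh->mVao[Ogre::VpShadow].push_back( vao ); //Reuse the same geometry for shadows.

mesh->_setBounds( Ogre::Aabb( Ogre::Vector3::ZERO, Ogre::Vector3::UNIT_SCALE ), false );
mesh->_setBoundingSphereRadius( 1.732f );

Ogre::Item *item = sceneManager->createItem( mesh, Ogre::SCENE_DYNAMIC );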

Starting my app takes forever! (particularly Direct3D11)

Shader compilation takes a long time. Particularly in D3D11 where compiling can easily take 5 seconds per shader.
The solution is to enable the Shader Microcode cache and save it to disk.
When loading the microcode cache, one of the best places to do it is right after you've registered the Hlms implementations.
Perform:

GpuProgramManager::getSingleton().setSaveMicrocodesToCache( true ); //Make sure it's enabled.
DataStreamPtr shaderCacheFile = root->openFileStream( "D:/MyCache.cache" );
GpuProgramManager::getSingleton().loadMicrocodeCache( shaderCacheFile );


When saving (at exit, before the RenderSystems are shut down):

if( GpuProgramManager::getSingleton().isCacheDirty() )
{
    DataStreamPtr shaderCacheFile = root->createFileStream( "D:/MyCache.cache", ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME, true );
    GpuProgramManager::getSingleton().saveMicrocodeCache( shaderCacheFile );
}


Note that these calls can throw if there are IO errors (a folder in the path doesn't exist, the file doesn't exist, you don't have read or write access, etc.), so make sure to wrap the calls in a try/catch block.
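For example, wrapping the loading code above (a minimal sketch; the cache path is a placeholder):

try
{
    GpuProgramManager::getSingleton().setSaveMicrocodesToCache( true );
    DataStreamPtr shaderCacheFile = root->openFileStream( "D:/MyCache.cache" );
    GpuProgramManager::getSingleton().loadMicrocodeCache( shaderCacheFile );
}
catch( Ogre::Exception& )
{
    //Cache is missing or unreadable (e.g. first run); it will be rebuilt and saved on exit.
}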

Direct3D11's shader cache can be used anywhere, including on other systems.
OpenGL's shader cache can only be used on the computer it was generated on, and may be invalidated if the user updates their drivers or changes GPU.

Because of this, you should plan on creating a test/benchmark scene that shows most, if not all, of the material combinations your app may encounter, so that the cache can be built and saved.
On GL3+ you can run this benchmark after installation (or during the first run); remember that if the user upgrades their GPU you may have to run it again.
On D3D11 you can run this benchmark on your own PC to build the cache before deploying.

Using the cache significantly decreases the loading time, avoids random hitches during gameplay, and maximizes the user experience. It is highly recommended you enable it and plan for it (i.e. the test scene).

Can I leave "Fast Shader Build Hack" force-enabled for all my D3D11 users?


It is not recommended. Your users should be able to toggle it off, even if only by modifying an obscure ini file.

It is not guaranteed that the hack will work on future drivers/GPUs, so your users should be able to turn it off for maximum compatibility.
That said, several high-profile games rely on this hack as well, so if a vendor breaks the hack, it will also break many games.
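One way to expose the toggle (a sketch which assumes the D3D11 RenderSystem lists the option under exactly this name; check root->getRenderSystem()->getConfigOptions() on your build to confirm, and set it before the device is created):

//userAllowsHack would come from your own ini/settings file.
Ogre::RenderSystem *renderSystem = root->getRenderSystem();
renderSystem->setConfigOption( "Fast Shader Build Hack", userAllowsHack ? "Yes" : "No" );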

While debugging HLSL shaders in RenderDoc, many indexed variables return 0


Additionally, you may find most structures are only enumerated in range [0; 2).

Disable "Fast Shader Build Hack". See RenderDoc HLSL debugging with "Fast Shader Build Hack"

I'm using old materials. What happened to setDepthCheckEnabled, setDepthWriteEnabled, setCullMode, setSceneBlending, etc?

We now use macroblocks and blendblocks to modify these settings. Blocks allow for major performance optimizations, and they also fit modern APIs much better.

To change these settings you should retrieve the macroblock, modify the setting, and set the macroblock again:

HlmsMacroblock macroblock = *pass->getMacroblock(); //Get a hardcopy we can modify
macroblock.mDepthCheck = depthCheck;
macroblock.mDepthWrite = depthWrite;
macroblock.mCullMode = cullMode;
pass->setMacroblock( macroblock );

HlmsBlendblock blendblock = *pass->getBlendblock(); //Get a hardcopy we can modify
blendblock.setBlendType( SBT_TRANSPARENT_ALPHA );
pass->setBlendblock( blendblock );


Keeping functions such as Pass::setDepthCheckEnabled & co. would have been bad because each of them would have to modify the macroblock, which means retrieving an existing macroblock or generating a new one that matches the requested settings.
If you called:

pass->setDepthCheckEnabled( ... );
pass->setDepthWriteEnabled( ... );
pass->setCullMode( ... );
//etc...


in succession, you would end up with O(N^2) behavior.


I'm writing my own Hlms implementation, or just want to know more about it. Where do I find learning material/resources?

You can find it in the manual.
Aside from that, you will also find useful information in these forum links:

How do I reload an Hlms material?

Assuming it was loaded from a JSON file, see http://ogre3d.org/forums/viewtopic.php?f=2&p=534724#p534724

void reloadHlmsResource( Ogre::Root *root, const std::string &resourceName )
{
    const std::array<Ogre::HlmsTypes, 7> searchHlms = {
        Ogre::HLMS_PBS,   Ogre::HLMS_TOON,  Ogre::HLMS_UNLIT, Ogre::HLMS_USER0,
        Ogre::HLMS_USER1, Ogre::HLMS_USER2, Ogre::HLMS_USER3 };

    //Find which Hlms implementation owns a datablock with that name.
    Ogre::Hlms *hlms = nullptr;
    Ogre::HlmsDatablock *datablockToReload = nullptr;
    for( auto itor = searchHlms.begin();
         itor != searchHlms.end() && datablockToReload == nullptr; ++itor )
    {
        hlms = root->getHlmsManager()->getHlms( *itor );
        if( hlms )
            datablockToReload = hlms->getDatablock( resourceName );
    }

    if( datablockToReload == nullptr || datablockToReload == hlms->getDefaultDatablock() )
        return;

    Ogre::String const *filenameTmp, *resourceGroupTmp;
    datablockToReload->getFilenameAndResourceGroup( &filenameTmp, &resourceGroupTmp );
    if( filenameTmp && resourceGroupTmp && !filenameTmp->empty() && !resourceGroupTmp->empty() )
    {
        const Ogre::String filename( *filenameTmp ), resourceGroup( *resourceGroupTmp );

        //Detach the datablock from every Renderable before destroying it.
        Ogre::vector<Ogre::Renderable*>::type lrlist = datablockToReload->getLinkedRenderables();
        for( auto it = lrlist.begin(); it != lrlist.end(); ++it )
            (*it)->_setNullDatablock();

        hlms->destroyDatablock( resourceName );

        //Reparse the JSON file and reassign the freshly created datablock.
        hlms->getHlmsManager()->loadMaterials( filename, resourceGroup );

        Ogre::HlmsDatablock *datablockNew = hlms->getDatablock( resourceName );
        for( auto it = lrlist.begin(); it != lrlist.end(); ++it )
            (*it)->setDatablock( datablockNew );
    }
}

Changes will take immediate effect for all Items using that datablock. The old HlmsDatablock pointer is destroyed though, so make sure your code doesn't keep references to it.
If you were asking how to reload shader code instead, check out the code in TutorialGameState::keyReleased (Ctrl+F1 & Ctrl+F2 hotkeys) in ogresrc/Samples/2.0/Common/src/TutorialGameState.cpp:

Ogre::Root *root; //Assuming it's a valid ptr
Ogre::HlmsManager *hlmsManager = root->getHlmsManager();

Ogre::Hlms *hlms = hlmsManager->getHlms( Ogre::HLMS_PBS );
Ogre::GpuProgramManager::getSingleton().clearMicrocodeCache();
hlms->reloadFrom( hlms->getDataFolder() );



How can I debug a memory corruption error in Ogre?

Aside from 3rd-party tools such as Valgrind, we offer an incredibly useful and fast system for tracking memory corruption.

If you suspect Ogre has a memory corruption (or you're causing the corruption but it affects Ogre), I suggest you run a DEBUG build to have all sorts of assert checks on; you could also give OGRE_CONFIG_ALLOCATOR = 5 a try.
I wrote that allocator to catch memory corruption. Beware it wastes A LOT of RAM, so you may have to use a 64-bit build (otherwise it may crash once you exceed the 2GB limit common to 32-bit apps). If your process uses a lot of RAM, you may have to tweak the OGRE_TRACK_POOL_SIZE macro (which is set to 1GB by default).

How to use OGRE_CONFIG_ALLOCATOR = 5: at some point it will indicate you have a memory corruption and tell you which block was affected (e.g. let's say byte address 100568 is corrupted). Because it's deterministic between runs (assuming your process is also deterministic), run your process again and place a data breakpoint at MemoryPool + 100568. That way you'll be able to track the writes to that block of memory and catch whenever your process writes to it (which it shouldn't do).
You can also modify the code at TrackAlignedAllocPolicy::allocateBytes so that you can place a breakpoint whenever it allocates the region of memory you want to start watching.

This allocator is very simple: it mallocs 1GB of memory, stores the pointer in TrackAllocPolicy::MemoryPool, and initializes that memory to a pattern. Every time you request memory we return MemoryPool + TrackAllocPolicy::CurrentOffset and increase CurrentOffset. When you deallocate, we check that the pattern in the surrounding areas is intact, and then reset the whole memory region to the pattern.

Once a memory region is deallocated, it will never be used again. So you have a limited amount of memory you can request from the pool; once the pool runs out, it's game over, and you'll need to recompile with a bigger OGRE_TRACK_POOL_SIZE value if you want to keep going. Just to be clear: if you've allocated 896MB so far and now you free 512MB, you'll still have only 128MB left, because deallocated memory is never reused. This is not meant for deployment; it's just a silly but extremely useful trick to catch corruption issues.

If you are able to repro the bug in a deterministic manner (i.e. no user intervention, allocation patterns don't depend on non-deterministic sources like RNGs seeded non-deterministically), you'll have your corruption bug caught in no time.

Setting a breakpoint in XCode inside Ogre source files doesn't seem to work. The code is executed but breakpoints never hit

It seems XCode breakpoints have trouble dealing with "UNITY" builds. This happens if you've enabled OGRE_UNITY_BUILD in CMake before compiling Ogre.
Disable OGRE_UNITY_BUILD and recompile Ogre. Breakpoints should begin to work again.

SSAO sample applies AO to the whole image. Shouldn't it only be applied to the ambient term?

Yes.

However we have a chicken-and-egg problem: to calculate SSAO we need the scene rendered first, to retrieve its depth information; but to properly apply the effect we need the SSAO result before rendering the scene.

This is not a problem at all in Deferred Shading engines, but it is a weakness of Forward renderers like Ogre.

There are two possible solutions:

  1. Use a Z-prepass. Ogre already supports a Z Prepass (e.g. see the Screen Space Reflections sample). This means the scene must be rendered twice, which may result in worse or better performance depending on many variables. After the Z Prepass, SSAO should be calculated, and then in the 2nd pass AO is applied only to the ambient term. Currently Ogre 2.1+ has no such integration and would need to be enhanced to support retrieving the SSAO information so that it affects only the ambient term.
  2. Fake it in a single pass. Save the % of contribution ambient has over the overall final colour; and store it somewhere (e.g. in the alpha channel or in another texture via MRT) and when applying SSAO, weight the SSAO strength based on this % to make it weaker. Hence areas with no direct lighting will have a stronger SSAO than well-lit areas. This can be achieved by extending HlmsPbs with custom pieces so that the ambient contribution is stored somewhere.