Table of contents
- UI Inspiration
- 3D Monitors / Glasses /...
- Multiple Viewport, RTT etc
- Instancing etc
- General Shaders
- Fancy Shader Effects
- Misc Materials
- Colouring Static Geometry
- Vector Graphics Rendering
- Pointers, Disposing Of Objects Etc
- Mesh Import Export
- Pre-Made Meshes
- Ogre Cleanup / Memory Management
- 3D Mice
- General C#
- Halloween Edition
- Editing ManualObject
- Memory Management
- Character Creation
- Using Webcam Data
- Video Capture
- Video Editing (for the poor)
- Charts / Graphs
- DirectX SDK
- Error Presenting Surfaces Exception
- Soft Body e.g. Clothing, Hair, Grass
- Checking Hardware Capabilities of Graphics Cards
- Slicing / Clipping
- Compositing and GUIs / HUDs
- Screen Burn-In
- Get The Mogre Version
- Mogre In WPF
- Procedural Geometry
- Render Cycle
- http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=9382&sid=10ab9cc5d318ad175d94b8d993608073 is about debugging something that is running for a very long time before it crashes / dealing with an obscure and hard to reproduce error.
- http://www.ogre3d.org/addonforums/viewtopic.php?p=54540#p54540 is about the (2009) possibility of merging PyOgre with Mogre via IronPython.
- http://www.ogre3d.org/addonforums/viewtopic.php?p=39802&sid=ce193664e1d3d7c4af509e6f4e2718c6 - is about someone (2008) playing around with IronPython. Interesting.
- http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=13391&sid=054584cc24c8e30b739ae9779b2a0dbe - About Gorilla and Canvas high speed 2D rendering of text and the like. Possibly useful as a back end to Miyagi. Frame rates of ~5000fps are quoted for simple layouts. Miyagi doesn't even get close.
- http://roecode.wordpress.com/2008/03/17/xna-framework-gameengine-development-part-19-hardware-instancing-pc-only/ - On instancing. Hardware instancing has been available since Shader Model 3 (DirectX 9.0c). According to http://en.wikipedia.org/wiki/Intel_GMA#Specifications - Shader model 3.0 has been in embedded graphics for a few years, although initially in software. This means that it would be safe to develop for it. http://www.gamedev.net/community/forums/topic.asp?topic_id=366247 - has some more on hardware instancing that suggests that with old / rubbish hardware, hardware instancing is very slow compared to shader instancing.
- http://en.wikipedia.org/wiki/Openexr - a fairly new image file format where you can do things like use half precision floating point and have an arbitrary number of numbers per pixel. It may (but I haven't found evidence for it) be supported directly by graphics cards / Cg. It has various compression schemes.
- http://www.songho.ca/opengl/index.html - some good notes on OpenGL with good diagrams.
- The AutoCAD Users Guide http://exchange.autodesk.com/autocad/enu/pdf-documentation has a section on "Prepare a Model for Rendering" which goes through some of the pitfalls that can make a rendered image not look so good. Given it is probably ray-tracing, there are probably more problems than are listed for real-time Direct3D / OpenGL type graphics, but it's a good place to start.
- http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=14013&sid=5a2ebeb356ea20ee447d99c8bbce6ef7 has some hints on how to avoid crashes. It also discusses some interesting points of OpenGL Vs D3D.
- The Vector3 class reference http://www.ogre3d.org/docs/api/html/classOgre_1_1Vector3.html has a lot of useful functions for comparing Vectors and seeing if they are within tolerances. It is probably better to use these functions rather than reconstructing them in C#. Note that Vector3.Distance(other) is just the Length of the difference between the two vectors.
- "Setting field z on value type Vector3 may result in updating a copy. Use Vector3.z.SetValue(instance, value) if this is safe."
UI Inspiration
- http://www.componentart.com/products/dv/ - has some quite cool rich client style web GUIs.
3D Monitors / Glasses /...
- Anaglyph is the red-green glasses style of 3D image. I assume it would be easy to use - 2 cameras and a publicly known algorithm for merging them.
- The problem with 3D shutter glasses is that monitors are typically only 60Hz -> a maximum frame rate of 30fps per eye.
- http://www.iz3d.com/ - 3D graphics driver stuff, but I think you could just do it manually with Ogre??
- 23.6" full HD 120Hz monitor is currently (2010/10/05) £255 -> isn't that expensive.
- NVidia seems to be more into the shutter glasses than ATI. NVidia kit with shutter glasses and transmitter costs ~£120 (2010/10/05) with an extra set of glasses costing ~£60. They work with 60Hz and 120Hz monitors.
- I assume that if you use shutter glasses, then you really need to be able to do Vsync - although I suspect it would still work with lower / uneven frame rates as it just sends a signal when the frame has changed. -> I assume your software has to signal the NVidia kit when you change viewport.
- http://www.ogre3d.org/forums/viewtopic.php?f=1&t=53978 - stuff about trying to use shutter glasses with a projector. States that he has found a lot of info on the use of 3D with ogre, but has encountered some problems.
- http://developer.nvidia.com/object/nvision08-stereo.html - suggests that using shutters is a bit more than just sending the images sequentially, they can be sent in one go (P10) - but later, it seems like it goes back to being simple with the driver dealing with the rest (P19). Says you should give the user variable stereo separation as they tend to increase it as they get more comfortable with it.
- "We are looking for 3D Stereo showcase titles and promotions" Nvidia 2008.
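The anaglyph approach mentioned above really is simple arithmetic: render the scene from two cameras and combine the images per pixel. A minimal sketch of the classic red-cyan merge is below - in Ogre/Mogre you would do this per-fragment in a compositor, but the channel selection is the same. The helper name and RGB byte layout are assumptions for illustration.

```csharp
using System;

// Classic red-cyan anaglyph merge: red channel from the left eye's image,
// green and blue from the right eye's. Buffers are assumed to be packed
// (R, G, B) byte triples of equal length.
static byte[] MergeAnaglyph(byte[] leftRgb, byte[] rightRgb)
{
    if (leftRgb.Length != rightRgb.Length || leftRgb.Length % 3 != 0)
        throw new ArgumentException("expected matching RGB buffers");

    var result = new byte[leftRgb.Length];
    for (int i = 0; i < result.Length; i += 3)
    {
        result[i]     = leftRgb[i];      // R from left eye
        result[i + 1] = rightRgb[i + 1]; // G from right eye
        result[i + 2] = rightRgb[i + 2]; // B from right eye
    }
    return result;
}

var left   = new byte[] { 255, 10, 20 };
var right  = new byte[] { 0, 128, 64 };
var merged = MergeAnaglyph(left, right);
Console.WriteLine(string.Join(",", merged)); // 255,128,64
```

With red-green rather than red-cyan glasses you would zero the blue channel as well; either way it is one line per channel.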
Multiple Viewport, RTT etc
- http://www.ogre3d.org/forums/viewtopic.php?f=3&t=48376 - talks about joining multiple textures together, which haven't necessarily been rendered by the same application -> a possible way of doing multi-threading??
- Multiple viewports and windows are very easy - you just add a new RenderWindow and a number of cameras, then add each camera's viewport to the render window, stating its position and size (remember coordinates are 0->1).
- Multiple viewports do seem to reduce the framerate approximately linearly with the number of viewports - at least for complex scenes. For simpler scenes it performs much better.
- Miyagi comes with examples of multiple cameras (haven't looked at them, but ControlRenderBox.cs)
- http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=5300 - on using Mogre with 2 renderers. It also
- While the logs often suggest you use Vsync, it sometimes artificially limits the frame rate - e.g. with 2 monitors on the same graphics card and Vsync on, one screen reaches 60fps, moving to the other also gives 60fps, but fullscreen drops to 30fps. Turn off Vsync and you get ~180fps.
Instancing etc
- If you have a lot of identical objects spread around the scene, you would typically have the Static / Instanced Geometry object always being rendered. It would be quicker to cull whole instances rather than individual vertices, especially if you have a lot of vertices per object. Using hardware instancing would also (I assume) allow you to select a different resolution mesh according to proximity to the camera (check if this is possible and with which versions of DirectX / OpenGL), further improving the frame rate. If you did this, then you would probably also need different settings for different hardware (i.e. you probably need several shaders).
- http://http.developer.nvidia.com/GPUGems3/gpugems3_ch02.html (Animated Crowd Rendering 2.2) - Hardware Instancing - typically use a second vertex buffer to provide information on the position of instances - I don't know if that is
- http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter03.html - Stuff on geometry instancing (various types of). Interestingly, it suggests that re-creating the mega-mesh of several instances, all translated into global coordinates (i.e. like StaticGeometry) isn't totally unreasonable to be done each frame, because you are still saving on draw calls. It appears to be slightly dated overall. 3.3.4 Batching with the Geometry Instancing API.
- It looks like there is a separate DirectX function to call to enable hardware instancing-- but check...
- Can you set vertices to invisible, so that nothing will be rendered by the fragment shader, while using the standard Ogre material scripts? Really needed for static geometry where parts are occasionally invisible, which could be determined by a visibility mask or shader params that mask them.
- http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=9025&sid=83566fa74fab0086ae10a544ab020dfe&start=420 - shows how to programmatically add a shader with various settings. In this case, the shader fades the colour to white by passing a "brightness" value to the shader.
Fancy Shader Effects
- http://www.gamedev.net/reference/programming/features/GPUFur/ - real time dynamic fur on the GPU.
- Wireframe is applied in a pass and there can be multiple passes per material, so it would be very easy to create some combination material with wireframe and e.g. semi-transparent, or opaque. The same is true for when rendering the vertices.
- With semi-transparency, you want to make sure that depth_blend is on, or you will get weird effects where some things appear to be rendered behind things that are actually behind them. Using scene_blend modulate, you can't have a black background as it just modulates what is behind (it's recommended for things like smokey glass), whereas alpha_blend allows you to see the object, whatever is behind it.
Colouring Static Geometry
Here you want to colour each object in the static geometry individually (done 27/09/2010 - look at the TestMogre project):
- Give the mesh two sets of texture coordinates by running manualObject.TextureCoord() twice.
- Use the second texture coordinate to set the overall colour, so you use a fixed texture coordinate for each object, which corresponds to one pixel of an image. This pixel will be used to set the colour.
- You don't need any fancy shaders, just add a second texture_unit to your material script, the key part being that you set tex_coord_set 1 - i.e. use the second set of texture coordinates.
- I managed to get ~220fps with a Radeon 5850 and a Xeon X5482 processor running single threaded, showing 128x128 tetrahedrons with the colour being updated every frame but the texture remaining static. With 32x32 I get about 950fps.
- Optimisation (NB some stats are a bit dodgy as I may have used different sized windows - I tried to run maximised by the end) (also NB computers aren't that consistent!):
- Colours for each tetrahedron were always stored in 2D arrays (not lists).
- (Using float4 to store colour rather than Vector3, which had to be converted to float4; re-using byte objects; using constants in for loops) led to 1020fps with 32x32; with 128x128 you get ~290fps. I.e. this kind of optimisation does work!
- Without any colour changing 32x32 renders at ~1250fps, but is very dependent on how much of the screen is filled with tetrahedrons - zoom out and it goes to ~1500fps; 128x128 tetrahedrons runs at 720fps with a 32x32 texture and 680fps with a 128x128 texture - this demonstrates that updating the colour every frame doesn't lead to a great performance penalty (assuming the change in material to use two textures isn't a big hit) with a small number of entities, but it is significant with a large number.
- Re-making the texture each frame, but not changing the colour yields: 128x128 370fps. Re-doing the colour, but not re-making the texture: 128x128 680fps. I.e. the main hit is creating the texture and sending to the graphics card.
- By changing the texture from TU_DYNAMIC to TU_DYNAMIC_WRITE_ONLY_DISCARDABLE and re-using the old texture pointer, rather than creating a new one, you get: 32x32 1020fps Vs 920fps (with Vs without these changes) and 128x128 320fps Vs 290-295fps. i.e. they work! 32x32 not remaking the texture, but using TU_DYNAMIC you get 1050fps although camera position varies results a lot -> not scientific. With TU_DYNAMIC and 128x128 you get 320fps. In conclusion, don't remake the texture each frame. Use TU_DYNAMIC as it has at least as good performance, but doesn't risk weirdness if you don't update the texture every frame.
- Future optimisation ideas:
- Use float3 pixels, or don't alter the last float's value -> hard code the bytes for it.
- Use lower resolution colour - currently storing colour with 16Bytes per pixel (4 bytes per colour). For most uses you could get away with 1Byte colour (but not so swanky for some stuff). A much better option is probably half precision floating point, which uses 2Bytes per number -> 8Bytes for RGBA. An excessively complex scheme might encode the alpha in lower resolution than the RGB.
- Manipulate bytes rather than floats, to remove the conversion process.
- Only change pixels that have changed - not particularly hard given you access the byte array as an array / with indexes.
- Optimise buffer locks etc (see enums mentioned below / class ref).
- Don't remove the texture, update it.
- Compress the texture before sending it. Probably a big gamble for uncertain gain. Also, compression level would probably vary a lot with time e.g. if you use the same colour for all pixels, compression should be very good, but because colour will largely be uncorrelated with pixel position, compression might be very bad when you use different colours for the pixels.
- I think you must be able to state that the texture should always be in the graphics card -> speed up rendering for multiple viewports.
- see http://www.ogre3d.org/docs/api/html/classOgre_1_1HardwareBuffer.html#_details - has various flags that might improve performance. Some of these flags can be mixed together (i.e. use |). Similarly http://www.ogre3d.org/docs/api/html/classOgre_1_1HardwarePixelBuffer.html
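The two-texture-unit material described above might look something like the following sketch (the material and texture names are made up for illustration; the key attribute is tex_coord_set 1, which makes the second unit read the second set of UVs):

```
material ColouredStatic
{
    technique
    {
        pass
        {
            // base texture, sampled with texture coordinate set 0
            texture_unit
            {
                texture base_texture.png
                tex_coord_set 0
            }
            // 1-pixel-per-object colour lookup, sampled with the second UV set
            texture_unit
            {
                texture colour_lookup.png
                tex_coord_set 1
                colour_op modulate
            }
        }
    }
}
```

colour_op modulate multiplies the looked-up colour with the base texture, which matches the tinting behaviour described above.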
Vector Graphics Rendering
- http://www.ogre3d.org/forums/viewtopic.php?f=11&t=47237 - about the use of Cairo with Ogre - looks like it's not currently updated. Cairo is used for drawing vector graphics across platforms and can use an OpenGL backend i.e. hardware acceleration. Could be useful for things like very fast and attractive graph rendering. It's used by things like firefox, GNOME (IIRC) etc. If using it, you might want to be clever about what you re-draw each frame and what you only draw once / occasionally, using Render To Texture to store the result.
- OpenVG is created by Kronos (of OpenGL fame) and is designed for fast rendering of vector graphics, although primarily on handheld devices.
- WRT vector graphics rendering - it would seem possible to use a variety of libraries, render separately and then dump in Ogre... the small print of how to do this might not be so simple though.
- the risk of using an external library would be continuing to support its integration with Ogre. It might be more simple for a number of things to roll-your-own... or it might not be!
- Cairo can include things like PDF, SVG, PS rendering in your application (possibly through external libraries).
- Cairo has wrappers for most languages out of the box -> may be able to bypass ogre and link via .NET.
- Issue with font glyphs messing up due to one row of the glyphs moving into the next row - http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=9025&sid=cad8fb167e6d2c02af6b83ed9c5f8ee3&p=77795#p77795. Solutions include changing the font (e.g. DejaVu Sans apparently works well, BlueHighway doesn't) or reducing the character set that is turned into glyphs (generally the overlapping characters are letters with accents etc, which in English you don't generally use). http://www.fileformat.info/info/charset/UTF-32/list.htm has a list of UTF32 characters (which Miyagi uses by default) and you can pass in ranges (e.g. Range(32, 0x7E) includes most normal characters) in TrueTypeFont.Create().
- http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=13997&sid=5a2ebeb356ea20ee447d99c8bbce6ef7&start=30 mentions some commercial games using Miyagi, namely http://www.mudtv.de/ and http://www.dungeons-game.com
There are a few Quaternion.* methods for interpolating between quaternions, e.g. Squad, which does a cubic interpolation between four quaternions and is often smoother than linear interpolation, and Slerp (Spherical Linear Interpolation), which is a linear interpolation along the arc between two quaternions.
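For reference, the standard Slerp formula, and Squad defined in terms of it (this is the usual textbook definition rather than anything Mogre-specific):

```latex
\mathrm{slerp}(q_0, q_1, t) = \frac{\sin\big((1-t)\theta\big)}{\sin\theta}\, q_0
                            + \frac{\sin(t\theta)}{\sin\theta}\, q_1,
\qquad \cos\theta = q_0 \cdot q_1
```

```latex
\mathrm{squad}(q_0, q_1, a, b, t)
  = \mathrm{slerp}\big(\mathrm{slerp}(q_0, q_1, t),\ \mathrm{slerp}(a, b, t),\ 2t(1-t)\big)
```

where a and b are intermediate control quaternions; as θ → 0 the slerp weights degenerate and implementations fall back to ordinary linear interpolation.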
Pointers, Disposing Of Objects Etc
- Holding on to pointers e.g. TexturePtr can be dodgy and should (AFAIK) generally not be done. For instance, if you are dynamically creating textures and hold onto the pointer, when you resize the window, Ogre dumps the texture in the GPU and it needs to be resent, but this needs a different pointer and keeping your old one blocks the dumping process (no idea why).
- http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=13719&sid=eed78f70d5bf0614ae4e029cbaec0385 good and long discussion, encouraged by me - If you do implicit conversion of ResourcePtr to MaterialPtr etc, e.g. MaterialPtr matPtr = MaterialManager.GetMaterial() -> you will not have disposed of all the pointers as there is an implicit conversion for these kind of things. The best way (often) is to use using statements. Overall, it seems like you should try and dispose of everything, pointers and non-pointers alike.
- http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=13825 - there is also a need to do texturePtr.Unload(), before removing from the TextureManager and disposing of it.
- http://www.ogre3d.org/tikiwiki/Materials#Lightmap - the shadow could be pre-baked using this lightmap style approach with fixed geometry. Of course, no shadows would be cast on moving characters and it would take quite a lot of work to create a general solution.
- If you used lightmaps, then you could make the dynamic shadows very short range to prevent unwanted effects, such as them going through floors, and to improve frame rate.
Mesh Import Export
- http://www.di.unito.it/~nunnarif/sketchup_ogre_export/ - Sketchup to Ogre exporter.
- http://blink3dworld.com/index.php?option=com_content&task=view&id=36&Itemid=46 sketchup export linked to.
- http://www.ogre3d.org/forums/viewtopic.php?t=42822 - about an ogre based tool that extracts Sims 2 and Spore meshes. This would be a great way to get fairly low poly count models. You could also look for converters from World Of Warcraft and Second Life. You can get avatars off sites like http://www.parsimonious.org/
- Google's 3D Warehouse has lots of meshes.
3D Mice
- http://www.ogre3d.org/forums/viewtopic.php?f=1&t=60466 - talks about using the SpacePilot Pro (3dConnexion) with OIS on Mac. There was a bug (Sept 2010).
- http://www.3dconnexion.com/forum/viewtopic.php?t=1005 - about OIS and 3DConnexion and patches. Mentions that you can pretend the 3D mouse is a joystick...
- http://www.wreckedgames.com - is where OIS is developed.
- http://www.wreckedgames.com/forum/index.php/topic,920.0.html - suggests that a patch was made and that it should work, at least on Win32. http://www.wreckedgames.com/forum/index.php/topic,871.0.html suggests that HID may be being used, rather than 3DConnexion drivers.
- OIS may not be the best way to go about it as it won't support the screens on SpaceNavigators, or all the buttons (AFAIK). However, the OIS route will be generally suitable for all joysticks.
- 3dConnexion SDK comes with a document "Programming for the 3D Mouse" which states that on windows, you should use the Raw Input API (available for C++ and .NET), which gives access to HID devices. http://www.codeproject.com/KB/system/rawinput.aspx is an example of using C# and Raw Input to manage several keyboards at once.
General C#
- http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx - Process Monitor - a tool for determining what threads are created, which files they access etc. AFAIK it's not programming language specific, it's a general windows tool.
- http://msdn.microsoft.com/en-us/library/dd460648.aspx - Managed Extensibility Framework (MEF) - provides a more flexible system for plugins etc than just specifying interfaces and allows faster loading by interrogating metadata rather than loading them up fully. It allows dependencies between components and doesn't statically link a plugin to a particular host application, so you can use the plugin in several applications. Overall, looks useful. It is related to (but separate to / you can use both in parallel) Managed Add-In Framework (MAF). http://msdn.microsoft.com/en-us/library/bb384200.aspx - Add-ins and Extensibility - provides another overview of this kind of stuff. Note that if you do start doing plugins, you need to deal with which versions are compatible, making sure that plugins can't cause too much harm etc.
- System.BitConverter http://msdn.microsoft.com/en-us/library/system.bitconverter.aspx - converts to and from bytes. Useful for making textures etc.
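A small sketch of the BitConverter usage the note above has in mind - packing float components into a byte buffer of the kind you would hand to a texture's pixel buffer (the RGBA layout here is just an example):

```csharp
using System;

// Pack four float components (an RGBA pixel) into a byte buffer using
// BitConverter, then read one back to confirm the round trip.
float[] pixel = { 1.0f, 0.5f, 0.25f, 1.0f }; // RGBA
var bytes = new byte[pixel.Length * sizeof(float)];
for (int i = 0; i < pixel.Length; i++)
    Buffer.BlockCopy(BitConverter.GetBytes(pixel[i]), 0, bytes, i * sizeof(float), sizeof(float));

// Round-trip one component to check the packing.
float g = BitConverter.ToSingle(bytes, 1 * sizeof(float));
Console.WriteLine(g); // 0.5
```

Note that BitConverter uses the machine's endianness (BitConverter.IsLittleEndian tells you which), which matters if the buffer's consumer expects a fixed byte order.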
- http://msdn.microsoft.com/en-us/library/ms173175%28v=VS.80%29.aspx - combining delegates - you can use operators like + / -. http://msdn.microsoft.com/en-us/library/system.multicastdelegate%28v=VS.80%29.aspx - which AFAIK just executes a number of delegates sequentially.
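A quick illustration of the + / - delegate combining described above - the multicast delegate simply invokes its members sequentially, in the order they were added:

```csharp
using System;
using System.Collections.Generic;

// Combining delegates with + and removing them with -.
var calls = new List<string>();
Action a = () => calls.Add("first");
Action b = () => calls.Add("second");

Action combined = a + b;   // multicast: runs a, then b
combined();
combined -= a;             // remove a; only b remains
combined();

Console.WriteLine(string.Join(",", calls)); // first,second,second
```

Note that += / -= create a new delegate instance each time rather than mutating the old one, which is why the pattern is safe to use on event fields.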
http://geekswithblogs.net/shahed/archive/2006/12/06/100427.aspx - reflection over an enum to get all the possible values: System.Enum.GetNames(typeof(MyEnum)) / Enum.GetValues. http://blogs.msdn.com/b/tims/archive/2004/04/02/106310.aspx shows that using Enum.Parse(...), you can convert from a string to the Enum.
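The enum reflection above in action, using the built-in DayOfWeek enum so the example is self-contained:

```csharp
using System;

// Enumerate an enum's names via reflection, then parse one back from a string.
string[] names = Enum.GetNames(typeof(DayOfWeek));
Console.WriteLine(names.Length); // 7

var parsed = (DayOfWeek)Enum.Parse(typeof(DayOfWeek), "Monday");
Console.WriteLine(parsed == DayOfWeek.Monday); // True
```

Enum.Parse throws on an unrecognised string; later framework versions add Enum.TryParse if you need the non-throwing form.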
- http://www.ogre3d.org/forums/viewtopic.php?f=8&t=55153&sid=0d400c6253915cac9490c62ac9ad6adb - some 3D models, including a skeleton.
- Reduce animation complexity, change colours, add bubbling animated texture where possible (and possibly have frogs appearing in it too).
http://www.ogre3d.org/forums/viewtopic.php?p=285753&sid=ce193664e1d3d7c4af509e6f4e2718c6 - it looks like you can't do things like go back and add texture coordinates easily, although some edits are possible (exactly what, I'm not sure, but I think adding vertices and sub-meshes is possible).
- http://www.ogre3d.org/tikiwiki/OgreProfiler - you might need to be using C++ and it recommends using the Ogre from source, but it might be good.
- Optimised game loop: http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=13133
- http://www.radgametools.com/telemetry.htm - phenomenally expensive, but maybe worth seeing what it does. They produce a variety of super-expensive tools (but maybe cheaper if you talk to them).
- If you are comparing a lot of lengths of Vectors, you may be able to use Vector3.SquaredLength, rather than Vector3.Length if you are just doing a comparison, rather than using the length itself. Square roots are expensive to calculate. If you are comparing with a known length, consider squaring that length and comparing with Vector3.SquaredLength.
- If you are doing a lot of Vector maths, see if you can replace several functions with higher level functions that do the same thing e.g. the members of Vector3 class for comparing vectors etc. I assume that they will run faster (and may take advantage of more processor optimisations).
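The squared-length trick above, sketched with System.Numerics.Vector3 so it runs standalone (Mogre's Vector3.SquaredLength is the equivalent of LengthSquared() here): square the threshold once instead of taking a square root per comparison.

```csharp
using System;
using System.Numerics; // stand-in for Mogre.Vector3; the same idea applies there

// Compare a distance against a threshold without computing a square root.
var v = new Vector3(3f, 4f, 0f);
float maxDistance = 6f;

bool withinNaive = v.Length() < maxDistance;                      // sqrt on every call
bool withinFast  = v.LengthSquared() < maxDistance * maxDistance; // no sqrt

Console.WriteLine(withinNaive == withinFast); // True
Console.WriteLine(v.LengthSquared()); // 25
```

This works because squaring is monotonic for non-negative values, so the comparison result is unchanged.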
- Processing streams asynchronously http://msdn.microsoft.com/en-us/magazine/cc337900.aspx - relevant for creating textures very quickly??
- http://msdn.microsoft.com/en-us/library/1hsbd92d.aspx - System.ArraySegment structure - wraps a portion of an array so that you can iterate over it as if it were a complete array. Useful if you want to split up an array for processing in several threads etc.
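A short ArraySegment sketch in the spirit of the note above - slicing one buffer into chunks without copying, e.g. to hand each chunk to a different worker (Offset/Count indexing is used here as it works on all framework versions):

```csharp
using System;

// ArraySegment wraps a slice of an existing array without copying it.
int[] data = { 10, 20, 30, 40, 50, 60 };
var firstHalf  = new ArraySegment<int>(data, 0, 3);
var secondHalf = new ArraySegment<int>(data, 3, 3);

// Sum just the second half via the segment's Offset and Count.
int sum = 0;
for (int i = secondHalf.Offset; i < secondHalf.Offset + secondHalf.Count; i++)
    sum += secondHalf.Array[i];

Console.WriteLine(sum); // 150
```

Because the segments alias the original array, writes through one segment are visible to anything else holding the array.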
- http://stackoverflow.com/questions/1460634/are-c-arrays-thread-safe - on whether it is thread safe to change what is stored at different indices of the array in multiple threads simultaneously - simple answer is yes. Also assignment by reference is atomic, so if you pass in virtually anything that isn't a struct, you can have multi-threaded access to that element without any issues.
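The point above - distinct array indices are independent memory locations, so parallel writers need no locking as long as no two threads touch the same index - is easy to demonstrate with Parallel.For:

```csharp
using System;
using System.Threading.Tasks;

// Each iteration writes only its own index, so no synchronisation is needed.
int n = 1000;
var squares = new long[n];
Parallel.For(0, n, i => squares[i] = (long)i * i);

Console.WriteLine(squares[999]); // 998001
```

The caveat is exactly the one in the linked question: this is only safe because the index sets are disjoint; two threads read-modify-writing the same element would still race.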
- http://www.ericsink.com/entries/multicore_map.html - a C# generic implementation of the multi-threaded map function.
- TaskParallelLibrary - http://msdn.microsoft.com/en-us/library/dd460717.aspx - .NET 4.0 - allows you to run things in parallel more easily. In some cases, you may still want to manually partition lists etc.
- http://msdn.microsoft.com/en-us/library/dd460693.aspx - overview of Parallel Programming in .NET, with several improvements for .NET 4.
- .NET 4 changes http://msdn.microsoft.com/en-us/library/ms171868.aspx
- Thread.Yield - http://msdn.microsoft.com/en-us/library/system.threading.thread.yield.aspx - causes the current thread to yield execution to another thread that is ready to run on the current processor - could be used to prioritise running FrameStarted? But you can already use Thread.Priority, which might be better.
- http://msdn.microsoft.com/en-us/library/system.threading.tasks.aspx - System.Threading.Tasks - .NET 4 way to go about a lot of multi-threading. In general, .NET is moving to higher level code than ThreadPool and Threads.
- http://msdn.microsoft.com/en-us/library/dd460718.aspx - data structures for parallel programming / System.Collections.Concurrent - thread safe collections including things like blocking queues that have been lacking previously. Look at the Synchronization Primitives for faster options Vs locking.
- http://msdn.microsoft.com/en-us/library/system.action.aspx - System.Action - if you want generic delegates for e.g. running things in parallel, then this can help. Also useful may be the Func<TResult> http://msdn.microsoft.com/en-us/library/bb534960.aspx delegate which allows you to return a value. You could of course define these manually.
- http://msdn.microsoft.com/en-us/library/dd992634.aspx - Parallel.Invoke(Action) allows you to run a series of actions, possibly in parallel.
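Parallel.Invoke as described above, combined with System.Action lambdas - it runs the supplied actions, possibly in parallel, and returns only when all of them have completed:

```csharp
using System;
using System.Threading.Tasks;

// Run three independent actions, potentially in parallel; Invoke blocks
// until every action has finished.
int x = 0, y = 0, z = 0;
Parallel.Invoke(
    () => x = 1,
    () => y = 2,
    () => z = 3);

Console.WriteLine(x + y + z); // 6
```

This suits a handful of coarse, independent tasks; for per-element work over a collection, Parallel.For / Parallel.ForEach are the better fit.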
- C#'s version of map() is http://msdn.microsoft.com/en-us/library/bb548891.aspx IEnumerable.Select(func), but I assume it is single threaded. It uses deferred execution, i.e. you only compute the result when you ask for it from the returned object.
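The deferred execution mentioned above is observable directly - the selector lambda does not run until the result is enumerated:

```csharp
using System;
using System.Linq;

// Select is C#'s map(): lazily evaluated, so the lambda only runs when
// the sequence is actually enumerated (here, by ToArray).
int calls = 0;
int[] nums = { 1, 2, 3 };
var squares = nums.Select(x => { calls++; return x * x; });

Console.WriteLine(calls); // 0 - nothing computed yet
int[] result = squares.ToArray(); // enumeration happens here
Console.WriteLine(calls); // 3
Console.WriteLine(string.Join(",", result)); // 1,4,9
```

The flip side of laziness is that enumerating the same query twice runs the selector twice; materialise with ToArray/ToList if the work is expensive. PLINQ's AsParallel() is the multi-threaded counterpart.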
- http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=13996&p=79027#p79027 (Feb 2010) discussion of multi-threading in Ogre and Mogre, which in turn links to http://www.ogre3d.org/forums/viewtopic.php?f=2&t=63101 (which at the time of writing hadn't been answered much, but does at least put forwards some interesting questions). There may (in the near future) be multi-threading in a pre-compiled version of Mogre - there is already varying levels of multi-threading in Ogre.
Creating ManualObjects / Meshes In Threads
http://www.ogre3d.org/forums/viewtopic.php?f=2&t=56140 - it's possible using the right technique to multi-thread a lot of your mesh creation. Uses http://www.ogre3d.org/tikiwiki/Generating+A+Mesh style mesh creation rather than ManualObject, but AFAIK they are similar. http://www.ogre3d.org/forums/viewtopic.php?f=1&t=64389 has more discussion of similar.
- http://msdn.microsoft.com/en-us/library/ms404247.aspx - Weak References - if you have something that is taking up a lot of memory and you can recreate it, then you can create a weak reference when you're not directly using it which means that the garbage collector can take the resource away if you're running low on memory, but if it doesn't run low on memory, then you will be able to get a strong reference to it as if it was a strong reference all along. If it has been garbage collected, then obviously you will need to recreate the object.
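The weak-reference pattern above, sketched with the generic WeakReference&lt;T&gt; (the non-generic WeakReference in the linked article works the same way): while a strong reference also exists, the target is guaranteed to still be retrievable.

```csharp
using System;

// A weak reference to a large, recreatable buffer. The GC may reclaim the
// target under memory pressure once no strong reference remains; here we
// still hold one, so TryGetTarget must succeed.
var bigBuffer = new byte[1024 * 1024];
var weak = new WeakReference<byte[]>(bigBuffer);

byte[] recovered;
bool alive = weak.TryGetTarget(out recovered);
Console.WriteLine(alive); // True - bigBuffer is still strongly referenced
Console.WriteLine(object.ReferenceEquals(recovered, bigBuffer)); // True
```

The consuming pattern is: try to upgrade the weak reference to a strong one; if that fails, recreate the object and reset the weak reference.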
Character Creation
- http://www.evolverpro.com/products/transport.aspx - includes Ogre exporter
- http://www.mixamo.com - cheap / free, but not sure it has Ogre exporter. It does have a blender exporter however.
Using Webcam Data
http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=13851 - mentions how to speedily put data in textures.
http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=9593&start=15 - claims to have got it working fairly well. Has tried putting encoding on a background thread. The forum thread is fairly long running -> good chance of it being a good solution.
Video Editing (for the poor)
- Blender's VSE (Video Sequence Editor / Sequence View) allows you to crop (drag the ends of the sequence bars), cut ('K' splits), as well as adding titles, merging frames etc. I think you can pass the frames through various post processing steps.
- Some command line tools might be good for basic tasks e.g. http://www.catonmat.net/blog/how-to-save-time-by-watching-videos-at-higher-playback-speeds
- http://www.am-soft.ru/avifrate.html - allows you to change the headers of AVI files -> you can double the frame rate to make your application seem smoother.
- http://www.aoamedia.com/videojoiner.htm - a free tool to join videos files together.
- http://avisynth.org/mediawiki/Main_Page - AviSynth might be a good tool. GUI free / scriptable and open source and windows. http://neuron2.net/LVG/avisynth.html has a nice overview - it works as a middle man and can be used to join videos very easily. Looks like a very simple way to do a number of tasks.
Charts / Graphs
http://www.ogre3d.org/tikiwiki/tiki-index.php?page=Ogre%20Line%20Chart - how to create custom line charts in Ogre. Uses shaders and creates nice graphs.
http://www.ogre3d.org/forums/viewtopic.php?f=2&t=29653&start=0 stuff about "OGRE EXCEPTION(2:InvalidParametersException): Point out of bounds in StaticGeometry::getRegionIndexes at ..\..\ogre\OgreMain\src\OgreStaticGeometry.cpp (line 215)" Exception. In short, StaticGeometry is broken up into 1024x1024 (x1024??) chunks and if you don't set it up correctly, then you might add something outside of the range that has been set up, so it throws an exception. This may be caused by the AABB of the entity you are adding being wrong, or having a vertex in a very weird place. By default the SG origin is (0,0,0) and it can have 1024 x (1000,1000,1000) regions.
- http://legalizeadulthood.wordpress.com/2009/06/28/direct3d-programming-tip-5-use-the-debug-runtime/ -
- Can be used for debugging crashes in Mogre / Ogre when the Mogre error message is insufficient. Not sure if it will pump the debug messages to file, but it should pump them out to the debug console in VisualStudio or alternatively a standalone program such as DebugView http://technet.microsoft.com/en-us/sysinternals/bb896647.aspx, which can pump to a file. I think it will work even if you're using a pre-compiled binary / something without debug symbols.
Error Presenting Surfaces Exception
"12:02:13: OGRE EXCEPTION(3:RenderingAPIException): Error Presenting surfaces in D3D9Device::present at .\..\..\ogre\RenderSystems\Direct3D9\src\OgreD3D9Device.cpp (line 993)" - happened (probably) when creating or re-creating meshes i.e. Creating ManualObject and then putting in a StaticGeometry.
- http://www.ogre3d.org/forums/viewtopic.php?f=2&t=45711&start=0 - suggests enabling the DirectX Debug Runtime (http://legalizeadulthood.wordpress.com/2009/06/28/direct3d-programming-tip-5-use-the-debug-runtime/ - requires downloading the DirectX SDK)
- http://www.ogre3d.org/forums/viewtopic.php?f=2&t=51608 - suggests that it is probably something to do with ManualObject and more specifically ConvertToMesh()
- TODO: check if correctly disposing of all ManualObjects and Destroying them as well. Similarly the meshes. Might also be able to remove the calls to ConvertToMesh() and just use the ManualObjects directly. -> do a simple count on the number of CreateManualObject, DestroyManualObject, and (possibly dubiously) mo.Dispose
- TODO: need to create a stress test routine, which continuously rebuilds the ManualObjects and StaticGeometry, renders a few frames and then repeats.
Soft Body e.g. Clothing, Hair, Grass
- http://www.ogre3d.org/forums/viewtopic.php?f=1&t=65004#p429752 - Shroud Cloth - does some pretty great stuff to add realism.
Checking Hardware Capabilities of Graphics Cards
I looked into this because some (even fairly recent, 2011) Intel graphics chips don't support Full Screen Anti-Aliasing (FSAA). It has proven difficult to find a reliable method of determining capabilities in this area.
- http://www.ogre3d.org/forums/viewtopic.php?f=2&t=65188 - uses Ogre::Root::getSingleton().getRenderSystem()->getConfigOptions(); http://www.ogre3d.org/forums/viewtopic.php?f=2&t=61986 uses a similar method. These didn't appear to work for me: a capable Radeon reported only 0 for FSAA modes, which was the same as for the Intel HD graphics chip without FSAA support.
- http://www.ogre3d.org/forums/viewtopic.php?f=5&t=48492 - renderSystem.CreateRenderSystemCapabilities() needs to be called after a render window has been created (even a dummy one), or it returns null.
- FSAA current status: it looks like we want the multi-sample type, not the FSAA quality, as the quality relates only to NVidia CSAA and not to FSAA on other cards; yet it seems that when you set the FSAA level it actually sets (what I would call) the quality. See D3D9RenderSystem::determineFSAASettings.
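If getConfigOptions() does report usable values, selecting a level still needs a small amount of logic: pick the highest supported level that does not exceed what was requested. A hedged sketch follows; it assumes (not guaranteed) that the possible values for the "FSAA" option arrive as plain numeric strings such as "0", "2", "4", "8", which is how they appear in the forum posts above.

```cpp
#include <algorithm>
#include <sstream>
#include <string>
#include <vector>

// Given the list of possible values reported for the "FSAA" config option
// (assumed numeric strings), return the highest supported level that is
// less than or equal to the requested level. Falls back to 0 (no FSAA).
int BestFsaa(const std::vector<std::string>& possibleValues, int requested) {
    int best = 0;
    for (const std::string& v : possibleValues) {
        int level = 0;
        std::istringstream(v) >> level;
        if (level <= requested) best = std::max(best, level);
    }
    return best;
}
```

E.g. requesting 6x against a card offering {0, 2, 4, 8} would fall back to 4x rather than failing. On the problem hardware above, the reported list collapses to {"0"}, so this degrades gracefully to no FSAA.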
Slicing / Clipping
For when you want to show a section through an object.
- Can subdivide the object into discrete chunks whose visibility you selectively turn on / off. E.g. for a building, you might hide a whole wall if part of it intersects the slicing plane.
- http://glbook.gamedev.net/moglgp/advclip.asp - shows a method using an OpenGL clip plane. The example includes taking chunks out, rather than simply a complete plane. http://www.youtube.com/watch?v=1JkpvGxeT_I is a video of similar. It has been used on a heart surgery training program http://www.ogre3d.org/forums/viewtopic.php?f=11&t=64966&p=429465#p429450
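The chunk visibility approach above reduces to a point-vs-plane test: a chunk stays visible only if all of its bounding-box corners lie on the kept side of the slicing plane. A minimal sketch of that test (the `Vec3`/`ChunkVisible` names are invented for illustration; with Ogre you would feed in the corners of each chunk's AxisAlignedBox and toggle the SceneNode's visibility on the result):

```cpp
#include <array>

struct Vec3 { double x, y, z; };

// Signed distance from point p to the plane n . x + d = 0,
// where n is the (unit) plane normal.
double SignedDistance(const Vec3& n, double d, const Vec3& p) {
    return n.x * p.x + n.y * p.y + n.z * p.z + d;
}

// A chunk is visible only if every bounding-box corner is on the
// positive side of the slicing plane; anything crossing or behind
// the plane gets hidden.
bool ChunkVisible(const Vec3& n, double d, const std::array<Vec3, 8>& corners) {
    for (const Vec3& c : corners) {
        if (SignedDistance(n, d, c) < 0.0) return false;
    }
    return true;
}
```

This is the coarse version; the OpenGL clip-plane method in the link above does the equivalent per-fragment, so partially-cut chunks are sliced cleanly instead of disappearing whole.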
Compositing and GUIs / HUDs
- Needs research.
- Window Managers using things such as Compiz will (AFAIK) allow different elements of the screen (windows) to be rendered at different frame rates. The windows can be combined and overlapped. A similar technique could be used to create a 3D program with a high frame rate HUD, with responsive user interaction, and a potentially slower frame rate main render window. This matters because when the frame rate drops to, say, 20fps, mouse movement can feel very sluggish, even though animation of the main scene can still seem sufficient. It also gives the illusion of a much faster program if you can do smooth animations on the HUD.
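The split-rate idea above needs a small scheduler: run the HUD every frame, but only re-render the main scene often enough to hit its (lower) target rate. A hedged sketch, assuming the main scene is rendered to a texture that the HUD pass composites; `MainSceneScheduler` is an invented helper, not an Ogre API:

```cpp
// Decide, on each fast HUD frame, whether the slow main scene should be
// re-rendered this time around. With Ogre this decision would gate a
// render-to-texture update while the HUD overlay renders every frame.
class MainSceneScheduler {
public:
    explicit MainSceneScheduler(double mainFps) : interval_(1.0 / mainFps) {}

    // nowSeconds: current time from a steady clock.
    bool ShouldRender(double nowSeconds) {
        if (nowSeconds - lastRender_ >= interval_) {
            lastRender_ = nowSeconds;
            return true;
        }
        return false;
    }

private:
    double interval_;
    double lastRender_ = -1.0e9;  // force a render on the first call
};
```

So with a 60fps HUD loop and `MainSceneScheduler(20)`, roughly every third HUD frame triggers a main-scene update, and the cursor / HUD animation stays smooth in between.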
Screen Burn-In
WRT the issue of whether, if you have a very similar image on screen for a long time, you need to change the image to prevent it being permanently etched onto the screen:
- http://www.techlore.com/article/10099/Do-LCD-TVs-Burn-In-/ - NB the article is about TVs. The simple summary is that LCDs don't really burn in except under extreme circumstances - though a long-running computer program might be categorised as such.
- http://en.wikipedia.org/wiki/Screen_burn-in - Plasmas do get burn-in (but are going out of fashion). OLED currently burns in more than plasma (but at the time of writing isn't used much). With plasma and LCD you can get temporary burn-in-like behaviour, which can be relieved by leaving the screen off for a long time. Apparently some signage companies shift the image very slightly so that while it does burn in, the edges are softer and you don't notice it so much.
Overall, while LCD remains the dominant technology, it's not a great problem.
Get The Mogre Version
Mogre In WPF
- http://www.codeproject.com/KB/WPF/OgreInTheWpf.aspx - slightly dated (Sept 2008), but likely to still work.
- WPF is going to be a lot better if you are (definitely) only going to be developing on Windows, but if you ever move to Linux you won't be able to use it - although given you'd need to get Mogre redone for Linux, you've probably already locked yourself into Windows.
- Look also to the forum topic Ogre with WPF client, which contains good information and links, e.g. to the MogreInWPF demo. More links are in this post (of the same forum topic).
Procedural Geometry
There is a project http://code.google.com/p/ogre-procedural/ / http://www.ogre3d.org/tikiwiki/Ogre+Procedural+Geometry+Library to help create procedural geometry, i.e. to easily create meshes of common objects such as spheres (which are made out of lots of triangles). The more interesting side is that you can do extrusion and use SVGs as the basis for the object to extrude. This means you can do things like easily create a 3D map based on an SVG you found. Currently there is even a screenshot of rendering OpenStreetMap XML data in 3D.
Render Cycle
- Frame listeners: http://www.ogre3d.org/tikiwiki/Basic+Tutorial+4&structure=Tutorials#FrameListeners - FrameStarted / FrameRenderingQueued / FrameEnded - the order of them is (apparently) not predictable, and you should always use FrameRenderingQueued to update your program for best performance. http://www.ogre3d.org/docs/api/html/classOgre_1_1FrameListener.html "Of course because the frame's rendering commands have already been issued, any changes you make will only take effect from the next frame, but in most cases that's not noticeable." My personal experience is that FrameStarted vs FrameRenderingQueued (D3D9) doesn't make any noticeable difference.
- It is fairly important to have a predictable frame rate, i.e. each frame takes about as long as the previous one. Issues tend to occur because you are predicting how long the next frame will take to render and using that prediction for animation (of scene objects or the camera); a common symptom is a stuttering camera - although you should also check your camera control code. A consistent frame rate can be difficult to achieve with things like garbage collection (even if it should be asynchronous), and especially where you cannot naturally achieve a high frame rate. If you have a very high frame rate, you can turn on VSync (or do something similar yourself) so that each frame takes a consistent length of time, even if it could run faster. If you do have stutter, it would be a poor decision to use the previous frame time as a direct predictor of the next one. Consider averaging several frames and trying to discard inconsistent values, e.g. exceptionally low or high ones - but note that if you are trying to run a real-time animation, filtering values may make you drift off the real clock. Filtering may however be sensible for human interaction, e.g. camera movement, where you are trying to reduce large jolts.
- Attempt to make as much as possible asynchronous to the render cycle - but this may be easier said than done.
- Should investigate how to make the time predictions as accurate as possible, e.g. taking timings at the end of the frame cycle to update highly critical parts (the camera)...
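The averaging-with-outlier-rejection idea above can be sketched as a small filter: keep a short window of recent frame times, drop the single lowest and highest sample, and average the rest. This is illustrative only (the class name and window size are invented; tune to taste), and per the caveat above it trades clock accuracy for smoothness.

```cpp
#include <algorithm>
#include <deque>
#include <numeric>
#include <vector>

// Smooths the per-frame time step used for animation by averaging the last
// few frame times after discarding the min and max samples - a crude filter
// against one-off spikes such as a garbage collection pause.
class FrameTimeSmoother {
public:
    explicit FrameTimeSmoother(std::size_t window = 8) : window_(window) {}

    // Feed the measured time of the frame just rendered; returns the
    // smoothed estimate to use for the next frame's animation step.
    double Update(double frameTime) {
        samples_.push_back(frameTime);
        if (samples_.size() > window_) samples_.pop_front();
        if (samples_.size() < 3) return frameTime;  // too little data to filter
        std::vector<double> s(samples_.begin(), samples_.end());
        std::sort(s.begin(), s.end());
        // Drop the lowest and highest sample, average the remainder.
        double sum = std::accumulate(s.begin() + 1, s.end() - 1, 0.0);
        return sum / static_cast<double>(s.size() - 2);
    }

private:
    std::size_t window_;
    std::deque<double> samples_;
};
```

With a steady 0.02s frame time, a single 0.2s spike barely moves the output, whereas using the raw previous frame time would make the next animation step ten times too large.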