Table of contents
- Project Information
- Weekly Progress
- Mesh Voxelisation
- Heightmap Voxelisation
- Procedural Terrain Generation
- More CSG base primitives
- De-/Serialize the Chunk Tree
- Affine CSG
- Parallax Occlusion Mapping
- Infrastructure to make plugins possible
- Ambient Occlusion
- 1. Definitions
- 2. Fair Use Rights
- 3. License Grant
- 4. Restrictions
- 5. Representations, Warranties and Disclaimer
- 6. Limitation on Liability.
- 7. Termination
- 8. Miscellaneous
Proposal and discussion thread: http://www.ogre3d.org/forums/viewtopic.php?f=13&t=69449
Mercurial repository: https://bitbucket.org/philiplb/ogrevolumeterrain/
- May 21. - May 27.: GSoC start. Working on a non-ambiguous Marching Cubes implementation.
- May 28. - June 03.: Marching Cubes done
- June 04. - June 10.: Working on LOD
- June 11. - June 17.: Working on LOD done
- June 18. - June 24.: Revisiting the octree generation, research on Adaptive Distance Fields as split policy and/or QEFs again or something completely different. Also making things somewhat more usable.
- June 25. - July 01.: Refactoring storage and handling of the geometry to reduce batch count, more optimizations.
- July 02. - July 08.: Extending the triplanar texturing to a full-blown shader with lights, fog, etc.
- July 09. - July 15.: Triplanar Texturing reference shader done. Loading basic (non-CSG) terrain from a config file. Starting to have a deep look at the WorkQueue to parallelize loading of chunks.
- July 16. - July 22.: Have a deep look at the WorkQueue to parallelize loading of chunks.
- July 23. - July 29.: Move the actual volume rendering code to a plugin. Starting with an RTSS triplanar component.
- July 30. - August 05.: Working on the RTSS triplanar component.
- August 06. - August 12.: RTSS triplanar component done. Creating a proper sample out of the current one.
- August 13. - August 19.: Testing, bells & whistles, refactoring, optimizing, documenting, buffer
- August 20. - August 26.: Testing, bells & whistles, refactoring, optimizing, documenting, buffer
- August 27. - August 31.: Final GSoC touches and handing in
- Done so far: some CSG VolumeSources, an untested 3D texture VolumeSource, generating an octree out of the volume with a debug visualization
- Forked Ogre's repo
- Ported the playground project from my own application to an Ogre sample project
- Adjusted all code to Ogre's style guide
- Wiki page created
More warmup and making a comfortable environment.
- Added an SDKTray TextBox; the main sample class is registered as a LogListener and fills it.
- Checkbox for showing and hiding the octree.
- Hotkey "h" hides/shows all UI elements, nice for screenshots.
- Read the first half of "Effective C++" 3rd edition to brush up my C++ knowledge. Highly recommended book!
As the Summer of Code hasn't officially started yet and my thesis is also waiting for some formalities, I mostly tackled
things which are nice to have but not directly relevant to the LOD isosurfacing: triplanar texturing! And
some other small stuff.
- Triplanar Texturing with a small test-mesh, implemented as CG shader.
- Finished "Effective C++". Now that went into detail...
- Began a DualCell class which currently just holds 8 corners and can add them to a manual object for debug visualization. Next step is to traverse the octree and generate the dual cell grid.
Slowly actually getting started.
- Completed the CSG cube
- Implemented the construction of the dual grid with a (switchable) debug visualization
- Put a new roadmap in the Wiki
- Added some first documentation.
- Implemented a MeshBuilder to build up a mesh from triangles using vertices and indices without duplicating vertices.
- Started with Marching Cubes.
Because the previous entry had no screenshot. First working Marching Cubes!
First closed gaps.
- Added a checkbox to hide and show the actual mesh.
- Using the cells of the DualGrid for Marching Cubes now.
- Updated the roadmap as the Marching Cubes stuff was earlier done than expected.
- Implemented Marching Squares to triangulate the open parts of the (future) chunks.
First rough LOD.
- Finished the triangulation via Marching Squares for the open parts of the chunks.
- Updated the roadmap, added the plugin point.
- Changed the Octree grid split method to use some kind of geometric error.
- Moved Mesh Generation to the class Chunk.
- Finished a first LOD version with chunks and approximated pixel error.
Making things more useful.
- Made the TextureSource work with some real data coming from the editor Acropora.
- Added the possibility (switched on per default) to use trilinear interpolation of the normal in TextureSource.
- Some big loading time optimizations.
- Added a CSGUnarySource as abstract parent class, CSGNegateSource is now a child class of it.
- Added a CSGScaleSource (CSGUnarySource is the parent class) which scales the given Source. Good for scaling a TextureSource to the desired size.
Optimizations and getting a bit towards something for the real world.
- Updated the roadmap as the LOD stuff was done one week early and I'm happy for now with the Octree.
- Changed the density function Source::getValueAndNormal to Source::getValueAndGradient.
- Using a new OctreeNodeZhangSplitPolicy, an extended variant of this method: http://www.andrew.cmu.edu/user/jessicaz/publication/meshing/
- Don't generate those border/skirt triangles on the border of the world. Not needed here and saves a lot of triangles.
- Put the volume textures in a zip to decrease IO loading time.
- Don't generate chunks which don't contain triangles and don't patch cracks. This decreases the batch count.
- Implemented a CacheSource. Might be useful for very expensive (nested CSG) sources.
- Using far better normals for the crack patching marching squares.
- Added a debug/documentation visualization of the MC configurations, the key m switches through all 256.
- Switched off the handling of ambiguous cases in Marching Squares as it caused a crack (this took some hours of debugging...).
- Changed the chunks from being closed with Marching Squares to just some skirts, also done with Marching Squares. This saves 42% of the triangles in the test scenario!
- A bit more tweaking on the skirt normals. Some blending can be configured now from the outside.
- Introduced ChunkParameters as parameter of Chunk::load to clean up this giant amount of parameters a bit.
- The drawing octree leaves and their parents have the same size now. So the node before the leaf doesn't split up into eight children but into one: the eight would all be drawn anyway, so they can be one mesh. This again saves a lot of triangles (no more skirts here) and reduces the batch count to something usable.
- Found an error in the OctreeNodeZhangSplitPolicy causing those big differences between chunks.
- Found an error in the Triplanar Texturing shader causing some bad lighting.
- Added the point "Loading basic (non-CSG) terrain from a config file." to the roadmap.
- Added the point "Have a deep look at the WorkQueue to parallelize loading of chunks." to the roadmap.
As the geometry and lighting are now nice and smooth, I initially wanted to post a video today. But unfortunately, my router died yesterday due to a thunderstorm here, and I'm on a crappy UMTS connection for the next few days. So only a screenshot again.
Bringing the shader to life.
- Supporting up to 3 lights in the shader.
- Supporting light attenuation in the shader.
- Supporting spotlights in the shader.
- Supporting fog in the shader.
- Supporting normal mapping in the shader.
- Added some ideas for later to the wiki.
- Added a possibility to get all chunks of a specific LOD level.
- Added a callback to get the actual mesh data of a specific LOD level while loading.
Some more features and optimizations.
- Finished the reference shader, although it's still missing shadows.
- Updated the roadmap again. Removed the own material generation system, moved the other features forward and added triplanar texturing for the RTSS afterwards.
- Added a possibility to load a volume terrain from a config file.
- Moved the calculation of the triplanar blending weights from the vertex shader to the pixel shader. Gives slightly nicer results.
- Using Ogre's memory allocators (OGRE_NEW, OGRE_DELETE, ...) for the resources now (OctreeNode now derives from UtilityAlloc). Decreases the loading time by about a third!?
- Encoding the isovalue in the length of the normal of the Marching Squares vertices which are completely within the volume. This value is then added to the planar mappings in the shader. The effect is that there are far fewer texture distortions in the skirts.
- Removed the parameter skirtBlendWeightInsideNormal as this is not needed anymore.
- Fixed a bug in the generation of trilinear interpolated normals of the TextureSource.
- Removed Chunk::setChunkMaterial() and overriding the now virtual SimpleRenderable::setMaterial().
Parallelization, small tweaks and fixes.
- Correct colours in the shader now.
- A first version of parallel Chunk loading.
- Created http://www.ogre3d.org/tikiwiki/tiki-index.php?page=How+to+use+the+WorkQueue .
- Small tweaks with the normals (again) make the skirts now acceptable looking.
Some cleanup and separation.
- Updated the roadmap (again) and moved the plugin creation in front of the RTSS work.
- Just hashing the vertex position in the MeshBuilder, removed the normal here. Gives a nice loading time reduction and should be OK in this case.
- Removed the non-working QEF octree split policy. Removed the virtual part of this class for a small performance gain and also removed the dependency on the CML.
- Renamed all files according to the Ogre conventions.
- Moved all files except the actual Sample ones to a new Component "OgreVolume".
(No screenshot today, nothing new visually)
Fixing stuff and preparing the RTSS-part.
- Fixed a crash bug when stopping the sample.
- Fixed a giant memory leak when loading a terrain from a config file.
- Fixed a smaller memory leak when stopping the sample where the Chunk tree wasn't freed.
- Fixed a memory leak where the geometry memory wasn't freed on destruction of the chunks.
- Changed the sleeping in the wait-for-threads-loading loop from OGRE_THREAD_SLEEP(50) to OGRE_THREAD_SLEEP(0), yielding a 33% shorter loading time!
- Some foundation work for the RTSS triplanar texturing.
Last actual roadmap feature done, optimizations and refactoring, right on time with the "pencils down" date.
- Some micro-optimizations to decrease the loading time a tiny bit.
- First working triplanar texturing SubRenderState for the RTSS!
- Moved some hardcoded triplanar texturing parameters to uniforms in the reference shader.
- Removed the normal mapping part of the triplanar texturing RTSS roadmap as this would require implementing the whole FFP_LIGHTING execution. This is a bit over the top for this GSoC when looking at the NormalMapLighting SRS.
- Sample Thumbnail and text updated.
- Fixed build for non-unity-builds
- Using Ogre's maps as default and removed Boost's unordered maps, yielding about 25% less loading time and removing the (optional) dependency on Boost!
- Introduced a global scale parameter for easy huge terrain generation with lower resolution data, in this case a 2560x2560x2560 world from a 256x256x256 source.
- Moved the maximum accepted pixel error from a static to a regular class member and made it loadable from the config file.
- Refactored the two methods of OgreVolumeUtils to other classes and so removed the now obsolete file.
- Using 16-bit indices now with a fallback to 32-bit if the current mesh is bigger (doesn't happen in the test scenario), reducing memory and giving a further 5% faster loading time.
- Stripped down the sample and removed the development UI and the MC debug display. As I still need the MC debug display for some nice pictures in my thesis, I moved it to a separate application not in this repository.
- Supporting visibility settings in the Chunk (Chunk::setVolumeVisible(), Chunk::getVolumeVisible()).
- Refactored the debug visualization API of the octree and the dualgrid in the same way.
Tweaking and release preparing.
- Worked again on the texturing of the skirts. They are much better looking now for an acceptable cost of a tiny bit more loading time.
- Added the copyright headers to all files and some minor style cleanup.
- Fixed the never-before-tested CSGCubeSource.
- Some tweaks and cleanup of the sample VolumeTerrain.
- Added a CSG sample. The parameters had to be tweaked a bit so there are no holes. This CSG surely isn't made for modelling...
GSoC, but not project end! Release and more stabilizing.
- Created another fork of Ogre and added this project manually to the 1.9 branch there. Development happens there from now on, as I need a clean branch for my thesis without commits from other people. It will be merged into the main repo every few commits.
- Some more optimizations of the loading time by about 25%.
- Merged the code into the main repository!
- Fixed accidentally slipped-in Windows line endings.
- Added a small help text to the terrain sample on how to move around.
- Fixed the triplanar texturing shader of the RTSS.
- The CSG sample now uses the RTSS triplanar texturing and has a nicer thumbnail.
So, we reached August 31st.
Between the 26th and today, only some refactoring happened to the codebase, and I submitted the version of August 20th to Google. For now, I'm mainly working on my thesis. How will this project proceed? There are several areas which I'd like to develop, in my current priority:
Currently, there's hardly any documentation besides the two samples. I see the documentation in two areas: usage and theory. The usage part will get some wiki articles in the next days. The theoretical part is all about how everything works. This is partly covered by the thesis I'm writing. I like the idea of putting the LaTeX files on Bitbucket after I have defended it. But this will take some time. The deadline for the written part is the 6th of November; the defense is... at some point afterwards, maybe a few days, maybe a few weeks... I also aim to write a paper or two about this and present it at conferences. But no concrete plans about this yet.
More sources and procedural generation
I'd love to have a noise source.
The only serious way I'm aware of to create the 3D textures needed for the terrain is Acropora. But this program is not free (and my trial expired). Please tell me if you know of something here; a free editor would be best. So an editor would be awesome. For this, a few parts are still missing: some intersection with rays. For editing, intersection with the actual volume should be good, which doesn't sound that hard. Also, a way of updating the volume data at runtime is needed ("add sphere here coming from a brush"). And the third thing is the update of the chunks. A chunk needs some milliseconds (like... 15-25ms on my dual core?) to be created; whether that is practicable has to be tried.
For really large worlds, paging is a must. The paged data could be the dual grid and not the actual meshes of the chunks. The volume source may also need to be paged, as it's not reasonable to hold the whole texture data in memory. With procedural sources this would work, as they naturally require little memory. Must be further investigated.
Here I collect ideas which could be implemented in the future, likely after the GSoC. But nothing is guaranteed here, just collecting.
Needed for really large worlds.
A xml format to define "scenes" outside of the code. Something like this:
<csg>
    <union>
        <sphere radius="5" x="5" y="6" z="7" />
        <texture src="terrain.dds" width="200" height="200" depth="200" />
    </union>
</csg>
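Such a scene file would map naturally onto a small tree of Source objects. A minimal sketch of the idea (hypothetical class names modelled after the CSG sources mentioned in the progress log; the real OgreVolume interfaces differ in detail):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical stand-in for the density source interface:
// positive inside the volume, zero on the surface, negative outside.
struct Source {
    virtual float getValue(float x, float y, float z) const = 0;
    virtual ~Source() {}
};

// Density of a sphere with center (cx, cy, cz) and radius r.
struct SphereSource : Source {
    float cx, cy, cz, r;
    SphereSource(float x, float y, float z, float radius)
        : cx(x), cy(y), cz(z), r(radius) {}
    float getValue(float x, float y, float z) const override {
        float dx = x - cx, dy = y - cy, dz = z - cz;
        return r - std::sqrt(dx * dx + dy * dy + dz * dz);
    }
};

// CSG union: the maximum of the children's density values.
struct CSGUnionSource : Source {
    std::vector<const Source*> children;
    float getValue(float x, float y, float z) const override {
        float v = -1e30f;
        for (const Source* c : children)
            v = std::max(v, c->getValue(x, y, z));
        return v;
    }
};
```

An XML loader would then just walk the elements and build such a tree, e.g. the sphere element above becoming a SphereSource(5, 6, 7, 5) under one CSGUnionSource together with the texture source.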
Needed for picking and physics. But which approach to take?
- Keep the Source and use it for intersection searching. Pro: easier to find intersections. Con: can be memory-heavy (if the source contains a big 3D texture), and as the generated triangles are naturally only an approximation of the volume, it won't be triangle-perfect.
- Keep the DualCells of the highest LOD level. First pick the chunk candidates by using the AABBs of the chunks and then walk through the dual cells, testing them for hits. If one hits, generate internal triangles on the fly and test them. Pro: less memory-heavy, but could still be heavy. I have to find out how to intersect rays with (often degenerate) cube-like cells. Pro: could be fast due to the nature of this broad-, middle- and near-phase.
- Test first the AABBs of the chunks with the highest LOD level and then the triangles. Pro: no additional memory required (is that so? Are vertices and indices still available after they are sent to GPU memory?). Not that slow. Con: skirt triangles might also be taken into consideration, which is not desired. Or is it? This is how: http://www.ogre3d.org/tikiwiki/tiki-index.php?page=Raycasting+to+the+polygon+level&structure=Cookbook
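For the chunk AABB broad phase used by the last two options, the classic slab test is enough. A plain-struct sketch (Ogre itself already offers ray/box intersection, so this is just to illustrate the idea):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Slab test: intersect the ray against the three pairs of axis-aligned
// planes and check that the resulting parameter intervals overlap.
// Returns true and the entry distance t if the ray hits the box.
// Caveat: an axis-parallel ray whose origin lies exactly on a slab
// boundary produces 0 * inf = NaN; a production version should handle that.
bool rayHitsAabb(const Vec3& origin, const Vec3& dir,
                 const Vec3& boxMin, const Vec3& boxMax, float& t)
{
    float tMin = 0.0f, tMax = 1e30f;
    const float o[3]  = {origin.x, origin.y, origin.z};
    const float d[3]  = {dir.x, dir.y, dir.z};
    const float lo[3] = {boxMin.x, boxMin.y, boxMin.z};
    const float hi[3] = {boxMax.x, boxMax.y, boxMax.z};
    for (int i = 0; i < 3; ++i)
    {
        float inv = 1.0f / d[i];  // +-inf for axis-parallel rays is fine here
        float t0 = (lo[i] - o[i]) * inv;
        float t1 = (hi[i] - o[i]) * inv;
        if (inv < 0.0f) std::swap(t0, t1);
        tMin = std::max(tMin, t0);
        tMax = std::min(tMax, t1);
        if (tMax < tMin) return false;  // intervals don't overlap: miss
    }
    t = tMin;
    return true;
}
```

Only chunks passing this test would then go into the dual-cell or triangle tests.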
Mesh Voxelisation
Union a voxelized mesh with your terrain and you get Mount Rushmore.
Good source: http://procworld.blogspot.de/2011/04/opencl-voxelization.html
Heightmap Voxelisation
So existing heightmaps can be used. This guy did it: http://www.guildhall.smu.edu/fileadmin/Masters_Thesis_PDFS/Software_Development/Talaber_ThesisProject_RevFinal.pdf
And these guys do it too: https://www.gitorious.org/thermite3d/thermite3d/blobs/master/Heightmap2Volume/source/main.cpp But both store only whether a voxel is in or out, not the distance to the surface...
Maybe some indirect way with first generating a big mesh? Memory and CPU heavy...
Idea: something like an image-processing kernel.
You start for example with this 1D heightmap (O is out of the volume, X within):
Initialize the volume with zero:
Iterate over the volume with a 3x3 kernel (central hotspot, repeating border). If every kernel value is equal, look at the heightmap to see whether the hotspot is outside the volume. If so, decrement the new volume value; else, increment it. Stop this process when nothing changes anymore. So after the first run, our volume looks like this:
Repeat, now it looks like this:
-> Oh, nothing changed, no more repetition. Looks like a 2D volume, right?
Instead of +1 or -1, it could also depend on where the difference is in the kernel.
So if the value with the lowest difference to the hotspot is northeast/northwest/southeast/southwest, add/subtract 0.707. If it is straight north/east/south/west, add/subtract 1. If they are equal, +-1 is preferred.
-> This expands naturally to 2D heightmaps.
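A further-simplified sketch of this process, reduced to a single 1D row with a 3-wide kernel ('O' outside, 'X' inside, as above). This is my reading of the description, not a tested port of the full 2D variant:

```cpp
#include <string>
#include <vector>

// Kernel-based distance build, 1D version.
// map: 'O' = outside the volume, 'X' = inside.
// Assumes the map contains both 'O' and 'X'; otherwise the process
// described above would never settle, hence the iteration guard.
std::vector<float> buildDistances(const std::string& map)
{
    const size_t n = map.size();
    std::vector<float> vol(n, 0.0f), next(n, 0.0f);
    bool changed = true;
    int guard = 0;
    while (changed && guard++ < 10000)
    {
        changed = false;
        for (size_t i = 0; i < n; ++i)
        {
            // repeating border: clamp neighbours to the edges
            float l = vol[i == 0 ? 0 : i - 1];
            float c = vol[i];
            float r = vol[i + 1 >= n ? n - 1 : i + 1];
            if (l == c && c == r)
            {
                // all kernel values equal: step one unit away from the surface
                next[i] = (map[i] == 'O') ? c - 1.0f : c + 1.0f;
                changed = true;
            }
            else
                next[i] = c;  // a gradient is already established here
        }
        vol = next;
    }
    return vol;
}
```

For "OOXXOO" this settles after two passes at {-2, -1, 1, 1, -1, -2}, matching the walkthrough above.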
I think that even if it works (which it probably will), it will be unscalable for large heightmap images, both in terms of memory and run time. In the worst-case scenario, the time complexity can be O(width * height * depth^2), with a memory complexity of O(width * height * depth).
I think you need to start from an octree structure. Remember that the figures for the distance to the ground don't have to be exact.
I think a general algorithm would be:
Create an octree.
- Divide those octree nodes that include the terrain in them recursively. Do this until you get to the required graininess. Do not attempt to calculate distance at this point; only calculate whether the node is inside or outside the terrain.
- In the leaves of the octree only, calculate the shortest vector from the center of the node to the ground. (The length of the vector is also the distance to the terrain, of course.)
- Work backwards in a reversed DFS (depth-first search) over the nodes. For each node, calculate its approximate vector to the surface by:
- For each child node calculate the vector between the center of the node to the center of the child.
- For each such calculation add the resulting vector with the vector of the child to the surface.
- Select the shortest or median of all resulting vectors.
You can improve this algorithm's result by
1. calculating a non-leaf node's distance to the surface by considering both the distances of its children and its neighbours' children.
2. not calculating the non-leaf node distance from its direct descendants but from some level lower down.
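The bottom-up pass of this suggestion can be sketched like this (illustrative Vec3/Node types, not the OctreeNode from the component):

```cpp
#include <array>
#include <cmath>
#include <limits>
#include <memory>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    float length() const { return std::sqrt(x * x + y * y + z * z); }
};

struct Node {
    Vec3 center;
    Vec3 toSurface{};  // exact in leaves, approximated in inner nodes
    std::array<std::unique_ptr<Node>, 8> children;  // all null in leaves
    bool isLeaf() const { return !children[0]; }
};

// Reversed DFS: process children first, then the parent picks the shortest
// (center-to-child + child-to-surface) vector as its own estimate.
void propagate(Node& n)
{
    if (n.isLeaf()) return;
    float best = std::numeric_limits<float>::max();
    for (auto& c : n.children)
    {
        if (!c) continue;
        propagate(*c);
        Vec3 v = (c->center - n.center) + c->toSurface;
        if (v.length() < best) { best = v.length(); n.toSurface = v; }
    }
}
```

The two suggested refinements would only change which candidate vectors enter the `best` comparison (neighbours' children, or grandchildren instead of direct children).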
Procedural Terrain Generation
A Perlin Noise source. http://cs.nyu.edu/~perlin/noise/
So far, no feedback from Mr. Perlin yet on how to use the code there. Looking for different implementations to avoid running into license issues.
This looks good: http://stackoverflow.com/questions/6963388/fastest-perlin-like-3d-noise-algorithm A detailed explanation PDF is linked there, and in that PDF a public-domain 2012 Java implementation is linked. This could be ported easily!
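Until a license-safe Perlin port exists, even simple hash-based value noise would do as a placeholder density source. A sketch (explicitly not Perlin's algorithm, just smoothly interpolated lattice noise; all names here are made up):

```cpp
#include <cmath>
#include <cstdint>

// Deterministic hash of a lattice point to a value in [-1, 1].
static float hashToUnit(int x, int y, int z)
{
    uint32_t h = uint32_t(x) * 374761393u + uint32_t(y) * 668265263u
               + uint32_t(z) * 2147483647u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (h & 0xffffff) / float(0xffffff) * 2.0f - 1.0f;
}

static float fade(float t) { return t * t * t * (t * (t * 6 - 15) + 10); }
static float lerp(float a, float b, float t) { return a + t * (b - a); }

// Smoothly interpolated noise in [-1, 1]; usable as a density value
// the way the CSG sources are.
float valueNoise(float x, float y, float z)
{
    int xi = int(std::floor(x)), yi = int(std::floor(y)), zi = int(std::floor(z));
    float xf = fade(x - xi), yf = fade(y - yi), zf = fade(z - zi);
    auto corner = [&](int dx, int dy, int dz) {
        return hashToUnit(xi + dx, yi + dy, zi + dz);
    };
    // trilinear interpolation of the eight lattice corners
    float c00 = lerp(corner(0, 0, 0), corner(1, 0, 0), xf);
    float c10 = lerp(corner(0, 1, 0), corner(1, 1, 0), xf);
    float c01 = lerp(corner(0, 0, 1), corner(1, 0, 1), xf);
    float c11 = lerp(corner(0, 1, 1), corner(1, 1, 1), xf);
    return lerp(lerp(c00, c10, yf), lerp(c01, c11, yf), zf);
}
```

Summing several octaves of this at different frequencies already gives terrain-like density fields.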
More CSG base primitives
De-/Serialize the Chunk Tree
As the loading time can be significant, an option would be nice to write the generated tree with all its triangles to disk and load it again. This way, the serialized data could be delivered with the product, or generated just once if it's too big for that. It might also be an idea to just serialize the DualCells?
This would also be good (== more or less required) for Paging.
Affine CSG
Besides scaling, there could be rotation and translation. Or any 4x4 matrix? Depends on whether it's possible to get the inverse of an arbitrary 4x4 matrix in Ogre.
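The core trick can be sketched independently of Ogre: sample the wrapped source through the inverse transform. Below, a hedged example with a translation plus a rotation about Z, whose inverse is easy to write by hand (hypothetical Source interface; for a full 4x4 version one would store the precomputed matrix inverse instead):

```cpp
#include <cmath>

// Hypothetical density interface: positive inside the volume.
struct Source {
    virtual float getValue(float x, float y, float z) const = 0;
    virtual ~Source() {}
};

struct SphereDensity : Source {
    float r;
    explicit SphereDensity(float radius) : r(radius) {}
    float getValue(float x, float y, float z) const override {
        return r - std::sqrt(x * x + y * y + z * z);
    }
};

// Affine decorator: to evaluate the transformed volume at a point p,
// sample the child at inverse(T) * p. Here T = translate * rotateZ,
// so the inverse is: undo the translation, then rotate by -angle.
struct CSGAffineSource : Source {
    const Source& child;
    float tx, ty, tz, angle;  // translation and rotation about the Z axis
    CSGAffineSource(const Source& c, float tx_, float ty_, float tz_, float a)
        : child(c), tx(tx_), ty(ty_), tz(tz_), angle(a) {}
    float getValue(float x, float y, float z) const override {
        x -= tx; y -= ty; z -= tz;                    // undo translation
        float co = std::cos(-angle), si = std::sin(-angle);
        return child.getValue(co * x - si * y, si * x + co * y, z);
    }
};
```

The CSGScaleSource from the progress log is the same pattern with a division by the scale factor; an arbitrary affine matrix would just replace the hand-written inverse.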
Parallax Occlusion Mapping
This would be really cool.
And: the currently used textures already offer a heightmap!
Infrastructure to make plugins possible
Create a factory-pattern for the Source implementations so plugins can be written.
Ambient Occlusion
Like in http://http.developer.nvidia.com/GPUGems3/gpugems3_ch01.html
But this could increase the load time quite a bit! If implemented, it should be switchable on and off via parameters.
THE WORK (AS DEFINED BELOW) IS PROVIDED UNDER THE TERMS OF THIS CREATIVE COMMONS PUBLIC LICENSE ("CCPL" OR "LICENSE"). THE WORK IS PROTECTED BY COPYRIGHT AND/OR OTHER APPLICABLE LAW. ANY USE OF THE WORK OTHER THAN AS AUTHORIZED UNDER THIS LICENSE OR COPYRIGHT LAW IS PROHIBITED.
BY EXERCISING ANY RIGHTS TO THE WORK PROVIDED HERE, YOU ACCEPT AND AGREE TO BE BOUND BY THE TERMS OF THIS LICENSE. THE LICENSOR GRANTS YOU THE RIGHTS CONTAINED HERE IN CONSIDERATION OF YOUR ACCEPTANCE OF SUCH TERMS AND CONDITIONS.
1. Definitions
- "Collective Work" means a work, such as a periodical issue, anthology or encyclopedia, in which the Work in its entirety in unmodified form, along with a number of other contributions, constituting separate and independent works in themselves, are assembled into a collective whole. A work that constitutes a Collective Work will not be considered a Derivative Work (as defined below) for the purposes of this License.
- "Derivative Work" means a work based upon the Work or upon the Work and other pre-existing works, such as a translation, musical arrangement, dramatization, fictionalization, motion picture version, sound recording, art reproduction, abridgment, condensation, or any other form in which the Work may be recast, transformed, or adapted, except that a work that constitutes a Collective Work will not be considered a Derivative Work for the purpose of this License. For the avoidance of doubt, where the Work is a musical composition or sound recording, the synchronization of the Work in timed-relation with a moving image ("synching") will be considered a Derivative Work for the purpose of this License.
- "Licensor" means the individual or entity that offers the Work under the terms of this License.
- "Original Author" means the individual or entity who created the Work.
- "Work" means the copyrightable work of authorship offered under the terms of this License.
- "You" means an individual or entity exercising rights under this License who has not previously violated the terms of this License with respect to the Work, or who has received express permission from the Licensor to exercise rights under this License despite a previous violation.
- "License Elements" means the following high-level license attributes as selected by Licensor and indicated in the title of this License: Attribution, ShareAlike.
2. Fair Use Rights
Nothing in this license is intended to reduce, limit, or restrict any rights arising from fair use, first sale or other limitations on the exclusive rights of the copyright owner under copyright law or other applicable laws.
3. License Grant
Subject to the terms and conditions of this License, Licensor hereby grants You a worldwide, royalty-free, non-exclusive, perpetual (for the duration of the applicable copyright) license to exercise the rights in the Work as stated below:
- to reproduce the Work, to incorporate the Work into one or more Collective Works, and to reproduce the Work as incorporated in the Collective Works;
- to create and reproduce Derivative Works;
- to distribute copies or phonorecords of, display publicly, perform publicly, and perform publicly by means of a digital audio transmission the Work including as incorporated in Collective Works;
- to distribute copies or phonorecords of, display publicly, perform publicly, and perform publicly by means of a digital audio transmission Derivative Works.
- For the avoidance of doubt, where the work is a musical composition:
- Performance Royalties Under Blanket Licenses. Licensor waives the exclusive right to collect, whether individually or via a performance rights society (e.g. ASCAP, BMI, SESAC), royalties for the public performance or public digital performance (e.g. webcast) of the Work.
- Mechanical Rights and Statutory Royalties. Licensor waives the exclusive right to collect, whether individually or via a music rights society or designated agent (e.g. Harry Fox Agency), royalties for any phonorecord You create from the Work ("cover version") and distribute, subject to the compulsory license created by 17 USC Section 115 of the US Copyright Act (or the equivalent in other jurisdictions).
- Webcasting Rights and Statutory Royalties. For the avoidance of doubt, where the Work is a sound recording, Licensor waives the exclusive right to collect, whether individually or via a performance-rights society (e.g. SoundExchange), royalties for the public digital performance (e.g. webcast) of the Work, subject to the compulsory license created by 17 USC Section 114 of the US Copyright Act (or the equivalent in other jurisdictions).
The above rights may be exercised in all media and formats whether now known or hereafter devised. The above rights include the right to make such modifications as are technically necessary to exercise the rights in other media and formats. All rights not expressly granted by Licensor are hereby reserved.
4. Restrictions
The license granted in Section 3 above is expressly made subject to and limited by the following restrictions:
- You may distribute, publicly display, publicly perform, or publicly digitally perform the Work only under the terms of this License, and You must include a copy of, or the Uniform Resource Identifier for, this License with every copy or phonorecord of the Work You distribute, publicly display, publicly perform, or publicly digitally perform. You may not offer or impose any terms on the Work that alter or restrict the terms of this License or the recipients' exercise of the rights granted hereunder. You may not sublicense the Work. You must keep intact all notices that refer to this License and to the disclaimer of warranties. You may not distribute, publicly display, publicly perform, or publicly digitally perform the Work with any technological measures that control access or use of the Work in a manner inconsistent with the terms of this License Agreement. The above applies to the Work as incorporated in a Collective Work, but this does not require the Collective Work apart from the Work itself to be made subject to the terms of this License. If You create a Collective Work, upon notice from any Licensor You must, to the extent practicable, remove from the Collective Work any credit as required by clause 4(c), as requested. If You create a Derivative Work, upon notice from any Licensor You must, to the extent practicable, remove from the Derivative Work any credit as required by clause 4(c), as requested.
- You may distribute, publicly display, publicly perform, or publicly digitally perform a Derivative Work only under the terms of this License, a later version of this License with the same License Elements as this License, or a Creative Commons iCommons license that contains the same License Elements as this License (e.g. Attribution-ShareAlike 2.5 Japan). You must include a copy of, or the Uniform Resource Identifier for, this License or other license specified in the previous sentence with every copy or phonorecord of each Derivative Work You distribute, publicly display, publicly perform, or publicly digitally perform. You may not offer or impose any terms on the Derivative Works that alter or restrict the terms of this License or the recipients' exercise of the rights granted hereunder, and You must keep intact all notices that refer to this License and to the disclaimer of warranties. You may not distribute, publicly display, publicly perform, or publicly digitally perform the Derivative Work with any technological measures that control access or use of the Work in a manner inconsistent with the terms of this License Agreement. The above applies to the Derivative Work as incorporated in a Collective Work, but this does not require the Collective Work apart from the Derivative Work itself to be made subject to the terms of this License.
- If you distribute, publicly display, publicly perform, or publicly digitally perform the Work or any Derivative Works or Collective Works, You must keep intact all copyright notices for the Work and provide, reasonable to the medium or means You are utilizing: (i) the name of the Original Author (or pseudonym, if applicable) if supplied, and/or (ii) if the Original Author and/or Licensor designate another party or parties (e.g. a sponsor institute, publishing entity, journal) for attribution in Licensor's copyright notice, terms of service or by other reasonable means, the name of such party or parties; the title of the Work if supplied; to the extent reasonably practicable, the Uniform Resource Identifier, if any, that Licensor specifies to be associated with the Work, unless such URI does not refer to the copyright notice or licensing information for the Work; and in the case of a Derivative Work, a credit identifying the use of the Work in the Derivative Work (e.g., "French translation of the Work by Original Author," or "Screenplay based on original Work by Original Author"). Such credit may be implemented in any reasonable manner; provided, however, that in the case of a Derivative Work or Collective Work, at a minimum such credit will appear where any other comparable authorship credit appears and in a manner at least as prominent as such other comparable authorship credit.
5. Representations, Warranties and Disclaimer
UNLESS OTHERWISE AGREED TO BY THE PARTIES IN WRITING, LICENSOR OFFERS THE WORK AS-IS AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE MATERIALS, EXPRESS, IMPLIED, STATUTORY OR OTHERWISE, INCLUDING, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTIBILITY, FITNESS FOR A PARTICULAR PURPOSE, NONINFRINGEMENT, OR THE ABSENCE OF LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OF ABSENCE OF ERRORS, WHETHER OR NOT DISCOVERABLE. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO SUCH EXCLUSION MAY NOT APPLY TO YOU.
6. Limitation on Liability.
EXCEPT TO THE EXTENT REQUIRED BY APPLICABLE LAW, IN NO EVENT WILL LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY FOR ANY SPECIAL, INCIDENTAL, CONSEQUENTIAL, PUNITIVE OR EXEMPLARY DAMAGES ARISING OUT OF THIS LICENSE OR THE USE OF THE WORK, EVEN IF LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
7. Termination
- This License and the rights granted hereunder will terminate automatically upon any breach by You of the terms of this License. Individuals or entities who have received Derivative Works or Collective Works from You under this License, however, will not have their licenses terminated provided such individuals or entities remain in full compliance with those licenses. Sections 1, 2, 5, 6, 7, and 8 will survive any termination of this License.
- Subject to the above terms and conditions, the license granted here is perpetual (for the duration of the applicable copyright in the Work). Notwithstanding the above, Licensor reserves the right to release the Work under different license terms or to stop distributing the Work at any time; provided, however that any such election will not serve to withdraw this License (or any other license that has been, or is required to be, granted under the terms of this License), and this License will continue in full force and effect unless terminated as stated above.
8. Miscellaneous
- Each time You distribute or publicly digitally perform the Work or a Collective Work, the Licensor offers to the recipient a license to the Work on the same terms and conditions as the license granted to You under this License.
- Each time You distribute or publicly digitally perform a Derivative Work, Licensor offers to the recipient a license to the original Work on the same terms and conditions as the license granted to You under this License.
- If any provision of this License is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this License, and without further action by the parties to this agreement, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable.
- No term or provision of this License shall be deemed waived and no breach consented to unless such waiver or consent shall be in writing and signed by the party to be charged with such waiver or consent.
- This License constitutes the entire agreement between the parties with respect to the Work licensed here. There are no understandings, agreements or representations with respect to the Work not specified here. Licensor shall not be bound by any additional provisions that may appear in any communication from You. This License may not be modified without the mutual written agreement of the Licensor and You.