Cleanups.

This commit is contained in:
Relintai 2024-04-27 11:28:21 +02:00
parent b4b74565a3
commit a1cc43d629
32 changed files with 384 additions and 781 deletions

View File

@ -1,10 +1,8 @@
# Canvas layers
## Viewport and Canvas items
`CanvasItem` is the base for all 2D nodes, be it regular
2D nodes, such as `Node2D`, or `Control`-based UI nodes.
@ -34,8 +32,7 @@ transform. For example:
How can these problems be solved in a single scene tree?
## CanvasLayers
The answer is `CanvasLayer`,
which is a node that adds a separate 2D rendering layer for all its
@ -58,6 +55,7 @@ CanvasLayers are independent of tree order, and they only depend on
their layer number, so they can be instantiated when needed.
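For instance, a HUD that should ignore camera movement can be given its own layer at runtime. A minimal sketch (the `res://hud.tscn` scene path is assumed for illustration):

```
# Put the UI on its own rendering layer so the camera doesn't affect it.
var hud_layer = CanvasLayer.new()
hud_layer.layer = 2  # drawn above the default layer 0
add_child(hud_layer)
hud_layer.add_child(preload("res://hud.tscn").instance())
```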
Note:
CanvasLayers aren't necessary to control the drawing order of nodes.
The standard way to ensure that a node is correctly drawn 'in front' or 'behind' others is to manipulate the
order of the nodes in the scene panel. Perhaps counterintuitively, the topmost nodes in the scene panel are drawn

View File

@ -1,17 +1,14 @@
# Viewport and canvas transforms
## Introduction
This is an overview of the 2D transforms going on for nodes from the
moment they draw their content locally to the time they are drawn onto
the screen. This overview discusses very low level details of the engine.
## Canvas transform
As mentioned in the previous tutorial, `doc_canvas_layers`, every
CanvasItem node (remember that Node2D and Control based nodes use
@ -23,8 +20,7 @@ Also covered in the previous tutorial, nodes are drawn by default in Layer 0,
in the built-in canvas. To put nodes in a different layer, a `CanvasLayer
( CanvasLayer )` node can be used.
## Global canvas transform
Viewports also have a Global Canvas transform (also a
`Transform2D`). This is the master transform and
@ -32,8 +28,7 @@ affects all individual *Canvas Layer* transforms. Generally, this
transform is not of much use, but is used in the CanvasItem Editor
in Pandemonium's editor.
## Stretch transform
Finally, viewports have a *Stretch Transform*, which is used when
resizing or stretching the screen. This transform is used internally (as
@ -46,28 +41,22 @@ convert InputEvent coordinates to local CanvasItem coordinates, the
`CanvasItem.make_input_local()`
function was added for convenience.
## Transform order
For a coordinate in CanvasItem local properties to become an actual
screen coordinate, the following chain of transforms must be applied:
![](img/viewport_transforms2.png)
## Transform functions
Obtaining each transform can be achieved with the following functions:
| Type                             | Transform                               |
|----------------------------------|-----------------------------------------|
| CanvasItem                       | `CanvasItem.get_global_transform()`     |
| CanvasLayer                      | `CanvasItem.get_canvas_transform()`     |
| CanvasLayer+GlobalCanvas+Stretch | `CanvasItem.get_viewport_transform()`   |
Finally, then, to convert a CanvasItem local coordinates to screen
coordinates, just multiply in the following order:
@ -83,8 +72,7 @@ screen coordinates. The recommended approach is to simply work in Canvas
coordinates (`CanvasItem.get_global_transform()`), to allow automatic
screen resolution resizing to work properly.
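Applied in code, the conversion described above can be sketched as follows (from inside a `CanvasItem` script; `local_pos` is a hypothetical local coordinate):

```
# local -> canvas -> screen
var local_pos = Vector2(10, 20)
var screen_pos = get_viewport_transform() * (get_global_transform() * local_pos)
```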
## Feeding custom input events
It is often desired to feed custom input events to the scene tree. With
the above knowledge, it must be done in the following way to work correctly:
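A sketch of converting a local position into a correctly placed mouse event before feeding it to the tree (standard `InputEventMouseButton` fields assumed):

```
var local_pos = Vector2(10, 20)  # local to the CanvasItem
var event = InputEventMouseButton.new()
event.button_index = BUTTON_LEFT
event.pressed = true
# Convert to screen coordinates before feeding the event to the tree.
event.position = get_viewport_transform() * (get_global_transform() * local_pos)
get_tree().input_event(event)
```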

View File

@ -1,10 +1,8 @@
# Particle systems (2D)
## Intro
A simple (but flexible enough for most uses) particle system is
provided. Particle systems are used to simulate complex physical effects,
@ -17,8 +15,7 @@ organic look is the "randomness" associated with each parameter. In
essence, creating a particle system means setting base physics
parameters and then adding randomness to them.
### Particle nodes
Pandemonium provides two different nodes for 2D particles, `Particles2D` and
`CPUParticles2D`.
@ -38,8 +35,7 @@ node to your scene. After creating that node you will notice that only a white d
and that there is a warning icon next to your Particles2D node in the scene dock. This
is because the node needs a ParticlesMaterial to function.
### ParticlesMaterial
To add a process material to your particles node, go to `Process Material` in
your inspector panel. Click on the box next to `Material`, and from the dropdown
@ -52,8 +48,7 @@ white points downward.
![](img/particles1.png)
### Texture
A particle system uses a single texture (in the future this might be
extended to animated textures via spritesheet). The texture is set via
@ -61,11 +56,9 @@ the relevant texture property:
![](img/particles2.png)
## Time parameters
### Lifetime
The time in seconds that every particle will stay alive. When lifetime
ends, a new particle is created to replace it.
@ -78,14 +71,12 @@ Lifetime: 4.0
![](img/paranim15.gif)
### One Shot
When enabled, a Particles2D node will emit all of its particles once
and then never again.
### Preprocess
Particle systems begin with zero particles emitted, then start emitting.
This can be an inconvenience when loading a scene and systems like
@ -93,15 +84,13 @@ a torch, mist, etc. begin emitting the moment you enter. Preprocess is
used to let the system process a given number of seconds before it is
actually drawn the first time.
### Speed Scale
The speed scale has a default value of `1` and is used to adjust the
speed of a particle system. Lowering the value will make the particles
slower while increasing the value will make the particles much faster.
### Explosiveness
If lifetime is `1` and there are 10 particles, it means a particle
will be emitted every 0.1 seconds. The explosiveness parameter changes
@ -115,8 +104,7 @@ creating explosions or sudden bursts of particles:
![](img/paranim18.gif)
### Randomness
All physics parameters can be randomized. Random values range from `0` to
`1`. The formula to randomize a parameter is:
@ -125,24 +113,19 @@ All physics parameters can be randomized. Random values range from `0` to
initial_value = param_value + param_value * randomness
```
### Fixed FPS
This setting can be used to set the particle system to render at a fixed
FPS. For instance, changing the value to `2` will make the particles render
at 2 frames per second. Note this does not slow down the particle system itself.
### Fract Delta
This can be used to turn Fract Delta on or off.
## Drawing parameters
### Visibility Rect
The visibility rectangle controls the visibility of the particles on screen. If this rectangle is outside of the viewport, the engine will not render the particles on screen.
@ -154,8 +137,7 @@ You can have Pandemonium generate a Visibility Rect automatically using the tool
You can control the emit duration with the `Generation Time (sec)` option. The maximum value is 25 seconds. If you need more time for your particles to move around, you can temporarily change the `preprocess` duration on the Particles2D node.
### Local Coords
By default this option is on, and it means that the space that particles
are emitted to is relative to the node. If the node is moved, all
@ -168,18 +150,15 @@ node is moved, already emitted particles are not affected:
![](img/paranim21.gif)
### Draw Order
This controls the order in which individual particles are drawn. `Index`
means particles are drawn according to their emission order (default).
`Lifetime` means they are drawn in order of remaining lifetime.
## ParticlesMaterial settings
### Direction
This is the base direction at which particles emit. The default is
`Vector3(1, 0, 0)` which makes particles emit to the right. However,
@ -193,8 +172,7 @@ particles emit toward the right, then go down because of gravity.
![](img/direction2.png)
### Spread
This parameter is the angle in degrees which will be randomly added in
either direction to the base `Direction`. A spread of `180` will emit
@ -203,20 +181,17 @@ parameter must be greater than 0.
![](img/paranim3.gif)
### Flatness
This property is only useful for 3D particles.
### Gravity
The gravity applied to every particle.
![](img/paranim7.gif)
### Initial Velocity
Initial velocity is the speed at which particles will be emitted (in
pixels/sec). Speed might later be modified by gravity or other
@ -224,49 +199,42 @@ accelerations (as described further below).
![](img/paranim4.gif)
### Angular Velocity
Angular velocity is the initial angular velocity applied to particles.
### Spin Velocity
Spin velocity is the speed at which particles turn around their center
(in degrees/sec).
![](img/paranim5.gif)
### Orbit Velocity
Orbit velocity is used to make particles turn around their center.
![](img/paranim6.gif)
### Linear Acceleration
The linear acceleration applied to each particle.
### Radial Acceleration
If this acceleration is positive, particles are accelerated away from
the center. If negative, they are absorbed towards it.
![](img/paranim8.gif)
### Tangential Acceleration
This acceleration will use the tangent vector to the center. Combining
it with radial acceleration can produce nice effects.
![](img/paranim9.gif)
### Damping
Damping applies friction to the particles, forcing them to stop. It is
especially useful for sparks or explosions, which usually begin with a
@ -274,35 +242,30 @@ high linear velocity and then stop as they fade.
![](img/paranim10.gif)
### Angle
Determines the initial angle of the particle (in degrees). This parameter
is mostly useful when randomized.
![](img/paranim11.gif)
### Scale
Determines the initial scale of the particles.
![](img/paranim12.gif)
### Color
Used to change the color of the particles being emitted.
### Hue variation
The `Variation` value sets the initial hue variation applied to each
particle. The `Variation Random` value controls the hue variation
randomness ratio.
## Emission Shapes
ParticlesMaterials allow you to set an Emission Mask, which dictates
the area and direction in which particles are emitted.
@ -323,8 +286,7 @@ Then select which texture you want to use as your mask:
A dialog box with several settings will appear.
### Emission Mask
Three types of emission masks can be generated from a texture:
@ -344,8 +306,7 @@ Three types of emission masks can be generated from a texture:
![](img/emission_mask_directed_border.gif)
### Emission Colors
`Capture from Pixel` will cause the particles to inherit the color of the mask at their spawn points.

View File

@ -1,10 +1,8 @@
# 2D movement overview
## Introduction
Every beginner has been there: "How do I move my character?" Depending on the
style of game you're making, you may have special requirements, but in general
@ -15,8 +13,7 @@ but the principles will apply to other node types (Area2D, RigidBody2D) as well.
## Setup
Each example below uses the same scene setup. Start with a `KinematicBody2D` with two
children: `Sprite` and `CollisionShape2D`. You can use the Pandemonium icon ("icon.png")
@ -27,8 +24,7 @@ input actions (see `InputEvent ( doc_inputevent )` for details):
![](img/movement_inputs.png)
## 8-way movement
In this scenario, you want the user to press the four directional keys (up/left/down/right
or W/A/S/D) and move in the selected direction. The name "8-way movement" comes from the
@ -83,8 +79,7 @@ Note:
you've set up input actions correctly as described in the
`doc_2d_movement_setup` part of this tutorial.
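A minimal 8-way movement handler can be sketched as follows (the action names `right`, `left`, `up`, `down` are assumed; use whatever you defined in the Input Map):

```
extends KinematicBody2D

export (int) var speed = 200
var velocity = Vector2()

func get_input():
    # Combine the four directional actions into one vector.
    velocity = Vector2()
    velocity.x = Input.get_action_strength("right") - Input.get_action_strength("left")
    velocity.y = Input.get_action_strength("down") - Input.get_action_strength("up")
    # Normalize so diagonal movement is not faster than straight movement.
    velocity = velocity.normalized() * speed

func _physics_process(delta):
    get_input()
    velocity = move_and_slide(velocity)
```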
## Rotation + movement
This type of movement is sometimes called "Asteroids-style" because it resembles
how that classic arcade game worked. Pressing left/right rotates the character,
@ -130,8 +125,7 @@ in the same direction as the body. `rotated()` is a useful vector function
that you can use in many circumstances where you would otherwise need to apply
trigonometric functions.
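A sketch of this "Asteroids-style" scheme (assumed action names; `rotation_speed` in radians per second):

```
extends KinematicBody2D

export (int) var speed = 200
export (float) var rotation_speed = 1.5
var velocity = Vector2()

func _physics_process(delta):
    # left/right turn the body; up moves it forward.
    var rotation_dir = Input.get_action_strength("right") - Input.get_action_strength("left")
    rotation += rotation_dir * rotation_speed * delta
    velocity = Vector2()
    if Input.is_action_pressed("up"):
        # rotated() points the movement vector in the body's facing direction.
        velocity = Vector2(speed, 0).rotated(rotation)
    velocity = move_and_slide(velocity)
```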
## Rotation + movement (mouse)
This style of movement is a variation of the previous one. This time, the direction
is set by the mouse position instead of the keyboard. The character will always
@ -172,8 +166,7 @@ gdscript GDScript
```
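The core of the mouse-facing variation can be sketched like this (a `speed` export as in the previous examples is assumed):

```
func _physics_process(delta):
    # Rotate the body so its local X axis points at the mouse cursor.
    look_at(get_global_mouse_position())
    var velocity = Vector2()
    if Input.is_action_pressed("up"):
        velocity = transform.x * speed  # move along the facing direction
    velocity = move_and_slide(velocity)
```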
## Click-and-move
This last example uses only the mouse to control the character. Clicking
on the screen will cause the player to move to the target location.
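The click-to-move logic can be sketched as follows (the stopping threshold of 5 pixels is an arbitrary choice to avoid jitter around the target):

```
extends KinematicBody2D

export (int) var speed = 200
var target = Vector2()
var velocity = Vector2()

func _input(event):
    if event is InputEventMouseButton and event.pressed:
        target = event.position  # move toward the clicked point

func _physics_process(delta):
    velocity = position.direction_to(target) * speed
    # Only move while we are farther than the threshold from the target.
    if position.distance_to(target) > 5:
        velocity = move_and_slide(velocity)
```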
@ -214,8 +207,7 @@ Tip:
This technique can also be used as the basis of a "following" character.
The `target` position can be that of any object you want to move to.
## Summary
You may find these code samples useful as starting points for your own projects.
Feel free to use them and experiment with them to see what you can make.

View File

@ -1,10 +1,8 @@
# 2D lights and shadows
## Introduction
This tutorial explains how the 2D lighting works in the
`lights and shadows ( https://github.com/Relintai/pandemonium_engine-demo-projects/tree/master/2d/lights_and_shadows )` demo project.
@ -18,8 +16,7 @@ on GitHub. I suggest you download it before starting. Alternatively,
it can be downloaded from the Project Manager. Launch Pandemonium and in the top
bar select "Templates" and search for "2D Lights and Shadows Demo".
## Setup
For this demo we use four textures: two for the lights, one for the shadow casters,
and one for the background. I've included links to them all here if you want to download them
@ -42,8 +39,7 @@ The demo uses a blob to show where the light is and the larger light
image to show the effect of the light upon the rest of the scene.
## Nodes
The demo uses four different nodes:
* `CanvasModulate`
@ -64,8 +60,7 @@ used in other ways, for example masking out parts of the scene.
the scene cast shadows. The shadows appear only on areas covered by the `Light2D` and
their direction is based on the center of the `Light`.
## Lights
`Lights` cover the entire extent of their respective Texture. They use additive
blending to add the color of their texture to the scene.
@ -91,8 +86,7 @@ location of the light source. A child `Sprite` is not necessary to make a
![](img/light_shadow_light_blob.png)
## Shadows
Shadows are made by intersecting a `Light` with a `LightOccluder2D`.
@ -104,8 +98,7 @@ but in reality all you need is a couple of `LightOccluder2Ds`. By itself
the `LightOccluder2D` is
just a black square.
## Step by step
Now that we have covered the basics of the nodes being used, we can now walk step by step through
the process of making a scene like the one found in the demo.

View File

@ -1,10 +1,8 @@
# 2D meshes
## Introduction
In 3D, meshes are used to display the world. In 2D, they are rare as images are used more often.
Pandemonium's 2D engine is a pure two-dimensional engine, so it can't really display 3D meshes directly (although it can be done
@ -18,8 +16,7 @@ You can experiment creating them yourself using `SurfaceTool` from code and disp
Currently, the only way to generate a 2D mesh within the editor is by either importing an OBJ file as a mesh, or converting it from a Sprite.
## Optimizing pixels drawn
This workflow is useful for optimizing 2D drawing in some situations. When drawing large images with transparency, Pandemonium will draw the whole quad to the screen. The large transparent areas will still be drawn.
@ -28,8 +25,7 @@ or layering multiple images on top of each other with large transparent areas (f
Converting to a mesh will ensure that only the opaque parts will be drawn and the rest will be ignored.
## Converting Sprites to 2D meshes
You can take advantage of this optimization by converting a `Sprite` to a `MeshInstance2D`.
Start with an image that contains large amounts of transparency on the edges, like this tree:

View File

@ -1,10 +1,8 @@
Custom drawing in 2D
====================
# Custom drawing in 2D
## Introduction
Pandemonium has nodes to draw sprites, polygons, particles, and all sorts of
stuff. For most cases, this is enough; but not always. Before crying in fear,
@ -27,8 +25,7 @@ Custom drawing in a 2D node is *really* useful. Here are some use cases:
but when you have unusual needs, you will likely need a custom
control.
## Drawing
Add a script to any `CanvasItem`
derived node, like `Control` or
@ -47,8 +44,7 @@ gdscript GDScript
Draw commands are described in the `CanvasItem`
class reference. There are plenty of them.
## Updating
The `draw()` function is only called once, and then the draw commands
are cached and remembered, so further calls are unnecessary.
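When the drawing does need to change, you request a redraw with `update()`, which makes the engine invoke the draw callback again. For example, to redraw every frame:

```
func _process(delta):
    update()  # queue a redraw; _draw() will be called again this frame
```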
@ -94,16 +90,14 @@ gdscript GDScript
```
## An example: drawing circular arcs
We will now use the custom drawing functionality of the Pandemonium Engine to draw
something that Pandemonium doesn't provide functions for. As an example, Pandemonium provides
a `draw_circle()` function that draws a whole circle. However, what about drawing a
portion of a circle? You will have to code a function to perform this and draw it yourself.
#### Arc function
An arc is defined by its support circle parameters, that is, the center position
and the radius. The arc itself is then defined by the angle it starts from
@ -166,8 +160,7 @@ support circle is big, the length of each line between a pair of points will
never be long enough to see them. If that were to happen, we would simply need to
increase the number of points.
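A sketch of such an arc function, approximating the curve with short line segments (the point count of 32 is an arbitrary choice):

```
func draw_circle_arc(center, radius, angle_from, angle_to, color):
    var nb_points = 32
    var points_arc = PoolVector2Array()

    # Sample evenly spaced points along the arc of the support circle.
    for i in range(nb_points + 1):
        var angle_point = deg2rad(angle_from + i * (angle_to - angle_from) / nb_points - 90)
        points_arc.push_back(center + Vector2(cos(angle_point), sin(angle_point)) * radius)

    # Connect consecutive points with line segments.
    for index_point in range(nb_points):
        draw_line(points_arc[index_point], points_arc[index_point + 1], color)
```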
#### Draw the arc on the screen
We now have a function that draws stuff on the screen;
it is time to call it inside the `draw()` function:
@ -188,8 +181,7 @@ Result:
![](img/result_drawarc.png)
#### Arc polygon function
We can take this a step further and not only write a function that draws the plain
portion of the disc defined by the arc, but also its shape. The method is exactly
@ -212,8 +204,7 @@ gdscript GDScript
![](img/result_drawarc_poly.png)
#### Dynamic custom drawing
All right, we are now able to draw custom stuff on the screen. However, it is static;
let's make this shape turn around the center. The solution to do this is simply
@ -305,8 +296,7 @@ gdscript GDScript
Let's run again! This time, the rotation displays fine!
#### Antialiased drawing
Pandemonium offers method parameters in `draw_line( CanvasItem_method_draw_line )`
to enable antialiasing, but it doesn't work reliably in all situations
@ -319,8 +309,7 @@ As a workaround, install and use the
(which also supports antialiased Polygon2D drawing). Note that this add-on relies
on high-level nodes, rather than low-level `draw()` functions.
## Tools
Drawing your own nodes might also be desired while running them in the
editor. This can be used as a preview or visualization of some feature or

View File

@ -1,10 +1,8 @@
# 2D Sprite animation
## Introduction
In this tutorial, you'll learn how to create 2D animated
characters with the AnimatedSprite class and the AnimationPlayer. Typically, when you create or download an animated character, it
@ -20,8 +18,7 @@ Note:
Art for the following examples by https://opengameart.org/users/ansimuz and by
https://opengameart.org/users/tgfcoder
## Individual images with AnimatedSprite
In this scenario, you have a collection of images, each containing one of your
character's animation frames. For this example, we'll use the following
@ -68,8 +65,7 @@ fix this, change the *Speed (FPS)* setting in the SpriteFrames panel to 10.
You can add additional animations by clicking the "New Animation" button and
adding additional images.
### Controlling the animation
Once the animation is complete, you can control the animation via code using
the `play()` and `stop()` methods. Here is a brief example to play the
@ -91,8 +87,7 @@ gdscript GDScript
```
## Sprite sheet with AnimatedSprite
You can also easily animate from a sprite sheet with the class `AnimatedSprite`. We will use this public domain sprite sheet:
@ -126,8 +121,7 @@ Finally, check Playing on the AnimatedSprite in the inspector to see your frog j
![](img/2d_animation_play_spritesheet_animation.png)
## Sprite sheet with AnimationPlayer
Another way that you can animate when using a sprite sheet is to use a standard
`Sprite` node to display the texture, and then animating the
@ -185,8 +179,7 @@ Press "Play" on the animation to see how it looks.
![](img/2d_animation_running.gif)
### Controlling an AnimationPlayer animation
Like with AnimatedSprite, you can control the animation via code using
the `play()` and `stop()` methods. Again, here is an example to play the
@ -218,8 +211,7 @@ Note:
If this turns out to be a problem, after calling `play()`, you can call `advance(0)`
to update the animation immediately.
## Summary
These examples illustrate the two classes you can use in Pandemonium for
2D animation. `AnimationPlayer` is

View File

@ -1,7 +1,6 @@
# Introduction to 3D
Creating a 3D game can be challenging. That extra Z coordinate makes
many of the common techniques that helped to make 2D games simple no
@ -16,8 +15,7 @@ In 3D, math is a little more complex than in 2D, so also checking the
developers, not mathematicians or engineers) will help pave the way for you
to develop 3D games efficiently.
### Spatial node
`Node2D` is the base node for 2D.
`Control` is the base node for everything GUI.
@ -35,8 +33,7 @@ scale.
![](img/tuto_3d2.png)
### 3D content
Unlike 2D, where loading image content and drawing is straightforward,
3D is a little more difficult. The content needs to be created with
@ -44,8 +41,7 @@ special 3D tools (usually referred to as Digital Content Creation tools, or
DCCs) and exported to an exchange file format to be imported in
Pandemonium. This is required since 3D formats are not as standardized as images.
## DCC-created models
.. FIXME: Needs update to properly describe the Pandemonium 3.x workflow
(used to reference a non-existing doc_importing_3d_meshes importer).
@ -59,8 +55,7 @@ The second pipeline is by importing simple .OBJ files as mesh resources,
which can be then put inside a `MeshInstance`
node for display.
## Generated geometry
It is possible to create custom geometry by using the
`ArrayMesh` resource directly. Simply create your arrays
@ -73,8 +68,7 @@ In any case, this method is meant for generating static geometry (models
that will not be updated often), as creating vertex arrays and
submitting them to the 3D API has a significant performance cost.
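A minimal sketch of building an `ArrayMesh` from a vertex array (a single triangle, with an assumed `MeshInstance` child to display it):

```
var vertices = PoolVector3Array([
    Vector3(0, 1, 0),
    Vector3(1, 0, 0),
    Vector3(0, 0, 1),
])

# Fill only the vertex slot of the surface arrays.
var arrays = []
arrays.resize(Mesh.ARRAY_MAX)
arrays[Mesh.ARRAY_VERTEX] = vertices

# Commit the arrays as one surface of a new mesh.
var mesh = ArrayMesh.new()
mesh.add_surface_from_arrays(Mesh.PRIMITIVE_TRIANGLES, arrays)
$MeshInstance.mesh = mesh
```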
## Immediate geometry
If, instead, there is a requirement to generate simple geometry that
will be updated often, Pandemonium provides a special node,
@ -82,8 +76,7 @@ will be updated often, Pandemonium provides a special node,
which provides an OpenGL 1.x style immediate-mode API to create points,
lines, triangles, etc.
## 2D in 3D
While Pandemonium packs a powerful 2D engine, many types of games use 2D in a
3D environment. By using a fixed camera (either orthogonal or
@ -97,8 +90,7 @@ The disadvantage is, of course, that added complexity and reduced
performance in comparison to plain 2D, as well as the lack of reference
of working in pixels.
### Environment
Besides editing a scene, it is often common to edit the environment.
Pandemonium provides a `WorldEnvironment`
@ -106,8 +98,7 @@ node that allows changing the background color, mode (as in, put a
skybox), and applying several types of built-in post-processing effects.
Environments can also be overridden in the Camera.
### 3D viewport
Editing 3D scenes is done in the 3D tab. This tab can be selected
manually, but it will be automatically enabled when a Spatial node is
@ -122,8 +113,7 @@ similar to other tools in the Editor Settings:
![](img/tuto_3d4.png)
## Coordinate system
Pandemonium uses the `metric ( https://en.wikipedia.org/wiki/Metric_system )`
system for everything in 3D, with 1 unit being equal to 1 meter.
@ -146,8 +136,7 @@ means that:
- **Y** is up/down
- **Z** is front/back
## Space and manipulation gizmos
Moving objects in the 3D view is done through the manipulator gizmos.
Each axis is represented by a color: Red, Green, Blue represent X, Y, Z
@ -163,8 +152,7 @@ Some useful keybindings:
or rotating.
- To center the view on the selected object, press `F`.
## View menu
The view options are controlled by the "View" menu in the viewport's toolbar.
@ -178,8 +166,7 @@ To hide a specific type of gizmos, you can toggle them off in the "View" menu.
![](img/tuto_3d6_2.png)
## Default environment
When created from the Project Manager, the 3D environment has a default sky.
@ -189,8 +176,7 @@ Given how physically based rendering works, it is advised to always try to
work with a default environment in order to provide indirect and reflected
light to your objects.
## Cameras
No matter how many objects are placed in the 3D space, nothing will be
displayed unless a `Camera` is
@ -219,8 +205,7 @@ each viewport:
- If an active camera leaves the scene tree, the first camera in
tree-order will take its place.
## Lights
Pandemonium has a limit of up to 8 lights per mesh. Aside from that, there
is no limitation on the number of lights, nor of types of lights, in

View File

@ -1,10 +1,8 @@
# Using 3D transforms
## Introduction
If you have never made 3D games before, working with rotations in three dimensions can be confusing at first.
Coming from 2D, the natural way of thinking is along the lines of *"Oh, it's just like rotating in 2D, except now rotations happen in X, Y and Z"*.
@ -24,13 +22,11 @@ hat).
The idea of this document is to explain why, as well as outlining best practices for dealing with transforms when programming 3D games.
## Problems of Euler angles
While it may seem intuitive that each axis has a rotation, the truth is that it's just not practical.
# Axis order
The main reason for this is that there isn't a *unique* way to construct an orientation from the angles. There isn't a standard mathematical function that
takes all the angles together and produces an actual 3D rotation. The only way an orientation can be produced from angles is to rotate the object angle
@ -58,8 +54,7 @@ If we were to apply rotation in the *X* axis first, and then in *Y*, the effect
Depending on the type of game or effect desired, the order in which you want axis rotations to be applied may differ. Therefore, applying rotations in X, Y, and Z is not enough: you also need a *rotation order*.
# Interpolation
Another problem with using Euler angles is interpolation. Imagine you want to transition between two different camera or enemy positions (including rotations). One logical way to approach this is to interpolate the angles from one position to the next. One would expect it to look like this:
@ -76,15 +71,13 @@ There are a few reasons this may happen:
* Rotations don't map linearly to orientation, so interpolating them does not always result in the shortest path (i.e., to go from `270` to `0` degrees is not the same as going from `270` to `360`, even though the angles are equivalent).
* Gimbal lock is at play (first and last rotated axis align, so a degree of freedom is lost). See `Wikipedia's page on Gimbal Lock ( https://en.wikipedia.org/wiki/Gimbal_lock )` for a detailed explanation of this problem.
# Say no to Euler angles
The result of all this is that you should **not use** the `rotation` property of `Spatial` nodes in Pandemonium for games. It's there to be used mainly in the editor, for coherence with the 2D engine, and for simple rotations (generally just one axis, or even two in limited cases). As much as you may be tempted, don't use it.
Instead, there is a better way to solve your rotation problems.
## Introducing transforms
Pandemonium uses the `Transform` datatype for orientations. Each `Spatial` node contains a `transform` property which is relative to the parent's transform, if the parent is a Spatial-derived type.
@ -123,8 +116,7 @@ The gizmo's arrows show the `X`, `Y`, and `Z` axes (in red, green, and blue resp
For more information on the mathematics of vectors and transforms, please read the `doc_vector_math` tutorials.
# Manipulating transforms
Of course, transforms are not as straightforward to manipulate as angles and have problems of their own.
@ -163,8 +155,7 @@ gdscript GDScript
rotate_object_local(Vector3(1, 0, 0), 0.1)
```
# Precision errors
Doing successive operations on transforms will result in a loss of precision due to floating-point error. This means the scale of each axis may no longer be exactly `1.0`, and they may not be exactly `90` degrees from each other.
@ -189,8 +180,7 @@ gdscript GDScript
transform = transform.scaled(scale)
```
Obtaining information
=====================
# Obtaining information
You might be thinking at this point: **"Ok, but how do I get angles from a transform?"**. The answer again is: you don't. You must do your best to stop thinking in angles.
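Instead of angles, directions are read straight from the basis. A sketch (the `target` node is hypothetical):

```gdscript
extends Spatial

func is_target_in_front(target):
    # The basis axes are the node's local X, Y and Z in parent space.
    var forward = -transform.basis.z
    var to_target = target.translation - translation
    return to_target.dot(forward) > 0.0
```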
@ -238,8 +228,7 @@ gdscript GDScript
All common behaviors and logic can be done with just vectors.
Setting information
===================
# Setting information
There are, of course, cases where you want to set a transform explicitly. Imagine a first person controller or an orbiting camera. Those are definitely done using angles, because you *do want* the rotations to happen in a specific order.
@ -266,8 +255,7 @@ gdscript GDScript
As you can see, in such cases it's even simpler to keep the rotation outside, then use the transform as the *final* orientation.
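For example, a first-person look sketch that accumulates the angles outside the transform and rebuilds the orientation from scratch each frame (the mouse sensitivity and pitch clamp are illustrative values):

```gdscript
extends Spatial

var rot_x = 0.0 # pitch, in radians
var rot_y = 0.0 # yaw, in radians

func _input(event):
    if event is InputEventMouseMotion:
        rot_y -= event.relative.x * 0.002
        rot_x = clamp(rot_x - event.relative.y * 0.002, -1.5, 1.5)
        transform.basis = Basis() # reset rotation
        rotate_object_local(Vector3(0, 1, 0), rot_y) # first rotate around Y
        rotate_object_local(Vector3(1, 0, 0), rot_x) # then around X
```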
Interpolating with quaternions
==============================
# Interpolating with quaternions
Interpolating between two transforms can efficiently be done with quaternions. More information about how quaternions work can be found elsewhere on the Internet. For practical use, it's enough to understand that their main use is closest-path interpolation: given two rotations, a quaternion allows smoothly interpolating between them along the shortest path.
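A sketch of blending the orientations of two transforms via quaternions (the function name and arguments are placeholders):

```gdscript
func slerp_orientation(from_transform, to_transform, weight):
    var a = Quat(from_transform.basis)
    var b = Quat(to_transform.basis)
    var result = from_transform
    # slerp() follows the shortest rotation path between the two.
    result.basis = Basis(a.slerp(b, weight))
    return result
```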
@ -293,8 +281,7 @@ suffer from numerical precision errors.
Quaternions are useful when doing camera/path/etc. interpolations, as the result will always be correct and smooth.
Transforms are your friend
--------------------------
## Transforms are your friend
For most beginners, getting used to working with transforms can take some time. However, once you get used to them, you will appreciate their simplicity and power.

View File

@ -1,7 +1,6 @@
3D rendering limitations
========================
# 3D rendering limitations
See also:
@ -10,15 +9,13 @@ See also:
limitations on 3D rendering compared to desktop platforms.
See `doc_mobile_rendering_limitations` for more information.
Introduction
------------
## Introduction
Due to their focus on performance, real-time rendering engines have many
limitations. Pandemonium's renderer is no exception. To work effectively with those
limitations, you need to understand them.
Texture size limits
-------------------
## Texture size limits
On desktops and laptops, textures larger than 8192×8192 may not be supported on
older devices. You can check your target GPU's limitations on
@ -30,8 +27,7 @@ your texture to display correctly on all platforms, you should avoid using
textures larger than 4096×4096 and use a power of two size if the texture needs
to repeat.
Color banding
-------------
## Color banding
When using the GLES3 or Vulkan renderers, Pandemonium's 3D engine renders internally
in HDR. However, the rendering output will be tonemapped to a low dynamic range
@ -57,8 +53,7 @@ See also:
See `Banding in Games: A Noisy Rant ( http://loopit.dk/banding_in_games.pdf )`
for more details about banding and ways to combat it.
Depth buffer precision
----------------------
## Depth buffer precision
To sort objects in 3D space, rendering engines rely on a *depth buffer* (also
called *Z-buffer*). This buffer has a finite precision: 24-bit on desktop
@ -86,8 +81,7 @@ player.
Transparency sorting
--------------------
## Transparency sorting
In Pandemonium, transparent materials are drawn after opaque materials. Transparent
objects are sorted back to front before being drawn based on the Spatial's
@ -123,8 +117,7 @@ this feature. There are still several ways to avoid this problem:
**PixelAlpha**. This will make the material opaque. This way, it can also
cast shadows.
Multi-sample antialiasing
-------------------------
## Multi-sample antialiasing
Multi-sample antialiasing (MSAA) takes multiple *coverage* samples at the edges
of polygons when rendering objects. It does not increase the number of *color*

View File

@ -1,10 +1,8 @@
Spatial Material
================
# Spatial Material
Introduction
------------
## Introduction
`SpatialMaterial` is a default 3D material that aims to provide most of the features
artists look for in a material, without the need for writing shader code. However,
@ -29,15 +27,13 @@ the mesh.
The *Material Overlay* property will render a material **over** the current one being used by the
mesh. As an example, this can be used to put a transparent shield effect on a mesh.
Flags
-----
## Flags
Spatial materials have many flags determining the general usage of a material.
![](img/spatial_material1.png)
Transparent
~~~~~~~~~~~
### Transparent
In Pandemonium, materials are not transparent unless specifically configured to be.
The main reason behind this is that transparent materials are rendered
@ -54,14 +50,12 @@ specified otherwise. The main settings that enable transparency are:
* Blend mode set to other than "Mix"
* Enabling distance or proximity fade
Use Shadow to Opacity
~~~~~~~~~~~~~~~~~~~~~
### Use Shadow to Opacity
Lighting modifies the alpha so shadowed areas are opaque and non-shadowed
areas are transparent. Useful for overlaying shadows onto a camera feed in AR.
Unshaded
~~~~~~~~
### Unshaded
In most cases it is common for materials to be affected by lighting (shaded).
@ -71,8 +65,7 @@ pure, unlit color.
![](img/spatial_material26.png)
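The same flag can be toggled from code; a hypothetical sketch (the node name is illustrative):

```gdscript
# Give a mesh a flat, unlit orange color.
var mat = SpatialMaterial.new()
mat.flags_unshaded = true
mat.albedo_color = Color(1.0, 0.5, 0.0)
$MeshInstance.material_override = mat
```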
Vertex Lighting
~~~~~~~~~~~~~~~
### Vertex Lighting
Pandemonium has a more or less uniform cost per pixel thanks to depth pre-pass. All
lighting calculations are made by running the lighting shader on every pixel.
@ -90,8 +83,7 @@ can considerably increase rendering performance.
Keep in mind that when vertex lighting is enabled, only directional lighting
can produce shadows (for performance reasons).
No Depth Test
~~~~~~~~~~~~~
### No Depth Test
In order for close objects to appear over far away objects, depth testing
is performed. Disabling it has the result of objects appearing over
@ -103,73 +95,62 @@ and works very well with the *Render Priority* property of Material
![](img/spatial_material3.png)
Use Point Size
~~~~~~~~~~~~~~~
### Use Point Size
This option is only effective when the geometry rendered is made of points
(generally it's made of triangles when imported from 3D DCCs). If so, then
those points can be resized (see below).
World Triplanar
~~~~~~~~~~~~~~~
### World Triplanar
When using triplanar mapping (see below, in the UV1 and UV2 settings),
triplanar is computed in object local space. This option makes triplanar work
in world space.
Fixed Size
~~~~~~~~~~
### Fixed Size
This causes the object to be rendered at the same size no matter the distance.
This is useful mostly for indicators (no depth test and high render priority)
and some types of billboards.
Do Not Receive Shadows
~~~~~~~~~~~~~~~~~~~~~~
### Do Not Receive Shadows
Makes the object not receive any kind of shadow that would otherwise
be cast onto it.
Disable Ambient Light
~~~~~~~~~~~~~~~~~~~~~
### Disable Ambient Light
Makes the object not receive any kind of ambient lighting that would
otherwise light it.
Ensure Correct Normals
~~~~~~~~~~~~~~~~~~~~~~
### Ensure Correct Normals
Fixes normals when non-uniform scaling is used.
Vertex Color
------------
## Vertex Color
This setting allows choosing what is done by default to vertex colors that come
from your 3D modelling application. By default, they are ignored.
![](img/spatial_material4.png)
Use as Albedo
~~~~~~~~~~~~~
### Use as Albedo
Choosing this option means vertex color is used as albedo color.
Is sRGB
~~~~~~~
### Is sRGB
Most 3D DCCs will likely export vertex colors as sRGB, so toggling this
option on will help them look correct.
Parameters
-----------
## Parameters
`SpatialMaterial` also has several configurable parameters to tweak
many aspects of the rendering:
![](img/spatial_material5.png)
Diffuse Mode
~~~~~~~~~~~~
### Diffuse Mode
Specifies the algorithm used by diffuse scattering of light when hitting
the object. The default is *Burley*. Other modes are also available:
@ -188,8 +169,7 @@ the object. The default is *Burley*. Other modes are also available:
![](img/spatial_material6.png)
Specular Mode
~~~~~~~~~~~~~
### Specular Mode
Specifies how the specular blob will be rendered. The specular blob
represents the shape of a light source reflected in the object.
@ -203,8 +183,7 @@ represents the shape of a light source reflected in the object.
![](img/spatial_material7.png)
Blend Mode
~~~~~~~~~~
### Blend Mode
Controls the blend mode for the material. Keep in mind that any mode
other than *Mix* forces the object to go through the transparent pipeline.
@ -217,8 +196,7 @@ other than *Mix* forces the object to go through the transparent pipeline.
![](img/spatial_material8.png)
Cull Mode
~~~~~~~~~
### Cull Mode
Determines which side of the object is not drawn when backfaces are rendered:
@ -236,8 +214,7 @@ Note:
being culled by other faces. To resolve this, enable **Backface Culling** in
Blender's Materials tab, then export the scene to glTF again.
Depth Draw Mode
~~~~~~~~~~~~~~~
### Depth Draw Mode
Specifies when depth rendering must take place.
@ -251,19 +228,16 @@ Specifies when depth rendering must take place.
![](img/material_depth_draw.png)
Line Width
~~~~~~~~~~
### Line Width
When drawing lines, specify the width of the lines being drawn.
This option is not available on most modern hardware.
Point Size
~~~~~~~~~~
### Point Size
When drawing points, specify the point size in pixels.
Billboard Mode
~~~~~~~~~~~~~~
### Billboard Mode
Enables billboard mode for drawing materials. This controls how the object
faces the camera:
@ -279,13 +253,11 @@ faces the camera:
The above options are only enabled for Particle Billboard.
Billboard Keep Scale
~~~~~~~~~~~~~~~~~~~~
### Billboard Keep Scale
Enables scaling a mesh in billboard mode.
Grow
~~~~
### Grow
Grows the object vertices in the direction pointed by their normals:
@ -297,8 +269,7 @@ make it black and unshaded, reverse culling (Cull Front), and add some grow:
![](img/spatial_material11.png)
Use Alpha Scissor
~~~~~~~~~~~~~~~~~
### Use Alpha Scissor
When transparency values other than `0` or `1` are not needed, it's possible to
set a threshold to prevent the object from rendering semi-transparent pixels.
@ -308,15 +279,13 @@ set a threshold to prevent the object from rendering semi-transparent pixels.
This renders the object via the opaque pipeline, which is faster and allows it
to use mid- and post-process effects such as SSAO, SSR, etc.
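In code, this corresponds to the following properties (the threshold value is illustrative):

```gdscript
var mat = SpatialMaterial.new()
# Discard pixels below the threshold instead of blending them,
# keeping the material on the opaque pipeline.
mat.params_use_alpha_scissor = true
mat.params_alpha_scissor_threshold = 0.5
```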
Material colors, maps and channels
----------------------------------
## Material colors, maps and channels
Besides the parameters, what defines materials themselves are the colors,
textures, and channels. Pandemonium supports an extensive list of them. They are
described in detail below:
Albedo
~~~~~~
### Albedo
*Albedo* is the base color for the material, on which all the other settings
operate. When set to *Unshaded*, this is the only color that is visible. In
@ -330,8 +299,7 @@ Albedo color and texture can be used together as they are multiplied.
object transparency. If you use a color or texture with *alpha channel*,
make sure to either enable transparency or *alpha scissoring* for it to work.
Metallic
~~~~~~~~
### Metallic
Pandemonium uses a metallic model over competing models due to its simplicity.
This parameter defines how reflective the material is. The more reflective, the
@ -347,8 +315,7 @@ material completely unreflective, just like in real life.
![](img/spatial_material13.png)
Roughness
~~~~~~~~~
### Roughness
*Roughness* affects the way reflection happens. A value of `0` makes it a
perfect mirror while a value of `1` completely blurs the reflection (simulating
@ -357,8 +324,7 @@ the right combination of *Metallic* and *Roughness*.
![](img/spatial_material14.png)
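A sketch combining the two parameters for a polished-metal look (the surface index and values are illustrative):

```gdscript
var mat = SpatialMaterial.new()
mat.metallic = 1.0   # fully metallic
mat.roughness = 0.1  # close to a mirror finish
$MeshInstance.set_surface_material(0, mat)
```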
Emission
~~~~~~~~
### Emission
*Emission* specifies how much light is emitted by the material (keep in mind this
does not include light surrounding geometry unless `doc_gi_probes` are used).
@ -367,8 +333,7 @@ lighting in the scene.
![](img/spatial_material15.png)
Normal map
~~~~~~~~~~
### Normal map
Normal mapping allows you to set a texture that represents finer shape detail.
This does not modify geometry, only the incident angle for light. In Pandemonium,
@ -389,8 +354,7 @@ Note:
popular engines) can be found
`here ( http://wiki.polycount.com/wiki/Normal_Map_Technical_Details )`.
Rim
~~~
### Rim
Some fabrics have small micro-fur that causes light to scatter around it. Pandemonium
emulates this with the *Rim* parameter. Unlike other rim lighting implementations,
@ -404,8 +368,7 @@ it must be colored. If *Tint* is `0`, the color of the light is used for the
rim. If *Tint* is `1`, then the albedo of the material is used. Using
intermediate values generally works best.
Clearcoat
~~~~~~~~~
### Clearcoat
*This feature is only available when using the GLES3 backend.*
@ -423,8 +386,7 @@ right.
Note:
The effect will be more noticeable in Pandemonium 4.
Anisotropy
~~~~~~~~~~
### Anisotropy
*This feature is only available when using the GLES3 backend.*
@ -434,8 +396,7 @@ aluminum more realistic. It works especially well when combined with flowmaps.
![](img/spatial_material18.png)
Ambient Occlusion
~~~~~~~~~~~~~~~~~~
### Ambient Occlusion
It is possible to specify a baked ambient occlusion map. This map affects how
much ambient light reaches each surface of the object (it does not affect direct
@ -445,8 +406,7 @@ AO map. It is recommended to bake ambient occlusion whenever possible.
![](img/spatial_material19.png)
Depth
~~~~~
### Depth
*This feature is only available when using the GLES3 backend.*
@ -458,8 +418,7 @@ but it produces a realistic depth effect for textures. For best results,
![](img/spatial_material20.png)
Subsurface Scattering
~~~~~~~~~~~~~~~~~~~~~
### Subsurface Scattering
*This feature is only available when using the GLES3 backend.*
@ -469,8 +428,7 @@ liquids, etc.
![](img/spatial_material21.png)
Transmission
~~~~~~~~~~~~
### Transmission
This controls how much light from the lit side (visible to light) is transferred
to the dark side (opposite from the light). This works well for thin objects
@ -478,8 +436,7 @@ such as plant leaves, grass, human ears, etc.
![](img/spatial_material22.png)
Refraction
~~~~~~~~~~~
### Refraction
*This feature is only available when using the GLES3 backend.*
@ -489,8 +446,7 @@ distorting the transparency in a way similar to refraction in real life.
![](img/spatial_material23.png)
Detail
~~~~~~
### Detail
Pandemonium allows using secondary albedo and normal maps to generate a detail
texture, which can be blended in many ways. By combining this with secondary
@ -530,15 +486,13 @@ Normal: This is where you put a normal texture you want to blend. If nothing is
in this slot it will be interpreted as a flat normal map. This can still be used
even if the material does not have normal map enabled.
UV1 and UV2
~~~~~~~~~~~~
### UV1 and UV2
Pandemonium supports two UV channels per material. Secondary UV is often useful for
ambient occlusion or emission (baked light). UVs can be scaled and offset,
which is useful when using repeating textures.
Triplanar Mapping
~~~~~~~~~~~~~~~~~
### Triplanar Mapping
Triplanar mapping is supported for both UV1 and UV2. This is an alternative way
to obtain texture coordinates, sometimes called "Autotexture". Textures are
@ -550,8 +504,7 @@ world triplanar, so the brick texture continues smoothly between them.
![](img/spatial_material25.png)
Proximity and distance fade
----------------------------
## Proximity and distance fade
Pandemonium allows materials to fade by proximity to each other as well as depending
on the distance from the viewer. Proximity fade is useful for effects such as
@ -564,8 +517,7 @@ entire scene is usually not a good idea.
![](img/spatial_material_proxfade.gif)
Render priority
---------------
## Render priority
The rendering order of objects can be changed, although this is mostly
useful for transparent objects (or opaque objects that perform depth draw

View File

@ -1,10 +1,8 @@
3D lights and shadows
=====================
# 3D lights and shadows
Introduction
------------
## Introduction
Light sources emit light that mixes with the materials and produces a visible
result. Light can come from several types of sources in a scene:
@ -19,8 +17,7 @@ result. Light can come from several types of sources in a scene:
The emission color is a material property. You can read more about it
in the `doc_spatial_material` tutorial.
Light nodes
-----------
## Light nodes
There are three types of light nodes: `Directional light`,
`Omni light` and `Spot light`. Let's take a look at the common
@ -41,8 +38,7 @@ Each one has a specific function:
If you don't want disabled objects to cast shadows, adjust the `cast_shadow` property on the
GeometryInstance to the desired value.
Shadow mapping
^^^^^^^^^^^^^^
#### Shadow mapping
Lights can optionally cast shadows. This gives them greater realism (light does
not reach occluded areas), but it can incur a bigger performance cost.
@ -68,7 +64,6 @@ although that may lead to decreased performance.
Tip:
If shadow biasing is a problem in your scene, the following settings are a good starting point:
- Enable **Reverse Cull Face**. This reduces shadow peter-panning significantly
@ -88,8 +83,7 @@ Tip:
If shadow acne is still visible after performing the above tweaks,
try subdividing your meshes further in your 3D modeling software.
Directional light
~~~~~~~~~~~~~~~~~
### Directional light
This is the most common type of light and represents a light source
very far away (such as the sun). It is also the cheapest light to compute and should be used whenever possible
@ -105,8 +99,7 @@ does not affect the lighting at all and can be anywhere.
Every face whose front-side is hit by the light rays is lit, while the others stay dark. Most light types
have specific parameters, but directional lights are pretty simple in nature, so they don't.
Directional shadow mapping
^^^^^^^^^^^^^^^^^^^^^^^^^^
#### Directional shadow mapping
To compute shadow maps, the scene is rendered (only depth) from an orthogonal point of view that covers
the whole scene (or up to the max distance). There is, however, a problem with this approach because objects
@ -164,8 +157,7 @@ Shadowmap size for directional lights can be changed in Project Settings -> Rend
Increasing it can solve bias problems, but decrease performance. Shadow mapping is an art of tweaking.
Omni light
~~~~~~~~~~
### Omni light
Omni light is a point source that emits light spherically in all directions up to a given
radius.
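A sketch of creating one from code (the values are illustrative):

```gdscript
var light = OmniLight.new()
light.omni_range = 10.0
light.light_color = Color(1.0, 0.9, 0.8)
light.shadow_enabled = true
add_child(light)
```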
@ -184,8 +176,7 @@ These two parameters allow tweaking how this works visually in order to find aes
![](img/light_attenuation.png)
Omni shadow mapping
^^^^^^^^^^^^^^^^^^^
#### Omni shadow mapping
Omni light shadow mapping is relatively straightforward. The main issue that needs to be
considered is the algorithm used to render it.
@ -200,8 +191,7 @@ If the objects being rendered are mostly irregular, Dual Paraboloid is usually
enough. In any case, as these shadows are cached in a shadow atlas (more on that at the end), it
may not make a difference in performance for most scenes.
Spot light
~~~~~~~~~~
### Spot light
Spot lights are similar to omni lights, except they emit light only into a cone
(or "cutoff"). They are useful to simulate flashlights,
@ -215,14 +205,12 @@ Spot lights share the same **Range** and **Attenuation** as **OmniLight**, and a
- **Angle**: The aperture angle of the light
- **Angle Attenuation**: The cone attenuation, which helps soften the cone borders.
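These parameters are exposed as properties; a hedged sketch with illustrative values:

```gdscript
var spot = SpotLight.new()
spot.spot_range = 8.0
spot.spot_angle = 35.0             # aperture, in degrees
spot.spot_angle_attenuation = 1.0  # softens the cone border
add_child(spot)
```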
Spot shadow mapping
^^^^^^^^^^^^^^^^^^^
#### Spot shadow mapping
Spots don't need any parameters for shadow mapping. Keep in mind that, at more than 89 degrees of aperture, shadows
stop functioning for spots, and you should consider using an Omni light instead.
Shadow atlas
~~~~~~~~~~~~
### Shadow atlas
Unlike Directional lights, which have their own shadow texture, Omni and Spot lights are assigned to slots of a shadow atlas.
This atlas can be configured in Project Settings -> Rendering -> Quality -> Shadow Atlas.
@ -253,8 +241,7 @@ If the slots in a quadrant are full, lights are pushed back to smaller slots, de
This allocation strategy works for most games, but you may want to use a separate one in some cases (for example, a top-down game where
all lights are around the same size and quadrants may all have the same subdivision).
Shadow filter quality
~~~~~~~~~~~~~~~~~~~~~
### Shadow filter quality
The filter quality of shadows can be tweaked. This can be found in
Project Settings -> Rendering -> Quality -> Shadows.

View File

@ -1,10 +1,8 @@
Reflection probes
=================
# Reflection probes
Introduction
------------
## Introduction
As stated in the `doc_spatial_material`, objects can show reflected or diffuse light.
Reflection probes are used as a source of reflected and ambient light for objects inside their area of influence.
@ -17,8 +15,7 @@ While these probes are an efficient way of storing reflections, they have a few
* They are efficient to render, but expensive to compute. This leads to a default behavior where they only capture on scene load.
* They work best for rectangular shaped rooms or places, otherwise the reflections shown are not as faithful (especially when roughness is 0).
Setting up
----------
## Setting up
Create a ReflectionProbe node and wrap it around the area where you want to have reflections:
@ -61,8 +58,7 @@ use the *Cull Mask* setting:
![](img/refprobe_cullmask.png)
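The same setup can be sketched from code (the extents and mask values are illustrative):

```gdscript
var probe = ReflectionProbe.new()
probe.extents = Vector3(10, 4, 10)  # box of space the probe covers
probe.cull_mask = 1                 # limit which layers are captured
add_child(probe)
```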
Interior vs exterior
--------------------
## Interior vs exterior
If you are using reflection probes in an interior setting, it is recommended
that the **Interior** property be enabled. This stops
@ -77,8 +73,7 @@ Optionally, you can blend this ambient light with the probe diffuse capture by
tweaking the **Ambient Contribution** property (0.0 means pure ambient color,
while 1.0 means pure diffuse capture).
Blending
--------
## Blending
Multiple reflection probes can be used, and Pandemonium will blend them where they overlap using a smart algorithm:
@ -100,8 +95,7 @@ Finally, blending interior and exterior probes is the recommended approach when
levels that combine both interiors and exteriors. Near the door, a probe can
be marked as *exterior* (so it will get sky reflections) while on the inside, it can be interior.
Reflection atlas
----------------
## Reflection atlas
In the current renderer implementation, all probes are the same size and
are fit into a Reflection Atlas. The size and amount of probes can be

View File

@ -1,10 +1,8 @@
Using GIProbe
=============
# Using GIProbe
Introduction
------------
## Introduction
Note:
This feature is only available when using the GLES3 backend.
@ -33,8 +31,7 @@ The main downsides of `GIProbe` are:
- Reflections are voxelized, so they don't look as sharp as with `ReflectionProbe`. However, in exchange they are volumetric, so any room size or shape works for them. Mixing them with Screen Space Reflection also works well.
- They consume considerably more video memory than Reflection Probes, so they must be used with care in the right subdivision sizes.
Setting up
----------
## Setting up
Just like a `ReflectionProbe`, set up the `GIProbe` by wrapping it around
the geometry that will be affected.
@ -59,8 +56,7 @@ Warning:
one-sided walls). For interior levels, enclose your level geometry in a
sufficiently large box and bridge the loops to close the mesh.
Adding lights
-------------
## Adding lights
Unless there are materials with emission, `GIProbe` does nothing by default.
Lights need to be added to the scene to have an effect.
@ -79,8 +75,7 @@ And, as `GIProbe` lighting updates in real-time, this effect is immediate:
![](img/giprobe_indirect_energy_result.png)
Reflections
-----------
## Reflections
For very metallic materials with low roughness, it's possible to see
voxel reflections. Keep in mind that these have far less detail than Reflection
@ -93,8 +88,7 @@ as a full 3-stage fallback-chain. This allows to have precise reflections where
![](img/giprobe_ref_blending.png)
Interior vs exterior
--------------------
## Interior vs exterior
GI Probes normally allow mixing with lighting from the sky. This can be disabled
when turning on the *Interior* setting.
@ -109,8 +103,7 @@ from spreading inside to being ignored.
As complex buildings may mix interiors with exteriors, combining GIProbes
for both parts works well.
Tweaking
--------
## Tweaking
GI Probes support a few parameters for tweaking:
@ -127,8 +120,7 @@ GI Probes support a few parameters for tweaking:
- **Compress** Currently broken. Do not use.
- **Data** Contains the light baked data after baking. If you are saving the data it should be saved as a .res file.
Quality
-------
## Quality
`GIProbe`s are quite demanding. It is possible to use lower quality voxel cone
tracing in exchange for more performance.

View File

@ -1,10 +1,8 @@
Baked lightmaps
===============
# Baked lightmaps
Introduction
------------
## Introduction
Baked lightmaps are an alternative workflow for adding indirect (or fully baked)
lighting to a scene. Unlike the `doc_gi_probes` approach, baked lightmaps
@ -33,8 +31,7 @@ use case. In general, GIProbe is easier to set up and works better with dynamic
objects. For mobile or low-end compatibility, though, baked lightmaps are your
only choice.
Visual comparison
-----------------
## Visual comparison
Here are some comparisons of how BakedLightmap and GIProbe look. Notice that
lightmaps are more accurate, but also suffer from the fact
@ -44,8 +41,7 @@ smoother overall.
![](img/baked_light_comparison.png)
Setting up
----------
## Setting up
First of all, before the lightmapper can do anything, the objects to be baked need
a UV2 layer and a texture size. A UV2 layer is a set of secondary texture coordinates
@ -54,8 +50,7 @@ not share pixels in the texture.
There are a few ways to ensure your object has a unique UV2 layer and texture size:
Unwrap on scene import
~~~~~~~~~~~~~~~~~~~~~~
### Unwrap on scene import
This is probably the best approach overall. The only downside is that, on large
models, unwrapping can take a while on import. Nonetheless, Pandemonium will cache the UV2
@ -86,8 +81,7 @@ Warning:
as these files guarantee that UV2 reimports are consistent across platforms
and engine versions.
Unwrap from within Pandemonium
~~~~~~~~~~~~~~~~~~~~~~~~
### Unwrap from within Pandemonium
Pandemonium has an option to unwrap meshes and visualize the UV channels.
It can be found in the Mesh menu:
@ -97,8 +91,7 @@ It can be found in the Mesh menu:
This will generate a second set of UV2 coordinates which can be used for baking,
and it will also set the texture size automatically.
Unwrap from your 3D DCC
~~~~~~~~~~~~~~~~~~~~~~~
### Unwrap from your 3D DCC
The last option is to do it from your favorite 3D app. This approach is generally
not recommended, but it's mentioned here so that you know it exists.
@ -120,16 +113,14 @@ Be wary that most unwrappers in 3D DCCs are not quality oriented, as they are
meant to work quickly. You will mostly need to use seams or other techniques to
create better unwrapping.
Checking UV2
~~~~~~~~~~~~
### Checking UV2
In the mesh menu mentioned before, the UV2 texture coordinates can be visualized.
If something is failing, make sure to check that the meshes have these UV2 coordinates:
![](img/baked_light_uvchannel.png)
Setting up the scene
--------------------
## Setting up the scene
Before anything is done, a **BakedLightmap** node needs to be added to a scene.
This will enable light baking on all nodes (and sub-nodes) in that scene, even
@ -141,8 +132,7 @@ A sub-scene can be instanced several times, as this is supported by the baker, a
each will be assigned a lightmap of its own (just make sure to respect the rule
about scaling mentioned before):
Configure bounds
~~~~~~~~~~~~~~~~
### Configure bounds
The lightmap needs an approximate volume of the affected area, because it is used to
transfer light to dynamic objects inside it (more on that later). Just
@ -150,8 +140,7 @@ cover the scene with the volume as you do with `GIProbe`:
![](img/baked_light_bounds.png)
Setting up meshes
~~~~~~~~~~~~~~~~~
### Setting up meshes
For a **MeshInstance** node to take part in the baking process, it needs to have
the **Use in Baked Light** property enabled.
@ -160,8 +149,7 @@ the **Use in Baked Light** property enabled.
When auto-generating lightmaps on scene import, this is enabled automatically.
Setting up lights
~~~~~~~~~~~~~~~~~
### Setting up lights
Lights are baked with indirect light by default. This means that shadowmapping
and lighting are still dynamic and affect moving objects, but light bounces from
@ -174,16 +162,14 @@ can be controlled from the **Bake Mode** menu in lights:
The modes are:
Disabled
^^^^^^^^
#### Disabled
The light is ignored when baking lightmaps. Keep in mind hiding a light will have
no effect for baking, so this must be used instead of hiding the Light node.
This is the mode to use for dynamic lighting effects such as explosions and weapon effects.
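From code, the corresponding property is `light_bake_mode`; a sketch (the node path is hypothetical):

```gdscript
# Exclude a dynamic effect light from lightmap baking.
$Explosion/OmniLight.light_bake_mode = Light.BAKE_DISABLED
```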
Indirect
^^^^^^^^
#### Indirect
This is the default mode, and is a compromise between performance and real-time
friendliness. Only indirect lighting will be baked. Direct light and shadows are
@ -193,8 +179,7 @@ This mode allows performing *subtle* changes to a light's color, energy and
position while still looking fairly correct. For example, you can use this
to create flickering static torches that have their indirect light baked.
All
^^^
#### All
Both indirect and direct lighting will be baked. Since static surfaces can skip
lighting and shadow computations entirely, this mode provides the best
@ -227,8 +212,7 @@ The light's **Size** property is ignored for real-time shadows; it will only aff
shadows. When the **Size** property is changed, lightmaps must be baked again to
make changes visible.
Baking
------
## Baking
To begin the bake process, just push the **Bake Lightmaps** button on top
when selecting the BakedLightmap node:
@ -238,8 +222,8 @@ when selecting the BakedLightmap node:
This can take from seconds to minutes (or hours) depending on scene size, bake
method and quality selected.
Balancing bake times with quality
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
### Balancing bake times with quality
Since high-quality bakes can take a very long time (up to several hours for large complex scenes),
it is recommended to use lower quality settings at first. Then, once you are confident
@ -264,8 +248,7 @@ Note:
For example, on a system with 8 logical CPU cores, adjusting the setting to
`-1` will use 7 CPU threads for lightmap baking.
Configuring bake
~~~~~~~~~~~~~~~~
### Configuring bake
Several more options are present for baking:
@ -274,8 +257,7 @@ Several more options are present for baking:
is *touching* the bake extents will have lightmaps baked for it, but dynamic
object capture will only work within the extents.
Tweaks
^^^^^^
#### Tweaks
- **Quality:** Four bake quality modes are provided: Low, Medium, High, and Ultra.
Higher quality takes more time, but results in a better-looking lightmap with
@ -310,8 +292,7 @@ Tweaks
*lower-resolution* lightmaps, which result in faster bake times and lower file
sizes at the cost of blurrier indirect lighting and shadows.
Atlas
^^^^^
#### Atlas
- **Generate:** If enabled, a texture atlas will be generated for the lightmap.
This results in more efficient rendering, but is only compatible with the
@ -323,8 +304,7 @@ Atlas
in a more efficient atlas, but are less compatible with old/low-end hardware.
If in doubt, leave this setting on its default value (4096).
Capture
^^^^^^^
#### Capture
- **Enabled:** This enables probe capture so that dynamic objects can *receive* indirect lighting.
Regardless of this setting's value, dynamic objects will not be able to
@ -340,8 +320,7 @@ Capture
dynamic objects. Adjust this value depending on your scene to make dynamic
objects better fit with static baked lighting.
Data
^^^^
#### Data
- **Light Data**: Contains the light baked data after baking. Textures are saved
to disk, but this also contains the capture data for dynamic objects, which can
Tip:
to perform post-processing if needed. However, keep in mind that changes to
the EXR file will be lost when baking lightmaps again.
## Dynamic objects
In other engines or lightmapper implementations, you are generally required to
manually place small objects called "lightprobes" all around the level to
# Environment and post-processing
Pandemonium 3 provides a redesigned Environment resource, as well as a new
post-processing system with many available effects right out of the box.
## Environment
The Environment resource stores all the information required for controlling
rendering environment. This includes sky, ambient lighting, tone mapping,
effects, and adjustments. By itself it does nothing, but it becomes enabled once
used in one of the following locations in order of priority:
#### Camera node
An Environment can be set to a camera. It will have priority over any other setting.
This is mostly useful when wanting to override an existing environment,
but in general it's a better idea to use the option below.
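As a sketch, an Environment resource can also be assigned to a camera from a script. The node path and resource file name below are placeholders for whatever exists in your project:

```gdscript
# Assign a saved Environment resource to a specific Camera node.
# "res://my_environment.tres" is a placeholder path.
func _ready():
    var env = load("res://my_environment.tres")
    $Camera.environment = env
```

Clearing the property (`$Camera.environment = null`) makes the camera fall back to the WorldEnvironment or default environment again.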
#### WorldEnvironment node
The WorldEnvironment node can be added to any scene, but only one can exist per
active scene tree. Adding more than one will result in a warning.
Any Environment added has higher priority than the default Environment
(explained below). This means it can be overridden on a per-scene basis,
which makes it quite useful.
#### Default environment
A default environment can be set, which acts as a fallback when no Environment
was set to a Camera or WorldEnvironment.
New projects created from the Project Manager come with a default environment
(`default_env.tres`). If one needs to be created, save it to disk before
referencing it here.
## Environment options
Following is a detailed description of all environment options and how they
are intended to be used.
#### Background
The Background section contains settings on how to fill the background (parts of
the screen where objects were not drawn). In Pandemonium 3.0, the background not only
There are many ways to set the background:
- **Sky** lets you define a panorama sky (a 360 degree sphere texture) or a procedural sky (a simple sky featuring a gradient and an optional sun). Objects will reflect it and absorb ambient light from it.
- **Color+Sky** lets you define a sky (as above), but uses a constant color value for drawing the background. The sky will only be used for reflection and ambient light.
#### Ambient Light
Ambient (as defined here) is a type of light that affects every piece of geometry
with the same intensity. It is global and independent of lights that might be
Using one of the methods described above, objects get constant ambient lighting
replaced by ambient light from the probes.
#### Fog
Fog, as in real life, makes distant objects fade away into a uniform color. The
physical effect is actually pretty complex, but Pandemonium provides a good approximation.
In practice, it makes light stand out more across the fog.
![](img/environment_fog_transmission.png)
#### Tonemap
*This feature is only available when using the GLES3 backend.*
between `6.0` and `8.0`. Higher values result in less blown out highlights,
but make the scene appear slightly darker as a whole.
#### Auto Exposure (HDR)
*This feature is only available when using the GLES3 backend.*
defaults, but you can still tweak them:
- **Max Luma:** Maximum luminance that auto exposure will aim to adjust for.
- **Speed:** Speed at which luminance corrects itself. The higher the value, the faster correction happens.
## Mid- and post-processing effects
A large amount of widely-used mid- and post-processing effects are supported
in the Environment.
#### Screen-Space Reflections (SSR)
*This feature is only available when using the GLES3 backend.*
Keep in mind that screen-space-reflections only work for reflecting opaque geometry. Transparent objects can't be reflected.
#### Screen-Space Ambient Occlusion (SSAO)
*This feature is only available when using the GLES3 backend.*
Tweaking SSAO is possible with several parameters:
- **Blur:** Type of blur kernel used. The 1x1 kernel is a simple blur that preserves local detail better, but is not as efficient (generally works better with the high quality setting above), while 3x3 will soften the image better (with a bit of dithering-like effect), but does not preserve local detail as well.
- **Edge Sharpness**: This can be used to preserve the sharpness of edges (avoids areas without AO on creases).
#### Depth of Field / Far Blur
This effect simulates focal distance on high-end cameras. It blurs objects behind
a given range. It has an initial **Distance** with a **Transition** region
The **Amount** parameter controls the amount of blur. For larger blurs, tweaking
the **Quality** may be needed in order to avoid artifacts.
#### Depth of Field / Near Blur
This effect simulates focal distance on high-end cameras. It blurs objects close
to the camera (it acts in the opposite direction to far blur).
![](img/environment_mixed_blur.png)
#### Glow
In photography and film, when light amount exceeds the maximum supported by the
media (be it analog or digital), it generally bleeds outwards to darker regions
![](img/environment_glow_bicubic.png)
#### Adjustments
At the end of processing, Pandemonium offers the possibility to do some standard
image adjustments.
# High dynamic range lighting
## Introduction
Normally, an artist does all the 3D modelling, then all the texturing,
looks at their awesome looking model in the 3D DCC and says "looks
Note:
For advanced users, it is still possible to get a non-tonemapped image
of the viewport with full HDR data, which can then be saved to an OpenEXR file.
## Computer displays
Almost all displays require a nonlinear encoding for the code values sent
to them. The display in turn, using its unique transfer characteristic,
the wider dynamic range of the game engine's scene output using the simple
transfer function of the display. A more complex approach to encoding
is required.
## Scene linear & asset pipelines
Working in scene-linear sRGB is not as simple as just pressing a switch. First,
imported image assets must be converted to linear light ratios on import. Even
as textures, depending on how they were generated.
There are two ways to do this:
### sRGB transfer function to display linear ratios on image import
This is the easiest method of using sRGB assets, but it's not the most ideal.
One issue with this is loss of quality. Using 8 bits per channel to represent
linear light ratios is not sufficient to quantize the values correctly.
These textures may also be compressed later, which can exacerbate the problem.
### Hardware sRGB transfer function to display linear conversion
The GPU will do the conversion after reading the texel using floating-point.
This works fine on PC and consoles, but most mobile devices don't support it,
or they don't support it on compressed texture formats (iOS for example).
### Scene linear to display-referred nonlinear
After all the rendering is done, the scene linear render requires transforming
to a suitable output such as an sRGB display. To do this, enable sRGB conversion
conversions must always be **both** enabled. Failing to enable one of them will
result in horrible visuals suitable only for avant-garde experimental
indie games.
## Parameters of HDR
HDR settings can be found in the `Environment`
resource. Most of the time, these are found inside a
# Using MultiMeshInstance
## Introduction
In a normal scenario, you would use a `MeshInstance`
node to display a 3D mesh like a human model for the main character, but in some
MultiMeshInstance, as the name suggests, creates multiple copies of a
MeshInstance over a surface of a specific mesh. An example would be having a
tree mesh populate a landscape mesh with trees of random scales and orientations.
## Setting up the nodes
The basic setup requires three nodes: the MultiMeshInstance node
and two MeshInstance nodes.
Click it and select *Populate surface* in the dropdown menu. A new settings window will appear:
![](img/multimesh_settings.png)
## MultiMesh settings
Below are descriptions of the options.
### Target Surface
The mesh used as the target surface on which to place copies of your
source mesh.
### Source Mesh
The mesh you want duplicated on the target surface.
### Mesh Up Axis
The axis used as the up axis of the source mesh.
### Random Rotation
Randomizing the rotation around the up axis of the source mesh.
### Random Tilt
Randomizing the overall rotation of the source mesh.
### Random Scale
Randomizing the scale of the source mesh.
### Scale
The scale of the source mesh that will be placed over the target surface.
### Amount
The amount of mesh instances placed over the target surface.
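The *Populate surface* tool is the usual workflow, but a MultiMesh can also be filled from a script. A minimal sketch; the mesh path and the random placement logic here are illustrative assumptions, not part of the tool:

```gdscript
# Build a MultiMesh with randomly placed instances and assign it
# to a MultiMeshInstance node.
func _ready():
    var mm = MultiMesh.new()
    mm.transform_format = MultiMesh.TRANSFORM_3D # set before instance_count
    mm.mesh = load("res://tree.mesh") # placeholder source mesh
    mm.instance_count = 100 # allocating the instances
    for i in range(mm.instance_count):
        var t = Transform()
        t.origin = Vector3(rand_range(-50, 50), 0, rand_range(-50, 50))
        mm.set_instance_transform(i, t)
    $MultiMeshInstance.multimesh = mm
```

Note that `transform_format` must be set before `instance_count`, since changing the format resets the instance buffer.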
# Occluder Nodes
In addition to occlusion via `doc_rooms_and_portals`, Pandemonium also has the ability to provide basic occlusion using simple geometric `Occluder` nodes. These are geometric shapes that are shown in the editor using gizmos, but are invisible at runtime.
The Occluder node itself is a holder for an OccluderShape resource, which determines the functionality.
Tip:
You will see a yellow warning triangle that lets you know that you must set an OccluderShape from the inspector before the `Occluder` becomes functional.
## OccluderShapeSphere
The sphere is one of the simplest and fastest occluders, and is easy to setup and position. The downside is that the sphere only tends to make sense in certain game level designs, and is more suited to terrain or organic background geometry.
At runtime the spheres can be switched on and off by changing the Occluder node visibility.
A common use case for occluder spheres is providing occlusion on mountainous / hilly terrain. By placing spheres inside mountains you can prevent trees, plants, building and objects rendering behind mountains. With some creativity they can also be used for moving objects such as large spacecraft, planets etc.
## OccluderShapePolygon
The polygon is a generalist occluder. It can be made to work well in almost all situations, and can quickly provide a degree of occlusion culling to most scenes.
As with all geometric occluders, the key to success is to make them large.
Note:
Like all occluders, polygons **can** overlap, and in many cases they will work better if you overlap them (they are more likely to cull objects on boundaries).
### Editing and details
Occluder polygons are edited as a list of points which define a *convex* polygon, on a single plane. In order to confine the polygon to a single plane, the points are defined in 2D space rather than 3D. The orientation, position and scale of the polygon is taken instead from the transform of the `Occluder` Node.
You are not restricted to 4 points; you can add and remove points in the Inspector.
![](img/occluder_shape_polygon2.png)
### Holes
Real world game levels don't always have large continuous areas that should be occluded. Often walls will have a door or windows, caves will have an entrance, etc. In some cases we have to make do by placing several OccluderShapePolygons around such an opening, but Occluder polygons have one more trick up their sleeve - they can have "holes".
The hole can be totally within the polygon (such as a window), or abutting the edge.
Note:
Placing holes is usually far more convenient, and works faster and better at runtime, than creating lots of smaller OccluderShapePolygons.
#### Hole Limits
The main limitation of holes is that there can only be one per polygon. If you have a situation which requires two or more holes, you have a choice:
Tip:
Remember that if you are using more than one polygon, they can overlap, and you should use this to your advantage.
#### How many Occluder polys are needed?
This very much depends on your scene, but generally you can start getting a good benefit from 3 or 4 well placed polygons. After that it is totally up to you how much time you want to spend.
Placing occluders is a bit of an art form, and you will get better at it and learn new tricks the more you work with them.
#### Some ideas:
- Build your levels to take advantage of occlusion.
This is one of the secrets of the pros.
- When in a building with multiple floors, try placing an occluder polygon between each floor, with a hole for where the staircase transitions between them. This can potentially cull out entire floors and greatly improve performance.
- Don't be afraid to extend your occluder polygons far past the edges of visible geometry to cull more objects - for instance far into the ground or sky.
### Using polygons dynamically
Like all geometric occluders, polygons are not confined to static (non-moving) geometry. You can place them on moving objects. You can even change the relative position of the points in realtime.
# 3D text
## Introduction
In a project, there may be times when text needs to be created as
part of a 3D scene and not just in the HUD. Pandemonium provides two
This page does **not** cover how to display a GUI scene in a 3D
environment. For information on how to do that see `this ( https://github.com/Relintai/pandemonium_engine-demo-projects/tree/master/viewport/2d_in_3d )`
demo project.
## Label3D
![](img/label_3d.png)
GeometryInstance3D settings. This is because the node is a quad mesh
as Sprite3D. See `this page ( doc_3d_rendering_limitations_transparency_sorting )`
for more information.
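A Label3D can also be created from a script. A minimal sketch, attached to any Spatial in the scene:

```gdscript
# Create a Label3D at runtime and add it to the current node.
func _ready():
    var label = Label3D.new()
    label.text = "Hello world"
    add_child(label)
```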
## Text mesh
![](img/text_mesh.png)
# Introduction to Rooms and Portals
The rooms and portals system is an optional component of Pandemonium that allows you to partition your 3D game levels into a series of `Room( Room )` s (*aka cells*), and `Portal( Portal )` s. Portals are openings between the rooms that the `Camera( Camera )` (and lights) can see through.
The trade-off for these features is that we have to manually partition our level.
Note:
Pandemonium portals should not be confused with those in the `game of the same name ( https://en.wikipedia.org/wiki/Portal_(video_game) )`. They do not warp space, they simply represent a window that the camera (or lights) can see through.
### Minimizing manual labour
Although the effort involved in creating rooms for a large level may seem daunting, there are several factors which can make this much easier:
The performance benefits (especially in terms of occlusion) follow an L-shaped curve.
In general, when it comes to medium and large-sized levels, it is better to do a little portalling than none at all.
### Some caveats
Note:
The portal system should be considered an **advanced feature** of Pandemonium. You should not attempt to use rooms and portals until you are familiar with the Pandemonium editor, and have successfully made at least a couple of test games.
# First steps with Rooms and Portals
## The RoomManager
Anytime you want to use the portal system, you need to include a special node in your scene tree, called the `RoomManager( RoomManager )`. The RoomManager is responsible for the runtime maintenance of the system, especially converting the objects in your rooms into a *room graph* which is used at runtime to perform occlusion culling and other tasks.
### Room Conversion
This conversion must take place every time you want to activate the system. It does not store the *room graph* in your project (for flexibility and to save memory). You can either trigger it by pressing the **Convert Rooms** button in the editor toolbar (which also has a keyboard shortcut) or by calling the `rooms_convert()` method in the RoomManager. The latter method will be what you use in-game. Note that for safety, best practice is to call `rooms_clear()` before unloading or changing levels.
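A level-change helper following this advice might look like the sketch below. The node names and scene structure are placeholders; only `rooms_convert()` and `rooms_clear()` are the actual RoomManager calls:

```gdscript
# Hypothetical helper: swap levels safely around the portal system.
func change_level(next_level_scene):
    $RoomManager.rooms_clear() # unload the room graph first, for safety
    var old_level = $RoomList
    remove_child(old_level)
    old_level.queue_free()
    var new_level = next_level_scene.instance()
    new_level.name = "RoomList"
    add_child(new_level)
    $RoomManager.rooms_convert() # rebuild the room graph for the new level
```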
![](img/room_manager.png)
### The RoomList
Before we create any rooms, we must first create a node to be the parent of all the static objects, rooms, roomgroups, and so on in our level. This node is referred to as the `RoomList`.
Why do we use a specific branch of the scene tree and not the scene root?
Often you will end up completely replacing the roomlist branch at runtime in your game as you load and unload levels.
## Rooms
### What is a room?
`Room( Room )`\ s are a way of spatially partitioning your level into areas that make sense in terms of level design. Rooms often quite literally *are* rooms (like in a building). Ultimately though, as far as the engine is concerned, a room represents a **non-overlapping** convex volume in which you typically place most of your objects that fall within that area.
A room doesn't need to correspond to a literal room. It could, for example, also be a canyon in an outdoor area or a smaller part of a concave room. With a little imagination, you can use the system in almost any scenario.
### Why convex?
Rooms are defined as convex volumes (or *convex hulls*) because it's trivial to mathematically determine whether a point is within a convex hull. A simple plane check will tell you the distance of a point from a plane. If a point is behind all the planes bounding the convex hull, then by definition it is inside the room. This makes all kinds of things easier in the internals of the system, such as checking which room a camera is within.
![](img/convex_hull.png)
### Why non-overlapping?
If two rooms overlap, and a camera or player is in this overlapping zone, then there is no way to tell which room the object should be in (and hence render from), or be rendered in. This requirement for non-overlapping rooms does have implications for level design.
The system does attempt to cope with overlapping rooms as best as possible.
There is one exception, however, for `internal rooms( doc_rooms_and_portals_internal_rooms )`. You do not have to worry about these to start with.
### How do I create a room?
A `Room( Room )` is a node type that can be added to the scene tree like any other. You can place objects within the room by making them children and grand-children of the Room node.
### How do I define the shape and position of my room convex hull?
Because defining the room bound is the most important aspect of the system, there are THREE methods available to define the shape of a room in Pandemonium:
The automatic method is used whenever a manual bound is not supplied.
![](img/simple_room.png)
## Portals
If you create some rooms, place objects within them, then convert the level in the editor, you will see the objects in the rooms appearing and showing as you move between rooms. There is one problem, however! Although you can see the objects within the room that the camera is in, you can't see to any neighbouring rooms! For that we need portals.
You should therefore place a portal in only one of each pair of neighbouring rooms.
Do not be confused by the arrow. Although the arrow shows which direction the portal faces, most portals will be *two-way*, and can be seen through from both directions. The arrow is more important for ensuring that the portal links to the correct neighbouring room.
### Portal linking
There are two ways to specify which room the portal should link to:
Note:
Portals are defined as a set of 2D points. This ensures that the polygon formed is in a single plane. The transform determines the portal orientation. The points must also form a *convex* polygon. This is enforced by validating the points you specify, ignoring any that do not form a convex shape. This makes editing easier while making it difficult to break the system.
## Trying it out
By now you should be able to create a couple of rooms, add some nodes such as MeshInstances within the rooms, and add a portal between the rooms. Try converting the rooms in the editor and see if you can now view the objects in neighbouring rooms through the portal.
# Using objects in Rooms and Portals
Normally, when you use Pandemonium, all objects that you can see (`VisualInstance( VisualInstance )`\ s) are treated in the same way by the engine. The portal renderer is slightly different, in that it makes a distinction between the different roles objects will have in your game. It makes this distinction to define the `Room( Room )`\ s, and to render and process everything in the most efficient way.
## Portal mode
If you look in the inspector, every VisualInstance in Pandemonium is derived from a `CullInstance( CullInstance )`, where you can set a `PortalMode`. This determines how objects will behave in the portal system.
![](img/cull_instance.png)
### STATIC
The default mode for objects is `STATIC`. Static objects are objects within rooms that will not move throughout the lifecycle of the level. Things like floors, walls, ceilings are good candidates for `STATIC` objects.
### DYNAMIC
Dynamic mode is for objects that are expected to move during the game. But there is a limitation - **they must not move outside of their original room**. These objects are handled very efficiently by the system. Examples might include moving platforms, and elevators.
### ROAMING
Roaming mode is for objects that can move between rooms. Things like players and enemies should be marked as roaming. These are more expensive to calculate than `STATIC` or `DYNAMIC` modes, because the system has to keep track of which room a roaming object is within.
### GLOBAL
Global mode is for objects that you don't want occlusion culled at all. Things like a main player's weapon, bullets and some particle effects are good candidates for `GLOBAL` mode.
### IGNORE
Ignore is a special mode for objects that will be essentially free in the system. Manual bounds (`-bound`) get converted to ignore portal mode automatically. They don't need to show up during the game, but are kept in the scene tree in case you need to convert the level multiple times (e.g. in the Editor). You might also choose to use this for objects that you *only* want to show up in the editor (when RoomManager is inactive).
### Should you place objects within rooms (in the scene tree) or not?
`STATIC` and `DYNAMIC` objects are ideally placed within rooms in the scene tree. The system needs to know which room they are in during conversion as it assumes they will never change room. Placing them within rooms in the scene tree allows you to explicitly tell the system where you want them.
### Autoplace
However, for ease of use, it is also possible to place `STATIC` and `DYNAMIC` objects *outside* the rooms in the scene tree, but within the RoomList branch. The system will attempt to **autoplace** the objects into the appropriate room. This works in most cases but if in doubt, use the explicit approach. The explicit approach is especially needed when dealing with internal rooms, which have some restrictions for sprawling objects.
Note that if you place `STATIC` and `DYNAMIC` objects outside of rooms, they will be autoplaced during conversion.
`ROAMING` and `GLOBAL` objects are recommended to be kept in a branch of the scene tree outside of any rooms or the RoomList. They *can* be placed inside the rooms, but to save confusion, they are normally better kept on their own branch. There are no restrictions on the placement of `IGNORE` objects.
### Object Lifetimes
It is important to note that the lifetime of `STATIC` and `DYNAMIC` objects is tied to the lifetime of the level, between when you call `rooms_convert()` to activate the portal system, and calling `rooms_clear()` to unload the system. This is because quite a bit of pre-processing goes on during the conversion phase in order to render them efficiently.
Objects that are `ROAMING`, `GLOBAL` or `IGNORE` can be freely created and deleted as required.
## Sprawling
Although users can usually ignore the internals of the portal system, they should be aware that it is capable of handling objects that are so big they end up in more than one room. Each object has a central room, but using the AABB or geometry the system can detect when an object extends across a portal into a neighbouring room (or several rooms). This is referred to as **sprawling**.
This means that if the corner of an object extends into a neighbouring room, but the object's main room is not showing (e.g. a train where the end is in a different room), the object will not be culled, and will still be shown. The object will only be culled if it is not present in any of the rooms that are visible.
### Portal Margins
It is hard to place objects exactly at the edges of rooms, and if we chose to sprawl objects to the adjacent room the moment a portal was crossed (even by a very small amount), there would be an unnecessary amount of sprawling, and objects would end up being rendered when not really required. To counter this, portals have an adjustable `margin` over which an object can cross without being considered in the next room. The margin is shown in the editor gizmo as a red translucent area.
You can set the margin globally in the RoomManager. You can also override this margin value in any portal if you need to fine-tune things. As you edit the margin values in the inspector, you should see the margins update in the 3D editor viewport.
### Include in Bound
The support for objects that are larger than a single room has one side effect. You may not want to include some objects in the calculation of the automatic room bound. You can turn this on and off in the inspector for each object. See **Cull Instance > Include In Bound**.
While sprawling works great for large moving objects, it also gives you a lot more leeway in level design. You can for instance create a large terrain section and have it present in multiple rooms, without having to split up the mesh.
## Lighting
In general lights are handled like any other visual instance. They can be placed in rooms, and they will sprawl to affect neighbouring rooms, following the dimensions and direction of the light. The exception to this is `DirectionalLight( DirectionalLight )`\ s. DirectionalLights have no source room as they affect *everywhere*. They should therefore not be placed in a room. As DirectionalLights can be expensive, it is a good idea to turn them off when inside, see the later `doc_rooms_and_portals_roomgroups` section for details on how to do this.
# Advanced Room and Portal usage
## Gameplay callbacks
Although occlusion culling greatly reduces the number of objects that need to be rendered, there are other costs to maintaining objects in a game besides the final rendering. For instance, in Pandemonium, animated objects will still be animated whether they appear on screen or not. This can take up a lot of processing power, especially for objects that use software skinning (where skinning is calculated on the CPU).
The gameplay area is not confined to just the objects you can see in front of you.
This works because if a monster is in an area that is completely out of view for yourself or the monster, you are less likely to care what it is doing.
### How does a monster know whether it is within the gameplay area?
This problem is solved because the portal system contains a subsystem called the **Gameplay Monitor** that can be turned on and off from the `RoomManager( RoomManager )`. When switched on, any roaming objects that move inside or outside the gameplay area (whether by moving themselves, or the camera moving) will receive callbacks to let them know of this change.
@ -36,8 +34,7 @@ Signals are sent just as any other signal. They can be attached to functions usi
In fact, you don't just receive these callbacks for `ROAMING` objects. In addition Rooms and RoomGroups (which can be used to form groups of rooms) can also receive callbacks. For example, you can use this to trigger AI behaviour when the player reaches certain points in a level.
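As a sketch of how a `ROAMING` object might react to these callbacks, a monster could suspend its processing while outside the gameplay area. The signal names below are assumptions based on the Godot 3.x Gameplay Monitor and should be verified against your engine version:

```gdscript
extends KinematicBody

func _ready():
	# Signal names assumed from the Gameplay Monitor; not verified here.
	connect("gameplay_entered", self, "_on_gameplay_entered")
	connect("gameplay_exited", self, "_on_gameplay_exited")

func _on_gameplay_entered():
	# Resume AI and movement when inside the gameplay area.
	set_physics_process(true)

func _on_gameplay_exited():
	# Sleep while far outside the potentially visible set.
	set_physics_process(false)
```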
VisibilityNotifiers / VisibilityEnablers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## VisibilityNotifiers / VisibilityEnablers
Gameplay callbacks have one more useful function. By default in Pandemonium, animation and physics are still processed regardless of whether an object is within view. This can sap performance, especially when using software skinning.
@ -47,10 +44,7 @@ The engine's solution to this problem is the `VisibilityNotifier( VisibilityNoti
What if the VisibilityEnabler could turn off objects when they were occlusion culled? Well it turns out VisibilityEnabler can. All you have to do is enable the **Gameplay Monitor** in the RoomManager and the rest happens automatically.
RoomGroups
~~~~~~~~~~
## RoomGroups
A `RoomGroup( RoomGroup )` is a special node which allows you to deal with a group of rooms at once, instead of having to write code for them individually. This is especially useful in conjunction with gameplay callbacks. The most important use for RoomGroups is to delineate between "inside" and "outside" areas.
@ -65,15 +59,11 @@ This is an example of a simple RoomGroup script to turn on and off a Directional
Tip:
You can apply the same technique for switching on and off weather effects, skyboxes and much more.
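The DirectionalLight example mentioned above could look roughly like this. The notification constants and the node path are assumptions for illustration, not verified against this engine version:

```gdscript
extends RoomGroup

# Assumed path to the sun light; adjust to your scene layout.
onready var _sun := get_node("../DirectionalLight")

func _notification(what):
	# Notification names assumed from the Godot 3.x Gameplay Monitor.
	if what == NOTIFICATION_ENTER_GAMEPLAY:
		_sun.visible = true   # player moved into an "outside" room
	elif what == NOTIFICATION_EXIT_GAMEPLAY:
		_sun.visible = false  # player is fully indoors
```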
Internal Rooms
~~~~~~~~~~~~~~
## Internal Rooms
There is one more trick that RoomGroups have up their sleeve. A very common desire is to have a game level with a mixed outdoor and indoor environment. We have already mentioned that rooms can be used to represent both rooms in a building, and areas of landscape, such as a canyon.
What happens if you wish to have a house in a terrain 'room'?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
### What happens if you wish to have a house in a terrain 'room'?
With the functionality described so far you *can* do it - you would need to place portals around the exterior of the house though, forming needless rooms above the house. This has been done in many games. But what if there was a simpler way?
@ -91,8 +81,7 @@ The only differences:
- Portals of internal rooms are not considered as part of the bound of outer rooms.
- `STATIC` and `DYNAMIC` objects from outer rooms will not sprawl into internal rooms. If you want objects to cross these portals, place them in the internal room. This is to prevent large objects like terrain sections sprawling into entire buildings, and rendering when not necessary.
Internal room example
^^^^^^^^^^^^^^^^^^^^^
### Internal room example
The tent is a simple room inside a terrain room (which contains the ground, the trees etc).
@ -107,8 +96,7 @@ This is perfect for improving performance in open world games. Often your buildi
*Scene is 'Diorama Eco scene' by Odo, with slight changes for illustration purposes.* `CC Attribution ( https://creativecommons.org/licenses/by/4.0/ )`
Internal room scenes
^^^^^^^^^^^^^^^^^^^^
### Internal room scenes
Let us look in detail at another practical example for an open world. We want to place houses (as internal rooms) on an island, but have each house as a self-contained scene containing both the interior *and* the external mesh of the house.

View File

@ -1,8 +1,7 @@
Editing Rooms and Portals
=========================
Example SceneTree
~~~~~~~~~~~~~~~~~
# Editing Rooms and Portals
## Example SceneTree
Putting all the ideas together, here is an example scene tree:
@ -18,13 +17,11 @@ Putting all the ideas together, here is an example scene tree:
Creating room systems in Blender (or other modeling tools)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## Creating room systems in Blender (or other modeling tools)
Although you can create your room system entirely within the editor, you can also build rooms and portals within your modeling tool. There is one small snag - modeling tools such as Blender have no knowledge of Room, RoomGroup and Portal nodes. In order to work around this we use a series of naming conventions. The first time Pandemonium encounters these specially named nodes, it will convert them into Rooms, RoomGroups and Portals.
Postfix convention
^^^^^^^^^^^^^^^^^^
### Postfix convention
- `-room` becomes a `Room( Room )`.
- `-roomgroup` becomes a `RoomGroup( RoomGroup )`.
@ -39,8 +36,7 @@ For example:
- `outside-roomgroup` - create a RoomGroup called "outside".
- `kitchen-portal` - create a Portal leading to the "kitchen" Room.
Portals
^^^^^^^
### Portals
Portals are different from Rooms. In Portals, we need to specify the geometry of the Portal in our modelling tool, in addition to just the name. To do this your "portal-to-be" should be created as a Mesh.
@ -48,8 +44,7 @@ Portal meshes have some restrictions to work properly. They must be convex, and
The portal's naming is quite important. You can either name the portal `-portal` which will attempt to autolink the Portal in Pandemonium, or you can use the name of the Room you wish to link the Portal to as a prefix.
Wildcard
^^^^^^^^
### Wildcard
In most cases, this can be done using a name such as `kitchen-portal`. However, there is one problem. Blender and Pandemonium do not deal well when you have multiple objects with the same name. What happens when we want more than one Portal to lead to the kitchen?
@ -63,16 +58,14 @@ This means we can use the following portal names:
Wildcards work on all of the nodes which use these naming conventions.
Manual bounds
^^^^^^^^^^^^^
### Manual bounds
Manual bounds are a way of explicitly setting the convex hull for a room, and are used if they are present as children of a room in the scene tree. Aside from the postfix, the naming is unimportant. They should be meshes (i.e. MeshInstance in Pandemonium). Bear in mind they will be converted to convex hulls during the conversion process, so they don't have to be perfect.
Tip:
Once used during conversion, they will be converted to the `IGNORE` **Portal Mode** and won't be shown. You can alternatively use **Generate Points** within the editor to convert these to a set of points stored in the room, and delete the original `-bound` MeshInstance.
Portal point editing
~~~~~~~~~~~~~~~~~~~~
## Portal point editing
Portals are defined by a combination of the transform of the portal node, and by a set of points which form the corners.
@ -84,8 +77,7 @@ You can edit these points in the gizmo or inspector to make a better match to th
Room point editing
~~~~~~~~~~~~~~~~~~
## Room point editing
You also have the option to manually edit the points used to define the convex hull of a room. These points are not present by default. You would typically create them by pressing the **Generate Points** button in the editor toolbar when a room is selected. This will transfer the auto bound from the geometry (or manual `-bound` mesh) into the inspector. Once there are points in the inspector, they will be used and override any other method. So if you wish to revert your manual editing, delete all the room's points.
@ -93,48 +85,38 @@ You also have the option to manually edit the points used to define the convex h
Manually editing points can be useful in some situations, especially where the auto-bound doesn't *quite* get the right result you want. It is usually a good idea to use a lot of **Simplification** in the inspector for the Room before generating the points. Be aware though that by default, the **Simplification** value will be inherited from the RoomManager.
RoomManager
~~~~~~~~~~~
## RoomManager
Show Debug
^^^^^^^^^^
### Show Debug
This can be used to control the amount of logging, especially the room conversion logs. Debug will always be set to `false` on exported projects.
Debug Sprawl
^^^^^^^^^^^^
### Debug Sprawl
This mode will only display meshes that are sprawling through portals from the current camera room. Large statics that cross portals are usually the ones you want to sprawl. Typical examples might be terrain mesh areas, or large floor or ceiling meshes. You usually don't want things like door frames to sprawl to the adjacent room - that is what fine tuning the **Portal Margin** is for.
Merge Meshes
^^^^^^^^^^^^
### Merge Meshes
To keep drawcalls to a minimum, the system offers the option to automatically merge similar `STATIC` meshes within a room (also known as *static batching*). This can increase performance in many cases. The culling accuracy is reduced, but as a room is a fairly logical unit for culling, this trade off usually works in your favor.
Plane Simplification
^^^^^^^^^^^^^^^^^^^^
### Plane Simplification
In some cases, the convex hulls automatically generated for rooms may contain a very large number of planes, particularly if you use curved surfaces. This is not ideal because it slows down the system. This option can simplify hulls. The degree of simplification can be selected by the user, between `0` (no simplification) and `1` (maximum simplification). You can also override this value in individual rooms.
Portals
~~~~~~~
## Portals
Portal Active
^^^^^^^^^^^^^
### Portal Active
Portals can be turned on and off at runtime. This is especially useful if you have doors that can open and close.
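For example, a door script might toggle its portal as it opens and closes. The `portal_active` property name is assumed from the Godot 3.x `Portal` node, and the node path is illustrative:

```gdscript
extends Spatial

# Assumed: the Portal node sits in the doorway, as a child of the door.
onready var _portal := $Portal

func _on_door_opened():
	_portal.portal_active = true   # allow rendering through the doorway

func _on_door_closed():
	_portal.portal_active = false  # cull everything behind the closed door
```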
Two Way
^^^^^^^
### Two Way
Portals can either be two-way or one-way. The default two-way portals are quicker to set up in most circumstances, but one-way portals can be useful in some cases. For example, you can use one-way portals to create windows that can be seen out of, but not seen into. This can help performance when viewing buildings from outdoors.
Particle Systems
~~~~~~~~~~~~~~~~
## Particle Systems
Be aware that when placing `STATIC` particle systems, the AABB on conversion may have zero size. This means the particle system may be unexpectedly culled early. To prevent this, either set the particle system `portal mode` to `DYNAMIC`, or alternatively, add an **Extra Cull Margin** to the particle system in the Geometry Inspector.
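Either workaround can be applied from a script; the property and enum names below are assumptions based on the Godot 3.x `CullInstance` and `GeometryInstance` APIs:

```gdscript
# Option 1 (assumed API): make the particle system a DYNAMIC portal object.
$Particles.portal_mode = CullInstance.PORTAL_MODE_DYNAMIC

# Option 2 (assumed API): grow the cull margin so the zero-size AABB
# no longer causes early culling.
$Particles.extra_cull_margin = 2.0
```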
Multimeshes
~~~~~~~~~~~
## Multimeshes
Note that multimeshes will be culled as a group, rather than individually. You should therefore attempt to keep them localised to the same area wherever possible.

View File

@ -1,17 +1,15 @@
Rooms and Portals example
=========================
# Rooms and Portals example
Download this tutorial project:
`Simple Portals Example ( https://github.com/lawnjelly/pandemonium-demo-projects/tree/portals_simple_demo/3d/portals/room_and_portals_simple_example )`.
Introduction
~~~~~~~~~~~~
## Introduction
This tutorial will introduce you to building a "Hello World" room system with two rooms, and a portal in between.
Step 1
~~~~~~
## Step 1
![](tutorial_simple/img/tutorial_simple_1.png)
@ -25,8 +23,7 @@ Step 1
- Create a `MeshInstance( MeshInstance )` for the floor. Create a box by adding a CubeMesh resource to the MeshInstance. Scale and position it to form a floor.
- Create MeshInstances for the walls. Create more box meshes for this, then scale and position them. Be sure to leave an opening on one side. You will need to create two wall segments to do this on that side.
Step 2
~~~~~~
## Step 2
![](tutorial_simple/img/tutorial_simple_2.png)
@ -35,8 +32,7 @@ Step 2
- Rotate and position the second room so that the openings line up.
- Rename the second room to `Lounge`.
Step 3
~~~~~~
## Step 3
![](tutorial_simple/img/tutorial_simple_3.png)
@ -45,8 +41,7 @@ Step 3
- Scale and position the portal using the node `Transform` in the inspector, so it fits within the opening between the two rooms.
- The portal plane should face *outward* from the source room, i.e. towards the lounge. This direction is indicated by the arrow in the editor gizmo, and portal gizmo's color.
Step 4
~~~~~~
## Step 4
![](tutorial_simple/img/tutorial_simple_4.png)
@ -56,8 +51,7 @@ Step 4
- Boxes also have a green SpatialMaterial assigned to them to make them stand out more from the rest of the room.
- Let's also create an `OmniLight( OmniLight )` so it will be autoplaced in one of the rooms.
Step 5
~~~~~~
## Step 5
![](tutorial_simple/img/tutorial_simple_5.png)
@ -65,8 +59,7 @@ Step 5
- Select the RoomManager and look in the Inspector window in the **Paths** section.
- You need to assign the **Room List** to point to the RoomList node we created earlier (which is the parent of all the rooms).
Step 6
~~~~~~
## Step 6
![](tutorial_simple/img/tutorial_simple_6.png)
@ -76,13 +69,11 @@ Step 6
- You can see a log of the conversion process in the output window. This is helpful for finding problems.
- If you now move the editor camera inside the rooms, you should see the meshes in the opposite room being culled depending on what you can see through the portal.
Conclusion
~~~~~~~~~~
## Conclusion
This concludes this simple tutorial. Don't be afraid to experiment with the new room system you have created.
Some things to try
^^^^^^^^^^^^^^^^^^
### Some things to try
- Create different types of geometry. CSG nodes, Particle systems, and Multimeshes are all supported by the portal system.
- Try creating a Camera and adding it to the scene. If you run the scene you will notice that the portal culling is not active. This is because the `room graph` must be created each time you load a level, by converting the rooms. Instead of using a button in the editor, in real games you call the RoomManager function `rooms_convert()` to convert the level. Try this out with a script, perhaps running within a `_ready()` function.
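A minimal conversion script could be attached to the scene root; the `RoomManager` node path is an assumption for illustration:

```gdscript
extends Spatial

# Assumed path; point this at your RoomManager node.
onready var _room_manager := $RoomManager

func _ready():
	# Build the room graph at runtime so portal culling is active in-game.
	_room_manager.rooms_convert()
```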

View File

@ -1,29 +1,25 @@
Procedural geometry
===================
# Procedural geometry
There are many ways to procedurally generate geometry in Pandemonium. In this tutorial series
we will explore a few of them. Each technique has its own benefits and drawbacks, so
it is best to understand each one and how it can be useful in a given situation.
.. toctree::
:maxdepth: 1
:name: toc-procedural_geometry
[Array Mesh](02_arraymesh.md)
arraymesh
meshdatatool
surfacetool
immediategeometry
[Mesh Data Tool](03_meshdatatool.md)
What is geometry?
-----------------
[Surface Tool](04_surfacetool.md)
[Immediate Geometry](05_immediategeometry.md)
## What is geometry?
Geometry is a fancy way of saying shape. In computer graphics, geometry is typically represented
by an array of positions called "vertices". In Pandemonium, geometry is represented by Meshes.
What is a Mesh?
---------------
## What is a Mesh?
Many things in Pandemonium have mesh in their name: the `Mesh`,
the `MeshInstance`, and
@ -42,22 +38,19 @@ using a MultiMeshInstance is that each of your mesh's surfaces are limited to on
all instances. It uses an instance array to store different colors and transformations for each
instance, but all the instances of each surface use the same material.
What a Mesh is
--------------
## What a Mesh is
A Mesh is composed of one or more surfaces. A surface is an array composed of multiple sub-arrays
containing vertices, normals, UVs, etc. Normally the process of constructing surfaces and meshes is
hidden from the user in the `VisualServer`, but with ArrayMeshes, the user can construct a Mesh
manually by passing in an array containing the surface information.
Surfaces
^^^^^^^^
### Surfaces
Each surface has its own material. Alternatively, you can override the material for all surfaces
in the Mesh when you use a MeshInstance using the `material_override` property.
Surface array
^^^^^^^^^^^^^
### Surface array
The surface array is an array of length `ArrayMesh.ARRAY_MAX`. Each position in the array is
filled with a sub-array containing per-vertex information. For example, the array located at
@ -72,37 +65,32 @@ of indices which maps out how to construct the triangles from the vertex array.
array is faster, but it means you have to share vertex data between triangles, which is not always desired
(e.g. when you want per-face normals).
Tools
-----
## Tools
Pandemonium provides different ways of accessing and working with geometry. More information on each will
be provided in the following tutorials.
ArrayMesh
^^^^^^^^^
### ArrayMesh
The ArrayMesh resource extends Mesh to add a few different quality of life functions and, most
importantly, the ability to construct a Mesh surface through scripting.
For more information about the ArrayMesh, please see the `ArrayMesh tutorial ( doc_arraymesh )`.
MeshDataTool
^^^^^^^^^^^^
### MeshDataTool
The MeshDataTool is a resource that converts Mesh data into arrays of vertices, faces, and edges that can
be modified at runtime.
For more information about the MeshDataTool, please see the `MeshDataTool tutorial ( doc_meshdatatool )`.
SurfaceTool
^^^^^^^^^^^
### SurfaceTool
The SurfaceTool allows the creation of Meshes using an OpenGL 1.x immediate mode style interface.
For more information about the SurfaceTool, please see the `SurfaceTool tutorial ( doc_surfacetool )`.
ImmediateGeometry
^^^^^^^^^^^^^^^^^
### ImmediateGeometry
ImmediateGeometry is a node that uses an immediate mode style interface (like SurfaceTool) to draw objects. The
difference between ImmediateGeometry and the SurfaceTool is that ImmediateGeometry is a node itself that can be
@ -115,8 +103,7 @@ visualize physics raycasts etc.).
For more information about ImmediateGeometry, please see the `ImmediateGeometry tutorial ( doc_immediategeometry )`.
Which one should I use?
-----------------------
## Which one should I use?
Which approach you use depends on what you are trying to do and what kind of procedure you are comfortable with.

View File

@ -1,7 +1,6 @@
Using the ArrayMesh
===================
# Using the ArrayMesh
This tutorial will present the basics of using an `ArrayMesh`.
@ -20,52 +19,17 @@ The possible elements of `arrays` are listed below, together with the position t
See also `Mesh.ArrayType (enum_Mesh_ArrayType )`.
.. list-table::
:class: wrap-normal
:width: 100%
:widths: auto
:header-rows: 1
* - Index
- Mesh.ArrayType Enum
- Array type
* - 0
- `ARRAY_VERTEX`
- `PoolVector3Array`
* - 1
- `ARRAY_NORMAL`
- `PoolVector3Array`
* - 2
- `ARRAY_TANGENT`
- `PoolRealArray` of groups of 4 floats. First 3 floats determine the tangent, and
the last the binormal direction as -1 or 1.
* - 3
- `ARRAY_COLOR`
- `PoolColorArray`
* - 4
- `ARRAY_TEX_UV`
- `PoolVector2Array`
* - 5
- `ARRAY_TEX_UV2`
- `PoolVector2Array`
* - 6
- `ARRAY_BONES`
- `PoolRealArray` of groups of 4 ints. Each group lists indexes of 4 bones that affect a given vertex.
* - 7
- `ARRAY_WEIGHTS`
- `PoolRealArray` of groups of 4 floats. Each float lists the weight the corresponding bone in `ARRAY_BONES` has on a given vertex.
* - 8
- `ARRAY_INDEX`
- `PoolIntArray`
| Index | Mesh.ArrayType Enum | Array type |
|-------|---------------------|--------------------|
| 0 | `ARRAY_VERTEX` | `PoolVector3Array` |
| 1 | `ARRAY_NORMAL` | `PoolVector3Array` |
| 2 | `ARRAY_TANGENT` | `PoolRealArray` of groups of 4 floats. First 3 floats determine the tangent, and the last the binormal direction as -1 or 1. |
| 3 | `ARRAY_COLOR` | `PoolColorArray` |
| 4 | `ARRAY_TEX_UV` | `PoolVector2Array` |
| 5 | `ARRAY_TEX_UV2` | `PoolVector2Array` |
| 6     | `ARRAY_BONES`       | `PoolRealArray` of groups of 4 ints. Each group lists indexes of 4 bones that affect a given vertex. |
| 7     | `ARRAY_WEIGHTS`     | `PoolRealArray` of groups of 4 floats. Each float lists the weight the corresponding bone in `ARRAY_BONES` has on a given vertex. |
| 8 | `ARRAY_INDEX` | `PoolIntArray` |
The array of vertices (at index 0) is always required. The index array is optional and will only be used if included. We won't use it in this tutorial.
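A minimal surface array built along these lines, describing a single triangle with only the required vertex array (no index array, as noted above):

```gdscript
var arrays := []
arrays.resize(ArrayMesh.ARRAY_MAX)

# Index 0 (ARRAY_VERTEX) is the only mandatory sub-array.
arrays[ArrayMesh.ARRAY_VERTEX] = PoolVector3Array([
	Vector3(0, 1, 0),
	Vector3(1, 0, 0),
	Vector3(0, 0, 1),
])

var mesh := ArrayMesh.new()
mesh.add_surface_from_arrays(Mesh.PRIMITIVE_TRIANGLES, arrays)
```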
@ -75,8 +39,7 @@ four entries to describe a single vertex. These must be exactly four times large
For normal usage, the last two parameters in `add_surface_from_arrays()` are typically left empty.
ArrayMesh
---------
## ArrayMesh
In the editor, create a `MeshInstance` and add an `ArrayMesh` to it in the Inspector.
Normally, adding an ArrayMesh in the editor is not useful, but in this case it allows us to access the ArrayMesh
@ -166,8 +129,7 @@ gdscript GDScript
The code that goes in the middle can be whatever you want. Below we will present some example code
for generating a sphere.
Generating geometry
-------------------
## Generating geometry
Here is sample code for generating a sphere. Although the code is presented in
GDScript, there is nothing Pandemonium specific about the approach to generating it.
@ -237,8 +199,7 @@ gdscript GDScript
# Insert committing to the ArrayMesh here.
```
Saving
------
## Saving
Finally, we can use the `ResourceSaver` class to save the ArrayMesh.
This is useful when you want to generate a mesh and then use it later without having to re-generate it.
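A short sketch of saving the generated mesh; the output path is arbitrary, and `mesh` is assumed to be the ArrayMesh built earlier:

```gdscript
# Save the generated ArrayMesh so it can be loaded later without regeneration.
var err := ResourceSaver.save("res://generated_sphere.mesh", mesh)
if err != OK:
	push_error("Failed to save mesh, error code: %d" % err)
```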

View File

@ -1,7 +1,6 @@
Using the MeshDataTool
======================
# Using the MeshDataTool
The `MeshDataTool` is not used to generate geometry. But it is helpful for dynamically altering geometry, for example
if you want to write a script to tessellate, simplify, or deform meshes.

View File

@ -1,7 +1,5 @@
Using the SurfaceTool
=====================
# Using the SurfaceTool
The `SurfaceTool` provides a useful interface for constructing geometry.
The interface is similar to the `ImmediateGeometry` node. You

View File

@ -1,7 +1,6 @@
Using ImmediateGeometry
=======================
# Using ImmediateGeometry
Unlike the SurfaceTool or ArrayMesh, `ImmediateGeometry` is an actual
node. Being a node makes it quick to add to a scene and get visual output. It uses an OpenGL 1.x-style