symbolic link – How to create a shadow directory

I want to create a shadow directory of a directory D. The shadow directory D1 should have the same directory structure as D, but should not share any nodes with D: each file in D should be represented in D1 by a symlink pointing back to that file in D. The idea is that operations on D1 should never modify D, so that I can freely delete anything in D1 without affecting D. How can I achieve this? I could of course do a simple tree copy of D, but that would duplicate the files.
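
For illustration, this is the kind of mirroring I have in mind, sketched with C++17's std::filesystem (makeShadowDirectory and the paths are just placeholder names; error handling is omitted):

#include <filesystem>

namespace fs = std::filesystem;

// Mirror the directory structure of `source` into `shadow`: directories become real
// directories, every other entry becomes a symlink pointing back at the original,
// so no file data is duplicated and deleting things under `shadow` never touches `source`.
void makeShadowDirectory(const fs::path& source, const fs::path& shadow)
{
    fs::create_directories(shadow);

    for (const auto& entry : fs::recursive_directory_iterator(source))
    {
        const auto target = shadow / fs::relative(entry.path(), source);

        if (entry.is_directory())
            fs::create_directories(target);
        else
            fs::create_symlink(fs::absolute(entry.path()), target);
    }
}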

opengl – Matrix math in cascade shadow mapping

I am implementing the cascaded shadow mapping algorithm and am currently stuck on the matrix transformations: my AABBs, when projected into light space, point in the direction opposite to the light:

AABBs in light projection and view space


I was following the logic described in the Oreon engine video on YouTube and in the NVidia docs.

The algorithm, as I understand it, looks like this:

  1. “cut” the camera frustum into several slices (a small sketch of the split ranges follows the list)
  2. calculate the coordinates of each frustum slice’s corners in world space
  3. calculate the axis-aligned bounding box of each slice in world space (using the vertices from step 2)
  4. create an orthographic projection from the calculated AABBs
  5. using the orthographic projections from step 4 and the light view matrix, calculate the shadow maps (as in: render the scene to the depth buffer for each of the projections)
  6. use the shadow maps to calculate the shadow component of each fragment’s color, using fragmentPosition.z and comparing it to each of the camera frustum’s slices to figure out which shadow map to use
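
For step 1, here is a small sketch of what the split fractions mean, expressed as view-space depth ranges (CascadeRange and makeCascadeRanges are just illustrative names; in my actual code below I interpolate along the frustum edges in world space instead):

#include <cstddef>
#include <vector>

struct CascadeRange
{
    float nearPlane;
    float farPlane;
};

// Linear split: cascade i covers the view-space depth range
// [zNear + (zFar - zNear) * splits[i - 1], zNear + (zFar - zNear) * splits[i]]
std::vector<CascadeRange> makeCascadeRanges(float zNear, float zFar, const std::vector<float>& splits)
{
    std::vector<CascadeRange> ranges;

    for (std::size_t i = 1; i < splits.size(); ++i)
    {
        ranges.push_back({
            zNear + (zFar - zNear) * splits[i - 1],
            zNear + (zFar - zNear) * splits[i]
        });
    }

    return ranges;
}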

I am able to correctly figure out the camera frustum’s vertices in world space:

camera frustum, sliced

The frustum extends further, but the camera clipping distance… well, clips the further slices.

For this, I take a cube in normalized device coordinates and multiply it by the inverse of the product of the camera projection and camera view matrices:

std::array<glm::vec3, 8> _cameraFrustumSliceCornerVertices{
    {
        { -1.0f, -1.0f, -1.0f }, { 1.0f, -1.0f, -1.0f }, { 1.0f, 1.0f, -1.0f }, { -1.0f, 1.0f, -1.0f },
        { -1.0f, -1.0f, 1.0f }, { 1.0f, -1.0f, 1.0f }, { 1.0f, 1.0f, 1.0f }, { -1.0f, 1.0f, 1.0f },
    }
};

I then multiply each vertex $p$ by the inverse of the product $P_{camera} \times V_{camera}$ and divide the result by its $w$ component.

This gives me the vertices of the camera frustum in world space.

To generate the slices, I first tried applying the same logic, but using perspective projections with different near and far distances, with little luck.

I then used vector math to calculate each camera frustum slice by taking the entire camera frustum’s vertices in world space and calculating the vector along each edge of the frustum: $v_i = v_i^{far} - v_i^{near}$.

Then I scale these vectors by the length of the entire camera frustum edge and by the corresponding slice fraction, and add them to the near plane of the entire camera frustum to get the far plane of each slice: $v_i^{near} + v_i \cdot |v_i^{far} - v_i^{near}| \cdot d_i$.

std::vector<float> splits{ { 0.0f, 0.05f, 0.2f, 0.5f, 1.0f } };

const float _depth = 2.0f; // 1.0f - (-1.0f); normalized device coordinates of a view projection cube; zFar - zNear

auto proj = glm::inverse(initialCameraProjection * initialCameraView);

std::array<glm::vec3, 8> _cameraFrustumSliceCornerVertices{
    {
        { -1.0f, -1.0f, -1.0f }, { 1.0f, -1.0f, -1.0f }, { 1.0f, 1.0f, -1.0f }, { -1.0f, 1.0f, -1.0f },
        { -1.0f, -1.0f, 1.0f }, { 1.0f, -1.0f, 1.0f }, { 1.0f, 1.0f, 1.0f }, { -1.0f, 1.0f, 1.0f },
    }
};

std::array<glm::vec3, 8> _totalFrustumVertices;

std::transform(
    _cameraFrustumSliceCornerVertices.begin(),
    _cameraFrustumSliceCornerVertices.end(),
    _totalFrustumVertices.begin(),
    [&](glm::vec3 p) {
        auto v = proj * glm::vec4(p, 1.0f);
        return glm::vec3(v) / v.w;
    }
);

std::array<glm::vec3, 4> _frustumVectors{
    {
        _totalFrustumVertices[4] - _totalFrustumVertices[0],
        _totalFrustumVertices[5] - _totalFrustumVertices[1],
        _totalFrustumVertices[6] - _totalFrustumVertices[2],
        _totalFrustumVertices[7] - _totalFrustumVertices[3],
    }
};

for (auto i = 1; i < splits.size(); ++i)
{
    std::array<glm::vec3, 8> _frustumSliceVertices{
        {
            _totalFrustumVertices[0] + (_frustumVectors[0] * _depth * splits[i - 1]),
            _totalFrustumVertices[1] + (_frustumVectors[1] * _depth * splits[i - 1]),
            _totalFrustumVertices[2] + (_frustumVectors[2] * _depth * splits[i - 1]),
            _totalFrustumVertices[3] + (_frustumVectors[3] * _depth * splits[i - 1]),

            _totalFrustumVertices[0] + (_frustumVectors[0] * _depth * splits[i]),
            _totalFrustumVertices[1] + (_frustumVectors[1] * _depth * splits[i]),
            _totalFrustumVertices[2] + (_frustumVectors[2] * _depth * splits[i]),
            _totalFrustumVertices[3] + (_frustumVectors[3] * _depth * splits[i]),
        }
    };

    // render the thing
}

According to the algorithm, the next part is finding the axis-aligned bounding box (AABB) of each camera frustum slice and projecting it into light view space.

I am able to correctly calculate the AABB of each camera frustum slice in world space:

camera frustum slices' AABBs

This is a rather trivial algorithm that iterates over all the vertices from the previous step and finds the minimal and maximal x, y and z coordinates of a camera frustum slice’s vertices in world space.

float minX = 0.0f, maxX = 0.0f;
float minY = 0.0f, maxY = 0.0f;
float minZ = 0.0f, maxZ = 0.0f;

for (auto i = 0; i < _frustumSliceVertices.size(); ++i)
{
    auto p = _frustumSliceVertices[i];

    if (i == 0)
    {
        minX = maxX = p.x;
        minY = maxY = p.y;
        minZ = maxZ = p.z;
    }
    else
    {
        minX = std::fmin(minX, p.x);
        minY = std::fmin(minY, p.y);
        minZ = std::fmin(minZ, p.z);

        maxX = std::fmax(maxX, p.x);
        maxY = std::fmax(maxY, p.y);
        maxZ = std::fmax(maxZ, p.z);
    }
}

auto _ortho = glm::ortho(minX, maxX, minY, maxY, minZ, maxZ);

std::array<glm::vec3, 8> _aabbVertices{
    {
        { minX, minY, minZ }, { maxX, minY, minZ }, { maxX, maxY, minZ }, { minX, maxY, minZ },
        { minX, minY, maxZ }, { maxX, minY, maxZ }, { maxX, maxY, maxZ }, { minX, maxY, maxZ },
    }
};

std::array<glm::vec3, 8> _frustumSliceAlignedAABBVertices;

std::transform(
    _aabbVertices.begin(),
    _aabbVertices.end(),
    _frustumSliceAlignedAABBVertices.begin(),
    [&](glm::vec3 p) {
        auto v = lightProjection * lightView * glm::vec4(p, 1.0f);
        return glm::vec3(v) / v.w;
    }
);

I then construct an orthographic projection from that data; as per the algorithm, these projections, one per camera frustum slice, will later be used to calculate the shadow maps, i.e. to render the scene to depth textures.

auto _ortho = glm::ortho(minX, maxX, minY, maxY, minZ, maxZ);

To render these AABBs, I tried rendering the view cube, like with the camera frustum, but got some dubious results:

rendering view cube in orthographic projections

Both the position and the size of the AABBs were wrong.

I tried making the AABBs “uniform”, e.g. left = ((maxX - minX) / 2) * -1 and right = ((maxX - minX) / 2) * +1, which only resulted in centering the AABBs around the same origin point (0, 0, 0):

const auto _width = (maxX - minX) / 2.0f;
const auto _height = (maxY - minY) / 2.0f;
const auto _depth = (maxZ - minZ) / 2.0f;

auto _ortho = glm::ortho(-_width, _width, -_height, _height, -_depth, _depth);

AABBs with unified params

I then used the min / max values of each corresponding coordinate instead of +/- 1 in the view cube to get the correct results:

std::array<glm::vec3, 8> _aabbVertices{
    {
        { minX, minY, minZ }, { maxX, minY, minZ }, { maxX, maxY, minZ }, { minX, maxY, minZ },
        { minX, minY, maxZ }, { maxX, minY, maxZ }, { maxX, maxY, maxZ }, { minX, maxY, maxZ },
    }
};

camera frustum slices' AABBs

The last step of the algorithm, though, is not willing to cooperate: I thought that by multiplying each of the AABBs by the light’s view matrix I would align them with the light direction, but all I got was misaligned AABBs:

std::array<glm::vec3, 8> _frustumSliceAlignedAABBVertices;

std::transform(
    _aabbVertices.begin(),
    _aabbVertices.end(),
    _frustumSliceAlignedAABBVertices.begin(),
    [&](glm::vec3 p) {
        auto v = lightView * glm::vec4(p, 1.0f);
        return glm::vec3(v) / v.w;
    }
);

multiplying camera slices AABBs by light view matrix

Only when I multiply by both the light projection matrix and the light view matrix do I get something resembling alignment:

std::array<glm::vec3, 8> _frustumSliceAlignedAABBVertices;

std::transform(
    _aabbVertices.begin(),
    _aabbVertices.end(),
    _frustumSliceAlignedAABBVertices.begin(),
    [&](glm::vec3 p) {
        auto v = lightProjection * lightView * glm::vec4(p, 1.0f);
        return glm::vec3(v) / v.w;
    }
);

AABBs in light projection and view space

Ironically, it seems the direction is opposite to the light’s direction.

Despite my light pointing at the origin (0, 0, 0), the AABBs seem to be projected in reverse order.

I am not entirely sure how to resolve this issue or even why this is happening…
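
For comparison, a variant I have seen described computes the bounding box of the slice corners in light view space rather than in world space, and builds the orthographic projection directly from that box. A rough sketch, reusing the _frustumSliceVertices and lightView names from above (not my actual code):

auto toLightView = [&](const glm::vec3& p) {
    return glm::vec3(lightView * glm::vec4(p, 1.0f));
};

glm::vec3 minP = toLightView(_frustumSliceVertices[0]);
glm::vec3 maxP = minP;

for (const auto& p : _frustumSliceVertices)
{
    const auto v = toLightView(p);
    minP = glm::min(minP, v); // component-wise min / max from GLM
    maxP = glm::max(maxP, v);
}

// in a right-handed view space the light looks down -z, so the distances
// in front of the light are -maxP.z (near) and -minP.z (far)
const auto sliceLightProjection = glm::ortho(minP.x, maxP.x, minP.y, maxP.y, -maxP.z, -minP.z);
const auto sliceLightSpaceMatrix = sliceLightProjection * lightView;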

opengl – How to fix shadow not being cast onto terrain when rendering using default and terrain shaders (depth shader included)?

I have a TerrainShader class and a DefaultShader class, as well as an FBO (framebuffer object) shadow map.

The TerrainShader has all the terrain, light and shadow related calculations, while the DefaultShader has the light and shadow calculations for generic objects.

I have successfully cast a directional shadow map when I use the DefaultShader alone with random cube objects and a plane. The problem is that when I use a terrain rendered with the TerrainShader instead, the shadow is not cast onto the terrain.

Question: Am I using the FBO the correct way, or am I doing it wrong?

Solution Idea (Not yet applied)

  • Merge the terrain and default shaders into one and add a flag for whether an object or the terrain is being rendered? (Still not sure if this is correct.)

Pseudocode (Current successful implementation)

  • Create shadow map fbo
  • Create default shader
  • Create depth shader
  • bind shadow map fbo
  • clear depth
  • render cubes & plane using depth shader (mvp)
  • unbind shadow map fbo
  • clear color and depth
  • render cubes & plane using default shader

Pseudocode (with terrain; shadow not cast onto terrain)

  • Create shadow map fbo
  • Create default shader
  • Create terrain shader
  • Create depth shader
  • bind shadow map fbo
  • clear depth
  • render cubes & plane using depth shader (mvp), excluding the terrain
  • unbind shadow map fbo
  • clear color and depth
  • render cubes using default shader
  • render terrain using terrain shader
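
To make the pseudocode concrete, this is roughly what the two passes look like in my render loop (sketched with learnopengl-style Shader helpers such as use() and setMat4(), and hypothetical renderCubes / renderPlane / renderTerrain helpers; the FBO, texture and size variables are placeholder names):

// pass 1: render the depth map from the light's point of view
glBindFramebuffer(GL_FRAMEBUFFER, shadowMapFBO);
glViewport(0, 0, shadowMapWidth, shadowMapHeight);
glClear(GL_DEPTH_BUFFER_BIT);

depthShader.use();
depthShader.setMat4("lightSpaceMatrix", lightProjection * lightView);
renderCubes(depthShader);
renderPlane(depthShader);
// the terrain is currently excluded from this pass

// pass 2: render the scene normally, sampling the shadow map
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, screenWidth, screenHeight);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glBindTexture(GL_TEXTURE_2D, shadowMapDepthTexture);

defaultShader.use();
renderCubes(defaultShader);

terrainShader.use();
renderTerrain(terrainShader);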

dnd 5e – Can a living shadow be dissipated with light?

Nothing happens.

The living shadow is not a creature. It doesn’t have hit points or an armor class, and most importantly, it always persists and there are no end conditions listed in its description:

The shadow you cast is animate and ever-present, even when lighting conditions would otherwise prevent it.

There isn’t much else to say here; there is just nothing in the gift description that gives any indication that firing a crossbow bolt at it, enchanted or otherwise, will do anything.

opengl – How to move the shadow map with the camera?

I implemented a directional light and a shadow map for that light based on the learnopengl.com tutorials. It is working great, but I would like to move the shadow map with the camera/player, so that I have shadows all over the scene.

What I am trying to do is update the “look at” matrix every frame, based on the camera position, but it is not working properly. Here is the relevant piece of code, which I am using to update the shadow map position:

    glm::mat4 lightProjection = glm::ortho(-20.0f, 20.0f, -20.0f, 20.0f, 1.0f, 7.5f);
    glm::mat4 lightView = glm::lookAt(light.position + camera.position, camera.position, glm::vec3(0.0f, 1.0f, 0.0f));

With the light at position x: -1.0f, y: 4.0f, z: -1.0f:

Saving an isolated prop image with shadow to PNG

Interesting.

I come from a 3D rendering background besides photography, so I am used to thinking in separate layers. My approach would be:

  • Cut out the object in one image, without the shadow.

  • Depending on the background, use the whole image with the shadow as a separate layer. Probably convert it to grayscale if the background is not neutral white.

We can go several ways with the shadows.

  1. Use this grayscale image with the Multiply blending mode, placing it below the object layer.
    This is faster, but only if you work with a layered method.

  2. Use this image as a transparency mask for a totally black layer. You need to invert the image. This second option is the one that can give you a single PNG with the shadows included.

You can play with the levels and curves of this mask to adjust the intensity of the shadow and to clean up the background.

You probably need to paint a bit on the borders of this mask so it does not show behind the clipped object.

html – Why would you ever use the shadow DOM if you can’t apply global styles?

How can you expect to create reusable components with the shadow DOM and also be expected to give each one a separate style? How can anyone share components with each other if that person can’t apply a style on top? I would never use anyone else’s components if they aren’t using my CSS library.

Side suggestion: <slot> should be usable in the light DOM with custom components.