shaders – View space positions from depth in DirectX

I have a depth texture and I’m trying to output the view-space positions on the screen. In RenderDoc I get the following image; the scene contains two 3D models, a plane and another model.
I’m using an orthographic camera.
I wonder why I get that result: my expectation is that I should see the vertices of the models with those colors, not a quad like that.

float3 position_in_view_space(float2 uv)
{
    float z = depth_tex.SampleLevel(depth_sampler, uv, 0.0f).x;
    // Get x/w and y/w from the viewport position
    float x = uv.x * 2 - 1;
    float y = (1 - uv.y) * 2 - 1;
    float4 vProjectedPos = float4(x, y, z, 1.0f);
    // Transform by the inverse projection matrix
    float4 vPositionVS = mul(vProjectedPos, inv_proj);  
    // Divide by w to get the view-space position
    return vPositionVS.xyz / vPositionVS.w;
}

float3 position_from = position_in_view_space(tex_coord.xy);
output.color = float4(position_from, 1.0);
return output;

raytracing – Moving the bulk of a recursive ray tracer function to the GPU, using DirectX 12 (not DirectX 12X) HLSL

So I want to generate images by ray tracing, and I’ve done so, but the main ray function is recursive. I know one can turn a recursive function into a non-recursive one using a stack, but is that possible in HLSL? Here is the bulk of the function I’m using:

color ray_color(
    const ray& r,
    const color& background,
    const hittable& world,
    const shared_ptr<hittable>& lights,
    int depth) {
    
    hit_record rec;

    // If we've exceeded the ray bounce limit, no more light is gathered.
    if (depth <= 0)
        return color(0, 0, 0);

    // If the ray hits nothing, return the background color.
    if (!world.hit(r, 0.001, infinity, rec))
        return background;

    scatter_record srec;
    color emitted = rec.mat_ptr->emitted(r, rec, rec.u, rec.v, rec.p);

    if (!rec.mat_ptr->scatter(r, rec, srec))
        return emitted;

    if (srec.is_specular) {
        return srec.attenuation
            * ray_color(srec.specular_ray, background, world, lights, depth - 1);
    }

    auto light_ptr = make_shared<hittable_pdf>(lights, rec.p);
    mixture_pdf p(light_ptr, srec.pdf_ptr);
    ray scattered = ray(rec.p, p.generate(), r.time());
    auto pdf_val = p.value(scattered.direction());

    return emitted
        + srec.attenuation * rec.mat_ptr->scattering_pdf(r, rec, scattered)
        * ray_color(scattered, background, world, lights, depth - 1)
        / pdf_val;
}
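If I understand the stack trick right, this recursion is actually linear (each invocation makes at most one recursive call), so it should unroll into a loop that carries an accumulated color and a running throughput, with no stack at all. A sketch against the same types (the function name is mine):

color ray_color_iterative(
    ray r,
    const color& background,
    const hittable& world,
    const shared_ptr<hittable>& lights,
    int depth) {

    color accumulated(0, 0, 0);  // light gathered so far
    color throughput(1, 1, 1);   // product of attenuation factors so far

    for (; depth > 0; --depth) {
        hit_record rec;

        // Ray escaped: pick up the background once, weighted by throughput.
        if (!world.hit(r, 0.001, infinity, rec))
            return accumulated + throughput * background;

        scatter_record srec;
        accumulated += throughput * rec.mat_ptr->emitted(r, rec, rec.u, rec.v, rec.p);

        if (!rec.mat_ptr->scatter(r, rec, srec))
            return accumulated;

        if (srec.is_specular) {
            throughput = throughput * srec.attenuation;
            r = srec.specular_ray;
            continue;
        }

        auto light_ptr = make_shared<hittable_pdf>(lights, rec.p);
        mixture_pdf p(light_ptr, srec.pdf_ptr);
        ray scattered = ray(rec.p, p.generate(), r.time());
        auto pdf_val = p.value(scattered.direction());

        throughput = throughput * srec.attenuation
            * rec.mat_ptr->scattering_pdf(r, rec, scattered) / pdf_val;
        r = scattered;
    }

    // Bounce limit reached: no more light is gathered.
    return accumulated;
}

The loop state is just two colors and a ray, which maps to HLSL directly; the virtual hittable/material calls are what would need replacing with concrete HLSL functions.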

directx – How to create a texture SRV with different sRGB format from a render target in DX11?

Is it possible to bind a texture with a different format as render target and as shader resource view?

Specifically, with a different _SRGB suffix. My goal is to render a shader into an R8G8B8A8_UNORM texture but read the render target in another shader as R8G8B8A8_UNORM_SRGB, or the other way around; basically, I want to control on the read and write side whether automatic sRGB conversion happens. There is no window or swap chain involved.

So far, I get an invalid-args exception when I try to create a texture view with a different format. I am using the Stride game engine set to the DirectX 11 graphics API, which uses SharpDX.
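For reference, plain D3D11 does allow this, but only when the underlying resource is created with the TYPELESS variant of the format; the UNORM/SRGB choice then moves into the views. Whether Stride exposes that is the part I can’t figure out. A minimal sketch (‘device’ and the sizes are placeholders):

#include <d3d11.h>

// The resource itself is typeless; only the views pick UNORM vs UNORM_SRGB.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = 1024;                             // placeholder size
desc.Height = 1024;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_TYPELESS;   // typeless backing store
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* tex = nullptr;
device->CreateTexture2D(&desc, nullptr, &tex);

// Write side: linear view, so no sRGB encoding happens on output.
D3D11_RENDER_TARGET_VIEW_DESC rtvDesc = {};
rtvDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
rtvDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
ID3D11RenderTargetView* rtv = nullptr;
device->CreateRenderTargetView(tex, &rtvDesc, &rtv);

// Read side: sRGB view, so sampling decodes sRGB -> linear automatically.
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = 1;
ID3D11ShaderResourceView* srv = nullptr;
device->CreateShaderResourceView(tex, &srvDesc, &srv);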

directx – Compile shader and root signature of a ray tracing shader into a single binary using DXC

I’m new to DXR, so please bear with me.

If I got it right, when we want to compile a ray tracing shader using the DirectX Shader Compiler, we need to specify lib_6_* as the target profile.

Now assume I’ve got a HLSL file containing a single ray generation shader RayGen whose root signature is specified by a RootSignature attribute of the form

#define MyRS "RootFlags(LOCAL_ROOT_SIGNATURE)," \
        "DescriptorTable("                      \
            "UAV(u0, numDescriptors = 1),"      \
            "SRV(t0, numDescriptors = 1))"
[rootsignature(MyRS)]
[shader("raygeneration")]
void RayGen() {}

Using IDxcCompiler::Compile, I’m able to compile both the shader itself using the target profile lib_6_3 and the root signature using the target profile rootsig_1_1, but if I got it right it’s not possible to invoke IDxcCompiler::Compile such that the created IDxcBlob contains both the shader and the root signature. (I’ve tried to add the argument -rootsig-define MyRS to the call for the compilation of the shader, but it seems to me that the compiler expects the root signature specified in this way to be a global root signature.)
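For concreteness, the two calls I’m making look roughly like this (error handling omitted; compiler is an IDxcCompiler from DxcCreateInstance and source an IDxcBlobEncoding loaded via IDxcLibrary::CreateBlobFromFile; passing the macro name as the rootsig entry point is my assumption):

// Compilation 1: the DXIL library containing RayGen.
IDxcOperationResult* libResult = nullptr;
compiler->Compile(source, L"RayGen.hlsl",
                  L"",             // DXIL libraries have no single entry point
                  L"lib_6_3",
                  nullptr, 0,      // extra arguments
                  nullptr, 0,      // defines
                  nullptr, &libResult);

// Compilation 2: the root signature macro on its own.
IDxcOperationResult* rsResult = nullptr;
compiler->Compile(source, L"RayGen.hlsl",
                  L"MyRS",         // assumption: the macro name as entry point
                  L"rootsig_1_1",
                  nullptr, 0, nullptr, 0, nullptr, &rsResult);

// GetResult() on each then yields the two separate IDxcBlobs.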

So, I end up with two IDxcBlobs. Is there any possibility to “merge” them into a single one which can later be used both to specify the shader and in a call of ID3D12Device5::CreateRootSignature?

directx – Specifying a root signature in the HLSL code of a DXR shader

I’ve noticed that I cannot specify a root signature in the HLSL code of a DXR shader. For example, if I have a ray generation shader with the following declaration

[rootsignature(
    "RootFlags(LOCAL_ROOT_SIGNATURE),"
    "DescriptorTable("
    "UAV(u0, numDescriptors = 1),"
    "SRV(t0, numDescriptors = 1))")]
[shader("raygeneration")]
void RayGen()
{}

CreateRootSignature yields the error message

No root signature was found in the dxil library provided to CreateRootSignature. ( STATE_CREATION ERROR #696: CREATE_ROOT_SIGNATURE_BLOB_NOT_FOUND).

I’ve noticed that even when I add a typo (for example, writing roosignature instead of rootsignature), the compiler doesn’t complain about it. So it seems like the whole attribute declaration is simply ignored.

If I change the code to a simple rasterization shader, everything works as expected.

So, is the specification of a root signature in the HLSL code of a DXR shader not supported?
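For context, the workaround I can see is to create the local root signature from a separately serialized blob and attach it to the export through state-object subobjects. A sketch (‘rsBlob’ and ‘device’ are assumed to exist):

// Create the local root signature from a standalone serialized blob,
// then associate it with the RayGen export through subobjects.
ID3D12RootSignature* localRS = nullptr;
device->CreateRootSignature(0, rsBlob->GetBufferPointer(),
                            rsBlob->GetBufferSize(), IID_PPV_ARGS(&localRS));

D3D12_LOCAL_ROOT_SIGNATURE lrsDesc = { localRS };
D3D12_STATE_SUBOBJECT lrsSubobject = {
    D3D12_STATE_SUBOBJECT_TYPE_LOCAL_ROOT_SIGNATURE, &lrsDesc };

LPCWSTR exports[] = { L"RayGen" };
D3D12_SUBOBJECT_TO_EXPORTS_ASSOCIATION assoc = {};
assoc.pSubobjectToAssociate = &lrsSubobject;
assoc.NumExports = 1;
assoc.pExports = exports;
// Both subobjects then go into the D3D12_STATE_OBJECT_DESC that is passed
// to ID3D12Device5::CreateStateObject.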

directx – Blue color instead of alpha using Alpha Blending

I am testing rendering with an alpha blending state according to this guide.
The aim is to add snow on the terrain’s grass texture.
But I got the wrong result: the blue color fills up all alpha = 0 pixels.

I checked the instructions three times and can’t find any mistakes.

Could someone explain to me what is wrong with my code?

PS and VS:

SnowPSIn SnowVSMain(SnowVSIn Input)
{
    SnowPSIn Output;
    float fY = g_txHeightMap.SampleLevel(g_samLinear, Input.vTexCoord, 0).a * g_fHeightScale;
    float4 vWorldPos = float4(Input.vPos + float3(0.0, fY, 0.0), 1.0);
    Output.vPos = mul(vWorldPos, g_mViewProj);
    Output.vTexCoord.xy = Input.vTexCoord;
    Output.vTexCoord.z = FogValue(length(vWorldPos.xyz - g_mInvCamView[3].xyz));
    Output.vTexCoord.w = length(vWorldPos.xyz - g_mInvCamView[3].xyz);
    //Output.vShadowPos = mul(vWorldPos, g_mLightViewProj);

    return Output;
}

float4 SnowPSMain(SnowPSIn Input) : SV_Target
{
    float4 vSnowColor = g_txTerrSnow.Sample(g_samLinear, Input.vTexCoord.xy * 64);
    return vSnowColor;
}

BlendState:

BlendState SrcAlphaBlendingAdd
{
    BlendEnable[0] = TRUE;
    SrcBlend = SRC_ALPHA;
    DestBlend = INV_SRC_ALPHA;
    BlendOp = ADD;
    SrcBlendAlpha = ZERO;
    DestBlendAlpha = ZERO;
    BlendOpAlpha = ADD;
    RenderTargetWriteMask[0] = 0x0F;
};
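For reference, as far as I understand these settings make the output merger compute, per pixel:

rgb_out = src.rgb * src.a + dst.rgb * (1 - src.a)   // SRC_ALPHA / INV_SRC_ALPHA, ADD
a_out   = src.a * 0 + dst.a * 0 = 0                 // ZERO / ZERO, ADD

So wherever the source alpha is 0 the destination pixel should show through unchanged, and the written alpha is forced to 0.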

Pass:

pass RenderShowPass
{
    SetVertexShader(CompileShader(vs_5_0, SnowVSMain()));
    SetGeometryShader(NULL);
    SetPixelShader(CompileShader(ps_4_0, SnowPSMain()));
    SetBlendState(SrcAlphaBlendingAdd, float4(0.0f, 0.0f, 0.0f, 0.0f), 0xFFFFFFFF);
}

all sources

Before: initial scene

After: final scene

Apply multiple Shaders to one texture with DirectX

I’m beginning with DirectX development and I’m quite confused by the documentation about how to do the following:

I have one image (a Texture2D), and I’d like to apply two independent HLSL shaders, one after the other, and render the result.

For instance, one shader makes the texture semi-transparent, the other turns it to black and white.
Also, I’m computing everything off-screen, so I don’t have a SwapChain.

So far the output texture I get has the 1st shader applied but not the second. If I switch the order where the shaders are applied, then I’m seeing the 1st shader applied (in the new order) but not the second. In other words, both shaders work separately but not together.
I’m using SharpDX and C#.

Here is what I did:

Setup

  1. Create D3D11 device.
  2. Set Blend state
            BlendStateDescription blendStateDescription = new BlendStateDescription
            {
                AlphaToCoverageEnable = false,
            };
            blendStateDescription.RenderTarget[0].IsBlendEnabled = true;
            blendStateDescription.RenderTarget[0].SourceBlend = BlendOption.SourceAlpha;
            blendStateDescription.RenderTarget[0].DestinationBlend = BlendOption.InverseSourceAlpha;
            blendStateDescription.RenderTarget[0].BlendOperation = BlendOperation.Add;
            blendStateDescription.RenderTarget[0].SourceAlphaBlend = BlendOption.Zero;
            blendStateDescription.RenderTarget[0].DestinationAlphaBlend = BlendOption.Zero;
            blendStateDescription.RenderTarget[0].AlphaBlendOperation = BlendOperation.Add;
            blendStateDescription.RenderTarget[0].RenderTargetWriteMask = ColorWriteMaskFlags.All;

            device.ImmediateContext.OutputMerger.SetBlendState(new BlendState(device, blendStateDescription));
  3. Set Depth stencil state
            var depthStencilState = new DepthStencilState(device, DepthStencilStateDescription.Default());
            device.ImmediateContext.OutputMerger.SetDepthStencilState(depthStencilState);
  4. Create a RenderTargetView and a texture
  5. Create a DepthStencilView and a texture
  6. Set the RenderTargetView and DepthStencilView as targets of the output merger.

Run

  1. Load an image from the hard drive to a Texture2D
  2. Create a ShaderResourceView from this texture.
  3. For each effect I want to apply:
    • Load the shader from the bytecode into an Effect.
    • Create the vertices, then create a buffer from them.
    • Set this buffer as a vertex buffer in the Input Assembler: deviceContext.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(myBuffer, Marshal.SizeOf(typeof(VertexPositionTexture)), 0));
    • Set the input layout.
    • Get a technique from the effect and, for each pass, call effectPass.Apply, then deviceContext.Draw (the intended chaining is sketched after this list).
  4. deviceContext.Flush();
  5. Save the texture associated to the RenderTargetView as a PNG.
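For clarity, here is how I understand the chaining is supposed to work: ping-pong between targets, so the second pass reads what the first wrote. In plain D3D11 calls (SharpDX wraps these one-to-one; the view names are placeholders):

ID3D11ShaderResourceView* nullSRV = nullptr;

// Pass 1: source image -> intermediate target.
context->OMSetRenderTargets(1, &intermediateRTV, nullptr);
context->PSSetShaderResources(0, 1, &sourceSRV);
context->Draw(vertexCount, 0);

// Unbind before reusing the intermediate texture as an input: a resource
// can never be bound as render target and shader resource at the same time.
context->PSSetShaderResources(0, 1, &nullSRV);

// Pass 2: intermediate result -> final target.
context->OMSetRenderTargets(1, &finalRTV, nullptr);
context->PSSetShaderResources(0, 1, &intermediateSRV);
context->Draw(vertexCount, 0);

If, instead, both passes sample the original image and draw into the same target, the second pass simply overwrites the first, which would match the symptom described.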

Question

Is my approach correct or am I doing something wrong in that method?

I wonder if the issue could be in the blend state or depth stencil.
Let me know if you’d like me to share more code to see my implementation details.

Thank you in advance for the help. 🙂

Correct way to set up camera buffer [DirectX]

I would like to play with two different implementations of a particle system in my project (OK, actually not mine, but I am working on it).
I copied the particle system successfully; however, I ran into a camera buffer initialization problem. Here is the relevant code:

bool ParticleShader::Render(Direct3DManager* direct, ParticleSystem* particlesystem, Camera* camera)
{
    bool result;

    result = SetShaderParameters(direct->GetDeviceContext(), camera, direct->GetWorldMatrix(), camera->GetViewMatrix(), direct->GetProjectionMatrix(), particlesystem->GetTexture());
    if (!result)
        return false;

    RenderShader(direct->GetDeviceContext(), particlesystem->GetVertexCount(), particlesystem->GetInstanceCount(), particlesystem->GetIndexCount());
    return true;
}

If I just call SetShaderParameters with the same args, there is an obvious bug: the particle system scene sticks to the camera (originally, it looks like this).

I checked the params in debug mode and found out that the difference is in the World matrix. The DXUT CFirstPersonCamera is used in my project and it changed the World matrix while moving around the scene whereas in the original particle system project it’s constant (identity matrix). I even checked my assumption and hardcoded it, but got another bug.
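As far as I understand, the conventional split is that the world matrix is per-object (identity for a particle system anchored at the origin) while camera movement lives only in the view matrix. A hypothetical constant-buffer layout to illustrate (the names are mine, not from either project):

#include <DirectXMath.h>

// Hypothetical camera constant buffer: camera motion belongs in 'view' only;
// 'world' is per-object and stays identity for the particle system.
struct CameraBuffer {
    DirectX::XMFLOAT4X4 world;       // from direct->GetWorldMatrix(); identity here
    DirectX::XMFLOAT4X4 view;        // from camera->GetViewMatrix(); changes as you move
    DirectX::XMFLOAT4X4 projection;  // from direct->GetProjectionMatrix()
};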

I understand that there is a difference between the DXUT camera and the default project camera at the conceptual level. Nevertheless, I am a newbie in graphics, and it’s difficult for me to work out how to change the code properly.

Thank you in advance

glsl – How to write shaders that can be compiled for DirectX, OpenGL, and Vulkan

This problem is often solved through the use of a transpiler, a program that can translate a shader written in one language into another.

HLSL2GLSL is one such example that was used in Unity up until 2016. Shaders could be authored in a standard HLSL syntax, then transpiled at build time into corresponding GLSL code.

SPIRV-Cross is another transpiler, maintained under the Khronos Group, that uses SPIR-V as a bridge between HLSL, GLSL, MSL, and Vulkan-flavored GLSL.
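For example, a typical build-time pipeline with these tools looks like this (the file names are placeholders):

# HLSL -> SPIR-V with DXC, then SPIR-V -> desktop GLSL with SPIRV-Cross
dxc -spirv -T ps_6_0 -E main shader.hlsl -Fo shader.spv
spirv-cross shader.spv --version 330 --output shader.frag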

Using an existing shader language as your source helps you avoid the overhead of designing a brand new one from scratch, and you can leverage a lot of work shared by other teams via projects like the ones linked above.

There are some extra considerations though, as outlined in the SPIRV-Cross Readme:

Implementation notes

When using SPIR-V and SPIRV-Cross as an intermediate step for cross-compiling between high level languages there are some considerations to take into account, as not all features used by one high-level language are necessarily supported natively by the target shader language. SPIRV-Cross aims to provide the tools needed to handle these scenarios in a clean and robust way, but some manual action is required to maintain compatibility.

The areas they call out include:

  • HLSL source to GLSL
    • HLSL entry points
    • Vertex/Fragment interface linking
  • HLSL source to legacy GLSL/ESSL
    • Separate image samplers (HLSL/Vulkan) for backends which do not support it (GLSL)
    • Descriptor sets (Vulkan GLSL) for backends which do not support them (HLSL/GLSL/Metal)
    • Linking by name for targets which do not support explicit locations (legacy GLSL/ESSL)
    • Clip-space conventions
    • Reserved identifiers

See the linked document for all the gory details of how to handle these situations.

directx11 – How to correctly initialize Direct2D with DirectX 11

I have a problem creating Direct2D with DirectX 11. I have tried two methods to initialize Direct2D.

In the first attempt, I created a surface pointer to the back buffer in DX11 and passed it to CreateDxgiSurfaceRenderTarget. I get an error from the function stating “The parameter is incorrect.”

In the second attempt, I did the same but in a more involved way: I used DXGI and the newer Direct2D 1.1 interfaces, using ID2D1DeviceContext instead of ID2D1RenderTarget. But here I get an error from direct2d.factory1->CreateDevice(), and the error is the same: the parameter is incorrect.

struct Direct2D {

    ID2D1Device *device = NULL;
    ID2D1DeviceContext *device_context = NULL;
    ID2D1Factory *factory = NULL;
    ID2D1Factory1 *factory1 = NULL;
    ID2D1RenderTarget *render_target = NULL;
    ID2D1SolidColorBrush *gray_brush = NULL;
    ID2D1SolidColorBrush *blue_brush = NULL;

    void init();
    void draw();
};

struct Direct3D {

    Direct2D direct2d;
    ID3D11Device *device = NULL;
    ID3D11DeviceContext *device_context = NULL;
    IDXGISwapChain *swap_chain = NULL;
    
    ID3D11RenderTargetView *render_target_view = NULL;
    ID3D11DepthStencilView *depth_stencil_view = NULL;
    ID3D11Texture2D *depth_stencil_buffer = NULL;
    ID3D11Texture2D* back_buffer = NULL;
    IDXGISurface* back_buffer2 = NULL;
    
    UINT quality_levels;

    Matrix4 perspective_matrix;

    void init(const Win32_State *win32);
    void shutdown();
    void resize(const Win32_State *win32);
};

void Direct2D::init()
{

    HR(D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &factory1));
    HR(D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &factory));

    float dpi_x;
    float dpi_y;
    factory->GetDesktopDpi(&dpi_x, &dpi_y);

    D2D1_RENDER_TARGET_PROPERTIES rtDesc = D2D1::RenderTargetProperties(
        D2D1_RENDER_TARGET_TYPE_HARDWARE,
        D2D1::PixelFormat(DXGI_FORMAT_UNKNOWN, D2D1_ALPHA_MODE_PREMULTIPLIED), dpi_x, dpi_y
    );

    IDXGISurface *surface = NULL;
    HR(direct3d.swap_chain->GetBuffer(0, IID_PPV_ARGS(&surface)));

    //HR(factory->CreateDxgiSurfaceRenderTarget(surface, &rtDesc, &render_target));
    
    //HR(render_target->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::LightSlateGray),&gray_brush));
    //HR(render_target->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::CornflowerBlue),&blue_brush));
}

void Direct3D::init(const Win32_State *win32) 
{

    D3D_FEATURE_LEVEL feature_level;
    // Note: Direct2D interop requires the D3D11 device to be created with the
    // D3D11_CREATE_DEVICE_BGRA_SUPPORT flag; ID2D1Factory1::CreateDevice fails
    // for devices created without it.
    HRESULT hr = D3D11CreateDevice(0, D3D_DRIVER_TYPE_HARDWARE, 0, create_device_flag, 0, 0, D3D11_SDK_VERSION, &device, &feature_level, &device_context);

    if (FAILED(hr)) {
        MessageBox(0, "D3D11CreateDevice Failed.", 0, 0);
        return;
    }

    if (feature_level != D3D_FEATURE_LEVEL_11_0) {
        MessageBox(0, "Direct3D Feature Level 11 unsupported.", 0, 0);
        return;
    }


    HR(device->CheckMultisampleQualityLevels(
        DXGI_FORMAT_R8G8B8A8_UNORM, 4, &quality_levels));
    //assert(m4xMsaaQuality > 0);

    DXGI_SWAP_CHAIN_DESC sd;
    sd.BufferDesc.Width = win32->window_width;
    sd.BufferDesc.Height = win32->window_height;
    sd.BufferDesc.RefreshRate.Numerator = 60;
    sd.BufferDesc.RefreshRate.Denominator = 1;
    sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    sd.BufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
    sd.BufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;
    
    if (true) {
        sd.SampleDesc.Count = 4;
        sd.SampleDesc.Quality = quality_levels - 1;
    } else {
        sd.SampleDesc.Count = 1;
        sd.SampleDesc.Quality = 0;
    }

    sd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    sd.BufferCount = 1;
    sd.OutputWindow = win32->window;
    sd.Windowed = true;
    sd.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
    sd.Flags = 0;

    IDXGIDevice* dxgi_device = 0;
    HR(device->QueryInterface(__uuidof(IDXGIDevice), (void**)&dxgi_device));

    IDXGIAdapter* dxgi_adapter = 0;
    HR(dxgi_device->GetParent(__uuidof(IDXGIAdapter), (void**)&dxgi_adapter));

    IDXGIFactory* dxgi_factory = 0;
    HR(dxgi_adapter->GetParent(__uuidof(IDXGIFactory), (void**)&dxgi_factory));

    HR(dxgi_factory->CreateSwapChain(device, &sd, &swap_chain));
    

    // Init directx 2d
    HR(D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &direct2d.factory));
    HR(D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &direct2d.factory1));
    
    HR(direct2d.factory1->CreateDevice(dxgi_device, &direct2d.device));
    HR(direct2d.device->CreateDeviceContext(D2D1_DEVICE_CONTEXT_OPTIONS_NONE, &direct2d.device_context));
    
    IDXGISurface *surface = NULL;
    HR(swap_chain->GetBuffer(0, __uuidof(IDXGISurface), (void **)&surface));

    // Note: the bitmap format must match the surface; the swap chain above was
    // created as DXGI_FORMAT_R8G8B8A8_UNORM.
    auto props = D2D1::BitmapProperties1(D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW, D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_IGNORE));

    ID2D1Bitmap1 *bitmap = NULL;
    HR(direct2d.device_context->CreateBitmapFromDxgiSurface(surface, props, &bitmap));

    direct2d.device_context->SetTarget(bitmap);

    float32 dpi_x;
    float32 dpi_y;
    direct2d.factory->GetDesktopDpi(&dpi_x, &dpi_y);

    direct2d.device_context->SetDpi(dpi_x, dpi_y);


    RELEASE_COM(dxgi_device);
    RELEASE_COM(dxgi_adapter);
    RELEASE_COM(dxgi_factory);

    resize(win32);
}

void Direct3D::resize(const Win32_State *win32)
{
    assert(device);
    assert(device_context);
    assert(swap_chain);


    RELEASE_COM(render_target_view);
    RELEASE_COM(depth_stencil_view);
    RELEASE_COM(depth_stencil_buffer);


    // Resize the swap chain and recreate the render target view.

    HR(swap_chain->ResizeBuffers(1, win32->window_width, win32->window_height, DXGI_FORMAT_R8G8B8A8_UNORM, 0));

    ID3D11Texture2D* back_buffer = NULL;
    HR(swap_chain->GetBuffer(0, __uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&back_buffer)));
    HR(device->CreateRenderTargetView(back_buffer, 0, &render_target_view));
    RELEASE_COM(back_buffer);

    // Create the depth/stencil buffer and view.

    D3D11_TEXTURE2D_DESC depth_stencil_desc;

    depth_stencil_desc.Width = win32->window_width;
    depth_stencil_desc.Height = win32->window_height;
    depth_stencil_desc.MipLevels = 1;
    depth_stencil_desc.ArraySize = 1;
    depth_stencil_desc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;

    // Use 4X MSAA? --must match swap chain MSAA values.
    if (true) {
        depth_stencil_desc.SampleDesc.Count = 4;
        depth_stencil_desc.SampleDesc.Quality = quality_levels - 1;
    } else {
        depth_stencil_desc.SampleDesc.Count = 1;
        depth_stencil_desc.SampleDesc.Quality = 0;
    }

    depth_stencil_desc.Usage = D3D11_USAGE_DEFAULT;
    depth_stencil_desc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
    depth_stencil_desc.CPUAccessFlags = 0;
    depth_stencil_desc.MiscFlags = 0;

    HR(device->CreateTexture2D(&depth_stencil_desc, 0, &depth_stencil_buffer));
    HR(device->CreateDepthStencilView(depth_stencil_buffer, 0, &depth_stencil_view));


    // Bind the render target view and depth/stencil view to the pipeline.

    device_context->OMSetRenderTargets(1, &render_target_view, depth_stencil_view);


    // Set the viewport transform.

    D3D11_VIEWPORT mScreenViewport;
    mScreenViewport.TopLeftX = 0;
    mScreenViewport.TopLeftY = 0;
    mScreenViewport.Width = static_cast<float>(win32->window_width);
    mScreenViewport.Height = static_cast<float>(win32->window_height);
    mScreenViewport.MinDepth = 0.0f;
    mScreenViewport.MaxDepth = 1.0f;

    device_context->RSSetViewports(1, &mScreenViewport);

    perspective_matrix = get_perspective_matrix(win32->window_width, win32->window_height, 1.0f, 1000.0f);
}