gpu – How many triangles should I expect to be able to render in a second?

Assuming that I’m doing everything right and all I’m doing when I render my scene is going through all the vertex arrays that I want to render, how many triangles should I expect to be able to render compared to the FLOPS of the GPU I am using?

In my particular case, I’m trying to assess whether my OpenGL pipeline is entirely crap or somewhat decent. This requires benchmarks, but also something to actually benchmark against.

My benchmark machine is based on a Ryzen 5 2400GE with an onboard Vega 11 GPU that has a peak performance of 1.746 TFLOPS for 32-bit floats.(1)

On my benchmark machine I am able to push about 33 million triangles per second, with a vertex shader that does one 4×4 matrix multiplication and a fragment shader that simply sets a colour. The triangles are defined using vertices with a 3D position, a 2D texture coordinate (unused in this benchmark, but still part of the data) and an RGB colour. This result is based on rendering vertex arrays of 40,000 triangles each, 500 times per frame. Rendering a frame takes about 0.5 seconds.

(1) https://www.techpowerup.com/gpu-specs/radeon-rx-vega-11.c3054

unity – How can I render 3D objects and particle systems in front of a Screen Space – Overlay Camera?

To render 3D objects on top of your canvas:

  1. Create a new Canvas in Screen Space – Overlay.
  2. Add a RawImage to that canvas.
  3. Create a new Render Texture.
  4. Add the Render Texture to the Raw Image.
  5. Create a new Camera.
  6. Set the camera to Solid Color background with an alpha of 0.
  7. Set the output texture of the camera to be the Render Texture you created.

You can now render 3D objects on top of your canvas; however, there are extra steps for particle systems:

  1. Create a material. Set its shader to Universal Render Pipeline/2D/Sprite-Lit-Default.
  2. Add your particle sprite to the Diffuse of your material.
  3. In the Renderer settings of your particle system, replace the material with the one you just created.

You should be good to go!

(If you have any problems, try going to your Render Settings and changing the Anti-Aliasing to 8x.)

c++ – Cube doesn’t render as expected when rotated (representing 3D in 2D)

The cube before rotation (screenshot).

When I rotate the object around OZ, the cube moves to the left (screenshot).

Rotation around OX – the cube rises up (screenshot).

Rotation around OY – the cube is compressed (screenshot).
The function I use for rotation:

void rotation(double angleX, double angleY, double angleZ, int x, int y, int z)
{
    angleX = angleX * M_PI / 180;
    angleY = angleY * M_PI / 180;
    angleZ = angleZ * M_PI / 180;

    double cX = cos(angleX);
    double sX = sin(angleX);
    double cY = cos(angleY);
    double sY = sin(angleY);
    double cZ = cos(angleZ);
    double sZ = sin(angleZ);

    double x0 = x;
    double y0 = y * cX + z * sX;
    double z0 = z * cX - y * sX;

    double x1 = x0 * cY - z0 * sY;
    double y1 = y0;
    double z1 = z0 * cY + x0 * sY;

    double x2 = x1 * cZ + y1 * sZ;
    double y2 = y1 * cZ - x1 * sZ;

    SDL_RenderDrawPoint(ren, x2, y2);
}

Full code:

#include <SDL.h>
#define _USE_MATH_DEFINES // needed for M_PI with MSVC
#include <cmath>
#include <iostream>
#include <vector>
struct point
{
    int x, y;
};
using namespace std;

int SCREEN_WIDTH = 640;
int SCREEN_HEIGHT = 480;

SDL_Window* win = NULL;
SDL_Renderer* ren = NULL;

void init() {
    win = SDL_CreateWindow("Cube", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, SCREEN_WIDTH, SCREEN_HEIGHT, SDL_WINDOW_SHOWN);
    ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);
}

void quit() {
    SDL_DestroyRenderer(ren); // destroy the renderer before its window
    ren = NULL;
    SDL_DestroyWindow(win);
    win = NULL;
    SDL_Quit();               // SDL_Quit is a function call
}

std::vector<point> lines(int centerX, int centerY)
{
    std::vector<point> v;
    //line 1
    int x1 = centerX;
    int y1 = centerY;
    for (int i = 0; i < 100; i++)
    {
        x1 += 1;
        v.push_back(point{ x1, y1 });
        //SDL_RenderDrawPoint(ren, x1, y1);
    }
    //line 2
    int x2 = x1;
    int y2 = y1;
    for (int i = 0; i < 100; i++)
    {
        y2 -= 1;
        v.push_back(point{ x2, y2 });
        //SDL_RenderDrawPoint(ren, x2, y2);
    }
    //line 3
    int x3 = x2;
    int y3 = y2;
    for (int i = 0; i < 100; i++)
    {
        x3 -= 1;
        v.push_back(point{ x3, y3 });
        //SDL_RenderDrawPoint(ren, x3, y3);
    }
    //line 4
    int x4 = x3;
    int y4 = y3;
    for (int i = 0; i < 100; i++)
    {
        y4 += 1;
        v.push_back(point{ x4, y4 });
        //SDL_RenderDrawPoint(ren, x4, y4);
    }
    //line 5
    int x5 = x1;
    int y5 = y1;
    for (int i = 0; i < 50; i++)
    {
        y5 -= 1;
        x5 += 1;
        v.push_back(point{ x5, y5 });
        //SDL_RenderDrawPoint(ren, x5, y5);
    }
    //line 6
    int x6 = x5;
    int y6 = y5;
    for (int i = 0; i < 100; i++)
    {
        y6 -= 1;
        v.push_back(point{ x6, y6 });
        //SDL_RenderDrawPoint(ren, x6, y6);
    }
    //line 7
    int x7 = x2;
    int y7 = y2;
    for (int i = 0; i < 50; i++)
    {
        y7 -= 1;
        x7 += 1;
        v.push_back(point{ x7, y7 });
        //SDL_RenderDrawPoint(ren, x7, y7);
    }
    //line 8
    int x8 = x7;
    int y8 = y7;
    for (int i = 0; i < 100; i++)
    {
        x8 -= 1;
        v.push_back(point{ x8, y8 });
       //SDL_RenderDrawPoint(ren, x8, y8);
    }
    //line 9
    int x9 = x3;
    int y9 = y3;
    for (int i = 0; i < 50; i++)
    {
        y9 -= 1;
        x9 += 1;
        v.push_back(point{ x9, y9 });
        //SDL_RenderDrawPoint(ren, x9, y9);
    }
    //line 10 
    int x10 = centerX;
    int y10 = centerY;
    for (int i = 0; i < 50; i++)
    {
        y10 -= 1;
        x10 += 1;
        v.push_back(point{ x10, y10 });
       // SDL_RenderDrawPoint(ren, x10, y10);
    }
    //line 11 
    int x11 = x10;
    int y11 = y10;
    for (int i = 0; i < 100; i++)
    {
        x11 += 1;
        v.push_back(point{ x11, y11 });
        //SDL_RenderDrawPoint(ren, x11, y11);
    }
    //line 12
    int x12 = x10;
    int y12 = y10;
    for (int i = 0; i < 100; i++)
    {
        y12 -= 1;
        v.push_back(point{ x12, y12 });
        //SDL_RenderDrawPoint(ren, x12, y12);
    }
    return v;
}

void rotation(double angleX, double angleY, double angleZ, int x, int y, int z)
{
    angleX = angleX * M_PI / 180;
    angleY = angleY * M_PI / 180;
    angleZ = angleZ * M_PI / 180;

    double cX = cos(angleX);
    double sX = sin(angleX);
    double cY = cos(angleY);
    double sY = sin(angleY);
    double cZ = cos(angleZ);
    double sZ = sin(angleZ);

    double x0 = x;
    double y0 = y * cX + z * sX;
    double z0 = z * cX - y * sX;

    double x1 = x0 * cY - z0 * sY;
    double y1 = y0;
    double z1 = z0 * cY + x0 * sY;

    double x2 = x1 * cZ + y1 * sZ;
    double y2 = y1 * cZ - x1 * sZ;

    SDL_RenderDrawPoint(ren, x2, y2);
}

int main(int argc, char** argv) {
    init();
    SDL_SetRenderDrawColor(ren, 0xFF, 0xFF, 0xFF, 0xFF);
    int centerX = SCREEN_WIDTH / 3;
    int centerY = SCREEN_HEIGHT / 2 + 100;
    int angleX = 0;
    int angleY = 0;
    int angleZ = 50;
    auto v = lines(centerX, centerY);
    for (size_t i = 0; i < v.size(); i++)
        rotation(angleX, angleY, angleZ, v[i].x, v[i].y, 100);
    SDL_RenderPresent(ren);
    SDL_Delay(500000);
    quit();
    return 0;
}

I also tried using matrices; however, the result was the same.

Create a custom render Appender button to add Inner Blocks

The documentation for InnerBlocks describes a renderAppender prop which can be used to add a custom button. In the example:

// Fully custom
<InnerBlocks
    renderAppender={ () => (
        <button className="bespoke-appender" type="button">Some Special Appender</button>
    ) }
/>

the custom button does nothing on click. How can I open the Block Picker Menu on click of the custom button?

8 – Render custom twig in a specific language

I need to specify the language which the Drupal renderer uses when generating a custom render array.

$renderArray = [
    "#theme" => "DNMBE_email",
    "#body" => [
        "#theme" => "DNMBE_emailBody_groupMessage",
        "#author" => $groupMessage->author,
        "#group" => $groupMessage->group,
        "#message" => $groupMessage->message,
    ],
    "#leadin" => $partials->leadin,
    "#leadout" => $partials->leadout,
];

$render = \Drupal
    ::service('renderer')
    ->renderRoot($renderArray);
$html = (string) $render;

The #theme parameters resolve to Twig templates which, among other things, contain {{ "From"|t }} markup that is incorrectly rendered as the untranslated From.

I already posted a similar question, which was marked as a duplicate, with the answer suggesting using \Drupal::languageManager()->setConfigOverrideLanguage($language). I do not get the expected result with it.

I verified that /admin/config/regional/translate has the translated string, matching case-sensitively. I also set ['#cache']['max-age'] = 0, and I don’t “precalc”-store any Drupal objects.

c++ – In Unreal why is it safe to access a UTexture2D’s properties from the render thread despite the documentation stating this is not allowed?

I’ve been trying to implement something that will update textures on the render thread from a background thread. From what I’ve read of the Unreal Engine documentation, you should never access a descendant of UObject from the render thread, since the game thread could deallocate it at any time. Epic describes an example of this situation in their documentation:

Here is a simple example of a race condition / threading bug:

/** FStaticMeshSceneProxy is called on the game thread when a component is registered to the scene. */
FStaticMeshSceneProxy::FStaticMeshSceneProxy(UStaticMeshComponent* InComponent):
    FPrimitiveSceneProxy(...),
    Owner(InComponent->GetOwner()) // <======== Note: AActor pointer is cached
    ...

/** DrawDynamicElements is called on the rendering thread when the renderer is doing a pass over the scene. */
void FStaticMeshSceneProxy::DrawDynamicElements(...)
{
    if (Owner->AnyProperty) // <========== Race condition! The game thread owns all AActor / UObject state,
        // and may be writing to it at any time. The UObject may even have been garbage collected, causing a crash.
        // This could have been done safely by mirroring the value of AnyProperty in this proxy.
}

However, the actual engine code violates this rule all of the time. There are numerous examples in FTexture2DResource of accessing its Owner property, which is a UTexture2D*, from the render thread. Just one is:

/**
 * Called when the resource is initialized. This is only called by the rendering thread.
 */
void FTexture2DResource::InitRHI()
{
    FTexture2DScopedDebugInfo ScopedDebugInfo(Owner);
    INC_DWORD_STAT_BY( STAT_TextureMemory, TextureSize );
    INC_DWORD_STAT_FNAME_BY( LODGroupStatName, TextureSize );

#if STATS
    if (Owner->LODGroup == TEXTUREGROUP_UI) // <========== Accessing LODGroup from Owner should be unsafe
    {
        GUITextureMemory += TextureSize;
    }
    ...
}

This seems to directly contradict the documentation given by Epic even though this is commonplace in their source code.

From the source, it doesn’t look like FTexture2DResource or any of its ancestors performs any smart-pointer magic or adds the UTexture2D object to the root set to prevent garbage collection; and even then, race conditions would still apply.

I’ll probably end up answering this one myself, but it would be great if someone happens to know the answer.

opengl – How do I render to a resizable window from a large fixed size back buffer in current graphics APIs?

I have some code that uses DirectX 9 with Windows native window management, that I would like to port to newer graphics APIs, but this code has a fairly unusual approach to window resizing, and it doesn’t seem obvious how to achieve the same things with newer graphics APIs.

The code I want to port allocates a back buffer large enough for a full screen window which remains the same size across window maximised, minimised, and resize events. When the window is smaller than the back buffer, only part of the back buffer is shown.

In order to render from this large back buffer, in DirectX 9, I’m specifying regions with the pSourceRect and pDestRect in IDirect3DDevice9::Present and using D3DSWAPEFFECT_COPY.

The advantages of this, as I see it, are as follows:

  • There’s no need to free and reallocate any resources in response to window size changes.
  • Not reallocating resources reduces code complexity significantly.
  • Not reallocating resources makes the application more responsive in the case of windows size changes.
  • The rendered scene continues to be drawn, smoothly, as the application window is being resized, without any need for potentially tricky and complicated attempts to update rendering settings in response to resize events.

(As the code is written, the application is essentially paused during window resize. The user nevertheless gets a smoothly updated view of the relevant part of the current static rendered scene during window resize.)

How do I do the same thing with newer graphics APIs?

In newer DirectX versions it seems like the ‘BitBlt model’ described on this page roughly corresponds to D3DSWAPEFFECT_COPY in DirectX 9:
https://docs.microsoft.com/en-us/windows/win32/direct3ddxgi/dxgi-flip-model

However, when I try setting the swap effect to DXGI_SWAP_EFFECT_DISCARD, in DirectX 12, I get the following error:
IDXGIFactory::CreateSwapChain: This D3D API version requires the swap effect to be one of DXGI_SWAP_EFFECT_FLIP_*.

So I guess the BitBlt model is no longer supported. 🙁

I guess a logical alternative would then be to do something equivalent ‘manually’, but it seems like doing something like this would lose one of the advantages of the trick, as it works out in DirectX 9, which is the smooth update during resizing without needing to intercept resize events and update stuff in response to these, in the application code.

What I’d like to see, ideally, are links to working code that does something equivalent to what I’m doing in DirectX 9, in DirectX 12, OpenGL, or Vulkan.

If it helps to see the code in action, and the resulting resize behaviour, you can see this in the PathEngine graphical testbed, which can be downloaded here: https://pathengine.com/downloads
(And I could also look at stripping out and posting source code for a minimal example application that demonstrates this behaviour, if this would help.)

8 – Entity reference will not render

I’m trying to show a list of entities (messages) in an email. I’ve got a field “messages” which is a list of entity references. This is using the message stack.

I create a new message and set its messages field, then send that out. The email renders correctly except that the entity reference is entirely missing.
If I send one of the referenced entities as an email on its own, it renders correctly. They just don’t work when referenced.

I don’t see any trace of it in the HTML (not even the label or anything). I’m not sure what is going on or what information is needed. I really need this to work, so please ask me for any other information I should provide.
Here are a few screenshots to show it.

(screenshot)

This is the message that I’m emailing out. Both the partial and count appear. Nothing of the entity appears. And yes, the ‘Notify – Email Body’ renders correctly when it is sent on its own.

I know the field is set because I can display info of the messages using tokens.

How do you make lights affect 2D sprites in a 3D environment using Unity’s URP render pipeline?

I’m using Unity and URP, and have 2D sprite characters in a 3D environment. I want the 2D sprites to be affected by lights and shadows like the rest of the environment.

  • I know if I use a 2D renderer I can use 2D lights, but then I lose the projected shadows from both 3D objects and 2D sprites.
  • If I use the standard render pipeline I can make it work, but I don’t want to miss out on all the tools that URP brings to the table.

In summary, is it possible to use 2D lights, or any light that affects 2D sprites, in URP?

Thanks!