Media html not rendering in blocks

I’m experiencing an issue on multiple sites where (image) media aren’t appearing in blocks despite working fine in node fields.

I first thought it only happened to me when using the linkicon module, but since then I created a custom block type for image blocks (using entity reference media fields) and the same thing happens, i.e. the HTML renders nothing:

<div class="content"></div>

There’s no problem rendering the fields in nodes on the same sites using the exact same display settings.

I’ve checked the media, block, and block_content logs, and they show no errors.

My setup:

  • Drupal 8.9.7
  • Barrio 5.1.3
  • Local environment running on ddev
  • Remote environment running on cloudways

Any ideas what the issue might be?

entities – Do fields that are not displayed affect rendering performance?

This is a general question about how Drupal renders content. Say I have a content type with 20 fields, including some entity references, text fields, integers, and so on.

When I have a lot of fields on a content type, I know that each field slows down operations like saving the content type because each field has to be processed.

However, what about when Drupal renders the content type to display it? If I hide 19 of the 20 fields on “Manage Display” for the content type, will Drupal avoid processing all those fields and have the same performance as rendering a content type that only has 1 field?

rendering – Implementing DirectX 7 light attenuation in Unity’s Universal Render Pipeline

I am creating a modern Unity-based client of an old MMO from 2000, and I am working on implementing its lighting system. Based on the data I have reverse-engineered, it looks like they used a custom attenuation formula. The original client uses DirectX 7. I have found a reference sheet for DX7 lighting.

http://developer.download.nvidia.com/assets/gamedev/docs/GDC2K_D3D_7_Vert_Lighting.pdf

The relevant info starts on page 17. In the URP, in Lighting.hlsl, I have set up linear attenuation:

float range = rsqrt(distanceAndSpotAttenuation.x);
float dist = distance( positionWS, lightPositionWS);
half attenuation = 1.0f - dist/range;        

Looks good! https://i.imgur.com/5bfJ5fF.png

Now I am trying to recreate the same linear attenuation, based on the document. It should look something like this:

// DX 7
float range = rsqrt(distanceAndSpotAttenuation.x);
float dist = distance( positionWS, lightPositionWS);
half c0 = 1.0f;
half c1 = 1.0f;
half c2 = 0.0f;
half d = dist/range;
half attenuation = 1.0f / ( c0 + c1 * d + c2 * d * d );

This should give me linear attenuation according to the doc, but it ends up bleeding light way past the range of the light, so I am convinced the distance is wrong. What kind of value is expected here? Normalized distance? It doesn’t specify. I have played around with the distance a lot: set it to the attenuation from the first code example, to the raw value, and to a bunch of other values. No luck: https://i.imgur.com/JWJ0HF2.png
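
For what it’s worth, here is a small standalone C++ sketch (CPU-side, not shader code) that evaluates both formulas side by side, assuming a light range of 10 world units and the c0/c1/c2 values above. With d = dist/range, the DX7 formula only falls to 0.5 at the edge of the range instead of reaching 0, which would explain light bleeding well past it:

```
#include <cstdio>

// Compare the URP-style linear falloff with the DX7 attenuation formula.
// The range of 10 world units is an arbitrary assumption for illustration.
int main() {
    const float range = 10.0f;
    const float c0 = 1.0f, c1 = 1.0f, c2 = 0.0f;
    for (float dist = 0.0f; dist <= 2.0f * range; dist += 2.5f) {
        float linear = 1.0f - dist / range;             // first code example
        if (linear < 0.0f) linear = 0.0f;
        float d = dist / range;                         // normalized distance, as in the second example
        float dx7 = 1.0f / (c0 + c1 * d + c2 * d * d);  // DX7 formula from the document
        std::printf("dist=%5.1f  linear=%.3f  dx7=%.3f\n", dist, linear, dx7);
    }
    return 0;
}
```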

Does anyone have any ideas?

opengl – What is the best way to approach a multi pass rendering system?

I am trying to code a new feature in my engine, but I can’t find a way to implement my idea, which is the following: I am trying to get multi-pass rendering with more than two passes.

I know how to do a two-pass rendering pipeline for effects like blurring and shadow mapping, but my problem is that now I want to do an arbitrary number of passes without needing that many different functions (a different one for every pass).

Do you have any ideas about what I could do here? I have thought about using some kind of function pointer that is called x times, each time calling a different function, but again, I don’t know what the best (or easiest) approach is. I would love to hear your ideas and comments. Thanks!

PS: I am using OpenGL, if that is useful information.
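
For context, the “function called x times” idea is essentially a list of passes: keep a vector of callbacks and ping-pong between two offscreen targets, so each pass reads what the previous pass wrote and the last pass goes to the screen. A rough C++/OpenGL sketch of that shape, assuming a GL 3.3+ context with a loader such as glad already set up, and leaving shader binding and the fullscreen-quad draw as placeholders inside each callback:

```
#include <functional>
#include <vector>
#include <glad/glad.h>

// Two offscreen color targets used alternately by the pass chain.
struct PingPongTargets {
    GLuint fbo[2] = {0, 0};
    GLuint tex[2] = {0, 0};

    void init(int width, int height) {
        glGenFramebuffers(2, fbo);
        glGenTextures(2, tex);
        for (int i = 0; i < 2; ++i) {
            glBindTexture(GL_TEXTURE_2D, tex[i]);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glBindFramebuffer(GL_FRAMEBUFFER, fbo[i]);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_2D, tex[i], 0);
        }
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }
};

// Each pass receives the texture produced by the previous pass; inside it
// you would bind that texture, bind the pass's shader, and draw a fullscreen quad.
using RenderPass = std::function<void(GLuint inputTexture)>;

void runPasses(const std::vector<RenderPass>& passes,
               PingPongTargets& targets, GLuint sceneTexture) {
    GLuint input = sceneTexture;
    for (size_t i = 0; i < passes.size(); ++i) {
        const bool last = (i + 1 == passes.size());
        // Intermediate passes render into alternating offscreen targets;
        // the final pass renders to the default framebuffer (the screen).
        glBindFramebuffer(GL_FRAMEBUFFER, last ? 0 : targets.fbo[i % 2]);
        glClear(GL_COLOR_BUFFER_BIT);
        passes[i](input);
        input = targets.tex[i % 2];  // next pass reads this pass's output
    }
}
```

Adding a pass is then just pushing another lambda into the vector, so the number of passes is no longer baked into the code.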

c++ – OpenGL model not rendering properly

Hi, I have been playing with OpenGL for a while and I’ve hit a wall that I don’t know how to get past. I am trying to render a model of an object based on a .obj file. In that file I have position coordinates, UV coordinates, and indices of the positions and UV coordinates (faces). I am trying to render the model like so:

  1. Get all the positions from the file.
  2. Get all the UV coordinates from the file.
  3. Get all the faces.
  4. Generate an array of vertices with all the positions and UV coordinates in the order defined by the indices.
  5. Index the vertices 0, 1, 2, …
  6. Draw the indexed vertices.

I got blocked when I tried to just render the model without the texture: I am shown a monstrosity instead of what I am trying to achieve. When I draw the model the other way (get all the vertices and index them in the order they should be drawn), everything is fine, but that way I cannot texture the model the way I want. I am adding the code below:

Reading from the file:

#include <fstream>
#include <string>
#include <strstream>
#include <vector>

std::vector<float> verts;           // container for vertices
std::vector<unsigned int> inds;     // container for indexes of vertices
std::vector<unsigned int> texinds;  // container for indexes of textures
std::vector<float> texs;            // container for textures

bool LoadFromFile(const std::string& path) {
    std::ifstream f(path);
    if (!f.is_open())
        return false;
    while (!f.eof()) {
        char line[128];
        f.getline(line, 128);
        std::strstream s;
        s << line;
        char junk;
        char junk1;
        char junk2;
        char junk3;
        if ((line[0] == 'v') && (line[1] == 't')) {
            float Textu[2];
            s >> junk >> junk1 >> Textu[0] >> Textu[1];  // ignoring the first 2 characters (vt) before the data
            texs.push_back(Textu[0]);
            texs.push_back(Textu[1]);
        }
        if (line[0] == 'f') {
            unsigned int Index[6];
            s >> junk >> Index[0] >> junk1 >> Index[1] >> Index[2] >> junk2 >> Index[3] >> Index[4] >> junk3 >> Index[5]; // ignoring f and every / between indexes
            inds.push_back(Index[0] - 1);
            texinds.push_back(Index[1] - 1);
            inds.push_back(Index[2] - 1);
            texinds.push_back(Index[3] - 1);
            inds.push_back(Index[4] - 1);
            texinds.push_back(Index[5] - 1);
        }
        if ((line[0] == 'v') && (line[1] == ' ')) {
            float Vertex[3];
            s >> junk >> Vertex[0] >> Vertex[1] >> Vertex[2];
            verts.push_back(Vertex[0]);
            verts.push_back(Vertex[1]);
            verts.push_back(Vertex[2]);
        }
    }
    return true;
}

Creating the array of vertices and indexing them:

        float Vertices[89868];
        for (int i = 0; i < inds.size(); i++) {
            Vertices[i] = verts[inds[i]]; // creating an array with the vertices in the order defined by the index vector
        }
        unsigned int indices[89868];
        for (int i = 0; i < inds.size(); i++) {
            indices[i] = i;
        }

I understand I may have made a stupid mistake somewhere, but I am literally incapable of finding it.

Data Visualization (rendering env)

How can I improve this code? It renders an ML environment with OpenAI Gym and matplotlib. I am new to coding, so I’m not sure whether my variables, formatting, or any lines can be improved.

```
def _render(self, obs):
    """Renders the environment.
    """

    ball_loc = obs[1]
    x, y = self.circle.center

    new_y = math.tan(-math.radians(obs[3])) * (ball_loc - self.target_location)

    self.circle.center = (ball_loc, 10 + new_y + 1)

    t_start = self.ax.transData
    coords = t_start.transform((self.target_location, 10))
    t = mpl.transforms.Affine2D().rotate_deg_around(coords[0], coords[1], -obs[3])
    t_end = t_start + t
    self.rect.set_transform(t_end)

    self.fig.canvas.draw()
    plt.pause(0.05)
```

python – Line plot for env rendering matplotlib

I have a custom reinforcement learning env made with OpenAI Gym. How can I improve the def _init_plt(self) function below? I included the code before it for reference as well. Please be brutal with your review.

Scroll down for the init function; it generates a subplot with axes:

# Imports
import gym
import numpy as np
import random
from matplotlib import pyplot as plt
import matplotlib as mpl
import math
import os


class BeamEnv(gym.Env):

    def __init__(self, obs_low_bounds  = np.array((  0,   0,  1.18e10, -45)),
                       obs_high_bounds = np.array(( 12,  12, -1.18e10,  45)), 
                       obs_bin_sizes   = np.array((0.5, 0.5,        6,   5))):
        """Environment for a ball and beam system where agent has control of tilt.

        Args:
            obs_low_bounds (list, optional): (target location(in), ball location(in), 
                                              ball velocity(in/s), beam angle(deg)). Defaults to ( 0,  0, "TBD", -45).
            obs_high_bounds (list, optional): As above so below. Defaults to (12, 12, "TBD",  45).
        """
        super(BeamEnv, self).__init__()

        # Hyperparameters
        self.ACC_GRAV    = 386.22  # (in/s2)
        self.MOTOR_SPEED = 46.875  # 1.28(sec/60deg) converted to (deg/s)
        self.TIME_STEP   = 0.1     # (s)

        # Declare bounds of observations
        self.obs_low_bounds  = obs_low_bounds
        self.obs_high_bounds = obs_high_bounds
        self._set_velocity_bounds()

        # Declare bin sizes of observations
        self.obs_bin_sizes   = obs_bin_sizes
        
        # Bin observations
        self.obs = []
        self.obs_sizes = []
        for i in range(4):
            self.obs.append(np.sort(
                            np.append(
                            np.arange(self.obs_low_bounds[i], self.obs_high_bounds[i] + self.obs_bin_sizes[i], self.obs_bin_sizes[i]), 
                            0)))

            self.obs_sizes.append(len(self.obs[i]))
        
        # Declare observation space
        self.observation_space = gym.spaces.MultiDiscrete(self.obs_sizes)

        # Action Space
        # increase, decrease or keep current angle
        self.action_space = gym.spaces.Discrete(3)

        # Reward Range
        self.reward_range = (-1, 1)
        self.step_count = 0
        self.sample_freq = 10
    
    def _init_plt(self):
        """Initiates the animated line plot.

        """
        if self.fig:
            return
        plt.close()
        fig = plt.figure()
        self.fig = fig
        # fig.set_dpi(100)
        # fig.set_size_inches(7, 6.5)
        
        ax = plt.axes(xlim=(-2, 14), ylim=(0, 20))
        ax.set_aspect('equal')
        self.ax = ax
        circle = plt.Circle((self.ball_location, 10), 1)
        self.circle = circle
        ax.add_patch(circle)

        rect = plt.Rectangle((0, 10), 12, 0.1, linewidth=1, edgecolor='r', facecolor='r')
        self.rect = rect
        ax.add_patch(rect)

        x = self.target_location
        size = 1
        target = plt.Polygon(((x, 10-0.1), (x-size/2, 10-size), (x+size/2, 10-size)), color='b')
        ax.add_patch(target)

        plt.show(block=False)
        plt.pause(0.05)

opengl – Knowing the size of a framebuffer when rendering transformed meshes to a texture

I have a couple of 2D meshes that make a hierarchical animated model.
I want to do some post-processing on it, so I decided to render this model to a texture, so that I could do the post-processing with a fragment shader while rendering it as a textured quad.

But I don’t suppose that it would be very smart to have the render texture’s size as large as the entire screen for every layer that I’d like to compose – it would be nicer if I could use a smaller render texture, just big enough to fit every element of my hierarchical model, right?

But how am I supposed to know the size of the render target before I actually render it?

Is there any way to figure out the bounding rectangle of a transformed mesh?
(Keep in mind that the model is hierarchical, so there might be multiple meshes translated/rotated/scaled to their proper positions during rendering to make the final result.)

I mean, sure, I could transform all the vertices of my meshes myself to get their world space / screen space coordinates and then take their minima / maxima in both directions to get the size of the image required. But isn’t that what vertex shaders were supposed to do, so that I wouldn’t have to calculate that myself on the CPU? (I mean, if I have to transform everything myself anyway, what’s the point of having a vertex shader in the first place? :q )

It would be nice if I could just pass those meshes through the vertex shader first somehow without rasterizing them yet, just to let the vertex shader transform those vertices for me, then get their min/max extents and create a render texture of that particular size, and only after that let the fragment shader rasterize those vertices into that texture. Is such a thing possible to do, though? If it isn’t, then what would be a better way to do that? Is rendering the entire screen for each composition layer my only option?
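
For reference, desktop OpenGL exposes something close to this via transform feedback: with GL_RASTERIZER_DISCARD enabled you can run just the vertex stage, capture the transformed vertices into a buffer, and read them back to take the min/max. A rough C++ sketch under those assumptions (GL 3.x context, and a vertex shader whose captured output is a vec4 named outPosition; the program, VAO, and vertex count are placeholders):

```
#include <algorithm>
#include <vector>
#include <glad/glad.h>

struct Bounds2D { float minX, minY, maxX, maxY; };

Bounds2D transformedBounds(GLuint program, GLuint vao, GLsizei vertexCount) {
    // Ask the linker to capture the vertex shader output "outPosition"
    // (a hypothetical varying name); relinking is required after this call.
    const char* varyings[] = { "outPosition" };
    glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);
    glLinkProgram(program);

    // Buffer that receives one vec4 per vertex.
    GLuint tfBuffer = 0;
    glGenBuffers(1, &tfBuffer);
    glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, tfBuffer);
    glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, vertexCount * 4 * sizeof(float),
                 nullptr, GL_STREAM_READ);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfBuffer);

    // Run only the vertex stage; drawing as points is enough because we
    // only care about the vertices, not the triangles they form.
    glUseProgram(program);
    glBindVertexArray(vao);
    glEnable(GL_RASTERIZER_DISCARD);
    glBeginTransformFeedback(GL_POINTS);
    glDrawArrays(GL_POINTS, 0, vertexCount);
    glEndTransformFeedback();
    glDisable(GL_RASTERIZER_DISCARD);

    // Read the captured positions back and take their extents.
    std::vector<float> pos(static_cast<size_t>(vertexCount) * 4);
    glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0,
                       pos.size() * sizeof(float), pos.data());
    Bounds2D b = { pos[0], pos[1], pos[0], pos[1] };
    for (GLsizei i = 1; i < vertexCount; ++i) {
        b.minX = std::min(b.minX, pos[i * 4 + 0]);
        b.maxX = std::max(b.maxX, pos[i * 4 + 0]);
        b.minY = std::min(b.minY, pos[i * 4 + 1]);
        b.maxY = std::max(b.maxY, pos[i * 4 + 1]);
    }
    glDeleteBuffers(1, &tfBuffer);
    return b;
}
```

The read-back does stall the pipeline, though, so for a handful of small 2D meshes it may well be cheaper to just transform the vertices (or only the corners of each mesh's local bounding box) on the CPU.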

terrain – Rendering smooth ground

I’m attempting to render terrain made out of a triangle mesh. The problem is that whenever I have a northwest -> southeast ramp in the terrain, I get this diamond pattern:

[screenshot: diamond-shaped shading pattern on the ramp]

The issue is that at the top and bottom of the ramp the terrain is flatter than in the middle, so if the ramp is aligned with the triangles, each triangle will have two light vertices (at the top and bottom) and one dark vertex (in the middle).

How can I fix this effect? Subdividing the mesh helps, but doesn’t fix it completely.

Vertex shader:

uniform mat4 projection;
attribute vec3 position;
attribute vec3 normal;
attribute vec4 color;

varying vec3 f_position;
varying vec3 f_normal;
varying vec4 f_color;

void main(void) {
    gl_Position = projection * vec4(position, 1.0);
    f_color = color;
    f_normal = normal;
    f_position = position;
}

Fragment shader:

varying vec3 f_position;
varying vec3 f_normal;
varying vec4 f_color;

uniform vec3 light_position;

void main(void) {
    float light = 0.5 + 0.5 * abs(dot(normalize(f_normal), normalize(light_position - f_position)));
    gl_FragColor = vec4(light, light, light, 1) * f_color;
}