glsl – Is linear filtering possible on depth textures in OpenGL?

I’m working on shadow maps in OpenGL (using C#).

First, I’ve created a framebuffer and attached a depth texture as follows:

// Generate the framebuffer.
var framebuffer = 0u;

glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

// Generate the depth texture.
var shadowMap = 0u;

glGenTextures(1, &shadowMap);
glBindTexture(GL_TEXTURE_2D, shadowMap);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_FLOAT, null);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowMap, 0);

// Set the read and draw buffers.
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

Later (after rendering to the shadow map and preparing the main scene), I sample from the shadow map in a GLSL fragment shader as follows:

float shadow = texture(shadowMap, shadowCoords.xy).r;

Here shadowCoords is a vec3 containing the fragment’s coordinates from the perspective of the global directional light source (the one used to create the shadow map). The result is shown below. As expected, the shadow edges are jagged due to the GL_NEAREST filtering.

Shadows!

To improve smoothness, I tried replacing the shadow map’s filtering with GL_LINEAR, but the results haven’t changed. I understand there are other avenues I could take (like Percentage-Closer Filtering), but I’d like to answer this question first, if only for my sanity. I’ve also noticed that other texture parameters (like GL_CLAMP_TO_EDGE rather than GL_REPEAT for wrapping) don’t seem to have any effect on the shadow map, which hints that this may be a limitation of depth textures in general.

To reiterate: Is linear filtering possible using depth textures in OpenGL?
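
For reference, a minimal sketch (an illustration only, not code from the project above) of the comparison-mode setup that linear filtering on depth textures is normally paired with, reusing the shadowMap handle from the snippet above; with a compare mode set, GL_LINEAR typically filters the comparison results (hardware 2x2 PCF) rather than the raw depth values:

// Sketch only: enable depth comparison so GL_LINEAR filters comparison results.
glBindTexture(GL_TEXTURE_2D, shadowMap);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);

// GLSL side (sketch): declare "uniform sampler2DShadow shadowMap;" and sample with
// "float shadow = texture(shadowMap, shadowCoords);", where shadowCoords.z is the
// reference depth being compared against the stored depth.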

opengl – What is the best way to approach a multi pass rendering system?

I am trying to code a new feature in my engine, but I can’t find a way to implement my idea, which is the following: I am trying to get multi-pass rendering with more than two passes.

I know how to do a two-pass rendering pipeline for effects like blurring and shadow mapping, but my problem is that now I want an undefined number of passes without having to write that many different functions (a different one for every pass).

Do you have any ideas about what I could do here? I have thought about using some kind of function pointer that is called x times, each time calling a different function, but again, I don’t know what the best (or easiest) approach is. I would love to hear your ideas and comments. Thanks!

PS: I am using OpenGL, if that is useful information.
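
To make the idea concrete, here is a minimal sketch (all names hypothetical) of one way to express an arbitrary number of passes as data rather than as separate functions, with each pass reading the previous pass’s output texture:

// Assumes an OpenGL loader header (e.g. glad or GLEW) is already included for GLuint.
#include <functional>
#include <vector>

struct Pass {
    GLuint fbo;      // framebuffer this pass renders into (0 = default framebuffer)
    GLuint output;   // color texture attached to fbo, consumed by the next pass
    std::function<void(GLuint inputTexture)> execute; // binds a shader and draws
};

// Runs the passes in order; each pass samples the previous pass's output texture.
void RunPasses(std::vector<Pass>& passes, GLuint sceneTexture)
{
    GLuint input = sceneTexture;
    for (Pass& pass : passes) {
        glBindFramebuffer(GL_FRAMEBUFFER, pass.fbo);
        pass.execute(input);  // e.g. draw a fullscreen quad that samples 'input'
        input = pass.output;
    }
}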

opengl – GLSL link fails with C9999 (too many buffer declarations?)

I’m receiving a C9999 (*** exception during compilation ***) linker error for an OpenGL 4.6 compute shader. It seems to be related to the number of SSBOs I have declared (14 separate declarations), but that really doesn’t seem like it should be a problem, given that my GTX 1070 has 96 buffer binding locations.

None of the names are reserved keywords, and I’m not using double underscores. This has happened to me before, but I’ve worked around it by splitting my code into separate shaders with fewer buffer declarations.

I’m finally asking about it because, for performance reasons, I’d really rather not split this up.

(Screenshot: the C9999 error output from the driver.)

This is all the information the driver gives me in this case.

Update:

While I have received that error for buffers in the past, now I’ve narrowed this down to the following:


buffer _elementmap   { uint elementmap[]; };

void element_insert(vec4 element)
{
    uint hash = get_hash(element);
    uint spot = 2 + (hash % (elementmap.length() - 2));
                                        ~~^

    uint pos = atomicAdd(elementmap[0], 1);

    (...)
}

Calling length() on that buffer produces the error for me. Seems like a driver bug. If I pass the length in as a separate buffer or a uniform, it works fine.
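
For completeness, a minimal sketch of the uniform-based workaround (elementmapBuffer and computeProgram are placeholder names, and the shader is assumed to declare "uniform uint elementmap_length;" in place of the .length() call):

// Query the SSBO size on the CPU and pass the element count as a uniform,
// avoiding the .length() call that triggers C9999.
GLint64 sizeInBytes = 0;
glBindBuffer(GL_SHADER_STORAGE_BUFFER, elementmapBuffer);
glGetBufferParameteri64v(GL_SHADER_STORAGE_BUFFER, GL_BUFFER_SIZE, &sizeInBytes);

glUseProgram(computeProgram);
glUniform1ui(glGetUniformLocation(computeProgram, "elementmap_length"),
             (GLuint)(sizeInBytes / sizeof(GLuint)));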

Additionally, I have a separate, much simpler compute shader that uses .length() on that same declaration but doesn’t include this particular function, and it does work.

opengl – GLES3 – GL_INVALID_OPERATION: Operation illegal in current state (Unity Android native)

I use Unity (editor 2020.1.8f1). My application uses an Android native .so library that uses GLES3. To use it, I first did these steps:

first: go to Project Settings >>> Player >>> Other Settings.

second: find "Auto Graphic API" and uncheck it.

third: Now you can see a new panel just below the "Auto Graphic API". It's a list of "Graphics APIs". Remove all graphics APIs and just add "OpenGLES3".

Then, in the Android CMakeLists.txt file, I indicated that I use GLES3:

...
target_link_libraries(
        libcocodec
        GLESv3               <----------------  THIS LINE
        decoder_engine_lib
        ${log-lib}
)
...

And here is the usage:

void RenderAPI_OpenGLCoreES::EndModifyTexture(
        void* textureHandle,
        int textureWidth,
        int textureHeight,
        int rowPitch,
        void* dataPtr,
        bool destroy)
{
    GLuint gltex = (GLuint)(size_t)(textureHandle);
    // Update texture data, and free the memory buffer
    glBindTexture(GL_TEXTURE_2D, gltex);
    
    GLenum format = GL_RG;

    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, textureWidth, textureHeight, format, GL_UNSIGNED_BYTE, dataPtr);
    if (destroy)
        delete[] (unsigned char*)dataPtr;
}

and the error message I get is:

2020-11-08 10:51:46.966 1512-1930/com.co.unityandroidplayer E/Unity: OPENGL NATIVE PLUG-IN ERROR: GL_INVALID_OPERATION: Operation illegal in current state 
    (Filename: ./Runtime/GfxDevice/opengles/GfxDeviceGLES.cpp Line: 358)

I assume that something is wrong with the usage of GLenum format = GL_RG;, because (just as a test) if I use GLenum format = GL_ALPHA; I don’t get any errors (and I get the expected result). It looks like GLES3 doesn’t know what the GL_RG format is.

What am I doing wrong?
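
For reference, a minimal sketch of the compatibility rule involved: the format passed to glTexSubImage2D must be compatible with the internal format the texture was allocated with, so a GL_RG / GL_UNSIGNED_BYTE upload is only valid for a two-channel allocation such as GL_RG8. The texture creation here is illustrative only; in this project the texture is actually created on the Unity side:

// Allocate a two-channel texture so a GL_RG / GL_UNSIGNED_BYTE upload is valid.
GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RG8, textureWidth, textureHeight); // GLES3 immutable storage

// This now matches the internal format and should not raise GL_INVALID_OPERATION:
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, textureWidth, textureHeight,
                GL_RG, GL_UNSIGNED_BYTE, dataPtr);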

c++ – OpenGL model not rendering properly

Hi, I have been playing with OpenGL for a while and I’ve hit a wall that I don’t know how to get past. I am trying to render a model of an object based on a .obj file. In that file I have position coordinates, UV coordinates, and indices of the position and UV coordinates (faces). I am trying to render the model like so:

  1. Get all the positions from the files
  2. Get all the uv coordinates from the file
  3. Get all the faces.
  4. Generate array of vertices with all the positions and uv coordinates in order defined by indices.
  5. Index the vertices 0,1,2,…
  6. Draw the indexed vertices.

I got stuck when I tried to just render the model without the texture: I am shown a monstrosity instead of what I am trying to achieve. When I draw the model the other way (get all the vertices and index them in the order they should be drawn), everything is fine, but that way I cannot texture the model the way I want. I am adding the code below:

Reading from the file:

#include <fstream>
#include <string>
#include <strstream>
#include <vector>

std::vector<float> verts;           //container for vertices
std::vector<unsigned int> inds;     //container for indexes of vertices
std::vector<unsigned int> texinds;  //container for indexes of textures
std::vector<float> texs;            //container for textures


bool LoadFromFile(const std::string& path) {
    std::ifstream f(path);
    if (!f.is_open())
        return false;
    while (!f.eof()) {
        char line[128];
        f.getline(line, 128);
        std::strstream s;
        s << line;
        char junk;
        char junk1;
        char junk2;
        char junk3;
        if ((line[0] == 'v') && (line[1] == 't')) {
            float Textu[2];
            s >> junk >> junk1 >> Textu[0] >> Textu[1];  // ignoring the first 2 characters (vt) before the data
            texs.push_back(Textu[0]);
            texs.push_back(Textu[1]);
        }
        if (line[0] == 'f') {
            unsigned int Index[6];
            s >> junk >> Index[0] >> junk1 >> Index[1] >> Index[2] >> junk2 >> Index[3] >> Index[4] >> junk3 >> Index[5]; // ignoring f and every / between indexes
            inds.push_back(Index[0] - 1);
            texinds.push_back(Index[1] - 1);
            inds.push_back(Index[2] - 1);
            texinds.push_back(Index[3] - 1);
            inds.push_back(Index[4] - 1);
            texinds.push_back(Index[5] - 1);
        }
        if ((line[0] == 'v') && (line[1] == ' ')) {
            float Vertex[3];
            s >> junk >> Vertex[0] >> Vertex[1] >> Vertex[2];
            verts.push_back(Vertex[0]);
            verts.push_back(Vertex[1]);
            verts.push_back(Vertex[2]);
        }
    }
    return true;
}

Creating the array of vertices and indexing them:

        float Vertices[89868];
        for (int i = 0; i < inds.size(); i++) {
            Vertices[i] = verts[inds[i]]; // Creating an array with the vertices in the order defined in the index vector
        }
        unsigned int indices[89868];
        for (int i = 0; i < inds.size(); i++) {
            indices[i] = i;
        }

I understand that I have probably made a stupid mistake somewhere, but I am literally incapable of finding it.
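
For clarity, here is a small sketch of step 4 from the list above as I understand it, assuming the containers are laid out the way LoadFromFile fills them (three floats per position in verts, two per UV in texs); it is only an illustration of the de-indexing idea, not code from the project:

// Expand each face corner into an interleaved (x, y, z, u, v) vertex,
// using the two separate index streams produced by LoadFromFile.
std::vector<float> interleaved;
for (size_t i = 0; i < inds.size(); i++) {
    unsigned int p = inds[i];     // position index (already 0-based)
    unsigned int t = texinds[i];  // UV index (already 0-based)
    interleaved.push_back(verts[p * 3 + 0]); // x
    interleaved.push_back(verts[p * 3 + 1]); // y
    interleaved.push_back(verts[p * 3 + 2]); // z
    interleaved.push_back(texs[t * 2 + 0]);  // u
    interleaved.push_back(texs[t * 2 + 1]);  // v
}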

c++ – Attempt to fix sprite sheet pixel bleeding in OpenGL 2D causing sprite distortion

While working on a project, I encountered the common problem of pixel bleeding when trying to draw subregions of my sprite sheet. This caused “seams” of sorts to appear at the edges of my sprites. You can see the issue here, on the right and top of the sprite.

Doing some searching, I found others with a similar problem, and a suggested solution (here, and here for example) was to offset my texture coordinates by a bit, such as 0.5. I tried this, and it seemed to work. But I have noticed that sometimes, depending on where the sprite or camera is, I get a bit of distortion on the sprites. Here, the left side appears to be cut off, and here, the bottom seems to have expanded. (I should note that the distortion happens on all sides; I just happened to take screenshots of it happening on the bottom and left.) It may be a little difficult to see in screenshots, but it is definitely noticeable in motion. For reference, here is the part of the sprite sheet that is being displayed here.

Does anybody have any idea what is going on here? I didn’t actually notice this issue until recently. I originally set out to resolve the pixel bleeding when I saw it occurring between my tile sprites. This new issue does not occur with them using my current half-pixel offset solution (or if it does, it’s not noticeable).

Code:

Texture parameters

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

Texture coordinate calculation

std::vector<glm::vec4> Texture2D::GetUVs(int w, int h)
{
    std::vector<glm::vec4> uvs;
    int rows = Width / w;
    int columns = Height / h;

    for (int c = 0; c < columns; c++)
    {
        for (int i = 0; i < rows; i++)
        {
            float offset = 0.5;
            uvs.emplace_back(glm::vec4(float(((i) * w + offset)) / Width,
                                       float(((1 + i) * w - offset)) / Width,
                                       float(((c) * h + offset)) / Height,
                                       float(((1 + c) * h - offset)) / Height));
        }
    }
    return uvs;
}
Where Width and Height are the dimensions of the sprite sheet, and w and h are the dimensions of the subregion, in this case 32 and 32.
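
As a usage note, each returned vec4 appears to pack (uMin, uMax, vMin, vMax) for one cell; a hypothetical call (spriteSheet and spriteIndex are placeholder names) looks like this:

// Hypothetical usage: a sheet of 32x32 cells.
std::vector<glm::vec4> uvs = spriteSheet.GetUVs(32, 32);
glm::vec4 uv = uvs[spriteIndex]; // uv.x = uMin, uv.y = uMax, uv.z = vMin, uv.w = vMax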

How I pass the uvs to the shader

GLfloat verticies[] =
{
    uv.x, uv.w,
    uv.y, uv.z,
    uv.x, uv.z,

    uv.x, uv.w,
    uv.y, uv.w,
    uv.y, uv.z
};

this->shader.Use().SetVector2fv("uvs", 12, verticies);

Where uv is the uv at an index in the uvs vector that was returned above in the GetUVs function.

Vertex shader

#version 330 core
layout (location = 0) in vec2 vertex; 

out vec2 TextureCoordinates;

uniform vec2 uvs[6];
uniform mat4 model;
uniform mat4 projection;

void main()
{
    const vec2 position[6] = vec2[]
    (
        vec2(0.0f, 1.0f),
        vec2(1.0f, 0.0f),
        vec2(0.0f, 0.0f),

        vec2(0.0f, 1.0f),
        vec2(1.0f, 1.0f),
        vec2(1.0f, 0.0f)
    );

   TextureCoordinates = uvs[gl_VertexID];
   gl_Position = projection * model * vec4(position[gl_VertexID], 0.0, 1.0);
}

Fragment shader

#version 330 core
in vec2 TextureCoordinates;
out vec4 color;

uniform sampler2D image;
uniform vec4 spriteColor;

void main()
{    
    color = vec4(spriteColor) * texture(image, TextureCoordinates);
}  

Thanks for reading. I have asked this question a few places and not gotten any response, so any help is greatly appreciated.

OpenGL (GLSL) : Basic 2D Lighting Optimization Issue (Fragment Shader)

I’m using a fragment shader to implement 2D lighting (code further below). Even though I am satisfied with the visuals of the lighting, I noticed that it has quite high GPU usage, and when I try to add about 40+ light sources the usage is close to 100% (GTX 1050).

I have a uniform array of structs that contains data about each light source, and a for loop that iterates over all of them.

At first I thought I was pushing too much data to the GPU, so I packed the RGB values of the light color into a single 32-bit integer, and the two strengths of the light into another 32-bit integer. Then I tried simplifying the formulas I used (using a function composed of multiple linear functions; by "composed" I’m not referring to the mathematical operation), but that only seemed to make matters worse.
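
To make the packing concrete, here is a small sketch (illustration only) of the bit layout that both the fragment shader and the AddLightSource function below assume:

#include <cstdint>

// Packs an 8-bit-per-channel color into one 32-bit integer (0x00RRGGBB),
// and the two 16-bit strengths into another (VisualStrength in the high half).
// This mirrors the unpacking done with shifts and masks in the shader below.
uint32_t PackColor(uint8_t r, uint8_t g, uint8_t b) {
    return (uint32_t(r) << 16) | (uint32_t(g) << 8) | uint32_t(b);
}

uint32_t PackStrengths(uint16_t lightStrength, uint16_t visualStrength) {
    return (uint32_t(visualStrength) << 16) | uint32_t(lightStrength);
}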

I think it’s worth noting the difference between the LightStrength and VisualStrength values used in the code: LightStrength is the strength of the light that lights up the medium around it, while VisualStrength is the strength of the colored hue around the light. There is also a dark hue variable used to make the scene darker at different times of day.

The code of the fragment shader:

#version 450 core

in vec2 texCoord0;
uniform vec3 CameraPosition;
uniform mat4 Projection;
uniform float DarkHue;

uniform sampler2D u_Texture;
uniform vec2 u_resolution;

struct LightSource {
    vec2 Position;
    int LightColor;
    int ArgumentValue;
};

uniform LightSource LightSources[300];
uniform int LightSourceCount;

float GetLightfactor(float x,float Streght) {
    return min(1/(x*(Streght/100.0)+1),1);
}

void main() {
    gl_FragColor = texture2D(u_Texture,texCoord0);

    vec3 LightSum = vec3(0);
    vec4 PCameraPosition = vec4(CameraPosition,0) * Projection;

    vec2 NormalizedPosition = gl_FragCoord.xy*2/u_resolution-1;

    float LightFactor,VisualFactor,LightStreght,VisualStreght;

    for (int i = 0;i < LightSourceCount;++i) {

        vec4 Pos = vec4(LightSources[i].Position,0,0) * Projection + PCameraPosition; 

        vec2 coord = (NormalizedPosition-Pos.xy) * u_resolution;
        LightFactor = 0.0;
        VisualFactor = 0.0;

        LightStreght = LightSources[i].ArgumentValue & 0xffff;
        VisualStreght = (LightSources[i].ArgumentValue >> 16) & 0xffff;

        float lng = length(coord);

        LightFactor = GetLightfactor(lng,LightStreght);
        VisualFactor = GetLightfactor(lng,VisualStreght);

        LightSum = mix(LightSum,vec3(1),gl_FragColor.rgb * LightFactor * (1-DarkHue)) + vec3(((LightSources[i].LightColor >> 16)&0xff)/255.0,((LightSources[i].LightColor >> 8)&0xff)/255.0,(LightSources[i].LightColor&0xff)/255.0) * VisualFactor;

    }

    gl_FragColor.rgb *= DarkHue;
    gl_FragColor.rgb += LightSum;
    
}

The code of the C++ function that adds a light source (yes, caching is used when setting uniforms):

static void AddLightSource(Vec2 Position, uint8_t R, uint8_t B, uint8_t G, uint16_t LightStrenght,uint16_t VisualStrenght) {
        std::string access = "LightSources[" + std::to_string(ActiveLightSources) + "]";
        int Value = (VisualStrenght << 16) | LightStrenght;
        int Color = (R << 16) | (G << 8) | B;
        Vec3 Translated = VertexArrayManager::TranslateValue;
        shader->setUniform2f(access + ".Position", glm::vec2(Position.x+Translated.x,Position.y + Translated.y));
        shader->setUniform1i(access + ".LightColor", Color);
        shader->setUniform1i(access + ".ArgumentValue", Value);
        ActiveLightSources++;
        shader->setUniform1i("LightSourceCount", ActiveLightSources);
}

How to convert OpenGL 2.0 code to OpenGL-ES 2.0?

I have a 3D game library that uses OpenGL 2.0 for PC and I need to convert it to OpenGL-ES 2.0 to compile it for Android.

Because the library is huge, I’d like to avoid converting it line-by-line by hand.

Is there some faster way I can convert desktop OpenGL to OpenGL-ES source code, like a wrapper, or maybe some layer running on Android that converts desktop OpenGL to ES at runtime?

Is there a program to convert OpenGL 2.0 to OpenGL-ES 2.0?

I have a 3D game library that uses OpenGL 2.0 for PC and I need to convert it to OpenGL-ES 2.0 to compile it for Android. Because the library is huge, it can’t be done by hand, so I was wondering: is there some kind of software to automatically convert desktop OpenGL source code to OpenGL-ES, some wrapper, or maybe some layer running on Android that converts desktop OpenGL to ES at runtime?
Perhaps there is a tool that automatically converts desktop OpenGL to a cross-platform 3D rendering library?