How to modify this old pre-3.0 OpenGL code to modern OpenGL?

I've done the getting started section on the learnopengl website. However, that means I don't understand the old way of doing things, so I am struggling to follow some code I want to study to better understand cloth simulation. In particular, I want to know how the main structure of the cloth is created in OpenGL. I know how to create a box right now, maybe not instinctively yet, but by looking at the code I can see what is happening.

Here is the github repository to this code: https://github.com/bailus/Cloth

I am looking at Cloth.cpp and see that the cloth is drawn through the use of many Triangle objects:

glBegin(mode);
for (std::vector<Triangle>::size_type i = 0; i < triangles.size(); i++)
    triangles[i].display();
glEnd();

The display method of Triangle looks like this:

void Triangle::display() {
    for (int i = 0; i < 3; i++) {
        glTexCoord2fv(glm::value_ptr(particles[i]->texCoord));
        glNormal3fv(glm::value_ptr(particles[i]->normal));
        glVertex3fv(glm::value_ptr(particles[i]->pos));
    }
}

So my question is: what would be going on here in terms of VBOs? I can see there are texture coordinates and vertex positions; I'm not sure about the normals. I would assume two VBOs from this, but how would I modify my display and drawing functions to support the latest OpenGL? Would the current object-oriented structure still work? I am not sure how to go about it.
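
For context, here is roughly what I imagine the modern equivalent would look like, just so you can see where my understanding is. This is only a sketch: the Vertex struct, the buildClothBuffer() helper, and the attribute layout are my own guesses, not code from the repository.

#include <cstddef>              // offsetof
#include <vector>
#include <glad/glad.h>          // or whichever GL loader is in use
#include <glm/glm.hpp>

// One interleaved vertex per corner of every triangle (position, normal, UV).
struct Vertex {
    glm::vec3 pos;
    glm::vec3 normal;
    glm::vec2 texCoord;
};

// Flatten the cloth's triangles into a single array the GPU can consume.
std::vector<Vertex> buildClothBuffer(const std::vector<Triangle>& triangles) {
    std::vector<Vertex> data;
    data.reserve(triangles.size() * 3);
    for (const Triangle& t : triangles)
        for (int i = 0; i < 3; i++)
            data.push_back({ t.particles[i]->pos,
                             t.particles[i]->normal,
                             t.particles[i]->texCoord });
    return data;
}

// One-time setup; while the cloth animates, the VBO would be re-uploaded each frame.
GLuint vao = 0, vbo = 0;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, data.size() * sizeof(Vertex), data.data(), GL_DYNAMIC_DRAW);

glEnableVertexAttribArray(0);   // position
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, pos));
glEnableVertexAttribArray(1);   // normal
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, normal));
glEnableVertexAttribArray(2);   // texture coordinate
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, texCoord));

// Per frame, instead of glBegin/glEnd:
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, (GLsizei)data.size());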

opengl – Trouble generating a depth map (black screen)

I am having trouble generating a depth map for my scene. I tried to figure out this issue two weeks ago and got nowhere after days of trying to fix it, so I took a break and tackled it again today. I'm still stuck, with no ideas left.

Here’s the main code:

// Sets up the depth FBO and the texture that will be used to render to a quad
const unsigned int d_width = 1024, d_height = 1024;

    unsigned int depthFBO;
    glGenFramebuffers(1, &depthFBO);
    glBindBuffer(GL_FRAMEBUFFER, depthFBO);

    unsigned int depthMap;
    glGenTextures(1, &depthMap);
    glBindTexture(GL_TEXTURE_2D, depthMap);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, d_width, d_height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthMap, 0);
    glDrawBuffer(GL_NONE);
    glReadBuffer(GL_NONE);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        std::cout << "Incomplete stuff!" << std::endl;

    glBindBuffer(GL_FRAMEBUFFER, 0);

    glm::vec3 lightPos(-2.0f, 4.0f, -1.0f);

Rendering loop:

/* Render here */
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glClearColor(0.1f, 0.1f, 0.1f, 1.0f);

        // render scene from the light's perspective
        float near_plane = 1.0f, far_plane = 8.0f;
        glm::mat4 lightProj = glm::ortho(-10.0f, 10.0f, 10.0f, 10.0f, near_plane, far_plane);
        glm::mat4 lightView = glm::lookAt(lightPos, glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
        glm::mat4 lightSpaceMat = lightProj * lightView;

        depthShader.use();
        depthS.setUniformMat4("lightSpaceMat", lightSpaceMat);
        glm::mat4 model = glm::mat4(1.0f);
        depthS.setUniformMat4("model", model);

        glViewport(0, 0, d_width, d_height);
        glBindFramebuffer(GL_FRAMEBUFFER, depthFBO);
        glClear(GL_DEPTH_BUFFER_BIT);
        
        // render everything else
        planeVAO.bind();
        model = glm::translate(model, glm::vec3(0.0f, -1.0f, 0.0f));
        depthS.setUniformMat4("model", model);

        glDrawArrays(GL_TRIANGLES, 0, 6);

        cubeVAO.bind();
        model = glm::translate(model, glm::vec3(1.0f, 1.0f, -2.0f));
        model = glm::scale(model, glm::vec3(0.6f));
        depthS.setUniformMat4("model", model);
        glDrawArrays(GL_TRIANGLES, 0, 36);

        model = glm::translate(model, glm::vec3(2.0f, 1.0f, -1.0f));
        depthS.setUniformMat4("model", model);
        glDrawArrays(GL_TRIANGLES, 0, 36);

        // render to the quad now, which is supposed to display the depth map
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glViewport(0, 0, scr_width, scr_height);
        glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
        
        quadVAO.bind();
        quadShader.use();
        glActiveTexture(GL_TEXTURE15);
        glBindTexture(GL_TEXTURE_2D, depthMap);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

Here are the shaders:

depth shaders:

#vertex
#version 330 core

layout (location = 0) in vec3 aPos;

uniform mat4 lightSpaceMat;
uniform mat4 model;

void main()
{
    gl_Position = lightSpaceMat * model * vec4(aPos, 1.0);
}

#fragment
#version 330 core

void main()
{
    gl_FragDepth = gl_FragCoord.z;
}

visual quad shaders:

#vertex
#version 330 core

layout (location = 0) in vec3 aPos;
layout (location = 1) in vec2 aTex;

out vec2 TexCoords;

void main()
{
    TexCoords = aTex;
    gl_Position = vec4(aPos, 1.0);
}

#fragment
#version 330 core

out vec4 FragColor;

in vec2 TexCoords;

uniform sampler2D depthMap;

void main()
{
    float depthValue = texture(depthMap, TexCoords).r;
    FragColor = vec4(vec3(depthValue), 1.0);
}

Anyone have any idea what I might be doing wrong? I have tried debugging in every way I could think of and have made no progress.

Python with Catalina : No module named ‘OpenGL’

I am running the examples provided by the PyQtGraph Python graphics module on macOS Catalina. They all work fine for me, except those in 3D, which give this message:

Traceback (most recent call last):
  File "/Applications/anaconda3/lib/python3.7/site-packages/pyqtgraph/examples/GLVolumeItem.py", line 11, in <module>
    import pyqtgraph.opengl as gl
  File "/Applications/anaconda3/lib/python3.7/site-packages/pyqtgraph/opengl/__init__.py", line 1, in <module>
    from .GLViewWidget import GLViewWidget
  File "/Applications/anaconda3/lib/python3.7/site-packages/pyqtgraph/opengl/GLViewWidget.py", line 2, in <module>
    from OpenGL.GL import *
ModuleNotFoundError: No module named 'OpenGL'

The OpenGL module, though deprecated in Catalina, is present at this location (I checked):

/System/Library/Frameworks/OpenGL.framework

but I have no clue how to make Python find it. Does anyone have an idea?

Thanks a lot!

opengl – How to use continious collision detection in a dynamic AABB tree

I am currently writing a game in C++ using OpenGL. I am using a kinetic sweep-and-prune algorithm for the broad phase, and GJK raycast plus GJK & EPA for the narrow phase. However, I have realized that kinetic sweep and prune may not be the best choice, because there are many objects in the scene that are just static and cause a lot of swapping when an object moves.

So basically I would like to implement a dynamic AABB tree for the broad phase, knowing there will be only a few objects requiring continuous collision detection (but these objects are essential). For these fast-moving objects, what is the best way to detect a possible collision with other objects in the broad phase? I am thinking about using an AABB that contains the object's trajectory from one frame to the next; is this a good idea, or will it cause a lot of overhead due to many false positives?

Also, I haven't read too much about dynamic AABB trees, but I think I understand the idea: for each object that moved, check whether its AABB overlaps the root node of the tree; if it does, do the same check with that node's children, and continue until reaching the leaves. A sketch of what I mean is below.
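
To make the two ideas above concrete (the trajectory-enclosing AABB and the recursive overlap query), here is a minimal sketch of what I have in mind. The AABB/Node layout and the function names are my own assumptions, not taken from any particular engine:

#include <glm/glm.hpp>
#include <vector>

struct AABB {
    glm::vec3 min, max;
};

// Do two boxes overlap? (per-axis interval test)
bool overlaps(const AABB& a, const AABB& b) {
    return a.min.x <= b.max.x && a.max.x >= b.min.x &&
           a.min.y <= b.max.y && a.max.y >= b.min.y &&
           a.min.z <= b.max.z && a.max.z >= b.min.z;
}

// "Trajectory" AABB: the union of the box at the start and the end of the frame,
// so a fast-moving object cannot tunnel past the broad phase.
AABB sweptAABB(const AABB& atStart, const AABB& atEnd) {
    return { glm::min(atStart.min, atEnd.min), glm::max(atStart.max, atEnd.max) };
}

struct Node {
    AABB box;            // fat AABB enclosing both children (or the leaf's object)
    Node* left = nullptr;
    Node* right = nullptr;
    int objectId = -1;   // valid only for leaves
    bool isLeaf() const { return left == nullptr && right == nullptr; }
};

// Descend the tree, collecting leaves whose boxes overlap the query box.
void query(const Node* node, const AABB& queryBox, std::vector<int>& hits) {
    if (node == nullptr || !overlaps(node->box, queryBox))
        return;
    if (node->isLeaf()) {
        hits.push_back(node->objectId);
        return;
    }
    query(node->left, queryBox, hits);
    query(node->right, queryBox, hits);
}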

All help is greatly appreciated

opengl – How do I calculate the bounding box for an ortho matrix for Cascaded shadow mapping?

I've been trying to get a cascaded shadow mapping system implemented in my engine, but it appears that the bounding boxes for the cascades aren't correct.

The part I’m interested in can be found here, under the function name “CalcOrthoProjs”.

I've been trying to understand the matrix multiplications using this answer, but the ogldev variable and function names are confusing me.

This is how I modified ogldev’s function to work with my variables:

void Scene::calcOrthoProjections(Camera &camera, glm::mat4 LightView, std::vector<glm::mat4> &orthoContainer, std::vector<GLfloat> &cascadeEnd) {
GLfloat FOV, nearPlane, farPlane, ratio;
camera.getPerspectiveInfo(FOV, nearPlane, farPlane);
ratio = static_cast<GLfloat>(RE::config.height) / static_cast<GLfloat>(RE::config.width);

GLfloat tanHalfHFov = glm::tan(glm::radians(FOV / 2.0f));
GLfloat tanHalfVFov = glm::tan(glm::radians((FOV*ratio) / 2.0));


for (GLuint i = 0; i < RE::config.r_shadow_cascade_factor; i++) {
    GLfloat xn = cascadeEnd[i]     * tanHalfHFov;
    GLfloat xf = cascadeEnd[i + 1] * tanHalfHFov;
    GLfloat yn = cascadeEnd[i]     * tanHalfVFov;
    GLfloat yf = cascadeEnd[i + 1] * tanHalfVFov;

    //The frustum Corners on View(Camera) space
    glm::vec4 frustumCorners[8] = {
        //Near Face
        glm::vec4( xn,  yn, cascadeEnd[i], 1.0),
        glm::vec4(-xn,  yn, cascadeEnd[i], 1.0),
        glm::vec4( xn, -yn, cascadeEnd[i], 1.0),
        glm::vec4(-xn, -yn, cascadeEnd[i], 1.0),
        //Far face
        glm::vec4( xf,  yf, cascadeEnd[i + 1], 1.0),
        glm::vec4(-xf,  yf, cascadeEnd[i + 1], 1.0),
        glm::vec4( xf, -yf, cascadeEnd[i + 1], 1.0),
        glm::vec4(-xf, -yf, cascadeEnd[i + 1], 1.0),


    };

    //The frustum Corners in LightSpace
    glm::vec4 frustumCornersL[8];

    GLfloat minX, maxX, minY, maxY, minZ, maxZ;
    minX = minY = minZ = std::numeric_limits<GLfloat>::max();
    maxX = maxY = maxZ = std::numeric_limits<GLfloat>::min();

    glm::mat4 cam = (camera.getProjection() * camera.getView());
    glm::mat4 camInverse = glm::inverse(cam);

    for (GLuint j = 0; j < 8; j++) {
        //View(Camera) space to world space
        glm::vec4 vW = camInverse * frustumCorners[j];

        //world space to light space
        frustumCornersL[j] = LightView * vW;

        minX = min(minX, frustumCornersL[j].x);
        maxX = max(maxX, frustumCornersL[j].x);
        minY = min(minY, frustumCornersL[j].y);
        maxY = max(maxY, frustumCornersL[j].y);
        minZ = min(minZ, frustumCornersL[j].z);
        maxZ = max(maxZ, frustumCornersL[j].z);
    }
    orthoContainer[i] = glm::ortho(minX, maxX, minY, maxY, minZ, maxZ) * LightView;
 }
}

LightView represents a matrix created with:

glm::lookAt(-glm::normalize(light.direction), glm::vec3(0.0), glm::vec3(0.0, 1.0, 0.0))

camera.getProjection() returns the perspective matrix of the main camera

camera.getView() returns a lookAt matrix aimed at whatever the camera is looking at

The orthoContainer values are then fed into the depth rendering afterwards, unaltered by anything other than the model matrix of each model.

I wrote some comments on how I think the math is done, trying to understand what’s wrong.

The result is a frustum that is too wide, resulting in low-res shadows (even for the closest shadow map):
Low res shadows

and this is the depth map of the closest shadow map:
Depth buffer of the closest shadow map

Any insight as to why this isn't working, or any other best-practice advice, is welcome. Thanks in advance!

c++ – OpenGL Mesh Class

I’ve written a simple mesh class. The purpose of it is to build a mesh, draw it to the screen, and provide some means by which the mesh can be transformed/scaled, etc. This was done with GLAD, GLFW, GLM, and OpenGL.

/*
The mesh class takes vertex data, binds VAOs, VBOs, drawing orders, etc, and draws it.
Other classes can inherit from this class.
*/
class Mesh {
private:

    //-------------------------------------------------------------------------------------------
    // GLenum: drawing mode (ie GL_STATIC_DRAW) and primitive type (ie GL_TRIANGLES)
    GLenum DRAW_MODE, PRIMITIVE_TYPE;

    //-------------------------------------------------------------------------------------------
    // Vertex buffer object, vertex array object, element buffer object
    unsigned int VBO, VAO, EBO;

protected:
    //-------------------------------------------------------------------------------------------
    // Vectors holding vertex and index data
    std::vector<Vertex> vertices;
    std::vector<unsigned int> indices;

    //-------------------------------------------------------------------------------------------
    void init() {
        // Generate vertex arrays
        glGenVertexArrays(1, &VAO);
        // Generate VBO
        glGenBuffers(1, &VBO);
        // Generate EBO
        glGenBuffers(1, &EBO);

        // Bind the VAO
        glBindVertexArray(VAO);

        // Bind the buffer
        glBindBuffer(GL_ARRAY_BUFFER, VBO);
        // Detail the VBO buffer data - attach the vertices
        glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex), &vertices[0], DRAW_MODE);

        // Bind the indices
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
        // Detail the EBO data
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(unsigned int),
            &indices[0], DRAW_MODE);

        glEnableVertexAttribArray(0);
        // Tell OpenGL how the vertex data is structured
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)0);
        glBindVertexArray(0);
    }

    //-------------------------------------------------------------------------------------------
    void set_vertices(std::vector<Vertex> _vertices) {
        vertices = _vertices;
    }

    //-------------------------------------------------------------------------------------------
    void set_indices(std::vector<unsigned int> _indices) {
        indices = _indices;
    }

    //-------------------------------------------------------------------------------------------
    void set_primitive_type(GLenum _PRIMITIVE_TYPE) {
        PRIMITIVE_TYPE = _PRIMITIVE_TYPE;
    }

    //-------------------------------------------------------------------------------------------
    void set_draw_mode(GLenum _DRAW_MODE) {
        DRAW_MODE = _DRAW_MODE;
    }

public:
    //-------------------------------------------------------------------------------------------
    Mesh(std::vector<Vertex> _vertices, std::vector<unsigned int> _indices,
        GLenum _DRAW_MODE = GL_STATIC_DRAW, GLenum _PRIMITIVE_TYPE = GL_TRIANGLES) {
        this->vertices = _vertices;
        this->indices = _indices;
        this->DRAW_MODE = _DRAW_MODE;
        this->PRIMITIVE_TYPE = _PRIMITIVE_TYPE;
        //std::cout << vertices[0].position.x << std::endl;
        init();
    }

    //-------------------------------------------------------------------------------------------
    // Constructor for an empty mesh. Note: it MUST RECEIVE VERTEX DATA
    Mesh(GLenum _DRAW_MODE = GL_STATIC_DRAW, GLenum _PRIMITIVE_TYPE = GL_TRIANGLES) {
        this->DRAW_MODE = _DRAW_MODE;
        this->PRIMITIVE_TYPE = _PRIMITIVE_TYPE;
    }

    //-------------------------------------------------------------------------------------------
    virtual ~Mesh() {
        glDeleteVertexArrays(1, &VAO);
        glDeleteBuffers(1, &VBO);
        glDeleteBuffers(1, &EBO);
    }

    //-------------------------------------------------------------------------------------------
    virtual void update() {}

    //-------------------------------------------------------------------------------------------
    void draw() {
        // Bind the EBO
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
        // Bind the vertex array object
        glBindVertexArray(VAO);

        // Ready to draw
        glDrawElements(PRIMITIVE_TYPE, indices.size(), GL_UNSIGNED_INT, 0);
        // Unbind the vertex array (although this isn't entirely necessary)
        glBindVertexArray(0);
    }

    //-------------------------------------------------------------------------------------------
    /* Now I will introduce some simple mesh transformation functions */
    void move(glm::vec3 _position) {
        // Transform each vertex in the given vector to achieve the desired effect
        for (std::size_t i = 0; i < vertices.size(); i++) {
            vertices[i].position += _position;
        }
    }

    //-------------------------------------------------------------------------------------------
    void scale(float factor) {
        // Initialise as identity matrix
        glm::mat3 scaling_matrix = glm::mat3();
        // Multiply by the scaling factor
        scaling_matrix = factor * scaling_matrix;
        // Apply the transformation
        for (std::size_t i = 0; i < vertices.size(); i++) {
            vertices[i].position = scaling_matrix * vertices[i].position;
        }
    }
};

I have also made a simple application of the mesh class: drawing a plane.

// A simple plane, the test shape that we'll use for drawing
class Plane : public Mesh {
private:
    //-------------------------------------------------------------------------------------------
    // Amount of vertices in the x direction
    std::size_t SIZE_X = 100;
    // Amount of vertices in the y direction
    std::size_t SIZE_Y = 100;
    // Width between vertices (x direction)
    std::size_t VERTEX_WIDTH = 1;
    // 'Height' between vertices (y direction)
    std::size_t VERTEX_HEIGHT = 1;

    //-------------------------------------------------------------------------------------------
    std::vector<Vertex> vertices;
    std::vector<unsigned int> indices;

    //-------------------------------------------------------------------------------------------
    // Set up the plane
    void create_mesh_plane() {
        const int w = SIZE_X + 1;
        for (std::size_t i = 0; i < SIZE_X + 1; i++) {
            for (std::size_t j = 0; j < SIZE_Y + 1; j++) {
                Vertex v;
                v.position.x = i * VERTEX_WIDTH;
                v.position.y = j * VERTEX_HEIGHT;
                v.position.z = 0;
                vertices.push_back(v);

                unsigned int n = j * (SIZE_X + 1) + i;

                if (j < SIZE_Y && i < SIZE_X) {
                    // First face
                    indices.push_back(n);
                    indices.push_back(n + 1);
                    indices.push_back(n + w);
                    // Second face
                    indices.push_back(n + 1);
                    indices.push_back(n + 1 + w);
                    indices.push_back(n + 1 + w - 1);
                }
            }
        }

        //-------------------------------------------------------------------------------------------
        set_vertices(vertices);
        set_indices(indices);
        set_primitive_type(GL_TRIANGLES);
        set_draw_mode(GL_STATIC_DRAW);
        init();
    }

public:

    //-------------------------------------------------------------------------------------------
    Plane() {
        create_mesh_plane();
    }

    //-------------------------------------------------------------------------------------------
    ~Plane() {

    }
};

This code works: you can instantiate a Plane object and draw it in your render function, and it runs quickly and nicely. A minimal usage example is shown below.
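
For reference, this is roughly how I'm using it right now. The shader wrapper and the GLFW window setup are assumed to exist elsewhere and aren't shown:

// Illustrative usage only: window creation, GL loading, and the "shader"
// object (with the usual MVP uniforms) are assumed to be set up elsewhere.
Plane plane;

while (!glfwWindowShouldClose(window)) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    shader.use();     // assumed shader wrapper
    plane.draw();     // binds the VAO/EBO and issues glDrawElements

    glfwSwapBuffers(window);
    glfwPollEvents();
}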

I'm looking for a review of this code because it's going to become a centrepiece of future things I work on with OpenGL, and I am concerned about efficiency, particularly:

  • Multiple instantiations of mesh-like objects lead to multiple VBOs, causing unnecessary buffer switches.

  • The vectors seem like an incredibly inefficient way of storing the data, particularly because, as can be seen in my move() and scale() functions, I'm iterating over them, which is slow in real time and detrimental to performance. Also, if I want to dynamically update mesh vertex data (I added a virtual update() function for this purpose), it would be extremely slow.

  • I could probably split up the init() function so that it doesn't have to be called again every time the vertex data changes (i.e. the drawing order stays the same; I could just feed in the new vertex data if I wanted to update the mesh during its lifetime). A sketch of what I mean follows this list.
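
Roughly, the split I have in mind for that last point looks like this. It is only a sketch: update_vertices() is a hypothetical member function, and it assumes the vertex count (and therefore the index buffer) stays the same so the existing VBO can be reused:

// Hypothetical addition to Mesh: re-upload vertex data into the existing VBO
// instead of calling init() again. Assumes the number of vertices is unchanged.
void update_vertices(const std::vector<Vertex>& _vertices) {
    vertices = _vertices;
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    // Overwrite the buffer contents in place; no new handles, no reallocation.
    glBufferSubData(GL_ARRAY_BUFFER, 0, vertices.size() * sizeof(Vertex), vertices.data());
}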

I’d be grateful for any feedback.

drivers – Getting correct opengl version

I want to play Red Eclipse, which requires OpenGL 2.0 at least, but my system has version 1.4 installed, as the command glxinfo | grep -i opengl shows:

OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) G33 
OpenGL version string: 1.4 Mesa 20.0.4
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 2.0 Mesa 20.0.4
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 1.0.16
OpenGL ES profile extensions:

I am using Ubuntu Mate 20.04 LTS
My PC specs are:

  • Intel Core 2 Duo E4500 CPU
  • 2GB DDR2 RAM

I have seen different methods to upgrade the driver, but I don't know whether the newer versions are compatible with my system. How do I know which version is suitable for my system, and how do I install it?

opengl – Calculating Camera View Frustum Corner for Directional Light Shadow Map

I'm trying to calculate the 8 corners of the view frustum so that I can use them to build the ortho projection and view matrix needed to render shadows based on the camera's position. Currently, I'm not sure how to convert the frustum corners from local space into world space. I have calculated the frustum corners in local space as follows (correct me if I'm wrong):

float tan = 2.0 * std::tan(m_Camera->FOV * 0.5);
float nearHeight = tan * m_Camera->Near;
float nearWidth = nearHeight * m_Camera->Aspect;
float farHeight = tan * m_Camera->Far;
float farWidth = farHeight * m_Camera->Aspect;

Vec3 nearCenter = m_Camera->Position + m_Camera->Forward * m_Camera->Near;
Vec3 farCenter = m_Camera->Position + m_Camera->Forward * m_Camera->Far;

Vec3 frustumCorners[8] = {
    nearCenter - m_Camera->Up * nearHeight - m_Camera->Right * nearWidth, // Near bottom left
    nearCenter + m_Camera->Up * nearHeight - m_Camera->Right * nearWidth, // Near top left
    nearCenter + m_Camera->Up * nearHeight + m_Camera->Right * nearWidth, // Near top right
    nearCenter - m_Camera->Up * nearHeight + m_Camera->Right * nearWidth, // Near bottom right

    farCenter - m_Camera->Up * farHeight - m_Camera->Right * nearWidth, // Far bottom left
    farCenter + m_Camera->Up * farHeight - m_Camera->Right * nearWidth, // Far top left
    farCenter + m_Camera->Up * farHeight + m_Camera->Right * nearWidth, // Far top right
    farCenter - m_Camera->Up * farHeight + m_Camera->Right * nearWidth, // Far bottom right
};

How do I move these corners into world space?

C++, SDL2, OpenGL – Compilation error

I want to draw a texture in C++ using the SDL2 and OpenGL libraries, but when I compile and run the code, the terminal gives an error: Segmentation fault

All code:

#include <SDL2/SDL_image.h>
#include <SDL2/SDL.h>
#include <SDL2/SDL_opengl.h>

SDL_Window *window;
SDL_Event event;

int main(){
    window=SDL_CreateWindow("Window", SDL_WINDOWPOS_CENTERED,
                SDL_WINDOWPOS_CENTERED, 800, 800, SDL_WINDOW_OPENGL);

    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);

    SDL_GLContext context=SDL_GL_CreateContext(window);

    GLuint TextureID = 0;

    IMG_Init(IMG_INIT_JPG);

    SDL_Surface* Surface = IMG_Load("someimage.jpg");

    glGenTextures(1, &TextureID);
    glBindTexture(GL_TEXTURE_2D, TextureID);

    int Mode = GL_RGB;

    if(Surface->format->BytesPerPixel == 4) {
        Mode = GL_RGBA;
    }

    glTexImage2D(GL_TEXTURE_2D, 0, Mode, Surface->w, Surface->h, 0, Mode, GL_UNSIGNED_BYTE, Surface->pixels);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glBindTexture(GL_TEXTURE_2D, TextureID);

    int X = 0;
    int Y = 0;
    int Width = 400;
    int Height = 400;

    while(window!=NULL){
        while(SDL_PollEvent(&event)){
            if(event.type==SDL_QUIT)
                window=NULL;
        }

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, 800, 800, 0, -10, 10);

        glBegin(GL_QUADS);
            glTexCoord2f(0, 0); glVertex2f(0, 0);
            glTexCoord2f(1, 0); glVertex2f(Width, 0);
            glTexCoord2f(1, 1); glVertex2f(Width, Height);
            glTexCoord2f(0, 1); glVertex2f(0, Height);
        glEnd();

        SDL_GL_SwapWindow(window);
    }
}

Compilation command:

g++ main.cpp -o main -lFOX -lGL -lSDL2 -lSDL2_image

How can I fix this error?

Please help me!

opengl – Help in understanding atmospheric scattering

I have made a planet and wanted to make an atmosphere around it, so I was referring to this site:

Click to visit site

I don’t understand this:

As with the lookup table proposed in Nishita et al. 1993, we can get the optical depth for the ray to the sun from any sample point in the atmosphere. All we need is the height of the sample point (x) and the angle from vertical to the sun (y), and we look up (x, y) in the table. This eliminates the need to calculate one of the out-scattering integrals. In addition, the optical depth for the ray to the camera can be figured out in the same way, right? Well, almost. It works the same way when the camera is in space, but not when the camera is in the atmosphere. That’s because the sample rays used in the lookup table go from some point at height x all the way to the top of the atmosphere. They don’t stop at some point in the middle of the atmosphere, as they would need to when the camera is inside the atmosphere.

Fortunately, the solution to this is very simple. First we do a lookup from sample point P to the camera to get the optical depth of the ray passing through the camera to the top of the atmosphere. Then we do a second lookup for the same ray, but starting at the camera instead of starting at P. This will give us the optical depth for the part of the ray that we don't want, and we can subtract it from the result of the first lookup. Examine the rays starting from the ground vertex (B1) in Figure 16-3 for a graphical representation of this.
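
If I follow the quoted paragraph correctly, the subtraction it describes would look something like the sketch below. All the helper names (lookupOpticalDepth, heightAboveSurface, angleFromVertical) are placeholders of mine standing in for the article's lookup table, not functions from the article:

#include <glm/glm.hpp>

// Placeholders (assumed, not a real API):
float lookupOpticalDepth(float height, float angleFromVertical); // the (x, y) table
float heightAboveSurface(const glm::vec3& p);
float angleFromVertical(const glm::vec3& p, const glm::vec3& rayDir);

// Optical depth from sample point P to the camera, when the camera is inside
// the atmosphere: look up "P to top of atmosphere" along the view ray, then
// subtract "camera to top of atmosphere" along the same ray.
float opticalDepthSampleToCamera(const glm::vec3& P, const glm::vec3& camera)
{
    glm::vec3 rayDir = glm::normalize(camera - P);   // same ray for both lookups

    float fromP      = lookupOpticalDepth(heightAboveSurface(P),
                                          angleFromVertical(P, rayDir));
    float fromCamera = lookupOpticalDepth(heightAboveSurface(camera),
                                          angleFromVertical(camera, rayDir));

    return fromP - fromCamera;   // remove the part of the ray beyond the camera
}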

First question: isn't optical depth dependent on how you look at it, that is, on the viewing angle? If yes, the table just gives me the optical depth of rays going from the ground to the top of the atmosphere in a straight line.

Second question: what is the vertical angle it is talking about? Is it the same as the angle from the z-axis that we use in polar coordinates?

Third question: the article talks about scattering of the rays going to the sun. Shouldn't it be the other way around, i.e. rays coming from the sun to a point?

Any explanation of the article or of my questions will help a lot.

Thanks in advance!