opengl – Problems with porting LWJGL code to C++ (glDrawElements call returns error 1281/GL_INVALID_VALUE)

So, as the title might suggest, I’m currently trying to port the GUI stack of my game engine from LWJGL (Java) to C++.

I originally had the following code in Java:

        if(guiFramebuffer != 0 && colorTexture != 0)
        {           
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            
            GL30.glDisable(GL_CULL_FACE);
            GL30.glDisable(GL_DEPTH_TEST);
            
            GL30.glBlendFunc(GL30.GL_SRC_ALPHA, GL30.GL_ONE_MINUS_SRC_ALPHA);
            GL30.glEnable( GL30.GL_BLEND );
            
            imageShaderProgram.start();                     
            GL30.glBindVertexArray(screenFillingQuad.getVAOId());
            GL30.glEnableVertexAttribArray(0);
            GL30.glEnableVertexAttribArray(1);
            
            GL30.glActiveTexture(GL30.GL_TEXTURE0);
            GL30.glBindTexture(GL_TEXTURE_2D, colorTexture);
            
            // Returns 0
            System.out.println("Error before: " + GL30.glGetError());
            GL11.glDrawElements(GL11.GL_TRIANGLES, screenFillingQuad.getVertexCount(), GL11.GL_UNSIGNED_INT, 0);
            // Returns 0
            System.out.println("Error after: " + GL30.glGetError());
            
            GL30.glDisableVertexAttribArray(1);
            GL30.glDisableVertexAttribArray(0);
            GL30.glBindVertexArray(0);
            
            imageShaderProgram.stop();
            GL30.glDisable(GL30.GL_BLEND);
        }
        else
            throw new NullPointerException("The framebuffer doesn't exist");        

Now, let’s get to the method that ports this whole thing to C++. Note that we pass nullptr as the last argument to glDrawElements, the equivalent of the 0 in the Java version; this will be important later.

C++ code:

    basicImageShader.bindShader();

    glDisable(GL_CULL_FACE);
    glDisable(GL_DEPTH_TEST);

    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_BLEND);

    glBindVertexArray(planeVAO->getVAO());
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, colorTexture);

    std::cout << "BEFORE: ";
    GLenum beforeError = glGetError();
    OUTPUT_OPENGLERR(beforeError); // Returns 0 (no error)
    
    // My suspicion is that the index buffer isn't being recognized here.
    // The nullptr passed as the last argument is meant as a zero offset into
    // the element array buffer bound to the VAO. If I pass a client-side
    // index array instead, the call works, but only as a workaround, since
    // the indices then have to be transferred from the CPU to the graphics
    // card on every draw call.
    glDrawElements(GL_TRIANGLES, planeVAO->getVerticesAmount(), GL_UNSIGNED_INT, nullptr);

    GLenum error = glGetError();
    OUTPUT_OPENGLERR(error); // Returns 1281 (GL_INVALID_VALUE)

    glDisableVertexAttribArray(1);
    glDisableVertexAttribArray(0);

    basicImageShader.unbindShader();

As you can see, the problematic GL call is equivalent to the Java/LWJGL code, yet it returns an error value of 1281 (GL_INVALID_VALUE).
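
A check along these lines might help narrow it down. This is only a sketch I haven't wired in yet, reusing the same planeVAO as above; according to the reference pages, glDrawElements raises GL_INVALID_VALUE when the count argument is negative, and GL_ELEMENT_ARRAY_BUFFER_BINDING is what the nullptr offset would be relative to:

    // Log the index count passed to glDrawElements; a negative count is the
    // documented cause of GL_INVALID_VALUE for this call.
    GLint indexCount = planeVAO->getVerticesAmount();
    std::cout << "index count: " << indexCount << std::endl;

    // Log which buffer (if any) the currently bound VAO has attached as its
    // element array buffer; the nullptr argument is only meaningful as an
    // offset into this buffer.
    GLint elementArrayBuffer = 0;
    glGetIntegerv(GL_ELEMENT_ARRAY_BUFFER_BINDING, &elementArrayBuffer);
    std::cout << "GL_ELEMENT_ARRAY_BUFFER_BINDING: " << elementArrayBuffer << std::endl;
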
Now for an important observation:

The error does not occur if we replace the nullptr in the draw call with a pointer to an actual index array in client memory. I’m not sure why that is. I’d be happy about any information pointing out what I did wrong.
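
For reference, my understanding of the last parameter is this: if an element array buffer is attached to the VAO, the parameter is a byte offset into that buffer (so 0 / nullptr is valid), and only without such a buffer can it point at index data in client memory, which is what the "actual array" variant effectively does. Below is a minimal sketch of a VAO setup under which the nullptr offset should work; createQuadVAO and the vertex/index data are purely illustrative and not my actual loader code:

    // Illustrative only: build a two-attribute quad VAO with an element array
    // buffer attached, so that glDrawElements(..., GL_UNSIGNED_INT, nullptr)
    // reads its indices from GPU memory instead of a client-side array.
    GLuint createQuadVAO()
    {
        const GLfloat positions[] = { -1.f, -1.f,  1.f, -1.f,  1.f, 1.f,  -1.f, 1.f };
        const GLfloat texCoords[] = {  0.f,  0.f,  1.f,  0.f,  1.f, 1.f,   0.f, 1.f };
        const GLuint  indices[]   = { 0, 1, 2,  2, 3, 0 };

        GLuint vao = 0;
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);

        GLuint positionVBO = 0;
        glGenBuffers(1, &positionVBO);
        glBindBuffer(GL_ARRAY_BUFFER, positionVBO);
        glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr); // attribute 0: position

        GLuint texCoordVBO = 0;
        glGenBuffers(1, &texCoordVBO);
        glBindBuffer(GL_ARRAY_BUFFER, texCoordVBO);
        glBufferData(GL_ARRAY_BUFFER, sizeof(texCoords), texCoords, GL_STATIC_DRAW);
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, nullptr); // attribute 1: texture coordinate

        // The element array buffer binding is recorded in the VAO state, so a
        // later glDrawElements with a nullptr offset sources indices from it.
        GLuint ebo = 0;
        glGenBuffers(1, &ebo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

        glBindVertexArray(0);
        return vao;
    }

If my actual setup never binds (or unbinds) the element array buffer while the VAO is bound, only the client-array variant would have valid indices to read, which would at least match the behaviour I’m seeing.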