intel graphics – How do I know whether the shaders and OpenGL version required are on my PC?

The following question and its answers do not fully cover my question:

How do I determine if my integrated Intel graphics support “Shader Model 3”?

I have a similar PC that I use for browsing the internet, but I would like to try some low-spec Steam gaming too. Here is the result of
glxinfo | grep OpenGL:

OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) HD Graphics 400 (BSW)
OpenGL core profile version string: 4.6 (Core Profile) Mesa 20.0.8
OpenGL core profile shading language version string: 4.60
OpenGL version string: 3.0 Mesa 20.0.8
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL ES profile version string: OpenGL ES 3.1 Mesa 20.0.8
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.10

I added some extra specs from glxinfo:

Vendor: Intel Open Source Technology Center (0x8086)
Device: Mesa DRI Intel(R) HD Graphics 400 (BSW) (0x22b1)
Version: 20.0.8
Accelerated: yes
Video memory: 1536MB
Unified memory: yes
Preferred profile: core (0x1)
Max core profile version: 4.6
Max compat profile version: 3.0
Max GLES1 profile version: 1.1
Max GLES[23] profile version: 3.1

In the question someone asked earlier, there was no answer that settled this, and the two answers (one of them in a comment) contradict each other. One says that

OpenGL core profile shading language version string: 4.60

determines the shaders your PC has, and the other answer says to look at

OpenGL shading language version string: 1.30

My PC has similar specs, so I would like to know which shaders my PC supports. Also, another question: does my PC have OpenGL 4.6 or 3.1? I see two different values in the glxinfo results.
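To make this concrete, here is a small shell sketch that pulls the two confusing version numbers apart (the sample text is just the relevant lines from my glxinfo output pasted into a variable, so it runs even without glxinfo installed):

```shell
# The two "version string" lines copied from my glxinfo output above
glx_output='OpenGL core profile version string: 4.6 (Core Profile) Mesa 20.0.8
OpenGL version string: 3.0 Mesa 20.0.8'

# Core-profile version: what the driver offers to a core-profile context
core=$(printf '%s\n' "$glx_output" \
  | sed -n 's/^OpenGL core profile version string: \([0-9.]*\).*/\1/p')

# Compatibility-profile version: the plain "OpenGL version string" line
compat=$(printf '%s\n' "$glx_output" \
  | sed -n 's/^OpenGL version string: \([0-9.]*\).*/\1/p')

echo "core=$core compat=$compat"   # prints: core=4.6 compat=3.0
```

The two numbers come from two different lines, i.e. two different context profiles reported by the same driver.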

shaders – BIM Model clipping issues in Unity 2020

I have a BIM model that I am using in the 2018 version of the Unity engine. When I import the model into my project in the 2019 or 2020 version of Unity, clipping occurs on certain surfaces of the model.

Has anyone encountered this issue with Unity 2020? Are there any solutions I could try?

Examples attached.

Appreciate any help in advance…

Unity 2018 Version No Clipping
Unity 2020 Version With Clipping

shaders – glDrawArrays draws nothing

I am trying to draw a triangle using shaders in LWJGL, but nothing is drawn on the screen and no error is produced. I can't figure out what I'm doing wrong.

To create a VAO, I use:

int buffer = glGenBuffers();
int vertexArray = glGenVertexArrays();

ByteBuffer data = ByteBuffer.allocateDirect(6 * 8).order(ByteOrder.nativeOrder());

glBindBuffer(GL_ARRAY_BUFFER, buffer);

int positionAttributeLocation = glGetAttribLocation(program, "position");
glVertexAttribPointer(positionAttributeLocation, 2, GL_FLOAT, false, 8, 0);

and then I draw using:

glDrawArrays(GL_TRIANGLES, 0, 3);

Here’s my vertex shader:

#version 110

in vec2 position;

void main(void) {
  gl_Position = vec4(position.xy, 1, 1.0);
}

and fragment shader:

#version 110

void main(void) {
  gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
}
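For comparison, here is a pseudocode sketch of the call sequence a working setup usually contains (names mirror the snippet above; this is an outline, not drop-in code). Note that the snippet never fills or uploads the ByteBuffer, never binds the VAO, and never enables the attribute array:

```
// pseudocode: typical order of calls for drawing one VAO + one VBO
bindVertexArray(vertexArray)                   // glBindVertexArray
bindBuffer(ARRAY_BUFFER, buffer)               // glBindBuffer
fillBuffer(data)                               // put three vec2 vertices into data, then flip()
uploadBuffer(ARRAY_BUFFER, data, STATIC_DRAW)  // glBufferData (missing in the snippet)
setAttribPointer(positionAttributeLocation, 2, FLOAT, false, 8, 0)
enableAttribArray(positionAttributeLocation)   // glEnableVertexAttribArray (missing in the snippet)
useProgram(program)                            // glUseProgram before the draw call
drawArrays(TRIANGLES, 0, 3)
```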

unity – How to share constant variables between Compute Shaders?

So, I have two compute shaders, A and B (using Unity and HLSL). Right now I send my mouse coordinates to both of them before every dispatch, every update.

So, from my understanding, you can actually determine which register the mouse coordinates go to like so:

 float2 mxy : register(c1); // at the top of the shader

And by declaring this in both shaders, you can actually avoid sending the mouse coordinates twice.

The problem is that this does not work! I've tried making cbuffers as well, but to no avail. (That said, I don't actually know how to work with cbuffers on the CPU side.)

TLDR: why can't I share variables between two compute shaders using float2 mxy : register(c1);?
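For reference, a minimal sketch of what a constant buffer declaration looks like in HLSL; the buffer name SharedInput and the b0 register are my own choices, not anything Unity requires:

```hlsl
// Hypothetical constant buffer: declare an identical block in both compute shaders A and B
cbuffer SharedInput : register(b0)
{
    float2 mxy;   // mouse coordinates
};
```

On the CPU side, Unity sets constants per ComputeShader object (e.g. via ComputeShader.SetVector), so even with identical declarations each shader keeps its own copy of the data; as far as I know, that is why the register(c1) trick appears to do nothing.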

shaders – Unity: Efficient way to get the average pixel color for a portion of the camera viewport

At it’s core, the problem I’m trying to solve is this:
I have a camera (not main) that I would like to project to a 2 pixel RenderTexture. I would like the first pixel to be the average color of all pixels in the left portion of the camera viewport (or a close approximation). And the second pixel to be the average color of all pixels in the right portion of the camera viewport.

So far I've looked into doing this via OnRenderImage(src, dest), but I'm open to any suggestions. The same goes for camera resolutions: 256×256 seems like a good starting point, but I'm open to suggestions here too. It's worth mentioning that I threw together a brute-force, non-threaded approach that works but totally tanks FPS (nothing surprising).

Any help would be greatly appreciated. This doesn't seem like an incredibly weird downscaling approach, but I can't seem to find anything about it. There might be a clever way of dealing with this via a shader, but I have yet to find it. Bilinear downscaling (which I think is what Unity uses by default) sadly does not give the appropriate result.

System compatibility is not an issue: Win 10, DX 12.

Why don’t ShaderToy shaders work with LibGDX?

LibGDX supports GLSL shaders. GLSL is a standard language… ShaderToy scripts, while based on GLSL, are not standard.

It looks like you can use ShaderToy scripts directly with other frameworks so am wondering what the difference is.

What do you mean by directly? I have seen plugins for a couple of engines that will let you paste ShaderToy scripts and use them as materials. That is not what I would call "directly", but it works most of the time. People also convert ShaderToy scripts to GLSL by hand.

It is worth noting that ShaderToy scripts closely resemble fragment shaders, and, unless the ShaderToy script uses some other features, such as input or sound, it takes little work to adapt them by hand to GLSL fragment shaders. If the script uses such features… well, tough luck, because shaders are programs uploaded to the GPU, and thus are unable to take user input or play sound. This means that a complete implementation of a ShaderToy script runtime in another engine is not a trivial task.

What work does it take to adapt one? As for the fragment shader, first, ShaderToy has some built-in variables that you would have to pass as uniforms. Second, you need to write a main function that calls the mainImage function from ShaderToy (or just paste the code from mainImage inside main). If you want output equal to what you get in ShaderToy, you will also have to set up a scene with a single quad that covers the view and has the appropriate UV coordinates.
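As a sketch of that second step, the hand-written wrapper usually looks something like this (the uniform names follow the ShaderToy convention; the body of mainImage is a placeholder for the pasted script):

```glsl
// Desktop GLSL fragment shader wrapping a ShaderToy-style mainImage
uniform vec3 iResolution; // viewport resolution, set by the host application
uniform float iTime;      // playback time in seconds, set by the host application

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    // placeholder body: the actual ShaderToy script goes here
    fragColor = vec4(fragCoord / iResolution.xy, 0.0, 1.0);
}

void main(void)
{
    mainImage(gl_FragColor, gl_FragCoord.xy);
}
```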

I want to emphasize that ShaderToy scripts are not standard. The function mainImage is not standard. On the other hand, main is part of the GLSL standard.

These are the input uniforms according to the ShaderToy documentation:

uniform vec3 iResolution;
uniform float iTime;
uniform float iTimeDelta;
uniform int iFrame;
uniform float iChannelTime[4];
uniform vec4 iMouse;
uniform vec4 iDate;
uniform float iSampleRate;
uniform vec3 iChannelResolution[4];
uniform samplerXX iChannel0..3;

You can see that you will have to pass things like the current time and the position of the mouse. That means you also need a script that updates these uniforms before doing the render call. For LibGDX in particular, you would do that inside render. You can imagine, from there, how to go about making a ShaderToy clone in LibGDX. If you are trying to do this as a shorthand for implementing your game, you should probably simply implement your game instead.

However, chances are you want to use the ShaderToy script as a material. That is, you would be rendering a scene as you normally would, and the ShaderToy script defines how to shade a surface. That means you would create your GLSL fragment shader based on your ShaderToy script, and then, when rendering the mesh that uses it, you would set the uniforms and call render on the mesh. At that point, it makes much more sense to implement only what you need instead of cloning all of ShaderToy's functionality, and it will be much more efficient.

To reiterate: ShaderToy is based on GLSL. Thus, if you can use ShaderToy, you have a head start on GLSL. However, do not let ShaderToy stop you from learning GLSL itself.

shaders – How to convert from frag position to UV coordinates when my viewport doesn’t cover the screen?

So, I’m implementing SSAO as part of my rendering pipeline using OpenGL/GLSL. It works pretty well when I have a camera that takes up the entire screen. However, when my camera is smaller than the full screen size, the SSAO texture doesn’t get sampled correctly. Here is the relevant GLSL shader code:

// Convert from clip-space
vec2 fragCoords = fragPos.xy / fragPos.w; // fragPos is MVP * worldPosition
vec2 screenCoords = fragCoords * 0.5 + 0.5; // Convert from (-1, 1) to (0, 1) to sample UV coordinates

// Sample texture
float ssaoFactor = texture2D(ssaoTexture, screenCoords).r;

I know that there is some funkiness going on with the viewport, but the fixes I've tried haven't worked. My first thought was to scale fragCoords by the normalized size of my viewport (e.g. vec2(0.5, 0.5) for a viewport with half the width and height of the screen), but that just produced a very strange result. Any thoughts?
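Concretely, the scaling attempt looked roughly like this (viewportScale is a uniform I set from the CPU side; the name is mine):

```glsl
// My attempted fix (did not work): scale by the normalized viewport size
uniform vec2 viewportScale; // e.g. vec2(0.5, 0.5) for a half-width, half-height viewport

vec2 fragCoords = fragPos.xy / fragPos.w;
vec2 screenCoords = (fragCoords * 0.5 + 0.5) * viewportScale;
float ssaoFactor = texture2D(ssaoTexture, screenCoords).r;
```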

shaders – Quixel asset to Unity

I downloaded an asset from Quixel, and I'm using Unity HDRP. The problem I'm facing is how to properly create a mask map, because the asset from Quixel doesn't have a metalness map, while the HDRP mask map requires channel R to be the metalness map. Also, the base map requires an RGB color, and what I've got from the asset is a diffuse map; is putting the diffuse map into the base map slot the right way to do it? I'm fairly new to channel packing, so please keep it simple:

1. How do I pack channels for the mask map if the asset has no metalness map? What should I do with the empty channel?

2. Is it right to put the diffuse map into the base map slot?

3. What should I do with the detail map in the mask map? The same question as for the missing metalness map.

4. Another question is about the height map. Unity provides pixel mode and vertex mode; what should I consider before choosing the displacement mode? When I choose pixel mode, the whole texture seems to be lowered down, rather than the pixels on the texture surface being bumped or lowered; the whole texture is moved.

Thanks for answering these!

opengl – Send Geometry Data to Multiple Shaders

So I am implementing a deferred rendering model in my engine, and I want to be able to send all scene geometry into a single shader to calculate ambient, diffuse, normals, etc., but that's not the question.

Once I have all of this data buffered on the GPU, I can render all of that geometry from a camera perspective defined by a uniform for the camera's position in the scene. I am wondering if I can reuse this model data, already in VRAM, in another shader, transformed by a light source's projection matrix, to calculate the shadows on this scene geometry without needing to send the same data to the GPU again.

Is there a way to tell my light shader: all that juicy geometry/scene data you want is already in VRAM, just use this matrix transformation instead?

Alternatively, when I am sending the VAO to the GPU, is there a way to send it to two shaders in parallel, one for the deferred rendering model and one for shadow-casting light sources?

Thanks so much!
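To illustrate what I am hoping for, here is a pseudocode sketch of the flow (lightSpaceMatrix would be the light's view-projection matrix; all names are made up):

```
// pass 1: deferred geometry pass from the camera's point of view
useProgram(gbufferProgram)
setUniformMatrix(gbufferProgram, "viewProj", cameraViewProj)
bindVertexArray(sceneVAO)        // geometry already resident in VRAM
drawScene()

// pass 2: shadow pass reusing the SAME VAO/VBOs, nothing re-uploaded
useProgram(shadowProgram)
setUniformMatrix(shadowProgram, "viewProj", lightSpaceMatrix)
bindVertexArray(sceneVAO)        // same handle, no new vertex data sent
drawScene()
```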