shaders – Tree batching problem in Unity

I’m having some problems with batching trees.
I’m using the default Unity terrain system and trees.
The trees won’t get batched together (I’ve tried static batching, dynamic batching, and GPU instancing), and after inspecting the Frame Debugger I found the following:

What causes a distinct draw call:

1. Wind

2. Color and size variations for the trees

The draw call reason reported by the Frame Debugger:

Non-instanced properties set for instanced shader

If I remove the wind or the variations, GPU instancing works, but I don’t want to remove either. Is there any way to batch the trees in this case?
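
For context on that Frame Debugger message: GPU instancing only survives when per-instance data (such as the color/size variation) is declared as instanced properties rather than set as ordinary material properties. In a custom shader that would look roughly like the sketch below, using Unity’s documented instancing macros; the property name _ColorVariation is illustrative, not what the built-in tree shaders actually use.

#pragma multi_compile_instancing

UNITY_INSTANCING_BUFFER_START(Props)
    UNITY_DEFINE_INSTANCED_PROP(float4, _ColorVariation)
UNITY_INSTANCING_BUFFER_END(Props)

// Inside the fragment/surface function:
// float4 variation = UNITY_ACCESS_INSTANCED_PROP(Props, _ColorVariation);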

java – How to add vertex and fragment shaders using the old deprecated methods in LWJGL 2?

Recently, I’ve been trying to make a 3D game in LWJGL 2 (not LWJGL 3), simply because I am more familiar with LWJGL 2. Since LWJGL decided to shut down their legacy wiki website, I’ve been researching a lot lately on adding shaders. The thing is, most of these tutorials use VBOs and VAOs to render objects.

I am trying to use vertex and fragment shaders to edit the objects. Yes, I am using the deprecated immediate-mode methods, glBegin() and the rest of those rendering calls, to draw shapes. I am not sure whether this affects the use of shaders, which is why I am asking this question.

If anyone wants to know, I am using the deprecated methods because I don’t really want to use VAOs and VBOs. I know the deprecated methods reduce performance, but I am fine with that for now.
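
For reference, shaders do work together with immediate mode as long as the context has a compatibility profile: compile and link the program once at startup, then bind it with glUseProgram before the glBegin()/glEnd() calls. Below is a minimal sketch using LWJGL 2’s GL20 bindings; the shader sources are illustrative compatibility-profile (#version 120) shaders that read the fixed-function inputs glVertex/glColor provide.

import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL20;

public class ShaderUtil {

    private static final String VERT =
            "#version 120\n" +
            "void main() {\n" +
            "    gl_Position = ftransform();   // same transform as the fixed pipeline\n" +
            "    gl_FrontColor = gl_Color;     // pass the glColor value through\n" +
            "}";

    private static final String FRAG =
            "#version 120\n" +
            "void main() { gl_FragColor = gl_Color; }";

    public static int createProgram() {
        int vs = compile(GL20.GL_VERTEX_SHADER, VERT);
        int fs = compile(GL20.GL_FRAGMENT_SHADER, FRAG);
        int program = GL20.glCreateProgram();
        GL20.glAttachShader(program, vs);
        GL20.glAttachShader(program, fs);
        GL20.glLinkProgram(program);
        if (GL20.glGetProgrami(program, GL20.GL_LINK_STATUS) == GL11.GL_FALSE)
            throw new RuntimeException(GL20.glGetProgramInfoLog(program, 1024));
        return program;
    }

    private static int compile(int type, String source) {
        int shader = GL20.glCreateShader(type);
        GL20.glShaderSource(shader, source);
        GL20.glCompileShader(shader);
        if (GL20.glGetShaderi(shader, GL20.GL_COMPILE_STATUS) == GL11.GL_FALSE)
            throw new RuntimeException(GL20.glGetShaderInfoLog(shader, 1024));
        return shader;
    }
}

Then, in the render loop, call GL20.glUseProgram(program); before drawing with glBegin(), and GL20.glUseProgram(0); to return to the fixed-function pipeline.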

If anyone can help, much appreciated!

shaders – Need help getting an object’s orientation / all my OpenGL rotations are reversed

I’m using OpenGL on Ubuntu Linux with the Eclipse IDE and the GCC compiler. I am bringing Blender models into a homemade renderer/game engine.
I parse a text file containing object descriptions to load models.

Example object:

begin_object generic_object
generic_object_name lampost
generic_object_parent_name world
generic_object_position -10.0000 -10.0000 2.000000
generic_object_rotation 90.000000 0.000000 0.000000
generic_object_scale 1.000000 1.000000 1.000000
end_object

begin_object ...

The generic_object_rotation 90.000000 0.000000 0.000000 line describes 3 values:

  • Rotation around Z (XY).
  • Rotation around X (YZ).
  • Rotation around Y (XZ).

After going through all the headaches of Euler angles and their gimbal lock and singularities, I switched all my code to quaternions (highly recommended).

I am told that a counter-clockwise rotation around the Z-axis, looking down the Z-axis toward the origin, uses a rotation matrix like:

cos(theta)  -sin(theta)  0  0
sin(theta)  cos(theta)   0  0
0           0            1  0
0           0            0  1

I got this from a document (rotgen.pdf) on Song Ho Ahn’s website.

If I replace theta with +90.0 (just like in my input file above), the result is:

  0.0 ,  -1.0 ,  0.0 ,  0.0  
  1.0 ,   0.0 ,  0.0 ,  0.0 
  0.0 ,   0.0 ,  1.0 ,  0.0 
-10.0 , -10.0 ,  2.0 ,  1.0

So, I build a quaternion for +90.0 degrees, convert it into a matrix, and then print out the matrix to check; I do in fact get the same matrix:

  0.0 ,  -1.0 ,  0.0 ,  0.0  
  1.0 ,   0.0 ,  0.0 ,  0.0 
  0.0 ,   0.0 ,  1.0 ,  0.0 
-10.0 , -10.0 ,  2.0 ,  1.0

All is well… Then I send this matrix to the shader to draw this object, and it rotates my object CW instead.

In the shader there’s:

gl_Position = projection_matrix * view_matrix * model_matrix * vec4( aPos , 1.0 );

which seems correct.

So, I made a cube in Blender and attached a different colour texture to each side so I could verify that my input data was good. As long as the model_matrix is the identity matrix, the object is oriented correctly in space. Any time I add rotations, the models rotate in the opposite direction. This happens on all 3 axes.

My current goal/project is the parenting system. I want to be able to extract orientation and position from the model matrix of any object (that data is stored with the object).

Specifically, right now, I want to extract the forward vector from the model_matrix so I can attach a light source to this rotating object and set its light direction for the fragment shader. That is when I found this error.

What I am seeing:
The rotation of the object is opposite to what I command. When I rotate 0-360 over and over, the forward vector I read from the object’s model_matrix diverges from the direction the object faces until 180 degrees, where the face of the object and the forward vector coincide again; then they diverge again until we reach 360 degrees, where they are once more coincident.

What I expect (and this may be part of my issue):
I want the rotation part of the model_matrix to be the current orientation of the object. It looks like it is, but the object does not render that way: it rotates in the opposite direction, which is preventing me from getting the correct light-direction vector (i.e. the forward vector).

Is this an OpenGL thing? Is the orientation of an object the transpose of the 3×3 rotation section of the model_matrix?
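
For reference, with GLSL’s column-major convention the transpose question can be checked directly in the shader: the columns of the upper-left 3×3 of model_matrix are the object’s world-space axes. A sketch (which local axis counts as “forward” is an engine convention, assumed +Z here):

// In GLSL, mat4 is column-major: model_matrix[i] is the i-th column.
vec3 objectRight(mat4 model_matrix)   { return normalize(model_matrix[0].xyz); } // local +X
vec3 objectUp(mat4 model_matrix)      { return normalize(model_matrix[1].xyz); } // local +Y
vec3 objectForward(mat4 model_matrix) { return normalize(model_matrix[2].xyz); } // local +Z

One classic cause of exactly this reversed-rotation symptom is uploading a row-major matrix with the transpose argument of glUniformMatrix4fv set to GL_FALSE: for a pure rotation the transpose is the inverse, so the object turns the opposite way.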

Unity: Opacity over Distance with Surface Shaders

I want to tweak the alpha value based on the distance to the camera, but I see no way of passing the vertex position on to the surface function when using surface shaders. This is not a problem with a vert/frag shader.

It’s for fading out vegetation and creating LOD systems.

Shader "asdf" {
SubShader {
    CGPROGRAM
    #pragma surface surf StandardSpecular alphatest:_Cutoff addshadow vertex:vert
     
    struct v2f {
        float4 pos : TEXCOORD1;
    };

    struct Input {
        float2 uv_MainTex;
    };
        
    v2f vert (inout appdata_base v) {
        v2f o;
        o.pos = mul(unity_ObjectToWorld, v.vertex);
        return o;
    }

    void surf (Input IN, inout SurfaceOutputStandardSpecular o) {
        //Need vertex or pixel distance here:
        //float dist = length(v2f.pos.xyz - _WorldSpaceCameraPos.xyz);
        //o.Alpha = saturate(50/dist);
    }
    ENDCG
}
}
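
For reference, surface shaders can receive the interpolated world position without a custom v2f struct: declaring float3 worldPos; in the Input struct is documented to make Unity fill it in automatically. A minimal sketch of the distance fade, reusing the 50/dist falloff from the commented-out lines above (the _MainTex property is an assumption, added to make the example self-contained):

Shader "Custom/DistanceFade" {
    Properties {
        _MainTex ("Albedo", 2D) = "white" {}
        _Cutoff ("Alpha Cutoff", Range(0,1)) = 0.5
    }
    SubShader {
        Tags { "RenderType" = "TransparentCutout" }
        CGPROGRAM
        #pragma surface surf StandardSpecular alphatest:_Cutoff addshadow
        #pragma target 3.0

        sampler2D _MainTex;

        struct Input {
            float2 uv_MainTex;
            float3 worldPos;   // filled in by Unity automatically
        };

        void surf (Input IN, inout SurfaceOutputStandardSpecular o) {
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
            float dist = length(IN.worldPos - _WorldSpaceCameraPos.xyz);
            o.Alpha = saturate(50.0 / dist);   // opaque up close, fades with distance
        }
        ENDCG
    }
}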

rendering – When a Render Pass decides what textures it needs, how are shaders written?

I am studying render graph architectures (I’ve seen the Frostbite presentation).
A RenderPass has outputs (i.e. textures you draw to) and inputs.

How are these inputs bound to the internal pipeline? Let’s say I have an AO pass that takes normals and depth as its inputs. Do I just bind each texture to a register and sample it? And what about the actual shaders (for drawing geometry) that also use textures of their own?
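
For a concrete picture of what that binding usually amounts to: on the shader side the pass inputs are simply textures in registers, and the render graph binds its transient resources to those registers when the pass executes. A hedged HLSL sketch of the AO-pass inputs (names and register slots are illustrative, not from the Frostbite talk):

// Illustrative AO-pass inputs; the render graph binds its transient
// normal/depth textures to these registers before the pass runs.
Texture2D<float4> gNormals    : register(t0);
Texture2D<float>  gDepth      : register(t1);
SamplerState      gPointClamp : register(s0);

float4 main(float4 svPos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float3 n = normalize(gNormals.Sample(gPointClamp, uv).xyz * 2.0 - 1.0);
    float  d = gDepth.Sample(gPointClamp, uv);
    float occlusion = 1.0; // placeholder: real AO samples neighbours using n and d
    return float4(occlusion.xxx, 1.0);
}

Geometry shaders that draw meshes use their textures the same way; the difference is only who binds them: the material system rather than the render graph’s transient-resource allocator.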

shaders – Any fast alternative to sine in GLSL?

Should I just use the built-in sin() function or my custom sine function? I’m concerned about performance here; accuracy doesn’t matter much, because I only use it for wave effects in my shader.

The shader is for mobile platforms (OpenGL ES)

Here’s my code:

float customSin(float x){
   // 0.159155 ≈ 1/(2π): wrap the input to one period, then fold it into
   // a triangle wave in [0, 1] (the abs matches the |...| in the algorithm below)
   x = abs( fract( 0.75 + x*0.159155 )*2.0 - 1.0 );
   // smoothstep-style curve 3a^2 - 2a^3, remapped from [0, 1] to [-1, 1]
   return x*x * (6.0 - 4.0*x) - 1.0;
}

Here’s my algorithm (x is the input, y is the output):

r = 0.75 + x/(2π)
a = |r - floor(r) - 0.5| * 2
y = (3a^2 - 2a^3) * 2 - 1

I checked the graph, and it’s pretty close to the actual sine.

Which one should I go with?

shaders – Unity custom deferred shading

I was trying to understand Unity’s deferred shading from the documentation, and now I have many questions.

So far the idea is to have one or more shaders that all draw into the G-buffers, which are SV_Target0 to SV_Target3, so four in total. (Are there more depending on the GPU?)
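
As a concrete illustration, a fragment shader writing to multiple render targets just declares one output per G-buffer. The sketch below mirrors the layout Unity’s docs describe for the built-in deferred path (RT0 albedo/occlusion, RT1 specular/smoothness, RT2 normal, RT3 emission); treat the exact channel assignments as approximate:

// Sketch: MRT output struct for a G-buffer pass.
struct GBufferOutput
{
    half4 albedoOcclusion : SV_Target0; // RGB: albedo,   A: occlusion
    half4 specSmoothness  : SV_Target1; // RGB: specular, A: smoothness
    half4 normalWS        : SV_Target2; // encoded world-space normal
    half4 emission        : SV_Target3; // emission / light accumulation
};

GBufferOutput frag ()
{
    GBufferOutput o;
    o.albedoOcclusion = half4(1.0, 1.0, 1.0, 1.0);   // placeholder material values
    o.specSmoothness  = half4(0.0, 0.0, 0.0, 0.5);
    o.normalWS        = half4(0.5, 0.5, 1.0, 0.0);   // "up" normal, encoded to 0..1
    o.emission        = half4(0.0, 0.0, 0.0, 0.0);
    return o;
}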

What happens after this stage? Once everything is drawn, something has to combine those layers to get the final shaded picture.

Is it required to create a custom post-processing shader and force the camera(s) to use that specific shader to combine the textures?

It really is confusing at the beginning, to be honest; the same goes for URP, which is all over the place.

shaders – Make part of albedo transparent

I have a shader which creates a circle inside of a plane mesh. I would like to get rid of the parts around the circle, which are the r and b parts of the ALBEDO, but I can’t seem to figure out how to do it.

The only thing I’ve managed to find is ALPHA, but that changes the transparency of the entire shader and not just parts of it.

shader_type spatial;

// Returns 0 inside the circle and 1 outside, with a soft edge `feather` wide.
float circle(vec2 position, float radius, float feather)
{
    return smoothstep(radius, radius + feather, length(position - vec2(0.5)));
}

void fragment() {
    ALBEDO = vec3(0, circle(UV - vec2(0), 0.5, 0.005), 0);
}

Which currently looks like:

[screenshot of the current result]
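
For what it’s worth, in Godot per-pixel transparency is done by writing ALPHA from the same mask, which switches the material to the transparent pipeline. A sketch, assuming the goal is to keep the disc opaque and hide everything outside it:

void fragment() {
    float outside = circle(UV, 0.5, 0.005); // 0 inside the circle, 1 outside
    ALBEDO = vec3(0.0, outside, 0.0);
    ALPHA = 1.0 - outside;                  // pixels outside the circle become transparent
}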

shaders – Most performant way to mask a camera in Unity?

I would like to mask a camera so it only renders to a certain (non-square) region of the screen, to be used as an overlay, for splitscreen, or for any other application.

The goal is obviously to prevent any unnecessary rendering from that camera, which is why I specified performance, but I’m also happy to hear about the easiest option to implement.

What I’m considering so far:

  1. Stencil shader on a mesh in front of the camera – but my understanding is that this would require modifying all shaders on world objects, which isn’t ideal. Is there an easier way I’m missing?
  2. Transparent mesh with a cutout – I could cut a hole in a mesh and the solid part would block rendering of objects behind it, but I’m not sure how to make that solid part invisible so other things can render there (see the sketch after this list).
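
Regarding option 2: the usual trick is a “depth mask” material that writes depth but no color, so it occludes everything the camera would otherwise draw behind it while leaving the already-rendered pixels untouched. A minimal built-in-pipeline sketch (the queue value is illustrative; it just has to render before the objects being masked):

Shader "Hidden/DepthMask" {
    SubShader {
        Tags { "Queue" = "Geometry-10" }
        Pass {
            ZWrite On     // writes depth, so geometry behind it fails the depth test
            ColorMask 0   // writes no color, so the pixels underneath stay visible
        }
    }
}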

I feel like there must be a simple answer I’m missing, but maybe I’m wrong?

shaders – Unity HDRP transparent material back-face rendering/front face culling

I’m trying to render an object with two different transparent materials: one should render only the back faces (higher opacity/more color) and one only the front faces (lower opacity/less color). The purpose is to show another object inside the mesh more clearly, while keeping the more saturated color on the back faces.

I tried to render only the back faces using a Shader Graph, but it seems to be impossible to render only the back faces of a TRANSPARENT material. With opaque materials, isolating the back faces was no problem.

Is there a way to render only the back faces of a transparent material and overlap them with another material that renders only the front faces?
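
For what it’s worth, outside Shader Graph the classic approach is a single shader with two passes and opposite culling, drawing the back faces first. The sketch below targets the built-in render pipeline, not HDRP (hand-written HDRP shaders are considerably more involved), so treat it as an illustration of the culling idea only:

Shader "Sketch/BackThenFront" {
    Properties {
        _BackColor  ("Back Color",  Color) = (0, 0.6, 0, 0.8)
        _FrontColor ("Front Color", Color) = (0, 0.6, 0, 0.25)
    }
    SubShader {
        Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha
        ZWrite Off

        Pass {          // back faces first: stronger color
            Cull Front
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"
            fixed4 _BackColor;
            float4 vert (float4 v : POSITION) : SV_POSITION { return UnityObjectToClipPos(v); }
            fixed4 frag () : SV_Target { return _BackColor; }
            ENDCG
        }
        Pass {          // then front faces: weaker color
            Cull Back
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"
            fixed4 _FrontColor;
            float4 vert (float4 v : POSITION) : SV_POSITION { return UnityObjectToClipPos(v); }
            fixed4 frag () : SV_Target { return _FrontColor; }
            ENDCG
        }
    }
}

In HDRP itself, the transparent Lit material exposes cull-mode and “Back Then Front Rendering” options that achieve the same ordering without custom code.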

(BTW, this is the model I need it for: the bones should pretty much preserve their details/color, while the jello is a rich green. Screenshot from Blender.)

[screenshot: the problem child]

Any help is much appreciated!
Thank you!