How can I properly display a 2D pixel sprite's pixels without distortion while it's moving? (in Unity)

Introduction to what I'm trying to do

Hi, I'm trying to make a 2D pixel platformer game in Unity. Before that I used Godot, and for some reasons (not related to the engine) I switched to Unity. In Unity there are things I need to handle myself that were handled by the engine in Godot (I guess), because Godot is more suitable for and compatible with 2D pixel games (I guess).
So I am not sure why the problem happens.

The Problem

In the scene, my pixel sprite plays its Idle animation (it's only breathing, so you can think of it as the head and the body moving 1 px up and down repeatedly). While that simple animation plays, if my sprite is big enough on the screen there is no problem; it looks like a pixel-perfect animation playing without distortion (or disruption, I'm not sure of the right word). But if the character looks smaller on the screen, meaning I scale the character down, or keep the scale the same and make the camera's view bigger so the character looks smaller (not too small, a regular platformer character size), then the character's eyes visibly get thinner and thicker as the animation plays (head and body moving 1 px up and down repeatedly). The character's eye is just 2 black pixels side by side, that's it. While the animation plays it kind of distorts; the height changes.

Yes, I fixed it somehow, but...

I fixed it, but I don't know how, and it seems like a temporary fix. I also don't think that is how it's supposed to be fixed, because we had to write code for it. I'm okay with writing code to fix it, but first I want to know what the problem was and how to fix it in the editor. And it just seems weird to write code simply to have a properly functioning pixel sprite animation.

How did I fix it? I just watched this video and added the same script to the camera. I have no idea how it worked, or why my animation problem was occurring in the first place.

What I’m asking for

Please, somebody explain all that to me: why is that happening, and what are the possible and proper fixes? I've been researching for days, and I thought Unity had a "big community, a lot of tutorials, too easy to learn".

TL;DR: while the animation plays, some of the pixel sprite character's pixels distort.

version: Unity 2019.3.3f1

I imported the sprite the right way (Filter Mode: Point (no filter), Compression: None, Pixels Per Unit: 16; that last option seems suspicious, maybe the problem is about those pixels-and-units thingies).
I can provide more information about the project, or send you the whole project directly if you can't reproduce the error. I'm just too curious and want to learn how to set up a proper 2D display and finally start the fun part, coding mechanics etc.
Since a camera script solved the issue, the problem might be in Unity's camera settings, because Unity has no special 2D camera component while Godot has Camera2D and Camera3D separately. These are just my guesses; please teach me what you know.
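Since posting I've kept digging, and my current guess (please correct me if it's wrong) is this: if one texture pixel doesn't cover a whole number of screen pixels, a 1 px animation step gets rounded to a different number of screen pixels from frame to frame, which would explain the eye getting thinner and thicker. The pixel-perfect camera scripts seem to pick the camera size so that the texture-pixel-to-screen-pixel ratio is an integer, and snap positions to the 1/PPU grid. Below is a minimal sketch of that arithmetic in plain Python (not Unity code; it assumes an orthographic camera, a 1080 px window height, and the function names are just mine for illustration):

SCREEN_HEIGHT_PX = 1080   # assumed vertical resolution of the game window
PPU = 16                  # Pixels Per Unit from the sprite import settings

def ortho_size_for_integer_zoom(zoom):
    # orthographicSize is half the visible height in world units. Choosing it
    # this way makes one texture pixel cover exactly `zoom` screen pixels, so
    # a 1 px animation step always maps to the same number of screen pixels
    # and rows never get rounded to different heights.
    return SCREEN_HEIGHT_PX / (2.0 * PPU * zoom)

def snap_to_pixel_grid(x, y):
    # Snap a world-space position to whole texture pixels (multiples of 1/PPU)
    # so the sprite never sits between screen pixels.
    step = 1.0 / PPU
    return round(x / step) * step, round(y / step) * step

print(ortho_size_for_integer_zoom(4))   # 8.4375 world units
print(snap_to_pixel_grid(3.141, 0.07))  # (3.125, 0.0625)

If that reading is right, fixing it "in the editor" would just mean choosing a camera size and resolution that keep this ratio an integer, which is what I'd like someone to confirm.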

What is the name of this distortion or artifact?

What is the name of this distortion or artifact? It is usually found in smartphone cameras and in Nikon and Sony point-and-shoot cameras. It is not found in Canon point-and-shoot cameras; in those, a speckled pattern is seen instead of this distortion or artifact.

When a photo taken with those cameras is viewed at full resolution on a normal-sized monitor, you can notice a kind of haziness, with splotches and smudges around edges and lines in the photo, wherever they may be, not just towards the edges of the frame. Is this due to the sensor or the lens, and what is the name of such a distortion or artifact?

The distortion/artifact I'm talking about is visible in the feathers of the duck in this photo. I've included a crop of the part that shows the distortion I am talking about:

[Crop of the duck photo showing the artifact]

In case the site prevents hotlinking, you can see the duck’s photo in this review: https://www.photographyblog.com/reviews/nikon_coolpix_a900_review
(Don't forget to click it to view it at full resolution.)

mathematics – Emulate Doom-style raycaster projection distortion with modern graphics APIs

I'm currently working on a Doom-like engine with fewer technical limitations, and I'm looking into rendering techniques. I'm aware Doom's a raycaster, and I want to capture a similar look with modern graphics APIs, without implementing a raycasted column renderer in them.

One of the technical limitations I'm ignoring is on the inclination of the camera, which was locked to be level in Doom due to the "column" drawing. Modern faithful source ports, such as Crispy Doom, and some of the later Doom-engine games like Heretic, allow looking up and down using y-shearing. In the image below, taken from the linked page, the green rectangle represents the regular un-inclined view and the red rectangle represents the "inclined" view.

[Image from the linked page: animated view of the y-sheared camera]

From the image, you can see that it's effectively rendering a vertically wider FOV and cropping it, though with column rendering that's a little more direct. This is different from simply adjusting the height of the camera: with this method the vanishing points remain at their original locations, which causes some distortion as you look up and down. For example, in the image above the apparent angle between the columns and the roof doesn't change with this method, whereas if you moved the camera upwards you would expect the angle between them to become more orthogonal the closer they come to the vertical center of the camera's view.

I would like to emulate this distortion using modern graphics APIs (wgpu, Vulkan, DX12, Metal) without implementing the equivalent of the software renderer, and without overdrawing the screen and then cropping. I'm also only looking for an approximate emulation of the effect; if it looks pretty close, that's good enough.


I did also see this on one of the other exchanges, and it sounds eerily similar to how I described y-shearing:

I imagine that you would cancel out the inclination of the camera in the view matrix, then somehow map it to a vertical “shift” of the projection matrix, though I’m uncertain how to perform that mapping – or if there even is a sensical one.
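If that quoted idea is right, I think the "vertical shift of the projection matrix" is just an asymmetric (off-center) frustum: keep the view matrix level and slide the projection window up or down by near * tan(pitch). Here's a numpy sketch of what I mean (OpenGL-style matrix conventions; the clip-space details would need adapting for wgpu/Vulkan/DX12/Metal), though I'd appreciate confirmation that this really is equivalent to y-shearing:

import numpy as np

def frustum(l, r, b, t, n, f):
    # Standard OpenGL-style perspective frustum (right-handed eye space, -z forward).
    return np.array([
        [2*n/(r-l), 0.0,        (r+l)/(r-l),  0.0],
        [0.0,       2*n/(t-b),  (t+b)/(t-b),  0.0],
        [0.0,       0.0,       -(f+n)/(f-n), -2*f*n/(f-n)],
        [0.0,       0.0,       -1.0,          0.0],
    ])

def y_sheared_perspective(fovy, aspect, near, far, pitch):
    # Doom-style look up/down: instead of pitching the camera, shift the
    # projection window vertically by near * tan(pitch). Vanishing points and
    # vertical lines stay where they were, which is the y-shear look.
    t = near * np.tan(fovy / 2.0)
    shift = near * np.tan(pitch)
    return frustum(-t * aspect, t * aspect, -t + shift, t + shift, near, far)

Equivalently, you can take the usual symmetric projection matrix and overwrite its [1][2] element with tan(pitch) / tan(fovy / 2), which is the same window shift expressed directly in the matrix.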


Thanks for your time!

blueprints – Unreal: How to feed images into the OpenCV Lens Distortion calibrator

I'm currently trying to use the OpenCV Lens Distortion plugin that comes installed by default with the Virtual Production example in Unreal 4.27 to create a camera undistortion texture for my camera. So far I have had no issue getting this to work by manually setting the lens distortion parameters (K1 through K6, P1, P2, etc.), but now I want to generate this data automatically from a series of images taken with my camera. To do this I threw together a blueprint (I couldn't get access to OpenCV from C++ scripts) that takes a directory, grabs all JPG files and feeds them into the calibrator.

[Screenshot of the blueprint]

Unfortunately, no matter what I try the calibrator refuses to accept any of my test images, and since this is a blueprint I can't really debug the problem any further; all I know is that the FeedImage node returns false for every image. I've tried searching for documentation or tutorials on the OpenCV Lens Distortion plugin, but for some reason it is a completely undocumented feature, apart from a couple of pages saying the nodes exist and two or three questions on this answer hub asking how to turn the distortion variables into a texture. If anyone knows what I might be missing, please let me know.


Just for reference, here is a sample of one of my test images:

  • Board Width: 10
  • Board Height: 7
  • Square Size: 3.2727 cm

I was able to get OpenCV working directly in another program using a Python script, but I would like to have this working in Unreal to minimize the number of programs needed. Obviously the functions are different, since Python deals directly with OpenCV while Unreal uses a blueprint wrapper, but at least I know the images and calibrator settings should be valid.
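For reference, here is roughly what that working Python route looks like (a minimal sketch, not my exact script; the directory path is made up). One thing I'm unsure about: OpenCV's pattern size is the number of inner corners, so a board of 10 x 7 squares is detected as a 9 x 6 pattern, and I don't know which convention the Unreal calibrator expects:

import glob
import cv2
import numpy as np

pattern = (9, 6)            # inner corners of a board with 10 x 7 squares
square_size = 0.032727      # 3.2727 cm in metres

# The board's corner grid in its own plane (z = 0), reused for every image.
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_size

obj_points, img_points, size = [], [], None
for path in glob.glob("calib_images/*.jpg"):      # hypothetical directory
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
print("RMS reprojection error:", rms)
print("k1, k2, p1, p2, k3:", dist.ravel()[:5])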

I also want to mention that I saw the Lens Distortion plugin mentioned in this post, but unfortunately it does not seem to be able to accept images to generate a lens distortion model; it simply builds one from the k1, k2, k3, p1, p2 parameters and camera matrix values.

3d – Raycasting floor texture distortion problem

I'm currently working on a raycaster in Construct 2.

The FOV is 90 degrees with a screen resolution of 199 x 174. I used this tutorial when it comes to the floor raycasting: https://wynnliam.github.io/raycaster/news/tutorial/2019/04/09/raycaster-part-03.html

The floors seem to render properly seam-wise, if that makes sense, but the textures themselves (for each square of the floor, if you will) appear distorted in a circular motion. I think this happens in the part that picks the correct part of the texture to display.

Towards the end, the tutorial says to pick a pixel of the texture using

floor_point.x = player.x + cos(alpha) * d

floor_point.y = player.y - sin(alpha) * d

with alpha being the angle of the cast ray relative to the player (I believe; correct me if I'm wrong, I'm not 100% sure alpha is actually the ray angle), and d being the distance to the floor point. I believe this is where I have a problem when it comes to determining the part of the texture to display.

How mine works is that the floor itself is a 32 x 32 tilemap with each tile being 1 pixel, giving a range of 1024 tiles. I obtain the floor point X and the floor point Y, mod each of them by 32 since that is the texture width, and multiply them by each other, so the tile it picks can be in a range of 1024.

With the FOV being 90 and the screen width being 199, the distance to the projection plane is 199/tan(45), which equals 122.856555133. The player height is set to 32.

The formula below is used to determine the pixel (tile for the tilemap) for each floor point.

((player.x + cos(ray.angle) * ((playerheight * 122.856555133 / (floorpoint.y) / Cos(abs(ray.angle - player.Angle))))%32) * ((player.y - sin(ray.angle) * ((playerheight * 122.856555133 / (floorpoint.y) / Cos(abs(ray.angle - player.Angle))))%32)
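To make sure I'm reading the tutorial correctly, here is the same floor casting written out in Python with my own variable names (a sketch for comparison, not a claimed fix). Two places where it differs from my formula above, and where I'm unsure: it keeps the X and Y texture coordinates separate instead of multiplying the two mod-32 results together, and the projection-plane distance comes out as 99.5 rather than 122.856..., since 199/tan(45) only gives 122.856... if the 45 is treated as radians:

import math

SCREEN_W, SCREEN_H = 199, 174
FOV = math.radians(90)
DIST_TO_PLANE = (SCREEN_W / 2) / math.tan(FOV / 2)   # = 99.5
PLAYER_HEIGHT = 32
TEX_SIZE = 32

def floor_tile(player_x, player_y, player_angle, ray_angle, row):
    # Straight-ahead distance to the floor point that appears on this screen
    # row, then divided by cos(ray angle - player angle) to get the distance
    # along this particular ray (the usual fisheye correction, as in the tutorial).
    row_below_center = row - SCREEN_H / 2
    d = (PLAYER_HEIGHT * DIST_TO_PLANE / row_below_center) \
        / math.cos(ray_angle - player_angle)
    # World-space floor point hit by this ray.
    fx = player_x + math.cos(ray_angle) * d
    fy = player_y - math.sin(ray_angle) * d
    # Texture coordinates stay separate and are only combined at the very
    # end to index the 32 x 32 tilemap.
    tex_x = int(fx) % TEX_SIZE
    tex_y = int(fy) % TEX_SIZE
    return tex_y * TEX_SIZE + tex_x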

Here's what the original texture is supposed to be displayed as:

[Image of the original texture]

Here's what the floor looks like:

[Image of the rendered floor]

And here's the same problem with a bigger square size, to make it clearer:

[Image of the rendered floor with larger squares]

Is modding the X and Y points by 32 what may be causing the problem? I'm not sure what math is incorrect here, unless I got the "alpha" variable from the tutorial confused with something else.

shaders – How to map a 2D point on a grid/plane mesh to a 3D point on a sphere with minimal distortion?

I have a grid of voxels that I want to "bend" into a sphere via a vertex shader with minimal distortion. I've tried two approaches so far, neither of which quite gives me the desired effect.

  1. First I tried the solution suggested in this answer, but it seems to give me a very broken sphere with many faces backwards or overlapping and thus z-fighting.


  2. Then I tried implementing this much simpler solution discussed here, which looks a lot closer to what I want, but has huge amounts of distortion near the equator and also seems to cover less than half the sphere. It does not cause any backwards faces or z-fighting like the previous solution.


Here's the code for both approaches:

#version 150

#moj_import <light.glsl>

in vec3 Position;
in vec4 Color;
in vec2 UV0;
in ivec2 UV2;
in vec3 Normal;

uniform sampler2D Sampler2;

uniform mat4 ModelViewMat;
uniform mat4 ProjMat;

uniform mat4 SceneModelViewMat;
uniform mat4 ModelViewInverseMat;
uniform vec3 CameraPosition;
uniform float Radius;
uniform vec2 MaxUV;

out float vertexDistance;
out vec4 vertexColor;
out vec2 texCoord0;
out vec4 normal;

#define PI 3.14159265359
#define TWO_PI 6.28318530718

// First solution
vec3 toSphere(float u, float v, float radius)
{
    float lon = TWO_PI * u;
    float lat = PI * v;
    float x = radius * cos(lat) * cos(lon);
    float y = radius * cos(lat) * sin(lon);
    float z = radius * sin(lat);
    return vec3(x, y, z);
}

// Second solution
vec3 toSphere1(vec3 pos)
{
    return normalize(pos) * pos.y;
}

void main()
{
    vec3 posMS = (ModelViewInverseMat * vec4(Position, 1)).xyz;

    // First solution
    // float u = posMS.x / MaxUV.x;
    // float v = posMS.z / MaxUV.y;
    // float radius = posMS.y;
    // vec4 pos = SceneModelViewMat * vec4(toSphere(u, v, radius), 1);

    // Second solution
    vec4 pos = SceneModelViewMat * vec4(toSphere1(posMS), 1);
    
    gl_Position = ProjMat * ModelViewMat * pos;

    // Unrelated
    vertexDistance = length((ModelViewMat * pos).xyz);
    vertexColor = Color * minecraft_sample_lightmap(Sampler2, UV2);
    texCoord0 = UV0;
    normal = ProjMat * ModelViewMat * vec4(Normal, 0.0);
}

So my question is: what is a good way to map a 2D point to a 3D point on a sphere with minimal distortion? Pretty much all I can find online is about forward projections, where a point on a sphere is transformed to a point on a map, not the other way around.
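For what it's worth, the best candidate I've found so far (I'm not sure it's optimal): any mapping from a flat grid to a sphere distorts something, but the inverse of the Lambert cylindrical equal-area projection at least keeps cell areas uniform, so every grid cell covers the same area of the sphere, while shapes still stretch near the poles. A small Python sketch of that mapping, with u and v being the grid coordinates normalised to [0, 1]:

import math

def square_to_sphere(u, v, radius=1.0):
    # Inverse Lambert cylindrical equal-area mapping:
    #   u in [0, 1] -> longitude in [0, 2*pi)
    #   v in [0, 1] -> latitude, chosen so that equal areas of the square
    #                  map to equal areas on the sphere (sin(lat) uniform in v)
    lon = 2.0 * math.pi * u
    lat = math.asin(2.0 * v - 1.0)
    x = radius * math.cos(lat) * math.cos(lon)
    y = radius * math.sin(lat)          # y is "up", matching the grid's height axis
    z = radius * math.cos(lat) * math.sin(lon)
    return (x, y, z)

# Example: the middle row of the grid lands on the equator.
print(square_to_sphere(0.25, 0.5))      # roughly (0, 0, 1)

A cube-sphere or octahedral mapping would spread the stretching more evenly instead of concentrating it at two poles, at the cost of seams between faces; I haven't tried that yet.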

Is it normal for RAW files to have lens distortion? How best to deal with it?

Is this normal for RAW images?

Yes, it is normal.

Based on my experience with Canon cameras and different lenses, the RAW image remains uncorrected with respect to the corresponding JPEG.
This means that the following corrections are not applied (and can be applied separately in a RAW conversion program):

  • Colour corrections with picture profiles.
  • White balance corrections.
  • Noise “corrections”, i.e. denoising.
  • Lens corrections.

How to best deal with the lack of correction in RAW images?

Use the Lightroom Lens Corrections panel.

You don’t need to eyeball it if you used a lens that is supported by Lightroom’s lens correction module.

Assuming the lens is supported, you can select all photos taken with that lens and apply the appropriate lens correction profile:

  1. In the Lens Corrections panel of the Develop module, click Profile and select Enable Profile Corrections.
  2. To change the profile, select a different Make, Model, or Profile.

There are additional steps for more advanced tweaking; you can view them on the linked Lightroom help page.

Is the required correction the same for all RAW images of a camera + lens combo?

No, the required correction also depends on the focal length used, the aperture, the focus distance and your personal taste.

Lightroom uses the correction profile to derive, from the lens settings (aperture, focal length), what kind of correction it needs to apply. The correction is therefore not exactly the same for every RAW image, but it is exactly the same for the same lens, aperture and focal length.

Of course, you have the (artistic) freedom to apply a different amount of lens correction than LR suggests, depending on the scene or the look you're going for. However, I find the default setting a great starting point to apply to all the photos (per lens) and then tweak from there.
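For readers curious what such a profile-driven correction amounts to outside Lightroom: it is essentially a pixel remap through a parametric distortion model. A minimal OpenCV sketch of that idea is below; the file name, camera matrix and coefficients are made-up placeholders, whereas a real profile (or a calibration) supplies them per lens, focal length and aperture:

import cv2
import numpy as np

img = cv2.imread("photo.jpg")                    # hypothetical input file
h, w = img.shape[:2]

# Placeholder pinhole camera matrix and radial/tangential coefficients
# (k1, k2, p1, p2, k3). Real values come from a lens profile or a calibration.
K = np.array([[w, 0, w / 2],
              [0, w, h / 2],
              [0, 0, 1]], dtype=np.float64)
dist = np.array([-0.12, 0.03, 0.0, 0.0, 0.0])

undistorted = cv2.undistort(img, K, dist)
cv2.imwrite("photo_undistorted.jpg", undistorted)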

computer vision – How to correct distortion in a stitched dual-camera image to estimate a homography

I am trying to undistort the image obtained from two cameras that together generate a stitched image. Unfortunately, I do not have access to the cameras, and I would like to know the best way to correct the image so that the lines below become straight.

[Stitched image with curved field lines]

The next step I am trying to do is to calculate the homography with respect to the field. The only thing I have found is that the transformation between the image field and the template field plane has to be done through a linear transformation.
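To make that second part concrete, this is the kind of estimation I mean, sketched with OpenCV. The point correspondences here are made up; in practice they would be hand-picked field features (line intersections, penalty-box corners) in the undistorted image and their positions on the field template:

import cv2
import numpy as np

# Pixel positions of known field features in the (already undistorted) image,
# and the same features in the field template plane (e.g. in metres).
# At least four non-collinear correspondences are needed.
image_pts = np.array([[312, 540], [1480, 505], [1620, 930], [205, 980]], dtype=np.float32)
field_pts = np.array([[0, 0], [52.5, 0], [52.5, 34], [0, 34]], dtype=np.float32)

H, mask = cv2.findHomography(image_pts, field_pts, cv2.RANSAC)

# Map any image point into field coordinates with the estimated homography.
pt = np.array([[[900.0, 700.0]]], dtype=np.float32)
print(cv2.perspectiveTransform(pt, H))

The lens/stitching distortion has to be corrected first, because a homography can only model a projective mapping between planes, which keeps straight lines straight.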