c# – How to tile a texture

I’m creating a simple program in C# and I’m trying to slice an image into multiple tiles in a grid pattern, so it doesn’t get stretched when drawn at different resolutions.

What I’m trying to achieve:

  • I want the image to be sliced into multiple tiles.
  • I need the image to be scaled to the dimensions of a specified rectangle.
  • I need the image to be drawn without losing the aspect ratio it had before being scaled.

Below is the code I currently have. At the moment it does not scale the image correctly, and it seems the image is losing its aspect ratio. Also, note that I’m using a Matrix to scale the graphics object to a certain resolution.

public static List<RectangleF> TileRectangle (this RectangleF rectangle, int rows, int columns)
{
    var rectangles = new List<RectangleF>();

    float width  = rectangle.Width / rows;
    float height = rectangle.Width / columns;

    for (int x = 0; x < columns; x++)
        for (int y = 0; y < rows; y++)
            rectangles.Add(new RectangleF(rectangle.X + (width * x), rectangle.Y + (height * y), width, height));

    return rectangles;
}

public void Draw (Graphics graphics)
{
    graphics.InterpolationMode = InterpolationMode.NearestNeighbor;
    graphics.PixelOffsetMode = PixelOffsetMode.Half;

    var destination_slices = new RectangleF(0, 0, 512, 512).TileRectangle(3, 3);
    var source_slices = new RectangleF(0, 0, this.image.Width, this.image.Height).TileRectangle(3, 3);

    for (int i = 0; i < source_slices.Count; i++)
        graphics.DrawImage(this.image, destination_slices[i], source_slices[i], GraphicsUnit.Pixel);
}

At the moment this code is causing the following problems:

  • The image scales correctly only in the horizontal direction.
  • The image does not maintain its aspect ratio when scaled.
  • When the rows and columns are set to different values, the drawing comes out sliced incorrectly.
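The first two symptoms usually point at which count divides which dimension in TileRectangle: the column count should divide the width, and the row count the height. A Python sketch of that arithmetic (a hypothetical helper, just to sanity-check the tile maths, not the actual GDI+ code):

```python
def tile_rectangle(x, y, w, h, rows, columns):
    """Split (x, y, w, h) into rows * columns equal tiles."""
    # Columns divide the width, rows divide the height.
    tw, th = w / columns, h / rows
    return [(x + tw * c, y + th * r, tw, th)
            for r in range(rows)
            for c in range(columns)]

tiles = tile_rectangle(0, 0, 90, 60, 3, 3)
# 9 tiles of 30 x 20 that exactly cover the 90 x 60 rectangle.
```

With this arithmetic the tiles always cover the source rectangle exactly, whatever its aspect ratio, so each source tile drawn into the matching destination tile preserves the overall aspect.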

The drawing code produces the following output for the “source slices” when sliced into 3 by 3 tiles:

X=63,333332, Y=0, Width=63,333332, Height=16,333334
X=63,333332, Y=16,333334, Width=63,333332, Height=16,333334
X=63,333332, Y=32,666668, Width=63,333332, Height=16,333334
X=126,666664, Y=0, Width=63,333332, Height=16,333334
X=126,666664, Y=16,333334, Width=63,333332, Height=16,333334
X=126,666664, Y=32,666668, Width=63,333332, Height=16,333334

Here is an image describing the results I’m expecting:


I believe there might be a simple solution to this, so could someone post a direct answer with code that accomplishes these tasks?

Edit: Here is my implementation based on the answer from @Tyyppi_77. It is still not working at the moment.

public void Draw (Graphics graphics)
{
    float padding = 32;
    var destination = new RectangleF(0, 0, 512, 512);
    var rectangles = SliceRectangle(padding, Texture.Width, Texture.Height);

    float dest_middle_width  = destination.Width  - 2 * padding;
    float dest_middle_height = destination.Height - 2 * padding;

    // Corners.
    graphics.DrawImage(Texture, new RectangleF(destination.Left, destination.Top, rectangles[0].Width, rectangles[0].Height), rectangles[0], GraphicsUnit.Pixel);
    graphics.DrawImage(Texture, new RectangleF(destination.Right - padding, destination.Top, rectangles[2].Width, rectangles[2].Height), rectangles[2], GraphicsUnit.Pixel);
    graphics.DrawImage(Texture, new RectangleF(destination.Left, destination.Bottom - padding, rectangles[6].Width, rectangles[6].Height), rectangles[6], GraphicsUnit.Pixel);
    graphics.DrawImage(Texture, new RectangleF(destination.Right - padding, destination.Bottom - padding, rectangles[8].Width, rectangles[8].Height), rectangles[8], GraphicsUnit.Pixel);

    var middle = rectangles[4];

    // Top and bottom edges.
    for (int x = 0; x < Math.Min(dest_middle_width, middle.Width); x++)
    {
        var width = Math.Min(middle.Width, dest_middle_width - x);

        var top_area = new RectangleF(rectangles[1].X, rectangles[1].Y, rectangles[1].Width, rectangles[1].Height);
        top_area.Width = width;
        graphics.DrawImage(Texture, new RectangleF(destination.Left + padding + x, destination.Top, top_area.Width, top_area.Height), top_area, GraphicsUnit.Pixel);

        var bottom_area = new RectangleF(rectangles[7].X, rectangles[7].Y, rectangles[7].Width, rectangles[7].Height);
        bottom_area.Width = width;
        graphics.DrawImage(Texture, new RectangleF(destination.Left + padding + x, destination.Bottom - padding, bottom_area.Width, bottom_area.Height), bottom_area, GraphicsUnit.Pixel);
    }

    // Centre.
    for (int x = 0; x < Math.Min(dest_middle_width, middle.Width); x++)
    {
        var width = Math.Min(middle.Width, dest_middle_width - x);

        for (int y = 0; y < Math.Min(dest_middle_height, middle.Height); y++)
        {
            var height = Math.Min(middle.Height, dest_middle_height - y);

            var area = new RectangleF(middle.X, middle.Y, middle.Width, middle.Height);
            area.Width = width;
            area.Height = height;
            graphics.DrawImage(Texture, new RectangleF(destination.Left + padding + x, destination.Top + padding + y, area.Width, area.Height), area, GraphicsUnit.Pixel);
        }
    }
}

public RectangleF[] SliceRectangle (float padding, float width, float height)
{
    return new RectangleF[]
    {
        new RectangleF(0, 0, padding, padding),
        new RectangleF(padding, 0, width - 2 * padding, padding),
        new RectangleF(width - padding, 0, padding, padding),
        new RectangleF(0, padding, padding, height - 2 * padding),
        new RectangleF(padding, padding, width - 2 * padding, height - 2 * padding),
        new RectangleF(width - padding, padding, padding, height - 2 * padding),
        new RectangleF(0, height - padding, padding, padding),
        new RectangleF(padding, height - padding, width - 2 * padding, padding),
        new RectangleF(width - padding, height - padding, padding, padding),
    };
}
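The per-pixel loops above draw the middle strips one pixel at a time. An alternative (my suggestion, not part of @Tyyppi_77’s answer) is to compute nine destination rectangles with the same layout as SliceRectangle and let DrawImage stretch the edge and centre slices in nine calls. The layout arithmetic, sketched in Python:

```python
def nine_slice(x, y, w, h, pad):
    """Nine sub-rectangles of (x, y, w, h) in reading order:
    corners stay pad x pad, edges and centre absorb the rest."""
    mid_w, mid_h = w - 2 * pad, h - 2 * pad
    xs = [(x, pad), (x + pad, mid_w), (x + w - pad, pad)]
    ys = [(y, pad), (y + pad, mid_h), (y + h - pad, pad)]
    return [(cx, cy, cw, ch) for (cy, ch) in ys for (cx, cw) in xs]
```

Calling it once on the source texture and once on the destination rectangle gives two parallel lists; drawing source[i] into destination[i] stretches the middle slices while leaving the corners untouched.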

c# – Making a Texture Atlas in Unity

For optimization purposes and to reduce batch count, I am trying to combine my floor tile textures into atlases. Sharing the same material via an atlas can decrease draw calls and hence increase performance.
Here is the code snippet that creates the atlas, but I am unable to understand how to get a texture back from the atlas:

public class TexturePack : MonoBehaviour
{
    public GameObject[] allFloor;
    public GameObject[] newFloor;

    // Source textures.
    Texture2D[] atlasTextures;

    // Rectangles for individual atlas textures.
    Rect[] rects;

    public Texture2D atlas;

    void Start()
    {
        atlasTextures = new Texture2D[allFloor.Length];

        for (int i = 0; i < atlasTextures.Length; i++)
            atlasTextures[i] = (Texture2D) allFloor[i].gameObject.GetComponent<MeshRenderer>().material.mainTexture;

        // Pack the individual textures into the smallest possible space,
        // while leaving a two pixel gap between their edges.
        atlas = new Texture2D(512, 512);
        rects = atlas.PackTextures(atlasTextures, 2, 1024);

        // How to get the texture from the atlas for a specific object?
    }
}

Will this work within Unity, or do I have to go back to modelling software for UV mapping?
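You normally don’t “get the texture back” as a separate Texture2D. Instead, each object keeps its mesh but has its UVs remapped into the normalized Rect that PackTextures returned for that texture. The remap is just a scale and offset, sketched here in Python (in Unity you would apply the same arithmetic to `mesh.uv` in C#):

```python
def remap_uv(u, v, rect):
    """Map a UV (0..1) on the original texture into the atlas,
    where rect = (x, y, width, height) is the normalized rect
    PackTextures returned for that texture."""
    rx, ry, rw, rh = rect
    return (rx + u * rw, ry + v * rh)
```

As long as all UVs stay inside their rect, you do not need to go back to a modelling package; the original UV mapping keeps working after this per-object remap.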

unity – How to check if Texture 2D was provided in shader graph?

I have the following section in my shader graph that I want to simplify. At the moment I have to set a manual boolean, Use Normal Map, to decide whether to use a normal map texture or the default normal vector. Ideally, I want to detect whether a normal map was provided: if yes, go to the Sample Texture 2D step; if not, use the default normal vector. That way I could get rid of the manual boolean.

I tried checking for null, but I can’t seem to be able to plug the normal map into that.


c++ – Intel oneAPI DPCT can’t convert from CUDA 1-Channel texture to DPCT 4-Channel image_wrapper

I have the following problem:

I have a CUDA code that uses texture, for example:

texture<unsigned char, 2, cudaReadModeElementType> text;

unsigned char a = tex2D(text, cx + lx, cy);

When I run DPCT, I get this output:

DPCT1059:12: SYCL only supports 4-channel image format. Adjust the code.
dpct::image_wrapper<unsigned char, 2> text;

So, I changed the declaration to:

dpct::image_wrapper<sycl::uchar4, 2> text;

My problem is that I don’t know how to create an equivalent when reading. I have this:

sycl::uchar4 a = text.read(cx + lx, cy);

But I don’t know how to get the same unsigned char as in CUDA in my DPC++ code. Do I have to modify the indexes in the read? I’m really lost.

Thank you so much in advance !!

c# – How to implement texture slicing

I’m making a simple game in C# and I’m trying to slice my UI images into 9 smaller rectangles, so the images don’t get stretched when drawing at different resolutions, but I’m having lots of problems with my code.

Currently, I have the following method for slicing an image:

public RectangleF[] SliceImage (RectangleF rectangle)
{
    float x = rectangle.X;
    float y = rectangle.Y;
    float width = rectangle.Width;
    float height = rectangle.Height;
    float size = 32;

    return new RectangleF[]
    {
        new RectangleF(x, y, size, size), // top left
        new RectangleF(x + size, y, width - (size * 2), size), // top middle
        new RectangleF(x + (width - size), y, size, size), // top right
        new RectangleF(x, y + size, size, height - (size * 2)), // middle left
        new RectangleF(x + size, y + size, width - (size * 2), height - (size * 2)), // middle
        new RectangleF(x + (width - size), y + size, size, height - (size * 2)), // middle right
        new RectangleF(x, y + (height - size), size, size), // bottom left
        new RectangleF(x + size, y + (height - size), width - (size * 2), size), // bottom middle
        new RectangleF(x + (width - size), y + (height - size), size, size) // bottom right
    };
}

And then I draw the sliced image like this:

public void Draw (Graphics graphics)
{
    var destination_slices = this.SliceImage(this.Bounds);
    var source_slices = this.SliceImage(new RectangleF(0, 0, this.Texture.Width, this.Texture.Height));

    for (int i = 0; i < source_slices.Length; i++)
        graphics.DrawImage(this.Texture, destination_slices[i], source_slices[i], GraphicsUnit.Pixel);
}

Now, the problem is that this method is producing completely wrong results, such as:

  • The image slices are not sized correctly.
  • When changing the Bounds rectangle values, the slices get sized incorrectly.
  • When changing the size value, the image gets cut off.
    • I believe the size value is what is making the slices draw incorrectly, so I would like to know what value it should have.
  • If I remove either the destination or the source slices, I also get wrong results.

I tried lots of different values and combinations but could not make this method work,
so I’m hoping someone here can test the code and help me find a solution to this problem.

Thanks in advance!
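One concrete bug worth ruling out: a fixed size of 32 breaks as soon as either dimension of the rectangle is smaller than 64, because the corner slices then overlap and the image appears cut off. Clamping the corner size avoids that; a hypothetical guard, sketched in Python:

```python
def safe_corner_size(size, width, height):
    """Corner slices can never exceed half the smaller dimension,
    otherwise the nine slices overlap and the image gets cut off."""
    return min(size, width // 2, height // 2)
```

Note that with the same (clamped) corner size used for both the source and the destination slices, the middle slices automatically end up with different sizes in the two lists, which is exactly what produces the stretch while the corners stay crisp.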

opengl – Render 3 textures into an RGB texture?

Flutter has support for external textures, but they have to be RGB. I want to render YUV video in Flutter.

In OpenGL I used to create three textures and upload Y, U, and V to each corresponding texture. Then I’d paint the screen using these three textures, forming an RGB image.

In Flutter, I need to render to an RGB texture. Is there a way to still do the YUV conversion using OpenGL and then render to Flutter’s RGB texture?

Maybe rendering to Y, U, and V textures and then rendering to the RGB texture from those three?
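One usual approach is to bind the three planes as separate textures and do the matrix conversion in a fragment shader while rendering into the RGB target. The per-pixel conversion is just the following arithmetic, sketched in Python for one common convention (BT.601 full-range; check which matrix and range your video source actually uses):

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range BT.601 YUV sample (0..255 each) to RGB."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda c: max(0, min(255, round(c)))
    return clamp(r), clamp(g), clamp(b)
```

In the shader this becomes three texture lookups followed by the same dot products, so a single fullscreen pass into the RGB texture is enough.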

Texture “Mapping” (Using just part of a texture sheet on a object face)? Unreal Engine

I’ve created a Playing_Card class which is of course for card games such as Blackjack.
The class works fine, except that until now I have not used graphics.

I have now started using Unreal Engine and have started getting to grips with the less advanced parts such as making my own GameModeBase and Player/Pawn class.

But until now, I have made-do with just starter content only.

So this is the first model I’ve ever imported to UE4. I created the model in Maya and have the UV-Mapping all in place. The rear of the card is of course the same for every card, so I have made that built-in to my exported fbx. But for the face, I am not sure how to deal with it.

What I’d like is to be able to create a “texture atlas” of sorts, i.e. a bunch of Vector2D values along with single width/height values for the portion I need.

But Unreal is mind-blowing to me with all the options, and even the docs are mind-blowing. I would really like to learn it, though. I realise this might not be something that can be easily written up as an answer.

I guess what I need to get started is just to know where the documentation covers all this stuff AND, even better, where in the UE4 UI I can find it (what do they call this stuff?).

ONE OTHER RELATED THING: I wondered if it would hurt the program/game if I made one long texture of the card faces, so the texture file would be something like 1300×150 (rather than the faces texture shown in my picture)?
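The “texture atlas of sorts” described above boils down to a per-card scale and offset of the face UVs. The arithmetic for a simple grid layout is sketched below in Python (the layout is hypothetical: cards left to right, 13 per row); in UE4 you would typically feed the same offset/scale into a TexCoord node via per-card material parameters:

```python
def card_uv_rect(index, cards_per_row=13, rows=1):
    """Normalized (u, v, width, height) sub-rectangle for card
    `index` in a grid atlas, top row first, v = 0 at the bottom."""
    col = index % cards_per_row
    row = index // cards_per_row
    w, h = 1.0 / cards_per_row, 1.0 / rows
    return (col * w, 1.0 - (row + 1) * h, w, h)
```

A long 13 × 1 strip can work, though power-of-two dimensions are generally friendlier to mipmapping and texture streaming than something like 1300 × 150.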

Any help is greatly appreciated. Here are some images to help you understand what stage I am at:


3d – Raycasting floor texture distortion problem

I’m currently working on a raycaster in Construct 2.

The FOV is 90 degrees with a screen resolution of 199 x 174. I used this tutorial when it comes to the floor raycasting: https://wynnliam.github.io/raycaster/news/tutorial/2019/04/09/raycaster-part-03.html

The floors seem to render properly seam-wise, if that makes sense, but the textures themselves (for each square of the floor, if you will) appear distorted in a circular motion. I think this happens in the part that picks the correct part of the texture to display.

The tutorial towards the end says to pick a pixel of the texture using

floor_point.x = player.x + cos(alpha) * d

floor_point.y = player.y - sin(alpha) * d

with alpha being the angle of the ray relative to the player (I believe; correct me if I’m wrong, I’m not 100% sure alpha is actually the ray angle), and d being the distance to the floor point. I believe this is where I have a problem when it comes to determining the part of the texture to display.

How mine works is that the floor itself is a 32 × 32 tilemap with each tile being 1 pixel, giving a range of 1024 tiles. I obtain the floor point X and Y, mod each by 32 since that is the texture width, and multiply them together, so the tile it picks can be in a range of 1024.

With the FOV being 90 and the screen width being 199, the distance to the projection plane is 199/tan(45), which equals 122.856555133. The player height is set to 32.

The formula below is used to determine the pixel (tile for the tilemap) for each floor point.

((player.x + cos(ray.angle) * (playerheight * 122.856555133 / floorpoint.y / cos(abs(ray.angle - player.Angle)))) % 32) * ((player.y - sin(ray.angle) * (playerheight * 122.856555133 / floorpoint.y / cos(abs(ray.angle - player.Angle)))) % 32)

Here’s what the original texture is supposed to be displayed as


Here’s what the floor looks like


and here’s the same problem but the square size is bigger to make it more clear


Is modding the X and Y points by 32 what may be causing the problem? I’m not sure what math is incorrect here, unless I got the “alpha” variable from the tutorial confused with something else.
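Multiplying the two modded coordinates together is very likely the problem: (x % 32) * (y % 32) collapses two independent texel coordinates into one number, so many different (x, y) pairs land on the same tile, which reads as a swirl. The coordinates have to stay separate and be flattened row-major into the tile index. A Python sketch of the per-row computation under my reading of the tutorial (names are hypothetical; row_dist is the screen-row term the tutorial derives):

```python
import math

def floor_tile(px, py, ph, player_angle, ray_angle,
               row_dist, dist_to_proj, tex_size=32):
    """Pick the tile index (0..tex_size**2 - 1) for one floor pixel."""
    # Perpendicular distance for this screen row, fisheye-corrected.
    straight = ph * dist_to_proj / row_dist
    d = straight / math.cos(ray_angle - player_angle)
    # World-space floor point hit by this ray.
    fx = px + math.cos(ray_angle) * d
    fy = py - math.sin(ray_angle) * d
    # Mod each axis SEPARATELY, then flatten row-major.
    u = math.floor(fx) % tex_size
    v = math.floor(fy) % tex_size
    return v * tex_size + u
```

Whether this fixes everything depends on the rest of your setup, but flattening as v * 32 + u instead of multiplying the two mods should remove the circular distortion.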

directx – Relocate texture regions with pixel shader

Let’s assume you have an input texture coordinate variable called uv, running from (0,0) in the bottom-left to (1,1) in the top right.

If you wrote just

return tex2D(InputTexture, uv);

…then we’d pass through the source texture to the output texture unchanged. So to get the re-arrangement you want, we’ll need to play with the uv coordinates a bit.

First, we can do some rounding to establish which quadrant we’re in:

float2 floored = floor(uv * 2);
float2 quadRelative = uv - (floored * 0.5f);

floored will take on the values (0,1) (1,1) (0,0) and (1,0) in the input quadrants you labelled 1-4, respectively, and quadRelative will give our pixel’s texture coordinate offset from the bottom-left of its containing quadrant.

Now we can remap our quadrants as desired. If we want to shuffle counter-clockwise, then we need to sample from the next quadrant clockwise:


Here’s a formula that achieves that:

float2 remapped = float2(floored.y, 1 - floored.x);

(The formula to rotate in the opposite direction is left as an exercise for the reader 😉)

And finally we can reconstruct our new uv by adding our relative offset to our new quadrant:

float2 samplePoint = 0.5f * remapped + quadRelative;

You can pass this modified texture coordinate into your sampling function to read from this new quadrant. (I showed the old-style tex2D() version above, but this works just as well with other sampling functions)

You’ll want to ensure that mipmapping is disabled for your source texture sampler — otherwise the discontinuities in sampling coordinates at the quadrant boundaries will make the GPU think it needs to read from a tiny mip level, resulting in unnecessary blurry pixels. If you can’t disable mipmapping, then you can use the gradients from the original uv variable with tex2Dgrad() or similar.
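The three steps above (find the quadrant, take the quadrant-relative offset, remap, recombine) can be checked outside the shader. Here is the same arithmetic as a Python sketch, with the float2 operations written out componentwise:

```python
import math

def remap_quadrant(u, v):
    """Sample coordinate for the counter-clockwise quadrant shuffle."""
    fx, fy = math.floor(u * 2), math.floor(v * 2)   # which quadrant
    rx, ry = u - fx * 0.5, v - fy * 0.5             # offset within quadrant
    mx, my = fy, 1 - fx                             # next quadrant clockwise
    return (0.5 * mx + rx, 0.5 * my + ry)
```

For example, the centre of the top-left quadrant, (0.25, 0.75), samples from (0.75, 0.75), the centre of the top-right quadrant, so top-right content moves counter-clockwise into the top-left.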

plotting – Parametric Plot: different coordinates for texture and mesh

I am trying to create a parametric plot which maps the complex unit circle to a different region in the complex plane. That works fine, but I have one problem:
I want to see the images of a mesh with respect to polar coordinates in the unit circle, AND I want to map a picture embedded in the unit circle as a texture for the plot. The problem for me is that if I parametrize the plot by polar coordinates, Mathematica uses the x- and y-axes of the image as the angle/modulus axes. Is there any way to use Cartesian coordinates for the texture of the plot?

I hope the formulation was understandable. Any advice is highly appreciated!