Unity – Calculating the size and position of a Tilemap for camera bounds

I’m new to Unity and I’m following a series of tutorials on building a 2D Zelda-like game. I’m at the stage where I have two Tilemaps side by side and I’m working on a transition between them, triggered when the player collides with a trigger at the edge of Tilemap 1.

I have a basic camera movement script which follows the player and constrains the camera within the bounds of Tilemap 1. The constraint is driven by two Vector2 fields (a min and a max). I got the values for these two fields manually in the Scene view by moving my camera to the desired edges, noting its transform position, and entering those values into the camera movement script.
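
For reference, the clamping in the camera movement script looks roughly like this (a simplified sketch – the field names are my own):

    using UnityEngine;

    public class CameraMovement : MonoBehaviour
    {
        public Transform target;     // the player
        public float smoothing = 5f;
        public Vector2 minPosition;  // bottom-left limit, currently entered by hand
        public Vector2 maxPosition;  // top-right limit, currently entered by hand

        void LateUpdate()
        {
            // Follow the player, keeping the camera's own z
            Vector3 targetPosition = new Vector3(target.position.x, target.position.y, transform.position.z);

            // Constrain the camera within the manually measured room bounds
            targetPosition.x = Mathf.Clamp(targetPosition.x, minPosition.x, maxPosition.x);
            targetPosition.y = Mathf.Clamp(targetPosition.y, minPosition.y, maxPosition.y);

            transform.position = Vector3.Lerp(transform.position, targetPosition, smoothing * Time.deltaTime);
        }
    }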

For the transition, I will change these min and max Vector2 values in the camera movement script so that it constrains the camera to Tilemap 2 instead. I could just hard-code these values again, but it seems like a good learning opportunity to try to calculate them dynamically.

My room change script stores a reference to the camera movement script so that it can amend the min and max values itself. In an attempt to calculate the next room’s bounds dynamically, I added a public Tilemap field to the room change script so it can examine the next room’s Tilemap and work out the bounds itself. I then set this field in the Inspector to point to Tilemap 2.
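
Simplified, the room change script looks something like this (again a sketch – I’m assuming the player is identified by its tag):

    using UnityEngine;
    using UnityEngine.Tilemaps;

    public class RoomMove : MonoBehaviour
    {
        public CameraMovement cameraMovement; // the script on the Main Camera
        public Tilemap nextRoomTilemap;       // set in the Inspector to Tilemap 2

        void OnTriggerEnter2D(Collider2D other)
        {
            if (other.CompareTag("Player")) // assumes the player object is tagged "Player"
            {
                // This is where I want to derive the new bounds from nextRoomTilemap
                // instead of assigning hard-coded values:
                // cameraMovement.minPosition = ...;
                // cameraMovement.maxPosition = ...;
            }
        }
    }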

I’ve so far been unable to calculate these new bounds from the Tilemap reference. The cellBounds property looked promising – Position: (-44, -7, 0), Size: (19, 20, 1) – and the size is right, but the origin position I measured manually by moving my camera in the Scene view is (-35.14, -2.02). As a result, setting the bounds from what cellBounds reports does not work. I’ve made sure to use ‘Compress Tilemap Bounds’ too.
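
This is roughly the attempt, inside the room change script above:

    BoundsInt bounds = nextRoomTilemap.cellBounds;
    Debug.Log(bounds.position); // (-44, -7, 0)
    Debug.Log(bounds.size);     // (19, 20, 1)

    // Treating the cell position as a world position gives the wrong min;
    // it's nowhere near the (-35.14, -2.02) I measured in the Scene view:
    cameraMovement.minPosition = new Vector2(bounds.position.x, bounds.position.y);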

I’ve also tried accessing individual tiles within the Tilemap and using CellToWorld – this only seems to give me the tile’s position relative to the Tilemap (?).
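
For example:

    // Convert the lower-left cell of the bounds to world space
    Vector3 worldMin = nextRoomTilemap.CellToWorld(nextRoomTilemap.cellBounds.min);
    Debug.Log(worldMin); // still doesn't line up with the (-35.14, -2.02) I expect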

The transforms for all of the Tilemaps in my scene – and their parents (which are Grids) – are set to 0, 0, 0.

Basically, I know I should be calculating a value of approximately (-35.14, -2.02) for my new min position, but I can’t see any way to arrive at it. How do I go about achieving this, and the same for max?

Here is what my scene looks like:

  • Main Camera (Camera, Audio Listener, Camera Movement (script))
  • Player (Sprite Renderer, Rigidbody 2D, Box Collider 2D, Player Movement (script), Animator)
  • Room1 (Grid)
    • Ground (Tilemap, Tilemap Renderer)
    • Collision (Tilemap, Tilemap Renderer, Tilemap Collider 2D, Rigidbody 2D, Composite Collider 2D)
  • Room2 (Grid)
    • Tilemap (Tilemap, Tilemap Renderer)
  • RoomTransfer (Box Collider 2D, Room Move (script))