I have a couple of 2D meshes that make up a hierarchical animated model.
I want to do some post-processing on it, so I decided to render this model to a texture, so that I could do the post-processing with a fragment shader while rendering it as a textured quad.
But I don’t suppose it would be very smart to make the render texture as large as the entire screen for every layer I’d like to compose – it would be nicer if I could use a smaller render texture, just big enough to fit every element of my hierarchical model, right?
But how am I supposed to know the size of the render target before I actually render it?
Is there any way to figure out the bounding rectangle of a transformed mesh?
(Keep in mind that the model is hierarchical, so there might be multiple meshes translated/rotated/scaled to their proper positions during rendering to make the final result.)
I mean, sure, I could transform all the vertices of my meshes myself to get their world-space / screen-space coordinates and then take their minima / maxima in both directions to get the size of the image required. But isn’t that what vertex shaders were supposed to do, so that I wouldn’t have to calculate that myself on the CPU? (I mean, if I have to transform everything myself anyway, what’s the point of having a vertex shader in the first place? :q )
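Just to be concrete, the CPU-side fallback I’m describing would look roughly like this – a minimal sketch where the node structure, the 3×3 row-major homogeneous matrix layout, and all the names are just my own assumptions for illustration, not any particular engine’s API:

```python
# A toy sketch of walking a 2D transform hierarchy on the CPU and taking
# the min/max of all transformed vertices to get the bounding rectangle.

IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def mat_mul(a, b):
    # Multiply two 3x3 matrices.
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transform_point(m, p):
    # Apply a 3x3 homogeneous transform to a 2D point (assumes m is affine).
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

def bounding_rect(node, parent=IDENTITY):
    # node = (local_transform, vertices, children); returns
    # (min_x, min_y, max_x, max_y) over the whole subtree in world space.
    local, verts, children = node
    world = mat_mul(parent, local)
    points = [transform_point(world, v) for v in verts]
    for child in children:
        x0, y0, x1, y1 = bounding_rect(child, world)
        points += [(x0, y0), (x1, y1)]
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))
```

So, for example, a unit square plus a child copy translated by (2, 0) would give a bounding rectangle of (0, 0, 3, 1) – that’s exactly the kind of result I’d then use as the render texture size.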
It would be nice if I could somehow pass those meshes through the vertex shader first without rasterizing them yet, just to let the vertex shader transform those vertices for me, then read back their min/max extents and create a render texture of that particular size, and only after that actually rasterize and shade into that texture. Is such a thing possible to do, though? If it isn’t, then what would be a better way to do that? Is rendering the entire screen for each composition layer my only option?