My own experience with similar projects suggests that you should take all the pictures you need in one go using a tripod (note that tripods are cheap). The workflow for the projects I’ve done looks as follows.
You take pictures with a tripod and a remote control at the lowest ISO setting available. Use manual focus and check for optimal focus with the zoom feature of live view. The choice of f-number is more complicated in the case of super-resolution. Normally you could safely shoot at f/6: this isn’t so high as to cause unsharpness due to diffraction, while it is high enough to give a decent DoF and reduce the effects of lens imperfections. For super-resolution, however, f/6 puts you well inside the diffraction limit, because the effective pixels you will be working with are much closer together than the physical pixels on the sensor. So you should use a lower f-number; how low depends on the quality of the lens. You will still end up with some unsharpness due to diffraction, but the less of it you get, the less image quality is lost when correcting for it later.
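To make the diffraction argument concrete, here is a small back-of-the-envelope calculation. The 550 nm wavelength and 4.5 µm pixel pitch are illustrative assumptions, not values from my setup; the point is that halving the effective pixel pitch makes the Airy disk large compared to the sampling grid much sooner:

```python
# Back-of-the-envelope check of the diffraction argument above.
# Assumptions (illustrative, not from any particular camera): green
# light at 550 nm and a sensor pixel pitch of 4.5 um; 2x
# super-resolution halves the effective pixel pitch.
WAVELENGTH_UM = 0.55
PIXEL_PITCH_UM = 4.5
EFFECTIVE_PITCH_UM = PIXEL_PITCH_UM / 2  # 2x super-resolution

def airy_diameter_um(f_number, wavelength_um=WAVELENGTH_UM):
    """Diameter of the Airy disk to the first dark ring: 2.44 * lambda * N."""
    return 2.44 * wavelength_um * f_number

for n in (2.8, 4.0, 5.6, 8.0):
    d = airy_diameter_um(n)
    print(f"f/{n}: Airy disk {d:.1f} um "
          f"(physical pitch {PIXEL_PITCH_UM} um, "
          f"effective pitch {EFFECTIVE_PITCH_UM} um)")
```

With these numbers the Airy disk is already several effective pixels wide around f/5.6, which is why the f-number has to come down for super-resolution work.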
Then put an empty memory card of 32 GB or more in your camera (otherwise you would have to swap cards, or download the data to the computer and clear the card, too frequently). For each fixed setting take at least 25 pictures. Then change the exposure, and after that the focus. Finally, point the camera at another part of the object, making sure there is a reasonable amount of overlap for the stitching later.
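A quick estimate shows why the card fills up so fast. All the counts and the per-file size below are illustrative assumptions (25 frames per setting as above, plus made-up numbers of exposures, focus steps, panorama positions, and raw file size):

```python
# Rough estimate of the data volume for one session of this workflow.
# Assumed numbers: 25 frames per setting (from the text); 3 exposures,
# 4 focus steps, 4 panorama positions, 30 MB per raw file (made up).
FRAMES_PER_SETTING = 25
EXPOSURES = 3
FOCUS_STEPS = 4
PANO_POSITIONS = 4
MB_PER_RAW = 30

frames = FRAMES_PER_SETTING * EXPOSURES * FOCUS_STEPS * PANO_POSITIONS
total_gb = frames * MB_PER_RAW / 1024
print(f"{frames} frames, about {total_gb:.0f} GB of raw files")
```

Even these modest assumptions already overflow a 32 GB card, which is why starting with an empty one matters.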
The post-processing workflow looks as follows. Let’s first forget about super-resolution and just consider focus stacking + HDR. Using your raw processor, you convert the raw files to TIFF, with noise reduction turned off. Then you use the align_image_stack program of the Hugin panorama stitcher to create aligned TIFF files for each set of pictures taken at the same settings. Even on a tripod there will still be shifts between frames, typically a fraction of a pixel, but even such small shifts must be eliminated.
There are different choices you can make for the options of align_image_stack. I typically use the following command:
align_image_stack -a al -C -t 0.3 -c 20 im1.tif im2.tif im3.tif....
The -a al argument tells the program to give all the remapped files the prefix al followed by a number. The -C argument crops all the remapped images to the same size. The -t 0.3 option tells the program that matched control points must correspond to within 0.3 pixels. The -c 20 option sets the number of control points to 20. The order in which you type the file names matters in general, but not in this case: the program aligns the images in the order given, and when aligning pictures with different exposures you want to put images with small exposure differences next to each other. Here that doesn’t matter.
You then average over each such set to eliminate the noise; I use ImageMagick for that. Put all the files you need to average over in one directory. The command is then of the form:
convert *.tif -poly "w,1,w,1,w,1,w,1..." av.tif
Here you take w = 1/(number of pictures). The second argument of each pair is the exponent; since we don’t want to raise anything to a power, it is set to 1. You need to give a weight and exponent for each picture, and the output ends up in av.tif.
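The -poly weight string is repetitive and easy to get wrong by hand, so it can be generated. A small sketch (the helper name poly_weights is mine, not part of ImageMagick):

```python
# Build the "-poly" argument for averaging N frames with ImageMagick:
# each frame gets the weight w = 1/N and the exponent 1 (plain average).
def poly_weights(n_frames):
    w = 1.0 / n_frames
    return ",".join(f"{w:.6f},1" for _ in range(n_frames))

# Example for 4 frames:
print(f'convert *.tif -poly "{poly_weights(4)}" av.tif')
```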
With all the averages taken at the different focus settings, you can then do the focus stacking. First you have to align these averages: crop them all to the same size, and then use the command:
align_image_stack -a al -m -z -t 0.3 -c 20 im1.tif im2.tif im3.tif...
In this case you don’t use -C to crop; instead, the -m and -z options are needed to maximize the field of view and to correct for the change in magnification of the individual images due to the different focus settings. Then, using the enfuse program that comes with the Hugin panorama stitcher, you combine the remapped images into an image with extended DoF using the command:
enfuse --exposure-weight=0 --saturation-weight=0 --contrast-weight=1 --hard-mask *.tif
You repeat this for all the different exposures and for the pictures of the different parts of the subject. You then combine the different exposures of the same part by first aligning them and then running enfuse with the default settings (the command is then simply enfuse *.tif). You then have HDR pictures with enhanced DoF for each part, and you combine the parts using the Hugin panorama stitcher.
The workflow for super-resolution requires you to split the images taken at the same settings into groups whose alignment shifts, modulo 1 pixel, agree to within the desired resolution in each direction. So, if we want to double the resolution, we group the pictures according to whether the shift is closer to a half-integer or to an integer in the x and y directions. We then get 4 groups of pictures, each of which is processed as above, except for the HDR processing. The alignment between the averages of the different groups must then be calculated precisely (I use the ImageJ program for that), and via interpolation you shift them to the desired values. You then combine them into a picture with 4 times the number of pixels. A complication here is that the shifts are typically not uniform enough across the frame for a whole picture to fit in any particular group (I usually only use super-resolution for small objects like the Moon that are no more than about 100 pixels across). What you then need to do is cut the picture into small parts and treat them separately.
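The grouping step above can be sketched as follows. The measured sub-pixel shifts are made-up illustration values, and the file names are placeholders; the logic is just the nearest-of-{0, 1/2}-modulo-1 test described in the text:

```python
# Sketch of the grouping step for 2x super-resolution: each frame's
# measured shift (dx, dy) relative to a reference frame is reduced
# modulo 1 pixel, and the frame goes into one of 4 groups depending on
# whether the fractional part is closer to 0 or to 0.5 in each axis.

def group_of(dx, dy):
    """Return a (gx, gy) group label in {0, 1} x {0, 1}."""
    def half(v):
        f = v % 1.0                      # fractional part of the shift
        return 1 if 0.25 <= f < 0.75 else 0
    return (half(dx), half(dy))

# Made-up shifts for illustration:
shifts = {"im1.tif": (0.03, -0.01),   # ~integer shift in both axes
          "im2.tif": (0.52, 0.07),    # ~half-pixel shift in x only
          "im3.tif": (1.46, 2.55),    # ~half-pixel shift in both axes
          "im4.tif": (-0.98, 0.44)}   # ~integer x, ~half-pixel y

groups = {}
for name, (dx, dy) in shifts.items():
    groups.setdefault(group_of(dx, dy), []).append(name)
print(groups)
```

In practice the shifts come out of the alignment step, and, as noted above, they vary across the frame, so this test is applied per small image part rather than per whole picture.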
You’ll then see that the combined super-resolution picture is unsharp. Using deconvolution you can sharpen it. This requires transforming to a linear color space, estimating the point spread function, running a deconvolution algorithm, and then transforming back to sRGB. You can then combine the different exposures using enfuse.
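Here is a minimal sketch of that pipeline on a 1-D signal. The sRGB transfer functions are the standard ones; the point spread function here is a made-up 3-tap blur standing in for one you would estimate from the images, and the deconvolution is a plain Richardson–Lucy iteration:

```python
# Sketch of the deconvolution step on a 1-D signal, in pure Python.

def srgb_to_linear(c):           # c in [0, 1]
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def convolve(signal, kernel):
    """Simple 'same' convolution with clamped edges."""
    k = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - k, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def richardson_lucy(observed, psf, iterations=30):
    """Classic Richardson-Lucy iteration for a symmetric PSF."""
    estimate = observed[:]
    for _ in range(iterations):
        blurred = convolve(estimate, psf)
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        correction = convolve(ratio, psf)  # symmetric PSF, so no flip
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

# A step edge, blurred by the made-up PSF, then restored:
psf = [0.25, 0.5, 0.25]
truth = [0.0] * 8 + [1.0] * 8
observed = convolve(truth, psf)
restored = richardson_lucy(observed, psf)
```

On real images you would run this per channel on the linearized data (via srgb_to_linear / linear_to_srgb) with a measured 2-D PSF; dedicated tools or ImageJ plugins do the same thing at scale.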
Note that instead of running enfuse to do the HDR merging yourself, you can let Hugin do it in one go when it stitches the panorama, although Hugin then has a large number of pictures to process. In principle this should give a better result: Hugin transforms to a linear color space to do its processing, and that part of the computation won’t be accurate if you feed it already HDR-processed pictures. This doesn’t affect the alignment of the panorama, only the final HDR output. In practice, though, I’ve never seen a significant difference between the two methods.