I mainly do astrophotography of deep-sky objects, but I use a Mac and can share what I use, since some of it applies to your needs.
In deep-sky astrophotography, the photographer collects many, many images of a deep-sky object such as a galaxy or nebula, and each image should have more or less the same field of view as all the others (unless you are shooting a mosaic or combining data captured by different cameras or different telescopes). The main thing, though, is that nothing in the pictures moves enough to be noticed over a period of a few hours (most deep-sky objects do not move enough to be noticed over many years).
In landscape astrophotography, the photographer tries to capture both the landscape on Earth and the star field in the sky. The landscape and the sky move relative to each other from second to second, and this creates challenges that are unique to this type of astrophotography.
To do this, you need to record a good, clean "landscape" picture (not worrying about the stars trailing in the sky) and then record many "sky" images for use in stacking. You stack the "sky" part of your data and then recombine it with the "landscape" part to create the final product.
The process of deep-sky astrophotography involves capturing a whole range of data. This includes many regular exposures (often referred to as "light frames" or "lights" for short) as well as several special types of calibration exposures.
The calibration data include bias frames, dark frames and flat frames (if you are unfamiliar with them, I can provide more details … but I'll skip them unless you ask).
During the acquisition sequence, different exposure times are often used. Longer exposures help with faint objects, where more time is needed to collect light. On the other hand, if the exposure times are too long, things like the stars get blown out, and instead of showing color in the stars, only "white" stars are displayed because all three color channels are clipped.
Acquisition software can be used to sequence the data acquisition.
On the Mac, the two programs known to me are (1) Nebulosity and (2) AstroDSLR.
Nebulosity performs both image capture and image processing for deep-sky objects.
AstroDSLR only handles the image acquisition process. It controls the camera to capture the sequence of all the different exposures you may want, and can do so for hours if necessary. AstroDSLR is available through CloudMakers.eu (it is also available on the macOS App Store).
On a Windows PC, I previously used Backyard EOS (which only supports Canon EOS cameras); the same company now also offers Backyard NIKON (which only supports certain Nikon cameras). AstroDSLR is similar, except that it runs on a Mac and supports a wide range of DSLR cameras. The same developer also makes AstroImager, which is intended for dedicated astrophotography cameras with CCD or CMOS sensors, not DSLRs.
Other popular apps that PC astrophotographers use are Sequence Generator Pro (aka SGPro or SGP) and Maxim DL. These are not available on the Mac.
Nebulosity is made by Stark-Labs.com. It runs on a Mac and does both image capture and image processing. However, I find that its image capture is not as comprehensive as AstroDSLR's and its image processing is not nearly as comprehensive as PixInsight's. So instead of using one program that does both tasks, I use individual apps that specialize in each task.
Once you've captured all your data (light frames plus all calibration frames), you can combine them. Nebulosity has some features here, but I prefer another application called PixInsight, which offers a lot more.
PixInsight costs €230. It has a bit of a learning curve (it's a bit like trying to learn Photoshop). It handles the entire stacking workflow (image calibration, registration and integration) and much more. One of the best learning resources for it is a website called IP4AP.com, which offers tutorials. The tutorials are part of a subscription service that costs about $10 per month or $100 per year (but you can sign up for just one month at a time – I do not think you are forced into a long minimum term).
PixInsight is VERY powerful at integration. It's the only application I've ever used where the images to be stacked can come from completely different cameras, different lenses, different telescopes, different rotation angles and different image scales and resolutions, and it can STILL figure out how to stack them. If it cannot automatically figure out how to align the frames (it usually can), you can perform a manual alignment by selecting a common star visible in each frame and then selecting a second common star in each frame. It uses these to determine the scale and the rotation.
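The two-star trick can be sketched with a little complex-number arithmetic. This is just an illustration of the idea, not PixInsight's actual code – the function name and coordinate convention here are my own:

```python
import cmath

def scale_and_rotation(ref_a, ref_b, tgt_a, tgt_b):
    """Given the same two stars located in a reference frame and in a
    target frame (as (x, y) pixel coordinates), return the scale factor
    and rotation angle (degrees) that map the target onto the reference."""
    ref = complex(*ref_b) - complex(*ref_a)   # vector between the two stars (reference)
    tgt = complex(*tgt_b) - complex(*tgt_a)   # same vector in the target frame
    ratio = ref / tgt                         # complex ratio encodes both scale and rotation
    return abs(ratio), cmath.phase(ratio) * 180.0 / cmath.pi

# Target frame shot at half the image scale and rotated relative to the reference:
scale, angle = scale_and_rotation((0, 0), (100, 0), (0, 0), (0, 50))
print(scale, angle)  # scale 2.0, angle ≈ -90°
```

Two matched stars are the minimum needed to pin down scale and rotation together; real registration tools match many stars and solve for the best fit.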
Note, however, that stacking software that aligns frames based on star positions is primarily intended for deep-sky astrophotography, so it does not handle landscapes well. You'll probably have to mask out the landscape and stack just the sky section … and then manually recombine the foreground landscape frame to create the final image.
You may want to purchase a tracking head.
The Earth turns from west to east with an angular rotation of 15 arc-seconds per second of time. This speed is called the sidereal rate. If you take long exposures with a camera on a stationary mount, you may find that the stars start to elongate as a result of this rotation.
The guideline for this is called the 500 Rule. The rule is meant for full-frame cameras (so you need to compensate for other sensor sizes). If you divide 500 by the focal length of your lens, the result is the number of seconds you can expose without noticing elongation in the stars. There are other formulas that give a more accurate value (by working out the field of view for your lens and dividing it by the camera resolution to determine the number of arc-seconds per pixel for the sensor/lens combination), but the 500 Rule is usually sufficient.
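The 500 Rule is simple enough to put in a few lines of code. This is just a sketch of the rule as described above; the crop-factor parameter is the usual way to compensate for smaller sensors (e.g. about 1.5 for APS-C Nikon, 1.6 for APS-C Canon):

```python
def max_exposure_seconds(focal_length_mm, crop_factor=1.0):
    """Longest untracked exposure (seconds) before stars visibly trail,
    per the 500 Rule. crop_factor compensates for sub-full-frame sensors."""
    return 500.0 / (focal_length_mm * crop_factor)

print(max_exposure_seconds(20))                 # 25.0  -- 20 mm lens on full frame
print(round(max_exposure_seconds(20, 1.6), 1))  # 15.6  -- same lens on a 1.6x crop body
```

As the text notes, the arc-seconds-per-pixel approach is more accurate, but this gets you in the ballpark.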
With a tracking head you can make much longer exposures than is possible without tracking. It has a motor that allows it to rotate at the same speed as the Earth … but in the opposite direction.
The result is that if you align the tracking head so that its axis of rotation is parallel to the Earth's axis (an alignment aid is usually included), the rotation of the head will exactly cancel the rotation of the Earth and you can take a very long exposure. You can still point your camera in any direction (it does not have to point at the celestial pole).
The caveat for "landscape" astrophotography is that while this works well for photographing the stars, the landscape will now blur.
Tracking heads typically have multiple speed settings, and one of them is usually a 1/2-sidereal-speed option. This doubles the time you can expose.
In any case, you can use a tracking head to capture longer exposures and gather more data for stacking, then add the foreground of the landscape to create a composite result.
Image integration (also known as "stacking") is not about collecting more light; it is about capturing more samples of the same data. That way the images can be statistically combined to improve the signal-to-noise ratio (SNR) and produce a much cleaner image with significantly reduced noise.
If you have multiple images, you can imagine aligning each image so that the stars match up (this alignment process is called image registration). That is part of the workflow, but a separate step.
There is also a step called image calibration. The calibration step uses the data from the dark, flat and bias frames to convert each light frame into a calibrated light frame.
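The arithmetic behind calibration can be sketched in a few lines. This is a simplified per-pixel illustration (real tools like PixInsight work on full 2-D frames, build the "master" frames by stacking, and match dark exposure times) using the standard recipe: subtract the dark, then divide by a bias-subtracted, normalized flat:

```python
def calibrate_light(light, master_dark, master_flat, master_bias):
    """All inputs are equal-length lists of pixel values standing in for
    frames. Subtract the dark, then divide by the normalized flat."""
    flat = [f - b for f, b in zip(master_flat, master_bias)]  # remove the bias offset
    mean = sum(flat) / len(flat)
    flat = [f / mean for f in flat]       # normalize so the flat only rescales light
    return [(l - d) / f for l, d, f in zip(light, master_dark, flat)]

# Toy 4-pixel "frame": the last pixel sits in a vignetted corner that
# receives half the light. Calibration recovers a uniform sky value.
light = [110.0, 110.0, 110.0, 60.0]
dark  = [10.0] * 4
flat  = [105.0, 105.0, 105.0, 55.0]
bias  = [5.0] * 4
print([round(v, 6) for v in calibrate_light(light, dark, flat, bias)])  # [87.5, 87.5, 87.5, 87.5]
```

The point of the toy example: after calibration, pixels that should record the same sky brightness really do, despite vignetting and thermal signal.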
Once all frames have been calibrated and registered, they can be integrated.
Integration via averaging
The simplest form of integration is via statistical averaging.
For example, suppose you have 10 frames (in reality you will probably have many more). You can imagine comparing a single pixel in one frame with the corresponding pixel in all the other frames. The software can "average" the value of these pixels. If that pixel contained a star, then each image would have a brightness value from that star, and the average across all images would be used for the final result.
Now suppose it is a pixel that is supposed to be the black background of the sky. It should have a very dark brightness value, and hopefully it does in most of your pictures. Even if the occasional frame had a noisy pixel at that location, the final pixel should be fairly dark once you have averaged that pixel across every frame. The Poisson relationship here is that the noise is reduced by the square root of the number of samples. If you have 16 pictures, the noise can be reduced to 1/4 of the noise in a single picture.
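You can see the square-root relationship in a quick simulation. This is a toy model I made up (one pixel, Gaussian noise), not anything a stacking program actually runs, but it demonstrates why 16 frames cut the noise to about a quarter:

```python
import random

random.seed(42)

true_value, sigma = 10.0, 4.0   # "true" pixel brightness and per-frame noise

def noise_after_stacking(n_frames, trials=20000):
    """Simulate many stacks of n_frames noisy samples of the same pixel
    and return the standard deviation of the stacked (averaged) result."""
    stacks = []
    for _ in range(trials):
        frames = [random.gauss(true_value, sigma) for _ in range(n_frames)]
        stacks.append(sum(frames) / n_frames)
    mean = sum(stacks) / trials
    return (sum((s - mean) ** 2 for s in stacks) / trials) ** 0.5

print(noise_after_stacking(1))    # ≈ 4.0 -- a single frame keeps all the noise
print(noise_after_stacking(16))   # ≈ 1.0 -- 16 frames: noise / sqrt(16) = 1/4
```

The same scaling is why deep-sky imagers shoot dozens of lights rather than a handful.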
Integration via sigma clipping
It turns out that you can do better than a simple average. If you have only 2 or 3 samples, you have no choice but to do a simple average. However, if you have enough samples (say, 10 or more), you can run a statistical method called sigma clipping. Sigma clipping is based on the mean and on each sample's deviation from the mean.
Suppose that in just one of your frames, an airplane flew through the photo, leaving a light trail in that image. Averaging can weaken the light trail but not eliminate it. With sigma clipping, you can actually make it disappear completely.
This method is similar to averaging, but it first calculates the statistical mean of a pixel across all the samples. Then a second pass determines how much that pixel in each frame deviates from the mean. You set a threshold: if the pixel in a frame deviates too much, that single pixel is discarded. Basically, it's as if all the frames "vote" on the value of the final pixel. So if we imagine each pixel value as a percentage of brightness, where 0 = completely black and 100 = completely bright, suppose 19 out of 20 images have a value of 10%, and 1 frame reads 100 (where the airplane trail passes through it). This gives a statistical mean of 14.5. Suppose we set a clipping threshold of 10. On pass 2, any pixel that deviates by more than 10 from 14.5 is discarded. This means the pixel that registered 100 is ignored – it is "voted off the island". The pixels that registered 10 are retained. Those are then averaged, and the value 10 results. The airplane trail disappears as if it had never been there. This happens pixel by pixel … so the entire 20th frame is not discarded … only the pixels the plane flew through are discarded, and the rest is kept. It's a wonderful thing.
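The worked example above fits in a few lines of code. Note this sketch uses the absolute threshold from the example for simplicity; real sigma clipping (as in PixInsight) typically sets the threshold as a multiple of the standard deviation:

```python
def clip_and_average(samples, threshold):
    """Average the samples after discarding any that deviate from the
    mean by more than `threshold` (absolute-threshold variant of the
    sigma-clipping idea, matching the worked example in the text)."""
    mean = sum(samples) / len(samples)
    kept = [s for s in samples if abs(s - mean) <= threshold]
    return sum(kept) / len(kept)

# 19 frames read 10% at this pixel; one frame has an airplane trail at 100%.
pixels = [10.0] * 19 + [100.0]
print(sum(pixels) / len(pixels))       # 14.5 -- a plain average is polluted by the trail
print(clip_and_average(pixels, 10.0))  # 10.0 -- the outlier is rejected, trail gone
```

Real integrators also iterate this (recomputing the mean after each rejection pass), but the principle is exactly what the example shows.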
PixInsight is very powerful. It comes with a script called "Batch PreProcessing" that lets you easily combine all of your data: you hand it all the lights, darks, flats and bias frames and let it go … and it ultimately creates a master integrated image.
That said, PixInsight will ALSO let you run each preprocessing step separately. As you get to know the tool better, you can take advantage of this by fine-tuning the individual steps.
For example, I have seen times when an object is low enough in the sky that the atmosphere behaves a bit like a lens and creates atmospheric dispersion (you can think of it as resembling chromatic aberration – except that it is caused by the atmosphere and not by the lens). You get stars with a red fringe on one side and a blue fringe on the other. In PixInsight, I can extract the full-color data into separate channels. I can then use the image registration process (star alignment) to re-register those channels against each other and recombine them into a full-color image … and the dispersion problem disappears.
It's a pretty amazing tool, but it's optimized for astrophotography. (PixInsight is not Mac-specific … it runs on Windows, Linux and Mac).
If you use PixInsight, you usually need at least one other tool (like Photoshop, Affinity Photo, GIMP, etc.), because those let you use the mouse to selectively make adjustments to a specific part of an image. In PixInsight you cannot do that: every adjustment applies either to the entire image or to anything that is not masked off. And only two types of masks are supported: a "star" mask (which, as the name implies, creates a mask based on the stars in the image) or a "range" mask (which selects an area based on a brightness range … typically used to build a mask around objects in the image, such as a nebula or galaxy).