optics – Why does the “fisheye effect” look stronger or weaker with different scenes taken with the same lens?

The fisheye “effect” depends only on the angle between the subject and the camera’s optical axis; it is thus totally independent of distance.

What you might be noticing is that a fisheye lens bends all straight lines unless they pass through the exact centre of the image.

In some natural scenes the horizon will be the only straight line in the image, so if you happen to get the horizon dead level and running through the centre of the frame there can be few cues that the image was taken with a fisheye lens. If the next shot has a lower horizon the effect will be very obvious.

optics – Telephoto lens for camera whose lens cannot be removed

It is theoretically possible, but not advisable. The lens manufacturer will have spent a lot of time and effort working out which lens groups to match so that you get the best optical design at that focal length and price point. Anything you place in front of the lens will lower the light transmission, and most likely also lower the image quality.
You could place hyper-expensive optical elements in front of the camera, but unless those elements were manufactured specifically for your lens, to correct its minor faults, you will still come out with lower IQ (image quality, not intelligence quotient) than before.

I once tried one such attachment (on a smartphone). It was a waste of money. The reason you can get away with using a fisheye adapter is that fisheye lenses cause so much distortion and change the perspective so exaggeratedly that you don’t generally notice the reduction in IQ.

Adding a lens or other optical elements to the front of your lens to make it a zoom or to increase the focal length will just be a waste of time. Trust me, I experimented with doing just that. When I was a student who had just gotten into photography and was strapped for cash, I tried various combinations of multiple lens designs (mounting a lens on another lens, using a focusing screen and magnifying that image with another lens, using extension tubes on the second lens to magnify the image circle of the first lens, and a whole lot of other Dexter’s Laboratory stuff) to gain a focal length advantage.

IT JUST DOESN’T WORK.

optics – How do I calculate f/stop change for teleconverter that increases lens magnification?

The job of the camera lens is to project an image of the outside world onto the surface of the film or digital sensor. The image size of an object (magnification) is determined by the actual size of the object combined with its distance from the camera and the focal length of the lens used. If you increase the focal length of the camera lens, the projection distance is also increased. This results in an image that displays greater magnification. As an example, if you increase the distance from a slide or movie projector to the screen and re-focus, the image projected on the screen is enlarged.

Peter Barlow, English mathematician and optician, invented an achromatic (free of color error) supplemental lens that increased the magnification of telescopes in 1833. The Barlow lens design is the one used in modern teleconverters.

Such supplemental lenses increase the versatility of our camera lenses. Commonly they double or nearly double the focal length: a 2X teleconverter doubles the focal length, which results in 2X greater magnification.

This increased magnification comes at a price. Along with the increased image size comes a reduction in the intensity of the projected image. To calculate the impact of this magnification gain on image brightness, we square the magnification gain. Thus for a 2X teleconverter the math is 2 × 2 = 4. Taking the reciprocal of this reduction factor tells us how much light gets through: a factor of 4 means the amount of light reaching the film or image chip is 1/4, or 25%, of what it was before.

Now the f-number system we use is based on factor-of-2 changes in light: each one-stop change doubles or halves the exposing energy. So the number of f-stops of reduction is the number of times the reduction factor can be halved down to 1, i.e. log base 2 of the factor. In this case, a 2X increase in magnification gives a reduction factor of 2 × 2 = 4, and log₂(4) = 2, so the working f-number is 2 f-stops slower and we open up 2 f-stops to compensate. Go left on the f-number set below.

The f-number set:
1 – 1.4 – 2 – 2.8 – 4 – 5.6 – 8 – 11 – 16 – 22

Thus if the lens is set to f/8 and we add a 2X teleconverter, we open up two f-stops to f/4 to keep the same exposure. Also note: each f-number in the set is its left-hand neighbor multiplied by the square root of 2 ≈ 1.4.

Let me add that understanding this reduction factor also helps when figuring out exposure with filters (filter factor). The factor is a multiplier applied to the exposure time: if the factor is 4, we multiply the exposure time by 4 to find the compensating exposure time.

Suppose the exposure without filter or teleconverter is 1/400 of a second at f/8. We mount a filter or teleconverter with a factor of 4.

The revised exposure time is 4 × 1/400 = 4/400 = 1/100 second @ f/8,
or 1/400 second @ f/4.
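
If you prefer to let a computer do the arithmetic, the whole calculation fits in a few lines. This is only a sketch of the reasoning above; the function name and parameters are illustrative, not from the text.

```python
import math

def teleconverter_compensation(tc_factor, f_number, shutter_s):
    """Stops lost to a teleconverter, plus two equivalent ways to compensate.

    tc_factor : focal-length multiplier of the converter (e.g. 2 for a 2X TC)
    f_number  : marked f-number before adding the converter
    shutter_s : shutter time in seconds before adding the converter
    """
    light_reduction = tc_factor ** 2          # a 2X converter passes 1/4 of the light
    stops_lost = math.log2(light_reduction)   # log2(4) = 2 stops
    # Option 1: keep the shutter time and open the aperture by stops_lost
    # (each stop divides the f-number by the square root of 2).
    compensated_f = f_number / 2 ** (stops_lost / 2)
    # Option 2: keep the aperture and multiply the shutter time by the factor.
    compensated_shutter = shutter_s * light_reduction
    return stops_lost, compensated_f, compensated_shutter

# The example from the text: 1/400 s at f/8 with a 2X teleconverter
stops, new_f, new_t = teleconverter_compensation(2, 8, 1 / 400)
print(stops, new_f, new_t)   # 2.0 stops, f/4.0, 0.01 s (= 1/100 s)
```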

optics – Calculating f/stop change for a teleconverter that increases lens size

With a rear-mount converter, it’s always the factor of 2 (unless there’s a gross mismatch between lens and converter), but with your front-mount one, it’s not that simple.

The relevant question is where the light path is effectively limited.

Or, to put it another way: Does all the light collected in your converter’s front lens reach the sensor, or is some part of it blocked by the front opening of your base lens? If part is blocked, then the large converter front lens doesn’t help and is just a waste of material.

As a quick check, you can detach the combo from the camera body and look into it from the rear side, a few centimeters behind the lens, roughly where you’d expect the sensor. Do that with the aperture fully open. If you can fully see the circular edge of the converter’s front lens, then the 40mm-based calculation is indeed valid (all the light collected on that 40mm circle reaches the sensor). If you can’t, that means the 30mm opening of the base lens is still the limiting factor, and the 30mm calculation will probably give better results.

And, if you want to do some experiments, compare the exposure times for wide-open shots with and without the converter (in a constant-lighting situation, of course). If there’s a factor of 4 between the two times, the classical calculation applies; if it’s a factor of about 2, your 40mm-based calculation is correct.

Having said all that, I bet that you won’t be able to see the converter front lens edge, and that the exposure-time experiment will result in a factor of 4.
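
For what it’s worth, here is a very rough sketch of the two candidate calculations in code. The 30 mm and 40 mm openings are the ones discussed above; the 2× factor and the 100 mm base focal length are made-up illustration values, and the model ignores pupil magnification, so treat the numbers as ballpark only.

```python
def effective_f_number(base_focal_mm, converter_factor, limiting_opening_mm):
    """Very rough f-number of the combo: combined focal length divided by the
    diameter of whichever opening actually limits the light path."""
    combined_focal_mm = base_focal_mm * converter_factor
    return combined_focal_mm / limiting_opening_mm

# Hypothetical base lens: 100 mm focal length, used with a 2x front converter.
# 40 mm-based calculation (converter front lens is the limiting element):
print(effective_f_number(100, 2, 40))   # 200 / 40 = f/5.0
# 30 mm-based calculation (base lens front opening is still the limit):
print(effective_f_number(100, 2, 30))   # 200 / 30 ≈ f/6.7
```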

optics – In what way does the lens mount limit the maximum possible aperture of a lens?

There are two hard limits on how fast a lens can be:

The first is a thermodynamic limit. If you could make a lens arbitrarily fast, then you could point it at the sun and use it to heat your sensor (not a good idea). If you could thereby get your sensor hotter than the surface of the Sun, you would be violating the second law of thermodynamics.

This sets a hard limit at f/0.5, which can be derived from the conservation of etendue. Well, technically it’s more like T/0.5. You can make lenses with f-numbers smaller than 0.5, but they will not be as fast as their f-numbers suggest: either they will work only at macro distances (with “effective” f-numbers larger than 0.5), or they will be so aberrated as to be useless for photography (like some lenses used to focus laser beams, which can only reliably focus a point at infinity on axis).

The second limit is the mount. It limits the angle of the light cone hitting the sensor. Your trick of using a diverging element does not work: you certainly get a wider entrance pupil, but you also end up with a lens combination that has a longer focal length than the initial lens. Actually, your trick is very popular: it’s called a “telephoto” design. Bigger lens, same f-number.

If the lens mount allows for a maximum angle α for the light cone, then the fastest lens you can get will have an f-number equal to

N = 1/(2×sin(α/2))

or, equivalently, N = 1/(2×NA), where NA is the numerical aperture. This formula also shows the hard limit at 0.5: sin(α/2) cannot be larger than 1. Oh, BTW, if you try to derive this formula using small-angle approximations, you will get a tangent instead of a sine. Small-angle approximations are not good for very fast lenses: you should use the Abbe sine condition instead.

The same caveat about f-numbers v.s. T-numbers applies to this second limit. You can get a lens with an f-number smaller than 1/(2×sin(α/2)), but it will work as macro-only, and the bellows-corrected f-number will still be larger than the limit.
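
As a back-of-the-envelope sketch of that mount limit: if you approximate the light cone as being cut off by the mount throat seen from the sensor at the flange distance, you can plug the geometry straight into N = 1/(2×sin(α/2)). The throat and flange numbers below are purely illustrative (not any specific real mount), and real lenses can shift the exit pupil, so this only gives a ballpark figure.

```python
import math

def mount_limited_f_number(throat_diameter_mm, flange_distance_mm):
    """Ballpark fastest f-number a mount allows, via N = 1 / (2 sin(alpha/2)).

    Crude model: the light cone is assumed to be limited by the mount throat
    as seen from the sensor at the flange distance.
    """
    half_angle = math.atan2(throat_diameter_mm / 2, flange_distance_mm)  # alpha/2
    return 1 / (2 * math.sin(half_angle))

# Illustrative geometries only:
print(round(mount_limited_f_number(44, 46.5), 2))  # long flange, narrow cone -> ~f/1.2
print(round(mount_limited_f_number(54, 16.0), 2))  # short flange, wide cone  -> ~f/0.6
```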

Derivation

This section, added on Nov. 26, is intended for the mathematically inclined. Feel free to ignore it, as the relevant results are already stated above.

Here I assume that we use a lossless lens (i.e. it conserves luminance) to focus the light of an object of uniform luminance L into an image plane. The lens is surrounded by air (index 1), and we look at the light falling on an infinitesimal area dS about, and perpendicular to, the optical axis. This light lies inside a cone of opening α. We want to compute the illuminance delivered by the lens on dS.

In the figure below, the marginal rays, in green, define the light cone with opening α, while the chief rays, in red, define the target area dS.

diagram of lens
(source: edgar-bonet.org)

The etendue of the light beam illuminating dS is

dG = dS ∫ cosθ dω

where dω is an infinitesimal solid angle, and the integral is over θ ∈ (0, α/2). The integral can be computed as

dG = dS ∫ 2π cosθ sinθ dθ
      = dS ∫ π d(sin²θ)
      = dS π sin²(α/2)

The illuminance at the image plane is then

I = L dG / dS = L π sin²(α/2)

We may now define the “speed” of the lens as its ability to provide image-plane illuminance for a given object luminance, i.e.

speed = I / L = dG / dS = π sin²(α/2)

It is worth noting that this result is quite general, as it does not rely on any assumptions about the imaging qualities of the lens, whether it is focused, aberrated, its optical formula, focal length, f-number, subject distance, etc.

Now I add some extra assumptions that are useful for having a meaningful notion of f-number: I assume that this is a good imaging lens of focal length f, f-number N and entrance pupil diameter p = f/N. The object is at infinity and the image plane is the focal plane. Then, the infinitesimal area dS on the image plane is conjugated with an infinitesimal portion of the object having a solid-angular size dΩ = dS/f².

Given that the area of the entrance pupil is πp²/4, the etendue can be computed on the object side as

dG = dΩ π p² / 4
      = dS π p² / (4 f²)
      = dS π / (4 N²)

And thus, the speed of the lens is

speed = π / (4 N²)

Equating this with the speed computed on the image side yields

N = 1 / (2 sin(α/2))

I should insist here on the fact that the last assumptions I made (the lens is a proper imaging lens focused at infinity) are only needed for relating the speed to the f-number. They are not needed for relating the speed to sin(α/2). Thus, there is always a hard limit on how fast a lens can be, whereas the f-number is only limited insofar as it is a meaningful way of measuring the lens’ speed.
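
For the mathematically inclined but lazy, here is a quick numerical sanity check of the derivation (my own, not part of the original argument): it integrates the etendue for an arbitrary 60° cone and compares the result with the two closed-form expressions above.

```python
import numpy as np

alpha = np.radians(60.0)   # arbitrary cone opening for the check

# dG/dS = ∫ 2π cosθ sinθ dθ over θ ∈ (0, α/2), evaluated as a midpoint sum.
n = 100_000
dtheta = (alpha / 2) / n
theta = (np.arange(n) + 0.5) * dtheta
speed_numeric = np.sum(2 * np.pi * np.cos(theta) * np.sin(theta)) * dtheta

speed_closed_form = np.pi * np.sin(alpha / 2) ** 2   # π sin²(α/2)
N = 1 / (2 * np.sin(alpha / 2))                      # N = 1/(2 sin(α/2)), here exactly 1
speed_from_f_number = np.pi / (4 * N ** 2)           # π / (4 N²)

print(speed_numeric, speed_closed_form, speed_from_f_number)   # all three ≈ 0.785
```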

physics – Tensor algebra in nonlinear optics

In nonlinear optics, the polarization is written in tensor form as

$$ P = \varepsilon_0 \left(
\chi^{(1)} E
+ \chi^{(2)} E^2
+ \chi^{(3)} E^3
+ \dots
\right)$$

where $\chi^{(n)}$ is a tensor of rank n+1 and P and E are vectors (with 3 elements). In all the textbooks (for example New’s Introduction to Nonlinear Optics, Eq. 1.9, p. 15), the explicit expression is also given:

$$ P_i = \varepsilon_0 \left(
\sum_j \chi^{(1)}_{ij} E_j
+ \sum_{jk} \chi^{(2)}_{ijk} E_j E_k
+ \sum_{jkl} \chi^{(3)}_{ijkl} E_j E_k E_l
\right).$$

I am trying to understand how to get from the tensor form to the explicit form. The summation seems easy to generalize, but it would be nice to understand why it is written like this. I didn’t find a single textbook or course explaining what exactly is going on between the tensor and the vectors. Specifically: in which order are the elements supposed to be multiplied, and by which kind of product? And in addition, what is the correct way to write the tensor form to make it mathematically unambiguous?

I’m not familiar at all with tensor algebra, but I pieced the following together:

  • $\chi^{(1)} E$ is equivalent to a normal matrix × vector multiplication.

  • The multiplication probably starts from the left, since textbooks also separate the last $E$ to define $\chi = \chi^{(1)} + \chi^{(2)} E + \chi^{(3)} E^2$.

  • Sometimes (this lecture, on wikipedia) the equation is written as $ P = \varepsilon_0 \left(
    \chi^{(1)} \cdot E
    + \chi^{(2)} : E^2
    + \chi^{(3)} \vdots E^3
    + \dots
    \right)$. I found some references to the $:$ symbol (this course, page 3) but none to the $\vdots$ one. Those represent dot, double dot and (I guess) triple dot products, which give a result with respectively 2, 4 and (I guess) 6 ranks less than the initial values.

  • At this point my first guess was that one should first take the outer product of all the $E$ to get a tensor of rank n, then take the “n-th dot product” of $\chi^{(n)}$ with the new tensor to obtain a vector.

  • Tensor algebra writes products differently than what I know from vectors and matrices: $ab$ without any sign is an outer product (at least for one matrix and one vector, not sure if it’s the case in general), and $a \cdot b$ means to first do the outer product, and then take the contraction of the result.

  • Answers to this question about the same equation say that we should contract $\chi^{(n)}$ with $E$, then contract the result with the next $E$, etc. But the equation would be ambiguous even written as $\left((\chi^{(n)} \cdot E)\cdot E\right)\cdot E$ because it doesn’t say which indices to contract, hence my question about the correct way to write the tensor form.

  • So now my understanding is that one should successively take the outer product $\chi^{(n)} \otimes E$, which raises the rank by one, then contract it (but how do I know along which dimensions?), which always removes 2, so each outer-product-plus-contraction with $E$ lowers the rank by one and at the end only a vector is left. But I have trouble figuring out what $:$ and $\vdots$ mean in this context.

So that’s it. I’d appreciate it if anyone could shine some light on the kind of products going on there, whether it’s $\left((\chi^{(3)} \cdot E)\cdot E\right)\cdot E$, or $\chi^{(3)} \vdots \left(E \otimes E \otimes E\right)$, or maybe something else.
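
Not an answer, but a numerical way to see what is consistent: the sketch below (my own, not from any textbook) checks with numpy that the explicit sum for the $\chi^{(2)}$ term agrees both with “contract the last index of $\chi$ with $E$, twice” and with “double-dot $\chi$ with the outer product $E \otimes E$”, as long as the contractions run over the trailing indices of $\chi$.

```python
import numpy as np

rng = np.random.default_rng(0)
chi2 = rng.normal(size=(3, 3, 3))   # chi^(2): a rank-3 tensor
E = rng.normal(size=3)

# Explicit double sum, as in the textbook formula: P_i = sum_jk chi2_ijk E_j E_k
P_explicit = np.einsum('ijk,j,k->i', chi2, E, E)

# Successive contractions: contract the last index of chi2 with E, then again.
P_successive = (chi2 @ E) @ E        # (3,3,3)·(3,) -> (3,3), then (3,3)·(3,) -> (3,)

# "Double dot" of chi2 with the outer product E ⊗ E (contract the last two indices).
P_double_dot = np.einsum('ijk,jk->i', chi2, np.outer(E, E))

print(np.allclose(P_explicit, P_successive),
      np.allclose(P_explicit, P_double_dot))   # True True
```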

optics – How does the human eye compare to modern cameras and lenses?

Let me throw a question back at you: What is the bitrate and bit depth of a vinyl record?

Cameras are devices designed to reproduce, as faithfully as possible, the image that is projected onto their CCD. A human eye is an evolved organ whose purpose is simply to enhance survival. It is quite complex and often behaves counter-intuitively. The two have very few similarities:

  • An optical structure for focusing light
  • A receptive membrane to detect projected light

The photoreceptors of the retina

The eye itself is not remarkable. We have millions of photoreceptors, but they provide redundant (and ambiguous at the same time!) inputs to our brain. The rod photoreceptors are highly sensitive to light (especially on the blueish side of the spectrum), and can detect a single photon. In darkness, they operate quite well in a mode called scotopic vision. As it gets brighter, such as during twilight, the cone cells begin to wake up. Cone cells require around 100 photons at minimum to detect light. At this brightness, both rod cells and cone cells are active, in a mode called mesopic vision. Rod cells provide a small amount of color information at this time. As it gets brighter, rod cells saturate, and can no longer function as light detectors. This is called photopic vision, and only cone cells will function.

Biological materials are surprisingly reflective. If nothing was done, light that passes through our photoreceptors and hits the back of the eye would reflect at an angle, creating a distorted image. This is solved by the final layer of cells in the retina which absorb light using melanin. In animals that require great night vision, this layer is intentionally reflective, so photons which miss photoreceptors have a chance to hit them on their way back. This is why cats have reflective retinas!

Another difference between a camera and the eye is where the sensors are located. In a camera, they are located immediately in the path of light. In the eye, everything is backwards. The retinal circuitry is between the light and the photoreceptors, so photons must pass through a layer of all sorts of cells, and blood vessels, before finally hitting a rod or cone. This can distort light slightly. Luckily, our eyes automatically calibrate themselves, so we’re not stuck staring at a world with bright red blood vessels jetting back and forth!

The center of the eye is where all the high-resolution reception takes place, with the periphery progressively getting less and less sensitive to detail and more and more colorblind (though more sensitive to small amounts of light and movement). Our brain deals with this by rapidly moving our eyes around in a very sophisticated pattern to allow us to get the maximum detail from the world. A camera is actually similar, but rather than using a muscle, it samples each CCD receptor in turn in a rapid scan pattern. This scan is far, far faster than our saccadic movement, but it is also limited to only one pixel at a time. The human eye is slower (and the scanning is not progressive and exhaustive), but it can take in a lot more at once.

Preprocessing done in the retina

The retina itself actually does quite a lot of preprocessing. The physical layout of the cells is designed to process and extract the most relevant information.

While each pixel in a camera has a 1:1 mapping to the digital pixel being stored (for a lossless image, at least), the rods and cones in our retina behave differently. A single “pixel” is actually a ring of photoreceptors called a receptive field. To understand this, a basic understanding of the circuitry of the retina is required:

retinal circuitry

The main components are the photoreceptors, each of which connects to a single bipolar cell, which in turn connects to a ganglion cell that reaches through the optic nerve to the brain. A ganglion cell receives input from multiple bipolar cells, arranged in a ring called a center-surround receptive field. The center of the ring and the surround of the ring behave as opposites: light activating the center excites the ganglion cell, whereas light activating the surround inhibits it (an on-center, off-surround field). There are also ganglion cells for which this is reversed (off-center, on-surround).

receptive fields

This arrangement sharply improves edge detection and contrast, sacrificing acuity in the process. However, overlap between receptive fields (a single photoreceptor can act as input to multiple ganglion cells) allows the brain to extrapolate what it is seeing. This means that the information heading to the brain is already highly encoded, to the point where a brain-computer interface connecting directly to the optic nerve is unable to produce anything we can recognize. It is encoded this way because, as others have mentioned, our brain provides amazing post-processing capabilities. Since this isn’t directly related to the eye, I won’t elaborate on it much. The basics are that the brain detects individual lines (edges), then their lengths, then their direction of movement, each in subsequently deeper areas of the cortex, until it is all put together by the ventral stream and the dorsal stream, which process high-resolution color and motion, respectively.

edge contrast
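
The center-surround arrangement is commonly modeled as a difference of Gaussians. As a rough illustration (my own sketch; the kernel widths and the toy image are arbitrary), filtering an image with such a kernel responds strongly at edges and barely at all over uniformly lit areas:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_response(image, center_sigma=1.0, surround_sigma=3.0):
    """Difference-of-Gaussians model of an on-center, off-surround receptive field.

    The narrow Gaussian plays the role of the excitatory center and the wide
    one the inhibitory surround; their difference emphasizes edges and cancels
    out over regions of uniform brightness.
    """
    image = image.astype(float)
    return gaussian_filter(image, center_sigma) - gaussian_filter(image, surround_sigma)

# Toy example: a dark-to-bright step edge
image = np.zeros((64, 64))
image[:, 32:] = 1.0
response = center_surround_response(image)
print(response[32, 28:36].round(3))   # swings around the edge, ~0 away from it
```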

The fovea centralis is the center of the retina and, as others have pointed out, is where most of our acuity comes from. It contains only cone cells and, unlike the rest of the retina, does have a 1:1 mapping to what we see: a single cone photoreceptor connects to a single bipolar cell, which connects to a single ganglion cell.

The specs of the eye

The eye is not designed to be a camera, so there is no way to answer many of these questions in a way you may like.

What’s the effective resolution?

In a camera, accuracy is fairly uniform. The periphery is just as good as the center, so it makes sense to measure a camera by its absolute resolution. The eye, on the other hand, is not only not a rectangle, but different parts of it see with different accuracy. Instead of resolution, eyes are most often measured in visual acuity (VA). 20/20 VA is average; 20/200 VA makes you legally blind. Another measurement is LogMAR, but it is less common.

Field of view?

When taking into account both eyes, we have a 210 degree horizontal field of view, and a 150 degree vertical field of view. 115 degrees in the horizontal plane are capable of binocular vision. However, only 6 degrees provides us with high-resolution vision.

Maximum (and minimum) aperture?

Typically, the pupil is about 4 mm in diameter. Its full range is roughly 2 mm (f/8.3) to 8 mm (f/2.1). Unlike a camera, we cannot manually control the aperture to adjust things like exposure; a small ganglion behind the eye, the ciliary ganglion, automatically adjusts the pupil based on ambient light.
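
The f-numbers quoted above are just the eye’s focal length divided by the pupil diameter. A tiny sketch, assuming the commonly quoted effective focal length of roughly 17 mm (my assumption, not stated in the answer):

```python
# f-number = focal length / aperture (pupil) diameter.
# The ~17 mm effective focal length of the eye is an assumed, commonly quoted
# figure; it is not given in the answer above.
EYE_FOCAL_LENGTH_MM = 16.7

for pupil_mm in (2.0, 4.0, 8.0):
    print(f"pupil {pupil_mm:.0f} mm -> f/{EYE_FOCAL_LENGTH_MM / pupil_mm:.1f}")
# pupil 2 mm -> f/8.3
# pupil 4 mm -> f/4.2
# pupil 8 mm -> f/2.1
```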

ISO equivalence?

You can’t directly measure this, as we have two photoreceptor types, each with different sensitivity. At a minimum, we are able to detect a single photon (though that does not guarantee that a photon hitting our retina will hit a rod cell). Additionally, we do not gain anything by staring at something for 10 seconds, so extra exposure means little to us. As a result, ISO is not a good measurement for this purpose.

An in-the-ballpark estimate from astrophotographers seems to be 500-1000 ISO, with daylight ISO being as low as 1. But again, this is not a good measurement to apply to the eye.

Dynamic range?

The dynamic range of the eye itself is dynamic, as different factors come into play for scotopic, mesopic, and photopic vision. This seems to be explored well in How does the dynamic range of the human eye compare to that of digital cameras?.

Do we have anything that is equivalent to shutter speed?

The human eye is more like a video camera: it takes in everything at once, processes it, and sends it to the brain. The closest equivalent it has to shutter speed (or FPS) is the CFF, or critical flicker fusion frequency, also called the flicker fusion rate. This is defined as the point at which an intermittent light of increasing temporal frequency blends into a single, steady light. The CFF is higher in our periphery (which is why you can sometimes see the flicker of old fluorescent bulbs only if you look at them indirectly), and it is higher when it is bright. In bright light, our visual system has a CFF of around 60 Hz. In darkness, it can get as low as 10 Hz.

This isn’t the whole story though, because much of this is caused by visual persistence in the brain. The eye itself has a higher CFF (while I can’t find a source right now, I seem to remember it being on the order of magnitude of 100), but our brain blurs things together to decrease processing load and to give us more time to analyze a transient stimulus.

Trying to compare a camera and the eye

Eyes and cameras have completely different purposes, even if they seem to superficially do the same thing. Cameras are intentionally built around assumptions that make certain kinds of measurement easy, whereas no such plan came into play for the evolution of the eye.

internet – Comcast upgraded my neighborhood cables to fiber optics and now I’m getting 5.5 GBPS upload speeds over WIFI. What is going on?

Link to speed test screenshot

A few weeks ago Comcast upgraded my neighborhood cable infrastructure from coax copper to fiber optics. Since then my network connectivity has not been great… Initially throughput was about 10% of max speeds both down and up. Tech support did some modifications on their end and download speeds are now in line with expectations about 70% of the time. I was also experiencing exceptionally weird upload speeds: they typically spike to about 2-3 gigabits per second, crash, and then reset to 40 megabits per second. I’ve been able to replicate this on multiple devices too. It’s also been a challenge working remotely because my servers/services have no clue what’s going on with my wonky connection and drop my file uploads like it’s hot.

Now here is where things get really weird. After ascending the many tiers of tech support to try and resolve my problems, they did some more work to my building’s (multi-unit dwelling consisting of about 20 condos) physical network yesterday afternoon and it seems my problems are more pronounced. Last night I got 5.5 GBPS upload speeds over WIFI.

But does anyone have any clue as to what might be the problem? All I want is for my network connection to be steady and consistent. I’m not pointing fingers; something very weird is going on here. I’m just hoping someone has some idea of what to do next, because I’m at a loss as to what’s going on. Comcast also hasn’t gotten back to me, since my most recent update was outside of working hours last night.

Here are some extra details:

  • Primary computer: Surface Book 3 (802.11ax capable, everything up to date)
  • Modem/Router: Netgear CAX80 (802.11ax capable, DOCSIS 3.1, running the latest firmware, remotely rebooted at least 20 times and counting)
  • Internet plan: Comcast 1.2 GBPS downloads, 40 MBPS uploads

optics – Why do mirrors give less sharpness, gamut, and contrast than lenses?

Mirrors are better than lenses in that they are inherently free of chromatic aberrations and are reflective over very wide spectral bandwidths. For these reasons they are very attractive design tools. The downside is that the image and the object are on the same side of the mirror, which makes things complicated. Additionally, adding more mirrors to correct geometrical aberrations means they get in the way of the existing mirrors, so telescopes must always contain few elements.

There are many telescope designs. The simplest is the Newtonian telescope – a spherical primary mirror with a flat mirror in the barrel, known as a fold mirror. The fold mirror folds the image into a place where it is accessible to an observer with an eyepiece or a camera sensor.

The Newtonian telescope is not corrected for any aberrations, so it must only be used at moderate apertures with extremely narrow fields of view; f/10 and about a 500mm focal length is the feasible ceiling.

By making the primary mirror parabolic, one creates a “modern Newtonian” that is completely corrected for spherical aberration. As long as the field of view is small, the speed can be increased to f/4, or even f/3 or f/2 for very narrow fields of view.

Such a design is still limited by coma, astigmatism, and field curvature across its field of view.

In the single-mirror class there is also the Schmidt telescope, which uses a spherical primary mirror and an aspheric window at the center of curvature of the mirror. By placing the aperture stop at the center of curvature, the design is inherently corrected for coma and astigmatism, and the asphere removes spherical aberration. The result is a telescope that has only field curvature and spherochromatism, a variation in the amount of spherical aberration with color, due to the glass used to make the corrector plate. This can be reduced by using a low-dispersion material such as calcium fluoride, but that is usually not necessary unless the telescope is extremely fast (faster than f/2).

Unfortunately, because the center of curvature of a mirror is at two times its focal length, these telescopes are very long, despite their extremely high image quality.

Moving to two mirrors, there is the Ritchey-Chrétien (RC) telescope, which is corrected for spherical aberration as well as low-order coma. Hubble is the most prominent example of an RC telescope, and the majority of scientific telescopes in use today are RC designs.

The RC form is not corrected for higher-order coma, which becomes significant at large apertures (faster than about f/3), is not well corrected for field curvature, and suffers from severe high-order astigmatism. As a result the form is very strongly limited in field of view. Still, over narrow fields of view, the image quality is superb.

The final step in telescopes is the TMA, or three-mirror anastigmat. TMAs are corrected for spherical aberration, coma, and astigmatism, leaving only field curvature, which is considered the fundamental problem of lens design, as it is the only aberration with no zero condition. The James Webb Space Telescope is a TMA, and a good example of how the name has lost some of its meaning: JWST’s primary camera is a 5-mirror design, and NIRCam adds a further 9(!) mirrors, but we still consider the design to be a TMA.

TMAs are used when large fields of view are desired. The JWST is both slow and has a narrow field of view, but due to its roughly 130 m focal length, the geometrical aberrations are inherently far larger than e.g. Hubble’s, as they scale with focal length.

Where does all this sit with the mirror lenses you can buy for your camera? Those lenses are all catadioptric telescopes, using both mirrors and lenses. These systems combine the issues of both reflective and refractive systems: obscuration and chromatic aberration, respectively.

Most mirror camera lenses are Maksutov designs, which use a meniscus lens and a spherical mirror. Neither of these corrects spherical aberration on its own, but they contribute it with opposite signs if the meniscus lens is negative. Meniscus lenses are also used to correct field curvature and, when placed away from the aperture stop (which is usually the primary mirror in these lenses), coma as well. The result is a design which, in theory, should provide good performance over a decent field of view if used at small apertures.

So where’s the problem? At the beginning of this answer I mentioned the issue of obscuration. A Maksutov camera lens still features a secondary mirror to reflect the image into the camera body. This produces an obscuration. Obscurations strongly depress the low and mid spatial frequencies, resulting in images with low contrast.
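
To make that concrete, here is a rough numerical sketch (mine, not from the answer): the diffraction MTF of an aberration-free system is the normalized autocorrelation of its pupil, and comparing a clear circular pupil with an annular one shows the contrast loss at low and mid spatial frequencies. The 35% linear obscuration is an arbitrary, merely plausible value.

```python
import numpy as np

def mtf_cross_section(obscuration_ratio, n=512):
    """Radial cut of the diffraction MTF for a circular pupil with a central hole."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r = np.hypot(x, y)
    # Pupil of radius 0.5 grid units, so its autocorrelation fits inside the grid.
    pupil = ((r <= 0.5) & (r >= 0.5 * obscuration_ratio)).astype(float)
    # Incoherent OTF = autocorrelation of the pupil, computed here via FFTs.
    otf = np.fft.fftshift(np.abs(np.fft.ifft2(np.abs(np.fft.fft2(pupil)) ** 2)))
    otf /= otf.max()
    return otf[n // 2, n // 2:]          # index 0 = zero frequency, last = cutoff

clear = mtf_cross_section(0.0)
obscured = mtf_cross_section(0.35)
mid = len(clear) // 4                    # a mid-ish spatial frequency (1/4 of cutoff)
print(round(float(clear[mid]), 2), round(float(obscured[mid]), 2))  # obscured is lower
```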

Additionally, these designs are somewhat alignment-sensitive compared to a standard camera lens. Nearly all of these lenses are sold by lower-priced third parties; it is possible they are nearly all misaligned badly enough to visibly impact the images.

The meniscus lens is also not very good for stray light when imaging distant objects: it makes objects closer to the camera appear further away, and they will form images on the detector as well, albeit out-of-focus ones. The result is a further loss of contrast due to veiling glare.