## sharpness – How to calculate MTF? Need help with the formula

I’m trying to reproduce an MTF chart programmatically, but I’m stuck at one point.

I’m starting with a clean bar chart like this:

I then take a picture of this pattern with a smartphone (printed on HD-quality paper: no gloss, no texture, …) and I get something like this (there is a bit of distortion):

I used the following page to understand how MTF is calculated:
https://www.imatest.com/docs/sharpness/

With my script, I scan the vertical lines one by one; for each line I scan the pixels from top to bottom and record the minimum and maximum luminance (I convert each pixel to LAB and take the “L” value).

For the first step (amplitude), I get the following graph for my “perfect” example:

And when I do the same with the picture I took, I get something like this:

So far this seems in line with what I see on the Imatest page, except that my values seem inverted and go above 100, even though I only apply the following formula: C(f) = (Vmax − Vmin) / (Vmax + Vmin). (Note the parentheses: evaluated as written, Vmax − Vmin/Vmax + Vmin means Vmax − (Vmin/Vmax) + Vmin, which can easily exceed 100.)

Now my problem is that I can’t get to the MTF formula: if I apply MTF(f) = 100% × C(f)/C(0), I get exactly the same graph. My maths are rusty and maybe I’m missing something somewhere.
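A minimal Python sketch of the per-column scan described above (the function names and the `f0_columns` parameter, i.e. which columns cover the widest bars and serve as the C(0) reference, are my own assumptions; note that if C(0) is taken to be exactly 1 rather than measured, dividing by it leaves the graph unchanged):

```python
import numpy as np

def michelson_contrast(column):
    """Michelson contrast of one column of L values.
    Note the parentheses: (max - min) / (max + min)."""
    vmax, vmin = column.max(), column.min()
    return (vmax - vmin) / (vmax + vmin)   # assumes vmax + vmin > 0

def mtf_curve(image_l, f0_columns):
    """image_l: 2-D array of L values (rows x columns), scanned column by column.
    f0_columns: indices of the columns covering the widest (lowest-frequency)
    bars; their mean contrast is used as the C(0) reference."""
    contrast = np.array([michelson_contrast(image_l[:, x])
                         for x in range(image_l.shape[1])])
    return 100.0 * contrast / contrast[f0_columns].mean()
```

With this normalization the curve starts near 100% at the widest bars and falls off as the bars get finer, matching an Imatest-style MTF chart.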

What I’m trying to achieve is to give an easy-to-understand sharpness score. MTF is a good starting point, but in the end I would like to give a score such as an average. I know it’s not accurate, because the center is sharper than the edges, but at least it would allow ranking sharpness in a more or less neutral way.
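One widely used single-number summary is MTF50: the spatial frequency at which contrast falls to 50% of its low-frequency value (higher means sharper). A hedged sketch, assuming the MTF curve is already available as arrays; the function name and the linear interpolation are my choices, not a standard API:

```python
import numpy as np

def mtf50(frequencies, mtf_percent):
    """Spatial frequency at which the MTF curve first drops below 50%."""
    m = np.asarray(mtf_percent, dtype=float)
    below = np.where(m < 50.0)[0]
    if below.size == 0:
        return frequencies[-1]       # never drops below 50% in the measured range
    i = below[0]
    if i == 0:
        return frequencies[0]
    # linear interpolation between the two samples that bracket 50%
    f1, f2 = frequencies[i - 1], frequencies[i]
    m1, m2 = m[i - 1], m[i]
    return f1 + (50.0 - m1) * (f2 - f1) / (m2 - m1)
```

A single score like this ranks devices on the same chart fairly; averaging MTF over the field (center and edges) is another common variant.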

Any idea on how I can build the MTF chart or how I could assign a final score?

Thanks

## How to technically measure sharpness?

I’m trying to measure sharpness with a script so that I can compare it across different devices.

My methodology:

• I have a pattern with pure white and black stripes like this:
• I use HD printing, matte, no reflection or texture
• I take a picture in exactly the same conditions

When I take the picture with a device, I then analyse it pixel by pixel. Of course the picture will never be perfect, but as the testing conditions will always be the same, I should be able to compare sharpness.

So what I measure:

• the number of pixels that are pure white (usually close to 0)
• the number of pixels that are pure black (usually close to 0)
• the number of grey pixels
• the number of pixels of every other color

For each of these measurements, I have the number of unique colors (in LAB format) as well as the total pixel count for each type of color.
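The counting described above can be sketched like this (a minimal sketch: the 95/5 L thresholds for “pure” white/black, the 0.1-step rounding used to count “unique” greys, and the use of L alone, ignoring chroma, are my assumptions, not part of the original methodology):

```python
import numpy as np

WHITE_L, BLACK_L = 95.0, 5.0   # assumed thresholds for "pure" white / black

def classify_l(l_values):
    """Count pixel categories from a flat array of LAB L values (0..100)."""
    l = np.asarray(l_values, dtype=float)
    white = int((l >= WHITE_L).sum())
    black = int((l <= BLACK_L).sum())
    greys = l[(l > BLACK_L) & (l < WHITE_L)]
    return {
        "white": white,
        "black": black,
        "grey": int(greys.size),
        "dark_greys": int((greys < 50.0).sum()),
        "light_greys": int((greys >= 50.0).sum()),
        # fewer distinct greys = sharper edges, per the observation above
        "distinct_greys": int(np.unique(np.round(greys, 1)).size),
    }
```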

From what I see with my eyes and what the pixels are saying, I see some common trends but I also see different directions I could take.

Some discoveries:

• the number of distinct greys seems to give a good indication about sharpness (less distinct greys = more sharpness)
• the difference between light greys (L in LAB > 50) and dark greys (L < 50) also seems to hint at sharpness/contrast: when the difference between the two is big, sharpness is better

Do you have ideas about which criteria I should use to measure sharpness?

Thanks

## sharpness – Why are my new camera’s photos coming out blurrier than my iPhone’s when I look at them zoomed in?

I am a beginner at photography and recently got a camera (a Nikon D5600). I watched a few videos on how to use it, but I am still finding it difficult to operate. I have several questions, but the main one is: how do I make pictures sharper?

I have an iPhone X which takes better pictures: when I zoom in, I can see the image clearly. However, when I zoom in on a photo from my camera, everything is kind of blurry. The photos I’m using for comparison are the ones from the SnapBridge app.

Does anyone have any tips on how to learn the best way to take pictures, or at least how to make my pictures higher resolution?

This is the image from my iPhone:

This is the image from my D5600 and 18-55 kit lens @28mm:

Settings:
M mode, ISO 800, 1/200s, f/13, shooting RAW+jpeg.

## optics – Why do mirrors give less sharpness, gamut, and contrast than lenses?

Mirrors are better than lenses in that they are inherently free of chromatic aberrations, and are reflective over very wide spectral bandwidths. For these reasons, they are very attractive design tools. The downside is that the image and object are on the same side of the mirror, which makes things complicated. Additionally, adding more mirrors to correct geometrical aberrations gets in the way of the existing mirrors, so telescopes must always contain few elements.

There are many telescope designs. The simplest is the Newtonian telescope – a spherical primary mirror with a flat mirror in the barrel, known as a fold mirror. The fold mirror folds the image into a place where it is accessible to an observer with an eyepiece or a camera sensor.

The Newtonian telescope is not corrected for any aberrations, so it can only be used at moderate apertures with extremely narrow fields of view; f/10 and about a 500mm focal length is the feasible ceiling.

By making the primary mirror parabolic, one creates a “modern Newtonian” which is completely corrected for spherical aberration. As long as the field of view is small, the speed can be increased to f/4, or even f/3 or f/2 for very narrow fields of view.

Such a design is still limited by coma, astigmatism, and field curvature across the field of view.

In the single-mirror class there is also the Schmidt telescope, which uses a spherical primary mirror and an aspheric corrector plate at the center of curvature of the mirror. By placing the aperture stop at the center of curvature, the design is inherently corrected for coma and astigmatism, and the asphere removes spherical aberration. The result is a telescope that only has field curvature and spherochromatism (a variation in the amount of spherical aberration with color), due to the glass used to make the corrector plate. This can be reduced by using a low-dispersion material such as calcium fluoride, but that is usually not necessary unless the telescope is extremely fast (faster than f/2).

Unfortunately, because the center of curvature of a mirror is at two times its focal length, these telescopes are very long, despite their extremely high image quality.

Moving to two mirrors, there is the Ritchey–Chrétien (RC) telescope, which is corrected for spherical aberration as well as low-order coma. Hubble is the most prominent example of an RC telescope, and the majority of scientific telescopes in use today are RC designs.

The RC form is not corrected for higher-order coma, which becomes significant at large apertures (faster than about f/3), is not well corrected for field curvature, and suffers from extreme higher-order astigmatism. As a result, the form is very strongly limited in field of view. Still, over narrow fields of view the image quality is superb.

The final step in telescopes is the TMA, or three-mirror anastigmat. TMAs are corrected for spherical aberration, coma, and astigmatism, leaving only field curvature, which is considered the fundamental problem of lens design, as it is the only aberration with no zero condition. The James Webb Space Telescope is a TMA, and a good example of how the name has lost some meaning: JWST’s primary camera is a 5-mirror design, and NIRCAM adds a further 9(!) mirrors, but we still consider the design to be a TMA.

TMAs are used when large fields of view are desired. The JWST is both slow and has a narrow field of view, but due to its roughly 131 m focal length, its geometrical aberrations are inherently far larger than e.g. Hubble’s, as they scale with focal length.

Where does all this sit with the mirror lenses you can buy for your camera? Those lenses are all catadioptric telescopes, using both mirrors and lenses. These systems combine the issues of both reflective and refractive systems: obscuration and chromatic aberration, respectively.

Most mirror camera lenses are Maksutov designs, which use a meniscus lens and a spherical mirror. Neither of these corrects spherical aberration on its own, but they contribute it with opposite signs if the meniscus lens is negative. Meniscus lenses are also used to correct field curvature and, when away from the aperture stop (which is usually the primary mirror in these lenses), coma as well. The result is a design which, in theory, should provide good performance over a decent field of view if used at small apertures.

So where’s the problem? At the beginning of this answer I mentioned the issue of obscuration. A Maksutov camera lens still features a secondary mirror to reflect the image into the camera body, and this produces an obscuration. Obscurations strongly impact the low and mid spatial frequencies, resulting in low-contrast images.
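The mid-frequency contrast loss from an obscuration can be illustrated numerically. This is only a sketch (the grid size, pupil radius, and 35% obscuration ratio are arbitrary illustrative choices), computing the diffraction MTF as the magnitude of the Fourier transform of the PSF:

```python
import numpy as np

N, R = 256, 32                      # grid size and pupil radius in samples
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
rr = np.hypot(x, y)

def mtf_cut(pupil):
    """1-D cut of the diffraction MTF: PSF = |FFT(pupil)|^2, MTF = |FFT(PSF)|."""
    psf = np.abs(np.fft.fft2(pupil)) ** 2
    otf = np.abs(np.fft.fft2(psf))
    return otf[0, :N // 2] / otf[0, 0]   # normalized to 1 at zero frequency

clear = (rr <= R).astype(float)                          # unobscured aperture
obscured = ((rr <= R) & (rr > 0.35 * R)).astype(float)   # 35% central obscuration

mtf_clear, mtf_obsc = mtf_cut(clear), mtf_cut(obscured)
mid = R   # roughly half the cutoff frequency (the cutoff is at index 2R here)
# mtf_obsc[mid] < mtf_clear[mid]: the obscured aperture transmits less
# contrast at mid spatial frequencies, which is the visible contrast loss.
```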

Additionally, these designs are somewhat alignment-sensitive compared to a standard camera lens. Nearly all of these lenses are sold by lower-priced third parties; it is possible they are nearly all misaligned enough to visibly impact the images.

The meniscus lens is also not very good for stray light when imaging distant objects: it makes objects closer to the camera appear further away, and they will form images on the detector as well, albeit out-of-focus ones. The result is a further loss of contrast due to veiling glare.

## Slow shutter speed and image sharpness

I’m new to photography, I hope I won’t misuse any terms here.

• Let’s say I shoot a still subject, like a product, on a tripod with controlled lighting.
• Let’s say I have little light, so I use a slow shutter speed, say 3 s.

Even if everything is still and constant will this shutter speed affect image sharpness?

In other words, if I were to increase the light and use a correspondingly faster shutter speed to get the same exposure, could my resulting image be sharper?

## photoshop – Reduce image sharpness and DPI

It doesn’t.

Well, let’s say it doesn’t if you only change the DPI without re-rendering the image.

DPI is not very meaningful for image files, at least until you print them.
So unless you change the pixel dimensions of an image, changing the DPI will do nothing in Photoshop other than attach this value to the image. You can test this by disabling `resample image`.

However, if you increase the DPI while the dimensions are set to a physical unit such as mm, cm, or inches, then you are changing the pixel dimensions of the image.

This causes the image to be re-rendered at the new dimensions. As long as the new pixel count is an integer multiple (e.g. 2×) of the original size, this is fairly easy, but that is seldom the case. So Photoshop has to interpolate the image; the interpolation method can be chosen via `Resample image`, and each option has its own merits.

This step can reduce sharpness just due to the resampling.

The secondary effect is that sharpness is more or less another word for contrast between pixels of differing brightness. After resampling, an edge that formerly transitioned from black to white over 4 pixels might now take 8 pixels, as the resolution was doubled.

Seen from far away the sharpness is the same, but if you zoom in, it looks less sharp.
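That edge widening can be shown with a toy example; this is only a sketch (the ramp length, the 2× upsampling factor, and the thresholds are arbitrary, and plain linear interpolation stands in for Photoshop’s resampling options):

```python
import numpy as np

# A black-to-white edge that ramps over a few pixels...
edge = np.concatenate([np.zeros(8), np.linspace(0.0, 1.0, 6)[1:-1], np.ones(8)])

# ...upsampled 2x with linear interpolation.
up = np.interp(np.arange(0, edge.size - 0.5, 0.5), np.arange(edge.size), edge)

def transition_width(v, lo=0.05, hi=0.95):
    """Number of samples strictly inside the dark-to-light transition."""
    return int(((v > lo) & (v < hi)).sum())

# transition_width(up) is roughly double transition_width(edge): the same
# physical edge is now spread over about twice as many pixels.
```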

For that reason it is sometimes necessary to re-sharpen the image after changing the pixel resolution.

## sharpness – Is it worth investing in a used 35 mm Film camera?

It is certainly not worth investing in a 35mm film camera for the perceived higher resolution, additional color, or sharpness.

To get such results you will likely have to invest in, or at least have access to, a drum scanner, which gives you the highest resolution possible right now. Otherwise you will likely be scanning on a flatbed that almost certainly does not come close to the resolution of the Canon 60D.

Are you trying to print 24×30 inch prints at 360 dpi? Sure, grab a 35mm camera and a \$20,000 drum scanner, and you may be able to achieve resolution that would benefit images of this size. You also might not.

It sounds like the main issue is that you usually end up cropping your images. If that is the case, you either need to frame the subject better before you take the image, or invest in longer-reaching lenses.

## canon – Picture looking sharp in Live View mode, then losing sharpness after the picture has been taken

Recently I have been experiencing an issue with the sharpness of my images.

I am using a Canon 70D with a macro lens for shooting jewellery, and I need a very clear picture to capture the brilliance of the crystals. That’s why I use a tripod and tether the camera to a computer, triggering the shot from the software; my camera is as still as it gets when taking a photo.

What happens is that the crystals of the jewellery look sharp in Live View mode, but when I look at the picture after it has been shot, sharpness is lost to a certain extent.

This is how the image looks before taking the picture (in live view mode):
https://imgur.com/9nSD7W1

This is how it looks afterwards:
https://i.imgur.com/xubtioq

I took these pictures with my smartphone, capturing the screen of the camera. If you look closely you will see the difference. It probably won’t be noticeable in everyday pictures, but for my job it is an issue.

Does anybody have an idea what the problem is here? I recently upgraded the firmware to 1.1.3 (Canon 70D); of course that shouldn’t be an issue, but I figured I’d mention it.

Thank you!

## Sword of Sharpness’ damage maximization only applies when attacking objects and only to weapon damage dice

### Sword of Sharpness only applies to objects

This is clear from the Sword of Sharpness description which states (emphasis mine):

When you attack an object with this magic sword and hit, maximize your weapon damage dice against the target. (…)

Thus, any feature that only triggers when attacking a creature cannot even apply here. Sneak Attack is one such feature.

The feature, for reference, states the following (emphasis mine):

(…) you can deal an extra 1d6 damage to one creature you hit (…)

Because Sneak Attack only works on creatures and the Sword of Sharpness’ maximization only works on objects, the two cannot occur at the same time. (Probably)…

### Sword of Sharpness only applies to weapon damage dice

This, again, is clear from the description (emphasis mine):

When you attack an object with this magic sword and hit, maximize your weapon damage dice against the target. (…)

Because of this, features that add damage to an attack that do not count as adding weapon damage dice have no interaction with a Sword of Sharpness. We have some questions related to this:

The first two are about features which explicitly apply to weapon damage dice (like the Sword of Sharpness), and the last one is about a feature that explicitly adds to your weapon damage dice. Most features do not explicitly add to your weapon damage dice; thus they do not count as weapon damage dice and are not maximized by a Sword of Sharpness.

The Sage Advice Compendium states the following:

Q. If you use Great Weapon Fighting with a feature like Divine Smite or a spell like hex, do you get to reroll any 1 or 2 you roll for the extra damage?

A. The Great Weapon Fighting feature—which is shared by fighters and paladins—is meant to benefit only the damage roll of the weapon used with the feature. For example, if you use a greatsword with the feature, you can reroll any 1 or 2 you roll on the weapon’s 2d6. If you’re a paladin and use Divine Smite with the greatsword, Great Weapon Fighting doesn’t let you reroll a 1 or 2 that you roll for the damage of Divine Smite.

From this we can see that features like Divine Smite, which add extra damage but do not explicitly add extra weapon damage dice, do not count as weapon damage dice for the purposes of things like the Sword of Sharpness. The Cavalier fighter’s Unwavering Mark feature is, as far as I’m aware, one of very few features which explicitly adds to your weapon damage dice.