Comparing images from full frame against APS-C at the same (long) focal length

About me

As an enthusiast photographer, I am currently working with a Canon 80D with the following lenses:

  • Canon EF-S 15-85 mm f/3.5-5.6 IS USM
  • Canon EF 70-300 mm f/4-5.6L IS USM

Only a small fraction of pictures are taken with other lenses.

I am thinking about switching to Canon's new mirrorless RF lineup, preferably the Canon R6. Of course, I am well aware of the consequences: in that case, I should at least replace the lower end lens with an appropriate EF or RF one – probably the RF 24-70mm f/2.8 or the RF 24-105mm f/4.

Problem

Today I evaluated the EXIF data of all my photos to find out about my past shots. It turns out that around 25 % of my images were taken at a focal length of 300mm, many of them while travelling or observing animals (mostly in zoos). Indeed, I appreciate the quality of the EF lens and the long focal length.


Will I be disappointed after a switch to a full frame camera because I will not get the same framing at 300mm? Or will the image quality be so much better that I will not worry about the loss of pixels?

Assuming Canon's crop factor of 1.6 and the 20 MP Canon R6, cropping to the same framing as the 80D divides the pixel count by 1.6² ≈ 2.56, so I would be left with roughly 7.8 MP compared to the 24 MP the Canon 80D uses for that same image area.
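
For reference, a short Python sketch of that arithmetic (the megapixel figures are the nominal ones for each body):

  # Cropping a full-frame image to APS-C framing divides the pixel count by
  # the square of the crop factor, not by the crop factor itself.
  crop_factor = 1.6
  r6_mp = 20.0    # Canon R6, full frame
  d80_mp = 24.0   # Canon 80D, APS-C

  cropped_mp = r6_mp / crop_factor**2
  print(f"R6 cropped to APS-C framing: ~{cropped_mp:.1f} MP vs {d80_mp} MP on the 80D")
  # -> ~7.8 MP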

Notes

I know I could solve this problem by buying an additional lens with a longer focal length or by buying the R5, which has more pixels.

I am also aware that this is not a classical Q&A question, but I would love to hear some input from other photographers about this dilemma.

plotting – How to properly write multiline text with LaTeX symbols in a frame and with a background

I want to include a number of text boxes with a white background inside a plot. The text inside each box will contain LaTeX expressions as well as multiple lines.

Is there any way to include multiline text (including LaTeX symbols) inside a plot?

In addition, I want to set the frame and background color for the box.
I am currently using Epilog and Prolog to include such text.

  ListLinePlot[
   Table[{k, PDF[BinomialDistribution[50, p], k]}, {p, {0.3, 0.5, 0.8}}, {k, 0, 50}],
   Filling -> Axis,
   FillingStyle -> Automatic,
   (*Option-1*)
   Epilog -> Text[Style[ToExpression["\text{E}_{x} \n text2", TeXForm, HoldForm], Bold], {30, 0.13}],
   (*Option-2*)
   Prolog -> {Inset[Framed["E_{x}\ntext2", RoundingRadius -> 5, Background -> White], {45, 0.13}]}
   (*Option-3*)
   (*Prolog -> {Inset[Framed[MaTeX["E_{x}\n text2"], RoundingRadius -> 5, Background -> White], {45, 0.13}]}*)
   ]

Option-1 fails to render E_{x} and also fails to break the line.
Option-2 fails to render E_{x} but does break the line.
Option-3 is the one I want, but it fails to break the line.

Is there any way to achieve this?

Moreover, I also wanted to ask: what should one do if one has to include, say, more than two text boxes?

screen – On an Android phone with an OLED screen, do the graphics look a little “frame by frame”?

I have had both iPhones and Android phones, and when I play Pokemon Go I notice that the phones with LCD and LED screens show smooth animations, but on phones with OLED screens, the Pokemon standing in front of the player (when you can throw the Pokeball towards them) appear choppy.

The animation begins to look like 15 frames per second, or maybe even 10, while on the LCD and LED phones it looks like 30 or even 60 frames per second, smooth to the point that I can’t tell whether it is 30 or 60 fps.

With movies I have noticed something similar: I can’t see frame-by-frame lip movements on an LCD or LED display, but on OLED I actually can see the individual frames.

I am wondering whether it is because OLED does not dim down as fast as LCD or LED can.

Is that the overall experience on Android phones with OLED too?

computer networks – Calculating minimum frame size

I’m quite stuck on a question I have for an assignment, and I’d appreciate any help with this:

Consider a CSMA/CD network with maximum cable length of 8km and where the ratio of propagation speed to bandwidth (i.e. propagation speed/bandwidth) is 10 meters per bit. What is the minimum frame size in bytes?

I know that the minimum frame size must satisfy S >= 2 * B * (L / v), i.e. twice the bandwidth times the one-way propagation delay, where B is the bandwidth, L the cable length, and v the propagation speed, but I can’t quite get it done.
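
For reference, here is the arithmetic spelled out as a short Python sketch, assuming the standard CSMA/CD condition that the frame transmission time must be at least twice the one-way propagation delay:

  # The question gives the ratio v/B directly as 10 meters per bit, so
  # S >= 2*B*L/v = 2*L / (v/B).
  cable_length_m = 8_000   # maximum cable length L
  metres_per_bit = 10      # v / B from the question

  min_frame_bits = 2 * cable_length_m / metres_per_bit
  print(min_frame_bits, "bits =", min_frame_bits / 8, "bytes")  # 1600.0 bits = 200.0 bytes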

python – How can the game graphics be stabilised when hundreds of calculations are taking place after each frame?

I am building a very basic 2D game purely using the pygame library in Python 3.x. The “game object” is blitted onto the screen in a loop, and after each pass some calculations take place in the background, for example to test for collisions. As the game gets more complex with more features, the calculations take longer, so the “game object” is drawn once, disappears for a split second while the calculations run, and appears again on the next pass; in other words, it flickers. How can I tackle this problem? I cannot move ahead with more complex calculations for other features without first avoiding the flickering caused by the extra calculation time.
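
For illustration, here is a minimal sketch of the usual pygame loop structure that avoids this kind of flicker: run all game logic first, redraw the whole frame, then flip the display exactly once per pass. The player rectangle and the commented-out update_game_state helper are placeholders, not code from the original project.

  import pygame

  pygame.init()
  screen = pygame.display.set_mode((640, 480))
  clock = pygame.time.Clock()

  player = pygame.Rect(300, 220, 40, 40)  # placeholder "game object"

  running = True
  while running:
      for event in pygame.event.get():
          if event.type == pygame.QUIT:
              running = False

      # 1. Run all game logic first (collision tests, AI, physics, ...).
      #    Nothing is drawn to the screen while this happens.
      # update_game_state(player)  # hypothetical helper for the calculations

      # 2. Redraw the whole frame from scratch into the back buffer.
      screen.fill((0, 0, 0))
      pygame.draw.rect(screen, (200, 50, 50), player)

      # 3. Show the finished frame exactly once per pass; the screen never
      #    displays a half-drawn state, so nothing flickers.
      pygame.display.flip()

      # 4. Cap the frame rate so timing stays consistent as the logic grows.
      clock.tick(60)

  pygame.quit()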

dnd 5e – Is there an absolute rest frame in Forgotten Realms or D&D cosmology?

Frame Challenge: The rules of D&D are not a physics simulator.

This question goes beyond what the rules are concerned with, and beyond what is necessary for adjudicating the rules. If there are any edge cases that actually depend on a substantive answer to this question (there aren’t), it will be entirely up to the DM.

That said, I use a reference frame argument in this answer about casting tiny hut upside down, but the calculus employed there is largely unnecessary most of the time (it was probably unnecessary there too).

I recall reading a thread on 4chan where a DM decided that an immovable rod had a fixed position with respect to all reference frames:

A while ago I got my hands on an Immovable Rod. I placed it in the air and told it to stay. The GM asks whether I’m standing to the east or west of the rod. I say west. The GM states that I die. The rod shot forward at the same speed the earth revolves around the sun, as the rod is stuck in a universal stillpoint and the earth moved away from under it. The rod also cut its way through a large area of woods before being ejected into space.

lens – APS-C lenses on full frame Mirrorless bodies

Since the adapter moves the lenses further from the sensor, I’d imagine that the coverage of the lens would be larger, and that the adapter or body alters focus to compensate. Is this thinking correct?

No.

The entire point of the EF→RF adapter is to place an EF or EF-S lens at exactly the same distance from the sensor when used with an RF mount camera as the lens is placed when used with an EF mount camera.

The adapter moves the lens away from the camera so that the lens is the same distance away from the sensor as it would be when mounted on a camera for which it is designed. The image circle at the sensor is the same size whether an EF-S lens is used with an APS-C EF mount camera or with an RF mount camera + EF→RF adapter. The EF-S lens will always converge focused light 44mm behind the flange ring.

The design registration distance (sometimes colloquially referred to as the flange focal distance) for EF-S lenses is 44mm. This is the distance from the sensor to the flange on EF mount cameras, including all FF, APS-H, and APS-C models.

EF and EF-S lenses are designed to focus the light they project 44mm behind the lens flange ring.

The design registration distance for RF cameras and lenses is 20mm. This is the distance from the sensor to the flange on all RF mount cameras.

RF lenses are designed to focus the light they project 20mm behind the lens flange ring.

The EF→RF adapter is 24mm thick. When placed on an RF mount camera it provides a flange 44mm in front of the sensor on which an EF or EF-S lens can be mounted. The light projected by the EF-S lens will then come into focus 44mm behind the lens flange, just as it would when the lens is mounted on an EF mount camera.

Your intuition is partly correct, though, in a reverse sort of way. If it were possible to mount the EF-S lens closer than 44mm from the sensor of an RF mount camera, the image circle would be smaller than it would be at 44mm behind the lens’ flange ring. Of course, in such a case if the lens were focused at infinity the sensor would be too close to the lens and the entire image would be too blurry.

To enlarge the image circle of an EF-S lens beyond the size it projects onto the sensor of an EF mount camera, the lens would either need to be moved even further forward than the 24mm the EF→RF adapter provides, or magnifying optical elements would need to be placed between the lens and the EF→RF adapter. It would be exactly the same as using extension rings or a teleconverter/extender on an EF mount camera, except the extension rings or teleconverter/extender would need to be placed in front of the EF→RF adapter when using an EF-S lens on an RF camera.

I guess it depends upon the specific lens, but does anyone know of a list of EF-S lenses that provide full sensor coverage?

With some zoom lenses the projected image circle enlarges as the lens is zoomed to longer focal lengths. If one uses such an APS-C only zoom lens on a full frame (FF) camera, at the longer focal lengths the image circle might expand enough to fill the FF sensor. There would be no guarantee about how the image quality would hold up in the areas outside the APS-C frame, though, since the lens was not designed to use those portions of the enlarged image circle when creating images with an APS-C camera.

Though not an APS-C lens, the Canon EF 8-15mm f/4 L Fisheye is a lens that has an expanding image circle as it is zoomed. We’ll use it as an illustration.

  • At 8mm, the entire image circle is enclosed within a FF sensor.
  • At 10mm, the image circle is large enough to cover an APS-C sensor.
  • At 12mm, the image circle is large enough to cover a (now defunct) APS-H sensor.
  • At 15mm, the image circle is large enough to cover a FF sensor.


Based on the way the image circle of the EF 8-15mm f/4 L Fisheye works out, one might be able to predict that an EF-S lens without a baffle at the rear (which would crop the expanding part of the image circle by blocking that light from passing through the back of the lens) would need to be zoomed to about 2X the lens’ focal length at its widest angle of view. An 18-55mm lens, for example, may need to be zoomed to about 35mm or longer to expand the image circle enough to cover the FF sensor. Lenses that use a rectilinear, rather than fisheye, projection may or may not follow the same ratio, though.

Any EF-S lens that zooms in this way might be used with a FF camera when zoomed to the longer part of its range of focal lengths. That’s assuming the camera would allow one to shoot in FF instead of “crop” mode when an EF-S lens is attached to the EF→RF adapter. When an EF-S lens is adapted to an RF camera, the camera automatically crops the image to APS-C dimensions in the center of the sensor. I’m not sure if any of the RF mount cameras have a menu item that allows the user to override that. The menu item that allows for cropping third party APS-C only EF mount lenses¹ may or may not allow for telling the camera to use a Canon EF-S lens in FF mode.

¹ Every third party EF mount APS-C only lens I’ve seen has a standard EF mount, rather than including the extra tab used on Canon EF-S lenses that prevents them from being mounted on Canon FF cameras. The third party APS-C lenses will mount on FF Canon EF cameras, but of course the image circle will not be large enough to cover the full 36mm x 24mm sensor. Apparently there is no electronic communication from the third party lens informing the camera that it has a smaller than FF sized image circle.

lens – APS-C lenses on full frame bodies

I was wondering about the use of lenses designed for smaller sensors on full frame bodies. This originally came up in a discussion of using Canon EF-S lenses, via the adapter, on full frame bodies with RF mounts. Since the adapter moves the lenses further from the sensor, I’d imagine that the coverage of the lens would be larger, and that the adapter or body alters focus to compensate. Is this thinking correct?

wi fi – Understanding the 802.11 frame structure: are Android probe requests different?

I have read a lot here and on Google about 802.11 frames, but something doesn’t add up.

Can it be that there is a difference between Android 8 and Android 5.1? Is the probe request not the same, or does it send some random MAC?

When I “sniff” an Android 5.1 device, I can see it “on the air”, but when I “sniff” an Android 8 device, I don’t see it.

(I know the MAC addresses of both devices I’m searching for.)
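
For illustration, here is a minimal Python/Scapy sketch, assuming a monitor-mode interface named wlan0mon, that prints probe-request source MACs and flags locally administered (i.e. likely randomized) addresses:

  from scapy.all import sniff
  from scapy.layers.dot11 import Dot11, Dot11ProbeReq

  def handle(pkt):
      if pkt.haslayer(Dot11ProbeReq):
          mac = pkt[Dot11].addr2  # transmitter address of the probe request
          if mac is None:
              return
          # Randomized MACs set the "locally administered" bit (bit 1 of the
          # first octet), so the sniffed address will not match the device's
          # hardware MAC.
          locally_administered = bool(int(mac.split(":")[0], 16) & 0x02)
          print(f"{mac}  randomized={locally_administered}")

  # "wlan0mon" is an assumed monitor-mode interface name; adjust as needed.
  sniff(iface="wlan0mon", prn=handle, store=False)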

Thanks.

astrophotography – How do I keep it from moving out of the frame so fast when taking a video of Jupiter?

Tracking

To directly answer your question, you are not doing anything wrong per se and the effect is normal. But ordinarily the telescope is equipped with a motor and tracks the object being imaged to prevent this from happening.

The Earth rotates on its axis from West to East at 15.04 arc-seconds per second. If a telescope were mounted such that its axis of rotation is parallel to Earth’s axis of rotation and if it rotates from East to West at 15.04 arc-seconds per second, then the telescope mount will cancel the effect of Earth’s rotation and the telescope will remain fixed on the same section of sky.
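
As a quick sanity check of that figure, one full rotation of the sky per sidereal day works out as follows (a small Python sketch using the standard ~86164-second sidereal day):

  SIDEREAL_DAY_S = 86_164.1            # length of a sidereal day in seconds
  rate = 360 * 3600 / SIDEREAL_DAY_S   # arc-seconds of sky rotation per second
  print(f"{rate:.2f} arcsec/s")        # ~15.04 arcsec/s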

This is normally achieved by using a telescope on an equatorial mount (commonly a German Equatorial Mount – aka GEM). Telescopes that use altitude/azimuth style mounts can often be fitted with a polar wedge, which tilts the azimuth axis of the mount so that its axis is parallel to Earth’s axis. These mounts are motorized to track at the sidereal rate (15.04 arc-seconds per second).

You didn’t mention what telescope or mount you are using but based on your description of Jupiter quickly exiting your field of view, I’m assuming it isn’t motorized or tracking. You may be able to capture some useful data anyway. More on that later (see Working with what you have below).

Seeing Conditions

The atmosphere acts like a lens in that it bends light passing through it. Mixing of warm and cool air, or winds aloft (such as the jet stream), also creates a lot of turbulence. This results in constant distortions … like attempting to view a coin at the bottom of a fountain or pool … through waves. If the waves were to stop, the view of the coin would become very clear.

For best results, attempt to image on nights when the upper atmosphere is calm. Ideally this would mean you are at least 300km away from any warm front, cold front, or the jet stream.

These atmospheric effects also result in the “twinkling” effect (called atmospheric scintillation) you see in stars — especially stars located lower toward the horizon (because you are looking through more air-mass to view those stars).

Your geographic location will have an impact as well. Views over massive bodies of water with on-shore winds (e.g. viewing across the flat ocean) tends to reduce turbulence in the air and provide steadier viewing conditions.

With all of that … the clarity of the planets will come and go in fractions of a second.

Here’s an example by Damien Peach: Exceptionally Poor Seeing Conditions

Here’s my own example: Jupiter, Ganymede, & Seeing Conditions

Lucky Imaging

The idea behind Lucky Imaging is that, for brief fractions of a second, you’ll get clearer images where the differences between light and dark regions of the planet have better contrast. If you grab enough frames, then there’s a chance that just a few of them will be better quality images and you can reject the rest.

The best way to get a lot of frames in a hurry is to use video (but you do want a format that does not compress the video frames; ideally .SER or .AVI format).

I typically grab about 30 seconds worth of video. Ideally the camera should have a very high frame rate (hopefully not less than 60 frames per second). The video is processed via software such as Registax or AutoStakkert (both are free planetary stacking programs). These programs will analyze each frame of video … looking for those frames with the best contrast. Most of the frames are rejected; you might tell the software to use only the best 5% of frames … or even less. This eliminates all the frames where the details were not very good due to the effects of the atmosphere. This is what is meant by lucky imaging … just keeping the best frames where you got lucky … and rejecting the rest. You don’t need very many good frames.
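
As an illustration of that selection step (not the actual Registax or AutoStakkert code), here is a Python sketch that scores each frame of a capture with a simple sharpness metric and keeps only the best few percent; the file name jupiter.avi and the 5% cut-off are placeholders:

  import cv2
  import numpy as np

  cap = cv2.VideoCapture("jupiter.avi")   # placeholder capture file
  scored = []
  while True:
      ok, frame = cap.read()
      if not ok:
          break
      gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
      sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # higher = more detail
      scored.append((sharpness, frame))
  cap.release()

  # Keep roughly the best 5% of frames, rejecting the rest.
  scored.sort(key=lambda t: t[0], reverse=True)
  keep = [f for _, f in scored[: max(1, len(scored) // 20)]]

  # Naive stack: average the kept frames. Real stacking tools also align
  # each frame on the planet's disk before combining.
  stack = np.mean([f.astype(np.float32) for f in keep], axis=0).astype(np.uint8)
  cv2.imwrite("jupiter_stacked.png", stack)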

I have several imaging cameras and do not use the same camera for long exposure imaging as I would for planetary imaging. For planetary imaging, a web-cam style camera works well … provided it has a high enough frame rate.

A camera such as a ZWO ASI120MC-S is a good entry level planetary imaging camera. With deeper pockets, there are cameras with even higher frame rates, greater sensitivity, etc. The chip need not be very big because the planets are tiny … so most of the frame is just the blackness of space (stars will not be visible in planetary imaging because the exposures are too short).

The sample image below was shot using a ZWO ASI174MM-Cool. This is a monochrome camera and to achieve color I have to capture at least 3 videos … one with a red filter, one with a green filter, and one with a blue filter. But I recommend using a full-color camera rather than a monochrome camera because it is easier. Anyway, the camera was capturing 128 frames per second for 30 seconds in each color. Only the best frames were kept (most frames are rejected) and each color channel was combined to create the single color result.

Jupiter

I should mention… this camera was using a Celestron C14 telescope… this is a 14″ (356mm) aperture f/11 telescope with a 3910mm focal length. Your telescope will not show nearly this much detail. Ideally I should have used at least a 2x to 2.5x barlow to increase the focal ratio to f/22 or f/27.5. The very best planetary images are captured in the f/30-f/50 range (no kidding!). This has to do with the Nyquist-Shannon sampling theorem.

Working with what you have

Given your equipment, your camera is probably OK. You can get decent results with a 640×480 camera. The frame rate on that camera is a bit low (ideally it should be 60 frames per second or faster), so I’m a little worried about that, as I personally struggled when I attempted to use a camera at 30 frames per second.

Be realistic about what your telescope can achieve. There is a relationship between the telescope’s physical aperture and its ability to resolve details. This relationship is described by Dawes’ Limit.
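
For reference, Dawes’ Limit works out to roughly 116 divided by the aperture in millimetres, giving the smallest resolvable detail in arc-seconds; a quick Python sketch for the apertures mentioned in the next paragraph:

  def dawes_limit_arcsec(aperture_mm: float) -> float:
      # Dawes' Limit: ~116 / aperture (mm), in arc-seconds
      return 116.0 / aperture_mm

  for d in (90, 125, 356):
      print(f"{d} mm aperture -> ~{dawes_limit_arcsec(d):.2f} arcsec")
  # 90 mm -> ~1.29, 125 mm -> ~0.93, 356 mm -> ~0.33 arcsec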

My first telescope had a 90mm aperture. I could see the cloud bands on Jupiter … as bands or belts. I could see the rings around Saturn. I later got a 125mm telescope … just a bit larger. Now I could sometimes see that the “belts” on Jupiter had some texture in them and could occasionally glimpse the thin black gap in Saturn’s rings (the Cassini Division) — which I really couldn’t see in the 90mm instrument. The larger the telescope… the better the detail. The image above was captured using a telescope that has a 356mm aperture.

If your mount is not able to track, then you’ll need to point the telescope at a spot in the sky just ahead of the planet … as soon as the planet enters the frame, start capturing video as it drifts through the field.

The fact that the planet is in a different position in the frame will not be a problem. The image stacking software (e.g. Registax or AutoStakkert) will align each frame based on the disk of the planet.

Let the software analyze and reject most of the frames (you only need perhaps a few dozen decent frames out of the hundreds it will capture).

Depending on how high the planet appears above the horizon, you may notice that one edge of the planet has a blue fringe and the opposite edge has a red fringe. This effect is called atmospheric dispersion. It is the atmosphere acting like a prism as the light enters it at a strong angle. The different wavelengths of light are split into a rainbow spectrum … but only a little, just enough to see the fringing. Registax has a feature that lets you re-align the red and blue channels onto the green channel to produce a sharper result.