What does sensor size 1/5.8" mean?

No, the math is correct, but it is not exactly right for the sensor size. "1/5.8 inch" is an outdated way of describing the dimensions of the glass camera tubes used in early television cameras. See the Wikipedia diagram of it (it shows a 1/6-inch tube).

That figure was the outer diameter of the round glass tube; the rectangular image area inside it was slightly smaller.

It seems almost fraudulent when used to describe digital sensors, but manufacturers do it anyway, and "1/5.8 inch" sounds bigger than it is. Digital video sensors have width and height measured in millimetres, perhaps something like 2.4 x 1.8 mm (which sounds very small).
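If you want to estimate real dimensions from such a designation, here is a minimal Python sketch, assuming the common rule of thumb that the usable image diagonal is roughly two-thirds of the nominal "1/x inch" tube diameter and a 4:3 aspect ratio; actual dimensions vary by manufacturer.

    def optical_format_to_mm(inch_fraction, aspect=(4, 3)):
        """Convert a nominal '1/x inch' sensor type to approximate width/height in mm."""
        nominal_diameter_mm = 25.4 / inch_fraction   # e.g. 1/5.8" -> ~4.4 mm tube diameter
        diagonal_mm = nominal_diameter_mm * 2 / 3    # usable image diagonal is smaller
        w, h = aspect
        hyp = (w ** 2 + h ** 2) ** 0.5
        return diagonal_mm * w / hyp, diagonal_mm * h / hyp

    width, height = optical_format_to_mm(5.8)
    print(f"approx. sensor size: {width:.1f} x {height:.1f} mm")  # ~2.3 x 1.8 mm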

Sensor – LIDAR burnout; Ways to Check for Harmful Infrared Lasers before Shooting, Rather Than Just Looking for Warnings?

The BBC News article about a self-driving car's lidar ruining a photographer's camera describes a case in which a particularly powerful infrared laser, from the LIDAR unit of a prototype car at the CES show, damaged the sensor of a photographer's camera.

Question: Aside from spotting a sign saying "Beware, no photography, infrared laser in use," are there any ways to check for the presence of infrared lasers before taking pictures?

With cameras that show a live view through the lens, you can see the scene as the sensor perceives it, and you can detect damage if it is bad enough.

However, are there any ways to detect unannounced infrared laser beams that could damage a camera, or possibly use the camera itself to do so in a less risky manner?

At the end of the article (quoted below), "fiber lasers" are mentioned; these often have longer wavelengths (1300 to 1600 nm) than most semiconductor lasers (often 800 to 950 nm). The problem with the longer wavelengths is that silicon itself may not produce a signal at them, so in those cases you would not see a "purple dot" from the IR light, just the sensor damage. (I have asked separately which wavelengths are most commonly used in laser scanners and LIDAR systems.)

When shooting toward bright sunlight, there is obviously a general awareness and caution.

However, infrared laser beams at eye level are something new and different, and they are invisible. You therefore do not necessarily know that you are photographing a laser until the dot shows up in the photo.

If I understand correctly, these LIDAR systems use wavelengths that are absorbed in the front of the eye and therefore never pass through the eye's lens to be focused onto a small spot on the retina. An IR-cut filter on the camera lens can mitigate the problem, but an IR-cut filter at the sensor, near the focal plane, may melt and fail because it absorbs energy that is by then focused onto a small spot.
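To see why focusing onto a small spot matters so much, here is a rough, purely illustrative Python sketch; the beam power and spot sizes below are hypothetical and not taken from the article.

    import math

    def irradiance_w_per_cm2(power_w, spot_diameter_mm):
        """Power density for a given beam power spread over a circular spot."""
        radius_cm = spot_diameter_mm / 10 / 2
        return power_w / (math.pi * radius_cm ** 2)

    beam_power_w = 0.1  # hypothetical average power reaching the camera
    print(irradiance_w_per_cm2(beam_power_w, 30))    # spread over a ~30 mm front element: ~0.014 W/cm^2
    print(irradiance_w_per_cm2(beam_power_w, 0.05))  # focused to a ~50 um spot: ~5000 W/cm^2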


Jit Ray Chowdhury / BBC

The Lidar system on top of the demonstration car


Jit Ray Chowdhury / BBC

The purple dots and lines in this photo of the Stratosphere Las Vegas hotel show the damage …

The article further explains:

Similar to radar and sonar, Lidar uses lasers instead of radio or sound waves, said Zeina Nazer, a postgraduate researcher at the University of Southampton specializing in driverless automotive technology.

"Powerful lasers can damage cameras," she said.

"Camera sensors are generally more prone to damage than the human eye by laser, and consumers are usually warned never to aim a camera directly at laser transmitters during a laser show."

Ms. Nazer added that, to protect them from high-power laser beams, cameras need an optical filter that cuts out infrared, which is invisible to humans. However, such a filter can hurt performance at night, when infrared light can be beneficial.

"AEye is known for its Lidar units with a far greater range than its rivals, which are 1km in comparison to 200m or 300m," She said.

"In my opinion, AEye should not use a powerful fiber laser during shows."

Sensor – LIDAR burnout; Standards, specifications or even guidelines for thermal damage caused by infrared lasers?

The BBC News article about a self-driving car's lidar ruining a photographer's camera describes a case in which a particularly powerful infrared laser, from the LIDAR unit of a prototype car at the CES show, damaged the sensor of a photographer's camera.

Question: Are there any standards, specifications or even guidelines in the sensor or camera manufacturing industry for thermal damage due to intense light sources?

  • If a LIDAR manufacturer wanted to be responsible and build a system it could claim will not damage security cameras and traffic cameras along the road, is there anywhere it could find guidance or limits on laser emission? Perhaps a maximum irradiance value for each of several wavelength ranges?

  • Or if a camera manufacturer wanted to be responsible and build a camera it could say will not be damaged by cars, robots, or other LIDAR systems?

  • Or, if a LIDAR unit were part of a demonstration of another product (such as a car or a robot), where it is not obvious to the public that IR lasers are involved, and the exhibitors wanted to know what laser output would justify posting a warning for cameras?

So far, the answers to the question "Are there industry standards or specifications for an image sensor's resistance to intense light damage?" are basically "no", but outdoor photography is so ubiquitous that there is plenty of experience.

However, infrared laser beams at eye level are something new and invisible, and you do not necessarily know that you are photographing a laser until the dot shows up in the photo.

If I understand correctly, these LIDAR systems use wavelengths that are absorbed in the front of the eye and therefore never pass through the eye's lens to be focused onto a small spot on the retina. An IR-cut filter on the camera lens can mitigate the problem, but an IR-cut filter at the sensor, near the focal plane, may melt and fail because it absorbs energy that is by then focused onto a small spot.


Jit Ray Chowdhury / BBC

The Lidar system on top of the demonstration car


Jit Ray Chowdhury / BBC

The purple dots and lines in this photo of the Stratosphere Las Vegas hotel show the damage …

The article further explains:

Similar to radar and sonar, Lidar uses lasers instead of radio or sound waves, said Zeina Nazer, a postgraduate researcher at the University of Southampton specializing in driverless automotive technology.

"Powerful lasers can damage cameras," she said.

"Camera sensors are generally more prone to damage than the human eye by laser, and consumers are usually warned never to aim a camera directly at laser transmitters during a laser show."

Ms. Nazer added that, to protect them from high-power laser beams, cameras need an optical filter that cuts out infrared, which is invisible to humans. However, such a filter can hurt performance at night, when infrared light can be beneficial.

"AEye is known for its Lidar units with a far greater range than its rivals, which are 1km in comparison to 200m or 300m," She said.

"In my opinion, AEye should not use a powerful fiber laser during shows."

Compact Cameras – What are the practical sensor differences between the Panasonic Lumix DMC-LX7 and the DMC-LX15?

The sensor of the Panasonic Lumix DMC-LX15 is 2.74 times larger than the sensor of the Panasonic Lumix DMC-LX7. That's a big advantage.

If two sensors use the same technology, a sensor with about 2.8X the area of the other would enjoy close to a one-and-a-half-stop advantage in terms of signal-to-noise ratio (SNR, or S/N ratio). By comparison, full-frame sensors are about 1.5 times larger than APS-C sensors in linear dimensions (roughly 2.25X in area) and thus enjoy roughly a one-stop difference if both use the same technology.
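As a quick sanity check of that "same technology" reasoning, here is a small Python sketch that simply expresses the sensor-area ratio in stops.

    import math

    def stops_from_area_ratio(area_ratio):
        """SNR advantage, in stops, from gathering area_ratio times as much total light."""
        return math.log2(area_ratio)

    print(round(stops_from_area_ratio(2.8), 2))   # ~1.49 stops (LX15-class vs LX7-class sensor area)
    print(round(stops_from_area_ratio(2.25), 2))  # ~1.17 stops (full frame vs 1.5X APS-C)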

The pixels of the LX15 are also slightly larger than the pixels of the LX7; the LX15 puts 20 MP on a sensor 2.74X as large as that of the 10 MP LX7. While megapixels are not the be-all and end-all that some people make them out to be, more megapixels do allow larger display sizes before a picture starts to look pixelated.

The LX15's 24-72 mm (35 mm equivalent) f/1.4-2.8 lens compares pretty well with the LX7's 24-90 mm (35 mm equivalent) f/1.4-2.3 lens. It does not reach as far at the long end, and it is a little slower at 72 mm (35 mm equivalent) than the older camera is at 90 mm (35 mm equivalent), but only by about two-thirds of a stop.

Now combine the differences in sensor size and maximum aperture between the two cameras. The newer LX15 has a 35 mm equivalent aperture¹ of f/3.8-7.6, compared with the LX7's 35 mm equivalent aperture¹ of f/6.4-10.6. This means that, compared with the LX7, you can expect the LX15 to have a low-light advantage of around one and one-third stops at 24 mm (35 mm equivalent) and roughly one stop at 72 mm (35 mm equivalent).

In other words, in terms of low-light performance, the LX15 compares to the LX7 roughly the way a full-frame camera compares to an APS-C camera. That's a big difference in low light.

And this is before taking into account the potential impact of the four-year technological gap between the LX7, released in 2012, and the LX15, released in 2016. That can be a significant factor, although the pace of sensor technology improvement has slowed a bit: in general, comparable models from 2008 and 2012 would be expected to show more improvement between them than comparable models from 2012 and 2016. But every case is different.

As far as price differences are concerned, the cheapest FF cameras cost at least twice as much as the cheapest APS-C cameras. The Canon EOS 6D Mark II lists for $1,800 but currently sells for about $1,300 with an instant $500 discount in the United States. The Canon EOS 77D lists for $750 and sells for about $700. (The 77D is quite similar in features and controls to the 6D Mark II; both were introduced in 2017, both have similar 45-point AF systems, etc. Rebel/xx0D models can be cheaper, but they do not offer the same level of controls and other features.)

Is future-proofing a big factor here?

There is no such thing as future-proofing, especially when it comes to cameras. By the time a model reaches the market, its replacement is already being anticipated by many buyers who are more driven by gear than by photography.

¹ Equivalent aperture (in 135-film terms) is calculated by multiplying the lens's f-number by the crop factor (a.k.a. focal length multiplier).
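To reproduce the equivalent-aperture figures above, here is a minimal Python sketch of the footnote's conversion; the crop factors used (about 2.7X for the LX15's 1"-type sensor and about 4.6X for the LX7's 1/1.7"-type sensor) are approximations.

    def equivalent_aperture(f_number, crop_factor):
        """Equivalent aperture in 135-film terms = f-number x crop factor."""
        return f_number * crop_factor

    print(equivalent_aperture(1.4, 2.7))  # LX15 wide end -> ~f/3.8
    print(equivalent_aperture(2.8, 2.7))  # LX15 long end -> ~f/7.6
    print(equivalent_aperture(1.4, 4.6))  # LX7 wide end  -> ~f/6.4
    print(equivalent_aperture(2.3, 4.6))  # LX7 long end  -> ~f/10.6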

Color – Are the colors from CCD sensors different from the colors from CMOS sensors?


Sony A7 and A9 camera sensor – Why are the edges of the sensor beveled?

I recently cleaned my Sony mirrorless camera's sensor with a rocket blower and a wet cleaning solution. I was able to remove all of the dust/stains successfully, but there is still some dust on the edges of the sensor that does not seem to show up in my photos.

I'm not very worried about it because I cannot see it in my images, but it made me wonder: why are the edges of the sensor/cover glass beveled on Sony mirrorless cameras? See the attached picture for reference.

What is the purpose of the chamfer?

Are there concerns about dust sitting on the chamfer itself? Although I do not see the dust in my pictures, I am afraid it will eventually migrate to the middle of the sensor. I am also unsure whether I should try to clean that part of the sensor, since I might end up picking up oils and whatnot from the sides of the camera and spreading them onto the sensor.


Where is the True Tone Ambient Light Sensor of the MacBook Pro 2018?

The MacBook Pro 2018 has an ambient light sensor for the True Tone display. Where exactly is this sensor located? I am hoping for a diagram so that I know where to position a webcam slider cover accurately without blocking the sensor.

According to MacRumors:

According to Apple, the new MacBook Pro has a multi-channel ambient light sensor in addition to the FaceTime HD camera

but that is not specific enough to say which side of the camera it is on, or how far away.

Crop Sensor – Why doesn't full frame give a faster shutter speed than crop (all other things being equal)?

I am an amateur wildlife photographer and have always used crop bodies + long lenses, usually shooting handheld. The main difficulty, of course, is that I need very fast shutter speeds to get sharp photos, especially of moving subjects. It is also obvious that I have no control over the weather (i.e. the light) or the wildlife. And, as an amateur, I simply cannot spend the money a prime telephoto lens requires. So, for example, my current zoom is a Sigma 150-600 Sport, f/5-6.3.

All of which means that when the light is bad and/or the wildlife is moving fast, I often find myself raising my ISO into the thousands. I picked up a full-frame camera as an alternative to switch to when the light is not great. I thought that, since full frame gathers more light, I would not need as much ISO to get the same shutter speed. That would mean less noise and better shots in less light. However, in some tests, the full frame seems to require similar settings. Am I missing something?

For example, today I was photographing a snowy owl. I started with my crop camera, and at 600 mm / f/8 / ISO 640 I got shutter speeds of about 1/5000. That was fine while it was sunny, but sometimes a cloud blocked the sun and the shutter speed dropped to about 1/2000, which is the minimum I want for subjects that may move. I decided to switch to the full frame and run some tests. To achieve the same shutter speed, I had to raise the ISO to exactly the same setting. Yes, the lens and aperture settings were the same, but shouldn't the full frame still gather more light and therefore need less ISO? The full-frame body was shooting in full-frame mode, not in crop mode.

Here is the specific equipment list; thanks for any hints!

  • Crop: Nikon D500
  • Full frame: Nikon Z 6
  • Lens: Sigma 150-600mm F5-6.3 DG OS HSM Sport

Edit: After posting this, I looked at two of the images again for comparison. One difference I noticed right away is that the full-frame image clearly has less noise, even though the ISO is the same. That's great, but I still need fast shutter speeds for moving subjects. Does full frame not need as fast a shutter speed to avoid motion blur / hand shake, etc.? In that case, maybe I could reduce the ISO and shoot at around 1/1000?
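To make the comparison concrete, here is a rough Python sketch of why the metered settings match while the noise differs; the sensor dimensions are nominal values for the two bodies, and any technology differences between them are ignored.

    import math

    # Exposure (light per unit area) depends only on f-number, shutter speed and
    # scene brightness, so both bodies meter the same ISO for the same shutter speed.
    # Total light collected, however, scales with sensor area.
    aps_c_area = 23.5 * 15.7   # mm^2, nominal Nikon D500 sensor
    ff_area = 35.9 * 23.9      # mm^2, nominal Nikon Z 6 sensor

    area_ratio = ff_area / aps_c_area
    print(f"total-light advantage: {area_ratio:.2f}x = {math.log2(area_ratio):.2f} stops")
    # -> ~2.3x, ~1.2 stops: similar noise at roughly one stop higher ISO (or one stop
    #    less light) on the full frame, even though the metered settings are identical.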

Noise – Is the low light advantage of larger sensors due to the sensor itself or to the larger aperture of the lenses?

Think of it like this: the effective aperture (properly called the entrance pupil) is the diameter, not the area, of the aperture as seen through the front of the lens. This means:

  • Doubling the diameter of the aperture quadruples the amount of light that can be transmitted, but all of that light still falls on the same image circle. This means that each point on the image circle receives four times the illuminance.
  • As the focal length of a lens increases, the minimum diameter of the front element must increase to maintain the same f-number. For a 100 mm lens, an f/2 aperture is 50 mm wide, so the front of a 100 mm f/2 lens must be at least 50 mm wide; otherwise a 50 mm wide aperture could not be seen through the front of the lens. A 200 mm f/2 lens must have a front element at least 100 mm wide (see the sketch after this list).
  • If we referred to apertures by total area rather than by f-number, we would need different combinations of Tv/Av/ISO for the same amount of light at each different focal length! By using the f-ratio, the correct exposure values for a subject of a given brightness stay the same regardless of focal length.
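Here is a small Python sketch of the arithmetic in the bullets above, using the same 100 mm and 200 mm f/2 examples.

    def entrance_pupil_mm(focal_length_mm, f_number):
        """Entrance pupil (effective aperture) diameter = focal length / f-number."""
        return focal_length_mm / f_number

    print(entrance_pupil_mm(100, 2))  # 100 mm f/2 -> 50 mm wide pupil
    print(entrance_pupil_mm(200, 2))  # 200 mm f/2 -> 100 mm wide pupil

    # Doubling the pupil diameter quadruples its area, and therefore the light
    # delivered to the (unchanged) image circle:
    print((100 / 50) ** 2)  # -> 4.0x the illuminance at each point of the image circle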

For more about why exposure is determined by the amount of light per unit area and not by the total amount of light collected, see the question about lens f-numbers and lens speed.

I have opened that link and will read it, but I wanted to say quickly that I understand and agree with the reasons for measuring aperture as an f-number rather than as a total area. In this case, however, I find that it obscures rather than clarifies what is going on. The FF sensor only performs better because it receives more light, not because it is more sensitive.

It only receives more light because the sensor is larger. The amount of light per cm² from a 50 mm f/2 lens is exactly the same as the amount of light per cm² from a 100 mm f/2 lens (provided both see the same scene).

Doesn't the noise depend on the total amount of light reaching the sensor, not on the light per unit area? If the latter were the case, you would find that an FF sensor with an f/2.8 lens is no better than an APS-C sensor with an f/2.8 lens, right?

NO. Image noise depends on the signal-to-noise ratio. Since the read noise at each pixel is fairly constant, the stronger the signal at each pixel, the lower the noise is as a share of that pixel's overall level. Larger pixels are therefore inherently less noisy: each pixel can collect more light/photons/signal while generating no more read noise than a smaller pixel.

Larger sensors allow either larger pixels for the same resolution/pixel count, or a higher resolution/pixel count for the same pixel size, or a combination of both (moderately larger pixels and moderately more of them).

If the pixels are the same size in both the FF and APS-C sensors (and identical in terms of the other technology involved), then at pixel-level 100% viewing you are correct that the noise level of the FF and APS-C cameras would be the same. BUT: if you then display the images from the different-sized sensors at the same display size (e.g. 8x10 or 16x20, even 36x24 or larger), the higher magnification required for the 10 MP APS-C image, compared with the lower magnification required for the 22 MP FF image, will also increase the perceived noise.

If both sensors have the same pixel size, the APS-C image would cover less than half the total area of the FF image when displayed at 100% on your monitor.

On the other hand, if both the APS-C and FF sensors have the same number of pixels, each pixel on the FF sensor has 2.25 times the area of each pixel on the APS-C sensor. This means that, for the same scene shot through the same lens, the FF camera collects 2.25 times as much light/photons/signal per pixel as the APS-C camera, which means the SNR at each pixel is a bit more than twice (roughly one stop better than) that of the APS-C camera.
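Here is a simplified Python sketch of that per-pixel argument, using the same simplification as above that the (read) noise per pixel is roughly constant; the electron counts and read-noise figure are hypothetical, purely for illustration.

    import math

    READ_NOISE = 5.0   # electrons, assumed identical for both pixel designs

    def pixel_snr(signal_electrons):
        """SNR with a constant per-pixel noise floor."""
        return signal_electrons / READ_NOISE

    aps_c_signal = 1_000                 # hypothetical electrons per APS-C pixel
    ff_signal = aps_c_signal * 2.25      # same pixel count -> 2.25x the pixel area

    ratio = pixel_snr(ff_signal) / pixel_snr(aps_c_signal)
    print(ratio, round(math.log2(ratio), 2))  # 2.25x the SNR, i.e. a bit more than one stop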

I totally agree with your last five comments, Michael. I do not think we actually have a difference of opinion here. To be clear, I view the photos on a computer monitor of a fixed size, as I wrote in my updated question, rather than at 100%. Since in that setup the noise depends on all the light falling on the sensor (not the light per unit area), it is not wrong to say that it is not the sensor itself but the larger entrance pupil of the lens that is responsible for the better image quality. Right?

If you are viewing the output of two different-sized sensors on the same-sized monitor, the difference in magnification (between the size of each sensor and the size of your screen) has a bearing on the noise.

"For this setup, the noise is dependent on the total light falling on the sensor (not per unit area) …

Just repeating that does not make it any more correct than it was the first time you said it.

You are also ignoring the elephant in the room: you made it clear that this is not a comparison of FF and APS-C using exactly the same technology, but rather of a Sony APS-C and an Olympus µ4/3. The differences in how each manufacturer designed its sensor and how it processes that sensor's output probably have more to do with the relative performance of the two than the sensor sizes do.

No matter what sensor size I have, a lens with an entrance pupil of 20 mm² gives me photos with less noise in low light than a lens with an entrance pupil of 10 mm², even if those two lenses are mounted on different-sized sensors, as long as the FF-equivalent focal length of the two lenses is the same and we do not have a mismatched system (such as putting an FF lens on APS-C without a Speed Booster, which wastes light, or an APS-C lens on an FF body).

You are still ignoring the effect of the different magnification factors on the noise. Depending on the difference between the two sensors, it can matter a great deal when comparing photos taken in low light.

If all other factors are equal, a full-frame sensor outperforms a 1.5X APS-C sensor with respect to SNR by a factor of 2.25 (approximately 1.15 stops). A 50 mm lens on an APS-C camera offers the same FoV as a 75 mm lens on an FF camera. The 26.8 mm wide entrance pupil of the 75 mm f/2.8 would put the 50 mm lens at about f/1.9. That's about 1.15 stops. But here the "all other things equal" ends: the smaller sensor needs just as much circuitry per pixel on the sensor chip as the FF sensor, which means that the smaller sensor, with less total area, actually collects proportionally less of the light that falls on it.
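For reference, a short Python sketch reproducing the arithmetic above (the 75 mm f/2.8 full-frame lens and the 50 mm APS-C lens with the same field of view):

    import math

    ff_pupil_mm = 75 / 2.8                 # ~26.8 mm entrance pupil on full frame
    aps_c_f_number = 50 / ff_pupil_mm      # same pupil behind a 50 mm lens -> ~f/1.9
    stops = 2 * math.log2(2.8 / aps_c_f_number)
    print(f"{ff_pupil_mm:.1f} mm pupil -> f/{aps_c_f_number:.2f}, ~{stops:.2f} stops")
    # -> roughly 1.2 stops, matching the 2.25x (about 1.15 stop) figure above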

Let's assume the same sensor size; I have been saying that since this issue came up. In that context, one could compare two APS-C sensors with different megapixel counts and thus different pixel sizes. Let's leave that other factor out of it.

This "other factor" is the real world versus the purely theoretical.