Is there a long-term negative impact of frequent automatic sensor cleaning in the camera?

Many newer digital cameras offer an automatic sensor-cleaning function in the menu that vibrates the sensor to shake off dust. This seems to me to be a possible source of long-term mechanical stress. Have any studies been conducted that demonstrate a change in longevity, or other negative consequences, from frequent automatic sensor cleaning?

I am particularly interested in the Nikon D800, but would be pleased to receive information on cameras and DSLRs in general.

Lens – effect of enlarging the sensor or increasing the focal length on the resolution of the target object

Your guesses are just the opposite of what actually happens.

In the first scenario, you increase the focal length so that the subject covers more pixels on the same sensor, since it occupies more of the total image area. The number of pixels has not changed, but your subject now covers more of them because its projection onto the sensor is larger.

In the second scenario, your subject covers the same number of pixels in the center of the sensor, but more of the scene is now captured around it.

If you double the width and height of your sensor, you quadruple the area and the number of pixels: 2000 pixels in an 8 x 8 mm area correspond to 8000 pixels in a 16 x 16 mm area. However, the size of the subject projected by the lens is the same regardless of the size of the sensor; the larger sensor simply captures a wider angle of view.

There is a third scenario: keep the same lens and sensor size, but increase the pixel density so that there are more pixels in the same sensor area. This increases the resolution of your subject, provided the resolution of the original sensor had not already exceeded the resolving limit of the lens.
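To make the three scenarios concrete, here is a small MATLAB sketch with invented numbers (a subject that projects to 4 mm on a 16 mm wide, 2000-pixel sensor; none of these figures come from the question):

% Illustrative numbers only: a subject whose image on the sensor is 4 mm
% wide, and a sensor 16 mm wide with 2000 pixels across
sensor_mm = 16;  sensor_px = 2000;  subj_mm = 4;

% Scenario 1: double the focal length; the projection grows to 8 mm, so
% the subject covers twice as many pixels on the same sensor
px1 = (2*subj_mm) / sensor_mm * sensor_px;          % 1000 pixels

% Scenario 2: double the sensor (32 mm, 4000 px) at the same pixel pitch;
% the projection stays 4 mm, so the subject covers the same pixels
px2 = subj_mm / (2*sensor_mm) * (2*sensor_px);      % still 500 pixels

% Scenario 3: same lens and sensor size, but double the pixel density
% (4000 px across); the subject covers twice as many, smaller pixels
px3 = subj_mm / sensor_mm * (2*sensor_px);          % 1000 pixels

fprintf('Pixels across the subject: %g, %g, %g\n', px1, px2, px3);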

Linear Programming – Multiple Sensor Pointing Optimization: is this formulation right, and is MILP the right approach?

So I'm trying to optimize the following pointing problem. I have a set of sensors (cameras) and a set of targets. Each camera can be aimed directly at one target, but depending on its FOV it may see several targets at once. The goal is to cover as many targets as possible with a user-defined number of cameras each (e.g. 2 for stereo).

I initially chose MATLAB and intlinprog and will show the formulation below. However, since I am new to MILP, I am open to criticism and suggestions.

Problem formulation

Given:

  1. S sensors
  2. T targets
  3. P pointing options for each of the S sensors
  4. T == P
  5. R required coverage level (i.e. 2 for stereo, 1 for mono)

Problem structure:

The input data for the problem is a logical/binary 3D array Vis (P x T x S), indexed (i, j, k):

  1. Each layer (the 3rd dimension, S) represents a camera
  2. Each row within a layer represents a pointing option
  3. An entry in a given row and layer is true when the target in that column is seen

Vis(i, j, k) = true if camera k can see target j using pointing option i

Constraints:

  1. Each camera can be assigned only one pointing option

Objective: maximize the number of targets that reach at least the required coverage level.
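In symbols (writing x_{i,k} for the pointing variables and y_j for the coverage indicators, names I am using only for this write-up), the problem is:

\max \sum_{j=1}^{T} y_j
\text{s.t.} \quad \sum_{i=1}^{P} x_{i,k} = 1, \qquad k = 1, \dots, S
\qquad\quad R \, y_j \le \sum_{k=1}^{S} \sum_{i=1}^{P} \mathrm{Vis}(i,j,k) \, x_{i,k}, \qquad j = 1, \dots, T
\qquad\quad x_{i,k},\ y_j \in \{0, 1\}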

I will insert an example of my code below. A few comments: I take the Vis matrix and convert it to a 2D array by concatenating each "layer" along the first dimension.

There is one binary decision variable for each pointing option of each camera, plus one binary indicator variable per target that switches on when the desired coverage level is reached, so that the number of covered targets can be optimized.

S = 156; % 156 cameras
T = 100; % 100 targets
P = T;   % 100 pointing options

R = 2;   % required coverage level (stereo)

% Load the Vis matrix (P x T x S)
% If the matrix is very sparse, the solver does fine. The problem arises
% when some cameras can see many targets and others none
load('Vis.mat');

% Permute and reshape the matrix so that each layer along the 3rd
% dimension is stacked below the previous one
VisNew = permute(Vis, [1 3 2]);
VisNew = reshape(VisNew, [], size(Vis, 2));

[m, n] = size(VisNew);

%% Set up the optimization problem
% The first set of variables is the logical t/f for the pointing
% options (m). The last set of logical t/f variables are the coverage
% indicators (n)
NumVars = m + n;

% All variables are integers
prob_struct.intcon = 1:NumVars;

% All variables have a lb of 0 and an ub of 1
prob_struct.lb = zeros(NumVars, 1);
prob_struct.ub = ones(NumVars, 1);

% The objective "maximizes" (negated, since intlinprog minimizes) the sum
% of the coverage indicator variables, which can only switch on when the
% required coverage is achieved
prob_struct.f = [zeros(m,1); -1*ones(n,1)];

% The equality constraint encodes that each camera can be assigned only
% one pointing option, so the sum of these option variables for each
% camera must be 1
prob_struct.Aeq = zeros(S, NumVars);
for k = 1:S
    prob_struct.Aeq(k, (k-1)*P+1 : k*P) = 1;
end
prob_struct.beq = ones(S, 1);

% The inequality constraint ensures that a coverage indicator can only
% switch on when there is sufficient coverage for that target
prob_struct.Aineq = [VisNew' .* -1, R .* eye(n)];
prob_struct.bineq = zeros(n, 1);

% Specify which solver to use
prob_struct.solver = 'intlinprog';
prob_struct.options = optimoptions('intlinprog');

% Solve the problem
[x, fval] = intlinprog(prob_struct);
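Assuming the solve succeeds, I read the assignments back out of the stacked solution vector like this (a sketch using the names above; x is the solution returned by intlinprog):

% The first m entries are the pointing-option variables, stacked P per
% camera in camera order; the last n entries are the coverage indicators
xOpt = round(x(1:m));            % round to guard against solver tolerance
yCov = round(x(m+1:end));

optPerCam = reshape(xOpt, P, S); % column k = option variables of camera k
[~, chosen] = max(optPerCam);    % selected pointing option per camera

fprintf('Camera %d -> pointing option %d\n', [1:S; chosen]);
fprintf('%d of %d targets reach coverage R = %d\n', sum(yCov), n, R);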

Abstract mathematical signal and noise model for a logarithmic image sensor

A conventional grayscale image sensor pixel is often modeled as a "light bucket". Photons arrive at the pixel for a fixed exposure time. The bucket fills with photoelectrons in linear proportion to the number of incident photons until its full-well capacity is reached. The final readout is a Poisson random variable (shot noise) plus some Gaussian noise (due to quantization, dark current, and read noise). Although this model abstracts away the underlying analog pixel electronics, it is quite a useful model for signal and noise in a conventional image sensor pixel.
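Written out (in notation I am inventing here just to make the comparison concrete): with quantum efficiency \eta, mean incident photon count \mu_p, dark-current electrons \mu_d, full-well capacity \mathrm{FWC}, gain g, and read-noise variance \sigma_r^2,

n_e \sim \min\bigl(\mathrm{Poisson}(\eta\,\mu_p + \mu_d),\ \mathrm{FWC}\bigr), \qquad y = g\,n_e + \mathcal{N}(0, \sigma_r^2)

so the signal is linear in \mu_p up to saturation, and the noise is Poisson plus Gaussian, exactly as described above.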

Main question:
Is it possible to build a similar signal and noise model for a logarithmic image sensor, abstracted away from the pixel's MOSFET microelectronics? (I am looking for a mathematical model similar to Section 2 of this paper by Hasinoff et al., or Figure 1b of the EMVA 1288 standard.)

Related sub-questions: Is there a notion of saturation for a log pixel? How does shot noise propagate through the logarithmic non-linearity? Is there a notion of "shutter speed" or "integration time" for averaging out the shot noise in a log pixel? Is there an EMVA standard for logarithmic sensors (and, more generally, for sensors with a non-linear photo-response, such as quantum image sensors)?

I have skimmed several papers [e.g. Spivak et al., Kavadias et al.], but could not find an abstract model that does not get bogged down in transistor electronics ("MOSFET sub-threshold biasing", the "exponential transconductance relationship" between voltage and current, etc.).
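The closest thing to an abstraction I could distill from them (stated loosely; the constants here are my own paraphrase, not taken from any one paper) is that under sub-threshold operation the pixel output voltage varies with the logarithm of the photocurrent,

V_{\mathrm{out}} \approx V_0 - n\,V_T \ln\!\left(\frac{I_{\mathrm{ph}} + I_{\mathrm{dark}}}{I_0}\right), \qquad V_T = kT/q,

with I_{\mathrm{ph}} proportional to the incident photon flux, so any shot noise on the flux enters inside the logarithm. What I am missing is the full signal-and-noise chain built on top of this relation.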

Network – What is an Albert sensor?

Recent news articles have started talking about government networks that use "Albert sensors" (e.g. this article). What is an Albert sensor? And how effective is it?

Purely from context, it sounds like an Albert sensor might be a network intrusion detection system looking for known attack signatures (like Bro/Snort/Zeek), but I am speculating wildly. Is it something like that? What is known about Albert sensors and how effective they are?

Field of View – How do I find the horizontal and vertical FOV given the image sensor resolution and focal length?

I have a camera with an image sensor of 1280 x 1024 pixels, a physical resolution of 15 pixels/mm, and a focal length of 50 mm. If I wanted to find the horizontal and vertical fields of view in degrees, would this be the right method?

1280 pixels / (15 pixels / mm) = 85.33 mm

1024 pixels / (15 pixels / mm) = 68.27 mm

horizontal FOV in degrees = 2 * arctan(h / (2 * f))

vertical FOV in degrees = 2 * arctan(v / (2 * f))

80.9° = 2 * arctan(85.33 / (2 * 50))

68.7° = 2 * arctan(68.27 / (2 * 50))

I have also heard that something similar can be done with similar triangles, using the physical resolution and sensor size.
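If it helps anyone check, a few MATLAB lines reproduce the numbers above (atand returns degrees):

% Values from the question
px_h = 1280; px_v = 1024;   % sensor resolution in pixels
res  = 15;                  % pixels per mm
f    = 50;                  % focal length in mm

h = px_h / res;             % sensor width,  85.33 mm
v = px_v / res;             % sensor height, 68.27 mm

hfov = 2 * atand(h / (2*f));   % ~80.9 degrees
vfov = 2 * atand(v / (2*f));   % ~68.7 degrees

fprintf('HFOV = %.1f deg, VFOV = %.1f deg\n', hfov, vfov);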