## Optics – Estimating a camera's focus plane using the thin-lens equation

I have a camera that outputs a JSON file for every picture I take. The JSON file contains camera-specific parameters as well as the acquisition parameters of each photo. I want to estimate the distance to the focus plane of a photo, and I want to use the JSON file to do this. Assuming the thin-lens equation applies,

1/f = 1/z_o + 1/z_i

where the subscript o refers to the object and i to the image.

If I solve for z_o, I would expect to get the distance in front of the camera at which the camera is in focus.

Based on what I focused on, the focus plane should be 10 to 20 cm in front of the camera. Searching the JSON file, I find a focal length parameter that was 69 mm according to the camera display (after taking the crop factor of 3.19 into account), and I assume the distance to the microlens array is given by the exitPupilOffset parameter. Plugging these values in, I end up with

z_o = 29 mm

I assume that the values given in the JSON are expressed in meters. In that case, the answer is nowhere near the actual distance to the object. Am I misinterpreting the JSON, is it possible that my JSON file is simply not calibrated, or is the exitPupilOffset parameter not the distance between the main lens and the microlens array?
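For reference, plugging the JSON values straight into the thin-lens equation reproduces this result. A minimal sketch, assuming `lens.focalLength` is the focal length in meters and `lens.exitPupilOffset.z` is the image distance (the second assumption is exactly what is in question here):

```java
public class ThinLensFocus {

    // Solve 1/f = 1/z_o + 1/z_i for the object distance z_o,
    // given focal length f and image distance z_i (both in meters).
    public static double objectDistance(double f, double zi) {
        return 1.0 / (1.0 / f - 1.0 / zi);
    }

    public static void main(String[] args) {
        double f  = 0.02165041380790351; // lens.focalLength from the JSON
        double zi = 0.08305148315429688; // lens.exitPupilOffset.z (assumed to be z_i)
        // Yields roughly 0.029 m, i.e. the ~29 mm in question
        System.out.printf("z_o = %.1f mm%n", objectDistance(f, zi) * 1000.0);
    }
}
```

So the 29 mm is not an arithmetic slip; the question is really whether `exitPupilOffset` is a valid stand-in for the image distance z_i.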

Internal diagram of the camera.

For all intents and purposes, simply treat the microlens array as the sensor, since it sits at a fixed distance from the actual sensor and that distance does not change from photo to photo. According to the JSON, the microlenses are 37 µm in front of the sensor.

Attached JSON.

```json
{
"picture": {
"dcfDirectory": "100PHOTO",
"dcfFile": "IMG_1010",
"totalFrames": 1,
"frameIndex": 0
},
"generator": "lightning",
"settings": {
"flash": {
"exposureCompensation": 0.0,
"curtainTriggerSync": "front",
"zoomMode": "auto",
"mode": "unknown",
"afAssistMode": "auto"
},
"focus": {
"roi": [
{
"top": 0.0,
"right": 1.0,
"left": 0.0,
"bottom": 1.0
}
],
"afDriveMode": "manual",
"bracketCount": 3,
"bracketOffset": 0.0,
"bracketStep": 3.0,
"bracketEnable": false,
"mode": "auto",
"afActuationMode": "single",
"ringLock": false,
"captureLambda": -4.0
},
"zoom": {
"ringLock": false
},
"shutter": {
"driveMode": "single",
"selfTimerEnable": false,
"selfTimerDuration": 10.0
},
"depth": {
"histogram": "off",
"assist": "off",
"overlay": "off"
},
"whiteBalance": {
"cct": 2917,
"tint": -14.0,
"mode": "auto"
},
"exposure": {
"bracketCount": 3,
"meter": {
"roiMode": "af",
"roi": [
{
"top": 0.0,
"right": 1.0,
"left": 0.0,
"bottom": 1.0
}
],
"mode": "evaluative"
},
"bracketOffset": 0.0,
"bracketStep": 1.0,
"compensation": -2.0606489181518555,
"bracketEnable": false,
"mode": "manual",
"aeLock": false
}
},
"image": {
"pixelPacking": {
"bitsPerPixel": 10,
"endianness": "little"
},
"orientation": 1,
"color": {
"ccm": [
2.0307798385620117,
-0.574540913105011,
-0.45623892545700073,
-0.5976311564445496,
1.8244290351867676,
-0.22679781913757324,
-0.7936299443244934,
-1.610772967338562,
3.404402732849121
],
"whiteBalanceGain": {
"r": 1.0,
"b": 1.8171261548995972,
"gr": 1.0173076391220093,
"gb": 1.0173076391220093
}
},
"height": 5368,
"width": 7728,
"pixelFormat": {
"black": {
"r": 65,
"b": 65,
"gr": 65,
"gb": 65
},
"white": {
"r": 1023,
"b": 1023,
"gr": 1023,
"gb": 1023
},
"rightShift": 0
},
"modulationExposureBias": -0.2632111608982086,
"iso": 125,
"originOnSensor": {
"y": 0,
"x": 0
},
"limitExposureBias": 0.0,
"mosaic": {
"tile": "r,gr:gb,b",
"upperLeftPixel": "gr"
}
},
"devices": {
"shutter": {
"pixelExposureDuration": 0.050014421343803406,
"maxSyncSpeed": 0.004,
"frameExposureDuration": 0.050014421343803406,
"mechanism": "focalPlaneCurtain"
},
"mla": {
"scaleFactor": {
"y": 1.000274658203125,
"x": 1.0
},
"lensPitch": 2e-05,
"sensorOffset": {
"y": 1.3719854354858398e-06,
"x": 3.7586116790771486e-06,
"z": 3.7e-05
},
"rotation": 0.0004427027015481144,
"tiling": "hexUniformRowMajor",
"config": "com.lytro.mla.3"
},
"accelerometer": {
"samples": [
{
"y": 9.672821044921875,
"x": -0.8846282958984375,
"z": 0.53753662109375,
"time": 0.0
}
]
},
"clock": {
"isTimeValid": true,
"zuluTime": "2019-06-21T22:30:09.765Z"
},
"battery": {
"cycleCount": 13,
"model": "B01-3760",
"make": "Lytro",
"chargeLevel": 96
},
"lens": {
"fNumber": 2.1714898266121816,
"focalLength": 0.02165041380790351,
"opticalCenterOffset": {
"y": -3.0073242669459432e-06,
"x": -7.64151627663523e-05
},
"infinityLambda": 51.51889459300562,
"focusStep": 289,
"zoomStep": -222,
"exitPupilOffset": {
"z": 0.08305148315429688
}
},
"sensor": {
"analogGain": {
"r": 1.5625,
"b": 1.5625,
"gr": 1.5625,
"gb": 1.5625
},
"pixelWidth": 7728,
"perCcm": [
{
"ccm": [
2.006272077560425,
-0.5362802147865295,
-0.46999192237854004,
-0.6019303798675537,
1.836044430732727,
-0.23411400616168976,
-0.8173090815544128,
-1.6435128450393677,
3.4608218669891357
],
"cct": 2850.0
},
{
"ccm": [
2.4793264865875244,
-1.2747985124588013,
-0.20452789962291718,
-0.5189455151557922,
1.6118407249450684,
-0.09289517253637314,
-0.3602483570575714,
-1.0115599632263184,
2.3718082904815674
],
"cct": 4150.0
},
{
"ccm": [
2.1902196407318115,
-1.0231428146362305,
-0.16707684099674225,
-0.4134329855442047,
1.7654914855957031,
-0.3520585000514984,
-0.18222910165786743,
-0.7082417607307434,
1.8904708623886108
],
"cct": 6500.0
}
],
"bitsPerPixel": 10,
"baseIso": 80,
"pixelPitch": 1.4e-06,
"normalizedResponses": [
{
"cct": 5100,
"r": 0.7512235641479492,
"b": 0.7596154808998108,
"gb": 1.0,
"gr": 1.0
}
],
"pixelHeight": 5368,
"mosaic": {
"tile": "r,gr:gb,b",
"upperLeftPixel": "gr"
}
}
},
"camera": {
"make": "Lytro, Inc.",
"firmware": "2.0.0 (42)",
"model": "ILLUM"
},
"algorithms": {
"ae": {
"roi": "followAf",
"computed": {
"ev": 1.5625
},
"mode": "live"
},
"awb": {
"roi": "fullFrame",
"computed": {
"cct": 2917,
"gain": {
"r": 1.0,
"b": 1.8171261548995972,
"gr": 1.0173076391220093,
"gb": 1.0173076391220093
}
}
},
"af": {
"roi": "focusRoi",
"computed": {
"focusStep": 289
}
}
},
"schema": "http://schema.lytro.com/lfp/lytro_illum_public/1.3.5/lytro_illum_public_schema.json"
}
```

## Performance – Block Bootstrap Estimation in Java – Part 2

The following is my attempt to make the code from my previous Block Bootstrap Estimation question more efficient using parallelism in Java (I learned some parallel computing basics last weekend, so I'm very new to it). The input file test.txt can be found at https://drive.google.com/open?id=1vLBoNmFyh4alDZt1eoJpavuEwWlPZSKX (please download the file directly if you want to test it; it contains some strange characters that you won't spot even in Notepad). This is a small 10×10 data set with `maxBlockSize = 10`; however, this must eventually scale up to twenty 5000×3000 data sets with `maxBlockSize = 3000`, just to give an idea of the size.

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Scanner;
import java.util.stream.IntStream;

public class BlockBootstrapTestParallel {

    // Sum of a subarray, based on B(x, i, L) -- i is one-indexed
    public static double sum(double[] x, int i, int L) {
        return IntStream.range(i, i + L)
                .parallel()
                .mapToDouble(idx -> x[idx - 1])
                .sum();
    }

    // Mean of a subarray, based on B(x, i, L) -- i is one-indexed
    public static double mean(double[] x, int i, int L) {
        return IntStream.range(i, i + L)
                .parallel()
                .mapToDouble(idx -> x[idx - 1])
                .average()
                .orElse(0);
    }

    // Compute MBB mean
    public static double mbbMu(double[] x, int L) {
        return IntStream.range(0, x.length - L + 1)
                .parallel()
                .mapToDouble(idx -> mean(x, idx + 1, L))
                .average()
                .orElse(0);
    }

    // Compute MBB variance
    public static double mbbVariance(double[] x, int L, double alpha) {
        return IntStream.range(0, x.length - L + 1)
                .parallel()
                .mapToDouble(idx -> Math.pow(L, alpha) * Math.pow(mean(x, idx + 1, L) - mbbMu(x, L), 2))
                .average()
                .orElse(0);
    }

    // Compute NBB mean
    public static double nbbMu(double[] x, int L) {
        return IntStream.range(0, x.length / L)
                .parallel()
                .mapToDouble(idx -> mean(x, 1 + idx * L, L))
                .average()
                .orElse(0);
    }

    // Compute NBB variance
    public static double nbbVariance(double[] x, int L, double alpha) {
        double varSum = IntStream.range(0, x.length / L)
                .parallel()
                .mapToDouble(idx -> Math.pow(mean(x, 1 + idx * L, L) - nbbMu(x, L), 2))
                .average()
                .orElse(0);
        return Math.pow(L, alpha) * varSum;
    }

    // Factorial via lookup table (sufficient for arguments <= 10)
    public static double factorial(int x) {
        double[] fact = {1.0, 1.0, 2.0, 6.0, 24.0, 120.0, 720.0, 5040.0, 40320.0, 362880.0, 3628800.0};
        return fact[x];
    }

    // Hermite polynomial H_p(x)
    public static double H(double x, int p) {
        double out = 0;
        for (int i = 0; i < (p / 2) + 1; i++) {
            out += Math.pow(-1, i) * Math.pow(x, p - 2 * i)
                    / ((factorial(i) * factorial(p - 2 * i)) * (1L << i));
        }
        out *= factorial(p);
        return out;
    }

    // Row means
    public static double[] rowMeans(double[][] x, int nrows, int ncols) {
        double[] means = new double[nrows];
        for (int i = 0; i < nrows; i++) {
            means[i] = mean(x[i], 1, ncols);
        }
        return means;
    }

    public static void duration(long start, long end) {
        System.out.println("Total execution time: " + (((double) (end - start)) / 60000) + " minutes");
    }

    public static void main(String[] argv) throws IOException {
        final long start = System.currentTimeMillis();
        FileInputStream fileIn = new FileInputStream("test.txt");
        FileOutputStream fileOutMBB = new FileOutputStream("MBB_test.txt");
        FileOutputStream fileOutNBB = new FileOutputStream("NBB_test.txt");
        FileOutputStream fileOutMean = new FileOutputStream("means_test.txt");

        Scanner scnr = new Scanner(fileIn);
        PrintWriter outFSMBB = new PrintWriter(fileOutMBB);
        PrintWriter outFSNBB = new PrintWriter(fileOutNBB);
        PrintWriter outFSmean = new PrintWriter(fileOutMean);

        // These variables are taken from the command line, but are hard-coded here for ease of use.
        int rows = 10;
        int cols = 10;
        int maxBlockSize = 10; // this could potentially be any value <= cols
        int p = 1;
        double alpha = 0.1;
        double[][] timeSeries = new double[rows][cols];

        // read in the file, and perform the H_p(x) transformation
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                timeSeries[i][j] = H(scnr.nextDouble(), p);
            }
            scnr.next(); // skip null terminator
        }

        // row means
        double[] sampleMeans = rowMeans(timeSeries, rows, cols);
        for (int i = 0; i < rows; i++) {
            outFSmean.print(sampleMeans[i] + " ");
        }
        outFSmean.println();
        outFSmean.close();

        // MBB variances on a background thread
        new Thread(() -> {
            for (int j = 0; j < rows; j++) {
                for (int m = 0; m < maxBlockSize; m++) {
                    outFSMBB.print(mbbVariance(timeSeries[j], m + 1, alpha) + " ");
                }
                outFSMBB.println();
            }
            outFSMBB.close();
        }).start();

        // NBB variances on a background thread
        new Thread(() -> {
            for (int j = 0; j < rows; j++) {
                for (int m = 0; m < maxBlockSize; m++) {
                    outFSNBB.print(nbbVariance(timeSeries[j], m + 1, alpha) + " ");
                }
                outFSNBB.println();
            }
            outFSNBB.close();
        }).start();

        // NOTE: this is reached before the two threads above have finished
        duration(start, System.currentTimeMillis());
    }
}
```

If it helps, I have 8 cores with 64 GB of RAM and two GPUs that I don't yet know how to use (Intel UHD Graphics 630, NVIDIA Quadro P620). I will think about how to use them over the next few days.
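One observation independent of threading: as written, `mbbVariance` calls `mbbMu(x, L)` inside the stream, so the MBB mean is recomputed for every block index, turning one pass over the data into roughly a quadratic amount of work. A sketch of the same computation with the mean hoisted into a local (same method names as above, trimmed to a self-contained example):

```java
import java.util.stream.IntStream;

public class MbbVarianceHoisted {

    // Mean of the length-L subarray starting at one-indexed position i
    static double mean(double[] x, int i, int L) {
        return IntStream.range(i, i + L).mapToDouble(idx -> x[idx - 1]).average().orElse(0);
    }

    // MBB mean over all overlapping blocks of length L
    static double mbbMu(double[] x, int L) {
        return IntStream.range(0, x.length - L + 1)
                .mapToDouble(idx -> mean(x, idx + 1, L))
                .average().orElse(0);
    }

    // MBB variance: mbbMu is evaluated once, not once per block
    static double mbbVariance(double[] x, int L, double alpha) {
        final double mu = mbbMu(x, L);
        return IntStream.range(0, x.length - L + 1)
                .parallel()
                .mapToDouble(idx -> Math.pow(L, alpha) * Math.pow(mean(x, idx + 1, L) - mu, 2))
                .average().orElse(0);
    }

    public static void main(String[] args) {
        double[] x = {1, 2, 3, 4, 5, 6};
        System.out.println(mbbVariance(x, 2, 0.1));
    }
}
```

The same hoisting applies to `nbbVariance` with `nbbMu`. For the full 5000×3000 scale, precomputing a prefix-sum array so each block mean is O(1) would help even more than parallel streams.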

## Which process should be used for SharePoint project estimation?

I have a new customer requirement and want to produce an estimate for each component. Which standard process should I follow?

## Probability – Gaussian-Bernoulli mixture model MLE estimation

I am looking at a Gaussian mixture model in which the mixture components have different probabilities, and I am trying to derive the MLE estimates of $$\phi$$ and $$\lambda$$. The log-likelihood of X given all parameters is:

$$P(X) = \sum_{i=1}^{N} \sum_{j=1}^{M} \log\left( \sum_{k=1}^{K} N(x_{ij} \mid \mu_k, \Sigma_k)\, p(z_{ij} = k \mid y_i)\, p(y_i) \right)$$

assuming that x, y, z are fully observed, where the probabilities are given as:

$$N$$ = normal density with the given mean and covariance,

$$p(z_{ij} \mid y_i) = \lambda^{I(z_{ij} = y_i)} (1 - \lambda)^{1 - I(z_{ij} = y_i)}$$

(here $$I$$ is the indicator function, equal to 1 when $$z_{ij} = y_i$$), and

$$p(y_i) = \phi^{y_i} (1 - \phi)^{1 - y_i}$$

I'm trying to understand how the MLEs of $$\phi$$ and $$\lambda$$ are derived for this problem; they are given as follows:

$$\mathrm{MLE}(\phi) = \frac{1}{N} \sum_{i=1}^{N} y_i$$

$$\mathrm{MLE}(\lambda) = \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{M} (1 - z_{ij})^{1 - y_i} (z_{ij})^{y_i}$$

The solutions look like the usual Bernoulli MLE estimates, but I'm having trouble seeing how to start:

$$\frac{\partial P}{\partial \phi} = \frac{\partial}{\partial \phi} \sum_{i=1}^{N} \sum_{j=1}^{M} \log\left( \sum_{k=1}^{K} N(x_{ij} \mid \mu_k, \Sigma_k)\, p(z_{ij} = k \mid y_i)\, p(y_i) \right)$$

$$\frac{\partial P}{\partial \lambda} = \frac{\partial}{\partial \lambda} \sum_{i=1}^{N} \sum_{j=1}^{M} \log\left( \sum_{k=1}^{K} N(x_{ij} \mid \mu_k, \Sigma_k)\, p(z_{ij} = k \mid y_i)\, p(y_i) \right)$$

Any help with computing these partial derivatives and setting them to zero would be greatly appreciated.
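A sketch of how the $$\phi$$ derivative works out, assuming (as stated) that y and z are fully observed, so $$p(y_i)$$ factors out of the sum over k and the log splits:

```latex
% Only the p(y_i) factor depends on phi; it contributes M identical
% copies of log p(y_i) per observation i:
P(X) = \text{(terms free of } \phi\text{)}
     + M \sum_{i=1}^{N} \bigl[\, y_i \log\phi + (1 - y_i)\log(1 - \phi) \,\bigr]

% Differentiate and set to zero:
\frac{\partial P}{\partial \phi}
   = M \sum_{i=1}^{N} \left[ \frac{y_i}{\phi} - \frac{1 - y_i}{1 - \phi} \right] = 0
\;\Longrightarrow\;
(1 - \phi) \sum_{i} y_i = \phi \sum_{i} (1 - y_i)
\;\Longrightarrow\;
\hat\phi = \frac{1}{N} \sum_{i=1}^{N} y_i
```

The $$\lambda$$ estimate follows the same pattern: with z observed, each term contributes $$I(z_{ij} = y_i)\log\lambda + (1 - I(z_{ij} = y_i))\log(1-\lambda)$$, and the derivative balances the counts of matches and mismatches.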

## Reference request – Rigorous error estimation for the semi-discrete heat equation

Let $$\Omega$$ be a bounded Lipschitz domain in $$\mathbb R^N$$ and let $$u_h$$ be a solution of
$$\begin{cases} \partial_t u_h - \Delta_h u_h = f & \text{in } \Omega \\ u_h = 0 & \text{on } \partial\Omega \end{cases}$$
where $$\Delta_h$$ is the finite-difference approximation of the Laplace operator.
How can we estimate the error $$\Vert u_h - u \Vert_{L^\infty(\bar\Omega)}$$, where $$u$$ is the solution of the continuous problem
$$\begin{cases} \partial_t u - \Delta u = f & \text{in } \Omega \\ u = 0 & \text{on } \partial\Omega \end{cases}$$?

## Computer Vision – Laser Plane Estimation for Laser Camera System?

I need to set up a system consisting of a laser line/plane projector and a web camera in order to locate the 3D position of the laser in the camera image. I have read and found several resources, but the idea is still not quite concrete in my head.

My intuition is that, since the laser projector and camera are in a fixed setup and we want to determine the 3D position of the laser point seen in the image, we must find the 'correct' laser plane that intersects the camera's viewing rays. I'm confused about how to find the pose of this plane relative to the camera, and how to use it to recover the 3D coordinates.
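To make the second half concrete: once the laser plane is calibrated in camera coordinates, say as $$n \cdot X = d$$, each laser pixel is back-projected to a viewing ray through the camera center and intersected with that plane. A minimal sketch, where the pinhole intrinsics `FX`, `FY`, `CX`, `CY` are made-up illustrative values, not from any real calibration:

```java
public class LaserTriangulation {

    // Pinhole intrinsics (illustrative values, not a real calibration)
    static final double FX = 800, FY = 800, CX = 320, CY = 240;

    // Back-project pixel (u, v) to a viewing ray through the camera
    // origin and intersect it with the laser plane n . X = d,
    // with n = (nx, ny, nz) given in camera coordinates.
    static double[] triangulate(double u, double v,
                                double nx, double ny, double nz, double d) {
        // Ray direction for the pixel (camera looks down +z)
        double rx = (u - CX) / FX, ry = (v - CY) / FY, rz = 1.0;
        // Solve n . (t * r) = d for the ray parameter t
        double t = d / (nx * rx + ny * ry + nz * rz);
        return new double[]{t * rx, t * ry, t * rz};
    }

    public static void main(String[] args) {
        // Plane z = 0.5 m (n = (0,0,1), d = 0.5), principal-ray pixel:
        double[] X = triangulate(320, 240, 0, 0, 1, 0.5);
        System.out.printf("X = (%.3f, %.3f, %.3f)%n", X[0], X[1], X[2]);
    }
}
```

The plane parameters themselves are typically obtained by sweeping the laser over a calibration target at known poses and fitting a plane to the triangulated stripe points; that fitting step is the pose-estimation part the question is really about.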

## Complexity Theory – Estimating vertex cover within an additive constant

Consider the following function:
$$f(G, v) := \text{the size of the smallest vertex cover of } G \text{ that contains } v.$$

The function receives an undirected graph G and a vertex v, and returns a natural number: the size of the smallest vertex cover of G to which v belongs.

Problem: prove that if it is possible to estimate f within an additive constant of 5 in polynomial time, then P = NP. That is, if there is a polynomial-time computable function $$g(G, v)$$ that is guaranteed to satisfy $$f(G, v) - 5 \leq g(G, v) \leq f(G, v) + 5$$, then P = NP.

I don't understand why this holds: why does the existence of such a polynomial-time $$g(G, v)$$ with $$f(G, v) - 5 \leq g(G, v) \leq f(G, v) + 5$$ imply P = NP?
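Not a full answer, but the standard way to attack additive-error approximations is gap amplification by disjoint copies. For the plain minimum vertex cover size $$\tau(G)$$ the idea looks as follows; adapting it to the anchored version $$f(G, v)$$ is the part the exercise leaves open:

```latex
% Let G^{(11)} be the disjoint union of 11 copies of G. A vertex cover
% of a disjoint union is a union of covers of the parts, so
\tau\bigl(G^{(11)}\bigr) = 11\,\tau(G).

% An additive-5 estimate g on G^{(11)} then satisfies
11\,\tau(G) - 5 \;\le\; g\bigl(G^{(11)}\bigr) \;\le\; 11\,\tau(G) + 5,

% so g(G^{(11)})/11 is within 5/11 < 1/2 of the integer tau(G),
% and rounding to the nearest integer recovers tau(G) exactly.
```

Computing $$\tau(G)$$ exactly in polynomial time gives P = NP, since Vertex Cover is NP-complete; the remaining work is to carry the same copying trick through while keeping the distinguished vertex v in the cover.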

## Reference request – Estimates for the Dirichlet-to-Neumann map for mixed boundary value problems

To be precise, consider a model problem on $$\Omega \subset \mathbb{R}^2$$ with Lipschitz boundary,
$$\begin{cases} -\Delta u = 0 & \text{in } \Omega, \\ u = g & \text{on } \Gamma_D, \\ \partial_n u = g_N & \text{on } \Gamma_N, \end{cases}$$
where $$\partial\Omega = \bar\Gamma_D \cup \bar\Gamma_N$$ and $$\Gamma_N \cap \Gamma_D = \emptyset$$.

I am curious whether there is literature on the DtN map (existence, estimates, explicit construction such as layer potentials) on the Dirichlet boundary: $$\mathcal{V}: H^{1/2}(\Gamma_D) \to H^{-1/2}(\Gamma_D),$$
so that
$$\Vert \mathcal{V} g \Vert_{-1/2, \Gamma_D}^2 \leq \langle \mathcal{V} g, g \rangle_{\Gamma_D} + (\text{terms on } \Gamma_N).$$

## Does the compatibility level from SQL Server 2019 have any influence on the cardinality estimation?

In SQL Server 2017 and earlier versions, if you want to get cardinality estimates that match an earlier version of SQL Server, you can set the compatibility level of a database to an earlier version.

For example, in SQL Server 2017, if you want execution plans that have estimates that match SQL Server 2012, you can set the compatibility level to 110 (SQL 2012) and get execution plan estimates that match SQL Server 2012.

This is underpinned by the documentation that states:

> Changes to the cardinality estimator released on SQL Server and Azure SQL Database are enabled only in the default compatibility level of a new database engine version, but not in previous compatibility levels.
>
> For example, when SQL Server 2016 (13.x) was released, changes to the cardinality estimation process were available only for databases at the default compatibility level of SQL Server 2016 (13.x) (130). Previous compatibility levels retained the cardinality estimation behavior that was available before SQL Server 2016 (13.x).
>
> Later, when SQL Server 2017 (14.x) was released, newer changes to the cardinality estimation process were available only for databases at the default compatibility level of SQL Server 2017 (14.x) (140). Database compatibility level 130 retained the SQL Server 2016 (13.x) cardinality estimation behavior.

However, in SQL Server 2019 this does not seem to be the case. If I take the Stack Overflow 2010 database and run this query:

```sql
CREATE INDEX IX_LastAccessDate_Id ON dbo.Users(LastAccessDate, Id);
GO
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 140;
GO
SELECT LastAccessDate, Id, DisplayName, Age
FROM dbo.Users
WHERE LastAccessDate > '2018-09-02 04:00'
ORDER BY LastAccessDate;
```

I get an execution plan with an estimated 1,552 rows coming from the index seek operator:

However, if I run the same database and query on SQL Server 2019, the index seek shows a different estimated number of rows. The comment on the right shows "SQL 2019". Note, however, that this is still compatibility level 140:

And if I set the compatibility level to 150 (SQL Server 2019), I get the same estimate of 1,566 rows:

Does the database compatibility level in SQL Server 2019 no longer affect cardinality estimation the way it did in SQL Server 2014-2017? Or is this a bug?

## Estimation of the convolution of two multiplicative functions

Let $$f, g: \mathbb{N} \to \mathbb{C}$$ be two multiplicative arithmetic functions.
Suppose we know the asymptotic behavior of $$f$$ and $$g$$. Is there a general result for the asymptotic behavior of the Dirichlet convolution $$f * g$$?
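As far as I know there is no single theorem covering all multiplicative $$f, g$$, but the standard tool for turning asymptotics of the summatory functions $$F(x) = \sum_{n \le x} f(n)$$ and $$G(x) = \sum_{n \le x} g(n)$$ into an asymptotic for the convolution is the Dirichlet hyperbola method:

```latex
\sum_{n \le x} (f * g)(n)
  = \sum_{a \le \sqrt{x}} f(a)\, G\!\left(\frac{x}{a}\right)
  + \sum_{b \le \sqrt{x}} g(b)\, F\!\left(\frac{x}{b}\right)
  - F\!\left(\sqrt{x}\right) G\!\left(\sqrt{x}\right)
```

Substituting the known asymptotics for $$F$$ and $$G$$ on the right-hand side produces the main and error terms for $$f * g$$; the classical example is $$\sum_{n \le x} d(n) = x \log x + (2\gamma - 1)x + O(\sqrt{x})$$ with $$f = g = 1$$.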