optics – Why is the image plane at Z = f in pinhole camera models?

Don’t confuse formulas meant to be used with refractive optics (i.e., lenses) with projection mapping functions (such as the pinhole projection model).

The thin lens formula only applies to refractive lenses, such as glass elements that bend light. It is really just an idealization of a refractive element described by the lensmaker's equation, with negligible lens thickness (hence the name, thin lens). Concepts such as depth of field are defined and derived from applications of the thin lens formula.

The pinhole model is a projection model. That is, it describes the mapping from the field of view to the image plane. The pinhole model is a 1:1 mapping: every ray entering the pinhole leaves the pinhole at the same angle, for the entire field of view. Many refractive lenses (i.e., lenses that more or less obey the thin lens model) have a pinhole projection mapping function. But not all. Wide-angle lenses, and especially fisheye lenses, do not follow the pinhole projection formula.

This is easy to understand in the degenerate case: how can a circular fisheye lens with a 180° angle of view in all directions project onto the camera’s image plane, unless there is some sort of angular distortion such that the further away from the optical axis the subject is, the more the image rays are bent to project within a confined cone? That’s impossible to do with a pinhole projection model. But it’s not difficult with a series of concave lenses in front of the lens to bend the incoming light into the “funnel” of the lens’s collection area and project it onto the camera’s image plane.
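One way to see why a pinhole mapping cannot cover 180° is to compare the mapping functions directly. As an illustration (assuming the equidistant fisheye model, which is just one of several common fisheye projections):

```python
import math

# The pinhole (rectilinear) mapping r = f * tan(theta) diverges as theta
# approaches 90 degrees, while the equidistant fisheye mapping r = f * theta
# keeps the entire 180-degree view inside a finite image circle.
def pinhole_radius(f, theta):
    """Image-circle radius for a ray at angle theta under pinhole projection."""
    return f * math.tan(theta)

def equidistant_fisheye_radius(f, theta):
    """Image-circle radius for the same ray under equidistant fisheye projection."""
    return f * theta
```

At 89° off-axis the pinhole radius is already enormous, while the fisheye radius grows only linearly with angle.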

It appears your first image came from a slide deck PDF (or one of its many copies online) for a senior-level undergraduate class in computer science. Unfortunately, the slide deck could have used one more very simple image to demonstrate the pinhole projection model:

enter image description here
Pinhole camera model, from Wikimedia Commons. Public domain.

Here it is easy to see the relation between the real-world subject (tree) and its image formation inside the pinhole camera. The depth of the pinhole camera is the focal length, ƒ. The two red rays, the bounding rays of the subject tree, enter the pinhole and leave (continue towards the image plane) at the same angle. Thus, simple similar-triangle geometry describes the pinhole projection formula.
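The similar-triangle relation can be written out in a couple of lines (a sketch; `y` is the subject height off the optical axis, `d` its distance in front of the pinhole, `f` the depth of the box):

```python
# Similar triangles in the pinhole model: a point at height y, a distance d in
# front of the pinhole, lands at height -f * y / d on the image plane at Z = f.
# The minus sign is the image inversion visible in the tree diagram.
def pinhole_project(y, d, f):
    return -f * y / d
```

For example, a 3 m tall tree 6 m away with f = 2 cm images at 1 cm, inverted; doubling the distance halves the image height, exactly as the similar triangles predict.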

c# – How to make legacy camera client “observable”?

I have a legacy camera client to communicate with a camera. The way it works is a bit awkward for the times. Being myself a fan of the “reactive” stuff, I would like to communicate with the camera in a reactive fashion.

Whenever you want to start capturing frames, you call Start and it begins capturing a frame every 0.5 seconds; you get notified of every frame by subscribing to the "on capture" event. The event keeps firing until you call Stop.

As such, the camera has:

  • a method to start capturing frames
  • another method to stop the capture
  • an event that is raised when a new frame is captured

I would like to encapsulate the camera into a more handy abstraction in which the Start and Stop methods are replaced by an IObservable<bool> and the resulting observable should emit batches of captures (Frames).

This is the legacy camera class:

class Camera
{
    public void StartRecording();
    public void StopRecording();
    public event CaptureEventHandler OnCapture;
    public delegate void CaptureEventHandler(Camera sender, Frame capture);
}

What I want is to wrap the camera in a class like this:

public class ObservableCamera
{
    public ObservableCamera(Camera camera, IObservable<bool> enableCapture)
    {
        // TODO: Define the 'Captures' observable using the parameters above
    }

    public IObservable<Frame> Captures { get; }
}

The camera should start capturing frames when the enableCapture observable emits true and should stop capturing when it pushes false.

To clarify it a bit, this is the marble diagram showing the interaction:

  • The first sequence is enableCapture
  • The second should be an auxiliary sequence created from the OnCapture event.

enter image description here

I have received comments saying that my goal isn’t clear. To clarify it, this is my question:

How should I implement the observable to deal with the legacy camera? I don't even know where to start.
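In Rx.NET terms, one common approach is to turn OnCapture into a stream with Observable.FromEventPattern and let the enableCapture stream drive StartRecording/StopRecording. Since the sticking point is the wiring rather than any one operator, here is a minimal, dependency-free sketch of that wiring in Python. FakeCamera and every name here are illustrative stand-ins, not a real API:

```python
class FakeCamera:
    """Stand-in for the legacy Camera: Start/Stop methods plus an event."""
    def __init__(self):
        self.on_capture = []          # the "event": a list of callbacks
        self.recording = False

    def start_recording(self):
        self.recording = True

    def stop_recording(self):
        self.recording = False

    def emit(self, frame):            # the real camera calls this on its own timer
        if self.recording:
            for callback in self.on_capture:
                callback(frame)


class ObservableCamera:
    """Wraps Start/Stop + event into a single push-based stream of frames."""
    def __init__(self, camera):
        self._camera = camera
        self._observers = []
        camera.on_capture.append(self._forward)   # event -> stream bridge

    def _forward(self, frame):
        for observer in self._observers:
            observer(frame)

    def subscribe(self, observer):
        self._observers.append(observer)

    def on_enable(self, enabled):
        # Feed each value of the enableCapture sequence here: True starts the
        # camera, False stops it; frames reach subscribers only in between.
        if enabled:
            self._camera.start_recording()
        else:
            self._camera.stop_recording()
```

Grouping the frames into batches per enable window would then be a Window/Buffer-style operation over the two streams; the sketch above only shows the event-to-observable bridge and the bool-driven gating.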

macos – EvoCam (video camera software) did not capture audio for 2 weeks on a Mac running Mavericks. Is there a way to find the missing audio?

I know it’s a longshot.

I have an old Mac Mini running Mavericks running EvoCam 4.2.6 (old software no longer available).

The Mac Mini is connected via WiFi to an Amcrest camera. The Mac Mini is not compatible with the built in mic on the Amcrest, so I have a USB microphone running from the Mac Mini. Video comes from the Amcrest and audio from the USB mic. It usually works fine.

It records in hour long increments and outputs to .mov files.

For some reason, starting two weeks ago, the audio didn't record, even though I could see in the system sound settings that the microphone was detecting audio. Something on the software side messed up. The videos are there but have no audio. It took a restart to get EvoCam to record audio to the files again.

The audio is important.

I'm wondering if there are any ideas as to where the audio might have gone (again, I know it's a longshot). It records to an external drive connected to the computer. I tried looking at hidden files and I can see a .Trashes folder, but it says I don't have permission to open it.

Are there any other places scuttled audio could have gone?

applications – Device with Android OS on which I can open my favorite Camera app straight from Lock Screen, without PIN

I keep losing Photo/Video opportunities on great moments, for two reasons:

  1. My device doesn’t let me choose which Camera app to open from Lock Screen’s shortcut.
    (only the native Camera app, which sucks)

  2. I must type my PIN to be able to even open the native Camera app.

I keep telling myself that my next phone must have a way to open my favorite Camera app straight from the Lock Screen, without having to type my PIN or struggle with the "swipe up" gesture to show the keypad (which often doesn't register on the first try). It's just frustrating.

Is there any device that actually does this?

And as a bonus question… why are all native Camera apps now forcing us to “swipe” to move from “Photo” to “Video” mode, which just slows down everything even more? Why can’t the “Take Photo” and “Start Recording” buttons be on the SAME screen?

nikon – ‘File contains no image data’ but I can see the image briefly on the camera viewer. Is there a way to recover the images?

I get nothing in Windows Picture Viewer, or only a portion of the image.

I will be assuming the file format is JPEG.

So there are two issues here:

  • The file will not open at all; in that case at least the header is damaged
  • The file will open but is truncated, or at least appears to be

File will not open at all

So, in this case the header is invalid, and possibly a whole lot more is too. Before trying to repair the header, it makes sense to open the file in a hex editor and see what we have. My favorite hex editor, which happens to be free, is HxD.

First thing we’ll do is see if there’s actually data, any data:

enter image description here

In this case it becomes obvious we will not be able to repair the file. But how about:

enter image description here

We first browse approximately halfway into the file, where we would normally be certain of looking at encoded JPEG data. JPEG is high-entropy data, meaning chaotic, unpredictable data. The data in the image looks pretty chaotic to me, so that would be a good thing; and yet the data I picked for this example is not JPEG data, and I'll tell you how you can spot this:

In the JPEG specification the byte FF has special meaning: it tells the JPEG decoder we're dealing with a JPEG marker. FF is always accompanied by a second byte which tells the decoder which marker it is dealing with. For example, FF D9 is the end-of-image marker and FF DA is the start-of-scan marker. I keep a list of all markers I know of here: https://www.disktuna.com/list-of-jpeg-markers/.

This also means you will not find markers buried inside the encoded JPEG data, as any FF xx byte combination would make the decoder think it ran into a JPEG marker it has to do something with. There are a few exceptions, though:

FF 00 is valid, as is FF nn where nn is D0 to D7 (the restart markers). So FF D3 is valid. In the dump, however, we see for example FF 9A or FF 5E, and these we would never see in valid JPEG data.

So both of these dumps are from files that cannot be repaired. If we have high-entropy data and no invalid JPEG markers in the encoded data, it is likely we can repair the image, at least to a degree, by using the header of a known-good JPEG that was shot with the same device.
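The "spot invalid markers" check described above is easy to automate. A sketch (not a full JPEG parser: it knows nothing about length-prefixed header segments, so it should only be pointed at the entropy-coded scan data):

```python
def find_invalid_markers(scan_data):
    """Scan entropy-coded JPEG data for FF xx pairs that are illegal there.

    Legal inside scan data: FF 00 (byte stuffing) and FF D0..FF D7 (restart
    markers). FF D9 legitimately ends the image, so we stop at it.
    Returns a list of (offset, second_byte) for each invalid pair found.
    """
    invalid = []
    i = 0
    while i < len(scan_data) - 1:
        if scan_data[i] == 0xFF:
            nxt = scan_data[i + 1]
            if nxt == 0xD9:                         # end-of-image marker
                break
            if nxt != 0x00 and not (0xD0 <= nxt <= 0xD7):
                invalid.append((i, nxt))
            i += 2
        else:
            i += 1
    return invalid
```

An empty result on high-entropy data is a good sign the scan portion is intact; any hit (an FF 9A, say) marks exactly the kind of corruption discussed above.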

To 'repair' the header using HxD and a reference file, while assuming the damage is limited to the header:

  1. Open the corrupt file in HxD, then Search > Find and search for FF DA using the HEX data type. If it is not found, the file is beyond repair. Multiple instances of FF DA may be found; you need the LAST one. There may be a few if the JPEG includes a thumbnail and a preview.

  2. Write down the address (using View > Offset base you can switch to decimal numbers if you like). Now search for FF D9, or go to the end of the file, which is where you would normally find FF D9.

  3. Once found, select the entire block from the last FF DA up to and including FF D9 > right click > Copy.

  4. Open a new file > Paste insert > save as 'image.jpg'. You have now copied the image data to a new file.

  5. Open a known-good file that was shot with the same camera, using the same resolution and orientation (portrait/landscape).

  6. Use Search > Find and search for FF DA using the HEX data type. Again, if multiple instances of FF DA are found, you need the LAST one.

  7. Select the block preceding the FF DA bytes, all the way back to the start of the file (FF D8).

  8. Switch to the image.jpg tab containing the image data, make sure you are at offset 0 (zero) > Paste insert > save the file.

A drawback of this method is that if the reference file contains a preview, that preview will be copied into the repaired file too.
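Expressed in code, the whole procedure is a couple of slices. A minimal sketch in Python, assuming the damage really is limited to the header (and inheriting the reference file's preview, if it has one, just as the manual method does):

```python
def graft_header(good, corrupt):
    """Rebuild a JPEG from two byte strings: take everything BEFORE the last
    FF DA (start-of-scan) marker of a known-good file, and append everything
    FROM the last FF DA of the corrupt file to its end (which should include
    the FF D9 end-of-image marker)."""
    sos = b"\xFF\xDA"                 # start-of-scan marker
    good_sos = good.rfind(sos)        # LAST occurrence, as in the manual steps
    bad_sos = corrupt.rfind(sos)
    if good_sos < 0 or bad_sos < 0:
        raise ValueError("no start-of-scan (FF DA) marker found")
    return good[:good_sos] + corrupt[bad_sos:]
```

This mirrors steps 1 to 8 above: the good file supplies the header up to (but not including) its last FF DA, and the corrupt file supplies the scan data from its own last FF DA onwards.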

Image appears only partially

There can be several causes:

  • The file is truncated: compare the size of the corrupt file with the size of an intact JPEG shot with the same device. If the file is only half the expected size, then it is to be expected that half of the image is missing. You cannot repair this.

  • Another thing I often see is a file that is only half filled: if you open the file in HxD and a good portion at the bottom of the file consists of zeros, then image data is simply missing.

  • Last, and this can sometimes be repaired, is corruption inside the encoded JPEG data. As we saw before, invalid FF xx byte combinations will make the decoder (photo viewer) think it encountered a JPEG marker. Depending on the software, it may simply hang and stop decoding, or pop up an error message like 'invalid JPEG marker'.

The free tool JPEGsnoop will report such markers.

enter image description here

I have been able to get images to decode that had just one or two of those invalid markers by using HxD to simply replace the invalid marker FF xx with FF 00, as you can see in the example below.

enter image description here

Having found out about all these repair options, I wrote a tool that can do much of this more easily. For example, it strips the header of the reference file of data specific to that file, such as the preview thumbnail.

It also allows me to quickly spot issues with a file: it shows a byte histogram and calculates the entropy.

enter image description here

In this example you can see the entropy is too low (which is why it is displayed in red characters rather than green), and the byte histogram looks nothing like that of an intact JPEG.

enter image description here
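For reference, the entropy figure the tool reports is straightforward to compute yourself from the byte histogram. A sketch (the tool's exact red/green threshold is its own; the general rule of thumb is that encoded JPEG scan data sits close to the 8 bits-per-byte maximum):

```python
import math
from collections import Counter

def shannon_entropy(data):
    """Shannon entropy in bits per byte: close to 8.0 for well-compressed
    (JPEG scan) data, far lower for zero-filled or text-like content."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

A run of zeros scores 0.0 bits per byte, while a perfectly uniform byte distribution scores the full 8.0; real scan data from an intact JPEG lands near the top of that range.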

Good Low Noise Long Exposure Astrophotography camera?


I was reading that Canons were the best at low-noise, long-exposure photographs. Then I found this site: https://www.brendandaveyphotography.com/more/long-exposure-sensor-testing/

It seems to say that some Fuji cameras are better at low noise (half the noise) than Canon for 5-minute exposures. Which is better, Canon or Fuji?

Comparing the Canon EOS 60D vs the Fujifilm X-E1 on a second site, https://www.photonstophotos.net/Charts/RN_ADU.htm, also seems to suggest that the Fuji has half the noise. Why don't I see recommendations for Fuji cameras for astrophotography?

focal length – How can I calculate a camera sensor's width and height?

I am trying to compute the focal length for different fields of view of a camera.

Therefore I need the sensor size, horizontally and vertically, but the datasheet only says: 1/3" 2.4 MP Image Sensor. The image size is 1920x1080, if that's needed.

How do I get the size of the sensor?
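A hedged sketch of the arithmetic: optical-format names like 1/3" are nominal rather than exact, but a 1/3"-type sensor has a diagonal of roughly 6 mm (an assumption here; the exact model's datasheet is authoritative). You can split that diagonal according to the 1920x1080 aspect ratio, then invert the pinhole field-of-view relation f = (w/2) / tan(HFOV/2):

```python
import math

def sensor_dimensions(diagonal_mm, width_px, height_px):
    """Split a sensor diagonal into width and height using the pixel aspect ratio."""
    diag_px = math.hypot(width_px, height_px)
    return (diagonal_mm * width_px / diag_px,
            diagonal_mm * height_px / diag_px)

def focal_length_mm(sensor_width_mm, hfov_deg):
    """Pinhole relation between sensor width and horizontal field of view."""
    return (sensor_width_mm / 2) / math.tan(math.radians(hfov_deg) / 2)
```

With the assumed 6 mm diagonal and a 1920x1080 frame this gives roughly 5.2 mm x 2.9 mm, from which the focal length for any desired horizontal FOV follows.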


unity – How do I stop gyroscope-controlled camera from jittering when holding phone still?

I have here a simplified version of my gyro-controlled camera with a sensitivity modification (a side effect of increasing sensitivity is that the jitteriness is exacerbated).

public class GyroControl : MonoBehaviour
{
    private Transform _rawGyroRotation;
    private Vector3 gyroAdjust;
    [SerializeField] private float _smoothing = 0.1f;

    void Start()
    {
        Input.gyro.enabled = true;
        Application.targetFrameRate = 60;

        _rawGyroRotation = new GameObject("GyroRaw").transform;
        _rawGyroRotation.position = transform.position;
        _rawGyroRotation.rotation = transform.rotation;
    }

    private void Update()
    {
        _rawGyroRotation.rotation = Input.gyro.attitude;

        gyroAdjust = _rawGyroRotation.rotation.eulerAngles * 2; // increase rotation sensitivity
        transform.rotation = Quaternion.Euler(gyroAdjust);

        transform.rotation = Quaternion.Slerp(transform.rotation, _rawGyroRotation.rotation, _smoothing);
    }
}

When in motion, the jittering isn’t noticeable. But when you hold the phone still, there’s what I assume to be just analogue noise that causes jittering. I would really appreciate any help or advice on how to add a filter or something to reduce the jittering for this kind of controller.
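The usual fix for this kind of sensor noise is a low-pass filter: keep a running rotation and move it only a small fraction of the way towards the raw reading each frame, so high-frequency noise averages out while real motion still comes through. In Unity terms that means Slerping from the camera's current rotation towards Input.gyro.attitude by a small constant factor every Update, instead of snapping to the boosted attitude first. The idea in one dimension, as a purely illustrative plain-Python sketch:

```python
# Exponentially-weighted moving average: each output moves only a fraction
# (alpha) towards the newest sample, so alternating noise largely cancels.
# Smaller alpha = smoother but laggier response.
def low_pass(samples, alpha=0.1):
    smoothed = []
    value = 0.0
    for s in samples:
        value += alpha * (s - value)
        smoothed.append(value)
    return smoothed
```

Feeding this an alternating +1/-1 "jitter" signal shows the output amplitude collapsing to a few percent of the input, which is exactly the effect you want on the gyro attitude. The trade-off is latency: with a very small alpha, deliberate phone motion will also be slightly delayed.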