android – Camera preview in FrameLayout with distorted picture

Need help again!

I am developing an app that only shows a live camera preview, i.e. it does not capture any photos or videos.

I have already managed to open the camera in my activity. The problem is that when I compared my app's preview with other apps and the phone's default camera app, I noticed that my app distorts the displayed image.

Below are pictures illustrating what happens.

Note: Both pictures were taken from the same distance and position.

Picture 1 – Standard smartphone camera app, normal mode (ORIENTATION_LANDSCAPE)


Picture 2 – My app's camera, normal mode (ORIENTATION_LANDSCAPE)


Note that Picture 1 is sharper and "thinner", i.e. the original image proportions with no zoom applied, while Picture 2 is "chubby" and flattened, i.e. distorted.

Below is the relevant code of the application's classes:





import android.hardware.Camera;
import android.os.Bundle;
import android.view.View;
import android.widget.FrameLayout;
import androidx.appcompat.app.AppCompatActivity;

public class MainActivity extends AppCompatActivity {

    Camera camera;
    FrameLayout frameLayout;
    CameraPreview cameraPreview;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main); // layout name assumed; the original was truncated here
        frameLayout = findViewById(R.id.frameLayout); // id assumed; the original was truncated here
        camera = Camera.open();
        cameraPreview = new CameraPreview(this, camera);
        frameLayout.addView(cameraPreview);
    }

    @Override
    public void onWindowFocusChanged(boolean hasFocus) {
        super.onWindowFocusChanged(hasFocus);
        View decorView = getWindow().getDecorView();
        if (hasFocus) {
            decorView.setSystemUiVisibility(View.SYSTEM_UI_FLAG_IMMERSIVE
                    | View.SYSTEM_UI_FLAG_FULLSCREEN
                    | View.SYSTEM_UI_FLAG_HIDE_NAVIGATION);
        }
    }

    @Override
    protected void onPause() {
        super.onPause();
        if (camera != null) {
            camera.release();
            camera = null;
        }
    }
}



import android.content.Context;
import android.content.res.Configuration;
import android.hardware.Camera;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import java.io.IOException;
import java.util.List;

public class CameraPreview extends SurfaceView implements SurfaceHolder.Callback {

    Camera camera;
    SurfaceHolder holder;

    public CameraPreview(Context context, Camera camera) {
        super(context);
        this.camera = camera;
        this.holder = getHolder();
        this.holder.addCallback(this);
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        Camera.Parameters parameters = camera.getParameters();
        List<Camera.Size> sizes = parameters.getSupportedPictureSizes();
        Camera.Size mSize = null;

        for (Camera.Size size : sizes) {
            mSize = size; // ends up holding the last supported picture size
        }

        if (getResources().getConfiguration().orientation != Configuration.ORIENTATION_LANDSCAPE) {
            parameters.set("orientation", "portrait");
        } else {
            parameters.set("orientation", "landscape"); // the original set "portrait" in both branches
        }

        parameters.setPictureSize(mSize.width, mSize.height);
        camera.setParameters(parameters);

        try {
            camera.setPreviewDisplay(holder);
            camera.startPreview();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
    }
}





If someone can shed some light on where I am making a mistake, thank you!
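A common cause of a stretched preview like this is that the camera's preview size does not match the SurfaceView's aspect ratio (the code above only sets the picture size, which affects captured photos, not the on-screen preview). Below is a minimal sketch of choosing the supported size whose aspect ratio is closest to the view's, using a plain `Size` stand-in for `android.hardware.Camera.Size` so it runs off-device; the class and method names are illustrative, not from the question's code:

```java
import java.util.Arrays;
import java.util.List;

public class PreviewSizeChooser {

    // Stand-in for android.hardware.Camera.Size, so this sketch runs anywhere.
    public static final class Size {
        public final int width, height;
        public Size(int w, int h) { width = w; height = h; }
    }

    // Returns the supported size whose aspect ratio is closest to viewW:viewH.
    public static Size bestSize(List<Size> supported, int viewW, int viewH) {
        double target = (double) viewW / viewH;
        Size best = null;
        double bestDiff = Double.MAX_VALUE;
        for (Size s : supported) {
            double diff = Math.abs((double) s.width / s.height - target);
            if (diff < bestDiff) {
                bestDiff = diff;
                best = s;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<Size> sizes = Arrays.asList(new Size(640, 480), new Size(1280, 720));
        Size s = bestSize(sizes, 1080, 607); // a roughly 16:9 view
        System.out.println(s.width + "x" + s.height);
    }
}
```

In `surfaceCreated`, the same loop would run over `parameters.getSupportedPreviewSizes()` followed by `parameters.setPreviewSize(best.width, best.height)` before calling `camera.setParameters(parameters)`.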

macos – How can I observe which low-level system functions support a specific peripheral device (especially the internal camera of an iMac)?

This question asks about disabling the FaceTime Camera kernel module but has no answers. I have a similar goal: I want to find out what gets loaded to support the camera hardware/interface. Is there a way to keep track of which kernel extensions or services load to support the camera's hardware/interface stream?

For the record, I tried disabling (moving/deleting) kernel extensions under /System/Library/Extensions:
… and moving (deleting) /Library/CoreMediaIO

None of these attempts disabled the camera (tested by opening the Zoom conferencing application).

(I also tried to follow this post, which contains a suggestion related to this question.)

Java – How can I calculate all the tiles visible in 2D for a camera?

I'm creating a basic, tile-based 2D game, mostly from scratch in Java, but all I need is pseudocode. My problem is that my world (stored in a HashMap so I do not have to keep a null object for every empty position) may contain millions or even billions of tiles, so I'm sure it is not efficient to iterate over every tile.

One solution I have considered would be to calculate how many tiles fit on the screen along the X and Y axes, based on the camera's position in the world, the tile size, and the tiles' offset on the screen. I have tried (and failed) to implement this, because I'm not sure how to translate the idea into math/logic.

How can I do that? Below I have linked an image showing which tiles I want to display on the screen, where the black square represents the center; tiles that do not overlap the screen are not displayed.

Diagram of what I want
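The conversion the question asks for can be sketched as pure integer math: offset the camera position by half the screen in each direction, divide by the tile size, and floor. All names below are illustrative, not from the asker's code:

```java
public class VisibleTiles {

    // Given the camera's world position (the center of the screen), the screen
    // size in pixels, and the tile size in pixels, return the inclusive range
    // of tile coordinates that can intersect the screen:
    // { minTileX, minTileY, maxTileX, maxTileY }.
    public static int[] visibleRange(double camX, double camY,
                                     int screenW, int screenH, int tileSize) {
        double halfW = screenW / 2.0;
        double halfH = screenH / 2.0;
        // Math.floor handles negative world coordinates correctly,
        // unlike integer division, which truncates toward zero.
        int minX = (int) Math.floor((camX - halfW) / tileSize);
        int minY = (int) Math.floor((camY - halfH) / tileSize);
        int maxX = (int) Math.floor((camX + halfW) / tileSize);
        int maxY = (int) Math.floor((camY + halfH) / tileSize);
        return new int[] { minX, minY, maxX, maxY };
    }

    public static void main(String[] args) {
        // Camera centered at world (0, 0), 800x600 screen, 32 px tiles.
        int[] r = visibleRange(0, 0, 800, 600, 32);
        System.out.println(r[0] + "," + r[1] + " .. " + r[2] + "," + r[3]);
    }
}
```

The render loop then iterates x from `r[0]` to `r[2]` and y from `r[1]` to `r[3]` and looks up only those keys in the HashMap, so the cost depends on the screen size, not the world size.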

Indoor – Camera LCD versus computer screen

If your camera's LCD displays the image the way you would like, but your computer monitor does not, you should check whether your monitor's color settings are correct.

The problem may also lie in the software you are using to view the file. Therefore, open the files in other software and check whether there are any differences.

It would be helpful to have an image at hand showing the problem.

Unity – How do I position a mesh just outside the camera view?

I want to move an object from the right into the view.

To achieve this, I need to position the object just outside the camera view, so that it is not yet visible but close enough to slide in quickly.

For example, simply placing the object at X = 1000000 (to make sure it is out of the camera view) would not work, because it could not slide in fast enough. For a decent slide-in, it really needs to start near the edge of the camera view.

How can I calculate the right position for the object?
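For reference, the geometry can be sketched as follows (written in Java for consistency with the rest of this page; in Unity itself, `Camera.ViewportToWorldPoint(new Vector3(1, 0.5f, d))` gives the right screen edge at depth `d` directly). For a perspective camera, half the visible width at distance d along the view direction is d · tan(verticalFov / 2) · aspect; all names here are illustrative:

```java
public class OffscreenSpawn {

    // Returns the world X at which an object of the given half-width sits
    // just outside the right edge of a perspective camera's view, at the
    // given distance in front of the camera.
    public static double offscreenX(double camX, double distance,
                                    double verticalFovDeg, double aspect,
                                    double objectHalfWidth) {
        // Half the visible height at this distance, scaled by aspect ratio
        // to get half the visible width.
        double halfW = distance * Math.tan(Math.toRadians(verticalFovDeg) / 2.0) * aspect;
        // Add the object's own half-width so it starts fully off-screen.
        return camX + halfW + objectHalfWidth;
    }

    public static void main(String[] args) {
        // 60 degree vertical FOV, 16:9 aspect, object 10 units in front,
        // 1 unit wide (half-width 0.5).
        System.out.println(offscreenX(0, 10, 60, 16.0 / 9.0, 0.5));
    }
}
```

Spawning there and tweening the object toward the camera's X then gives a slide-in that starts immediately at the screen edge; for an orthographic camera the half-width is simply `orthographicSize * aspect` instead.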

Many thanks.

Filter – What is this mirror-like object in front of the lens of this industrial camera?

I have an industrial camera. The model is BFLY-23S6M (see picture below). This camera is used for license plate recognition. An 850 nm infrared projector is located near the camera. As far as I know, it is better to reject visible light in this setup.

In front of the sensor there is a mirror-like object. You can see it, silver in color. I want to know what it is and why it is needed. Is it a filter (e.g. anti-glare or bandpass)?

From the documentation I know of the transparent filter (for monochrome cameras) and the non-transparent filter used for color cameras. I want to know whether this mirror-like part serves a specific purpose, and is not just there to protect against dust.

Camera with reflective surface visible

Lens – Can a camera be a mile long?

Well, if the light travels exactly parallel from one element to the next, the distance adds nothing but additional problems. If the light does not travel parallel from one element to the next, you either need huge elements, or so little light will reach the next element that the camera will not have enough light left for a decent picture. And you definitely want to prevent stray light from entering, so you would want a tube around the whole thing.

That does not sound like a good investment.