## Autofocus – What is the benefit of a large number of AF points?

With my Pentax K10D, which has only 11 AF points, a small subject such as a distant bird can drop into a gap between the AF points, or fall off the edge of the pattern, while I am tracking it. When that happens, the AF system "hunts" for anything to focus on, and I lose the bird entirely.

With a large number of AF points, they can be packed tightly enough that a moving target passes smoothly from one point to the next without falling into a gap.

(You could also solve the "gap" problem by making each AF point sensitive to a larger area, but that would make it harder to tell what the system is focusing on: you might think it focused on the eyes of a portrait subject when it actually focused on the nose.)

## Unexpectedly large growth of the database log files

My database has two log files, and they are growing very fast, which makes my backups very large. I thought that a full backup of the database flushed the log files, but it does not, and I cannot control it. How can I keep the log size under control?

## What would cause a single photo from my phone camera to have large, spotty magenta areas?

What would cause the pink in this picture? All the pictures before and after it were normal. No filter, no flash.

## What is the best way to do a very large and repetitive task?

I need to determine the length of more than 18,000 audio files using the `audioread` library. Each file takes about 300 ms, i.e. at least 25 to 30 minutes in total.

Using a system of `Queue` and `Process` to occupy all the available cores of my processor, I can lower the average per file to 70 ms, but it still takes 21 minutes. How can I improve this? I want to be able to read all the files in at most 5 minutes, assuming no competition on the computer: only my software will run, so it can consume all resources.

```python
while not q.empty():
    index = q.get()
    # f comes from audioread.audio_open() on the file for this index
    audios[index]['duration'] = f.duration * 1000
```

Code that creates the processes:

```python
for i in range(os.cpu_count()):
    pr = Process(target=get_duration, args=(queue, audios))
    pr.daemon = True
    pr.start()
```

There is only one `Queue` in my code, shared by several `Process`es, and I use a `Manager` so that they can edit the objects.
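One pattern worth trying (a sketch, not the original code): if `audioread` spends most of its time in C decoders or helper subprocesses, a thread pool may keep the cores busy while avoiding the pickling and `Manager`-proxy overhead of processes. `measure_ms` here is a runnable stand-in for the real `audioread` call.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def measure_ms(path):
    # stand-in for the real call:
    #   with audioread.audio_open(path) as f:
    #       return f.duration * 1000
    return float(len(path))  # dummy value so the sketch runs without audio files

def all_durations(paths, workers=None):
    # map over the list instead of hand-rolling a Queue/Process loop
    workers = workers or os.cpu_count()
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return dict(zip(paths, ex.map(measure_ms, paths)))
```

Mapping over the list also avoids the `q.empty()` race (a worker can see a non-empty queue that another worker drains before its `q.get()`), and collecting results in the returned dict removes the per-item cost of writing through `Manager` proxies.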

## USA – Are there large junkyards near Indianapolis?

Are there large (and ideally abandoned) junkyards near Indianapolis, IN or Bloomington, IN?

I'm looking for something like this (though not necessarily this large):

My research on Google and Google Maps turned up only a few very small junkyards next to auto parts stores.

## Periodic Markov chain contains all sufficiently large multiples of the period

The part I do not get is where the proof introduces $$M$$: how do we know there exists an $$M$$ with both $$Md_x$$ and $$(M + 1) d_x$$ in $$D_x$$? What is the main idea behind showing that $$D_x$$ contains all sufficiently large multiples of $$d_x$$?
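For what it's worth, here is a sketch of how the argument usually goes (assuming the standard definition $$D_x = \{ n \ge 1 : p^{(n)}(x,x) > 0 \}$$, which may differ from the book's notation). $$D_x$$ is closed under addition, since $$p^{(m+n)}(x,x) \ge p^{(m)}(x,x)\,p^{(n)}(x,x)$$. Because $$\gcd(D_x) = d_x$$, Bézout gives $$d_x = P - N$$ where $$P$$ and $$N$$ are finite sums of elements of $$D_x$$; both are then multiples of $$d_x$$, so $$N = M d_x$$ and $$P = (M+1) d_x$$ lie in $$D_x$$ for some $$M$$. Now for any $$n \ge M(M-1)$$, write $$n = qM + r$$ with $$0 \le r < M$$; then $$q \ge M - 1 \ge r$$, and

$$n\,d_x = (q-r)\,(M d_x) + r\,\big((M+1)\,d_x\big) \in D_x,$$

so $$D_x$$ contains every multiple $$n d_x$$ with $$n \ge M(M-1)$$.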

## photoshop – Crop JPEGs in large quantities, with the clothing shots aligned to the models

If you show me an algorithm that can do that, I'm more than happy to automate it.

For a software approach, I would try OpenPose, a deep-learning-based human pose estimation stack.

They have pre-built binaries, so you do not have to compile anything. All you have to do is download the neural network model, put the data in the right place, and follow the usage instructions.

You would run the openpose program on each image individually. It outputs a .json file with "key points" representing a human figure. By default it uses a 25-point body model, which is suitable for this application.

From this you can determine the 2D position of head, shoulders, hands and hips, which should be sufficient to normalize the images.

You would first set the scale of the image, scaling up or down based on the expected pixel height of the region between the shoulders and the hips. Then you would shift the image up or down to put that region at a fixed position. For images that have to be scaled or shifted so far that parts of the photo go "missing", use whatever fill you would apply manually, such as padding with blank space.
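The normalization arithmetic can be sketched like this (the function name and the target values are assumptions for illustration, not part of OpenPose's output format):

```python
def normalize_params(shoulder_y, hip_y,
                     target_torso_px=400.0, target_shoulder_y=150.0):
    """Return (scale, y_offset) that map the detected torso onto a fixed
    frame: scale so shoulders-to-hips spans target_torso_px pixels, then
    shift so the shoulders land at target_shoulder_y."""
    scale = target_torso_px / (hip_y - shoulder_y)
    y_offset = target_shoulder_y - shoulder_y * scale
    return scale, y_offset
```

Applied to the 25-point keypoints you would first average the left/right shoulder and hip points; any corner of the scaled-and-shifted image that then falls outside the target canvas marks a photo that needs the blank-space padding described above.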

## postgresql – What makes a single index large (other than bloat)?

I did a bit of digging into the size of our PostgreSQL 9.6 production database and found some results that surprised me.

We have a table (let's call it `foos`) with about 10 million records. The primary key is an `integer`. The table has a B-tree index on an optional foreign-key column referencing another table (let's call it `bars`); call the index `index_foos_on_bar_id`. `bars.id` is also just an `integer` column.

If I look at the index size with the `\di+` meta-command, I see that it occupies somewhere in the neighborhood of 1 GB. A bit of back-of-the-envelope math says that every entry in the index needs about 1 GB / 10 million = 100 bytes per row.

There is almost no deletion on the `foos` table, so bloat should be absent.

In my mental model, an index contains something like sorted pairs of numbers mapping the indexed column to the primary key of the table. Since these are both `integer` types, that should only need about 4 + 4 = 8 bytes per row, far from the 100 bytes per row actually occupied. I understand that the tree structure adds some overhead, but the more-than-10x difference raised an eyebrow.

What is all the "extra" space in the index used for?
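For context, here is a back-of-envelope model of the per-entry cost (the figures are from PostgreSQL's general B-tree layout and are assumptions about this database, not measurements): each leaf entry is an index tuple with an 8-byte header (heap pointer plus flags) and the key rounded up to MAXALIGN, plus a 4-byte line pointer, and leaf pages are packed to a default fillfactor of 90%.

```python
def btree_bytes_per_entry(key_bytes=4, maxalign=8,
                          line_pointer=4, fillfactor=0.90):
    # index tuple = 8-byte header + key, rounded up to the MAXALIGN boundary
    tuple_bytes = -(-(8 + key_bytes) // maxalign) * maxalign
    # each tuple also costs a line pointer in the page's item array,
    # and pages are only filled to `fillfactor`
    return (tuple_bytes + line_pointer) / fillfactor
```

That predicts roughly 22 bytes per entry, i.e. around 220 MB for 10 million rows, so a 1 GB index would suggest either page-level overhead beyond this simple model or bloat after all (updates to the indexed column also create dead index entries); the `pgstattuple` extension can tell for sure.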

## Directory-based music player for large music collections without tags

I have a large collection of music files, only partially tagged but neatly arranged in directories, and I am looking for a music player for this scenario. I do not want the player to "scan" my collection and sort it by artist or the like, but simply to let me navigate to a directory and play all the files it contains (though not those in subdirectories).

And I do not want to create playlists for all my directories (how would I manage them?). I am not looking for a workaround, but for a music player dedicated to this scenario. Is there such an app out there?

## dg.differential geometry – Large class of curves that intersect only finitely often

I am trying to find a large subset of the piecewise-differentiable plane curves of finite length (subsets of $$\mathbb{R}^2$$) with the following property:

For every pair $$\gamma_1, \gamma_2$$ of curves in this class, their images $$\Gamma_1, \Gamma_2$$ are such that $$\Gamma_1 \cap \Gamma_2$$ has finitely many connected components.

I tried to prove that this is the case for this set of curves:

Piecewise-smooth curves (of finite length) in which each piecewise component is either a straight line segment or a curve whose derivative is injective.

However, I have been unable to produce either a proof or a counterexample to the claim that this class satisfies the desired property.

Could anyone suggest how to prove it, or why they believe the claim is false? If it is false, would some other restriction produce the desired property?

Obviously, one can restrict attention to one piecewise component at a time, and it is easy to show that no such curve can intersect a line segment in infinitely many points (using the injectivity of the derivative). And of course, a line segment can intersect another segment either in at most one point or along a whole subsegment. What I have not managed is a proof for the case where both components are curved. I believe that an infinite number of intersection components should eventually force non-injectivity of the derivatives, but I could not show this.